CN114363650A - Display method of public screen file in live broadcast room, electronic equipment and storage medium - Google Patents

Display method of public screen file in live broadcast room, electronic equipment and storage medium Download PDF

Info

Publication number
CN114363650A
CN114363650A (application CN202111673092.6A; granted as CN114363650B)
Authority
CN
China
Prior art keywords
audience
file
anchor
voice
document
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111673092.6A
Other languages
Chinese (zh)
Other versions
CN114363650B (en)
Inventor
曾家乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202111673092.6A priority Critical patent/CN114363650B/en
Publication of CN114363650A publication Critical patent/CN114363650A/en
Application granted granted Critical
Publication of CN114363650B publication Critical patent/CN114363650B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The technical scheme provided by the embodiments of this specification enables an anchor, even when it is currently inconvenient to generate a document manually, to generate a document for display on the public screen by inputting a voice message replying to an audience document on the public screen, so that the anchor can reply to public-screen content more conveniently while live streaming. The generated anchor document is bound to the audience document on the public screen that it replies to, and a binding trigger event by a user on either the anchor document or the audience document can be responded to. The method enriches the ways in which an anchor can generate public-screen documents and improves the ways in which users can operate on them.

Description

Display method of public screen file in live broadcast room, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet live broadcast and video processing technologies, and in particular, to a method for displaying a public screen document in a live broadcast room, an electronic device, and a storage medium.
Background
In a live-streaming environment, an anchor often needs to generate a document to reply to content displayed on the public screen of the live broadcast room. At present, however, the anchor usually has to type by hand to generate such a reply document and have it recorded on the public screen; the anchor cannot always type a reply. For example, while concentrating on game operations during a game live broadcast, the anchor cannot manually generate a document to reply to public-screen content.
Disclosure of Invention
In order to overcome the problems in the related art, the present specification provides a method for displaying a public screen document in a live broadcast room, an electronic device, and a storage medium.
According to a first aspect of embodiments of the present specification, there is provided a method for displaying a public screen document in a live broadcast room, the method including:
receiving a voice message input by an anchor, wherein the voice message includes a voice corresponding to at least part of an audience document displayed on a public screen of a live broadcast room and a voice replying to the audience document;
performing semantic analysis on the voice message, and generating an anchor document based on the voice replying to the audience document;
and binding the audience document to the anchor document, so that a client displays the anchor document and responds to a binding trigger event by a user on the anchor document or the audience document.
According to a second aspect of the embodiments of the present specification, there is provided a method for displaying a public screen document in a live broadcast room, applied to a client, where the client includes an anchor client and an audience client, the method including:
the anchor client receives a voice message input by an anchor, wherein the voice message includes a voice corresponding to at least part of an audience document displayed on a public screen of a live broadcast room and a voice replying to the audience document; performs semantic analysis on the voice message, and generates an anchor document based on the voice replying to the audience document; and notifies a server to bind the audience document to the anchor document;
and the audience client receives and displays the anchor document sent by the server, and responds to a binding trigger event by a user on the anchor document or the audience document according to the binding relation between the audience document and the anchor document.
According to a third aspect of the embodiments of the present specification, there is provided a method for displaying a public screen document in a live broadcast room, applied to a client, where the client includes an anchor client and an audience client, the method including:
the anchor client receives a voice message input by an anchor, wherein the voice message includes a voice corresponding to at least part of an audience document displayed on a public screen of a live broadcast room and a voice replying to the audience document; and sends the voice message to a server, so that the server performs semantic analysis on the voice message, generates an anchor document based on the voice replying to the audience document, and binds the audience document to the anchor document;
and the audience client receives and displays the anchor document sent by the server, and responds to a binding trigger event by a user on the anchor document or the audience document according to the binding relation between the audience document and the anchor document.
According to a fourth aspect of embodiments herein, there is provided an electronic device comprising a memory for storing executable instructions and a processor;
wherein the processor, when executing the executable instructions, implements the steps of the method of any one of the first to third aspects.
According to a fifth aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the first to third aspects.
According to the technical scheme provided by the embodiments of this specification, a voice message input by an anchor is received, the voice message including a voice corresponding to at least part of an audience document displayed on a public screen of a live broadcast room and a voice replying to that audience document; semantic analysis is performed on the voice message, an anchor document is generated based on the voice replying to the audience document, and the anchor document is bound to the audience document, so that a client displays the anchor document and responds to a binding trigger event by a user on the anchor document or the audience document. Even when it is currently inconvenient for the anchor to generate a document manually, a document can still be generated from the anchor's voice message and displayed on the public screen, so that the anchor can reply to public-screen content more conveniently while live streaming; and because the generated anchor document is bound to the audience document, a user's binding trigger event on either document can be responded to. The method enriches the ways in which an anchor can generate public-screen documents and improves the ways in which users can operate on them.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a schematic structural diagram of a live network architecture provided in an exemplary embodiment.
Fig. 2 is a flowchart of a method for displaying a public screen file in a live broadcast room according to an exemplary embodiment.
Fig. 3-8 are diagrams of different effects provided by exemplary embodiments.
Fig. 9 is an interaction diagram of a method for displaying a public screen file in a live broadcast room according to another exemplary embodiment.
Fig. 10 is an interaction diagram of a method for displaying a public screen file in a live broadcast room according to another exemplary embodiment.
Fig. 11 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present specification. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
In an exemplary application scenario, the method for displaying a public screen file in a live broadcast room of the embodiment of the present specification may be used in a live broadcast scenario. As shown in fig. 1, fig. 1 is a schematic diagram of a live network architecture according to an exemplary embodiment of the present application. The live network architecture may include a server 100 and a plurality of terminals. The server 100 may be referred to as a background server, a component server, or the like, and is configured to provide a background service of live webcasting. The server 100 may include a server, a server cluster, or a cloud platform, and may also be a program for executing a service. The terminal may be a smart terminal having a live webcast function, for example, the smart terminal may be a smart phone, a tablet computer, a PDA (Personal Digital Assistant), a multimedia player, a wearable device, and the like.
In the live network architecture, the terminals may be divided into an anchor terminal 101 and a viewer terminal 102. The anchor terminal 101 has an anchor client installed on it, and the viewer terminal 102 has a viewer client installed on it. The anchor client and the viewer client may be the same live video application, i.e. an application that has both an anchor mode and a viewer mode, for example "YY Live"; they may also be different live video applications. Where they are the same application, the application may be referred to as the anchor client (hereinafter the "anchor side") when it enters anchor mode, and as the viewer client (hereinafter the "viewer side") when it enters viewer mode. The viewer terminal 102, with the viewer client installed, can watch live video uploaded by the anchor client. The anchor terminal 101 and the viewer terminal 102 may be connected to the server 100 through a wired network, a wireless network, or a data transmission line.
In the live network architecture, a viewer can log in to the server 100 through the viewer client on the viewer terminal 102, an anchor can log in to the server 100 through the anchor client on the anchor terminal 101, and the viewer and the anchor enter the same live channel. The anchor client uploads the live content to the server 100, and the server 100 sends it to the viewer clients logged into that live channel for their viewers to watch. The viewers at the viewer clients can not only watch the live content uploaded by the anchor client, but also interact with the anchor of the live channel or with other viewers through the server 100; to make such interaction more engaging, functions such as sending bullet-screen (barrage) messages are provided.
At present, if an anchor wants to reply in text to a public-screen document in the live broadcast room while streaming, manual operation is needed to produce the text reply. However, in some scenarios, such as when the anchor is playing a game or is away from the input device (for example, during an entertainment performance), it is inconvenient for the anchor to type manually. For example, while playing a game the anchor sees a document sent by a certain audience member on the public screen and wants to type a reply document onto the public screen for the audience in the live broadcast room to watch (audience members entering the room afterwards would also see the reply); however, the anchor is live streaming the game content, and typing a reply is inconvenient.
In view of the foregoing problems, an embodiment of the present disclosure provides a method for displaying a public-screen document in a live broadcast room, applicable to a live broadcast scenario. The method performs semantic analysis on a voice message input by the anchor that includes a voice corresponding to at least part of an audience document displayed on the public screen and a voice replying to that audience document, generates an anchor document based on the replying voice, and displays it on the public screen; the anchor document is bound to the audience document, and a binding trigger event by a user on either document is responded to. This not only enriches the ways in which the anchor can generate public-screen documents and reply to audience documents, but also improves the ways in which users can operate on public-screen documents.
Referring to fig. 2, fig. 2 is a flowchart of a method for displaying a public screen file in a live broadcast room according to an embodiment of the present disclosure, including the following steps:
s202, receiving voice messages input by a main broadcast, wherein the voice messages comprise at least part of voice corresponding to the audience documents displayed on a public screen of a live broadcast room and voice replying the audience documents;
s204, performing semantic analysis on the voice message, and generating a main broadcasting file based on the voice replying the audience file;
s206, binding the audience file with the anchor file to enable the client to display the anchor file, and responding to a binding trigger event of the user to the anchor file or the audience file.
Generally, a viewer client may participate in live interaction by sending text and/or images (e.g., emoticons). Illustratively, text or images sent by a viewer client may be displayed on the live-room public screen. In the embodiments of this application, the text and/or images sent by a viewer client and displayed on the public screen of the live broadcast room are referred to as an audience document.
In some scenarios where it is inconvenient for the anchor to type, such as gaming scenarios, the anchor can enable a speech recognition function in the anchor client, and the anchor client then collects voice messages from the anchor's environment in real time.
In some embodiments, a voice message input by the anchor while streaming, for example the words the anchor speaks during the broadcast, may be received by the anchor client; alternatively, the anchor client sends the captured voice message to the server, and the server receives it.
When the anchor replies to a public-screen document, the anchor generally first reads out a certain document on the public screen and then speaks the reply. If the document on the public screen is "anchor, how tall are you", the anchor would generally reply in the form: "anchor, how tall are you; the anchor is one meter eighty". Of course, when reading an audience document displayed on the public screen of the live broadcast room, the anchor may read the whole document or only part of it.
When receiving the voice message, the anchor client first detects whether the voice message contains a voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room. For example, the voice message input by the anchor is "anchor, how old are you; the anchor is 30"; it is determined whether this voice message includes a voice corresponding to at least part of the audience document "anchor, how old are you". The voice corresponding to at least part of the audience document may be the voice corresponding to the whole document, or to only part of it, such as "anchor, how old" or "how old are you". After determining that the voice message includes a voice corresponding to at least part of the audience document, the remaining voice in the message, other than the voice corresponding to the audience document, can be determined to be the voice replying to the audience document.
As will be understood by those skilled in the art, either the anchor client or the server may receive the voice message and detect whether it includes a voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room. The anchor client can collect the anchor's voice messages and analyze them in real time, or it can upload the collected voice messages to the server in real time so that the server performs the analysis. This specification is not limited in this respect.
In one possible implementation, while semantic analysis is performed on the voice message to obtain its semantic content, the matching degree between the semantic content and the audience documents displayed on the public screen of the live broadcast room can be determined. If the matching degree exceeds a preset threshold, it is determined that the voice message includes a voice corresponding to at least part of an audience document displayed on the public screen; this case can be regarded as the anchor's currently input voice message being a reply to that audience document. The other voices in the voice message, apart from the voice corresponding to the audience document, are then determined to be the voice replying to the audience document, and semantic analysis is performed on that replying voice to generate the anchor document; for example, the replying voice can be converted into text form to produce the anchor document. Determining the matching degree improves the accuracy with which the anchor's reply to an audience document is recognized.
In one example, in the voice message "anchor, how old are you; the anchor is 30", the voice replying to the audience document is "the anchor is 30", which can be converted into text form as the reply to the audience document "anchor, how old are you".
For example, to improve recognition accuracy, a progress bar indicating the matching degree may be displayed during semantic analysis of the voice message. As shown in fig. 3, a progress bar may be generated on the public screen next to the audience document. When the semantic content obtained from the voice message completely matches the audience document, the progress bar reads 100%; when the semantic content matches 50% of the audience document, the progress bar reads 50%. In fig. 3, the audience document displayed on the live-room public screen is "ABCDE", and the semantic content obtained from the anchor's voice message includes "ABCD", which overlaps 80% with the audience document "ABCDE"; the progress bar therefore reads 80%.
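The patent does not specify how the matching degree is computed; one plausible reading of the "ABCD" vs "ABCDE" example is the fraction of the audience document covered by the longest shared block of characters, which can be sketched with the standard library:

```python
# Assumed formulation of the "matching degree": fraction of the audience
# document covered by the longest block of characters it shares with the
# recognized semantic content. The exact metric is not given by the patent.
from difflib import SequenceMatcher

def match_degree(semantic_content: str, audience_doc: str) -> float:
    m = SequenceMatcher(None, semantic_content, audience_doc)
    longest = m.find_longest_match(0, len(semantic_content), 0, len(audience_doc))
    return longest.size / len(audience_doc)

# The example from the text: recognized "ABCD" against audience document
# "ABCDE" gives 4/5, so an 80% progress bar would be shown.
progress = match_degree("ABCD", "ABCDE")
```

In a real system the same value could drive both the threshold test and the progress bar shown next to the audience document.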
It can be understood that the preset threshold may be set according to the actual application scenario; for example, it may be 50%, 30%, 80%, or another value, and the embodiments of this specification are not limited in this respect. Nor does this specification limit the display mode or form of the progress bar.
In one possible implementation, when it is determined that the matching degree between the semantic content corresponding to the voice message and the audience document reaches the preset threshold, the voice in the voice message within a preset duration after the voice corresponding to at least part of the audience document can be determined to be the voice replying to the audience document.
For example, suppose it is preset that the voice within 2 minutes after the voice corresponding to at least part of the audience document is the voice replying to the audience document. After semantic analysis of the anchor's voice message determines that a voice corresponding to at least part of the audience document is present, the voice within the following 2 minutes is determined to be the voice replying to the audience document.
In another possible implementation, when it is determined that the matching degree between the semantic content corresponding to the voice message and the audience document reaches the preset threshold, the voice in the period from the end of the voice corresponding to at least part of the audience document until a pause of a preset duration may be taken as the voice replying to the audience document.
For example, with the pause duration preset to 5 seconds, a 5-second pause is taken to mark the end of the voice replying to the audience document. When the matching degree between the semantic content of the voice message and the audience document reaches the preset threshold, the voice in the period from the voice corresponding to at least part of the audience document until the first 5-second pause is determined to be the voice replying to the audience document. In a specific example, when performing semantic analysis on a voice message, the first 10 seconds are determined to be the voice corresponding to at least part of the audience document, and the 35th to 40th seconds are the first 5-second pause; the voice from the 10th to the 35th second is therefore determined to be the voice replying to the audience document.
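The pause-based rule above can be sketched as a small function over timestamped speech intervals; the interval representation and numbers are assumptions mirroring the example (read-back ends at 10 s, first 5-second pause at 35-40 s):

```python
# Hypothetical segmentation of the replying voice by pause length: after the
# read-back of the audience document ends, keep speech until the first
# silence of at least `pause` seconds.

def reply_window(segments, readback_end, pause=5.0):
    """segments: list of (start, end) speech intervals, sorted by time.
    Returns (readback_end, t) where t is where the reply voice ends, i.e.
    the last speech before a gap of at least `pause` seconds."""
    prev_end = readback_end
    for start, end in segments:
        if start < readback_end:
            continue  # still part of the read-back of the audience document
        if start - prev_end >= pause:
            break     # long silence: the reply ended at prev_end
        prev_end = end
    return (readback_end, prev_end)

# speech from 0-10 s (read-back), 10-35 s (reply), then 40-50 s (unrelated)
window = reply_window([(0, 10), (10, 35), (40, 50)], readback_end=10)
```

With these inputs the reply voice spans seconds 10 to 35, matching the worked example in the text.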
Based on this method, the voice replying to the audience document in the anchor's voice message can be determined more accurately, making the anchor document subsequently displayed on the public screen of the live broadcast room more accurate.
In a possible implementation, if the voice message includes a voice in which the anchor requests a permission restriction on a specified user, the clients are notified to delete the historical documents sent by the specified user on the public screen of the live broadcast room, and the specified user's function of sending documents on the public screen is closed. For example, if the voice message input by the anchor is "screen Wanxiao", all clients are notified to delete all historical documents sent by the user "Wanxiao" on the live-room public screen, and the document-sending function of the client used by the user "Wanxiao" is turned off. Based on this, disturbance of the live broadcast room by malicious users can be shielded, the anchor's ability to manage the public-screen documents is enhanced, and the viewing experience of the public screen in the live broadcast room is improved.
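A minimal sketch of this restriction branch, with class and method names assumed for illustration (the patent only describes the two effects: deleting history and closing the sending function):

```python
# Hypothetical public-screen state; "restrict" applies both effects the
# text describes for a voice command such as "screen Wanxiao".

class PublicScreen:
    def __init__(self):
        self.messages = []   # list of (user, text) shown on the public screen
        self.muted = set()   # users whose sending function is closed

    def post(self, user, text):
        if user in self.muted:
            return False     # sending function is closed for this user
        self.messages.append((user, text))
        return True

    def restrict(self, user):
        """Delete the user's historical documents and close their sending
        function, as notified to all clients."""
        self.messages = [(u, t) for u, t in self.messages if u != user]
        self.muted.add(user)

screen = PublicScreen()
screen.post("Wanxiao", "spam 1")
screen.post("Alice", "hello")
screen.restrict("Wanxiao")
```

After `restrict`, only other users' documents remain and further posts by the restricted user are rejected.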
In one possible embodiment, before the anchor document is generated, predetermined vocabulary in the voice message may be recognized and deleted or replaced, for example by deleting certain predetermined words that are not allowed to be displayed on the public screen of the live broadcast room, or by replacing them with other identifiers. This further improves the viewing experience of the public screen based on the voice message input by the anchor.
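The delete-or-replace step can be sketched as a simple substitution pass; the word list and the replacement marker are illustrative assumptions:

```python
# Assumed filter for predetermined vocabulary applied before the anchor
# document is generated; pass marker="" to delete rather than replace.

def filter_vocabulary(text, banned, marker="***"):
    """Replace each predetermined (banned) word with a marker."""
    for word in banned:
        text = text.replace(word, marker)
    return text

clean = filter_vocabulary("this badword is fine", {"badword"})
```

A production system would likely use a trie or regex over a large word list, but the observable behavior is the same.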
In some embodiments, after the anchor document is generated, the audience document is bound to the anchor document so that the client displays the anchor document on the live-room public screen and responds to a user's binding trigger event on the anchor document or the audience document.
In one example, the anchor client receives the voice message input by the anchor and performs semantic analysis on it to generate the anchor document; the anchor client then sends the anchor document to the server and notifies the server to bind the audience document to the anchor document; and the server sends the anchor document to the audience clients so that they display it and respond to a user's binding trigger event on the anchor document or the audience document according to the binding relation between the two.
In another example, the anchor client may instead send the received voice message to the server, and the server performs semantic analysis on it to generate the anchor document; the server then sends the anchor document to the audience clients so that they display it and respond to a user's binding trigger event on the anchor document or the audience document according to the binding relation between the two. The binding trigger event may be a click event by the user on the anchor document or the audience document, for example: user A clicks the anchor document on the public screen, or double-clicks the audience document, or long-presses the audience document. The embodiments of this specification do not limit the type of click event.
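As a rough sketch (identifiers assumed, not from the patent), the binding relation and the trigger-event response could be modeled as a symmetric lookup: triggering either document resolves to its bound counterpart.

```python
# Hypothetical client-side store of the binding relation; a trigger event
# (click, double-click, long-press) on either document yields its partner.

class Bindings:
    def __init__(self):
        self._pairs = {}   # doc_id -> bound counterpart doc_id

    def bind(self, audience_doc_id, anchor_doc_id):
        self._pairs[audience_doc_id] = anchor_doc_id
        self._pairs[anchor_doc_id] = audience_doc_id

    def on_trigger(self, doc_id):
        """Return the bound partner (or None), which the client can then
        highlight or scroll to on the public screen."""
        return self._pairs.get(doc_id)

b = Bindings()
b.bind("aud-17", "anc-42")
```

Storing the relation in both directions keeps the response symmetric, matching the text's "trigger event on the anchor document or the audience document".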
In a possible implementation, according to the anchor's preset, the anchor document can be set to be visible to everyone, so that the anchor document bound to the audience document is displayed on all clients; or it can be set to be visible only to the user who sent the bound audience document, so that the anchor document is displayed only on that user's client and not on other users' clients. Based on this, the anchor's ability to manage the anchor documents that reply to public-screen documents can be improved.
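The two visibility presets reduce to choosing the set of recipient clients; the function and parameter names below are assumptions for illustration:

```python
# Sketch of the two assumed visibility presets for an anchor document:
# visible to everyone, or visible only to the sender of the bound
# audience document.

def recipients(all_users, sender, visible_to_all):
    """Return the users whose clients should display the anchor document."""
    return list(all_users) if visible_to_all else [sender]

everyone = recipients(["A", "B", "C"], "A", visible_to_all=True)
only_sender = recipients(["A", "B", "C"], "A", visible_to_all=False)
```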
In one possible embodiment, a bound anchor document and audience document may be displayed on the public screen of the live broadcast room in the same color as each other, and in a color different from that of the other documents on the public screen. For example, the font color of the other documents may be the default black while the bound anchor document and audience document are red; alternatively, the background color of the other documents may be the default white while the bound pair's background is yellow. This specification does not limit the colors used or where the color is applied. This further improves how recognizable a bound anchor document and audience document are on the public screen of the live broadcast room.
In one embodiment, the binding trigger event is a click event of a target user on the anchor document or the audience document, the target user being the user who sent the audience document. For example, user A sends the document "How old is the anchor?", and the voice message input by the anchor is "How old is the anchor? The anchor is 30 years old." Here, "How old is the anchor?" is the audience document, "The anchor is 30 years old" is the anchor document, and user A, who sent the audience document, is the target user. When target user A uses viewer client A to click the anchor document "The anchor is 30 years old" on the public screen of the live broadcast room, viewer client A jumps from the position of the currently displayed document to the position where the audience document "How old is the anchor?" is displayed, so that the audience document appears on the public screen and target user A can see it. The position of the current document is the position on the public screen, at the moment of the click, of the document user A clicked; the documents displayed on the public screen at that moment may include audience documents sent by other viewer clients or other anchor documents.
Referring to fig. 4, when target user A uses viewer client A to click the anchor document on the public screen of the live broadcast room, the position of the current document on viewer client A is as shown in fig. 4; viewer client A then jumps to the position where the audience document is displayed on the public screen, as shown in fig. 5. (In the drawings of this specification, the documents sent by viewers 1-7 are historical documents sent by non-target users on the public screen of the live broadcast room.) This embodiment is merely illustrative: the user's click event may be a single click, a double click, or another click operation, or even a non-click event such as a slide event, and this specification is not limited thereto. This further enriches how users can operate on public screen documents.
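The jump behaviour above can be sketched by modelling the public screen as an ordered list of document ids; the ids and the shape of the binding table are illustrative assumptions.

```python
def jump_position(screen, bindings, clicked):
    """Return the index the public screen should scroll to after a click.

    screen:   document ids in display order on the public screen.
    bindings: anchor document id -> bound audience document id.
    clicked:  the document id the user clicked.
    """
    reverse = {aud: anc for anc, aud in bindings.items()}
    # Clicking either half of a bound pair jumps to its counterpart;
    # an unbound document just stays where it is.
    target = bindings.get(clicked) or reverse.get(clicked) or clicked
    return screen.index(target)
```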
In one embodiment, the audience documents are sent by a target user, and all the audience documents sent by the target user that are bound to anchor documents, together with the anchor documents bound to each of them, can be displayed to the target user on request. For example, target user B sent three audience documents that the anchor replied to: "How old is the anchor?", "Anchor, I love you", and "Anchor, you are really handsome"; among the anchor documents generated from the anchor's voice messages, the ones bound to these three audience documents are "The anchor is 30 years old", "I love you too", and "Thank you, you are handsome too". In response to a request from target user B, such as long-pressing any of the audience or anchor documents, these documents may be displayed to target user B, as shown in fig. 6. Fig. 6 is a list displayed to target user B, containing all the audience documents sent by target user B and the anchor documents bound to them. In this embodiment, the position of the current document may also be jumped to the position of whichever audience document or anchor document target user B selects. When target user B selects the audience document "Anchor, you are really handsome", the position of the currently displayed document on the public screen is as shown in fig. 6, and the display jumps from there to the position of that audience document, so that the public screen of the live broadcast room displays the audience document "Anchor, you are really handsome", as shown in fig. 7.
Of course, in this embodiment, the target user may select the anchor document or the audience document by a single click, a double click, and so on; this specification is not limited thereto. The three bound pairs in fig. 6 may also be displayed in different colors, e.g., "How old is the anchor?" and "The anchor is 30 years old" in a red font, "Anchor, I love you" and "I love you too" in green, and "Anchor, you are really handsome" and "Thank you, you are handsome too" in blue. The distinction need not be made by font color; it may equally be made by background color, and the like. This further improves both the user's ways of operating on public screen documents and how easily the user can recognize bound documents.
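The list shown in fig. 6 could be assembled as follows; the field names and data shapes are assumptions made for this sketch, not the specification's data model.

```python
def bound_pairs(documents, bindings, user):
    """Return (audience_text, anchor_text) pairs for one user's documents.

    documents: dicts with "id", "sender" and "text" for every document
               on the public screen, in display order.
    bindings:  anchor document id -> bound audience document id.
    """
    by_id = {d["id"]: d for d in documents}
    reverse = {aud: anc for anc, aud in bindings.items()}
    pairs = []
    for doc in documents:
        # Keep only audience documents this user sent that got a bound reply.
        if doc["sender"] == user and doc["id"] in reverse:
            pairs.append((doc["text"], by_id[reverse[doc["id"]]]["text"]))
    return pairs
```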
In one embodiment, the audience document is sent by a target user, and in response to a click event of a non-target user on the anchor document, the position of the current document is jumped to the position of the most recently displayed audience document bound to the clicked anchor document, so that that audience document is displayed on the public screen of the live broadcast room; a non-target user is a user who has not sent an audience document bound to the anchor document. For example, the anchor document is "The anchor is 30 years old" in fig. 4, and the audience documents bound to it include audience document M sent by target user A and audience document N sent by target user B, as shown in fig. 8, where audience document N is the most recently sent. Non-target user C has not sent an audience document; in response to user C's click on the anchor document in fig. 4, the position of the current document, i.e. the position shown in fig. 4, is jumped to the position on the public screen of audience document N in fig. 8. The click event of non-target user C may be a single click, a double click, and so on; this specification is not limited thereto. This further enriches how users can operate on public screen documents.
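The target/non-target distinction above can be sketched in a few lines; the data shapes are illustrative assumptions.

```python
def click_jump_target(anchor_id, bindings, clicker, senders):
    """Return the audience document id to jump to after a click on an anchor
    document.

    bindings: anchor document id -> bound audience ids in display order.
    senders:  audience document id -> the user who sent it.
    """
    bound = bindings.get(anchor_id, [])
    own = [a for a in bound if senders.get(a) == clicker]
    if own:  # target user: jump to the document they themselves sent
        return own[-1]
    # Non-target user: jump to the most recently displayed bound document.
    return bound[-1] if bound else None
```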
In one possible implementation, besides replying to audience documents on the public screen of the live broadcast room, when the voice message input by the anchor is detected to include a preset keyword, the voice containing the keyword is converted into text and displayed on the public screen as an anchor document. The keywords may be words related to the content the anchor is currently broadcasting. For example, when the anchor is playing a game, a keyword may be the game's name or the name of an in-game item; when the anchor is giving a talent performance such as singing, a keyword may be a lyric, a song title, and so on. Before detecting whether the voice message includes a preset keyword, a keyword server may collect and determine the keywords in advance, for example from a game's official website or a music website.
In one embodiment, when semantic analysis detects that the voice message input by the anchor includes a preset keyword, the voice corresponding to the keyword together with a specified duration of voice after it is converted into text to generate the anchor document. For example, it may be preset that the keyword voice and the 20 seconds of voice following it are converted into text and displayed on the public screen: if the voice corresponding to the preset keyword lasts 5 seconds, those 5 seconds plus the following 20 seconds of voice are converted into a document serving as the anchor document and displayed on the public screen of the live broadcast room. This is only an example; the keyword voice together with several seconds of voice before it may likewise be converted into text for display, and this specification is not limited thereto.
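The keyword-window rule above can be sketched under stated assumptions: the recognized speech is a list of (start_second, end_second, text) segments, and the anchor document is the keyword segment plus the speech starting within the following 20 seconds.

```python
WINDOW_AFTER = 20  # seconds of speech kept after the keyword segment

def keyword_document(segments, keywords):
    """Build an anchor document from the keyword voice plus the window after it.

    segments: (start_second, end_second, text) tuples in time order.
    keywords: preset keyword strings collected in advance.
    """
    for start, end, text in segments:
        if any(k in text for k in keywords):
            cutoff = end + WINDOW_AFTER
            kept = [t for s, e, t in segments if start <= s < cutoff]
            return " ".join(kept)
    return None  # no preset keyword detected in the voice message
```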
Based on the above embodiment, when an anchor document containing a keyword is displayed on the public screen of the live broadcast room, the keyword in the anchor document may be given close-up treatment: for example, it may be rendered in a larger font or with thicker strokes, and this specification is not limited thereto. This further enriches the ways in which the anchor can generate public screen documents and improves the anchor's ability to manage them.
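The close-up treatment can be sketched as a trivial pre-render pass; the `<b>` markup below is an assumption standing in for whatever larger-font or thicker-stroke styling the client actually applies.

```python
def emphasize(document, keywords):
    """Wrap each keyword occurrence in an emphasis marker before rendering."""
    for k in keywords:
        document = document.replace(k, f"<b>{k}</b>")
    return document
```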
Referring to fig. 9, fig. 9 is an interaction diagram of a method for displaying a public screen document in a live broadcast room as described in this specification. The method is applied to clients, including an anchor client 91 and a viewer client 93, which interact with a server 92.
In S901, the anchor client receives the voice message input by the anchor, the voice message including voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room and voice replying to that audience document. The voice message may be a passage the anchor speaks during the live broadcast that contains both the content of a document on the public screen and the content of the reply to it.
In S902, the anchor client performs semantic analysis on the voice message, for example by applying a semantic analysis algorithm to the voice message received in S901.
In S903, based on the result of the semantic analysis in S902, the anchor client generates the anchor document from the voice replying to the audience document, for example by converting that reply voice into text to serve as the anchor document.
In S904, the anchor client sends the server a notification to bind the audience document with the anchor document, and sends the anchor document to the server.
In one embodiment, the anchor client captures what the anchor says while live, i.e. the voice message input by the anchor, which includes voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room and voice replying to it. If the audience document is "How old is the anchor?" and the voice message is "How old is the anchor? The anchor is 30 years old.", then the voice corresponding to at least part of the audience document is "How old is the anchor?", and the voice replying to the audience document is "The anchor is 30 years old". After receiving the voice message, the anchor client performs semantic analysis on it and, from the reply voice, generates the anchor document "The anchor is 30 years old". The anchor client then notifies the server to bind the audience document "How old is the anchor?" with the anchor document "The anchor is 30 years old", and sends the anchor document to the server.
In S905 and S906, after receiving the notification sent by the anchor client for binding the audience document and the anchor document, the server binds the two documents, and sends the anchor document received from the anchor client to the viewer client.
In S907, after receiving the anchor document from the server, the viewer client displays it on the public screen of the live broadcast room.
In S908, the viewer client responds to a user's binding trigger event on the anchor document or the audience document according to the binding relationship between them.
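The S901–S908 flow above can be sketched end to end with in-memory stand-ins for the three parties; the class and method names, and the crude string subtraction standing in for semantic analysis, are all assumptions made for this illustration.

```python
class Server:
    """Stand-in for the server side of the S905/S906 steps."""
    def __init__(self):
        self.bindings = {}        # anchor document -> bound audience document
        self.viewer_screen = []   # documents forwarded to the viewer client

    def bind(self, audience_doc, anchor_doc):   # S905
        self.bindings[anchor_doc] = audience_doc

    def forward(self, anchor_doc):              # S906/S907
        self.viewer_screen.append(anchor_doc)

def anchor_client_flow(server, audience_doc, voice_text):
    """S901-S904: derive the reply from the voice, notify, and send."""
    # Crude stand-in for semantic analysis: strip the echoed audience
    # document out of the recognized voice, leaving the reply.
    reply = voice_text.replace(audience_doc, "").strip(", ")
    server.bind(audience_doc, reply)
    server.forward(reply)
    return reply
```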
In one possible implementation, while performing semantic analysis on the voice message to obtain its semantic content, the anchor client may determine the matching degree between that semantic content and an audience document displayed on the public screen of the live broadcast room. If the matching degree exceeds a preset threshold, the anchor client determines that the voice message includes voice corresponding to at least part of that audience document; this can be taken to mean that the anchor's current voice message is a reply to the audience document, so the remaining voice in the message, other than the voice corresponding to the audience document, can be determined to be the reply voice. Semantic analysis can then be performed on the reply voice to generate the anchor document; illustratively, the anchor client may convert the reply voice into text form to generate the anchor document.
In a possible implementation, when the matching degree between the semantic content of the voice message and the audience document reaches the preset threshold, the anchor client may determine the voice within a preset duration after the voice corresponding to at least part of the audience document to be the voice replying to the audience document. For example, with the preset duration set to 2 minutes: after semantic analysis of the voice message determines that it contains voice corresponding to at least part of the audience document, the 2 minutes of voice following that voice are determined to be the reply voice.
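As a hedged sketch of this matching-degree check, token overlap against the audience document can stand in for the unspecified semantic-analysis algorithm, with the preset duration after the matched voice treated as the reply window; the threshold value and all names are assumptions.

```python
THRESHOLD = 0.8       # preset matching-degree threshold (assumed value)
REPLY_WINDOW = 120    # preset duration of the reply voice, in seconds

def find_reply_window(transcript_words, audience_doc, match_end_time):
    """Return (is_reply, window_end_second) for a recognized voice message."""
    doc_words = set(audience_doc.lower().split())
    overlap = doc_words & {w.lower() for w in transcript_words}
    score = len(overlap) / len(doc_words) if doc_words else 0.0
    if score >= THRESHOLD:
        # The voice up to match_end_time echoed the audience document;
        # the following REPLY_WINDOW seconds are treated as the reply.
        return True, match_end_time + REPLY_WINDOW
    return False, None
```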
In one embodiment, when the viewer client receives a user's click event on the anchor document, the position of the current document may be jumped to the position of the audience document on the public screen of the live broadcast room, so that the audience document is displayed there. As shown in figs. 4 and 5, when the viewer client receives a click on the anchor document in fig. 4, the display jumps from the position of the anchor document in fig. 4 to the position of the audience document in fig. 5. Conversely, when the viewer client receives a click event on the audience document, the position of the current document is jumped to the position of the anchor document on the public screen. The click event may be a single click, a double click, a long press, and so on; this specification is not limited thereto.
In another embodiment, the audience documents are sent by a target user, and after receiving the target user's request, the client displays all the audience documents sent by that user that are bound to anchor documents, together with the anchor documents bound to each of them. As shown in figs. 5 and 6, when the client receives a request from the target user, such as long-pressing an audience document in fig. 5, it may display the list in fig. 6, which records all the audience documents sent by the target user that are bound to anchor documents and the anchor documents bound to them. The client may also receive the target user's selection of an audience document or anchor document and jump the position of the current document to the position of the selected document. For example, if the target user selects the audience document "Anchor, you are really handsome" in fig. 6, the display jumps to the position of that audience document on the public screen of the live broadcast room in fig. 7. The request and the selection may be a click, a double click, a long press on an anchor document or an audience document, and so on; this specification is not limited thereto.
In another embodiment, when the client receives a click event of a non-target user on the anchor document, it jumps the position of the current document to the position of the most recently displayed audience document bound to that anchor document, a non-target user being a user who has not sent a bound audience document. For example, when the client receives a click by non-target user C on the anchor document in fig. 4, the display jumps from the position of the anchor document to the position of audience document N on the public screen of the live broadcast room in fig. 8, so that audience document N is displayed. In fig. 8, audience documents M and N are both bound to the anchor document in fig. 4, and N is the most recently displayed of the two. The click operation of the non-target user may be a single click, a double click, a long press, and so on; this specification is not limited thereto.
Referring to fig. 10, fig. 10 is an interaction diagram of another method for displaying a public screen document in a live broadcast room as described in this specification. The method is applied to clients, including an anchor client 1010 and a viewer client 1030, which interact with a server 1020.
In S1001, the anchor client receives the voice message input by the anchor, the voice message including voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room and voice replying to that audience document; the voice message may be a passage the anchor speaks during the live broadcast that contains both the content of a document on the public screen and the content of the reply to it.
In S1002, the anchor client sends the received voice message to the server.
In S1003, after receiving the voice message sent by the anchor client, the server performs semantic analysis on it, for example by applying a semantic analysis algorithm and converting the voice message into text.
In S1004, the server generates the anchor document from the voice replying to the audience document, for example by converting that reply voice into text to serve as the anchor document.
In S1005, the server binds the audience document with the anchor document.
In S1006, the server sends the anchor document to the viewer client.
In S1007, after receiving the anchor document from the server, the viewer client displays it on the public screen of the live broadcast room.
In S1008, the viewer client responds to a user's binding trigger event on the anchor document or the audience document according to the binding relationship between them.
In a possible implementation, while performing semantic analysis on the voice message to obtain its semantic content, the server may determine the matching degree between that semantic content and an audience document displayed on the public screen of the live broadcast room. If the matching degree exceeds a preset threshold, the server determines that the voice message includes voice corresponding to at least part of that audience document; this can be taken to mean that the anchor's current voice message is a reply to the audience document, so the remaining voice in the message, other than the voice corresponding to the audience document, can be determined to be the reply voice. Semantic analysis can then be performed on the reply voice to generate the anchor document; for example, the server may convert the reply voice into text form to generate it.
When the matching degree between the semantic content of the voice message and the audience document reaches the preset threshold, the server may determine the voice within a preset duration after the voice corresponding to at least part of the audience document to be the reply voice. For example, with the preset duration set to 2 minutes: after semantic analysis of the voice message input by the anchor determines that it contains voice corresponding to at least part of the audience document, the 2 minutes of voice following that voice are determined to be the voice replying to the audience document.
In one embodiment, when the viewer client receives a user's click event on the anchor document, the position of the current document may be jumped to the position where the audience document is displayed on the public screen of the live broadcast room; as shown in figs. 4 and 5, when the viewer client receives a click on the anchor document in fig. 4, the display jumps from the position of the anchor document in fig. 4 to the position of the audience document in fig. 5. Conversely, when the viewer client receives a click event on the audience document, the position of the current document is jumped to the position of the anchor document on the public screen. The click event may be a single click, a double click, a long press, and so on; this specification is not limited thereto.
In another embodiment, the audience documents are sent by a target user, and after receiving the target user's request, the client displays all the audience documents sent by that user that are bound to anchor documents, together with the anchor documents bound to each of them. As shown in figs. 5 and 6, when the client receives a request from the target user, such as long-pressing an audience document in fig. 5, it may display the list in fig. 6, which records all the audience documents sent by the target user that are bound to anchor documents and the anchor documents bound to them. The client may also receive the target user's selection of an audience document or anchor document and jump the position of the current document to the position of the selected document on the public screen of the live broadcast room. For example, if the target user selects the audience document "Anchor, you are really handsome" in fig. 6, the display jumps to the position of that audience document on the public screen of the live broadcast room in fig. 7. The request and the selection may be a click, a double click, a long press on an anchor document or an audience document, and so on; this specification is not limited thereto.
In another embodiment, when the client receives a click event of a non-target user on the anchor document, it jumps the position of the current document to the position of the most recently displayed audience document bound to that anchor document on the public screen of the live broadcast room, so that that audience document is displayed; a non-target user is a user who has not sent a bound audience document. For example, when the client receives a click by non-target user C on the anchor document in fig. 4, the display jumps from the position of the anchor document to the position of audience document N on the public screen of the live broadcast room in fig. 8. In fig. 8, audience documents M and N are both bound to the anchor document in fig. 4, and N is the most recently displayed of the two. The click operation of the non-target user may be a single click, a double click, a long press, and so on; this specification is not limited thereto.
An embodiment of this specification provides a display apparatus for public screen documents in a live broadcast room, applied to an anchor client, including:
a receiving module, configured to receive a voice message input by the anchor, the voice message including voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room and voice replying to that audience document;
a generating module, configured to perform semantic analysis on the voice message and generate an anchor document based on the voice replying to the audience document;
a notification module, configured to notify a server to bind the audience document with the anchor document, so that the viewer client receives and displays the anchor document sent by the server and responds to a user's binding trigger event on the anchor document or the audience document according to the binding relationship between them.
An embodiment of this specification further provides a display apparatus for public screen documents in a live broadcast room, applied to a server, including:
a binding module, configured to bind the anchor document sent by the anchor client with the audience document on the public screen of the live broadcast room, and to send the anchor document to the viewer client.
An embodiment of this specification further provides a display apparatus for public screen documents in a live broadcast room, applied to a viewer client, including:
a response module, configured to display the anchor document after receiving it from the server, and to respond to a user's binding trigger event on the anchor document or the audience document according to the binding relationship between them.
Correspondingly, an embodiment of this specification provides another display apparatus for public screen documents in a live broadcast room, applied to an anchor client, including:
a receiving module, configured to receive a voice message input by the anchor, the voice message including voice corresponding to at least part of an audience document displayed on the public screen of the live broadcast room and voice replying to that audience document;
a sending module, configured to send the voice message to a server.
Correspondingly, an embodiment of this specification provides another display apparatus for public screen documents in a live broadcast room, applied to a server, including:
a generating module, configured to, after receiving the voice message sent by the anchor client, perform semantic analysis on it and generate the anchor document based on the voice replying to the audience document; and
a binding module, configured to bind the audience document with the anchor document.
Correspondingly, an embodiment of this specification provides another display apparatus for public screen documents in a live broadcast room, applied to a viewer client, including:
a response module, configured to display the anchor document after receiving it from the server, and to respond to a user's binding trigger event on the anchor document or the audience document according to the binding relationship between them.
The implementation of the functions and roles of each module in the above apparatus is described in detail in the implementation of the corresponding steps of the above methods, and is not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, the relevant points may be found in the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this specification, and can be understood and implemented by those of ordinary skill in the art without inventive effort.
The above display apparatus for a public screen document in a live broadcast room can be applied to an electronic device. Referring to fig. 11, the present application further provides an electronic device 1100 comprising a memory 1102 for storing executable instructions and a processor 1101; wherein the processor 1101, when executing the executable instructions, implements the steps of any one of the methods described above.
The processor 1101 executes the executable instructions stored in the memory 1102. The processor 1101 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1102 stores the executable instructions for the above display method. The memory 1102 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The device may also cooperate, over a network connection, with a network storage device that performs the storage function of the memory. The memory 1102 may be an internal storage unit of the device 1100, such as a hard disk or memory of the device 1100, or an external storage device of the device 1100, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the device 1100. Further, the memory 1102 may include both internal and external storage of the device 1100. The memory 1102 is used to store the executable instructions and other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
The various embodiments described herein may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application-Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein. For a software implementation, a process or function may be implemented with separate software modules, each of which performs at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory, and executed by the controller.
The electronic device 1100 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a mobile phone. The device may include, but is not limited to, a processor 1101 and a memory 1102. Those skilled in the art will appreciate that fig. 11 is merely an example of the electronic device 1100 and does not constitute a limitation thereof; the device may include more or fewer components than shown, combine some components, or have different components. For example, the device may also include input/output devices, network access devices, buses, and the like.
The implementation of the functions and actions of each unit in the above device is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.
In one embodiment, the present application further provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of the method of any of the above embodiments.
This application may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results.
Other embodiments of the present specification will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which this specification pertains.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (18)

1. A method for displaying a public screen document in a live broadcast room, characterized by comprising the following steps:
receiving a voice message input by an anchor, wherein the voice message comprises voice corresponding to at least part of an audience document displayed on a public screen of the live broadcast room and voice replying to the audience document;
performing semantic analysis on the voice message, and generating an anchor document based on the voice replying to the audience document;
and binding the audience document with the anchor document, so that a client displays the anchor document and responds to a binding trigger event of a user on the anchor document or the audience document.
2. The method of claim 1, wherein the step of performing semantic analysis on the voice message and generating the anchor document based on the voice replying to the audience document comprises:
determining, in the process of performing semantic analysis on the voice message, a matching degree between the semantic content corresponding to the voice message and the audience document;
if the matching degree exceeds a preset threshold, determining the voice in the voice message other than the voice corresponding to the audience document as the voice replying to the audience document;
and generating the anchor document based on the voice replying to the audience document.
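The matching-degree check described in claim 2 can be sketched as follows. This is a toy stand-in that assumes the voice message has already been transcribed to text and uses a longest-common-substring ratio (`difflib`) in place of a real semantic-analysis model; the function name and the 0.6 threshold are illustrative, not taken from the specification:

```python
# Sketch of claim 2: compute a matching degree between the transcribed voice
# message and an audience document; if it exceeds a preset threshold, treat
# the remaining speech as the reply from which the anchor document is built.
# difflib is a stand-in for a real semantic-matching model.
from difflib import SequenceMatcher

def split_reply(transcript: str, audience_doc: str, threshold: float = 0.6):
    matcher = SequenceMatcher(None, transcript, audience_doc)
    # Longest block of the transcript that matches the audience document.
    start, _, size = matcher.find_longest_match(
        0, len(transcript), 0, len(audience_doc))
    matching_degree = size / max(len(audience_doc), 1)
    if matching_degree < threshold:
        return None  # the anchor was not reading out this audience document
    # Everything outside the matched span is taken as the reply speech.
    reply = (transcript[:start] + transcript[start + size:]).strip()
    return matching_degree, reply
```

A production system would replace the string matcher with speech recognition plus a semantic similarity model; the thresholding and splitting logic stays the same shape.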
3. The method of claim 2, further comprising:
displaying a progress bar indicating the matching degree during the semantic analysis of the voice message.
4. The method of claim 2, wherein determining the voice in the voice message other than the voice corresponding to the audience document as the voice replying to the audience document comprises:
determining the voice in the voice message within a preset time length after the voice corresponding to the audience document as the voice replying to the audience document.
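The time-window rule of claim 4 can be sketched as follows, assuming the recognizer yields timestamped segments; the segment format, timestamp unit (seconds), and window length are illustrative, not prescribed by the specification:

```python
# Sketch of claim 4: within the transcribed voice message, speech falling
# inside a preset window after the segment that read out the audience
# document is taken as the reply speech.

def reply_segments(segments, doc_end_time: float, window: float = 10.0):
    """segments: list of (start_time, text) tuples from the recognizer;
    doc_end_time: when the voice corresponding to the audience document ended."""
    return [text for start, text in segments
            if doc_end_time < start <= doc_end_time + window]
```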
5. The method of claim 1, wherein the binding trigger event is a click event of a target user on the anchor document or the audience document, the target user being the user who sent the audience document;
the step of responding to a binding trigger event of a user on the anchor document or the audience document comprises:
when a click event of the target user on the anchor document is received, jumping to the position of the audience document to display the audience document on the public screen of the live broadcast room; or
when a click event of the target user on the audience document is received, jumping to the position of the anchor document to display the anchor document on the public screen of the live broadcast room.
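The jump behaviour of claim 5 reduces to a lookup over the two-way binding. A minimal sketch with illustrative data shapes (the specification does not prescribe how positions on the public screen are tracked):

```python
# Sketch of claim 5's jump on the audience client: when the target user
# clicks either side of a bound pair, the public screen scrolls to the
# position of the other document. Names and lookups are illustrative.

def on_click(clicked_id, bindings, positions):
    """bindings maps each document id to its bound counterpart (both ways);
    positions maps a document id to its index on the public screen."""
    other = bindings.get(clicked_id)
    if other is None:
        return None          # the clicked document has no binding
    return positions[other]  # scroll target on the public screen
```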
6. The method of claim 1, wherein the audience document is sent by a target user, and the step of responding to a binding trigger event of a user on the anchor document or the audience document comprises:
displaying to the target user, according to a request of the target user, all audience documents sent by the target user that are bound with anchor documents, together with the anchor documents respectively bound to those audience documents;
and jumping, according to the audience document or anchor document selected by the target user, to the position of the selected document to display it on the public screen of the live broadcast room.
7. The method of claim 1, wherein the step of responding to a binding trigger event of a user on the anchor document or the audience document comprises:
when a click event of a non-target user on the anchor document is received, jumping to the position of the most recently displayed audience document bound to the anchor document, to display that audience document on the public screen of the live broadcast room; wherein a non-target user is a user who has not sent the audience document.
8. The method of claim 1, wherein the anchor document and the audience document are displayed in the same color, which differs from the colors of other documents on the public screen of the live broadcast room.
9. The method of claim 1, wherein the step of performing semantic analysis on the voice message further comprises:
if the voice message comprises voice in which the anchor requests to restrict the permission of a specified user, notifying the client to delete the historical documents sent by the specified user, and disabling the specified user's function of sending documents on the public screen of the live broadcast room.
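Claim 9's permission restriction can be sketched as follows, assuming semantic analysis has already extracted the specified user from the anchor's voice; the data shapes (a mute set and a list of document dicts) are illustrative:

```python
# Sketch of claim 9: once the restriction request is detected, the user's
# historical documents are removed from the public screen and their right
# to post new documents is closed. Data shapes are assumptions.

def apply_restriction(specified_user, muted_users, public_screen):
    """Called after semantic analysis finds a restriction request for
    specified_user in the anchor's voice message."""
    muted_users.add(specified_user)  # close the document-sending function
    # Delete the user's historical documents from the public screen.
    public_screen[:] = [d for d in public_screen
                        if d["sender"] != specified_user]

def can_post(user, muted_users) -> bool:
    return user not in muted_users
```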
10. The method of claim 1, wherein the step of generating the anchor document based on the voice replying to the audience document further comprises:
recognizing a preset vocabulary in the voice message, and deleting or replacing the recognized words; wherein the preset vocabulary indicates words that are not allowed to be displayed on the public screen of the live broadcast room.
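The preset-vocabulary filter of claim 10 can be sketched as follows; the word list, the replacement markers, and the case-insensitive matching are assumptions for illustration, not requirements of the specification:

```python
# Sketch of claim 10: before the anchor document is generated, words on a
# preset list are deleted (mapped to "") or replaced (mapped to a marker).
import re

BANNED = {"badword": "***", "spamlink": ""}  # illustrative preset vocabulary

def clean_text(text: str) -> str:
    for word, repl in BANNED.items():
        text = re.sub(re.escape(word), repl, text, flags=re.IGNORECASE)
    # Collapse the double spaces left behind by deleted words.
    return re.sub(r"\s{2,}", " ", text).strip()
```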
11. A method for displaying a public screen document in a live broadcast room, applied to clients comprising an anchor client and an audience client, characterized by comprising the following steps:
receiving, by the anchor client, a voice message input by an anchor, wherein the voice message comprises voice corresponding to at least part of an audience document displayed on a public screen of the live broadcast room and voice replying to the audience document; performing semantic analysis on the voice message, and generating an anchor document based on the voice replying to the audience document; and notifying a server to bind the audience document with the anchor document;
and receiving and displaying, by the audience client, the anchor document sent by the server, and responding to a binding trigger event of a user on the anchor document or the audience document according to the binding relation between the audience document and the anchor document.
12. The method of claim 11, wherein the step of performing semantic analysis on the voice message and generating the anchor document based on the voice replying to the audience document comprises:
determining, by the anchor client in the process of performing semantic analysis on the voice message, a matching degree between the semantic content corresponding to the voice message and the audience document; if the matching degree exceeds a preset threshold, determining the voice in the voice message other than the voice corresponding to the audience document as the voice replying to the audience document; and generating the anchor document based on the voice replying to the audience document.
13. The method of claim 12, wherein determining the voice in the voice message other than the voice corresponding to the audience document as the voice replying to the audience document comprises:
determining, by the anchor client, the voice in the voice message within a preset time length after the voice corresponding to the audience document as the voice replying to the audience document.
14. A method for displaying a public screen document in a live broadcast room, applied to clients comprising an anchor client and an audience client, characterized by comprising the following steps:
receiving, by the anchor client, a voice message input by an anchor, wherein the voice message comprises voice corresponding to at least part of an audience document displayed on a public screen of the live broadcast room and voice replying to the audience document; and sending the voice message to a server, so that the server performs semantic analysis on the voice message, generates an anchor document based on the voice replying to the audience document, and binds the audience document with the anchor document;
and receiving and displaying, by the audience client, the anchor document sent by the server, and responding to a binding trigger event of a user on the anchor document or the audience document according to the binding relation between the audience document and the anchor document.
15. The method of claim 14, wherein the step of performing semantic analysis on the voice message and generating the anchor document based on the voice replying to the audience document comprises:
determining, by the server in the process of performing semantic analysis on the voice message, a matching degree between the semantic content corresponding to the voice message and the audience document; if the matching degree exceeds a preset threshold, determining the voice in the voice message other than the voice corresponding to the audience document as the voice replying to the audience document; and generating the anchor document based on the voice replying to the audience document.
16. The method of claim 15, wherein determining the voice in the voice message other than the voice corresponding to the audience document as the voice replying to the audience document comprises:
determining, by the server, the voice in the voice message within a preset time length after the voice corresponding to the audience document as the voice replying to the audience document.
17. An electronic device comprising a memory for storing executable instructions and a processor;
wherein the processor, when executing the executable instructions, performs the steps of the method of any one of claims 1 to 16.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 16.
CN202111673092.6A 2021-12-31 2021-12-31 Live broadcast room public screen text display method, electronic equipment and storage medium Active CN114363650B (en)

Publications (2)

Publication Number Publication Date
CN114363650A (en) 2022-04-15
CN114363650B (en) 2024-02-06

