CN106612465A - Live interaction method and device - Google Patents

Live interaction method and device

Info

Publication number
CN106612465A
Authority
CN
China
Prior art keywords
speech
default
speech message
message
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611198395.6A
Other languages
Chinese (zh)
Other versions
CN106612465B (en)
Inventor
蔡毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201911127043.5A (CN110708607B)
Priority to CN201611198395.6A (CN106612465B)
Publication of CN106612465A
Application granted
Publication of CN106612465B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides a live interaction method and device. The method comprises: determining whether a user initiates a speech request during a live broadcast; displaying a plurality of preset speech messages in the live interface when it is determined that the user has initiated a speech request; and performing live interaction processing on a preset speech message when it is detected that the preset speech message has been triggered. With the method provided by the embodiments of the application, intelligent recommendation of speech messages lets the user speak quickly, reduces the user's text-editing operations and input time during live interaction, and makes live interaction processing more intelligent.

Description

Live interaction method and device
Technical field
The application relates to the field of Internet technology, and in particular to a live interaction method and device.
Background art
Network live-streaming technology is an Internet technology in which a server broadcasts an anchor user's live video data to multiple viewer users for viewing. In the related art, live-streaming service providers offer functions for interaction between viewer users and the anchor user. For example, a comment function is usually provided in the live interface: an input control is displayed, the viewer user can edit text through the input control during the broadcast to enter a comment message, the client sends the comment message to the live-streaming server, and the server broadcasts the comment message to each client, so that viewers can interact with the anchor. This interaction mode has a low level of intelligence and requires many user operations, which to some extent reduces viewer users' enthusiasm for participating in interaction.
Summary of the invention
To overcome the problems in the related art, this application provides a live interaction method and device.
According to a first aspect of the embodiments of the present application, there is provided a live interaction method, the method comprising:
during a live broadcast, determining whether a user initiates a speech request;
when it is determined that the user initiates a speech request, obtaining several preset speech messages and displaying the preset speech messages in the live interface;
when it is detected that a preset speech message is triggered, performing live interaction processing on the preset speech message.
According to a second aspect of the embodiments of the present application, there is provided a live interaction device, comprising:
a speech request determining module, configured to: during a live broadcast, determine whether a user initiates a speech request;
a speech message display module, configured to: when it is determined that the user initiates a speech request, obtain several preset speech messages and display the preset speech messages in the live interface;
an interaction processing module, configured to: when it is detected that a preset speech message is triggered, perform live interaction processing on the preset speech message.
The technical solutions provided by the embodiments of this application may include the following beneficial effects:
In this application, during a live broadcast, when it is determined that the user initiates a speech request, the client can display several preset speech messages in the live interface for the user to choose from. The user can trigger a desired preset speech message, which is sent to the live-streaming server as a live interaction message for the server to perform live interaction processing. By intelligently recommending speech messages, the embodiments of the application allow the user to speak quickly, reduce the user's text-editing operations and input time during live interaction, and make live interaction processing more intelligent.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the application.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application.
Fig. 1A is a schematic diagram of an application scenario of live interaction according to an exemplary embodiment of the application.
Fig. 1B is a schematic diagram of a live interface in the related art.
Fig. 2A is a flowchart of a live interaction method according to an exemplary embodiment of the application.
Fig. 2B is a schematic diagram of a live interface according to an exemplary embodiment of the application.
Fig. 2C is a schematic diagram of three standard expression anchor points according to an exemplary embodiment of the application.
Fig. 3 is a block diagram of a live interaction device according to an exemplary embodiment of the application.
Detailed description of the embodiments
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. In the following description, when the drawings are referred to, unless otherwise indicated, the same reference numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "said" and "the" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
The solutions of the embodiments of the application can be applied to any scenario involving live interaction, such as network live streaming. Fig. 1A is a schematic diagram of an application scenario of live interaction according to an exemplary embodiment of the application. Fig. 1A includes a server serving as the server-side device, and a smartphone, a tablet computer, and a personal computer serving as client devices. The client device may also be a smart device such as a PDA (Personal Digital Assistant), a multimedia device, or a wearable device.
The server in Fig. 1A provides a live-streaming service to each client. A user may install a live-streaming client on a smart device and obtain the service through that client, or install a browser client and obtain the service by accessing the live page provided by the server through the browser. Generally, two types of users are involved in a live broadcast: one is the anchor user and the other is the viewer user. The client provides an anchor broadcasting function and a live viewing function: the anchor user can use the broadcasting function provided by the client to stream video, and the viewer user can use the viewing function provided by the client to watch the anchor user's live content.
In the related art, live-streaming service providers offer many functions for interaction between viewer users and the anchor user, such as a speech (comment) function or a gifting function. For the speech function, Fig. 1B is a schematic diagram of a live interface in the related art. An input control is usually displayed in the live interface; the viewer user can edit text through the input control, the client sends the edited text to the live-streaming server as an interaction message, and the server broadcasts the interaction message to each client, so that the viewer's speech can be displayed in a personalized way on the screen of each client, realizing interaction between viewers and the anchor. This interaction mode has a low level of intelligence and requires the user to perform many editing operations, which to some extent reduces viewer users' enthusiasm for participating in interaction.
In contrast, with the solution provided by the embodiments of the application, during a live broadcast, when it is determined that the user initiates a speech request, several preset speech messages can be displayed in the live interface for the user to choose from. The user can trigger the desired preset speech message, which is sent to the live-streaming server as a live interaction message for the server to perform live interaction processing. By intelligently recommending speech messages, the embodiments of the application allow the user to speak quickly, reduce the user's text-editing operations and input time during live interaction, and make live interaction processing more intelligent. The embodiments of the application are described in detail below.
As shown in Fig. 2A, Fig. 2A is a flowchart of a live interaction method according to an exemplary embodiment of the application, comprising the following steps 201 to 203:
In step 201, during a live broadcast, determine whether a user initiates a speech request.
In step 202, when it is determined that the user initiates a speech request, obtain several preset speech messages and display the preset speech messages in the live interface.
In step 203, when it is detected that a preset speech message is triggered, perform live interaction processing on the preset speech message.
The method of the embodiments of the application can be applied to a client device. During a live broadcast, the client can determine whether the user wants to speak; if the user initiates a speech request, the client can respond to that need and display several preset speech messages, so that the user can quickly and conveniently select the desired speech message, realizing live interaction more conveniently.
There are several ways in practice to determine whether the user wants to speak, that is, whether the user initiates a speech request. For example, the client may detect that the user has pressed certain buttons of the device, detect that the user has input a preset sliding trajectory, detect that the user has made a preset gesture, or detect that the user has triggered a set control or a set region of the screen in a preset triggering manner. The client may provide one or more of these ways so that the user can conveniently initiate a speech request.
To better guide the user to speak and make it easier to initiate a speech request, determining whether the user initiates a speech request may include:
displaying a speech trigger object in the live interface, and determining whether the user initiates a speech request by detecting whether the speech trigger object is triggered.
The speech trigger object is used to detect the user's need to speak. Because the trigger object is displayed in the live interface, the user can trigger it easily and thus conveniently initiate a speech request. In practice, the speech trigger object may be an icon, an option, or a button displayed in the live interface. As shown in Fig. 2B, which is a schematic diagram of a live interface according to an exemplary embodiment of the application, the speech trigger object in Fig. 2B is an icon, and the user can trigger it by clicking, pressing, or similar operations.
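As a simple illustration of this step, the sketch below shows how a client might decide that a speech request has been initiated from several trigger sources. It is a minimal sketch only; the event field names, `PRESET_SWIPE`, and `is_speech_request` are assumptions for illustration and are not part of the patent.

```python
# Minimal sketch (not from the patent): deciding whether a user event
# counts as a speech request. Event names are illustrative only.
PRESET_SWIPE = ["down", "right"]  # a hypothetical preset sliding trajectory


def is_speech_request(event: dict) -> bool:
    """Return True when the event should be treated as a speech request."""
    if event.get("type") == "tap" and event.get("target") == "speech_trigger_icon":
        return True  # the speech trigger object shown in the live interface
    if event.get("type") == "hardware_button" and event.get("button") == "volume_up_long_press":
        return True  # a device button preset as the trigger
    if event.get("type") == "swipe" and event.get("path") == PRESET_SWIPE:
        return True  # a preset sliding trajectory
    return False
```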
The client of this embodiment can intelligently recommend suitable preset speech messages in the live interface. The specific number and content can be preconfigured. The preset speech messages may be common greetings such as "hello", phrases tailored to the live scenario such as "the anchor looks great" or "the anchor welcomes you", or frequently used speech messages determined by collecting and analyzing the speech messages previously sent by viewer users. They can be flexibly configured as needed in practice, and this embodiment does not limit this. As an example, Fig. 2B shows six preset speech messages.
In a live scenario, different live content may be involved over the course of a broadcast, and the speech messages a user wishes to send differ with the content. For example, when the broadcast has just started, users usually want to send greetings such as "hello anchor" or "welcome anchor"; if the anchor sings during the broadcast, users may want to send related messages such as "the singing sounds great"; if the anchor is angry, users may want to send messages such as "calm down, anchor"; or, if the anchor says "I love you", users may want to send related messages such as "anchor, I love you too".
In this embodiment, the preset speech messages can be obtained according to live data; that is, the preset speech messages displayed by the client can change with the live data during the broadcast. So that the preset speech messages displayed when the user needs to speak match the live content at that stage, enabling the client to accurately recommend suitable speech messages, making it easier for the user to speak, and improving the intelligence of live interaction processing, the application provides the following implementation:
The preset speech messages are obtained from a preset speech database in which a plurality of speech samples are stored.
Based on one or more of the following kinds of information, speech samples are selected from the speech database as the preset speech messages:
the voice information of the anchor user, the speech messages sent by other viewer users, and the facial expression information of the anchor user.
In this embodiment, the speech database can be preconfigured, with a plurality of speech samples stored in it. When selecting suitable preset speech messages from the speech samples, one or more factors may be referred to: the voice information of the anchor user, the speech messages sent by other viewer users, and the facial expression information of the anchor user. The specific selection process may determine keywords from the above information and use a search algorithm to find, in the database, the speech messages matching the keywords. The specific search method can refer to the related art and is not repeated in this embodiment.
The voice information of the anchor user represents what the anchor is saying. Selecting preset speech messages with voice information as a reference factor better fits the live scenario and yields suitable speech messages. Specifically, when it is detected that the user initiates a speech request, a voice file within a preset time period, for example 3 seconds, 5 seconds, or 20 seconds, may be obtained as the voice information. Then, when obtaining keywords, one way is to compare the voice information with preset standard voices and obtain the keywords from the standard voice that matches the voice information, the standard voices being labeled with corresponding keywords in advance; in other optional implementations, the voice information may be converted into text information, keywords may be identified from the text information, and the identified keywords may be used to find suitable preset speech messages in the database.
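A minimal sketch of the second option above (converting the voice clip to text and matching keywords against the database) might look like the following. The `speech_to_text` function is a placeholder for any speech-recognition component, and the vocabulary-intersection keyword rule is an assumption, not the patent's prescribed method.

```python
# Sketch only: keyword lookup driven by the anchor's recent speech.
# `speech_to_text` is a placeholder for a real ASR component.
def speech_to_text(voice_clip: bytes) -> str:
    raise NotImplementedError("plug in a speech-recognition engine here")


def keywords_from_voice(voice_clip: bytes, vocabulary: set[str]) -> list[str]:
    """Extract keywords by intersecting the transcript with a known keyword vocabulary."""
    transcript = speech_to_text(voice_clip)
    return [word for word in transcript.lower().split() if word in vocabulary]


def search_samples(keywords: list[str], samples: list[str]) -> list[str]:
    """Return speech samples that contain at least one of the keywords."""
    return [sample for sample in samples if any(k in sample for k in keywords)]
```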
During the broadcast, the client can receive interaction information sent by other client users and broadcast by the server, and display it on the screen. Because the interaction information from other viewers received by the client is all actively sent by viewer users, selecting preset speech messages with this interaction information as a reference factor better matches the current live content, so that suitable speech messages can be accurately recommended to the user. Specifically, when it is detected that the user initiates a speech request, the interaction information sent by other users within a preset time may be collected, keywords may be extracted from the interaction information, and the identified keywords may be used to find suitable preset speech messages in the database.
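The same keyword interface can be fed from the collected interaction messages. The frequency-based extraction below is one possible approach under stated assumptions (whitespace tokenization, an illustrative stop-word list), not the method fixed by the patent.

```python
# Sketch: pick the most frequent words in recent viewer messages as keywords.
from collections import Counter

STOP_WORDS = {"the", "a", "is", "to"}  # illustrative stop-word list


def keywords_from_messages(recent_messages: list[str], top_k: int = 3) -> list[str]:
    """Count word frequencies over the collected messages and keep the top_k words."""
    counts = Counter(
        word
        for message in recent_messages
        for word in message.lower().split()
        if word not in STOP_WORDS
    )
    return [word for word, _count in counts.most_common(top_k)]
```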
During the broadcast, the facial expression information of the anchor user reflects the anchor's emotional changes, and related preset speech messages can be recommended to the user based on it. Therefore, this embodiment can obtain the video image of the anchor client during the broadcast, identify the anchor's facial expression information from the video image, determine corresponding keywords for the facial expression information, and use the identified keywords to find suitable preset speech messages in the database. Specifically, when the user initiates a speech request, the current frame image can be obtained from the live video stream; face detection is performed on the frame image using image recognition technology, facial key points are calibrated with a face calibration algorithm, the matching standard expression anchor points are found from the calibrated facial key points, and the facial expression information of the frame image is determined from the matched standard expression anchor points. As shown in Fig. 2C, which is a schematic diagram of three standard expression anchor points according to an exemplary embodiment of the application. The specific processing can refer to the related art and is not repeated in this embodiment.
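The facial-expression branch can be reduced to the same keyword interface. In the sketch below, `detect_expression` stands in for the face detection, key-point calibration, and expression-matching pipeline described above, and the expression-to-keyword table is purely illustrative.

```python
# Sketch: map a recognized anchor expression to search keywords.
# `detect_expression` is a placeholder for the face/expression recognition pipeline.
EXPRESSION_KEYWORDS = {  # illustrative mapping, not from the patent
    "happy": ["great", "love"],
    "angry": ["calm down"],
    "sad": ["cheer up"],
}


def detect_expression(frame: bytes) -> str:
    raise NotImplementedError("plug in a face/expression recognizer here")


def keywords_from_expression(frame: bytes) -> list[str]:
    """Return the keywords associated with the expression detected in the frame."""
    return EXPRESSION_KEYWORDS.get(detect_expression(frame), [])
```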
In the above three processing modes, suitable speech samples are selected from the speech database as preset speech messages based on the three kinds of information. In practice, speech samples can be preconfigured separately according to the characteristics of each kind of information and stored in the speech database. During selection, all speech samples stored in the database can be searched, or, for each kind of information, the search can be limited to the corresponding speech samples.
When the speech samples preconfigured for each kind of information are very numerous, in order to improve selection efficiency, in an optional implementation the speech database may be provided with corresponding sub-databases for the voice information, the speech messages, and the facial expression information; when selecting speech samples from the speech database as the preset speech messages based on the voice information, the speech messages, or the facial expression information, the selection is performed in the corresponding sub-database.
In this embodiment, the speech database is provided with a corresponding sub-database for each of the above three kinds of information, so the keywords corresponding to each kind of information are searched only in the corresponding sub-database, which improves search efficiency and the performance of live interaction processing.
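One way to organize the sub-databases is simply to partition the speech samples by information type, so that each keyword search only touches the relevant partition. The layout and sample phrases below are assumptions for illustration.

```python
# Sketch: a speech database partitioned into sub-databases per information type.
speech_database = {
    "voice": ["the singing sounds great", "anchor, I love you too"],
    "messages": ["hello anchor", "welcome anchor"],
    "expression": ["calm down, anchor", "cheer up, anchor"],
}


def select_presets(info_type: str, keywords: list[str]) -> list[str]:
    """Search only the sub-database that corresponds to the information type."""
    samples = speech_database.get(info_type, [])
    return [sample for sample in samples if any(k in sample for k in keywords)]
```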
Based on the above reference factors, the client can obtain multiple preset speech messages. When the preset speech messages are displayed in the live interface, the display manner can be flexibly configured; for example, the preset speech messages may be displayed in a list in random order, or sorted by message length, and so on.
In order to let the user select a preset speech message more quickly, when several preset speech messages are displayed in the live interface, they can be sorted and displayed according to priority, a preset speech message with a higher priority being ranked higher. The priority, from high to low, is:
preset speech messages determined from the voice information, preset speech messages determined from the speech messages, and preset speech messages determined from the anchor's facial expression information.
In this embodiment, it is considered that the voice information represents what the anchor is saying, so preset speech messages determined from the voice information fit the live content better than those determined from other users' speech messages, while the fit of preset speech messages determined from the anchor's facial expression information is relatively low. Sorting and displaying the preset speech messages according to the above priority therefore achieves a better recommendation effect.
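A straightforward way to realize this ordering is to tag each candidate with the information source it came from and sort on a fixed source rank, as in the minimal sketch below; the tuple representation and rank values are assumptions.

```python
# Sketch: rank candidates by the information source that produced them.
SOURCE_PRIORITY = {"voice": 0, "messages": 1, "expression": 2}  # lower rank = shown earlier


def sort_by_source(candidates: list[tuple[str, str]]) -> list[str]:
    """candidates: (message_text, source) pairs; returns the texts in display order."""
    ordered = sorted(candidates, key=lambda c: SOURCE_PRIORITY[c[1]])
    return [text for text, _source in ordered]
```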
After the preset speech messages are displayed in the live interface, if the interface recommends a preset speech message the user wishes to send, the user can choose it by clicking, long-pressing, or another triggering manner. When the client detects that a certain preset speech message is triggered, it can perform live interaction processing on that preset speech message. Live interaction processing can take many forms, for example the client displaying the preset speech message in the local live interface, or the client sending the preset speech message to the live-streaming server to be displayed on the public screen (that is, the server broadcasts the preset speech message to other clients, and each client displays it in its live interface). The specific live interaction processing of the preset speech message can be flexibly configured as needed.
It can be understood that if the user triggers a preset speech message recommended by the client, the preset speech message has achieved the goal of accurate recommendation; and if many other users also send the same speech message, the preset speech message has more recommendation value and is more likely to be chosen by users. For these situations, this embodiment provides the following examples, which can continuously improve the recommendation precision of preset speech messages during live interaction.
In this embodiment, each speech sample is configured with a corresponding score.
When several preset speech messages are displayed in the live interface, they are sorted and displayed according to the scores of their corresponding speech samples.
The score is determined by the number of times the speech sample has been triggered by users after being displayed as a preset speech message; the more times, the higher the score.
For example, each speech sample can be preconfigured with the same score. If a speech sample is chosen by the user after being displayed in the live interface as a preset speech message, its score can be raised, so that the next time the speech sample is displayed as a preset speech message, it is ranked higher.
As an example, assume that the default score of every speech sample is 50 points. After a sample is displayed in the live interface and chosen by the user, its score increases by 1 point, and higher-scoring phrases are ranked higher the next time the list is sorted. An upper limit can also be set on the score, for example 100 points; when a speech sample reaches 100 points, its recommendation value is already very high, and subsequent score processing can be skipped, reducing the processing load on the client.
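A minimal sketch of this bookkeeping, under the assumptions of the example above (default score 50, +1 per selection, capped at 100):

```python
# Sketch: score bookkeeping for speech samples (default 50, +1 per selection, cap 100).
DEFAULT_SCORE = 50
SCORE_CAP = 100

scores: dict[str, int] = {}  # speech sample text -> current score


def record_selection(sample: str) -> int:
    """Called when a displayed preset speech message is triggered by the user."""
    current = scores.get(sample, DEFAULT_SCORE)
    if current < SCORE_CAP:          # skip further scoring once the cap is reached
        current += 1
    scores[sample] = current
    return current
```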
As can be seen from the above embodiments, two kinds of reference factors are involved when sorting and displaying the preset speech messages:
one is the priority order of the following three kinds of information: preset speech messages determined from the voice information, preset speech messages determined from the speech messages, and preset speech messages determined from the anchor's facial expression information;
the other is the score of the speech sample.
In practice, either factor can be used alone, or the two can be combined. When combined, the preset speech messages can first be ordered according to the priority of the three kinds of information, and then the multiple preset speech messages under each kind of information can be ordered by score.
For example, suppose there are six preset speech messages: two determined from the voice information, two determined from the speech messages, and two determined from the anchor's facial expression information. They can be ordered according to the priority of the three kinds of information, and the two preset speech messages within each kind of information can then be ordered by score, as sketched below.
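Combining the two factors amounts to sorting first by source priority and then by score within each source. The sketch and the six example messages with their scores are illustrative assumptions only.

```python
# Sketch: order candidates by source priority first, then by score (descending).
SOURCE_PRIORITY = {"voice": 0, "messages": 1, "expression": 2}


def display_order(candidates: list[tuple[str, str, int]]) -> list[str]:
    """candidates: (message_text, source, score) triples; returns texts in display order."""
    ordered = sorted(candidates, key=lambda c: (SOURCE_PRIORITY[c[1]], -c[2]))
    return [text for text, _source, _score in ordered]


# Example with six preset speech messages, two per information source:
example = [
    ("the singing sounds great", "voice", 52), ("anchor, I love you too", "voice", 60),
    ("hello anchor", "messages", 58), ("welcome anchor", "messages", 51),
    ("calm down, anchor", "expression", 55), ("cheer up, anchor", "expression", 50),
]
# display_order(example) lists the two voice-derived messages first, higher score first,
# then the two message-derived ones, then the two expression-derived ones.
```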
A great many speech messages sent by users can be received during a broadcast. In this embodiment, these speech messages can be used to update the speech database automatically, enriching the speech samples in the database. In an optional implementation, the speech database updates its speech samples in the following way:
collecting the speech messages of other clients sent by the server;
if a speech message is not stored in the speech database and its number of repetitions within a preset time period exceeds a preset count threshold, adding the speech message to the speech database as a speech sample, where the score of the speech sample is determined by the number of repetitions: the more repetitions, the higher the score.
In this embodiment, the client may perform the above update of the speech database periodically according to a predetermined cycle, update it in real time during the broadcast, update it when initiated by the user, update it under an update instruction from the server, or in various other ways.
When the database needs to be updated, the speech messages of other clients can be collected and analyzed. If many users have all sent the same speech message, that speech message has recommendation value. If the speech message is not stored in the speech database, it can be inserted into the database, and its score can be set according to the number of users who sent it: the more users sent it, the higher the score.
As an example, assume that the default score of a speech sample is 50 points and that the score increases by 1 point for each repetition. If, within the collection cycle, 10 people entered "hello" in the current 3-second period, the message needs to be inserted into the database as a speech sample, and the score of "hello" can be set to 50 + 10 = 60 points.
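The update rule in this example (insert a message that is not yet in the database once its repetitions in the collection window exceed a threshold, scoring it 50 plus the repetition count) can be sketched as follows; the threshold value is an illustrative assumption.

```python
# Sketch: updating the speech database from messages collected in one window.
from collections import Counter

DEFAULT_SCORE = 50
REPEAT_THRESHOLD = 5  # illustrative preset count threshold


def update_speech_database(collected: list[str], database: dict[str, int]) -> None:
    """database maps speech sample text -> score; it is mutated in place."""
    for message, repetitions in Counter(collected).items():
        if message not in database and repetitions > REPEAT_THRESHOLD:
            database[message] = DEFAULT_SCORE + repetitions


# e.g. ten viewers typing "hello" within the window inserts it with score 50 + 10 = 60.
```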
In practice, the client's speech database can be uploaded to the live-streaming server according to a set cycle and, after synchronization by the server, propagated to other clients, so that the clients can share the speech database and complete the update of new speech samples.
Corresponding to the foregoing embodiments of the live interaction method, the application also provides embodiments of a live interaction device.
As shown in Fig. 3, Fig. 3 is a block diagram of a live interaction device according to an exemplary embodiment of the application. The device includes:
a speech request determining module 31, configured to: during a live broadcast, determine whether a user initiates a speech request;
a speech message display module 32, configured to: when it is determined that the user initiates a speech request, obtain several preset speech messages and display the preset speech messages in the live interface;
an interaction processing module 33, configured to: when it is detected that a preset speech message is triggered, perform live interaction processing on the preset speech message.
In an optional implementation, the speech request determining module 31 is further configured to:
display a speech trigger object in the live interface, and determine whether the user initiates a speech request by detecting whether the speech trigger object is triggered.
In an optional implementation, the preset speech messages are obtained from a preset speech database in which a plurality of speech samples are stored;
based on one or more of the following kinds of information, speech samples are selected from the speech database as the preset speech messages:
the voice information of the anchor user, the speech messages sent by other viewer users, and the facial expression information of the anchor user.
In an optional implementation, the speech database is provided with corresponding sub-databases for the voice information, the speech messages, and the facial expression information.
The speech message display module 32 is further configured to: when selecting speech samples from the speech database as the preset speech messages based on the voice information, the speech messages, or the facial expression information, perform the selection in the corresponding sub-database.
In an optional implementation, the speech message display module 32 is further configured to: when the preset speech messages are obtained from the preset speech database, determine keywords in one or more of the following ways, and use the keywords to search the speech database for speech samples matching the keywords:
comparing the voice information with preset standard voices and obtaining the keywords from the standard voice that matches the voice information, the standard voices being labeled with corresponding keywords in advance;
converting the voice information into text information and identifying the keywords from the text information;
extracting keywords from the speech messages;
obtaining the video image of the anchor client during the broadcast, identifying the anchor's facial expression information from the video image, and determining corresponding keywords for the anchor's facial expression information.
In an optional implementation, the speech message display module is further configured to:
when several preset speech messages are displayed in the live interface, sort and display the preset speech messages according to priority, a preset speech message with a higher priority being ranked higher; the priority, from high to low, is:
preset speech messages determined from the voice information, preset speech messages determined from the speech messages, and preset speech messages determined from the anchor's facial expression information.
In an optional implementation, each speech sample is configured with a corresponding score, the score being determined by the number of times the speech sample has been triggered by users after being displayed as a preset speech message; the more times, the higher the score.
The speech message display module is further configured to:
when several preset speech messages are displayed in the live interface, sort and display the preset speech messages according to the scores of their corresponding speech samples.
In an optional implementation, the speech database updates its speech samples in the following way:
collecting the speech messages of other clients sent by the server;
if a speech message is not stored in the speech database and its number of repetitions within a preset time period exceeds a preset count threshold, adding the speech message to the speech database as a speech sample, where the score corresponding to the speech sample is determined by the number of repetitions: the more repetitions, the higher the score.
For the implementation of the functions and effects of each module in the above device, refer to the implementation of the corresponding steps in the above method, which is not repeated here.
Since the device embodiments substantially correspond to the method embodiments, the relevant parts may refer to the description of the method embodiments. The device embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected as needed to achieve the purpose of the solution of the application, which those of ordinary skill in the art can understand and implement without creative effort.
Other embodiments of the application will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. The application is intended to cover any variations, uses, or adaptations of the application that follow its general principles and include common knowledge or conventional techniques in the art not disclosed by the application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It should be understood that the application is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the application is limited only by the appended claims.
The above are only preferred embodiments of the application and are not intended to limit the application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the application shall be included within the scope of protection of the application.

Claims (12)

1. A live interaction method, characterized in that the method comprises:
during a live broadcast, determining whether a user initiates a speech request;
when it is determined that the user initiates a speech request, obtaining several preset speech messages and displaying the preset speech messages in the live interface;
when it is detected that a preset speech message is triggered, performing live interaction processing on the preset speech message.
2. The method according to claim 1, characterized in that determining whether the user initiates a speech request comprises:
displaying a speech trigger object in the live interface, and determining whether the user initiates a speech request by detecting whether the speech trigger object is triggered.
3. The method according to claim 1, characterized in that the preset speech messages are obtained from a preset speech database in which a plurality of speech samples are stored;
based on one or more of the following kinds of information, speech samples are selected from the speech database as the preset speech messages:
the voice information of the anchor user, the speech messages sent by other viewer users, and the facial expression information of the anchor user.
4. The method according to claim 3, characterized in that the speech database is provided with corresponding sub-databases for the voice information, the speech messages, and the facial expression information;
when selecting speech samples from the speech database as the preset speech messages based on the voice information, the speech messages, or the facial expression information, the selection is performed in the corresponding sub-database.
5. The method according to claim 3, characterized in that when the preset speech messages are obtained from the preset speech database, keywords are determined in one or more of the following ways, and the keywords are used to search the speech database for speech samples matching the keywords:
comparing the voice information with preset standard voices and obtaining the keywords from the standard voice that matches the voice information, the standard voices being labeled with corresponding keywords in advance;
converting the voice information into text information and identifying the keywords from the text information;
extracting keywords from the speech messages;
obtaining the video image of the anchor client during the broadcast, identifying the anchor's facial expression information from the video image, and determining corresponding keywords for the anchor's facial expression information.
6. The method according to claim 3, characterized in that when several preset speech messages are displayed in the live interface, the preset speech messages are sorted and displayed according to priority, a preset speech message with a higher priority being ranked higher; the priority, from high to low, is:
preset speech messages determined from the voice information, preset speech messages determined from the speech messages, and preset speech messages determined from the anchor's facial expression information.
7. The method according to claim 1 or 6, characterized in that each speech sample is configured with a corresponding score;
when several preset speech messages are displayed in the live interface, the preset speech messages are sorted and displayed according to the scores of their corresponding speech samples;
the score is determined by the number of times the speech sample has been triggered by users after being displayed as a preset speech message, the more times, the higher the score.
8. The method according to claim 7, characterized in that the speech database updates its speech samples in the following way:
collecting the speech messages of other clients sent by the server;
if a speech message is not stored in the speech database and its number of repetitions within a preset time period exceeds a preset count threshold, adding the speech message to the speech database as a speech sample, wherein the score corresponding to the speech sample is determined by the number of repetitions, the more repetitions, the higher the score.
9. A live interaction device, characterized in that the device comprises:
a speech request determining module, configured to: during a live broadcast, determine whether a user initiates a speech request;
a speech message display module, configured to: when it is determined that the user initiates a speech request, obtain several preset speech messages and display the preset speech messages in the live interface;
an interaction processing module, configured to: when it is detected that a preset speech message is triggered, perform live interaction processing on the preset speech message.
10. The device according to claim 9, characterized in that the preset speech messages are obtained from a preset speech database in which a plurality of speech samples are stored;
based on one or more of the following kinds of information, speech samples are selected from the speech database as the preset speech messages:
the voice information of the anchor user, the speech messages sent by other viewer users, and the facial expression information of the anchor user.
11. The device according to claim 10, characterized in that the speech message display module is further configured to:
when several preset speech messages are displayed in the live interface, sort and display the preset speech messages according to priority, a preset speech message with a higher priority being ranked higher; the priority, from high to low, is:
preset speech messages determined from the voice information, preset speech messages determined from the speech messages, and preset speech messages determined from the anchor's facial expression information.
12. The device according to claim 9 or 10, characterized in that each speech sample is configured with a corresponding score, the score being determined by the number of times the speech sample has been triggered by users after being displayed as a preset speech message, the more times, the higher the score;
the speech message display module is further configured to:
when several preset speech messages are displayed in the live interface, sort and display the preset speech messages according to the scores of their corresponding speech samples.
CN201611198395.6A 2016-12-22 2016-12-22 Live broadcast interaction method and device Active CN106612465B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911127043.5A CN110708607B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device, electronic equipment and storage medium
CN201611198395.6A CN106612465B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611198395.6A CN106612465B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201911127043.5A Division CN110708607B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN106612465A (en) 2017-05-03
CN106612465B CN106612465B (en) 2020-01-03

Family

ID=58636659

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911127043.5A Active CN110708607B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device, electronic equipment and storage medium
CN201611198395.6A Active CN106612465B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201911127043.5A Active CN110708607B (en) 2016-12-22 2016-12-22 Live broadcast interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (2) CN110708607B (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113589977A (en) * 2020-04-30 2021-11-02 腾讯科技(深圳)有限公司 Message display method and device, electronic equipment and storage medium
CN115271891B (en) * 2022-09-29 2022-12-30 深圳市人马互动科技有限公司 Product recommendation method based on interactive novel and related device
CN117319750A (en) * 2023-08-16 2023-12-29 浙江印象软件有限公司 Live broadcast information real-time display method and device


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7860928B1 (en) * 2007-03-22 2010-12-28 Google Inc. Voting in chat system without topic-specific rooms
US8332752B2 (en) * 2010-06-18 2012-12-11 Microsoft Corporation Techniques to dynamically modify themes based on messaging
CN104538027B (en) * 2014-12-12 2018-07-20 复旦大学 The mood of voice social media propagates quantization method and system
US20160174889A1 (en) * 2014-12-20 2016-06-23 Ziv Yekutieli Smartphone text analyses
CN105159988B (en) * 2015-08-28 2018-08-21 广东小天才科技有限公司 A kind of method and device of browsing photo
CN105228013B (en) * 2015-09-28 2018-09-07 百度在线网络技术(北京)有限公司 Barrage information processing method, device and barrage video player
CN106021599A (en) * 2016-06-08 2016-10-12 维沃移动通信有限公司 Emotion icon recommending method and mobile terminal
CN106250553A (en) * 2016-08-15 2016-12-21 珠海市魅族科技有限公司 A kind of service recommendation method and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140229962A1 (en) * 2005-12-09 2014-08-14 Michael Findlay Television Viewers Interaction and Voting Method
WO2016082692A1 (en) * 2014-11-24 2016-06-02 阿里巴巴集团控股有限公司 Information prompting method and device, and instant messaging system
WO2016094452A1 (en) * 2014-12-08 2016-06-16 Alibaba Group Holding Limited Method and system for providing conversation quick phrases
CN104881237A (en) * 2015-06-15 2015-09-02 广州华多网络科技有限公司 Internet interaction method and client
CN105808070A (en) * 2016-03-31 2016-07-27 广州酷狗计算机科技有限公司 Method and device for setting commenting showing effect

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109150554A (en) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 Multi-person speech method, storage medium, electronic equipment and the system of direct broadcasting room
CN109150554B (en) * 2017-06-28 2020-10-16 武汉斗鱼网络科技有限公司 Multi-user voice method, storage medium, electronic equipment and system for live broadcast room
CN109388701A (en) * 2018-08-17 2019-02-26 深圳壹账通智能科技有限公司 Minutes generation method, device, equipment and computer storage medium
CN109413495A (en) * 2018-09-06 2019-03-01 广州虎牙信息科技有限公司 A kind of login method, device, system, electronic equipment and storage medium
CN110460872A (en) * 2019-09-05 2019-11-15 腾讯科技(深圳)有限公司 Information display method, device, equipment and the storage medium of net cast
CN110460872B (en) * 2019-09-05 2022-03-04 腾讯科技(深圳)有限公司 Information display method, device and equipment for live video and storage medium
CN111263175A (en) * 2020-01-16 2020-06-09 网易(杭州)网络有限公司 Interaction control method and device for live broadcast platform, storage medium and electronic equipment
CN113873282A (en) * 2021-09-30 2021-12-31 广州方硅信息技术有限公司 Live broadcast room guidance speaking method, system, device, medium and computer equipment
CN113938697A (en) * 2021-10-13 2022-01-14 广州方硅信息技术有限公司 Virtual speech method and device in live broadcast room and computer equipment
CN113938697B (en) * 2021-10-13 2024-03-12 广州方硅信息技术有限公司 Virtual speaking method and device in live broadcasting room and computer equipment
CN117241055A (en) * 2023-10-31 2023-12-15 书行科技(北京)有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN117241055B (en) * 2023-10-31 2024-05-14 书行科技(北京)有限公司 Live broadcast interaction method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110708607A (en) 2020-01-17
CN106612465B (en) 2020-01-03
CN110708607B (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN106612465A (en) Live interaction method and device
US10795929B2 (en) Interactive music feedback system
KR101909807B1 (en) Method and apparatus for inputting information
US11474779B2 (en) Method and apparatus for processing information
CN107193944B (en) Theme pushing method, terminal, server and computer-readable storage medium
CN105933783B (en) A kind of playback method of barrage, device and terminal device
CN102984050A (en) Method, client and system for searching voices in instant messaging
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
US20070297643A1 (en) Information processing system, information processing method, and program product therefor
CN109474562B (en) Display method and device of identifier, and response method and device of request
US20240061560A1 (en) Audio sharing method and apparatus, device and medium
US10341727B2 (en) Information processing apparatus, information processing method, and information processing program
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN104883607A (en) Video screenshot or clipping method, video screenshot or clipping device and mobile device
CN114095749B (en) Recommendation and live interface display method, computer storage medium and program product
CN113746874B (en) Voice package recommendation method, device, equipment and storage medium
CN112653902A (en) Speaker recognition method and device and electronic equipment
CN110688496A (en) Method and device for processing multimedia file
CN112333463A (en) Program recommendation method, system, device and readable storage medium
CN113886612A (en) Multimedia browsing method, device, equipment and medium
CN102968493A (en) Method, client and system for executing voice search by input method tool
CN114938473A (en) Comment video generation method and device
CN113778301A (en) Emotion interaction method based on content service and electronic equipment
CN113676772A (en) Video generation method and device
CN113038185A (en) Bullet screen processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210111

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 24 floors, B-1 Building, Wanda Commercial Square North District, Wanbo Business District, 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.