CN108710632A - Speech playing method and device - Google Patents

Speech playing method and device

Info

Publication number
CN108710632A
CN108710632A (application CN201810289786.1A)
Authority
CN
China
Prior art keywords
voice
voice object
mark
playing
ranked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810289786.1A
Other languages
Chinese (zh)
Inventor
陈鹏礼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201810289786.1A
Publication of CN108710632A
Legal status: Pending

Abstract

This application provides a speech playing method and device. The identifiers of voice objects are displayed together in a collection and the voice objects are played in sequence, creating an atmosphere of clamour. This gives the user an intuitive and novel speech playback effect and improves the user experience. Moreover, compared with the existing approach in which the user must click each voice comment one by one to play it, the method has higher playback efficiency.

Description

Speech playing method and device
Technical field
This application relates to the field of electronic information, and in particular to a speech playing method and device.
Background
Currently, in order to improve interaction with users, the vast majority of applications (APPs) provide the function of receiving and displaying user comments. For example, video APPs allow users to input and display voice, text, or picture comments that evaluate a video. That is, when commenting on a video, a user can express his or her ideas by voice, text, or picture.
For a voice comment, the user needs to click the comment before it is played. This way of presenting voice is not intuitive or friendly enough, and the user experience is poor.
A voice comment is only one example of voice information; at present, playing voice in an APP always requires a click from the user. Therefore, how to improve the way voice is played, so as to improve the user experience, has become an urgent problem to be solved.
Summary of the invention
This application provides a speech playing method and device, with the aim of solving the problem of how to improve the way voice is played.
To achieve the above goal, this application provides the following technical solutions:
A speech playing method, comprising:
displaying the identifiers of voice objects in a collection, wherein the identifier of a voice object is information about the user who published the voice object;
playing the voice objects in sequence according to a play instruction for the identifiers displayed in the collection.
Optionally, playing the voice objects comprises:
determining the emotion type of a voice object according to a preset correspondence between voice content and emotion types;
playing the voice object using a playback effect corresponding to the emotion type of the voice object, the playback effect including at least one of the following: a dynamic effect, volume, intonation, and speech rate.
Optionally, the method further comprises:
displaying the identifier of the voice object currently being played differently from the identifiers of the other voice objects.
Optionally, displaying the identifiers of the voice objects in a collection comprises:
displaying the identifiers of the voice objects together in a preset region;
the method further comprises:
displaying a play interaction control in the preset region;
and playing the voice objects in sequence according to the play instruction for the identifiers displayed in the collection comprises:
playing the voice objects in sequence after receiving a trigger instruction for the play interaction control.
Optionally, displaying the identifiers of the voice objects in a collection comprises:
displaying the voice objects together according to a sorting result, the sorting result including a result obtained using a preset sorting method;
the preset sorting method includes at least one of the following:
sorting according to the order in which the voice objects were received and the relevance between the voice objects;
sorting according to the order in which the voice objects were received and the emotion types of the voice objects;
sorting according to an interactive sorting instruction.
A voice playing device, comprising:
a display module, configured to display the identifiers of voice objects in a collection, wherein the identifier of a voice object is information about the user who published the voice object;
a playing module, configured to play the voice objects in sequence according to a play instruction for the identifiers displayed in the collection.
Optionally, the playing module is specifically configured to:
determine the emotion type of a voice object according to a preset correspondence between voice content and emotion types, and play the voice object using a playback effect corresponding to the emotion type of the voice object, the playback effect including at least one of the following: a dynamic effect, volume, intonation, and speech rate.
Optionally, the display module is further configured to:
display the identifier of the voice object currently being played differently from the identifiers of the other voice objects.
Optionally, the display module is specifically configured to:
display the identifiers of the voice objects together in a preset region;
the display module is further configured to display a play interaction control in the preset region;
the playing module is specifically configured to play the voice objects in sequence after receiving a trigger instruction for the play interaction control.
Optionally, the display module is specifically configured to:
display the voice objects together according to a sorting result, the sorting result including a result obtained using a preset sorting method;
the device further comprises:
a sorting module, configured to sort the voice objects according to at least one of the following sorting methods: sorting according to the order in which the voice objects were received and the relevance between the voice objects; sorting according to the order in which the voice objects were received and the emotion types of the voice objects; sorting according to an interactive sorting instruction.
With the speech playing method and device described in this application, the identifiers of voice objects are displayed in a collection, and the voice objects are played in sequence according to a play instruction for the displayed identifiers, creating an atmosphere of clamour. This gives the user an intuitive and novel speech playback experience. Moreover, compared with the existing approach in which the user must click each voice comment one by one, it has higher playback efficiency.
Description of the drawings
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is an example of an application scenario of the speech playing method disclosed in an embodiment of this application;
Fig. 2 is a flowchart of the speech playing method disclosed in an embodiment of this application;
Fig. 3 is a schematic diagram of the concentrated display region of voice-comment identifiers in the speech playing method disclosed in an embodiment of this application;
Fig. 4 is a flowchart of another speech playing method disclosed in an embodiment of this application;
Fig. 5 is a structural schematic diagram of the voice playing device disclosed in an embodiment of this application.
Detailed description of the embodiments
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
Fig. 1 shows an example of an application scenario of the speech playing method disclosed in an embodiment of this application. An APP provides a video and receives user comments on that video; the received comments are displayed by the APP in the comment area below the video. In general, the comments are displayed in the comment area in the order in which they were received.
Taking text comments and voice comments (not shown in Fig. 1) as an example, the purpose of the speech playing method described in this embodiment is to play the voice comments together, creating a clamour effect and enhancing the intuitiveness and friendliness of voice-comment playback, thereby improving the user experience.
Fig. 2 shows the speech playing method disclosed in an embodiment of this application, which includes an offline training stage and an application stage, and specifically includes the following steps:
S201: Train a voice relevance model offline.
The voice relevance model outputs the relevance between voice segments, that is, the relevance between the voice contents contained in the voice segments.
Specifically, a preset voice training set can be used to train a neural network and obtain the voice relevance model. That is, multiple voice segments in the training set are fed to the neural network as samples, and the known relevance between these segments is used as the network's target output, so that the network parameters can be learned. More detailed training methods can be found in the prior art and are not repeated here.
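The disclosure leaves the network architecture and training procedure to the prior art. Purely as a sketch of one possibility, assuming precomputed fixed-length audio feature vectors and known pairwise relevance scores in [0, 1], a PyTorch implementation could look like the following (all names, dimensions, and the loss choice are assumptions, not part of the disclosure):

```python
# Illustrative only: a small pairwise relevance model over precomputed
# audio features. Architecture, feature dimension and loss are assumptions.
import torch
import torch.nn as nn

class RelevanceModel(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * 2, 64),  # concatenated features of two segments
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),                 # relevance score in [0, 1]
        )

    def forward(self, feat_a, feat_b):
        return self.net(torch.cat([feat_a, feat_b], dim=-1))

def train_relevance(model, pairs, labels, epochs=10, lr=1e-3):
    """pairs: list of (feat_a, feat_b) tensors; labels: known relevance
    scores as scalar tensors."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for (a, b), y in zip(pairs, labels):
            opt.zero_grad()
            loss = loss_fn(model(a, b).squeeze(), y)
            loss.backward()
            opt.step()
```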
S202: Train an emotion model offline.
The emotion model outputs the emotion type of a voice segment. Emotion types can be preset, for example excited, indifferent, or calm.
A preset voice training set can be used to train a neural network and obtain the emotion model, in a way similar to the training of the voice relevance model described above. More detailed training methods can be found in the prior art and are not repeated here.
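Likewise, a minimal classification sketch under the same assumptions (the emotion categories below are only the examples mentioned in the text) could be:

```python
# Illustrative only: an emotion classifier over the same assumed audio features.
import torch
import torch.nn as nn

EMOTIONS = ["excited", "indifferent", "calm"]   # example categories from the text

class EmotionModel(nn.Module):
    def __init__(self, feat_dim=128, n_classes=len(EMOTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),   # logits over emotion types
        )

    def forward(self, feat):
        return self.net(feat)

def predict_emotion(model, feat):
    with torch.no_grad():
        return EMOTIONS[model(feat).argmax(dim=-1).item()]
```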
S201 and S202 constitute the offline training stage, and the resulting models are used in the application stage. It should be noted that after one round of offline training, the trained models can be used in the application stage from then on, or the offline training can be executed periodically to update the models used in the application stage.
The application stage is described below.
S203: Obtain voice comments.
Voice comments can be obtained in the way existing APPs obtain them, for example by receiving voice comments input by users, which is not repeated here.
S204: Sort the voice comments that have been obtained.
Specifically, the sorting methods include the following:
1. Sorting by the order in which the voice comments were received and by relevance: taking chronological order as the basis, the voice relevance model is used to compute the relevance between voice comments, and the comments are sorted by relevance. For example, the earliest received voice comment is taken as the reference and placed first, the voice comment most relevant to the reference is placed second, and so on (that is, the remaining voice comments are ordered by their relevance to the reference). In other words, in the sorting result the reference comes first, and the relevance of the subsequent voice comments to the reference decreases.
2. Sorting by the order in which the voice comments were received and by emotion type: the emotion model is used to classify the voice comments into several emotion types, and these groups are then sorted on the basis of chronological order. Specifically, the voice comments are arranged according to a pre-specified order of emotion types, and voice comments of the same emotion type are sorted from the earliest received to the latest. For example, suppose several users published the following voices (the time in parentheses is when the APP received the voice comment):
A: So happy! (2018/1/1 9:00)
B: Sad (2018/1/1 10:00)
C: Happy (2018/1/2 9:10)
Suppose the emotion model divides the voice comments into a happy class and an unhappy class, and the pre-specified order of emotion types is happy first, then unhappy. The sorted result of the above users' voice comments is then A, C, B.
3. Sorting according to an interactive sorting instruction: a sorting interaction window is displayed, a manual sorting operation instruction (for example from APP operations personnel) is received in that window, and the voice comments are sorted according to the sorting operation instruction.
For example, APP operations personnel may move the voice comments of VIP users to the front and place the voice comments of non-VIP users behind them.
It should be noted that 1, 2, and 3 can be executed in sequence: 2 operates on the sorting result of 1, and 3 operates on the sorting result of 2. Alternatively, 3 may be omitted, or only any one or any two of 1, 2, and 3 may be executed. A code sketch of these three sorting methods is given below.
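As a purely illustrative sketch, the three sorting methods could be implemented as follows, assuming each comment is a dict with an id, a received_at timestamp, and an emotion label, and that relevance(a, b) wraps the voice relevance model from S201 (all of these are assumptions made for the example, not part of the disclosure):

```python
# Illustrative only: the three sorting methods of S204 on an assumed
# comment data structure.
from datetime import datetime

EMOTION_ORDER = ["happy", "unhappy"]          # pre-specified emotion order

def sort_by_relevance(comments, relevance):
    """Method 1: earliest comment is the reference; the rest follow by
    decreasing relevance to that reference."""
    ordered = sorted(comments, key=lambda c: c["received_at"])
    ref, rest = ordered[0], ordered[1:]
    rest.sort(key=lambda c: relevance(ref, c), reverse=True)
    return [ref] + rest

def sort_by_emotion(comments):
    """Method 2: group by emotion type in the pre-specified order,
    earliest-received first within each group."""
    return sorted(comments,
                  key=lambda c: (EMOTION_ORDER.index(c["emotion"]),
                                 c["received_at"]))

def apply_manual_order(comments, manual_rank):
    """Method 3: an operator-supplied ranking, e.g. VIP comments first."""
    return sorted(comments, key=manual_rank)

# Example with the comments A, B, C from the text:
comments = [
    {"id": "A", "emotion": "happy",   "received_at": datetime(2018, 1, 1, 9, 0)},
    {"id": "B", "emotion": "unhappy", "received_at": datetime(2018, 1, 1, 10, 0)},
    {"id": "C", "emotion": "happy",   "received_at": datetime(2018, 1, 2, 9, 10)},
]
print([c["id"] for c in sort_by_emotion(comments)])   # ['A', 'C', 'B']
```

Running the example reproduces the A, C, B order described above for emotion-based sorting.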
S205: Display the identifiers of the voice comments together according to the sorting result.
In general, an APP grants the permission to comment only to registered users. After registration, a user usually has a user name and an avatar. In this embodiment, the identifier of a voice comment is the avatar of the user who published it.
Of course, the identifier of a voice comment is not limited to the user's avatar; it can also be other information registered by the user in the APP, such as the user name.
Specifically, the identifiers of the voice comments can be displayed together in a preset region (which can be arranged around the existing comment area, for example above or below it). Fig. 3 shows the display effect of the identifiers of the voice comments.
S206: Play the voice comments in sequence according to the sorting result when a play instruction is received.
The advantage of playing the voice comments according to the sorting result is as follows: if they were sorted by relevance (method 1), the voice comments can be played in order of relevance (for example from high to low); if they were sorted by emotion (method 2), voice comments with the same emotion can be played together, for example the positive voice comments first and the critical ones afterwards. It can thus be seen that the purpose of sorting is to have the voice comments played according to relevance and/or emotion type.
Specifically, the play instruction can be a play instruction input by the user. For example, a virtual play button (an example of a play interaction control) is displayed in the display region of the voice-comment identifiers, as shown in Fig. 3. When the user clicks the play button (the click is the user's play instruction for the identifiers displayed in the collection, that is, for the voice comments corresponding to those identifiers), the APP receives the user's play instruction (an example of a trigger instruction for the play interaction control) and starts to play the voice comments according to the sorting result.
The play instruction input by the user can also take other forms, such as voice, which is not limited here.
Besides a play instruction input by the user, the play instruction can also be an instruction triggered by a preset trigger condition. For example, when the interface is scrolled to the display region of the voice-comment identifiers, or when that display region is located in the middle of the display interface, it is confirmed that a play instruction has been received.
Further, a voice comment can be played with a playback effect corresponding to its emotion type, to further render the atmosphere of clamour. A playback effect is the manner of playing, and may include a dynamic effect (such as flashing), volume, intonation, speech rate, and so on. A corresponding playback effect can be set in advance for each emotion type (for example, a happy comment is played with a cheerful intonation). Before playback, the aforementioned emotion model can be used to determine the emotion type of the voice comment, and the pre-set correspondence between emotion types and playback effects is then queried to obtain the playback effect corresponding to the voice comment to be played.
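As an illustration only, the emotion-to-effect correspondence can be a simple lookup table queried before playback; the effect parameters and the play() callback below are assumed names, since the disclosure only states that an effect (animation, volume, intonation, speech rate) is chosen per emotion type:

```python
# Illustrative only: map each emotion type to a preset playback effect
# and play the sorted comments in sequence with that effect.
PLAYBACK_EFFECTS = {
    "happy":   {"animation": "flash", "volume": 1.0, "rate": 1.1},
    "unhappy": {"animation": None,    "volume": 0.8, "rate": 0.9},
}

def play_in_order(sorted_comments, emotion_of, play):
    """emotion_of(comment) returns the emotion type; play() is the
    host APP's playback callback (assumed)."""
    for comment in sorted_comments:
        effect = PLAYBACK_EFFECTS.get(emotion_of(comment), {})
        play(comment, **effect)
```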
S207: Display the identifier of the voice comment currently being played differently from the identifiers of the other voice comments.
Displaying an identifier differently means that its display mode differs from that of the other identifiers. For example, the avatar of the user whose voice comment is being played is enlarged, while the avatars of users whose voice comments are not being played remain in their original (unenlarged) state. The purpose of this step is to further render the atmosphere of clamour.
It should be noted that the prior art can be used to determine the identifier of the voice comment currently being played. For example, each voice comment and each avatar has a unique ID; when controlling the concentrated display and playback of the voice comments, the ID of the voice comment is passed as a parameter, so the ID of the voice comment currently being played is known, and the ID of the avatar corresponding to that voice comment can then be determined. This is not repeated here.
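For illustration, assuming the player reports the ID of the comment it is currently playing and each avatar view exposes a scaling method (both assumptions, not an actual UI API), the highlighting of S207 could be sketched as:

```python
# Illustrative only: enlarge the avatar of the comment being played.
def highlight_playing(playing_comment_id, avatar_by_comment_id):
    for comment_id, avatar_view in avatar_by_comment_id.items():
        if comment_id == playing_comment_id:
            avatar_view.set_scale(1.5)   # enlarge the playing comment's avatar
        else:
            avatar_view.set_scale(1.0)   # others keep their original size
```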
It can be seen from the flow shown in Fig. 2 that after the voice comments are sorted, their identifiers are displayed together according to the sorting result and the voice comments are played in sequence, which creates an atmosphere of clamour; further, playing the voice comments with special effects and displaying their identifiers accordingly renders the noisy atmosphere even more. Therefore, the user can be given an intuitive and novel voice-comment playback effect, and the user experience is improved.
Moreover, compared with the existing approach in which the user must click each voice comment one by one, this has higher playback efficiency.
The method shown in Fig. 2 can be extended to other scenarios where voice may exist, such as a chat APP in which users input voice messages; the speech playing method described in this embodiment can create a noisy, lively playback atmosphere in the chat APP.
It can be seen that the method shown in Fig. 2 can be applied in any APP that supports voice functions, and can be summarized as the flow shown in Fig. 4, which includes the following steps:
S401: Display the identifiers of voice objects in a collection.
A voice object is information that exists in voice form, such as a voice comment or a voice message. The identifier of a voice object is information about the user who published it, such as the avatar of that user, as described above.
Concentration means gathering together what is dispersed; that is, concentrated is the opposite of dispersed. Concentrated display means displaying together the identifiers of voice comments that would otherwise be dispersed. In the prior art, for example, the identifiers of voice comments and text comments are displayed in the order in which they were received, i.e., ordered by time, so the two kinds of identifiers are interleaved (one item may be the identifier of a voice comment, the next several the identifiers of text comments). In this embodiment, by contrast, the identifiers of the voice comments are displayed together. As described above, the identifiers of the voice comments can be displayed in one region that does not contain the identifiers of other comments such as text comments.
The order of the voice objects displayed together can be arbitrary, or it can be the result of sorting with the sorting methods shown in Fig. 2.
S402: Play the voice objects in sequence according to a play instruction for the identifiers displayed in the collection.
The specific playback method can refer to the method shown in Fig. 2 and is not repeated here.
It should be noted that the flow shown in Fig. 4 can be made compatible with existing APPs and executed upon a trigger instruction. For example, in a video APP, a "clamour" button is displayed at the top of the comment area of a video. When the user clicks this button, the flow shown in Fig. 4 is executed: the avatars of the users who published voice comments on the video are displayed together, as shown in Fig. 3, and a play button is displayed in the concentrated display region. After the user clicks the play button, the voice comments are played in sequence; further, playback special effects can be added in the manner shown in Fig. 2. When the user clicks the "clamour" button again, the user is notified that the "clamour" function has been turned off, and the voice comments are again displayed in the prior-art manner.
Fig. 5 shows a voice playing device disclosed in an embodiment of this application, which includes a display module and a playing module, and optionally a sorting module.
The display module is configured to display the identifiers of voice objects in a collection. The playing module is configured to play the voice objects in sequence according to a play instruction for the identifiers displayed in the collection.
Specifically, the display module displays the identifiers of the voice objects in a preset region and is further configured to display a play interaction control in that region. The playing module plays the voice objects in sequence after receiving a trigger instruction for the play interaction control.
The sorting module is configured to sort the voice objects according to the aforementioned sorting methods and, before the display module displays the voice objects together according to the sorting result, to train the voice relevance model and the emotion model offline, to obtain the relevance between the voice objects using the voice relevance model, and to obtain the emotion types of the voice objects using the emotion model.
The specific implementation of the functions of the above modules can be found in the method embodiments above and is not repeated here.
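Purely as an illustration of the module boundaries described above (class and method names are assumptions, not part of the disclosure), the Fig. 5 device could be sketched as:

```python
# Illustrative only: the Fig. 5 modules sketched as plain Python classes.
class SortingModule:
    def __init__(self, relevance_model, emotion_model):
        self.relevance_model = relevance_model   # trained offline (S201)
        self.emotion_model = emotion_model       # trained offline (S202)

    def sort(self, voice_objects):
        # e.g. chain the time/relevance, emotion, and manual sorts of S204
        return sorted(voice_objects, key=lambda v: v["received_at"])

class DisplayModule:
    def show_identifiers(self, voice_objects, region):
        region.show([v["avatar"] for v in voice_objects])   # concentrated display
        region.show_play_control()                          # play interaction control

class PlayingModule:
    def on_play_triggered(self, voice_objects, player):
        for v in voice_objects:
            player.play(v)   # play in sequence
```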
The device shown in Fig. 5 can be integrated into an existing APP, or it can exist as a standalone APP; in either form, it can provide a voice playback mode that gives a better user experience.
If the functions described in the method embodiments of this application are implemented as software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium of a computing device. Based on this understanding, the part of the technical solution of the embodiments of this application that contributes over the prior art can be embodied as a software product. The software product is stored in a storage medium and includes several instructions for causing a computing device (which can be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to each other.
The above description of the disclosed embodiments enables those skilled in the art to implement or use this application. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A speech playing method, characterized in that it comprises:
displaying the identifiers of voice objects in a collection, wherein the identifier of a voice object is information about the user who published the voice object;
playing the voice objects in sequence according to a play instruction for the identifiers displayed in the collection.
2. The method according to claim 1, characterized in that playing the voice objects comprises:
determining the emotion type of a voice object according to a preset correspondence between voice content and emotion types;
playing the voice object using a playback effect corresponding to the emotion type of the voice object, the playback effect including at least one of the following: a dynamic effect, volume, intonation, and speech rate.
3. The method according to claim 1, characterized in that it further comprises:
displaying the identifier of the voice object currently being played differently from the identifiers of the other voice objects.
4. The method according to claim 1, characterized in that displaying the identifiers of the voice objects in a collection comprises:
displaying the identifiers of the voice objects together in a preset region;
the method further comprises:
displaying a play interaction control in the preset region;
and playing the voice objects in sequence according to the play instruction for the identifiers displayed in the collection comprises:
playing the voice objects in sequence after receiving a trigger instruction for the play interaction control.
5. The method according to any one of claims 1 to 4, characterized in that displaying the identifiers of the voice objects in a collection comprises:
displaying the voice objects together according to a sorting result, the sorting result including a result obtained using a preset sorting method;
the preset sorting method includes at least one of the following:
sorting according to the order in which the voice objects were received and the relevance between the voice objects;
sorting according to the order in which the voice objects were received and the emotion types of the voice objects;
sorting according to an interactive sorting instruction.
6. A voice playing device, characterized in that it comprises:
a display module, configured to display the identifiers of voice objects in a collection, wherein the identifier of a voice object is information about the user who published the voice object;
a playing module, configured to play the voice objects in sequence according to a play instruction for the identifiers displayed in the collection.
7. The device according to claim 6, characterized in that the playing module is specifically configured to:
determine the emotion type of a voice object according to a preset correspondence between voice content and emotion types, and play the voice object using a playback effect corresponding to the emotion type of the voice object, the playback effect including at least one of the following: a dynamic effect, volume, intonation, and speech rate.
8. The device according to claim 6, characterized in that the display module is further configured to:
display the identifier of the voice object currently being played differently from the identifiers of the other voice objects.
9. The device according to claim 6, characterized in that the display module is specifically configured to:
display the identifiers of the voice objects together in a preset region;
the display module is further configured to display a play interaction control in the preset region;
the playing module is specifically configured to play the voice objects in sequence after receiving a trigger instruction for the play interaction control.
10. The device according to any one of claims 6 to 9, characterized in that the display module is specifically configured to:
display the voice objects together according to a sorting result, the sorting result including a result obtained using a preset sorting method;
the device further comprises:
a sorting module, configured to sort the voice objects according to at least one of the following sorting methods: sorting according to the order in which the voice objects were received and the relevance between the voice objects; sorting according to the order in which the voice objects were received and the emotion types of the voice objects; sorting according to an interactive sorting instruction.
CN201810289786.1A 2018-04-03 2018-04-03 A kind of speech playing method and device Pending CN108710632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810289786.1A CN108710632A (en) 2018-04-03 2018-04-03 A kind of speech playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810289786.1A CN108710632A (en) 2018-04-03 2018-04-03 A kind of speech playing method and device

Publications (1)

Publication Number Publication Date
CN108710632A true CN108710632A (en) 2018-10-26

Family

ID=63867208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810289786.1A Pending CN108710632A (en) 2018-04-03 2018-04-03 A kind of speech playing method and device

Country Status (1)

Country Link
CN (1) CN108710632A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125483A (en) * 2014-07-07 2014-10-29 乐视网信息技术(北京)股份有限公司 Audio comment information generating method and device and audio comment playing method and device
CN104834435A (en) * 2015-05-05 2015-08-12 小米科技有限责任公司 Method and device for playing audio comments
CN105337845A (en) * 2015-10-30 2016-02-17 努比亚技术有限公司 Voice commenting server and method
CN105893432A (en) * 2015-12-09 2016-08-24 乐视网信息技术(北京)股份有限公司 Video comment classification method, video comment display system and server
CN105611481A (en) * 2015-12-30 2016-05-25 北京时代拓灵科技有限公司 Man-machine interaction method and system based on space voices
CN105955990A (en) * 2016-04-15 2016-09-21 北京理工大学 Method for sequencing and screening of comments with consideration of diversity and effectiveness
CN107818787A (en) * 2017-10-31 2018-03-20 努比亚技术有限公司 A kind of processing method of voice messaging, terminal and computer-readable recording medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110379406A (en) * 2019-06-14 2019-10-25 北京字节跳动网络技术有限公司 Voice remark conversion method, system, medium and electronic equipment
CN110413834A (en) * 2019-06-14 2019-11-05 北京字节跳动网络技术有限公司 Voice remark method of modifying, system, medium and electronic equipment
CN110379406B (en) * 2019-06-14 2021-12-07 北京字节跳动网络技术有限公司 Voice comment conversion method, system, medium and electronic device
CN110139164A (en) * 2019-06-17 2019-08-16 北京小桨搏浪科技有限公司 A kind of voice remark playback method, device, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181026