CN105530521A - Streaming media searching method, device and system - Google Patents
- Publication number
- CN105530521A (application CN201510944464.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- streaming media
- speech data
- attribute
- user emotion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
- H04N21/2393—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
- G06F16/436—Filtering based on additional data, e.g. user or group profiles using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/636—Filtering based on additional data, e.g. user or group profiles by using biological or physiological data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/252—Processing of multiple end-users' preferences to derive collaborative data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/654—Transmission by server directed to the client
Abstract
An embodiment of the invention discloses a streaming media searching method, device and system in the field of information processing technology. In the method of the embodiment, a server receives a streaming media search request sent by a streaming media terminal, analyzes the voice data input by the user that the request contains, and determines a corresponding user emotion attribute, such as sad, excited, angry or anxious; it then searches for streaming media corresponding to that user emotion attribute and sends information of the streaming media found to the streaming media terminal. The server can thus determine the user's current emotion (the user emotion attribute) from the voice data the user inputs and search for streaming media matching the determined emotion, saving the user the time of entering various keywords to find streaming media corresponding to his or her current mood.
Description
Technical field
The present invention relates to the field of information processing technology, and in particular to a streaming media searching method, device and system.
Background technology
Current terminal devices all provide streaming media playback functions, such as music and video playback, which are typically realized by installing a corresponding application on the device. A user can enter a keyword on the terminal device, which sends the keyword to a server; the server searches for the corresponding streaming media according to the keyword and returns it to the terminal device for playback.
Summary of the invention
Embodiments of the present invention provide a streaming media searching method, device and system that search for streaming media according to the user emotion attribute corresponding to voice data input by the user.
An embodiment of the present invention provides a streaming media searching method, comprising:
receiving a streaming media search request sent by a streaming media terminal, the search request comprising voice data input by a user;
determining a corresponding user emotion attribute according to the voice data, the user emotion attribute being excited, sad, angry or anxious;
searching for streaming media corresponding to the determined user emotion attribute, and sending information of the streaming media found to the streaming media terminal.
An embodiment of the present invention also provides a streaming media searching method, comprising:
displaying a user input interface, and receiving voice data input by a user through the interface;
adding the voice data input by the user to a streaming media search request, and sending the request to a server;
receiving, from the server, information of the streaming media found according to the user emotion attribute corresponding to the voice data.
An embodiment of the present invention also provides a streaming media searching device, comprising:
a request receiving unit, configured to receive a streaming media search request sent by a streaming media terminal, the search request comprising voice data input by a user;
an attribute determining unit, configured to determine a corresponding user emotion attribute according to the voice data in the search request received by the request receiving unit, the user emotion attribute being excited, sad, angry or anxious;
an information sending unit, configured to search for streaming media corresponding to the user emotion attribute determined by the attribute determining unit, and send information of the streaming media found to the streaming media terminal.
An embodiment of the present invention also provides a streaming media searching device, comprising:
a data receiving unit, configured to display a user input interface and receive voice data input by a user through the interface;
a request sending unit, configured to add the voice data received by the data receiving unit to a streaming media search request and send the request to a server;
an information receiving unit, configured to receive, from the server, information of the streaming media found according to the user emotion attribute corresponding to the voice data.
An embodiment of the present invention also provides a streaming media searching system, comprising a streaming media terminal and a server, wherein:
the streaming media terminal comprises:
a data receiving unit, configured to display a user input interface and receive voice data input by a user through the interface;
a request sending unit, configured to add the voice data received by the data receiving unit to a streaming media search request and send the request to the server;
an information receiving unit, configured to receive, from the server, information of the streaming media found according to the user emotion attribute corresponding to the voice data;
the server comprises:
a request receiving unit, configured to receive the streaming media search request sent by the streaming media terminal, the search request comprising the voice data input by the user;
an attribute determining unit, configured to determine a corresponding user emotion attribute according to the voice data in the search request received by the request receiving unit, the user emotion attribute being excited, sad, angry or anxious;
an information sending unit, configured to search for streaming media corresponding to the user emotion attribute determined by the attribute determining unit, and send information of the streaming media found to the streaming media terminal.
It can be seen that in the method of this embodiment, the server can determine a corresponding user emotion attribute, such as sad, excited, angry or anxious, from the voice data input by the user contained in the streaming media search request sent by the streaming media terminal, then search for streaming media corresponding to that attribute and send its information to the terminal. The server can thus determine the user's current emotion (the user emotion attribute) from the voice data the user inputs and search for streaming media matching the determined emotion, saving the user the time of entering various keywords to find streaming media corresponding to his or her current mood.
Accompanying drawing explanation
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required by the embodiments or by the prior-art description are briefly introduced below. The drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a streaming media searching method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of another streaming media searching method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a streaming media searching device provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another streaming media searching device provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another streaming media searching device provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another streaming media searching device provided by an embodiment of the present invention.
Embodiment
The technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The terms "first", "second", "third", "fourth" and the like (if present) in the specification, claims and drawings of the present invention are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described here. In addition, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to that process, method, product or device.
An embodiment of the present invention provides a streaming media searching method, used mainly in a streaming media searching system comprising a streaming media terminal and a server, wherein a user requests streaming media data from the server through the terminal, and the server finds the corresponding streaming media data and sends it to the terminal. The method performed by the server in the system of this embodiment, whose flowchart is shown in Fig. 1, comprises:
Step 101: receive a streaming media search request sent by the streaming media terminal, the request containing voice data input by a user.
Step 102: determine a corresponding user emotion attribute according to the voice data.
It can be understood that the streaming media terminal may present a user input interface through which the user inputs voice data; when the terminal receives the voice data, it carries it in a streaming media search request sent to the server. When the server receives the request, it analyzes the carried voice data to determine the attribute of the emotion type the voice data belongs to, i.e. the user emotion attribute. The voice data input by the user should be of moderate length, generally around 5 to 10 seconds: if it is too short, the server's analysis of the voice data may be inaccurate; if it is too long, the analysis takes longer. The user emotion attribute describes the user's emotion and may specifically include excited, sad, angry, anxious, and so on.
Specifically, when analyzing the voice data, the server may first try to recognize the text information corresponding to it; if text information is recognized, the emotion type the text belongs to is taken as the user emotion attribute of the voice data. Text information here means the words the user reads or sings when inputting the voice data. For example, if the user speaks the word "sad", the server recognizes the word "sad" and takes the emotion type it belongs to, namely "sad", as the user emotion attribute.
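The text branch above can be sketched as a keyword lookup that maps recognized words to one of the four emotion attributes. This is a minimal illustrative sketch: the keyword table, English vocabulary and function names are assumptions, not part of the patent's disclosure.

```python
# Illustrative mapping from recognized words to emotion attributes.
# The word lists are hypothetical examples.
EMOTION_KEYWORDS = {
    "excited": ["excited", "thrilled", "happy"],
    "sad": ["sad", "unhappy", "miserable"],
    "angry": ["angry", "furious", "mad"],
    "anxious": ["anxious", "nervous", "worried"],
}

def emotion_from_text(text):
    """Return the emotion attribute whose keyword appears in the recognized
    text, or None if no keyword matches (triggering the acoustic fallback)."""
    words = text.lower().split()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(k in words for k in keywords):
            return emotion
    return None
```

If this returns `None`, the server would fall back to the parameter-based or feature-based analysis described next.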
If no text information is recognized, the server may analyze the voice data further. Specifically, it can determine parameters of the voice data (such as its decibel range or voice intensity), then determine the corresponding user emotion attribute according to those parameters and a preset correspondence between parameters and emotion attributes, where a parameter in the correspondence includes any of the following: decibel range, voice intensity, and so on. For example, if the user inputs a scream, the server determines that the decibel value of the voice data is high and the corresponding user emotion attribute is "excited"; if the user inputs a sob, the server determines that the decibel value is low and the corresponding attribute is "sad".
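The parameter branch above can be sketched as a preset table from decibel ranges to emotion attributes (a loud scream lands in a high range mapped to "excited", a quiet sob in a low range mapped to "sad"). The specific decibel boundaries below are illustrative assumptions; the patent only states that the correspondence is preset.

```python
# Hypothetical preset correspondence between decibel ranges and
# emotion attributes; boundaries are illustrative, not from the patent.
DECIBEL_RANGES = [
    ((80.0, 120.0), "excited"),  # very loud input, e.g. a scream
    ((60.0, 80.0), "angry"),
    ((40.0, 60.0), "anxious"),
    ((0.0, 40.0), "sad"),        # quiet input, e.g. a sob
]

def emotion_from_decibels(db):
    """Look up the emotion attribute for a measured decibel value."""
    for (low, high), emotion in DECIBEL_RANGES:
        if low <= db < high:
            return emotion
    return None
```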
Alternatively, if no text information is recognized, the server may compare features of the voice data with features of sample voice data associated with each emotion attribute, and determine the user emotion attribute of the voice data according to the comparison result. The features of the sample voice data for each emotion attribute may be preset in the server in advance.
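The comparison branch above can be sketched as a nearest-sample match: the input's feature vector is compared against one preset sample vector per emotion attribute, and the closest sample wins. The choice of Euclidean distance, the two-dimensional features and the sample values are all assumptions for illustration; the patent does not specify the features or the comparison metric.

```python
import math

# Hypothetical preset sample features per emotion attribute,
# e.g. [normalized energy, pitch variance].
SAMPLE_FEATURES = {
    "excited": [0.9, 0.8],
    "sad": [0.2, 0.1],
    "angry": [0.8, 0.4],
    "anxious": [0.5, 0.7],
}

def nearest_emotion(features):
    """Return the emotion attribute whose preset sample features are
    closest (Euclidean distance) to the input's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SAMPLE_FEATURES, key=lambda e: dist(features, SAMPLE_FEATURES[e]))
```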
The user emotion attribute of the voice data can thus be determined by the above methods. In some cases the server determines multiple user emotion attributes: for instance, if the voice data input by the user is a scream, it may be an angry scream or an excited scream, so the server may determine multiple attributes for it, namely "angry" and "excited". In that case, if multiple user emotion attributes are obtained from the voice data, the server may first send an attribute list of the multiple attributes to the streaming media terminal for the user to choose from; when the server receives the user emotion attribute the user selected from the list, sent by the terminal, it takes the received attribute as the user emotion attribute of the voice data. In this way the server can adjust the determined user emotion attribute through user feedback and improve accuracy.
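The disambiguation exchange above can be sketched as follows, with a callback standing in for the terminal round-trip (server sends the attribute list, user selects, server adopts the selection). The function and parameter names are illustrative assumptions.

```python
def resolve_emotion(candidates, ask_user):
    """Resolve the final emotion attribute from analysis candidates.

    candidates: list of emotion attributes determined from the voice data.
    ask_user: callback that presents the attribute list to the user
              (via the terminal) and returns the selected attribute.
    """
    if len(candidates) == 1:
        return candidates[0]          # unambiguous, no round-trip needed
    choice = ask_user(candidates)     # send list to terminal, await selection
    # Adopt the user's selection; fall back to the first candidate
    # if the reply is not one of the offered attributes.
    return choice if choice in candidates else candidates[0]
```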
Step 103: search for streaming media corresponding to the user emotion attribute determined in step 102, and send information of the streaming media found (such as its title or synopsis) to the streaming media terminal, so that the user can select the streaming media to be played and play it through the terminal. Specifically, if the determined user emotion attribute is "excited", the streaming media found by the server may have a cheerful melody, such as fast-paced songs; if the attribute is "angry", the streaming media may have a slow melody, so as to soothe the user's mood; if the attribute is "anxious", the streaming media may likewise have a slow melody.
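The lookup rule above can be sketched as a mapping from emotion attribute to a media style used to filter the catalog. The style labels, the catalog fields, and the choice for "sad" (which the text does not specify) are illustrative assumptions.

```python
# Hypothetical emotion-to-style rule: excited -> fast, cheerful tracks;
# angry/anxious -> slow, soothing tracks; "sad" is an assumed mapping.
EMOTION_TO_STYLE = {
    "excited": "upbeat",
    "sad": "comforting",
    "angry": "soothing",
    "anxious": "soothing",
}

def search_streams(emotion, catalog):
    """Return the titles of catalog entries whose style matches the
    emotion attribute. catalog: list of dicts with 'title' and 'style'."""
    style = EMOTION_TO_STYLE.get(emotion)
    return [m["title"] for m in catalog if m["style"] == style]
```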
When sending the information of the streaming media, if multiple streaming media items are found, the server sorts them according to the number of times each has been played by other streaming media terminals, and sends the information of the sorted items to the streaming media terminal.
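The ordering step above can be sketched as a sort by play count, most-played first (a natural reading of sorting "by the number of times played by other terminals", though the patent does not state the direction). Field names are illustrative assumptions.

```python
def rank_by_play_count(streams):
    """Sort found streaming media by how often other terminals have
    played them, most-played first. streams: list of dicts with
    'title' and 'play_count' keys."""
    return sorted(streams, key=lambda s: s["play_count"], reverse=True)
```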
It can be seen that in the method of this embodiment, the server can determine a corresponding user emotion attribute, such as sad, excited, angry or anxious, from the voice data input by the user contained in the streaming media search request sent by the streaming media terminal, then search for streaming media corresponding to that attribute and send its information to the terminal. The server can thus determine the user's current emotion (the user emotion attribute) from the voice data the user inputs and search for streaming media matching the determined emotion, saving the user the time of entering various keywords to find streaming media corresponding to his or her current mood.
An embodiment of the present invention also provides a streaming media searching method, used mainly in a streaming media searching system. The method performed by the streaming media terminal in the system of this embodiment, whose flowchart is shown in Fig. 2, comprises:
Step 201: display a user input interface and receive voice data input by the user through the interface. The voice data may be words the user reads or sings, or sounds such as a scream or a sob.
Step 202: add the voice data input by the user to a streaming media search request and send the request to the server, so that the server can determine the corresponding user emotion attribute from the voice data and then search for streaming media according to that attribute; the method performed by the server is described in the embodiment above and is not repeated here.
Step 203: receive the information of the streaming media found by the server according to the user emotion attribute corresponding to the voice data. The user can then select streaming media to play according to the information received by the terminal.
It should be noted that the voice data received from the user input interface should be of moderate length, generally around 5 to 10 seconds: if it is too short, the server's analysis of the voice data may be inaccurate; if it is too long, the analysis takes longer. Thus, in a specific embodiment, after performing step 201 the streaming media terminal also checks the length of the voice data input by the user; if the length is outside a preset range (for example 5 to 10 seconds), the terminal displays a user notification informing the user that the length of the voice data must be within the preset range.
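The terminal-side length check above can be sketched as follows. The 5-to-10-second bounds come from the text; the function name and the wording of the notification are illustrative assumptions.

```python
# Preset length range for the voice data, per the example in the text.
MIN_SECONDS, MAX_SECONDS = 5.0, 10.0

def check_speech_length(seconds):
    """Return None if the recording length is acceptable; otherwise
    return a notification message to display to the user."""
    if MIN_SECONDS <= seconds <= MAX_SECONDS:
        return None
    return ("Please record between %d and %d seconds of speech."
            % (MIN_SECONDS, MAX_SECONDS))
```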
In addition, after the terminal performs step 202, if the server obtains multiple user emotion attributes from the voice data input by the user, the server may send an attribute list of the multiple attributes to the terminal; after receiving the attribute list sent by the server, the terminal displays it so that the user can select one or more user emotion attributes from it. When the terminal receives the user's selection from the list, it sends the selected user emotion attribute to the server so that the server can search for the corresponding streaming media.
It can be seen that in this embodiment the streaming media terminal sends voice data of moderate length input by the user to the server in a streaming media search request; the server can then determine the user's current emotion (the user emotion attribute) from the voice data and search for streaming media matching the determined emotion, saving the user the time of entering various keywords to find streaming media corresponding to his or her current mood.
An embodiment of the present invention also provides a streaming media searching device, such as the server described above, whose structure is shown schematically in Fig. 3 and which may specifically comprise:
a request receiving unit 10, configured to receive a streaming media search request sent by a streaming media terminal, the search request comprising voice data input by a user;
an attribute determining unit 11, configured to determine a corresponding user emotion attribute according to the voice data in the search request received by the request receiving unit 10, the user emotion attribute being excited, sad, angry, anxious, or the like.
Specifically, when determining the user emotion attribute, if text information corresponding to the voice data is recognized, the attribute determining unit 11 takes the emotion type the text information belongs to as the user emotion attribute; if no text information corresponding to the voice data is recognized, it determines the user emotion attribute according to parameters of the voice data and a preset correspondence between parameters and emotion attributes, where a parameter in the correspondence includes any of the following: decibel range and voice intensity; or, if no text information corresponding to the voice data is recognized, it compares features of the voice data with features of sample voice data associated with each emotion attribute and determines the user emotion attribute of the voice data according to the comparison result.
Further, if the attribute determining unit 11 obtains multiple user emotion attributes from the voice data, it may also send an attribute list of the multiple attributes to the streaming media terminal; when it receives the user emotion attribute the user selected from the list, sent by the terminal, it takes the received attribute as the user emotion attribute of the voice data.
An information sending unit 12 is configured to search for streaming media corresponding to the user emotion attribute determined by the attribute determining unit 11, and send information of the streaming media found to the streaming media terminal.
If multiple streaming media items are found, the information sending unit 12 sorts them according to the number of times each has been played by other streaming media terminals, and sends the information of the sorted items to the streaming media terminal.
It can be seen that in the streaming media searching device of this embodiment, the attribute determining unit 11 determines a corresponding user emotion attribute, such as sad, excited, angry or anxious, from the voice data input by the user contained in the streaming media search request sent by the streaming media terminal, and the information sending unit 12 then searches for streaming media corresponding to that attribute and sends its information to the terminal. The device can thus determine the user's current emotion (the user emotion attribute) from the voice data the user inputs and search for streaming media matching the determined emotion, saving the user the time of entering various keywords to find streaming media corresponding to his or her current mood.
An embodiment of the present invention also provides a streaming media searching device, such as the streaming media terminal described above, whose structure is shown schematically in Fig. 4 and which may specifically comprise:
a data receiving unit 20, configured to display a user input interface and receive voice data input by a user through the interface;
a request sending unit 21, configured to add the voice data received by the data receiving unit 20 to a streaming media search request and send the request to a server;
an information receiving unit 22, configured to receive, from the server, information of the streaming media found according to the user emotion attribute corresponding to the voice data in the search request sent by the request sending unit 21.
Referring to Fig. 5, in a specific embodiment the streaming media searching device may comprise, in addition to the structure shown in Fig. 4, a notification unit 23, a list display unit 24 and an attribute sending unit 25, wherein:
the notification unit 23 is configured to display a user notification if the length of the voice data received by the data receiving unit 20 is outside a preset range, the notification informing the user that the length of the voice data must be within the preset range;
the list display unit 24 is configured to display, after the request sending unit 21 has sent the streaming media search request, the attribute list of multiple user emotion attributes received from the server, if the server obtains multiple user emotion attributes from the voice data in the request;
the attribute sending unit 25 is configured to receive the user emotion attribute the user selects from the attribute list displayed by the list display unit 24, and send the selected attribute to the server.
It can be seen that in this embodiment, after the data receiving unit 20 receives the voice data input by the user, the notification unit 23 checks the length of the voice data and displays a user notification if it is outside the preset range. After the request sending unit 21 has sent the streaming media search request, the list display unit 24 may receive the attribute list from the server and display it; after the attribute sending unit 25 has sent the user emotion attribute selected by the user, the information receiving unit 22 receives the information of the streaming media sent by the server.
In the present embodiment, Streaming Media is searched the speech data that the user of moderate length can input by request transmitting unit 21 in device and is sent to server by Streaming Media search request, such server can determine the mood that user is current and user emotion attribute according to the speech data of user's input, and then for the mood determined to search corresponding Streaming Media, the time that user inputs various keyword to obtain the Streaming Media corresponding with user's current emotional can be saved.
An embodiment of the present invention further provides a streaming media searching device, whose structure is shown schematically in Fig. 6. This device may differ considerably depending on configuration or performance, and may comprise one or more central processing units (CPUs) 30 (for example, one or more processors), a memory 31, and one or more storage media 32 (for example, one or more mass storage devices) storing application programs 321 or data 322. The memory 31 and the storage medium 32 may provide transient or persistent storage. The program stored in the storage medium 32 may comprise one or more modules (not shown in the figure), each of which may comprise a series of instruction operations in the streaming media searching device. Further, the central processing unit 30 may be configured to communicate with the storage medium 32 and execute, in the streaming media searching device, the series of instruction operations stored in the storage medium 32.
The streaming media searching device may also comprise one or more power supplies 33, one or more wired or wireless network interfaces 34, one or more input/output interfaces 35, and/or one or more operating systems 323, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server or the streaming media terminal described in the above method embodiments may be based on the structure of the streaming media searching device shown in Fig. 6.
An embodiment of the present invention further provides a streaming media searching system, comprising a streaming media terminal and a server, wherein the server may adopt the structure of the streaming media searching device shown in Fig. 3 or Fig. 6 above, and the streaming media terminal may adopt the structure of the streaming media searching device shown in any one of Figs. 4 to 6 above; details are not repeated here.
Those of ordinary skill in the art will appreciate that all or some of the steps in the various methods of the above embodiments may be completed by hardware instructed by a program, which may be stored in a computer-readable storage medium; the storage medium may comprise a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The streaming media searching method, device, and system provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the contents of this description should not be construed as limiting the present invention.
Claims (15)
1. A streaming media searching method, characterized by comprising:
receiving a streaming media search request sent by a streaming media terminal, the streaming media search request comprising speech data input by a user;
determining a corresponding user emotion attribute according to the speech data, the user emotion attribute being excited, sad, angry, or anxious;
searching for streaming media corresponding to the determined user emotion attribute, and sending information on the found streaming media to the streaming media terminal.
2. The method according to claim 1, characterized in that determining the corresponding user emotion attribute according to the speech data specifically comprises:
if text information corresponding to the speech data is recognized, determining the emotion type to which the text information belongs as the user emotion attribute;
if no text information corresponding to the speech data is recognized, determining the user emotion attribute corresponding to the speech data according to a parameter of the speech data and a preset correspondence between parameters and emotion attributes, wherein the parameters in the correspondence comprise either of the following: decibel value range and voice intensity;
or, if no text information corresponding to the speech data is recognized, comparing features of the speech data with features of the sample speech data included under each emotion attribute, and determining the user emotion attribute corresponding to the speech data according to the comparison result.
3. The method according to claim 1, characterized in that determining the corresponding user emotion attribute according to the speech data specifically comprises:
if multiple user emotion attributes are obtained according to the speech data, sending an attribute list of the multiple user emotion attributes to the streaming media terminal;
upon receiving, from the streaming media terminal, the user emotion attribute selected by the user from the attribute list, determining the received user emotion attribute as the user emotion attribute corresponding to the speech data.
4. The method according to any one of claims 1 to 3, characterized in that, if multiple streaming media items are found, sending the information on the found streaming media to the streaming media terminal specifically comprises: sorting the multiple streaming media items by the number of times each has been played by other streaming media terminals, and sending information on the sorted multiple streaming media items to the streaming media terminal.
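The ordering described in claim 4 can be sketched as follows. This is an illustrative sketch, not part of the claims; sorting most-played first is an assumption, since the claim does not fix the sort direction, and the field names are invented for the example.

```python
# Sketch of claim 4: when several streaming media items match the user emotion
# attribute, sort them by how many times each has been played on other
# streaming media terminals. Descending order and field names are assumptions.

def sort_by_play_count(media_items: list[dict]) -> list[dict]:
    """Sort matched media items by their play count, most-played first."""
    return sorted(media_items, key=lambda item: item["play_count"], reverse=True)

matches = [
    {"title": "Song A", "play_count": 120},
    {"title": "Song B", "play_count": 950},
    {"title": "Song C", "play_count": 430},
]
ranked = sort_by_play_count(matches)  # Song B, then Song C, then Song A
```

The server would then send information on the items in `ranked` to the streaming media terminal.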
5. A streaming media searching method, characterized by comprising:
displaying a user input interface, and receiving speech data input by a user from the user input interface;
adding the speech data input by the user to a streaming media search request, and sending the streaming media search request to a server;
receiving information, sent by the server, on the streaming media that the server finds according to the user emotion attribute corresponding to the speech data.
6. The method according to claim 5, characterized in that, after receiving the speech data input by the user from the user input interface, the method further comprises: if the length of the speech data is outside a preset range, displaying a user notification, the user notification informing the user that the length of the speech data needs to be within the preset range.
7. The method according to claim 5 or 6, characterized in that, after sending the streaming media search request to the server, the method further comprises:
displaying an attribute list of multiple user emotion attributes received from the server;
receiving the user emotion attribute selected by the user from the attribute list, and then sending the selected user emotion attribute to the server.
8. A streaming media searching device, characterized by comprising:
a request receiving unit, configured to receive a streaming media search request sent by a streaming media terminal, the streaming media search request comprising speech data input by a user;
an attribute determining unit, configured to determine a corresponding user emotion attribute according to the speech data contained in the streaming media search request received by the request receiving unit, the user emotion attribute being excited, sad, angry, or anxious;
an information transmitting unit, configured to search for streaming media corresponding to the user emotion attribute determined by the attribute determining unit, and send information on the found streaming media to the streaming media terminal.
9. The device according to claim 8, characterized in that:
the attribute determining unit is specifically configured to, if text information corresponding to the speech data is recognized, determine the emotion type to which the text information belongs as the user emotion attribute;
or, the attribute determining unit is specifically configured to, if no text information corresponding to the speech data is recognized, determine the user emotion attribute corresponding to the speech data according to a parameter of the speech data and a preset correspondence between parameters and emotion attributes, wherein the parameters in the correspondence comprise either of the following: decibel value range and voice intensity;
or, the attribute determining unit is specifically configured to, if no text information corresponding to the speech data is recognized, compare features of the speech data with features of the sample speech data included under each emotion attribute, and determine the user emotion attribute corresponding to the speech data according to the comparison result.
10. The device according to claim 8, characterized in that:
the attribute determining unit is specifically configured to, if multiple user emotion attributes are obtained according to the speech data, send an attribute list of the multiple user emotion attributes to the streaming media terminal; and, upon receiving, from the streaming media terminal, the user emotion attribute selected by the user from the attribute list, determine the received user emotion attribute as the user emotion attribute corresponding to the speech data.
11. The device according to any one of claims 8 to 10, characterized in that:
the information transmitting unit is specifically configured to, if multiple streaming media items are found, sort the multiple streaming media items by the number of times each has been played by other streaming media terminals, and send information on the sorted multiple streaming media items to the streaming media terminal.
12. A streaming media searching device, characterized by comprising:
a data receiving unit, configured to display a user input interface and receive speech data input by a user from the user input interface;
a request transmitting unit, configured to add the speech data input by the user and received by the data receiving unit to a streaming media search request, and send the streaming media search request to a server;
an information receiving unit, configured to receive information, sent by the server, on the streaming media that the server finds according to the user emotion attribute corresponding to the speech data.
13. The device according to claim 12, characterized by further comprising: a notification unit, configured to display a user notification if the length of the speech data received by the data receiving unit is outside a preset range, the user notification informing the user that the length of the speech data needs to be within the preset range.
14. The device according to claim 12 or 13, characterized by further comprising:
a list display unit, configured to display an attribute list of multiple user emotion attributes received from the server;
an attribute transmitting unit, configured to receive the user emotion attribute selected by the user from the attribute list displayed by the list display unit, and then send the selected user emotion attribute to the server.
15. A streaming media searching system, characterized by comprising a streaming media terminal and a server, wherein the server is the streaming media searching device according to any one of claims 8 to 11, and the streaming media terminal is the streaming media searching device according to any one of claims 12 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510944464.2A CN105530521A (en) | 2015-12-16 | 2015-12-16 | Streaming media searching method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105530521A true CN105530521A (en) | 2016-04-27 |
Family
ID=55772460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510944464.2A Pending CN105530521A (en) | 2015-12-16 | 2015-12-16 | Streaming media searching method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105530521A (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1848105A (en) * | 2005-04-15 | 2006-10-18 | 浙江工业大学 | Spiritual consoling device with feeling for the aged |
CN101198915A (en) * | 2005-06-14 | 2008-06-11 | 丰田自动车株式会社 | Dialogue system |
CN101662546A (en) * | 2009-09-16 | 2010-03-03 | 中兴通讯股份有限公司 | Method of monitoring mood and device thereof |
CN101669090A (en) * | 2007-04-26 | 2010-03-10 | 福特全球技术公司 | Emotive advisory system and method |
CN102300163A (en) * | 2011-09-22 | 2011-12-28 | 宇龙计算机通信科技(深圳)有限公司 | Information pushing method, mobile terminal and system |
CN103137043A (en) * | 2011-11-23 | 2013-06-05 | 财团法人资讯工业策进会 | Advertisement display system and advertisement display method in combination with search engine service |
CN103561652A (en) * | 2011-06-01 | 2014-02-05 | 皇家飞利浦有限公司 | Method and system for assisting patients |
CN103829958A (en) * | 2014-02-19 | 2014-06-04 | 广东小天才科技有限公司 | Method and device for monitoring moods of people |
CN103889109A (en) * | 2014-02-17 | 2014-06-25 | 武汉阿拉丁科技有限公司 | Driving device of emotional control of LED light color and brightness change |
CN103929551A (en) * | 2013-01-11 | 2014-07-16 | 上海掌门科技有限公司 | Assisting method and system based on call |
CN103941853A (en) * | 2013-01-22 | 2014-07-23 | 三星电子株式会社 | Electronic device for determining emotion of user and method for determining emotion of user |
CN104284018A (en) * | 2014-09-23 | 2015-01-14 | 深圳市金立通信设备有限公司 | Terminal |
CN104616666A (en) * | 2015-03-03 | 2015-05-13 | 广东小天才科技有限公司 | Method and device for improving dialogue communication effect based on speech analysis |
CN204322085U (en) * | 2014-12-15 | 2015-05-13 | 山东大学 | A kind of early education towards child is accompanied and attended to robot |
CN104917896A (en) * | 2015-06-12 | 2015-09-16 | 努比亚技术有限公司 | Data pushing method and terminal equipment |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105979083A (en) * | 2016-04-29 | 2016-09-28 | 珠海市魅族科技有限公司 | Method and device for displaying graph |
CN106713818A (en) * | 2017-02-21 | 2017-05-24 | 福建江夏学院 | Speech processing system and method during video call |
CN108777804A (en) * | 2018-05-30 | 2018-11-09 | 腾讯科技(深圳)有限公司 | media playing method and device |
CN108777804B (en) * | 2018-05-30 | 2021-07-27 | 腾讯科技(深圳)有限公司 | Media playing method and device |
WO2020232796A1 (en) * | 2019-05-17 | 2020-11-26 | 腾讯音乐娱乐科技(深圳)有限公司 | Multimedia data matching method and device, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107833574B (en) | Method and apparatus for providing voice service | |
KR101909807B1 (en) | Method and apparatus for inputting information | |
US9558261B2 (en) | Ontology-based data access monitoring | |
CN105302903B (en) | Searching method, device, system and search result sequencing foundation determination method | |
US10885107B2 (en) | Music recommendation method and apparatus | |
CN105530521A (en) | Streaming media searching method, device and system | |
JP5717794B2 (en) | Dialogue device, dialogue method and dialogue program | |
CN105426404A (en) | Music information recommendation method and apparatus, and terminal | |
CN105279227B (en) | Method and device for processing voice search of homophone | |
CN111199732B (en) | Emotion-based voice interaction method, storage medium and terminal equipment | |
CN109961786B (en) | Product recommendation method, device, equipment and storage medium based on voice analysis | |
KR102348084B1 (en) | Image Displaying Device, Driving Method of Image Displaying Device, and Computer Readable Recording Medium | |
CN105491126A (en) | Service providing method and service providing device based on artificial intelligence | |
US20130085987A1 (en) | Downloading method and device | |
US20190236208A1 (en) | Smart speaker with music recognition | |
KR20160106075A (en) | Method and device for identifying a piece of music in an audio stream | |
US20230259712A1 (en) | Sound effect adding method and apparatus, storage medium, and electronic device | |
CN110990598B (en) | Resource retrieval method and device, electronic equipment and computer-readable storage medium | |
CN109923515A (en) | Use the experience of telling a story of network addressable device creation film | |
CN111324700A (en) | Resource recall method and device, electronic equipment and computer-readable storage medium | |
CN108319628B (en) | User interest determination method and device | |
CN112667076A (en) | Voice interaction data processing method and device | |
CN111414512A (en) | Resource recommendation method and device based on voice search and electronic equipment | |
CN109829117B (en) | Method and device for pushing information | |
WO2014176489A2 (en) | A system and method for supervised creation of personalized speech samples libraries in real-time for text-to-speech synthesis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160427 |