CN105550217B - Scene music searching method and scene music searching device - Google Patents

Scene music searching method and scene music searching device

Info

Publication number
CN105550217B
CN105550217B (Application CN201510884497.2A)
Authority
CN
China
Prior art keywords
music
scene
search
search word
searching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510884497.2A
Other languages
Chinese (zh)
Other versions
CN105550217A (en)
Inventor
张茜
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201510884497.2A priority Critical patent/CN105550217B/en
Publication of CN105550217A publication Critical patent/CN105550217A/en
Priority to PCT/CN2016/100405 priority patent/WO2017092493A1/en
Application granted granted Critical
Publication of CN105550217B publication Critical patent/CN105550217B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a scene music searching method and a scene music searching device. The scene music searching method comprises the steps of: obtaining a scene music search sentence; splitting the scene music search sentence to obtain at least one scene music search word; acquiring a music label corresponding to the scene music search word; searching a music library for corresponding music by using the music label; and displaying the scene music search word and the corresponding music. The scene music searching method and the scene music searching device analyze the scene music search sentence input by the user to obtain the corresponding music label and, in turn, the corresponding music, so the music content displayed to the user is comprehensive and can be updated in time.

Description

Scene music searching method and scene music searching device
Technical Field
The present invention relates to the field of internet, and in particular, to a scene music search method and a scene music search apparatus.
Background
With the rapid development of internet technology, people can obtain various kinds of content, such as videos and music, from the internet. In order to provide satisfactory music content for users, existing music software and music playing platforms classify music according to playing scenes, such as songs listened to before sleeping, songs for the trip home, or songs for walking.
However, such scene music is curated manually, that is, a scene is manually associated with a subset of music, so the kinds of scenes are limited. If a user wants to listen to music while cooking in the kitchen, the user may not find corresponding scene music, because no kitchen scene exists on the music software or music playing platform. Meanwhile, because the music content for each existing scene is added manually, it is updated slowly and remains fixed.
Therefore, the existing music playing platforms cannot provide richer and more accurate scene music.
Disclosure of Invention
The embodiments of the invention provide a scene music searching method and a scene music searching device whose music content is more comprehensive and updated faster, thereby solving the technical problems that existing scene music searching methods and devices offer little music content and update it slowly.
The embodiment of the invention provides a scene music searching method, which comprises the following steps:
acquiring a scene music search sentence; splitting the scene music search sentence to obtain at least one scene music search word;
acquiring a music label corresponding to the scene music search word according to the scene music search word;
searching corresponding music in a music library by using the music label; the music library comprises the music, the music labels and the corresponding relation between the music and the music labels; and
displaying the scene music search word and the corresponding music.
An embodiment of the present invention further provides a scene music search apparatus, including:
the sentence splitting module is used for obtaining a scene music search sentence, and splitting the scene music search sentence to obtain at least one scene music search word;
the music label acquisition module is used for acquiring a music label corresponding to the scene music search word according to the scene music search word;
the music searching module is used for searching corresponding music in a music library by using the music labels; the music library comprises the music, the music labels and the corresponding relation between the music and the music labels; and
the music display module is used for displaying the scene music search word and the corresponding music.
Compared with the scene music searching methods and devices of the prior art, the scene music searching method and scene music searching device provided by the invention obtain the corresponding music labels by analyzing the scene music search sentence input by the user, and then obtain the corresponding music. The music content displayed to the user is therefore comprehensive and can be obtained in real time, which solves the technical problems that existing scene music searching methods and devices offer little music content and update it slowly.
Drawings
Fig. 1 is a flowchart of a first preferred embodiment of a scene music searching method of the present invention;
fig. 2 is a first flowchart of a scene music searching method according to a second preferred embodiment of the present invention;
FIG. 3 is a second flowchart of a scene music searching method according to a second preferred embodiment of the present invention;
fig. 4 is a third flowchart of a scene music searching method according to a second preferred embodiment of the present invention;
fig. 5 is a schematic structural diagram of a scene music search device according to a first preferred embodiment of the present invention;
fig. 6 is a schematic structural diagram of a scene music search device according to a second preferred embodiment of the present invention;
fig. 7 is a schematic structural diagram of a sentence splitting module of a second preferred embodiment of the scene music search apparatus according to the present invention;
fig. 8 is a schematic structural diagram of a music tag acquisition module of a second preferred embodiment of the scene music search apparatus according to the present invention;
fig. 9 is a schematic structural diagram of a music presentation module of a second preferred embodiment of the scene music search device according to the present invention;
fig. 10A to 10B are schematic diagrams illustrating a scene music searching method and a scene music searching apparatus according to an embodiment of the present invention;
fig. 11 is a schematic view of a working environment structure of an electronic device in which the scene music search apparatus of the present invention is located.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, embodiments of the invention are described with reference to steps and symbols of operations performed by one or more computers, unless otherwise indicated. It will thus be appreciated that those steps and operations, referred to herein several times as computer-executed, include manipulation by a computer processing unit of electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may reconfigure or otherwise alter the computer's operation in a manner well known to those skilled in the art. The data is maintained in a data structure, that is, a physical location in memory with particular characteristics defined by the data format. However, while the principles of the invention are described in the foregoing terms, no limitation to the specific details shown is intended, since one skilled in the art will recognize that the various steps and operations described below may also be implemented in hardware.
The scene music search apparatus of the present invention may be implemented using a variety of electronic devices including, but not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The electronic device is preferably a music search platform or a music search server, so as to effectively improve the music search success rate and the music content updating speed of the music search platform or the music search server.
Referring to fig. 1, fig. 1 is a flowchart illustrating a scene music searching method according to a first preferred embodiment of the present invention. The scene music search method of the preferred embodiment may be implemented using the electronic device, and includes:
step S101, acquiring a scene music search sentence; splitting the scene music search sentence to obtain at least one scene music search word;
step S102, acquiring a music label corresponding to the scene music search word according to the scene music search word;
step S103, searching corresponding music in a music library by using the music label;
and step S104, displaying the scene music search words and the corresponding music.
The following describes in detail the specific flow of the steps of the scene music search method according to the preferred embodiment.
In step S101, the scene music search apparatus obtains a scene music search sentence, where the scene music search sentence may be a sentence manually input by a user or a sentence input by voice. For example, if the user inputs the scene music search sentence "I blow at the seaside", the scene music search words obtained after splitting are "I", "at", "seaside", and "blow". Subsequently, the process goes to step S102.
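The splitting in step S101 can be sketched as follows. This is a minimal illustration only: a production system processing Chinese input would need a proper word segmenter (for example jieba), whereas this sketch simply lowercases and splits an English gloss on whitespace.

```python
def split_search_sentence(sentence: str) -> list[str]:
    """Split a scene music search sentence into candidate search words.

    Whitespace splitting is a stand-in for real word segmentation.
    """
    return sentence.lower().split()


print(split_search_sentence("I blow at the seaside"))
# ['i', 'blow', 'at', 'the', 'seaside']
```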
In step S102, the scene music search apparatus acquires a music tag corresponding to the scene music search word obtained in step S101. The scene music search word is not used directly as a music tag; instead, it is used as a keyword to retrieve the same or similar music tags from the music library. Therefore, whether or not the scene music search word itself exists among the music tags of the music library, music tags strongly related to the scene music search word can be obtained, which ensures the effectiveness of the search. Subsequently, the process goes to step S103.
In step S103, the scene music search apparatus searches the music library for corresponding music using the music tags acquired in step S102. The music library here includes music, music tags, and the correspondence between music and music tags. The music tags in the music library may be tags generated by users of social media, authoritative tags audited by music publishers or professionals, or supplementary tags generated by machine learning. The music library is an online music database updated in real time; it may directly call the online music database of the local music website or the online music databases of other websites. Subsequently, the process goes to step S104.
In step S104, the scene music search device displays the scene music search word obtained in step S101 and the corresponding music obtained in step S103 for the user to perform a playing operation.
This completes the scene music search process of the scene music search method of the present preferred embodiment.
In the scene music search method of the preferred embodiment, the corresponding music tag is obtained by analyzing the scene music search statement input by the user, and then the corresponding music is obtained; the scene music search words are used as the key words for searching, so that the success rate of searching is improved; meanwhile, music is returned through the music library updated in real time, the updating speed of the searched music content and the comprehensiveness of the searched music content are improved, and therefore the music content displayed to the user is relatively comprehensive.
Referring to fig. 2, fig. 2 is a flowchart illustrating a scene music searching method according to a second preferred embodiment of the present invention. The scene music search method of the preferred embodiment may be implemented using the electronic device, and includes:
step S201, splitting a scene music search sentence into a plurality of basic words;
step S202, using verbs and nouns in the basic words as scene music search words;
step S203, judging whether the scene music search word has a corresponding music label; if there is a corresponding music label, go to step S204; if there is no corresponding music label, go to step S205;
step S204, acquiring a music label;
step S205, acquiring similar search terms of the scene music search terms, and acquiring corresponding music labels according to the similar search terms;
step S206, searching corresponding music in a music library by using the music label;
step S207, acquiring weight values of the music labels in all the searched music;
and S208, sequencing and displaying all the searched music according to the weight values of the music labels.
The following describes in detail the specific flow of the steps of the scene music search method according to the preferred embodiment.
In step S201, the scene music search apparatus obtains a scene music search sentence, where the scene music search sentence may be a sentence manually input by a user or a sentence input by voice. For example, if the user inputs the scene music search sentence "I blow at the seaside", the basic words obtained after splitting are "I", "at", "seaside", and "blow". Subsequently, the process goes to step S202.
In step S202, since a scene music search word is generally a meaningful noun or verb, while pronouns, prepositions, and conjunctions carry no search meaning, the scene music search apparatus uses the verbs and nouns among the basic words as scene music search words. Among the basic words acquired in step S201, "seaside" and "blow" may be set as the scene music search words. Subsequently, the process goes to step S203.
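The noun-and-verb filter of step S202 can be sketched as below. The part-of-speech tags here are hypothetical placeholders; in practice they would come from a POS tagger (for Chinese text, for example, jieba.posseg).

```python
def extract_search_words(tagged_words: list[tuple[str, str]]) -> list[str]:
    """Keep only words tagged as noun ('n') or verb ('v') as
    scene music search words; pronouns, prepositions, etc. are dropped."""
    return [word for word, pos in tagged_words if pos in ("n", "v")]


# Hypothetical tagger output for "I at seaside blow":
tagged = [("I", "r"), ("at", "p"), ("seaside", "n"), ("blow", "v")]
print(extract_search_words(tagged))  # ['seaside', 'blow']
```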
In step S203, the scene music search apparatus acquires a music tag corresponding to the scene music search word from the scene music search word acquired in step S202. In this step, it is first determined whether the scene music search term has a corresponding music tag, and if so, the process goes to step S204; if there is no corresponding music tag, go to step S205.
In step S204, if the scene music search word acquired in step S202 has a corresponding music tag, for example the scene music search word "seaside" can itself be used as a music tag, the scene music search apparatus directly acquires that music tag. Subsequently, the process goes to step S206.
In step S205, if the scene music search word acquired in step S202 does not have a corresponding music tag, for example, the scene music search word "blow" acquired in step S202 cannot be used as a music tag itself, the scene music search apparatus acquires a similar search word of the scene music search word.
The obtained similar search word can then serve as the music tag. Specifically, the similarity between the scene music search word and each music tag can be calculated from word-vector similarity, and the music tag with the highest similarity is selected as the music tag corresponding to the scene music search word.
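The word-vector matching described above can be sketched with cosine similarity. The two-dimensional vectors here are toy values; real word vectors would come from a trained embedding model, and the tag names are illustrative.

```python
import math


def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def closest_tag(word_vector, tag_vectors):
    """Return the music tag whose vector is most similar to the
    search word's vector."""
    return max(tag_vectors,
               key=lambda tag: cosine_similarity(word_vector, tag_vectors[tag]))


tags = {"walking": (0.9, 0.1), "sleeping": (0.0, 1.0)}
print(closest_tag((1.0, 0.0), tags))  # walking
```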
Meanwhile, the similar search word can also be obtained by analyzing the scene music search word against other databases, for example by looking up an explanation of the scene music search word on a term-explanation website. For instance, the Baidu website explains "blowing (wind)" as going out to relax one's mood, as in "let us go out to blow some wind and take a walk", so the similar search word for "blowing" can also be set to "walking" or "strolling", and so on. Subsequently, the process goes to step S206.
In step S206, the scene music search apparatus searches the music library for corresponding music using the music tags acquired in steps S204 and S205. The music library includes music, music tags, the correspondence between music and music tags, and the weight values of all music tags corresponding to each piece of music. The music tags in the music library may be tags generated by users of social media, authoritative tags audited by music publishers or professionals, or supplementary tags generated by machine learning. The music library is an online music database updated in real time; it may directly call the online music database of the local music website or the online music databases of other websites.
It should be noted that, in order to avoid missing related music, if a plurality of music tags are acquired in steps S204 and S205, all music satisfying at least one of the music tags is searched in the music library. Subsequently, the process goes to step S207.
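The "at least one tag" search can be sketched as a union over tags. The library contents below are purely illustrative.

```python
# Hypothetical in-memory music library: track name -> set of music tags.
music_library = {
    "music_a": {"seaside", "walking"},
    "music_b": {"seaside"},
    "music_c": {"night", "sleep"},
}


def search_union(library: dict, tags: list[str]) -> list[str]:
    """Return every track that carries at least one of the given tags,
    so related music is not missed when several tags were acquired."""
    wanted = set(tags)
    return sorted(name for name, track_tags in library.items()
                  if track_tags & wanted)


print(search_union(music_library, ["seaside", "walking"]))
# ['music_a', 'music_b']
```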
In step S207, since the music is to be presented to the user, the music the user most likely wants should be presented first. Therefore, in this step, the scene music search apparatus acquires, for every piece of music searched in step S206, the weight value of the music tag corresponding to the scene music search word; that is, it obtains the degree of relevance between each searched piece of music and the scene music search word.
If there are a plurality of scene music search words, they are combined into a scene music search phrase, and the weight values of the scene music search phrase for all music searched in step S206 are obtained by superposition. That is, the weight value of the scene music search phrase in a piece of music is the sum of the weight values, in that music, of each scene music search word in the phrase. For example, suppose the scene music search phrase comprises scene music search word A and scene music search word B, the weight values of scene music search word A in music a, music b, and music c are 20, 30, and 50 respectively, and the weight values of scene music search word B in music a, music b, and music c are 40, 10, and 0 respectively. Then the weight values of the scene music search phrase for music a, music b, and music c are 60, 40, and 50 respectively. Subsequently, the process goes to step S208.
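The superposition of per-word weights can be sketched directly, reproducing the numbers from the example above:

```python
def phrase_weights(per_word_weights: dict) -> dict:
    """Sum, per track, the weight of each search word in the phrase.

    per_word_weights maps search word -> {track: weight}.
    """
    totals: dict = {}
    for word_weights in per_word_weights.values():
        for track, w in word_weights.items():
            totals[track] = totals.get(track, 0) + w
    return totals


weights = phrase_weights({
    "A": {"music_a": 20, "music_b": 30, "music_c": 50},
    "B": {"music_a": 40, "music_b": 10, "music_c": 0},
})
print(weights)  # {'music_a': 60, 'music_b': 40, 'music_c': 50}
```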
In step S208, while displaying the scene music search words, the scene music search apparatus displays all the searched music in an order determined by the weight values of the music tags obtained in step S207, so that the user can perform a playing operation. Meanwhile, the current popularity of the music can be used to correct the ranking; for example, several pieces of music whose weight values for a given scene music search word differ by less than a set value are ordered by their current popularity. Since the music is sorted by both its relevance to the scene music search word and its popularity, the user can easily find the music he or she wants to listen to.
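The ranking in step S208 can be sketched as follows. Weights within the same band of width `band` stand in for the patent's "weight difference smaller than a set value" and are broken by popularity; both the band width and the popularity figures are illustrative assumptions.

```python
def rank_music(weights: dict, popularity: dict, band: int = 10) -> list[str]:
    """Sort tracks by descending tag weight; tracks whose weights fall
    into the same band are ordered by current popularity instead."""
    return sorted(weights,
                  key=lambda t: (-(weights[t] // band), -popularity[t]))


weights = {"music_a": 60, "music_b": 40, "music_c": 50}
popularity = {"music_a": 3, "music_b": 9, "music_c": 8}
print(rank_music(weights, popularity))
# ['music_a', 'music_c', 'music_b']  (weights are all in distinct bands)
print(rank_music(weights, popularity, band=30))
# ['music_a', 'music_b', 'music_c']  (b and c tie; b is more popular)
```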
This completes the scene music search process of the scene music search method of the present preferred embodiment.
Preferably, referring to fig. 3, fig. 3 is a second flowchart of a scene music searching method according to a second preferred embodiment of the present invention. The method for searching for scene music according to the preferred embodiment further includes, after step S208:
step S301, the scene music searching device receives the search word modification instruction and modifies the scene music search word according to the search word modification instruction.
Since different users may understand a given scene differently, if a user considers the acquired scene music search words inaccurate, a search word modification instruction can be sent to the scene music search apparatus through the client. For example, the scene music search words "seaside" and "blow" are acquired in step S202, and step S208 finally presents music according to both words. If the user considers that the desired scene music has nothing to do with blowing, the scene music search word "blow" can be deleted by sending a search word modification instruction. The scene music search apparatus then receives the search word modification instruction and modifies the scene music search words accordingly, for example by deleting the scene music search word "blow".
In step S302, the scene music search device obtains the music labels corresponding to the modified scene music search terms, and obtains the weighted values of all music searched in step S206.
In step S303, the scene music searching apparatus performs reordering display on all searched music according to the weight values of the music tags in the music obtained in step S302, so as to allow the user to perform playing operation.
Thus, the flexibility of music search of the scene music search method of the preferred embodiment is enhanced, and the success rate of music search is further improved.
Preferably, referring to fig. 4, fig. 4 is a third flowchart of a scene music searching method according to a second preferred embodiment of the present invention. The method for searching for scene music according to the preferred embodiment further includes, after step S208:
step S401, receiving a music playing instruction from the client, and playing the displayed music according to the music playing instruction.
In step S402, if the user is not satisfied with the music displayed by the scene music search apparatus, the user may simply not send a music playing instruction. The scene music search apparatus can therefore correct the weight value of the music tag in the displayed music according to the ratio of the number of times the music is played to the number of times it is displayed.
If the ratio of plays to displays for a piece of displayed music is large, the weight value of the music tag in that music is increased; if the ratio is small, the weight value is decreased. Of course, to avoid mistaking a user's trial listen for a genuine playing operation, plays whose duration is shorter than a set value may be excluded from the play count.
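The weight correction described above can be sketched as below. The threshold and the scaling factors are illustrative assumptions; the patent only specifies the direction of the adjustment.

```python
def corrected_weight(weight: float, plays: int, displays: int,
                     short_plays: int = 0, threshold: float = 0.5,
                     up: float = 1.1, down: float = 0.9) -> float:
    """Adjust a music tag's weight from user behaviour.

    Plays shorter than the set minimum duration (counted in
    short_plays) are discarded so trial listens are not mistaken for
    real plays; the remaining play/display ratio then raises or
    lowers the weight.
    """
    effective_plays = max(plays - short_plays, 0)
    ratio = effective_plays / displays if displays else 0.0
    return weight * (up if ratio >= threshold else down)
```

For example, a tag of weight 50 on a track displayed 100 times and genuinely played 80 times would be raised, while the same tag with only 40 genuine plays would be lowered.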
Through counting the playing operation of the user, the weighted value of the music label corresponding to the music in the music library is corrected in real time, so that the accuracy of the music label in the music library is higher, and the success rate of music search is further improved.
The scene music search method of the preferred embodiment deletes the invalid scene music search word on the basis of the first preferred embodiment, improves the search hit rate of the scene music search word by setting the similar search word, shortens the music search time by setting the weight value of the music tag, and improves the success rate of music search.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a scene music search apparatus according to a first preferred embodiment of the present invention. The scene music search apparatus 50 of the present preferred embodiment is implemented using the first preferred embodiment of the scene music search method described above.
The scene music search device 50 includes a sentence splitting module 51, a music tag obtaining module 52, a music search module 53 and a music presentation module 54. The sentence splitting module 51 is configured to obtain a scene music search sentence; and splitting the scene music search sentence to obtain at least one scene music search word. The music tag obtaining module 52 is configured to obtain a music tag corresponding to the scene music search term according to the scene music search term. The music searching module 53 is configured to search for corresponding music in the music library using the music tag. The music presentation module 54 is configured to present the scene music search term and the corresponding music.
When the scene music search apparatus 50 of the preferred embodiment is used, first, the sentence splitting module 51 obtains a scene music search sentence, where the scene music search sentence may be a sentence manually input by a user or a sentence input by voice. If the user inputs the scene music search sentence "I blow at the seaside", the scene music search words obtained after splitting are "I", "at", "seaside", and "blow".
Then, the music tag obtaining module 52 obtains a music tag corresponding to the scene music search word according to the scene music search word obtained by the sentence splitting module 51. Instead of directly using the scene music search word as a music tag, the same or similar music tags are retrieved from the music library using the scene music search word as a keyword. Therefore, whether the music labels of the music library have the scene music search word or not, the music labels strongly related to the scene music search word can be obtained, and the searching effectiveness is ensured.
The music search module 53 then searches the music library for corresponding music using the music tags acquired by the music tag acquisition module 52. The music library here includes music, music labels, and music label correspondence. The music tags in the music library may be generated tags for users of social media, authority tags audited by music publishers or professionals, or supplementary tags generated by a machine learning method. The music library is an online music database updated in real time, and can directly call the online music database of a local music website or the online music databases of other websites.
Finally, the music display module 54 displays the scene music search term obtained by the sentence segmentation module 51 and the corresponding music obtained by the music search module 53 for the user to play.
This completes the scene music search process of the scene music search apparatus 50 of the present preferred embodiment.
The scene music search device of the preferred embodiment analyzes the scene music search sentence input by the user to obtain the corresponding music label, and then obtains the corresponding music; the scene music search words are used as the key words for searching, so that the success rate of searching is improved; meanwhile, music is returned through the music library updated in real time, the updating speed of the searched music content and the comprehensiveness of the searched music content are improved, and therefore the music content displayed to the user is relatively comprehensive.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a scene music search apparatus according to a second preferred embodiment of the present invention. The scene music search device 60 of the present preferred embodiment is implemented using the second preferred embodiment of the scene music search method described above.
The scene music searching device 60 includes a sentence splitting module 61, a music tag obtaining module 62, a music searching module 63, a music displaying module 64, a playing module 65 and a weight value correcting module 66. The sentence splitting module 61 is configured to obtain a scene music search sentence and split the scene music search sentence to obtain at least one scene music search word. The music tag obtaining module 62 is configured to obtain a music tag corresponding to the scene music search word according to the scene music search word. The music searching module 63 is used for searching the music library for corresponding music by using the music tags. The music presentation module 64 is configured to present the scene music search term and corresponding music. The playing module 65 is configured to receive a music playing instruction, and perform a playing operation on the displayed music according to the music playing instruction. The weighted value correcting module 66 is configured to correct the weighted value of the music tag in the displayed music according to the ratio of the number of times the music is played to the number of times it is displayed.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a sentence splitting module of a scene music search apparatus according to a second preferred embodiment of the present invention. The sentence splitting module 61 includes a splitting unit 611 and a search term setting unit 612. The splitting unit 611 is configured to split the scene music search statement into a plurality of basic terms; the search word setting unit 612 is configured to use verbs and nouns in the basic words as scene music search words.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a music tag obtaining module of a second preferred embodiment of a scene music searching device according to the present invention. The music tag acquisition module 62 includes a judgment unit 621, a first music tag acquisition unit 622, and a second music tag acquisition unit 623. The judging unit 621 is configured to judge whether the scene music search word has a corresponding music tag; the first music tag obtaining unit 622 is configured to obtain a music tag if the scene music search word has a corresponding music tag; the second music tag obtaining unit 623 is configured to, if the scene music search word does not have a corresponding music tag, obtain a similar search word of the scene music search word, and obtain a corresponding music tag according to the similar search word.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a music display module of a second preferred embodiment of a scene music search device according to the present invention. The music presentation module 64 includes a first weight value obtaining unit 641, a first ordering and displaying unit 642, a search word modification unit 643, a second weight value obtaining unit 644, and a second ordering and displaying unit 645. The first weight value obtaining unit 641 is configured to obtain, for all searched music, the weight values of the music tags corresponding to the scene music search words. The first ordering and displaying unit 642 is configured to sort and display all searched music according to the weight values of the music tags in each piece of music. The search word modification unit 643 is configured to receive a search word modification instruction and modify the scene music search words accordingly. The second weight value obtaining unit 644 is configured to obtain the weight values of the music tags in all searched music. The second ordering and displaying unit 645 is configured to re-sort and display all searched music according to the weight values of the music tags.
When the scene music search apparatus 60 of the preferred embodiment is used, the splitting unit 611 of the sentence splitting module 61 first obtains a scene music search sentence, which may be typed manually by the user or entered by voice. For example, if the user inputs the scene music search sentence "i blow at sea", the basic words obtained after splitting are "i", "at", "seaside", and "blow".
Since a scene music search word is generally a noun or a verb with substantive meaning, while other parts of speech such as pronouns, prepositions, and conjunctions are meaningless as scene music search words, the search word setting unit 612 of the sentence splitting module 61 then takes the verbs and nouns among the basic words as scene music search words. For example, "seaside" and "blow" among the acquired basic words may be set as scene music search words.
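The splitting-and-filtering step above can be sketched as follows. This is a minimal illustration using English tokens and a tiny hand-written part-of-speech dictionary; a real implementation would use a Chinese word segmenter with POS tagging (e.g. jieba's posseg module), and all names here are illustrative.

```python
# Toy part-of-speech dictionary standing in for a real POS tagger.
POS = {
    "i": "pronoun",
    "at": "preposition",
    "seaside": "noun",
    "blow": "verb",
}

def split_search_sentence(basic_words):
    """Keep only nouns and verbs as scene music search words;
    pronouns, prepositions, and conjunctions are discarded."""
    return [w for w in basic_words if POS.get(w) in ("noun", "verb")]

words = split_search_sentence(["i", "at", "seaside", "blow"])
print(words)  # → ['seaside', 'blow']
```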
Then, the music tag obtaining module 62 obtains a music tag corresponding to the scene music search term according to the scene music search term obtained by the sentence splitting module 61. The determining unit 621 of the music tag obtaining module 62 first determines whether the scene music search word has a corresponding music tag.
If the acquired scene music search word has a corresponding music tag, for example, the acquired scene music search word "seaside" itself can be used as the music tag, the first music tag acquiring unit 622 of the music tag acquiring module 62 directly acquires the music tag.
If the obtained scene music search word does not have a corresponding music tag, for example, if the scene music search word "blow" cannot itself be used as a music tag, the second music tag obtaining unit 623 of the music tag obtaining module 62 obtains a similar search word for it. It must be ensured that the obtained similar search word can itself be used as a music tag; to this end, the similarity between the scene music search word and each music tag can be computed from word vector similarity, and the music tag with the highest similarity is selected as the music tag corresponding to the scene music search word.
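The word-vector matching described above might look like the following sketch. The three-dimensional vectors and candidate tags are purely illustrative; a real system would use embeddings trained on a large corpus (e.g. word2vec).

```python
import math

# Hypothetical word vectors; real systems would load trained embeddings.
VECTORS = {
    "blow":    [0.9, 0.1, 0.3],
    "walking": [0.8, 0.2, 0.4],   # existing music tag
    "party":   [0.1, 0.9, 0.2],   # existing music tag
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar_tag(word, tags):
    """Pick the music tag whose vector is closest to the search word."""
    return max(tags, key=lambda t: cosine(VECTORS[word], VECTORS[t]))

print(most_similar_tag("blow", ["walking", "party"]))  # → walking
```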
Meanwhile, similar search words can also be obtained by analyzing the scene music search word against different databases, for example by looking up an explanation of the word on a term-explanation website. If the Baidu website explains "blow" as something like "going out for a walk to clear one's mind when in a bad mood", the similar search word for "blow" can also be set to "walking" or "strolling", etc.
The music search module 63 then searches the music library for corresponding music using the music tags acquired by the music tag acquisition module 62. The music library includes the music, the music tags, the correspondence between music and music tags, and the weight values of all music tags corresponding to each piece of music. The music tags in the music library may be tags generated by social media users, authoritative tags audited by music publishers or professionals, or supplementary tags generated by machine learning. The music library is an online music database updated in real time; it can be the online music database of the local music website or the online music database of another website, called directly.
It should be noted that, in order to avoid missing related music, if the music tag acquisition module 62 acquires a plurality of music tags, the music search module 63 searches the music library for all music satisfying at least one music tag.
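A minimal in-memory sketch of this "at least one tag" search, with hypothetical track names and tag sets standing in for the music library:

```python
# Toy music library: each record maps a track name to its set of tags.
MUSIC_LIBRARY = {
    "music_a": {"seaside", "walking"},
    "music_b": {"seaside"},
    "music_c": {"party"},
}

def search_by_tags(tags):
    """Return every track carrying at least one of the given tags,
    so that no related music is missed."""
    wanted = set(tags)
    return sorted(name for name, t in MUSIC_LIBRARY.items() if t & wanted)

print(search_by_tags(["seaside", "walking"]))  # → ['music_a', 'music_b']
```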
Since the music is presented to the user, the music the user most likely desires should be presented first. Therefore, the first weight value obtaining unit 641 of the music presentation module 64 obtains, for all music found by the music search module, the weight values of the music tags corresponding to the scene music search words. That is, it obtains the relevance of each searched piece of music to the scene music search words.
If there are multiple scene music search words, the first weight value obtaining unit 641 merges them into a scene music search phrase and obtains, by superposition, the weight value of the phrase in every piece of music found by the music search module 63. That is, the weight value of the scene music search phrase in a given piece of music is the sum of the weight values of each of its scene music search words in that piece of music. Suppose the phrase comprises scene music search words A and B, the weight values of word A in music a, music b, and music c are 20, 30, and 50, and the weight values of word B in music a, music b, and music c are 40, 10, and 0, respectively. The weight values of the phrase in music a, music b, and music c are then 60, 40, and 50.
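The superposition in the worked example above can be reproduced directly; the `WEIGHTS` table below simply restates the numbers from the text, and the names are illustrative.

```python
# Weight of each scene music search word in each track, as in the example.
WEIGHTS = {
    "A": {"music_a": 20, "music_b": 30, "music_c": 50},
    "B": {"music_a": 40, "music_b": 10, "music_c": 0},
}

def phrase_weights(search_words, tracks):
    """Weight of the phrase in a track = sum of each word's weight there."""
    return {t: sum(WEIGHTS[w].get(t, 0) for w in search_words) for t in tracks}

totals = phrase_weights(["A", "B"], ["music_a", "music_b", "music_c"])
print(totals)  # → {'music_a': 60, 'music_b': 40, 'music_c': 50}
```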
Finally, the first ordering and displaying unit 642 of the music presentation module 64 displays the scene music search words and, at the same time, sorts and displays all searched music according to the weight values obtained by the first weight value obtaining unit 641, so that the user can play the music. The current popularity of the music can also be used to correct the ranking; for example, several pieces of music whose weight values for a given scene music search word differ by less than a set value may be ordered by current popularity. Since the music is sorted by both its relevance to the scene music search words and its popularity, the user can easily find the music he or she wants to hear.
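One possible sketch of this popularity-corrected ranking, under stated assumptions: the tie threshold is hypothetical, and "weights that differ by less than a set value" is approximated by bucketing the weights, which is only one of several reasonable implementations.

```python
TIE_THRESHOLD = 5  # illustrative "set value" for near-equal weights

def rank_music(tracks):
    """tracks: list of (name, weight, popularity) tuples.
    Primary order is by weight; tracks whose weights fall in the same
    bucket (an approximation of 'differ by less than TIE_THRESHOLD')
    are ordered by current popularity instead."""
    return sorted(tracks, key=lambda t: (t[1] // TIE_THRESHOLD, t[2]),
                  reverse=True)

tracks = [("music_a", 60, 100), ("music_c", 50, 900), ("music_d", 52, 10)]
# music_c and music_d have near-equal weights, so popularity decides.
print([name for name, _, _ in rank_music(tracks)])  # → ['music_a', 'music_c', 'music_d']
```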
This completes the scene music search process of the scene music search apparatus 60 of the present preferred embodiment.
Preferably, the scene music search device 60 of the preferred embodiment may also correct the displayed scene music search term in real time. The method specifically comprises the following steps:
first, the search term modification unit 643 of the music presentation module 64 receives a search term modification instruction, and modifies the scene music search term according to the search term modification instruction.
Different users may understand a given scene differently, so if a user considers the acquired scene music search words inaccurate, a search word modification instruction can be sent to the scene music search apparatus 60 through the client. Suppose the sentence splitting module 61 obtains the scene music search words "seaside" and "blow", and the music display module 64 displays music accordingly, but the user feels the desired scene music does not need "blow"; the user can then delete the scene music search word "blow" by sending a search word modification instruction to the search word modification unit 643. The search word modification unit 643 receives the instruction and modifies the scene music search words accordingly, here by deleting "blow".
Then, the second weight value obtaining unit 644 of the music presentation module 64 obtains the music tags corresponding to the modified scene music search words, and obtains their weight values in all music searched by the music search module 63.
Finally, the second ordering and displaying unit 645 of the music presentation module 64 re-sorts and displays all searched music according to the weight values of the music tags obtained by the second weight value obtaining unit 644, so that the user can play the music.
This enhances the flexibility of music search by the scene music search device 60 of the present preferred embodiment, further improving the success rate of music search.
Preferably, the scene music search device 60 of the preferred embodiment may also correct the weight value of the music tag in the music library in real time. The method specifically comprises the following steps:
the playing module 65 receives a music playing instruction from the client, and plays the displayed music according to the music playing instruction.
If the user is not satisfied with the music displayed by the scene music search device 60, the user may simply not send a music playing instruction, so the weight value correction module 66 can correct the weight value of the music tag in a displayed piece of music according to the ratio of the number of times that music is played to the number of times it is displayed.
If the ratio of play count to display count for a displayed piece of music is large, the weight value correction module 66 increases the weight value of the music tag in that music; if the ratio is small, the weight value correction module 66 decreases it. Of course, to avoid mistaking a brief listening trial for a genuine play, plays whose duration is shorter than a set value may be excluded from the play count.
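A sketch of this weight-correction rule, assuming illustrative thresholds: plays shorter than `MIN_PLAY_SECONDS` are treated as previews and excluded, and the play/display ratio nudges the tag weight up or down by a fixed step. All constants and names are assumptions, not specified in the source.

```python
MIN_PLAY_SECONDS = 30          # plays shorter than this count as previews
HIGH_RATIO, LOW_RATIO, STEP = 0.5, 0.1, 5

def corrected_weight(weight, play_durations, display_count):
    """Raise the tag weight when the displayed music is played often,
    lower it when it is rarely played, ignoring short preview plays."""
    real_plays = sum(1 for d in play_durations if d >= MIN_PLAY_SECONDS)
    ratio = real_plays / display_count if display_count else 0.0
    if ratio >= HIGH_RATIO:
        return weight + STEP
    if ratio <= LOW_RATIO:
        return max(0, weight - STEP)
    return weight

# Displayed 10 times; 6 plays, one of which was only a 10-second preview.
print(corrected_weight(40, [120, 95, 10, 60, 200, 80], 10))  # → 45
```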
Through counting the playing operation of the user, the weighted value of the music label corresponding to the music in the music library is corrected in real time, so that the accuracy of the music label in the music library is higher, and the success rate of music search is further improved.
The scene music search device of the preferred embodiment deletes invalid scene music search words on the basis of the first preferred embodiment, improves the search hit rate of scene music search words by setting similar search words, shortens the music search time by setting weight values for the music tags, and improves the success rate of music search.
The following describes a specific working principle of the scene music searching method and the scene music searching apparatus according to an embodiment of the present invention. Referring to fig. 10A and 10B, fig. 10A to 10B are schematic diagrams illustrating a scene music searching method and a scene music searching apparatus according to an embodiment of the present invention. The scene music search device is arranged on a scene music search server. The specific embodiment comprises the following steps:
First, the user submits the scene music search sentence "I blow at sea" to the scene music search server.
Second, the scene music searching device obtains the effective scene music search words "seaside" and "blowing", and obtains the music tags "seaside" and "walking" from them; that is, music suitable for listening to at the seaside or while walking.
Third, the scene music searching device acquires the corresponding music in the music library according to the music tags "seaside" and "walking", together with the weight values of those tags in all the searched music; that is, for each piece of music, the weight value of the tag "seaside" and the weight value of the tag "walking" are superposed.
Fourth, the scene music searching device sorts and displays all searched music according to the weight values of the music tags "seaside" and "walking" in each piece of music, as shown in fig. 10A. The user can then click any piece of music to play it.
Fifth, if the user considers the acquired scene music search words inaccurate, for example does not want the scene music search word "blowing", the user can send a search word modification instruction to the scene music search server by clicking the "blowing" label to put it in an unselected state. The scene music search apparatus then counts only the weight values of the music tag "seaside" in all music.
Sixth, the scene music searching device re-sorts and displays all searched music according to the weight value of the music tag "seaside" in each piece of music, as shown in fig. 10B. The user can then click any piece of music to play it.
This completes the scene music search and the process of playing the scene music by the scene music search apparatus of the embodiment.
The scene music searching method and the scene music searching device provided by the invention analyze the scene music search sentence input by the user to obtain the corresponding music tags and, from them, the corresponding music. The music content displayed to the user is relatively comprehensive and can be updated in time, which solves the technical problems of existing scene music searching methods and devices: limited music content that is updated slowly.
As used herein, the terms "component," "module," "system," "interface," "process," and the like are generally intended to refer to a computer-related entity: hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Fig. 11 and the following discussion provide a brief, general description of an operating environment of an electronic device in which a scene music search apparatus of the present invention may be implemented. The operating environment of FIG. 11 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example electronic devices 1112 include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more electronic devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
Fig. 11 illustrates an example of an electronic device 1112 that includes one or more embodiments of the scene music search apparatus of the invention. In one configuration, electronic device 1112 includes at least one processing unit 1116 and memory 1118. Depending on the exact configuration and type of electronic device, memory 1118 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. This configuration is illustrated in fig. 11 by dashed line 1114.
In other embodiments, electronic device 1112 may include additional features and/or functionality. For example, device 1112 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in FIG. 11 by storage 1120. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 1120. Storage 1120 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 1118 for execution by processing unit 1116, for example.
The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 1118 and storage 1120 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by electronic device 1112. Any such computer storage media may be part of electronic device 1112.
Electronic device 1112 may also include communication connection(s) 1126 that allow electronic device 1112 to communicate with other devices. Communication connection(s) 1126 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting electronic device 1112 to other electronic devices. Communication connection 1126 may include a wired connection or a wireless connection. Communication connection 1126 may transmit and/or receive communication media.
The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" may include signals that: one or more of the signal characteristics may be set or changed in such a manner as to encode information in the signal.
Electronic device 1112 may include input device(s) 1124 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device. Output device(s) 1122 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 1112. The input device 1124 and the output device 1122 may be connected to the electronic device 1112 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another electronic device may be used as input device 1124 or output device 1122 for electronic device 1112.
Components of electronic device 1112 may be connected by various interconnects, such as a bus. Such interconnects may include Peripheral Component Interconnect (PCI), such as PCI express, Universal Serial Bus (USB), firewire (IEEE1394), optical bus structures, and the like. In another embodiment, components of electronic device 1112 may be interconnected by a network. For example, memory 1118 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, an electronic device 1130 accessible via a network 1128 may store computer readable instructions to implement one or more embodiments provided by the present invention. Electronic device 1112 may access electronic device 1130 and download a part or all of the computer readable instructions for execution. Alternatively, electronic device 1112 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at electronic device 1112 and some at electronic device 1130.
Various operations of embodiments are provided herein. In one embodiment, the one or more operations may constitute computer readable instructions stored on one or more computer readable media, which when executed by an electronic device, will cause the computing device to perform the operations. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in each embodiment provided herein.
Also, as used herein, the word "preferred" is intended to serve as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Each apparatus or system described above may perform the method in the corresponding method embodiment.
In summary, although the present invention has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, therefore, the scope of the present invention shall be determined by the appended claims.

Claims (11)

1. A method for searching scene music is characterized by comprising the following steps:
acquiring a scene music search statement; splitting the scene music search sentence to obtain at least one scene music search word;
acquiring a music label corresponding to the scene music search word according to the scene music search word;
searching corresponding music in a music library by using the music label; the music library comprises the music, the music labels and the corresponding relation between the music and the music labels; and
displaying the scene music search word and the corresponding music;
the step of obtaining the music label corresponding to the scene music search word according to the scene music search word comprises the following steps:
judging whether the scene music search word has a corresponding music label;
if the scene music search word has a corresponding music label, acquiring the music label; and
if the scene music search word does not have the corresponding music label, acquiring a similar search word of the scene music search word, and acquiring a corresponding music label according to the similar search word; wherein the similar search word is a noun or a verb having meaning in the explanation of the scene music search word.
2. The method of claim 1, wherein the step of splitting the scene music search statement to obtain at least one scene music search term comprises:
splitting the scene music search sentence into a plurality of basic words; and
and taking verbs and nouns in the basic words as the scene music search words.
3. The method for searching music in scenes according to claim 1, wherein the music library further comprises weight values of all music labels corresponding to the music;
the step of presenting the scene music search term and the corresponding music comprises:
acquiring weight values of the music labels in all searched music; and
and sequencing and displaying all the searched music according to the weight values of the music labels.
4. The method of claim 3, wherein the step of presenting the scene music search term and the corresponding music further comprises:
receiving a search word modification instruction, and modifying the scene music search word according to the search word modification instruction;
acquiring the modified music labels corresponding to the scene music search terms and the weight values of all searched music; and
and reordering all the searched music according to the weight value of the music label in the music for display.
5. The method of claim 3, further comprising:
receiving a music playing instruction, and playing the displayed music according to the music playing instruction; and
and correcting the weight value of the music label in the display music according to the ratio of the music playing frequency of the display music to the music display frequency.
6. A scene music search device, comprising:
the sentence splitting module is used for obtaining scene music searching sentences; splitting the scene music search sentence to obtain at least one scene music search word;
the music label acquisition module is used for acquiring a music label corresponding to the scene music search word according to the scene music search word;
the music searching module is used for searching corresponding music in a music library by using the music labels; the music library comprises the music, the music labels and the corresponding relation between the music and the music labels; and
the music display module is used for displaying the scene music search words and the corresponding music;
wherein the music tag obtaining module comprises:
the judging unit is used for judging whether the scene music searching words have corresponding music labels or not;
a first music tag obtaining unit, configured to obtain the music tag if the scene music search word has a corresponding music tag; and
a second music tag obtaining unit, configured to, if the scene music search word does not have a corresponding music tag, obtain a similar search word of the scene music search word, and obtain a corresponding music tag according to the similar search word; wherein the similar search word is a noun or a verb having meaning in the explanation of the scene music search word.
7. The apparatus for searching for scene music according to claim 6, wherein the sentence splitting module comprises:
the splitting unit is used for splitting the scene music search statement into a plurality of basic words; and
and the search word setting unit is used for taking verbs and nouns in the basic words as the scene music search words.
8. The apparatus according to claim 6, wherein the music library further includes weight values of music labels corresponding to music;
the music display module comprises:
a first weight value acquiring unit configured to acquire weight values of the music tags in all the searched music; and
and the first sorting display unit is used for sorting and displaying all the searched music according to the weight values of the music labels.
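The weighted ranking in claim 8 can be sketched as filtering the library by the matched tag and ordering by that tag's weight. The library rows and function name below are illustrative assumptions:

```python
# Illustrative music library: each track carries per-tag weight values.
library = [
    {"title": "Song A", "tags": {"calm": 0.9, "piano": 0.4}},
    {"title": "Song B", "tags": {"calm": 0.5}},
    {"title": "Song C", "tags": {"calm": 0.7}},
]

def rank_by_tag(tracks, tag):
    # First weight value acquiring unit: collect tracks carrying the tag.
    hits = [t for t in tracks if tag in t["tags"]]
    # First sorting display unit: order by the tag's weight, highest first.
    return sorted(hits, key=lambda t: t["tags"][tag], reverse=True)

print([t["title"] for t in rank_by_tag(library, "calm")])
# ['Song A', 'Song C', 'Song B']
```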
9. The apparatus for searching for scene music according to claim 8, wherein the music display module further comprises:
the search word modification unit is used for receiving a search word modification instruction and modifying the scene music search word according to the search word modification instruction;
the second weight value acquiring unit is used for acquiring the music labels corresponding to the modified scene music search words and the weight values of the music labels in all searched music; and
and the second sorting display unit is used for sorting and displaying all searched music according to the weight values of the music labels in the music.
10. The apparatus for searching for scene music according to claim 8, further comprising:
the playing module is used for receiving a music playing instruction and playing the displayed music according to the music playing instruction; and
and the weight value correction module is used for correcting the weight value of the music label in the displayed music according to the ratio of the number of times the displayed music is played to the number of times it is displayed.
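The feedback loop in claim 10 nudges a displayed track's tag weight according to its play-through rate. The blending rule below (an exponential moving average with factor `alpha`) is an assumption for illustration; the patent only specifies that the correction depends on the plays-to-displays ratio:

```python
def correct_weight(weight, play_count, display_count, alpha=0.1):
    # No displays yet: nothing to learn from, keep the weight unchanged.
    if display_count == 0:
        return weight
    ratio = play_count / display_count  # plays per display
    # Assumed correction rule: blend the old weight toward the ratio.
    return (1 - alpha) * weight + alpha * ratio

w = correct_weight(0.5, 8, 10)  # track played on 8 of 10 displays
print(round(w, 3))
```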
11. A storage medium having stored therein processor-executable instructions, the instructions being loaded by one or more processors to perform the scene music search method of any one of claims 1 to 5.
CN201510884497.2A 2015-12-03 2015-12-03 Scene music searching method and scene music searching device Active CN105550217B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510884497.2A CN105550217B (en) 2015-12-03 2015-12-03 Scene music searching method and scene music searching device
PCT/CN2016/100405 WO2017092493A1 (en) 2015-12-03 2016-09-27 Ambiance music searching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510884497.2A CN105550217B (en) 2015-12-03 2015-12-03 Scene music searching method and scene music searching device

Publications (2)

Publication Number Publication Date
CN105550217A CN105550217A (en) 2016-05-04
CN105550217B true CN105550217B (en) 2021-05-07

Family

ID=55829406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510884497.2A Active CN105550217B (en) 2015-12-03 2015-12-03 Scene music searching method and scene music searching device

Country Status (2)

Country Link
CN (1) CN105550217B (en)
WO (1) WO2017092493A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550217B (en) * 2015-12-03 2021-05-07 腾讯科技(深圳)有限公司 Scene music searching method and scene music searching device
CN108153898A (en) * 2018-01-10 2018-06-12 上海展扬通信技术有限公司 Audio frequency playing method, terminal and computer readable storage medium
CN109587554B (en) 2018-10-29 2021-08-03 百度在线网络技术(北京)有限公司 Video data processing method and device and readable storage medium
CN109299314B (en) * 2018-11-13 2019-12-27 百度在线网络技术(北京)有限公司 Music retrieval and recommendation method, device, storage medium and terminal equipment
CN110209870B (en) * 2019-05-10 2021-11-09 杭州网易云音乐科技有限公司 Music log generation method, device, medium and computing equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996195A (en) * 2009-08-28 2011-03-30 中国移动通信集团公司 Searching method and device of voice information in audio files and equipment
CN103150356A (en) * 2013-02-22 2013-06-12 百度在线网络技术(北京)有限公司 Broad application requirement retrieval method and system
CN103279513A (en) * 2013-05-22 2013-09-04 百度在线网络技术(北京)有限公司 Method for generating content label and method and device for providing multi-media content information
CN104281705A (en) * 2014-10-23 2015-01-14 百度在线网络技术(北京)有限公司 Searching method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073854B2 (en) * 2007-04-10 2011-12-06 The Echo Nest Corporation Determining the similarity of music using cultural and acoustic information
CN100555287C (en) * 2007-09-06 2009-10-28 腾讯科技(深圳)有限公司 internet music file sequencing method, system and searching method and search engine
CN102508920B (en) * 2011-11-18 2013-03-13 广州索答信息科技有限公司 Information retrieval method based on Boosting sorting algorithm
CN103425687A (en) * 2012-05-21 2013-12-04 阿里巴巴集团控股有限公司 Retrieval method and system based on queries
US9235853B2 (en) * 2012-09-11 2016-01-12 Google Inc. Method for recommending musical entities to a user
CN103886099B (en) * 2014-04-09 2017-02-15 中国人民大学 Semantic retrieval system and method of vague concepts
CN104951485A (en) * 2014-09-02 2015-09-30 腾讯科技(深圳)有限公司 Music file data processing method and music file data processing device
CN104933028A (en) * 2015-06-23 2015-09-23 百度在线网络技术(北京)有限公司 Information pushing method and information pushing device
CN104991943A (en) * 2015-07-10 2015-10-21 百度在线网络技术(北京)有限公司 Music searching method and apparatus
CN105550217B (en) * 2015-12-03 2021-05-07 腾讯科技(深圳)有限公司 Scene music searching method and scene music searching device

Also Published As

Publication number Publication date
WO2017092493A1 (en) 2017-06-08
CN105550217A (en) 2016-05-04

Similar Documents

Publication Publication Date Title
US10789309B1 (en) Associating an entity with a search query
US10162886B2 (en) Embedding-based parsing of search queries on online social networks
US10175860B2 (en) Search intent preview, disambiguation, and refinement
US10185763B2 (en) Syntactic models for parsing search queries on online social networks
US9418128B2 (en) Linking documents with entities, actions and applications
CN108319627B (en) Keyword extraction method and keyword extraction device
US7769771B2 (en) Searching a document using relevance feedback
CN105550217B (en) Scene music searching method and scene music searching device
US11580181B1 (en) Query modification based on non-textual resource context
US8661035B2 (en) Content management system and method
US20180081880A1 (en) Method And Apparatus For Ranking Electronic Information By Similarity Association
US20180004838A1 (en) System and method for language sensitive contextual searching
US10152478B2 (en) Apparatus, system and method for string disambiguation and entity ranking
JP2006527870A (en) Configurable information identification system and method
CN109241319B (en) Picture retrieval method, device, server and storage medium
CN110147494B (en) Information searching method and device, storage medium and electronic equipment
US20200159765A1 (en) Performing image search using content labels
CN114564666A (en) Encyclopedic information display method, encyclopedic information display device, encyclopedic information display equipment and encyclopedic information display medium
JP5302614B2 (en) Facility related information search database formation method and facility related information search system
CN111104536A (en) Picture searching method, device, terminal and storage medium
CN111460177A (en) Method and device for searching film and television expression, storage medium and computer equipment
CN112883218A (en) Image-text combined representation searching method, system, server and storage medium
KR20210006098A (en) Method and system for determining document consistence to improve document search quality
CN104021201A (en) Data conversion method and device
CN114020867A (en) Method, device, equipment and medium for expanding search terms

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant