WO2020052324A1 - Content pushing method for display device, pushing device, and display device - Google Patents
Content pushing method for display device, pushing device, and display device
- Publication number
- WO2020052324A1 (PCT/CN2019/094255)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- music
- content
- sample
- display
- played
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/54—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for retrieval
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/02—Constructional features of telephone sets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
Definitions
- the present disclosure relates to, but is not limited to, the field of display technology, and in particular, to a content pushing method, a pushing device, and a display device for a display device.
- because the system that plays music and the system that plays video may be two independent systems, the music and the video content often appear unrelated.
- for example, the auditory service provides music,
- while the visual service pushes advertisements, which brings a poor experience to users.
- the present disclosure aims to solve at least one of the technical problems existing in the prior art, and proposes a content pushing method, a pushing device, and a display device for a display device.
- an embodiment of the present disclosure provides a content pushing method for a display device, including: detecting music played in an environment; acquiring at least one keyword of the played music; acquiring the content to be displayed associated with the keywords of the played music; and pushing the content to be displayed to the display device for the display device to display it.
- the step of acquiring at least one keyword of the played music includes: acquiring music information of the music played in an environment; matching the music information with each sample music fragment in a database to determine the sample music fragment with the highest matching degree; and obtaining, from the database, the keyword corresponding to the sample music fragment with the highest matching degree as a keyword of the played music, wherein the database records multiple sample music fragments and keywords corresponding to the multiple sample music fragments.
- the music information includes: a feature vector of the played music; and the step of obtaining the music information of the music played in the environment includes: performing feature extraction on the played music to obtain the feature vector of the played music. The step of matching the music information with each sample music fragment in the database to determine the sample music fragment with the highest matching degree includes: calculating the similarity between the feature vector of the played music and the feature vectors of the sample music fragments in the database; and determining the sample music fragment corresponding to the feature vector with the greatest similarity to the feature vector of the played music as the sample music fragment with the highest matching degree.
- the music information includes: a music fragment corresponding to the played music; and the step of obtaining the music information of the music played in the environment includes: inputting the played music into a pre-designed music segment recognition model for recognition, so as to determine the music segment corresponding to the played music. The step of matching the music information to the sample music segments in the database to determine the sample music segment with the highest matching degree includes: calculating the similarity between the music fragment corresponding to the played music and each sample music fragment in the database; and determining the sample music fragment with the highest similarity to the music fragment corresponding to the played music as the sample music fragment with the highest matching degree.
- the step of calculating the similarity between the music segment corresponding to the played music and each sample music segment in the database includes: calculating the similarity between the music name of the music segment corresponding to the played music and the music name of each sample music segment in the database.
- the method further includes: adding the played music, according to the recognition result, to the training set corresponding to the music segment recognition model, and training and updating the music segment recognition model.
- the step of obtaining at least one keyword of the played music includes: inputting the played music into a pre-designed keyword recognition model for identification, so as to determine the keywords corresponding to the played music.
- the step of acquiring the content to be displayed associated with the keywords of the played music includes: searching a preset content repository or the Internet, according to the keywords of the played music, for selectable display content associated with those keywords, wherein the searched selectable display content serves as candidate display content, and the content repository stores in advance multiple display contents and keywords corresponding to the multiple display contents; and selecting at least one candidate display content from all searched candidate display content as the content to be displayed.
- the step of selecting at least one candidate display content from all searched candidate display content as the content to be displayed includes: obtaining, from the content repository or the Internet, the keywords corresponding to all candidate display contents; using a preset keyword similarity algorithm to calculate the similarity between each candidate display content and the keywords of the played music; screening out the candidate display content corresponding to similarities greater than a preset similarity threshold; and selecting at least one candidate display content from the screened candidate display content as the content to be displayed.
- the method further includes: determining a content characteristic of the content to be displayed; determining, according to the content characteristic, a display mode corresponding to the content to be displayed; and controlling the display device to display the content to be displayed in the determined display mode.
- An embodiment of the present disclosure further provides a content pushing device for a display device, including: a music detection component configured to detect music played in an environment; a first acquisition component configured to acquire at least one keyword of the played music; a second acquisition component configured to acquire the content to be displayed associated with the keywords of the played music; and a push component configured to push the content to be displayed to the display device for the display device to display it.
- the first acquisition component includes: a music information acquisition unit configured to acquire music information of the music played in an environment; a matching unit configured to match the music information with each sample music fragment in a database to determine the sample music fragment with the highest matching degree; and a keyword acquisition unit configured to obtain, from the database, the keywords corresponding to the sample music fragment with the highest matching degree as the keywords of the played music; the database records a plurality of sample music fragments and keywords corresponding to the plurality of sample music fragments.
- the music information includes: a feature vector of the playback music;
- the music information acquisition unit includes: a feature extraction subunit configured to perform feature extraction on the played music to obtain the feature vector of the played music;
- the matching unit includes: a first calculation subunit configured to calculate the similarity between the feature vector of the played music and the feature vector of each sample music segment in the database; and a first determination subunit configured to determine the sample music segment corresponding to the feature vector with the greatest similarity to the feature vector of the played music as the sample music segment with the highest matching degree.
- the music information includes: a music segment corresponding to the played music;
- the music information acquisition unit includes: a segment identification subunit configured to use a pre-designed music segment recognition model to identify the inputted played music, so as to determine the music segment corresponding to the played music;
- the matching unit includes: a second calculation subunit configured to calculate the similarity between the music segment corresponding to the played music and each sample music fragment in the database; and a second determination subunit configured to determine the sample music fragment with the highest similarity to the music segment corresponding to the played music as the sample music fragment with the highest matching degree.
- the second calculation subunit is configured to calculate a similarity between a music name of the music segment corresponding to the played music and a music name of each sample music segment in a database.
- the music information acquisition unit further includes a training subunit configured to, after the segment identification subunit finishes identifying the played music, add the played music to the training set corresponding to the music segment recognition model according to the recognition result, and train and update the model.
- the first obtaining component includes: a keyword recognition unit configured to recognize the inputted played music according to a pre-designed keyword recognition model, so as to determine the keywords corresponding to the played music.
- the second obtaining component includes a search unit configured to search a preset content repository or the Internet for selectable display content associated with the keywords of the played music,
- wherein the searched selectable display content serves as candidate display content, and a plurality of display contents and keywords corresponding to the plurality of display contents are stored in the content repository in advance;
- and a selection unit configured to select at least one candidate display content, from all the candidate display content found by the search unit, as the content to be displayed.
- the selection unit includes: a search subunit configured to obtain the keywords corresponding to all candidate display contents from the content repository or the Internet; a third calculation subunit configured to use a preset keyword similarity algorithm to separately calculate the similarity between the keywords of each candidate display content and those of the played music; a screening subunit configured to screen out the candidate display content corresponding to similarities greater than a preset similarity threshold; and a selection subunit configured to select at least one candidate display content, from the candidate display content screened by the screening subunit, as the content to be displayed.
- the content pushing device further includes: a feature determination component configured to determine a content characteristic of the content to be displayed; a mode determination component configured to determine, according to the content characteristic, a display mode corresponding to the content to be displayed; and a display control component configured to control the display device to display the content to be displayed in the determined display mode.
- An embodiment of the present disclosure further provides a display device, including: a display screen; at least one processor; and a storage medium storing a program which, when run, controls the at least one processor to execute the content pushing method described above.
- FIG. 1 is a flowchart of a content push method according to an embodiment of the present disclosure
- FIG. 2 is a flowchart of a content pushing method according to an embodiment of the present disclosure
- FIG. 3 is a schematic structural diagram of a content pushing device according to an embodiment of the present disclosure.
- FIG. 4a is a schematic diagram of a structure of the first obtaining component in FIG. 3;
- FIG. 4b is a schematic diagram of another structure of the first obtaining component in FIG. 3;
- FIG. 5 is a schematic diagram of a structure of a selection unit in the present disclosure.
- music in the present disclosure refers to a melody that can be played using a player.
- the embodiment of the present disclosure does not limit the playback form of the music.
- FIG. 1 is a flowchart of a content pushing method according to an embodiment of the present disclosure.
- the content push method is used to push content to a display device. As shown in FIG. 1, the content push method includes:
- Step S1 detecting music played in the environment.
- the music detection component may be used to start detecting the currently playing music once every preset time (for example, 5s, which can be set as required).
- the music detection component includes a sound sensor (such as a microphone) and music extraction software; the sound sensor senses sound information in the environment, and the music extraction software processes the sound information generated by the sound sensor to obtain the data of the currently playing music.
- the data of the currently playing music may specifically include the melody and lyrics of the currently playing music.
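The periodic detection described in step S1 can be sketched as follows. This is a minimal illustration only: `capture_audio` and `extract_music` are hypothetical stand-ins for the sound sensor and the music extraction software, whose interfaces the disclosure does not specify.

```python
import time

def detect_music_periodically(capture_audio, extract_music,
                              interval_s=5.0, max_polls=None):
    """Poll the environment for playing music once every `interval_s` seconds.

    `capture_audio` mimics the sound sensor; `extract_music` mimics the music
    extraction software and returns the data of the currently playing music
    (e.g., melody and lyrics) or None if no music is detected.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        raw = capture_audio()         # sound sensor senses environment audio
        music = extract_music(raw)    # extraction software isolates the music
        if music is not None:
            yield music               # data of the currently playing music
        polls += 1
        time.sleep(interval_s)
```

For example, with stub callables and a zero interval, the generator yields one detection per poll.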
- Step S2 Acquire at least one keyword of the played music.
- step S2 keyword extraction may be performed on the currently played music acquired in step S1 to obtain at least one keyword corresponding to the currently played music.
- step S2 includes:
- Step S201 Acquire music information for playing music in an environment
- Step S202 Match the music information with the sample music fragments in the database to determine the sample music fragment with the highest matching degree
- Step S203 Obtain a keyword corresponding to the sample music segment with the highest matching degree from the database, and use the keyword as the keyword for playing the music.
- a plurality of sample music fragments and keywords corresponding to each sample music fragment are recorded in the database. It should be noted that the number of keywords corresponding to the sample music fragments may be the same or different, and the number of keywords corresponding to the sample music fragments may specifically be one, two or more, which is not limited in this disclosure.
- sample music clips can be obtained from the Internet on a regular or real-time basis and keyword extraction can be performed to update the database.
- there are multiple types of extracted keywords, such as: music name, music type, music scene, music content, music mood, and so on.
- the types of music can include: pop music, bel canto, country music, jazz, Latin music, rock music, popular music, classical music, folk music, etc.;
- music scenes can include: history-chanting songs, lyric songs, love songs, nursery songs, military songs, animation songs, etc.;
- music content may include: people, flora and fauna, scenery, cars, sky, etc.;
- music moods may include: passion, cheerfulness, relaxation, anger, depression, tension, thriller, etc.
- for example, the music clip is "I'm a little bird, I want to fly and fly but I can't fly high".
- the corresponding keywords can be extracted as: I'm a Little Bird (music name), pop music (music type), lyric song (music scene), little bird (music content), flying (music content), depression (music mood), etc.
- for a clip such as "The rolling Yangtze flows east", the corresponding keywords extracted may be: The Rolling Yangtze Flows East (music name), bel canto (music type), history-chanting song (music scene), Yangtze River (music content), waves (music content), hero (music content), passion (music mood), etc.
- a rapid automatic keyword extraction (RAKE) algorithm can be used when performing keyword extraction on a music segment in a database.
- a term frequency-inverse document frequency (TF-IDF) algorithm
- a random walk algorithm, etc.
- other keyword extraction algorithms can also be used in this disclosure, which will not be illustrated one by one here.
- the technical solution of the present disclosure does not limit the algorithm used when extracting keywords from the music fragments in the database.
- the keywords corresponding to the music fragments in the database can also be manually configured according to actual needs.
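As an illustration of one of the algorithms named above, a minimal TF-IDF keyword scorer over tokenized lyric fragments might look as follows. This is a sketch only: the tokenization and corpus layout are assumptions, not part of the disclosure.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Rank candidate keywords for one fragment by TF-IDF.

    `docs` is a list of tokenized fragments (lists of words); the
    highest-scoring words in docs[doc_index] are returned as keywords.
    """
    n_docs = len(docs)
    # document frequency: number of fragments containing each word
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    tf = Counter(docs[doc_index])
    total = len(docs[doc_index])
    scores = {w: (tf[w] / total) * math.log(n_docs / df[w]) for w in tf}
    # highest score first; ties broken alphabetically for determinism
    ranked = sorted(scores, key=lambda w: (-scores[w], w))
    return ranked[:top_k]
```

Words frequent in one fragment but rare across the corpus score highest, which is the usual rationale for TF-IDF keyword extraction.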
- the music information may include: a feature vector of the played music.
- step S201 specifically includes:
- Step S2011a Perform feature extraction on the playing music to obtain a feature vector of the playing music.
- a preset music feature extraction algorithm (such as secondary feature extraction, a wavelet transform method, or a spectral analysis method) may be used to perform feature extraction on the currently playing music.
- the extracted features may include: audio time-domain features (such as short-term energy, short-term average zero-crossing rate, etc.), frequency-domain features, cepstrum features (such as linear prediction cepstrum coefficients, Mel frequency cepstrum coefficients, etc.)
- the extracted features constitute a feature vector of music.
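The time-domain features mentioned above (short-term energy and short-term average zero-crossing rate) can be sketched in a few lines. The frame length and framing scheme here are illustrative assumptions; a real feature vector would also include the frequency-domain and cepstrum features (e.g., MFCCs) named above.

```python
def short_time_features(signal, frame_len=4):
    """Per-frame short-term energy and short-term average zero-crossing rate
    for a sampled audio signal (a list of numbers)."""
    features = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(x * x for x in frame)               # short-term energy
        zcr = sum(1 for a, b in zip(frame, frame[1:])    # sign changes
                  if (a >= 0) != (b >= 0)) / (frame_len - 1)
        features.append((energy, zcr))
    return features
```

An alternating-sign frame yields the maximum zero-crossing rate of 1.0, while a constant frame yields 0.0.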
- step S202 specifically includes:
- Step S2021a Calculate the similarity between the feature vector of the played music and the feature vector of the sample music segment in the database.
- in step S2021a, for example, the cosine similarity of the vector space model, or a combination of the cosine similarity of the vector space model and the Euclidean distance, may be used to calculate the similarity between the feature vectors.
- any existing vector similarity algorithm can be used to calculate the similarity between the feature vector of the currently playing music and the feature vector of each sample music fragment.
- the technical solution of the present disclosure does not limit the vector similarity algorithm used in step S2021a.
- Step S2022a Determine the sample music segment corresponding to the feature vector with the greatest similarity to the feature vector of the played music as the sample music segment with the highest matching degree.
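Steps S2021a and S2022a can be sketched with cosine similarity, one of the measures mentioned above:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u)) *
            math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0

def best_matching_sample(play_vec, sample_vecs):
    """Return the index of the sample music fragment whose feature vector
    is most similar to the played music's feature vector."""
    return max(range(len(sample_vecs)),
               key=lambda i: cosine_similarity(play_vec, sample_vecs[i]))
```

The fragment at the returned index is the "sample music segment with the highest matching degree", whose stored keywords are then used in step S203.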
- the music information may include: a music fragment corresponding to the music.
- step S201 specifically includes:
- Step S2011b Input the played music into a pre-designed music segment recognition model for identification, so as to determine the music segment corresponding to the played music.
- the music segment recognition model is based on a plurality of preset training sets (each training set corresponds to a class, and each sample corresponding to the same music segment is located in the same training set) and is trained using a preset classification recognition algorithm.
- some complete music (preferably some officially released music performances) and the music names corresponding to each complete music may be collected in advance from the Internet, and then the complete music may be segmented to obtain several real music fragments;
- Each real music segment is regarded as a class.
- a large amount of music data in which the real music segment has been performed is collected from the Internet as sample data of the class (real music segment), so as to obtain the training set corresponding to the class.
- in step S2011b, the played music is input into the music segment recognition model, and the model can recognize the input and output the real music segment corresponding to the played music. It should be noted that identifying the currently playing music and outputting the corresponding real music segment in step S2011b facilitates more accurate subsequent matching of the corresponding sample music segment from the database.
- the music segment recognition model may be a shallow recognition model based on an algorithm such as a multilayer perceptron, a support vector machine, boosting, or maximum entropy.
- the music segment recognition model may also be a deep recognition model based on Deep Neural Networks (DNN).
- DNN Deep Neural Networks
- the biggest feature of a deep neural network compared to a shallow recognition model is the way in which features are selected.
- in a shallow recognition model, features are selected by experts in related fields based on their own experience.
- the model focuses on classification recognition or prediction tasks.
- the selection of sample features greatly affects the effectiveness of the algorithm.
- the essence of a deep neural network recognition model is to learn the features of the data from multiple hidden layers through a large number of data samples. Each hidden layer learns the features obtained by abstracting the data at different levels. Compared with the features selected manually, such hierarchically learned features can better reflect the nature of the data, and ultimately can improve the accuracy of classification or prediction.
- the present disclosure does not limit the classification recognition algorithm on which the music segment recognition model is based.
- step S202 specifically includes:
- Step S2021b Calculate the similarity between the music segment corresponding to the played music and the sample music segment in the database.
- step S2021b the similarity between the music name of the real music segment corresponding to the currently playing music and the music name of the sample music segment in the database may be calculated.
- the similarity of two music pieces can also be characterized based on other content. For example, calculate the similarity of feature vectors of two music pieces, or calculate the similarity of tunes of two music pieces. The calculation of similarity will not be illustrated one by one here.
- Step S2022b Determine the sample music segment with the highest similarity of the music segment corresponding to the played music as the sample music segment with the highest matching degree.
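One possible realization of the name-based matching in steps S2021b and S2022b uses a generic string-similarity measure. The choice of `difflib.SequenceMatcher` is an assumption for illustration; the disclosure does not fix the similarity measure.

```python
from difflib import SequenceMatcher

def name_similarity(name_a, name_b):
    """String similarity between two music names, in [0, 1]."""
    return SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()

def best_sample_by_name(played_name, sample_names):
    """Pick the sample fragment whose music name is most similar to the
    name of the music segment recognized for the played music."""
    return max(sample_names, key=lambda s: name_similarity(played_name, s))
```

As noted above, the same selection could instead be driven by feature-vector or tune similarity.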
- the method further includes:
- Step S2012b Add the playing music to the training set corresponding to the music segment recognition model according to the recognition result, and train and update the music segment recognition model.
- step S2012b updating the music segment recognition model according to the recognition result can effectively improve the recognition accuracy of the music segment recognition model.
- step S2 includes:
- Step S200 input the played music into a keyword recognition model for identification, so as to determine keywords corresponding to the played music.
- each keyword type can include multiple categories (for example: music type can include: pop, bel canto, country, jazz, Latin, rock, popular, classical, folk, etc.; music scenes can include: history-chanting songs, lyric songs, love songs, children's songs, military songs, anime songs, etc.).
- a keyword recognition model can be designed for each keyword type, and the keyword recognition model can identify the input music fragment to determine the category of the input music fragment in the keyword type .
- each training set is used to train a keyword recognition model that can identify the music type. After the currently playing music is input to the keyword recognition model, the model can output the music type corresponding to the currently playing music, and the output result can be used as a keyword of the currently playing music.
- step S200 different keyword recognition models are used to identify the music name, music type, music scene, music content, music mood, etc. of the currently playing music, and the output result is used as the keywords corresponding to the currently playing music.
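The disclosure does not fix the classifier behind a keyword recognition model. As a minimal hedged stand-in, a nearest-centroid model over feature vectors could look like this (training-set layout and features are assumptions):

```python
def train_centroids(training_sets):
    """Train one centroid per category from labeled feature vectors.
    `training_sets` maps category label -> list of feature vectors."""
    centroids = {}
    for label, vecs in training_sets.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def recognize_keyword(centroids, feature_vec):
    """Output the category (used as a keyword) whose centroid is nearest
    to the input music's feature vector."""
    def sq_dist(label):
        return sum((a - b) ** 2
                   for a, b in zip(centroids[label], feature_vec))
    return min(centroids, key=sq_dist)
```

One such model per keyword type (name, type, scene, content, mood) would yield the full keyword list for the played music.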
- Step S3 Acquire content to be displayed that is associated with a keyword of playing music.
- step S3 may include:
- Step S301 Search a preset content repository or the Internet for selectable display content associated with the keywords of the played music, where the searched selectable display content is used as candidate display content.
- the content repository stores a number of display contents and keywords corresponding to each display content in advance; the display contents may specifically be character introduction, music introduction, related paintings, video clips, and the like.
- the keywords corresponding to the displayed content can be person names, person keywords, music keywords, painting names, painting content keywords, painting author keywords, historical keywords, video content keywords, and so on.
- the keywords corresponding to each displayed content can be added, deleted, and modified as required.
- the "selectable display content associated with the keywords of the played music" specifically refers to selectable display content whose set of corresponding keywords intersects the set of all keywords corresponding to the played music (the two sets have at least one identical element).
- in step S301, each keyword corresponding to the played music is used as a search term, and the search is performed in the content repository to obtain several candidate display contents.
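The intersection-based notion of association from step S301 can be sketched directly with sets (the repository layout is an assumption for illustration):

```python
def associated_contents(music_keywords, repository):
    """Display content is 'associated' with the played music when its
    keyword set intersects the music's keyword set (at least one shared
    element). `repository` maps content id -> iterable of keywords."""
    music_set = set(music_keywords)
    return [cid for cid, kws in repository.items()
            if music_set & set(kws)]
```

Contents sharing no keyword with the played music are excluded from the candidate list.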
- Step S3 may include: Step S302, selecting at least one candidate display content from all searched candidate display content as the content to be displayed.
- step S302 one or more of the candidate display contents searched in step S301 may be randomly selected as the content to be displayed.
- step S302 includes:
- Step S3021 acquiring keywords corresponding to each candidate display content from a content repository or the Internet.
- Step S3022 using a preset keyword similarity algorithm to respectively calculate similarities between keywords of candidate display content and keywords of playing music.
- in step S3022, for each candidate display content, all keywords corresponding to that candidate display content constitute its keyword set, and all keywords corresponding to the currently playing music constitute the keyword set of the played music.
- a preset keyword similarity algorithm (a set similarity algorithm) is then used to calculate the keyword similarity between each candidate display content and the currently playing music.
- Step S3023: Select the candidate display contents whose keyword similarity with the currently playing music is greater than a preset similarity threshold.
- the preset similarity threshold can be designed and adjusted according to actual needs.
- Step S3024 Select at least one candidate display content from the filtered candidate display content as the content to be displayed.
- the embodiment of the present disclosure does not limit the algorithm used to select content to be displayed from candidate display content whose similarity is greater than a preset similarity threshold.
- for example, the single candidate display content with the highest similarity may be used as the content to be displayed; alternatively, all candidate display content whose similarity exceeds the preset similarity threshold may be used as content to be displayed for the display device to show in rotation (suitable for music carousel scenarios).
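Steps S3022–S3024 can be sketched with a Jaccard set similarity, one common choice of "keyword similarity algorithm" (the disclosure does not fix the algorithm, so Jaccard, the threshold value, and the function names here are assumptions):

```python
def jaccard(a, b):
    """Set-based keyword similarity: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0


def select_to_display(candidates, music_keywords, threshold=0.2):
    """Keep candidates whose similarity exceeds the threshold, then pick the best."""
    scored = [(jaccard(c["keywords"], music_keywords), c) for c in candidates]
    passing = [(s, c) for s, c in scored if s > threshold]
    if not passing:
        return None
    # Step S3024 variant: highest-similarity candidate becomes the content to display.
    return max(passing, key=lambda sc: sc[0])[1]
```

For the carousel variant, one would return every passing candidate instead of only the maximum.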
- Step S4 Push the content to be displayed to the display device for the display device to display the content to be displayed.
- step S4 the content to be displayed obtained in step S3 is sent to a display device for the display device to display the content to be displayed associated with the currently playing music.
- the content received by the user's auditory senses is related to the content perceived by the visual senses, and the auditory information processed by the user's brain matches the visual information, thereby improving the user's experience.
- the display content is pushed at every preset interval according to the music clip currently playing. For a complete piece of music, the entire process can be viewed as pushing a sequence of display contents, like a video, to the display device.
- An embodiment of the present disclosure provides a method for pushing display content, which can push associated display content to a display device according to the music currently playing in the environment, so that the content the user hears is related to the content the user sees, enhancing the user experience.
- FIG. 2 is a flowchart of a content pushing method according to an embodiment of the present disclosure.
- the content pushing method includes, in addition to steps S1 to S4 in the above embodiment, steps S5 to S7 described below.
- Step S5. Determine content characteristics of the content to be displayed.
- the content features in the present disclosure may specifically include the screen style, content theme, painting type, etc. of the content to be displayed.
- Content topics include landscape painting, portraits, architecture, etc.
- Painting types include oil painting, watercolor painting, Chinese painting, sketching, etc.
- the screen (content) style displayed by the display device is classified and designed in advance.
- the picture style can be divided into sad pictures, festive pictures, modern pictures, retro pictures and so on.
- a plurality of pictures of each style type can be collected in advance to form a training set corresponding to each style type, and then a classification recognition model capable of identifying picture style types can be trained based on the training set.
- a classification recognition model is used to determine the picture style of the content to be displayed.
- Step S6 Determine a display mode corresponding to the content to be displayed according to the content characteristics.
- the display device can support different display modes, and different display modes have certain differences in terms of brightness, hue, contrast, saturation, and the like.
- the display modes may include: fresh and cold display mode, fresh and warm display mode, silver tone display mode, black and white display mode, and the like.
- Step S7 The display device is controlled to use the determined display mode to display the content to be displayed.
- a correspondence between different content features and display modes is established in advance. Taking the picture style as the content feature:
- a sad picture corresponds to the fresh cold display mode;
- a festive picture corresponds to the fresh warm display mode;
- a modern picture corresponds to the silver tone display mode;
- a retro picture corresponds to the black and white display mode.
- the corresponding display mode can be determined according to the screen style determined in step S5;
- the display device can be controlled to display the content to be displayed according to the display mode determined in step S6, so that the content to be displayed is presented in an appropriate display mode, thereby further improving the user's experience.
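The pre-established style-to-mode correspondence amounts to a lookup table. A minimal sketch, assuming illustrative style and mode labels (the actual labels come from the classification model and the display device's supported modes):

```python
# Hypothetical labels mirroring the example correspondence in the text:
# sad -> fresh cold, festive -> fresh warm, modern -> silver tone, retro -> black and white.
STYLE_TO_MODE = {
    "sad": "fresh_cold",
    "festive": "fresh_warm",
    "modern": "silver_tone",
    "retro": "black_and_white",
}


def display_mode_for(picture_style, default="standard"):
    """Look up the display mode for a picture style, falling back to a default mode."""
    return STYLE_TO_MODE.get(picture_style, default)
```

A fallback default keeps the device usable when the classifier outputs a style that has no configured mode.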
- the order shown in FIG. 2, in which step S5 is performed after step S4, is merely exemplary;
- it is sufficient that step S5 is performed after step S3
- and that step S7 is performed after step S4.
- FIG. 3 is a schematic structural diagram of a content pushing device according to an embodiment of the present disclosure. As shown in FIG. 3, the content pushing device may be used to implement the content pushing method provided by the foregoing embodiment.
- the content pushing device includes: a music detecting part 1, a first obtaining part 2, a second obtaining part 3, and a pushing part 4.
- the music detection section 1 is configured to detect a part of music in the environment.
- the first acquiring component 2 is configured to acquire at least one keyword in the music.
- the second acquisition component 3 is configured to acquire content to be displayed associated with the keywords of the played music.
- the pushing component 4 is configured to push the content to be displayed to the display device for the display device to display the content to be displayed.
- the music detection section 1 may include, for example, a microphone or a sound sensor.
- the first obtaining part 2, the second obtaining part 3, and the pushing part 4 may be implemented by hardware such as a CPU, an FPGA, or an IC.
- the music detecting component 1 in this embodiment may perform step S1 in the above embodiment
- the first obtaining component 2 may perform step S2 in the foregoing embodiment
- the second obtaining component 3 may perform step S3 in the foregoing embodiment
- the pushing component 4 can perform step S4 in the foregoing embodiment.
- FIG. 4a is a schematic structural diagram of a first obtaining component in FIG. 3.
- the first obtaining component 2 includes a music information obtaining unit 201, a matching unit 202, and a keyword obtaining unit 203.
- the music information acquisition unit 201 is configured to acquire music information that plays music in an environment.
- the matching unit 202 is configured to match the music information with the sample music fragments in the database to determine the sample music fragments with the highest matching degree.
- the database records a plurality of sample music fragments and keywords corresponding to the sample music fragments.
- the keyword obtaining unit 203 is configured to obtain, from a database, keywords corresponding to the sample music segment with the highest matching degree, as the keywords for playing the music.
- the music information acquisition unit 201 in this embodiment may be used to perform step S201 in the foregoing embodiment
- the matching unit 202 may be used to perform step S202 in the foregoing embodiment
- the keyword acquisition unit 203 may be used to perform step S203 in the foregoing embodiment.
- the music information includes: a feature vector of the played music.
- the music information acquisition unit 201 includes a feature extraction subunit
- the matching unit 202 includes a first calculation subunit and a first determination subunit.
- the feature extraction subunit is configured to perform feature extraction on the playing music to obtain a feature vector of the music.
- the first calculation subunit is configured to calculate a similarity between a feature vector of the played music and a feature vector of a sample music piece in the database.
- the first determining sub-unit is configured to determine a sample music segment corresponding to a feature vector with a maximum similarity to a feature vector of the currently playing music as the sample music segment with the highest matching degree.
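The matching performed by the first calculation and determination subunits can be sketched with cosine similarity, one common vector-similarity measure (the disclosure does not specify which measure is used, so cosine similarity and the names below are assumptions):

```python
import math


def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


def best_matching_sample(played_vector, sample_vectors):
    """Return the sample fragment whose feature vector is most similar (highest matching degree)."""
    return max(
        sample_vectors,
        key=lambda name: cosine_similarity(played_vector, sample_vectors[name]),
    )
```

The keywords stored for the winning sample fragment would then be taken as the keywords of the played music.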
- FIG. 4b is another schematic structural diagram of the first obtaining component in FIG. 3.
- the music information includes: a music fragment corresponding to the played music.
- the music information acquisition unit 201 includes a fragment identification subunit
- the matching unit 202 includes a second calculation subunit and a second determination subunit.
- the segment recognition subunit is configured to recognize the input played music using a pre-designed music segment recognition model, to determine the music segment corresponding to the played music. It should be noted that the storage location of the music segment recognition model is not specifically limited: it may be stored in the segment recognition subunit or on the server side, in which case the segment recognition subunit calls the model directly from the server when working.
- the second calculation subunit is configured to calculate a similarity between a music segment corresponding to the played music and a sample music segment in a database.
- the second determination sub-unit is configured to determine the sample music segment with the highest similarity to the music segment corresponding to the played music as the sample music segment with the highest matching degree.
- the second calculation subunit is specifically configured to calculate a similarity between the music name of the music segment corresponding to the played music and the music name of the sample music segment in the database.
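The music-name comparison done by the second calculation subunit can be sketched with a character-level string similarity; `difflib.SequenceMatcher` from the Python standard library is one option (the disclosure does not name a specific string-similarity measure, so this choice and the function names are assumptions):

```python
from difflib import SequenceMatcher


def name_similarity(a, b):
    """Character-level similarity between two music names, in the range [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def best_named_sample(played_name, sample_names):
    """Pick the sample fragment whose name is most similar to the played music's name."""
    return max(sample_names, key=lambda n: name_similarity(played_name, n))
```

Lower-casing before comparison makes the match insensitive to capitalization differences between the recognized name and the database entries.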
- the music information acquisition unit 201 further includes: a training subunit configured to, after the segment recognition subunit recognizes the played music, add the played music to the training set corresponding to the music segment recognition model according to the recognition result, and train and update the music segment recognition model.
- the first obtaining component includes a keyword recognition unit (not shown), and the keyword recognition unit is configured to recognize the input playing music according to a pre-designed keyword recognition model to determine Keywords for playing music.
- the storage location of the keyword recognition model is not specifically limited: it may be stored in the keyword recognition unit or on the background server, in which case the keyword recognition unit calls the model directly from the background server when working.
- the second obtaining component 3 includes: a searching unit 301 and a selecting unit 302.
- the search unit 301 is configured to search for display content associated with the keywords for playing music from a preset content repository or the Internet, where the searched display content is used as an alternative display content, and wherein the content repository Several display contents and keywords corresponding to each display content are stored in advance.
- the selecting unit 302 is configured to select at least one candidate display content as the content to be displayed from all the candidate display content searched by the search unit.
- FIG. 5 is a schematic structural diagram of a selection unit according to an embodiment of the present disclosure.
- the selection unit 302 includes a search subunit 3021, a third calculation subunit 3022, a screening subunit 3023, and a selection subunit 3024.
- the search subunit 3021 is configured to search for keywords corresponding to all candidate display contents from a content repository or the Internet.
- the third calculation sub-unit 3022 is configured to separately calculate the similarity of the keywords between each of the candidate display contents and the played music by using a preset keyword similarity algorithm.
- the screening sub-unit 3023 is configured to screen out candidate display contents corresponding to similarities greater than a preset similarity threshold among all similarities.
- the selection sub-unit 3024 is configured to select at least one candidate display content from the candidate display content filtered by the screening sub-unit 3023 as the content to be displayed.
- the search unit 301 in this embodiment may perform step S301 in the foregoing embodiment
- the selection unit 302 may perform step S302 in the foregoing embodiment
- the search subunit 3021 may perform step S3021 in the foregoing embodiment.
- the third calculation subunit 3022 may perform step S3022 in the foregoing embodiment
- the screening subunit 3023 may perform step S3023 in the foregoing embodiment
- the selection subunit 3024 may perform step S3024 in the foregoing embodiment.
- the content pushing device further includes: a feature determination component 5, a mode determination component 6, and a display control component 7.
- the feature determination section 5 is configured to determine a content feature of the content to be displayed.
- the mode determination section 6 is configured to determine a display mode corresponding to the content to be displayed according to the characteristics of the content.
- the display control section 7 is configured to control the display device to display the content to be displayed using the determined display mode.
- the display control component may include, for example, a display, an electronic picture frame, and the like.
- the feature determination component 5 in this embodiment may perform step S5 in the foregoing embodiment
- mode determination component 6 may perform step S6 in the foregoing embodiment
- display control component 7 may perform step S7 in the foregoing embodiment.
- the music detection component in the present disclosure may be disposed near the display device or integrated on the display device, while the first acquisition component, the second acquisition component, and the push component may be disposed on a server, which pushes the display content to the display device via a wired/wireless network.
- alternatively, the content pushing device may be integrated on the display device as a whole, or the entire pushing device may be provided near the display device.
- An embodiment of the present disclosure provides a display device, including: a display screen, at least one processor, and a storage medium.
- the storage medium stores a program, and when the program runs, the at least one processor is controlled to execute the content pushing method described in the above embodiments.
- the display screen is used to display content to be displayed.
- the above program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file, or some intermediate form.
- the above storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), and so on.
Claims (21)
- A content pushing method for a display device, comprising: detecting music played in an environment; acquiring at least one keyword of the played music; acquiring content to be displayed associated with the keyword of the played music; and pushing the content to be displayed to the display device, for the display device to display the content to be displayed.
- The content pushing method according to claim 1, wherein the step of acquiring at least one keyword of the played music comprises: acquiring music information of the music played in the environment; matching the music information with sample music fragments in a database to determine the sample music fragment with the highest matching degree; and acquiring, from the database, the keywords corresponding to the sample music fragment with the highest matching degree as the keywords of the played music, wherein the database records a plurality of sample music fragments and the keywords corresponding to the plurality of sample music fragments.
- The content pushing method according to claim 2, wherein the music information comprises a feature vector of the played music; the step of acquiring the music information of the music played in the environment comprises: performing feature extraction on the played music to obtain the feature vector of the played music; and the step of matching the music information with the sample music fragments in the database to determine the sample music fragment with the highest matching degree comprises: calculating the similarity between the feature vector of the played music and the feature vectors of the sample music fragments in the database; and determining the sample music fragment corresponding to the feature vector with the greatest similarity to the feature vector of the played music as the sample music fragment with the highest matching degree.
- The content pushing method according to claim 2, wherein the music information comprises a music fragment corresponding to the played music; the step of acquiring the music information of the music played in the environment comprises: inputting the played music into a pre-designed music fragment recognition model for recognition, to determine the music fragment corresponding to the played music; and the step of matching the music information with the sample music fragments in the database to determine the sample music fragment with the highest matching degree comprises: calculating the similarity between the music fragment corresponding to the played music and the sample music fragments in the database; and determining the sample music fragment with the greatest similarity to the music fragment corresponding to the played music as the sample music fragment with the highest matching degree.
- The content pushing method according to claim 4, wherein the step of calculating the similarity between the music fragment corresponding to the played music and the sample music fragments in the database comprises: calculating the similarity between the music name of the music fragment corresponding to the played music and the music names of the sample music fragments in the database.
- The content pushing method according to claim 4, wherein after the step of inputting the played music into the pre-designed music fragment recognition model for recognition, the method further comprises: adding the played music to the training set corresponding to the music fragment recognition model according to the recognition result, and training and updating the music fragment recognition model.
- The content pushing method according to claim 1, wherein the step of acquiring at least one keyword of the played music comprises: inputting the played music into a pre-designed keyword recognition model for recognition, to determine the keywords corresponding to the played music.
- The content pushing method according to claim 1, wherein the step of acquiring the content to be displayed associated with the keywords of the played music comprises: searching, according to the keywords of the played music, a preset content repository or the Internet for optional display content associated with the keywords of the played music, the retrieved optional display content serving as candidate display content, wherein the content repository stores in advance a plurality of display contents and the keywords corresponding to the plurality of display contents; and selecting at least one candidate display content from all the retrieved candidate display contents as the content to be displayed.
- The content pushing method according to claim 8, wherein the step of selecting at least one candidate display content from all the retrieved candidate display contents as the content to be displayed comprises: acquiring, from the content repository or the Internet, the keywords corresponding to all the candidate display contents; calculating, using a preset keyword similarity algorithm, the keyword similarity between each of the candidate display contents and the played music; screening out the candidate display contents corresponding to similarities greater than a preset similarity threshold; and selecting at least one candidate display content from the screened candidate display contents as the content to be displayed.
- The content pushing method according to any one of claims 1-9, wherein after the step of acquiring the content to be displayed associated with the keywords of the played music, the method further comprises: determining a content feature of the content to be displayed; and determining a display mode corresponding to the content to be displayed according to the content feature; and wherein after the step of pushing the content to be displayed to the display device, the method further comprises: controlling the display device to display the content to be displayed using the determined display mode.
- A content pushing device for a display device, comprising: a music detection component configured to detect music played in an environment; a first acquisition component configured to acquire at least one keyword of the played music; a second acquisition component configured to acquire content to be displayed associated with the keywords of the played music; and a pushing component configured to push the content to be displayed to the display device, for the display device to display the content to be displayed.
- The content pushing device according to claim 11, wherein the first acquisition component comprises: a music information acquisition unit configured to acquire music information of the music played in the environment; a matching unit configured to match the music information with sample music fragments in a database to determine the sample music fragment with the highest matching degree; and a keyword acquisition unit configured to acquire, from the database, the keywords corresponding to the sample music fragment with the highest matching degree as the keywords of the played music, wherein the database records a plurality of sample music fragments and the keywords corresponding to the plurality of sample music fragments.
- The content pushing device according to claim 12, wherein the music information comprises a feature vector of the played music; the music information acquisition unit comprises: a feature extraction subunit configured to perform feature extraction on the played music to obtain the feature vector of the played music; and the matching unit comprises: a first calculation subunit configured to calculate the similarity between the feature vector of the played music and the feature vectors of the sample music fragments in the database; and a first determination subunit configured to determine the sample music fragment corresponding to the feature vector with the greatest similarity to the feature vector of the played music as the sample music fragment with the highest matching degree.
- The content pushing device according to claim 12, wherein the music information comprises a music fragment corresponding to the played music; the music information acquisition unit comprises: a fragment recognition subunit configured to recognize the input played music using a pre-designed music fragment recognition model, to determine the music fragment corresponding to the played music; and the matching unit comprises: a second calculation subunit configured to calculate the similarity between the music fragment corresponding to the played music and the sample music fragments in the database; and a second determination subunit configured to determine the sample music fragment with the greatest similarity to the music fragment corresponding to the played music as the sample music fragment with the highest matching degree.
- The content pushing device according to claim 14, wherein the second calculation subunit is configured to calculate the similarity between the music name of the music fragment corresponding to the played music and the music names of the sample music fragments in the database.
- The content pushing device according to claim 14, wherein the music information acquisition unit further comprises: a training subunit configured to, after the fragment recognition subunit completes recognition of the played music, add the played music to the training set corresponding to the music fragment recognition model according to the recognition result, and train and update the music fragment recognition model.
- The content pushing device according to claim 11, wherein the first acquisition component comprises: a keyword recognition unit configured to recognize the input played music according to a pre-designed keyword recognition model, to determine the keywords corresponding to the played music.
- The content pushing device according to claim 11, wherein the second acquisition component comprises: a search unit configured to search a preset content repository or the Internet for optional display content associated with the keywords of the played music, the retrieved optional display content serving as candidate display content, wherein the content repository stores in advance a plurality of display contents and the keywords corresponding to the plurality of display contents; and a selection unit configured to select at least one candidate display content from all the candidate display contents retrieved by the search unit as the content to be displayed.
- The content pushing device according to claim 18, wherein the selection unit comprises: a search subunit configured to acquire, from the content repository or the Internet, the keywords corresponding to all the candidate display contents; a third calculation subunit configured to calculate, using a preset keyword similarity algorithm, the keyword similarity between each of the candidate display contents and the played music; a screening subunit configured to screen out the candidate display contents corresponding to similarities greater than a preset similarity threshold; and a selection subunit configured to select at least one candidate display content from the candidate display contents screened out by the screening subunit as the content to be displayed.
- The content pushing device according to any one of claims 11-19, further comprising: a feature determination component configured to determine a content feature of the content to be displayed; a mode determination component configured to determine a display mode corresponding to the content to be displayed according to the content feature; and a display control component configured to control the display device to display the content to be displayed using the determined display mode.
- A display device, comprising: a display screen; at least one processor; and a storage medium storing a program which, when run, controls the at least one processor to execute the content pushing method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/643,112 US11410706B2 (en) | 2018-09-11 | 2019-07-01 | Content pushing method for display device, pushing device and display device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811056287.4 | 2018-09-11 | ||
- CN201811056287.4A CN109802987B (zh) Content pushing method for display device, pushing device and display device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020052324A1 true WO2020052324A1 (zh) | 2020-03-19 |
Family
ID=66556247
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/094255 WO2020052324A1 (zh) | 2018-09-11 | 2019-07-01 | 用于显示装置的内容推送方法、推送装置和显示设备 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11410706B2 (zh) |
CN (1) | CN109802987B (zh) |
WO (1) | WO2020052324A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN109802987B (zh) | 2018-09-11 | 2021-05-18 | 北京京东方技术开发有限公司 | Content pushing method for display device, pushing device and display device |
US11615772B2 (en) * | 2020-01-31 | 2023-03-28 | Obeebo Labs Ltd. | Systems, devices, and methods for musical catalog amplification services |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080110322A1 (en) * | 2006-11-13 | 2008-05-15 | Samsung Electronics Co., Ltd. | Photo recommendation method using mood of music and system thereof |
CN101930446A (zh) * | 2009-06-26 | 2010-12-29 | 鸿富锦精密工业(深圳)有限公司 | 电子装置及在嵌入式电子装置中播放音乐的方法 |
CN102737676A (zh) * | 2011-04-05 | 2012-10-17 | 索尼公司 | 音乐播放装置、音乐播放方法、程序及数据创建装置 |
CN105224581A (zh) * | 2014-07-03 | 2016-01-06 | 北京三星通信技术研究有限公司 | 在播放音乐时呈现图片的方法和装置 |
CN106921749A (zh) * | 2017-03-31 | 2017-07-04 | 北京京东尚科信息技术有限公司 | 用于推送信息的方法和装置 |
CN106919662A (zh) * | 2017-02-14 | 2017-07-04 | 复旦大学 | 一种音乐识别方法及系统 |
CN107221347A (zh) * | 2017-05-23 | 2017-09-29 | 维沃移动通信有限公司 | 一种音频播放的方法及终端 |
CN109802987A (zh) * | 2018-09-11 | 2019-05-24 | 北京京东方技术开发有限公司 | 用于显示装置的内容推送方法、推送装置和显示设备 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP6159989B2 (ja) * | 2013-06-26 | 2017-07-12 | Kddi株式会社 | Scenario generation system, scenario generation method and scenario generation program |
EP2963651A1 (en) * | 2014-07-03 | 2016-01-06 | Samsung Electronics Co., Ltd | Method and device for playing multimedia |
- 2018-09-11 CN CN201811056287.4A patent/CN109802987B/zh active Active
- 2019-07-01 US US16/643,112 patent/US11410706B2/en active Active
- 2019-07-01 WO PCT/CN2019/094255 patent/WO2020052324A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20210225408A1 (en) | 2021-07-22 |
CN109802987A (zh) | 2019-05-24 |
US11410706B2 (en) | 2022-08-09 |
CN109802987B (zh) | 2021-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107918653B (zh) | Intelligent playing method and device based on preference feedback | |
TWI553494B (zh) | Intelligent high-fault-tolerance video recognition system based on multimodal fusion and recognition method thereof | |
Kaminskas et al. | Location-aware music recommendation using auto-tagging and hybrid matching | |
CN113569088B (zh) | Music recommendation method and device, and readable storage medium | |
Kaminskas et al. | Contextual music information retrieval and recommendation: State of the art and challenges | |
CN105074697B (zh) | Accumulation of real-time crowdsourced data for inferring metadata about entities | |
US8321414B2 | Hybrid audio-visual categorization system and method | |
Braunhofer et al. | Location-aware music recommendation | |
CN106462609A (zh) | Methods, systems and media for presenting music items related to media content | |
CN105224581B (zh) | Method and device for presenting pictures while playing music | |
US11157542B2 | Systems, methods and computer program products for associating media content having different modalities | |
JP5359534B2 (ja) | Information processing device and method, and program | |
US11636835B2 | Spoken words analyzer | |
CN113574522A (zh) | Selective presentation of rich experiences in search | |
CN109920409A (zh) | Sound retrieval method, device, system and storage medium | |
WO2020052324A1 (zh) | Content pushing method for display device, pushing device and display device | |
US20220147558A1 | Methods and systems for automatically matching audio content with visual input | |
CN111859008B (zh) | Music recommendation method and terminal | |
JP5344756B2 (ja) | Information processing device, information processing method, and program | |
JP2021026261A (ja) | Information processing system, method and program | |
US11640426B1 | Background audio identification for query disambiguation | |
Vidhani et al. | Mood Indicator: Music and Movie Recommendation System using Facial Emotions | |
JP2014164112A (ja) | Electrical equipment | |
US10489450B1 | Selecting soundtracks | |
Ren et al. | Visual summarization for place-of-interest by social-contextual constrained geo-clustering
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19859876 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19859876 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 18/06/2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19859876 Country of ref document: EP Kind code of ref document: A1 |