WO2014169571A1 - Associated content processing method and system - Google Patents

Associated content processing method and system

Info

Publication number
WO2014169571A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
keyword
buffer
cached
request
Prior art date
Application number
PCT/CN2013/083815
Other languages
English (en)
French (fr)
Inventor
姚立哲
陈军
尚国强
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to EP13882558.3A priority Critical patent/EP3040877A4/en
Priority to US14/915,407 priority patent/US20160203144A1/en
Priority to JP2016537075A priority patent/JP2016532969A/ja
Publication of WO2014169571A1 publication Critical patent/WO2014169571A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/432 Query formulation
    • G06F16/433 Query formulation using audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements

Definitions

  • the present invention relates to the field of communications, and in particular to an associated content processing method and system.
  • BACKGROUND OF THE INVENTION: The inventors have found that when people watch multimedia files, such as television or video, and are interested in some of the content, they must search for the relevant information and content themselves. With the development of technology, some videos now come with pre-associated related content for users to select, but such related content is preset and still cannot be generated dynamically according to the user's needs and interests. No solution has yet been proposed for this problem in the related art: that associated content of interest to the user cannot be generated.
  • An associated content processing method is provided, including: when playing multimedia, buffering part or all of the data in the multimedia; receiving a request, where the request is used to request acquisition of content associated with the cached data; obtaining a keyword based on the cached data; and retrieving the associated content based on the keyword.
  • Buffering part or all of the data in the multimedia comprises: setting at least two buffers on the device that buffers the part or all of the data, wherein each of the at least two buffers is used to buffer data of a predetermined duration; and storing the part or all of the data in the at least two buffers.
  • Storing the part or all of the data in the at least two buffers comprises: after the first buffer of the at least two buffers is full, buffering data in the second buffer, and so on, until the nth buffer is full; after the nth buffer has been filled, deleting the data in the first buffer and caching data in the first buffer again, where n is the number of buffers.
  • buffering part or all of the data in the multimedia comprises: buffering the part or all of the data in the device playing the multimedia; and/or buffering the part or all of the data on the server side.
  • Obtaining the keyword according to the cached data includes: parsing a keyword from the cached data and using it as the keyword; or parsing one or more keywords from the cached data, sending the one or more keywords to the sender of the request, and using the keyword confirmed by the sender as the keyword.
  • Parsing the keyword from the cached data comprises: in the case where the cached data includes audio data, identifying part or all of the content of the audio data as a keyword; and/or, in the case where the cached data includes video data, acquiring a keyword corresponding to an image of the video data.
  • the device that plays the multimedia and the device that sends the request are different devices, and the device that plays the multimedia is connected to the device that sends the request.
  • An associated content processing system is provided, including: a cache module, configured to cache part or all of the data in the multimedia when playing multimedia; a receiving module, configured to receive a request, where the request is used to request acquisition of content associated with the cached data; an obtaining module, configured to acquire a keyword according to the cached data; and a retrieval module, configured to retrieve the associated content according to the keyword.
  • The cache module is configured to store the part or all of the data in at least two buffers, where the at least two buffers are set on the device that caches the part or all of the data, and each of the at least two buffers is used to buffer data of a predetermined duration.
  • The cache module is configured to buffer data in the second buffer after the first buffer of the at least two buffers is full, and so on, until the nth buffer is full; after the nth buffer has been filled, the data in the first buffer is deleted and data is cached in the first buffer again, where n is the number of buffers.
  • the cache module is located in a device that plays the multimedia; and/or is located in a server that provides the multimedia.
  • The obtaining module is configured to parse a keyword from the cached data and use it as the keyword; or the obtaining module is configured to parse one or more keywords from the cached data, send the one or more keywords to the sender of the request, and use the keyword confirmed by the sender as the keyword.
  • The acquiring module is configured to: in the case where the cached data includes audio data, identify part or all of the content of the audio data as a keyword; and/or, in the case where the cached data includes video data, acquire a keyword corresponding to an image of the video data.
  • the device that plays the multimedia and the device that sends the request are different devices, and the device that plays the multimedia is connected to the device that sends the request.
  • Through the embodiments of the present invention, when playing multimedia, part or all of the data in the multimedia is buffered; a request is received, where the request is used to request acquisition of content associated with the cached data; a keyword is obtained according to the cached data; and the associated content is retrieved according to the keyword and sent.
  • This solves the problem in the related art that associated content of interest to the user cannot be generated, enables associated content to be generated according to the user's request, and improves the user experience.
  • FIG. 1 is a flowchart of an associated content processing method according to an embodiment of the present invention;
  • FIG. 2 is a structural block diagram of an associated content processing system according to an embodiment of the present invention;
  • FIG. 3 is block diagram 1 of a preferred structure of an associated content processing system according to an embodiment of the present invention;
  • FIG. 4 is block diagram 2 of a preferred structure of an associated content processing system according to an embodiment of the present invention;
  • FIG. 5 is block diagram 3 of a preferred structure of an associated content processing system according to an embodiment of the present invention;
  • FIG. 6 is block diagram 4 of a preferred structure of an associated content processing system according to an embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other without conflict. The invention will be described in detail below with reference to the drawings in conjunction with the embodiments.
  • FIG. 1 is a flowchart of an associated content processing method according to an embodiment of the present invention. As shown in FIG. 1, the flow includes the following steps:
  • Step S102: when playing multimedia, buffer part or all of the data in the multimedia;
  • Step S104: receive a request, where the request is used to request acquisition of content associated with the cached data;
  • Step S106: obtain a keyword according to the cached data;
  • Step S108: retrieve the associated content according to the keyword.
  • Through the above steps, the associated content is obtained from a keyword extracted from the cache after the request is received. Because the process of acquiring the associated content depends on the cached data, rather than on content pre-associated with the multimedia, this solves the problem in the related art that associated content of interest to the user cannot be generated, enables associated content to be generated according to the user's request, and improves the user experience.
  • At least two buffers are set on the device that buffers the part or all of the data, where each of the at least two buffers is used to buffer data of a predetermined duration, and the part or all of the data is stored in the at least two buffers.
  • The buffers may be set on a device that plays the multimedia or on a server. Setting at least two buffers allows more data to be cached, and because each buffer caches data of a predetermined duration, the cached data can be correctly spliced in chronological order from the contents of the buffers. There are various ways of buffering data across at least two buffers; for example, as a preferred manner, data may be cached in the second buffer after the first of the at least two buffers is full.
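  • The n-buffer rotation described above can be sketched as follows. This is a minimal illustration only, not the patent's implementation; the class and method names, and the use of an item count in place of a "predetermined duration", are assumptions.

```python
from collections import deque

class RotatingBuffers:
    """Sketch of the n-buffer scheme: fill buffer 0, then buffer 1, ...,
    then buffer n-1; once buffer n-1 is full, clear buffer 0 and fill it
    again, so the buffers always hold the most recent data."""

    def __init__(self, n, capacity):
        self.buffers = [deque() for _ in range(n)]
        self.capacity = capacity   # items per buffer, standing in for a fixed duration
        self.current = 0           # index of the buffer currently being filled

    def write(self, item):
        if len(self.buffers[self.current]) == self.capacity:
            # current buffer is full: advance and overwrite the oldest buffer
            self.current = (self.current + 1) % len(self.buffers)
            self.buffers[self.current].clear()
        self.buffers[self.current].append(item)
```

With two buffers of capacity 3, writing items 0..7 leaves the two most recent stretches of data cached, which is the property the embodiments below rely on.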
  • One or more keywords may be obtained according to the cached data. An obtained keyword may be used directly, or it may first be sent to the requester for confirmation. That is, obtaining the keyword according to the cached data includes: parsing a keyword from the cached data and using it as the keyword; or parsing one or more keywords from the cached data, sending the one or more keywords to the sender of the request, and using the keyword confirmed by the sender as the keyword. A keyword confirmed by the requester is more accurate, but this adds a step; the two approaches can also be combined.
  • The cached data may be audio data, to which audio recognition technology can be applied, or video data, to which image recognition technology can be applied by comparing images against a preset image library to obtain keywords. For example, in the case where the cached data includes audio data, part or all of the content of the audio data is recognized as a keyword; and/or, in the case where the cached data includes video data, a keyword corresponding to an image of the video data is acquired. Audio recognition and video recognition technologies have a variety of implementations in the related art; any method that can identify the keywords may be used, and they are not elaborated here.
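  • The audio/video keyword-parsing branches can be sketched as below. The recognizers here are plain stubs (the patent deliberately leaves the recognition technology open), and the function names, the fixed stub transcript, and the toy stop-word filter are all illustrative assumptions.

```python
def recognize_speech(audio_bytes):
    """Stub for an audio recognition service (assumption: returns a transcript)."""
    return "Shenzhou 10 launched successfully"

def match_image_library(frame):
    """Stub for image recognition against a preset image library."""
    return ["rocket"]

def parse_keywords(cached):
    """Extract candidate keywords from cached audio and/or video data,
    mirroring the two branches described above."""
    keywords = []
    if "audio" in cached:                       # audio branch: recognize speech
        transcript = recognize_speech(cached["audio"])
        stopwords = {"the", "a", "launched", "successfully"}   # toy filter
        keywords += [w for w in transcript.split() if w.lower() not in stopwords]
    if "video" in cached:                       # video branch: match frames
        for frame in cached["video"]:
            keywords += match_image_library(frame)
    return keywords
```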
  • the device that plays the multimedia and the device that sends the request are different devices, and the device that plays the multimedia is connected to the device that sends the request.
  • In this embodiment, an associated content processing system is also provided, which is used to implement the foregoing method. What has been described in the foregoing embodiments and preferred embodiments is not repeated here. It should be noted that the names of the modules in the following system do not constitute a practical limitation on the modules themselves.
  • For example, the cache module may also be expressed as "a module configured to store the part or all of the data in at least two buffers". The following modules may all be implemented in a processor.
  • FIG. 2 is a structural block diagram of an associated content processing system according to an embodiment of the present invention.
  • As shown in FIG. 2, the structure includes: a cache module 22, a receiving module 24, an obtaining module 26, and a retrieval module 28.
  • The modules can be implemented on one server or server group, or they can each be implemented on different servers.
  • the receiving module 24 and the cache module 22 can be implemented on a video server.
  • The cache module 22 can also be implemented on a terminal that plays the video.
  • The obtaining module 26 can be implemented on a recognition server, for example, a speech recognition server; the retrieval module 28 can be implemented on an associated content generation server.
  • the system will be described in detail below.
  • The cache module 22 is configured to cache part or all of the data in the multimedia when playing the multimedia; the receiving module 24 is configured to receive the request, where the request is used to request acquisition of content associated with the cached data; the obtaining module 26 is configured to obtain a keyword according to the cached data; and the retrieval module 28 is configured to retrieve the associated content according to the keyword.
  • The cache module 22 is configured to store the part or all of the data in at least two buffers, where the at least two buffers are set on the device that buffers the part or all of the data, and each of the at least two buffers is used to buffer data of a predetermined duration.
  • The cache module 22 is configured to buffer data in the second buffer after the first buffer of the at least two buffers is full, and so on, until the nth buffer is full; after the nth buffer has been filled, the data in the first buffer is deleted and data is cached in the first buffer again, where n is the number of buffers.
  • the cache module 22 is located in a device that plays the multimedia; and/or is located in a server that provides the multimedia.
  • The obtaining module 26 is configured to parse a keyword from the cached data and use it as the keyword; or the obtaining module 26 is configured to parse one or more keywords from the cached data, send the one or more keywords to the sender of the request, and use the keyword confirmed by the sender as the keyword.
  • The obtaining module 26 is configured to, in the case where the cached data includes audio data, identify part or all of the content of the audio data as a keyword; and/or, the obtaining module 26 is configured to, in the case where the cached data includes video data, acquire a keyword corresponding to an image of the video data.
  • the device that plays the multimedia and the device that sends the request are different devices, and the device that plays the multimedia is connected to the device that sends the request.
  • The above embodiments address the problem of how to dynamically generate associated content based on content of interest that the user hears while watching a video.
  • The following embodiments also solve the problem of how to dynamically display the above associated content without affecting the user's current experience of watching the video.
  • FIG. 3 is block diagram 1 of a preferred structure of an associated content processing system according to an embodiment of the present invention. As shown in FIG. 3, a user owns device 1 and device 2, which interact with each other through a short-distance communication technology such as Bluetooth. Device 1 is used to watch videos, and device 2 is used to acquire and display the associated content.
  • Device 1 is provided with two buffers, which are set to store the most recent audio: buffer b1, capable of storing audio of duration t1, and buffer b2, capable of storing audio of duration t2. b1 starts to buffer the audio information of the video from the position at which the user starts watching. When b1 is full, b2 starts to buffer the audio information; when b2 is full, b1 resumes caching the audio information and the information originally cached in b1 is cleared, and so on, until the video is paused, stopped, fast-forwarded, or rewound. In this way, the audio information of the most recent period of the video is always cached in b1 and b2.
  • The method for generating the associated content from the audio information is as follows: when the user hears content of interest, device 2 sends a request for acquiring the associated content to device 1. After receiving the request, device 1 merges the audio information currently buffered in b1 and b2 into one audio segment in chronological order, sends the audio to the voice recognition server, and requests the voice recognition server to recognize it. After receiving the request, the voice recognition server recognizes the audio and sends the identified keyword to the associated content generation server. After receiving the identified keyword, the associated content generation server searches according to the keyword, generates the corresponding associated content, and returns it to device 2. After receiving the associated content, device 2 displays it to the user.
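  • The request flow in this embodiment can be sketched as a short pipeline. The "servers" below are plain function stubs standing in for separate network services, strings stand in for audio, and all names and the fixed keyword are illustrative assumptions rather than part of the patent.

```python
def speech_recognition_server(audio):
    """Stub: return keywords recognized in the merged audio."""
    return ["Shenzhou 10"] if "Shenzhou 10" in audio else []

def associated_content_server(keyword):
    """Stub: retrieve/generate content associated with a keyword."""
    return f"related content for {keyword}"

def device1_handle_request(b1_audio, b2_audio, currently_filling):
    """Device 1: merge the buffers chronologically (the buffer currently
    being filled holds the newest audio), have the audio recognized, and
    fetch associated content for each keyword."""
    if currently_filling == "b1":
        merged = b2_audio + b1_audio
    else:
        merged = b1_audio + b2_audio
    return [associated_content_server(k) for k in speech_recognition_server(merged)]
```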
  • FIG. 4 is block diagram 2 of a preferred structure of an associated content processing system according to an embodiment of the present invention. As shown in FIG. 4, the user owns device 1 and device 2; device 1 is used to watch video, and device 2 is used to acquire and display the associated content.
  • The video server is provided with two buffers, which are set to store the most recent audio: buffer b1, capable of storing audio of duration t1, and buffer b2, capable of storing audio of duration t2.
  • b1 starts to buffer the audio information of the video from the position at which the user starts watching. When b1 is full, b2 starts to buffer the audio information; when b2 is full, b1 resumes caching the audio information and the information originally cached in b1 is cleared, and so on, until the video is paused, stopped, fast-forwarded, or rewound. In this way, the audio information of the most recent period of the video is always cached in b1 and b2.
  • The method for generating the associated content from the audio information is as follows: when the user hears content of interest, device 2 sends a request for acquiring the associated content to the video server. After receiving the request, the video server merges the audio information currently buffered in b1 and b2 into one audio segment in chronological order, sends the audio to the voice recognition server, and requests the voice recognition server to recognize it. After receiving the request, the voice recognition server recognizes the audio and sends the identified keyword to the associated content generation server. After receiving the identified keyword, the associated content generation server searches according to the keyword, generates the corresponding associated content, and returns it to device 2, which displays it to the user.
  • FIG. 5 is block diagram 3 of a preferred structure of an associated content processing system according to an embodiment of the present invention. As shown in FIG. 5, a user owns device 1 and device 2, which interact with each other through a short-distance communication technology such as Bluetooth. Device 1 is used to watch videos, and device 2 is used to acquire and display the associated content.
  • Device 1 is provided with two buffers, which are set to store the most recent audio: buffer b1, capable of storing audio of duration t1, and buffer b2, capable of storing audio of duration t2. When b1 is full, b2 starts to buffer the audio information; when b2 is full, b1 resumes caching the audio information and the information originally cached in b1 is cleared, and so on, until the video is paused, stopped, fast-forwarded, or rewound. In this way, the audio information of the most recent period of the video is always cached in b1 and b2.
  • The method for generating the associated content from the audio information is as follows: when the user hears content of interest, device 2 sends a request for acquiring the associated content to device 1. After receiving the request, device 1 merges the audio information currently buffered in b1 and b2 into one audio segment in chronological order, sends the audio to the voice recognition server, and requests the voice recognition server to recognize it. After receiving the request, the voice recognition server recognizes the audio; when more than one keyword is recognized, the voice recognition server sends all the identified keywords to device 2. Device 2 displays the keywords received from the voice recognition server to the user for selection, and sends the keyword(s) selected by the user to the associated content generation server. After receiving the selected keyword(s), the associated content generation server searches according to them, generates the corresponding associated content, and returns it to device 2. After receiving the associated content, device 2 presents it to the user.
  • Device 2 and device 1 are linked and interact through short-distance communication technologies such as Bluetooth and WiFi.
  • Because the two buffers b1 and b2 alternately loop to store the audio information, the chronological merging described above is performed by judging which buffer is currently buffering audio: its data forms the latter segment, and the data in the other buffer forms the earlier segment.
  • FIG. 6 is block diagram 4 of a preferred structure of an associated content processing system according to an embodiment of the present invention. As shown in FIG. 6, the user owns device 1 and device 2; device 1 is used to watch video, and device 2 is used to acquire and display the associated content.
  • The video server is provided with two buffers, which are set to store the most recent audio: buffer b1, capable of storing audio of duration t1, and buffer b2, capable of storing audio of duration t2.
  • When b1 is full, b2 starts to buffer the audio information of the video; when b2 is full, b1 resumes caching the audio information and the information originally cached in b1 is cleared, and so on, until the video is paused, stopped, fast-forwarded, or rewound. In this way, the audio information of the most recent period of the video is always cached in b1 and b2.
  • The method of generating the associated content from the audio information is as follows: when the user hears content of interest, device 2 sends a request for acquiring the associated content to the video server. After receiving the request, the video server merges the audio information currently buffered in b1 and b2 into one audio segment in chronological order, sends the audio to the voice recognition server, and requests the voice recognition server to recognize it. After receiving the request, the voice recognition server recognizes the audio; when more than one keyword is recognized, the voice recognition server sends all the identified keywords to device 2. Device 2 displays the keywords received from the voice recognition server to the user, and sends the keyword(s) selected by the user to the associated content generation server. After receiving the selected keyword(s), the associated content generation server searches according to them, generates the corresponding associated content, and returns it to device 2. Device 2 then displays the associated content to the user.
  • Device 2 and device 1 are linked and interact through short-distance communication technologies such as Bluetooth and WiFi.
  • Because the two buffers b1 and b2 alternately loop to store the audio information, the chronological merging described above is performed by taking the data in the buffer currently buffering audio as the latter segment and the data in the other buffer as the earlier segment.
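  • That ordering judgment can be sketched as a single function. The function name and the string tag for the currently filling buffer are illustrative assumptions; the patent describes only the judgment itself, not an API.

```python
def merge_in_time_order(b1, b2, currently_filling):
    """The buffer currently receiving audio holds the newest segment, so it
    goes last; the other buffer supplies the earlier segment."""
    if currently_filling == "b1":
        return b2 + b1          # b2 is older, b1 is newest
    return b1 + b2              # b1 is older, b2 is newest
```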
  • The storage space of each buffer may vary in size, but the duration of the audio it stores is a fixed value.
  • Scene one: referring to FIG. 3, the user owns device 1 and device 2, where device 1 is a television and device 2 is a mobile phone; the two interact with each other through a short-distance communication technology such as Bluetooth. The television is used to watch video, and the mobile phone is used to acquire and display the related content.
  • The television is provided with two buffers, which are set to store the most recent audio: buffer b1, capable of storing 5 seconds of audio, and buffer b2, also capable of storing 5 seconds of audio. In this way, the audio information of the most recent period of the video is always cached in b1 and b2.
  • The user watches the news on the television. When the user hears "Shenzhou 10" and wants to obtain more related content, the user sends a request for acquiring the related content to the television by shaking the mobile phone.
  • The process is as follows: after the television receives the request, it merges the audio information currently buffered in b1 and b2 into one audio segment in chronological order (in this embodiment, the merged audio duration is 7 seconds and includes the voice information of "Shenzhou 10"), sends the audio to the voice recognition server, and requests recognition. After receiving the request, the voice recognition server recognizes the audio, identifies the keyword "Shenzhou 10", and sends the identified keyword to the associated content generation server. After receiving the identified keyword, the associated content generation server searches for the keyword, generates the corresponding content related to "Shenzhou 10", and returns the generated related content to the mobile phone. After receiving the related content, the mobile phone displays it to the user, and the user can learn more about "Shenzhou 10" through the mobile phone.
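  • The 7-second merged duration in this scene is consistent with one buffer being full and the other only partly filled. The split below is an illustrative reconstruction (the patent states only the 7-second result): the buffer that last filled contributes its full 5 seconds, and the buffer currently being filled is assumed to hold 2 seconds when the request arrives.

```python
full_buffer_seconds = 5       # the buffer that is already full
partial_buffer_seconds = 2    # assumed content of the buffer being filled
merged_seconds = full_buffer_seconds + partial_buffer_seconds
print(merged_seconds)  # 7
```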
  • Scene two: referring to FIG. 4, the user owns device 1 and device 2, where device 1 is a television and device 2 is a mobile phone; the television is used to watch video, and the mobile phone is used to acquire and display the related content.
  • The video server is provided with two buffers, which are set to store the most recent audio: buffer b1, capable of storing 5 seconds of audio, and buffer b2, also capable of storing 5 seconds of audio. When b1 is full, b2 starts to buffer the audio information; when b2 is full, b1 resumes caching the audio information and the information originally cached in b1 is cleared, and so on, until the video is paused, stopped, fast-forwarded, or rewound. In this way, the audio information of the most recent period of the video is always cached in b1 and b2.
  • The user watches the news on the television. When the user hears "Shenzhou 10" and wants to obtain more related content, the user sends a request for acquiring the related content to the video server by shaking the mobile phone.
  • The process is as follows: after the video server receives the request, it merges the audio information currently buffered in b1 and b2 into one audio segment in chronological order (in this embodiment, the merged audio duration is 7 seconds and includes the voice information of "Shenzhou 10"), sends the audio to the voice recognition server, and requests recognition. After receiving the request, the voice recognition server recognizes the audio, identifies the keyword "Shenzhou 10", and sends the identified keyword to the associated content generation server. After receiving the recognized keyword, the associated content generation server searches for the keyword, generates the corresponding content related to "Shenzhou 10", and returns the generated related content to the mobile phone. After receiving the associated content, the mobile phone shows it to the user, and the user can learn more about "Shenzhou 10" through the mobile phone.
  • The keyword may also first be confirmed via the mobile phone and then used for retrieval.
  • Obviously, those skilled in the art should understand that the above modules or steps of the embodiments of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed across multiple computing devices. Optionally, they may be implemented with program code executable by the computing devices, so that they may be stored in a storage device and executed by the computing devices; alternatively, they may each be fabricated as an individual integrated circuit module, or multiple of these modules or steps may be fabricated as a single integrated circuit module. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software.

Abstract

The present invention discloses a method and system for processing associated content. The method includes: while multimedia is playing, caching some or all of the data in the multimedia; receiving a request, wherein the request is used to request content associated with the cached data; obtaining a keyword according to the cached data; and retrieving the associated content according to the keyword and sending the associated content. The present invention solves the problem in the related art that associated content of interest to the user cannot be generated, enables associated content to be generated according to the user's request, and improves the user experience.

Description

Associated Content Processing Method and System

Technical Field

The present invention relates to the field of communications, and in particular to a method and system for processing associated content.

Background

The inventors have found that when people watch multimedia files, for example television or video, and become interested in some of the content, they have to search for the related information and content themselves. With the development of technology, some videos are now pre-associated with related content for the user to select while watching; however, because that associated content is set in advance, content of interest to the user still cannot be generated dynamically according to the user's needs. No solution to this problem in the related art has been proposed so far.

Summary

Embodiments of the present invention provide a method and system for processing associated content, so as to at least solve the problem in the related art that associated content of interest to the user cannot be generated.

According to one aspect of the embodiments of the present invention, a method for processing associated content is provided, comprising: while multimedia is playing, caching some or all of the data in the multimedia; receiving a request, wherein the request is used to request content associated with the cached data; obtaining a keyword according to the cached data; and retrieving the associated content according to the keyword.

Preferably, caching some or all of the data in the multimedia comprises: setting at least two buffers on the device that caches the data, wherein each of the at least two buffers is used to buffer data of a predetermined duration; and storing the data in the at least two buffers.

Preferably, storing the data in the at least two buffers comprises: after the first buffer is full, caching data in the second buffer, and so on until the n-th buffer is full; after the n-th buffer has stored data, deleting the data in the first buffer and caching data in the first buffer again, wherein n is the number of buffers.

Preferably, caching some or all of the data in the multimedia comprises: caching the data on the device that plays the multimedia; and/or caching the data on the server side.

Preferably, obtaining the keyword according to the cached data comprises: using a keyword parsed from the cached data as the keyword; or parsing one or more keywords from the cached data, sending the one or more keywords to the sender of the request, and using the keyword confirmed by the sender as the keyword.

Preferably, parsing a keyword from the cached data comprises: when the cached data includes audio data, recognizing some or all of the content of the audio data as a keyword; and/or, when the cached data includes video data, obtaining a keyword corresponding to an image of the video data.

Preferably, the device that plays the multimedia and the device that sends the request are different devices, and the two devices are connected to each other.

According to another aspect of the embodiments of the present invention, a system for processing associated content is provided, comprising: a caching module configured to cache some or all of the data in the multimedia while the multimedia is playing; a receiving module configured to receive a request, wherein the request is used to request content associated with the cached data; an obtaining module configured to obtain a keyword according to the cached data; and a retrieval module configured to retrieve the associated content according to the keyword. The preferred configurations of these modules mirror the preferred steps of the method above.

Through the embodiments of the present invention, some or all of the data in the multimedia is cached while the multimedia is playing; a request for content associated with the cached data is received; a keyword is obtained according to the cached data; and the associated content is retrieved according to the keyword and sent. This solves the problem in the related art that associated content of interest to the user cannot be generated, enables associated content to be generated according to the user's request, and improves the user experience.

Brief Description of the Drawings

The drawings described here are used to provide a further understanding of the present invention and form a part of this application; the exemplary embodiments of the present invention and their description are used to explain the present invention and do not unduly limit it. In the drawings:

Fig. 1 is a flowchart of a method for processing associated content according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of a system for processing associated content according to an embodiment of the present invention;
Fig. 3 is a first preferred structural block diagram of the system;
Fig. 4 is a second preferred structural block diagram of the system;
Fig. 5 is a third preferred structural block diagram of the system;
Fig. 6 is a fourth preferred structural block diagram of the system.

Detailed Description

It should be noted that, provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the drawings and in combination with the embodiments.

This embodiment provides a method for processing associated content. Fig. 1 is a flowchart of the method, which, as shown in Fig. 1, comprises the following steps:

Step S102: while multimedia is playing, cache some or all of the data in the multimedia.
Step S104: receive a request, wherein the request is used to request content associated with the cached data.
Step S106: obtain a keyword according to the cached data.
Step S108: retrieve the associated content according to the keyword.

Through the above steps, the associated content is obtained, after the request is received, from a keyword taken from the cached data. This processing depends on the cached data rather than on content pre-associated with the multimedia. The steps therefore solve the problem in the related art that associated content of interest to the user cannot be generated, enable associated content to be generated according to the user's request, and improve the user experience.

Preferably, at least two buffers are set on the device that caches the data, each buffering data of a predetermined duration, and the data is stored in these buffers. For example, the buffers may be set on the device playing the multimedia or on the server. Setting at least two buffers allows more data to be cached, and because each buffer holds data of a predetermined duration, the cached data can be spliced correctly in chronological order from the buffers' contents.

Data can be buffered in the at least two buffers in many ways. As a preferred way, after the first buffer is full, data is cached in the second buffer, and so on until the n-th buffer is full; after the n-th buffer has stored data, the data in the first buffer is deleted and data is cached in the first buffer again, where n is the number of buffers.

One or more keywords may be obtained from the cached data. The obtained keyword may be used directly as the keyword, or it may be sent to the requester for confirmation; that is, either a keyword parsed from the cached data is used as the keyword, or one or more parsed keywords are sent to the sender of the request and the keyword confirmed by the sender is used. A keyword confirmed by the requester is more accurate, but confirmation adds an operation step, so the two approaches can be combined: when only one keyword is obtained, no confirmation is needed; when multiple keywords are obtained, they are sent to the requester for confirmation.

The cached data may be audio data, for which speech recognition technology can be used, or video data, for which image recognition technology can be used, comparing images with a preset image library to obtain keywords. For example, when the cached data includes audio data, some or all of the content of the audio data is recognized as a keyword; and/or, when the cached data includes video data, a keyword corresponding to an image of the video data is obtained. The related art offers many audio and video recognition methods; any method capable of recognizing keywords can be used, so they are not enumerated here one by one.

Preferably, the device that plays the multimedia and the device that sends the request are different devices, and the two devices are connected to each other.

This embodiment also provides a system for processing associated content that implements the above method; what has already been described in the above embodiment and preferred implementations is not repeated here. It should be noted that the names of the modules below do not actually limit the modules; for example, the caching module may also be described as "a module for storing some or all of the data in at least two buffers". All of the modules below may be implemented in a processor; for example, the caching module may be described as "a processor configured to store some or all of the data in at least two buffers", or as "a processor comprising a caching module".

Fig. 2 is a structural block diagram of the system according to an embodiment of the present invention. As shown in Fig. 2, the system comprises a caching module 22, a receiving module 24, an obtaining module 26 and a retrieval module 28. These modules may be implemented on one server or server group, or as different servers; for example, the receiving module 24 and the caching module 22 may be implemented on a video server (the caching module 22 may also reside on the terminal that plays the video), the obtaining module 26 may be implemented on a recognition server, for example a speech recognition server, and the retrieval module may be implemented on an associated content generation server. The system is described in detail below.

The caching module 22 is configured to cache some or all of the data in the multimedia while the multimedia is playing. The receiving module 24 is configured to receive a request, wherein the request is used to request content associated with the cached data. The obtaining module 26 is configured to obtain a keyword according to the cached data. The retrieval module 28 is configured to retrieve the associated content according to the keyword.

Preferably, the caching module 22 is configured to store the data in at least two buffers set on the device that caches the data, each buffer buffering data of a predetermined duration, rotating through the buffers as described above: after the first buffer is full, data is cached in the second buffer, and so on until the n-th buffer is full, after which the first buffer is cleared and reused, where n is the number of buffers. The caching module 22 may be located in the device that plays the multimedia and/or in the server that provides the multimedia.

Preferably, the obtaining module 26 is configured to use a keyword parsed from the cached data as the keyword, or to parse one or more keywords from the cached data, send the one or more keywords to the sender of the request, and use the keyword confirmed by the sender. When the cached data includes audio data, the obtaining module 26 recognizes some or all of the content of the audio data as a keyword; and/or, when the cached data includes video data, it obtains a keyword corresponding to an image of the video data.

Preferably, the device that plays the multimedia and the device that sends the request are different devices, and the two devices are connected to each other.

The following description is given with reference to preferred embodiments. They solve the problem of how to dynamically generate associated content from something of interest that the user hears while watching a video, and the problem of how to present the dynamically generated content without affecting the user's experience of the video currently being watched. Four preferred schemes are provided and described below.

Scheme 1

Fig. 3 is the first preferred structural block diagram of the system. As shown in Fig. 3, the user owns device 1 and device 2, which interact via a short-range communication technology such as Bluetooth; device 1 is used to watch the video and device 2 is used to obtain and display associated content.

To obtain the audio of the video being watched, device 1 is provided with two buffers that store the most recent audio, as follows: buffer b1 can store audio of duration t1, and buffer b2 can store audio of duration t2. When the user watches a video, b1 caches the video's audio from the position where viewing starts; when b1 is full, b2 continues caching the audio; when b2 is full, b1 continues caching the audio and the information previously cached in b1 is cleared. This cycle repeats until the video is paused, stopped, fast-forwarded or rewound. In this way, b1 and b2 always hold the video's audio for the most recent period of time.

Associated content is generated from the audio as follows:

1. When the user hears content of interest, device 2 sends device 1 a request for associated content.
2. After receiving the request, device 1 merges the audio currently cached in b1 and b2 into one audio clip in chronological order, sends the clip to the speech recognition server, and requests recognition.
3. After receiving the request, the speech recognition server recognizes the audio and sends the recognized keyword to the associated content generation server.
4. After receiving the recognized keyword, the associated content generation server searches according to the keyword, generates the corresponding associated content, and returns it to device 2.
5. After receiving the associated content, device 2 displays it to the user.

In step 1 above, device 2 and device 1 are linked and interact via a short-range communication technology such as Bluetooth or WiFi. In step 2, because b1 and b2 store the audio alternately in a cycle, the chronological order can be determined by taking the data in the buffer currently caching audio as the latter segment and the data in the other, already full buffer as the former segment.

Scheme 2

Fig. 4 is the second preferred structural block diagram of the system. Scheme 2 is the same as Scheme 1 except that the two buffers b1 and b2 are set on the video server instead of on device 1, and device 2 sends the request for associated content to the video server, which merges the cached audio in chronological order and sends it to the speech recognition server.

Scheme 3

Fig. 5 is the third preferred structural block diagram of the system. Scheme 3 is the same as Scheme 1 except for keyword confirmation: when more than one keyword is recognized, the speech recognition server sends all of the recognized keywords to device 2; device 2 displays them to the user, the user selects the keywords of interest, and device 2 sends the selected keywords (possibly several) to the associated content generation server, which searches according to them, generates the corresponding associated content, and returns it to device 2.

Scheme 4

Fig. 6 is the fourth preferred structural block diagram of the system. Scheme 4 combines Schemes 2 and 3: the buffers are set on the video server, device 2 sends the request to the video server, and when more than one keyword is recognized the user selects the keywords of interest on device 2 before the associated content is generated and returned.

In each of the above schemes, the sizes of buffers b1 and b2 can be set as follows: a fixed storage space, for example b1 = b2 = 512 KB; or a variable storage space with a fixed duration of stored audio, for example 10 seconds, with the sizes of b1 and b2 adjusted dynamically according to the audio encoding at the time.

Two preferred scenarios are described below.

Scenario 1

With reference to Fig. 3, the user owns device 1 and device 2, where device 1 is a television and device 2 is a mobile phone; they interact via a short-range communication technology such as Bluetooth. The television is used to watch the video and the phone is used to obtain and display associated content. The television is provided with two buffers, b1 and b2, each able to store 5 seconds of audio, cycling as described above so that they always hold the video's most recent audio.

The user watches the news on the television. On hearing "Shenzhou 10", the user wants more related content and shakes the phone to send the television a request for associated content. Thereafter the process is as follows: after receiving the request, the television merges the audio currently cached in b1 and b2 into one audio clip in chronological order (in this embodiment the merged audio is 7 seconds long and contains the speech "Shenzhou 10") and sends it to the speech recognition server, requesting recognition; after receiving the request, the speech recognition server recognizes the audio, identifies the keyword "Shenzhou 10", and sends the identified keyword to the associated content generation server; after receiving the recognized keyword, the associated content generation server searches according to it, generates the corresponding associated content about "Shenzhou 10", and returns the generated content to the phone; after receiving the associated content, the phone displays it to the user, who can learn more about "Shenzhou 10" through the phone.

Scenario 2

With reference to Fig. 4, the setup is the same as in Scenario 1 except that the two 5-second buffers are set on the video server, and shaking the phone sends the request for associated content to the video server, which merges the cached audio and sends it to the speech recognition server; the rest of the flow is identical.

In the above two scenarios, the keyword may also first be confirmed on the phone before retrieval is performed.

Obviously, those skilled in the art should understand that the modules or steps of the embodiments of the present invention described above can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; or they can each be made into an individual integrated-circuit module, or multiple of the modules or steps can be made into a single integrated-circuit module. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.

The above description covers only preferred embodiments of the present invention and is not intended to limit it; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included within its scope of protection.

Industrial Applicability

The technical solutions of the embodiments of the present invention can be applied in the field of multimedia applications. They solve the problem in the related art that associated content of interest to the user cannot be generated, enable associated content to be generated according to the user's request, and improve the user experience.
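The alternating-buffer caching described above (fill b1, then b2, then clear and refill b1; merge chronologically with the currently filling buffer as the latest segment) generalizes to the n-buffer rotation of the embodiments. The following is an illustrative sketch only, not the patented implementation; the buffer count and the byte-rate figure used to turn a duration into a byte capacity are assumptions of the example.

```python
class TimedRingBuffer:
    """n fixed-duration buffers filled in rotation, as in the b1/b2 scheme:
    when buffer i fills, the next (oldest) buffer is cleared and refilled."""

    def __init__(self, n=2, seconds_per_buffer=5, bytes_per_second=16000):
        self.n = n
        # capacity per buffer; a fixed-size variant would use a byte constant
        self.capacity = seconds_per_buffer * bytes_per_second
        self.buffers = [bytearray() for _ in range(n)]
        self.current = 0  # index of the buffer currently being filled

    def feed(self, chunk: bytes):
        """Append incoming audio, rotating when the current buffer is full."""
        while chunk:
            buf = self.buffers[self.current]
            room = self.capacity - len(buf)
            if room == 0:
                # current buffer full: advance and clear the oldest buffer
                self.current = (self.current + 1) % self.n
                self.buffers[self.current] = bytearray()
                continue
            buf.extend(chunk[:room])
            chunk = chunk[room:]

    def merge(self) -> bytes:
        """Merge the cached audio chronologically: already-full buffers are
        the earlier segments, the currently filling buffer is the latest."""
        order = [(self.current + i) % self.n for i in range(1, self.n + 1)]
        return b"".join(bytes(self.buffers[i]) for i in order)
```

With n = 2 and 5-second buffers, `merge()` always returns between one and two buffers' worth (5 to 10 seconds) of the most recent audio; the fixed-size alternative (e.g. b1 = b2 = 512 KB) simply replaces the duration-based capacity with a byte constant.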

Claims

Claims
1. A method for processing associated content, comprising:
   while multimedia is playing, caching some or all of the data in the multimedia;
   receiving a request, wherein the request is used to request content associated with the cached data; obtaining a keyword according to the cached data;
   and retrieving the associated content according to the keyword.
2. The method according to claim 1, wherein caching some or all of the data in the multimedia comprises: setting at least two buffers on the device that caches the data, wherein each of the at least two buffers is used to buffer data of a predetermined duration;
   and storing the data in the at least two buffers.
3. The method according to claim 2, wherein storing the data in the at least two buffers comprises:
   after a first buffer of the at least two buffers is full, caching data in a second buffer, and so on until an n-th buffer is full; after the n-th buffer has stored data, deleting the data in the first buffer and caching data in the first buffer again, wherein n is the number of buffers.
4. The method according to any one of claims 1 to 3, wherein caching some or all of the data in the multimedia comprises:
   caching the data on the device that plays the multimedia; and/or caching the data on the server side.
5. The method according to claim 1, wherein obtaining the keyword according to the cached data comprises:
   using a keyword parsed from the cached data as the keyword; or parsing one or more keywords from the cached data, sending the one or more keywords to the sender of the request, and using the keyword confirmed by the sender as the keyword.
6. The method according to claim 1 or 5, wherein obtaining a keyword from the cached data comprises: when the cached data comprises audio data, recognizing some or all of the content of the audio data as a keyword; and/or,
   when the cached data comprises video data, obtaining a keyword corresponding to an image of the video data.
7. The method according to any one of claims 1 to 6, wherein the device that plays the multimedia and the device that sends the request are different devices, and the device that plays the multimedia and the device that sends the request are connected to each other.
8. A system for processing associated content, comprising:
   a caching module configured to cache some or all of the data in the multimedia while the multimedia is playing;
   a receiving module configured to receive a request, wherein the request is used to request content associated with the cached data;
   an obtaining module configured to obtain a keyword according to the cached data;
   and a retrieval module configured to retrieve the associated content according to the keyword.
9. The system according to claim 8, wherein
   the caching module is configured to store the data in at least two buffers, wherein the at least two buffers are set on the device that caches the data, and each of the at least two buffers is configured to buffer data of a predetermined duration.
10. The system according to claim 9, wherein the caching module is configured to: after a first buffer of the at least two buffers is full, cache data in a second buffer, and so on until an n-th buffer is full; after the n-th buffer has stored data, delete the data in the first buffer and cache data in the first buffer again, wherein n is the number of buffers.
11. The system according to any one of claims 8 to 10, wherein
    the caching module is located in the device that plays the multimedia; and/or in the server that provides the multimedia.
12. The system according to claim 8, wherein
    the obtaining module is configured to use a keyword parsed from the cached data as the keyword; or the obtaining module is configured to parse one or more keywords from the cached data, send the one or more keywords to the sender of the request, and use the keyword confirmed by the sender as the keyword.
13. The system according to claim 8 or 12, wherein the obtaining module is configured to: when the cached data comprises audio data, recognize some or all of the content of the audio data as a keyword; and/or,
    the obtaining module is configured to: when the cached data comprises video data, obtain a keyword corresponding to an image of the video data.
14. The system according to any one of claims 8 to 13, wherein the device that plays the multimedia and the device that sends the request are different devices, and the device that plays the multimedia and the device that sends the request are connected to each other.
PCT/CN2013/083815 2013-08-29 2013-09-18 Associated content processing method and system WO2014169571A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP13882558.3A EP3040877A4 (en) 2013-08-29 2013-09-18 METHOD AND SYSTEM FOR PROCESSING ASSOCIATED CONTENT
US14/915,407 US20160203144A1 (en) 2013-08-29 2013-09-18 Method and System for Processing Associated Content
JP2016537075A JP2016532969A (ja) 2013-08-29 2013-09-18 関連コンテンツの処理方法及びシステム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310385183.9A 2013-08-29 2013-08-29 Associated content processing method and system
CN201310385183.9 2013-08-29

Publications (1)

Publication Number Publication Date
WO2014169571A1 true WO2014169571A1 (zh) 2014-10-23

Family

ID=51730730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/083815 Associated content processing method and system 2013-08-29 2013-09-18

Country Status (5)

Country Link
US (1) US20160203144A1 (zh)
EP (1) EP3040877A4 (zh)
JP (1) JP2016532969A (zh)
CN (1) CN104427350A (zh)
WO (1) WO2014169571A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105721538A (zh) * 2015-12-30 2016-06-29 东莞市青麦田数码科技有限公司 Method and device for data access
CN107659842B (zh) * 2017-08-21 2020-02-07 武汉斗鱼网络科技有限公司 Reporting method and system in video dating
US11210821B2 (en) 2019-11-27 2021-12-28 Arm Limited Graphics processing systems
US11216993B2 (en) 2019-11-27 2022-01-04 Arm Limited Graphics processing systems
US11170555B2 (en) * 2019-11-27 2021-11-09 Arm Limited Graphics processing systems
US11210847B2 (en) 2019-11-27 2021-12-28 Arm Limited Graphics processing systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101634987A (zh) * 2008-07-21 2010-01-27 上海天统电子科技有限公司 Multimedia player
CN101930446A (zh) * 2009-06-26 2010-12-29 鸿富锦精密工业(深圳)有限公司 Electronic device and method for playing music in an embedded electronic device
CN102622401A (zh) * 2012-01-09 2012-08-01 广东步步高电子工业有限公司 Method, system and mobile handheld device for displaying extended related information during audio file playback

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5603058A (en) * 1994-09-08 1997-02-11 International Business Machines Corporation Video optimized media streamer having communication nodes received digital data from storage node and transmitted said data to adapters for generating isochronous digital data streams
JP2003235063A (ja) * 2002-02-13 2003-08-22 Hitachi Ltd Data distribution system for portable terminals
JP2005250738A (ja) * 2004-03-03 2005-09-15 Mitsubishi Electric Corp Digital broadcast recording and playback device and recording and playback method therefor
WO2007034787A1 (ja) * 2005-09-26 2007-03-29 Nec Corporation Mobile phone terminal, data processing start method, and data transmission method
JP2008129884A (ja) * 2006-11-22 2008-06-05 Nec Corp Information retrieval system and method, and broadcast receiver used therein
JP5115089B2 (ja) * 2007-08-10 2013-01-09 富士通株式会社 Keyword extraction method
JP5096128B2 (ja) * 2007-12-25 2012-12-12 エイディシーテクノロジー株式会社 Communication device and program
US8510317B2 (en) * 2008-12-04 2013-08-13 At&T Intellectual Property I, L.P. Providing search results based on keyword detection in media content
JP5189481B2 (ja) * 2008-12-26 2013-04-24 Kddi株式会社 Video-related information presentation device, video-related information presentation system, and control program for a video-related information presentation device
JP5344937B2 (ja) * 2009-01-20 2013-11-20 ヤフー株式会社 Search method and search system based on meta-information of television programs, and search-term generation device
US9264785B2 (en) * 2010-04-01 2016-02-16 Sony Computer Entertainment Inc. Media fingerprinting for content determination and retrieval
JP5817196B2 (ja) * 2010-09-29 2015-11-18 ブラザー工業株式会社 Program for a portable device and control method for a portable device
CN102130946A (zh) * 2011-01-19 2011-07-20 四川长虹电器股份有限公司 Method for avoiding frame skipping during buffering
JP5637930B2 (ja) * 2011-05-12 2014-12-10 日本放送協会 Interest-interval detection device, viewer-interest-information presentation device, and interest-interval detection program
CN103827911B (zh) * 2011-12-13 2018-01-12 英特尔公司 Real-time mapping and navigation of multiple media types through a metadata-based infrastructure
US20140373082A1 (en) * 2012-02-03 2014-12-18 Sharp Kabushiki Kaisha Output system, control method of output system, control program, and recording medium
US10770075B2 (en) * 2014-04-21 2020-09-08 Qualcomm Incorporated Method and apparatus for activating application by speech input

Also Published As

Publication number Publication date
US20160203144A1 (en) 2016-07-14
JP2016532969A (ja) 2016-10-20
CN104427350A (zh) 2015-03-18
EP3040877A4 (en) 2016-09-07
EP3040877A1 (en) 2016-07-06

Similar Documents

Publication Publication Date Title
JP5612676B2 (ja) Media content retrieval system and personal virtual channel
WO2014169571A1 (zh) Associated content processing method and system
US10824670B2 (en) Real-time audio stream search and presentation system
EP3488358A1 (en) Systems and methods for using seektables to stream media items
CN103945259B (zh) 一种在线视频播放方法及装置
US20220095002A1 (en) Method for transmitting media stream, and electronic device
CN102055718B (zh) 一种在http streaming系统中实现分层请求内容的方法,装置和系统
WO2015196749A1 (zh) Information recommendation method and device based on scenario recognition
TW200424877A (en) Method and system for utilizing video content to obtain text keywords or phrases for providing content related links to network-based resources
US20210021655A1 (en) System and method for streaming music on mobile devices
TW200926813A (en) Method for retrieving content accessible to television receiver and system for retrieving content accessible to television receiver
CN113141524B (zh) 资源传输方法、装置、终端及存储介质
WO2018205833A1 (zh) Method and device for transmitting music file information, storage medium, and electronic device
US10743085B2 (en) Automatic annotation of audio-video sequences
EP2915071A1 (en) Bookmarking prospective media content on computer network
US20130173749A1 (en) Methods and devices for providing digital content
US20190320230A1 (en) Method, apparatus, and device for obtaining play data, and storage medium
US8756630B2 (en) Imaging distribution apparatus and imaging distribution method
WO2015188565A1 (zh) Mobile-terminal-based method and device for IPTV video push and video-on-demand
KR20080066513A (ko) Metadata information providing server, client device, metadata information providing method, and content providing method
JP2005141507A (ja) Related-information presentation device and search device, related-information presentation and search methods, and related-information presentation and search programs
JP2008311811A (ja) Mobile phone device
CN114666611A (zh) Timestamp generation method and device, and related equipment
KR20090011100A (ko) Method for indexing the playback position of a streaming file, and portable terminal having a streaming-file playback-position indexing function
KR20060017193A (ko) Music streaming service method for a mobile phone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13882558

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016537075

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2013882558

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 14915407

Country of ref document: US

Ref document number: 2013882558

Country of ref document: EP