WO2017164510A2 - Voice data-based multimedia content tagging method and system using same - Google Patents

Voice data-based multimedia content tagging method and system using same

Info

Publication number
WO2017164510A2
WO2017164510A2
Authority
WO
WIPO (PCT)
Prior art keywords
voice
multimedia content
tag
server
keyword information
Prior art date
Application number
PCT/KR2017/001103
Other languages
English (en)
Korean (ko)
Other versions
WO2017164510A3 (fr)
Inventor
김준모
Original Assignee
김준모
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 김준모
Publication of WO2017164510A2
Publication of WO2017164510A3


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit

Definitions

  • The present invention relates to a voice data-based multimedia content tagging method and a system using the same, and more particularly, to a voice data-based multimedia content tagging method that generates a voice tag based on the voice data of multimedia content and tags the generated voice tag onto that content, and a system using the same.
  • Multimedia content refers to information-service content used in systems and services that create, integrate, transmit, and process various types of information such as text, voice, and video.
  • Such multimedia content can deliver far more information, more effectively, than images, sounds, or text alone, and demand for it is steadily increasing compared with content composed only of images, sounds, or text.
  • Conventional methods of searching for multimedia content require the user either to retrieve and play the actual content or to search descriptive material made up of images or text attached to it. This takes a great deal of time and often fails to find exactly the content the user wants.
  • Korean Patent Registration No. 10-1403317 (an information providing system including video with tagging information) embeds tag information that appears as images in the multimedia and presents it to the user. However, because the tag information consists of images, the user must check the images one by one to find the desired multimedia content, and the search covers only the images stored in the tag information among all images contained in the content, so the search results cannot be trusted.
  • The present invention has been made to solve the above problems, and an object of the present invention is to provide a voice data-based multimedia content tagging method and system that generate a voice tag based on the voice data of multimedia content and tag the generated voice tag onto the content.
  • Another object of the present invention is to provide a voice data-based multimedia content tagging method and system capable of searching for multimedia content associated with a specific search word based on voice tags.
  • According to an embodiment of the present invention, a voice data-based multimedia content tagging method includes: extracting, by a server, voice keyword information from multimedia content; generating, by the server, a voice tag based on the extracted voice keyword information; and tagging, by the server, the generated voice tag onto the multimedia content.
  • Here, the server may separate the voice data included in the multimedia content into morpheme units, select the voice data corresponding to lexical morphemes from the separated voice data, and extract the selected voice data as voice keyword information.
  • The server may also convert the extracted voice keyword information into text and generate the voice tag by matching the textualized voice keyword information with the synchronization time information of the voice data synchronized to the timeline of the multimedia content.
  • The server may add the generated voice tag to the multimedia content and encode the tagged content in a predetermined format.
  • The generating may include generating the voice tag by matching the extracted voice keyword information with the synchronization time information of the voice data synchronized to the timeline of the multimedia content.
  • Alternatively, the generating may include generating the voice tag by matching the extracted voice keyword information with both the synchronization time information of the voice data synchronized to the timeline of the multimedia content and the URL address to which the multimedia content is linked, as in the sketch below.
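To make the tag structure described above concrete, here is a minimal sketch of a voice-tag record with the three matched fields this summary names (textualized keyword, synchronization time information, linked URL). The class and field names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class VoiceTag:
    """One voice tag: a voice keyword matched with sync time info and a content URL."""
    keyword: str    # textualized voice keyword information (hypothetical field name)
    start_s: float  # synchronization start time on the content timeline, in seconds
    end_s: float    # synchronization end time, in seconds
    url: str        # URL address to which the multimedia content is linked

# e.g. the word "swing" spoken from 16:30 (990 s) to 16:42 (1002 s) of the content
tag = VoiceTag("swing", 990.0, 1002.0, "https://example.com/content/42")  # URL hypothetical
```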
  • The server may also textualize the extracted voice keyword information, set at least one item of the textualized voice keyword information as a keyword, and set the remaining items as stop words; the voice tag is then generated by selecting only the voice data set as keywords and filtering out the voice data set as stop words.
  • The voice data-based multimedia content tagging method may further include: requesting, by a mobile terminal, a search from the server based on a specific search word; and performing, by the server, the requested search.
  • The server may perform the search by comparing the tagged voice tags with the search word and detecting, among them, the voice tags associated with the search word.
  • When providing the detected voice tags to the mobile terminal as search results, the server may provide voice tags containing voice data identical to the search word first, and then voice tags containing voice data merely similar to the search word.
  • Among these, the voice tags of multimedia content with a relatively high number of download requests and real-time playback requests may be provided to the mobile terminal ahead of the voice tags of content with relatively few such requests.
  • According to another embodiment of the present invention, a voice data-based multimedia content tagging system includes: a server that extracts voice keyword information from multimedia content, generates a voice tag based on the extracted voice keyword information, and tags the generated voice tag onto the multimedia content; and a mobile terminal that receives the tagged multimedia content from the server.
  • In another embodiment, the mobile terminal performs the method itself: the mobile terminal extracts the voice keyword information from the multimedia content, generates a voice tag based on the extracted voice keyword information, and tags the generated voice tag onto the multimedia content. In this case, when the voice keyword information is extracted, the voice tag may be generated by matching it with path information for the storage path of the multimedia content.
  • Accordingly, a search service enabling the user of the mobile terminal to find desired multimedia content can be provided to the user.
  • In addition, reliable search results can be obtained by searching, among the voice tags generated from voice data, for the voice tags associated with a specific search word.
  • FIG. 1 is a diagram illustrating a voice data-based multimedia content tagging system according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the configuration of a voice data based multimedia content tagging system according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a voice data-based multimedia content tagging method according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a data structure of multimedia content tagged with a voice data-based multimedia content tagging method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method of tagging voice data based multimedia content according to an embodiment of the present invention in more detail.
  • FIG. 6 is a diagram illustrating a process of extracting voice keyword information in a voice data-based multimedia content tagging method according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a process of generating a voice tag using a voice data based multimedia content tagging method according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a process of generating a voice tag using a voice data-based multimedia content tagging method according to an embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a voice data-based multimedia content tagging method according to an embodiment of the present invention in more detail.
  • FIG. 1 is a diagram illustrating a voice data-based multimedia content tagging system according to an embodiment of the present invention, and FIG. 2 is a diagram illustrating the configuration of that system.
  • Hereinafter, the voice data-based multimedia content tagging system according to the present embodiment will be described with reference to FIGS. 1 and 2.
  • The voice data-based multimedia content tagging system is provided to generate a voice tag based on the voice data of multimedia content, tag the generated voice tag onto the content, and search for multimedia content associated with a specific search word based on the tagged voice tags.
  • the present voice data-based multimedia content tagging system includes a server 100 and a mobile terminal 200.
  • the server 100 is provided to tag the voice tag to the multimedia content and to search for the multimedia content associated with the specific search word based on the tagged voice tag.
  • the server 100 may extract voice keyword information based on the multimedia content, generate a voice tag based on the extracted voice keyword information, and tag the generated voice tag on the multimedia content.
  • the server 100 may search for multimedia content associated with the specific search word based on the tagged voice tag.
  • the server 100 includes a communication unit 110, a control unit 120 and a storage unit 130.
  • The communication unit 110 of the server is provided to perform Internet communication with external devices and the mobile terminal 200 over a network.
  • the communication unit 110 may provide tagged multimedia content to the mobile terminal 200.
  • the controller 120 of the server is provided to extract voice keyword information based on the multimedia content, generate a voice tag based on the extracted voice keyword information, and tag the generated voice tag on the multimedia content.
  • the controller 120 may search for multimedia content associated with the specific search word based on the tagged voice tag.
  • the storage unit 130 of the server is provided to store multimedia content tagged with a voice tag.
  • the storage unit 130 may store data for a search service for multimedia content and a URL address to which the multimedia content is linked.
  • The mobile terminal 200 is provided to communicate with the server 100 over the Internet; it receives tagged multimedia content from the server 100 and can request the server 100 to perform a search based on a specific search word.
  • the mobile terminal 200 includes a communication unit 210, a control unit 220, a storage unit 230, and a display unit 240.
  • the communication unit 210 of the mobile terminal is provided to perform internet communication with the server 100 using a network communication network.
  • the communication unit 210 may request a search from the server 100 based on a specific search word or receive multimedia content provided from the server 100.
  • The controller 220 of the mobile terminal is provided to control the overall operation of the mobile terminal 200. For example, when an input signal requesting a search based on a specific search word is received through a separate input unit, the controller may send a search request for that search word to the server 100 through the communication unit 210.
  • the storage unit 230 of the mobile terminal is provided to store various programs necessary for driving the mobile terminal 200.
  • the storage unit 230 may store data of a search service for searching for multimedia content or multimedia content provided from the server 100.
  • The display unit 240 of the mobile terminal is provided to output the visual information of the mobile terminal 200.
  • the display 240 may output multimedia content provided from the server 100.
  • FIG. 3 is a flowchart illustrating a voice data-based multimedia content tagging method according to an embodiment of the present invention, and FIG. 4 is a diagram illustrating the data structure of multimedia content tagged by the voice data-based multimedia content tagging method.
  • First, the server 100 extracts voice keyword information from the multimedia content (S110). For example, the server 100 may separate the voice data included in the multimedia content into morpheme units, select the voice data corresponding to lexical morphemes from the separated voice data, and extract the selected voice data as voice keyword information.
  • Here, morphemes are the smallest meaning-bearing units at the morphological level of language, and lexical morphemes are morphemes that denote specific objects, actions, or states, as in the sketch below.
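As one possible realization of this extraction step, the sketch below runs a transcript through a Korean morphological analyzer and keeps only morphemes tagged as nouns, verbs, or adjectives, i.e. lexical morphemes. The choice of KoNLPy's Okt analyzer is an assumption for illustration; the patent does not prescribe any particular analyzer or language.

```python
# Hypothetical sketch of S110: morpheme separation + lexical-morpheme selection.
# Assumes KoNLPy (pip install konlpy, requires a JVM); not named in the patent.
from konlpy.tag import Okt

LEXICAL_POS = {"Noun", "Verb", "Adjective"}  # morphemes denoting objects, actions, states

def extract_voice_keywords(transcript: str) -> list[str]:
    """Separate a recognized transcript into morphemes and keep the lexical ones."""
    okt = Okt()
    return [morpheme for morpheme, pos in okt.pos(transcript) if pos in LEXICAL_POS]
```

In a full system the transcript would come from speech recognition over the content's voice data, and each kept morpheme would stay linked to its audio segment.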
  • When the voice keyword information has been extracted, the server 100 generates a voice tag based on the extracted voice keyword information (S120). For example, the server 100 may generate the voice tag by matching the extracted voice keyword information with the synchronization time information of the voice data.
  • the server 100 may generate the voice tag by matching the extracted voice keyword information with the synchronization time information of the voice data and the URL address information to which the multimedia content is linked.
  • Alternatively, the server 100 may store the generated voice tag separately, without tagging it onto the multimedia content.
  • In that case, the voice tag may be stored in the storage unit 130 of the server as a file separate from the multimedia content.
  • The server 100 may then provide the mobile terminal 200 with the voice tags of the multimedia content matching a search condition.
  • The mobile terminal may receive the multimedia content linked to the URL address by decoding the URL address information area of such a voice tag.
  • Next, the server 100 tags the generated voice tag onto the multimedia content (S130). For example, the server 100 may add the generated voice tag to the multimedia content and encode the tagged content in a predetermined format.
  • Since the server 100 presets the format of the multimedia content, content that was originally encoded in various formats can be standardized to a single format.
  • the encoded and tagged multimedia content may be stored in the server 100 as a new file.
  • the multimedia content tagged with the voice tag may be composed of a data area of the voice tag and a data area of the multimedia content as shown in FIG. 4.
  • If the multimedia content is already encoded, the server 100 may decode it, add the generated voice tag, and re-encode the tagged content in the predetermined format, as sketched below.
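The byte layout below is a minimal sketch of one possible "predetermined format" for step S130, mirroring the two-area structure of FIG. 4: a serialized voice-tag data area prepended to the multimedia-content data area with a length prefix. The JSON serialization and 4-byte length field are assumptions; the patent fixes no concrete encoding.

```python
import json

def tag_content(content: bytes, voice_tags: list[dict]) -> bytes:
    """S130 sketch: [tag-area length][voice-tag data area][content data area]."""
    tag_area = json.dumps(voice_tags, ensure_ascii=False).encode("utf-8")
    return len(tag_area).to_bytes(4, "big") + tag_area + content

def untag_content(blob: bytes) -> tuple[list[dict], bytes]:
    """Split a tagged file back into its voice tags and the original content."""
    n = int.from_bytes(blob[:4], "big")
    return json.loads(blob[4:4 + n].decode("utf-8")), blob[4 + n:]
```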
  • the server 100 may generate a voice tag by matching the extracted voice keyword information with path information on a storage path of the multimedia content.
  • Here, the storage path refers to the path of the file in which the multimedia content is stored in the storage unit.
  • When the mobile terminal 200 requests the server 100 to perform a search based on a specific search word (S140), the server 100 performs the requested search (S150). For example, the server 100 may perform the search by comparing the tagged voice tags with the search word and detecting, among them, the voice tags associated with the search word.
  • Meanwhile, the mobile terminal 200 may itself perform the voice data-based multimedia content tagging method by installing an application for that purpose.
  • In that case, the mobile terminal extracts the voice keyword information from the multimedia content.
  • the mobile terminal 200 may generate a voice tag based on the extracted voice keyword information.
  • the mobile terminal 200 may generate the voice tag by matching the extracted voice keyword information with the path information of the multimedia content.
  • The mobile terminal 200 may tag the generated voice tag onto the multimedia content, or store the voice tag separately without tagging the content.
  • the mobile terminal 200 may add the generated voice tag to the multimedia content and encode the tag in the predetermined format.
  • the encoded and tagged multimedia content may be stored in the mobile terminal 200 as a new file.
  • Among the multimedia content stored in the mobile terminal 200, the content matching a search condition can then be selected by examining the voice keyword information area of the tagged voice tags.
  • The selected multimedia content can be retrieved and played on the mobile terminal 200.
  • FIG. 5 is a flowchart illustrating the voice data-based multimedia content tagging method according to an embodiment of the present invention in more detail, FIG. 6 is a diagram illustrating the process of extracting voice keyword information in the method, and FIGS. 7 and 8 are diagrams illustrating the process of generating a voice tag using the method.
  • Specifically, the server 100 separates the voice data included in the multimedia content into morpheme units (S210), selects the voice data corresponding to lexical morphemes from the separated voice data (S220), and extracts the selected voice data as voice keyword information (S230).
  • For example, as illustrated in FIG. 6, the server 100 may select, from the separated voice data, the voice data corresponding to lexical morphemes, such as "Mongryong", "swing", "Chunhyang", "bo(da)", "love", and "pa(lo)", and extract the selected voice data as voice keyword information (S230).
  • the server 100 may generate the voice tag by matching the extracted voice keyword information with the synchronization time information of the voice data (S240).
  • FIG. 7 is a diagram schematically illustrating a timeline of multimedia content
  • FIG. 8 illustrates a voice tag generated by matching voice keyword information with synchronization time information.
  • Here, the extracted voice keyword information is the extracted voice data itself, and the synchronization time information of the voice data comprises the synchronization start time and synchronization end time of that voice data on the timeline of the multimedia content.
  • For example, the server 100 may generate a voice tag by matching the voice keyword information containing the voice data "swing" with the synchronization time information indicating that this voice data is synchronized from T1 (16:30) to T2 (16:42).
  • Likewise, the server 100 may generate a voice tag by matching the voice keyword information containing the voice data "Chunhyang" with the synchronization time information indicating that this voice data is synchronized from T3 (17:30) to T4 (18:22), as in the sketch below.
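As a sketch of this matching step (S240), suppose the recognizer reports each voice keyword with its start and end position on the content timeline; the tags for the "swing" and "Chunhyang" examples above could then be assembled as follows. The (word, start, end) input format is an assumption.

```python
def mmss(t: str) -> int:
    """Convert an 'MM:SS' timeline position to seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

# hypothetical recognizer output: (voice keyword, sync start, sync end)
word_timings = [
    ("swing",     mmss("16:30"), mmss("16:42")),  # T1 .. T2
    ("Chunhyang", mmss("17:30"), mmss("18:22")),  # T3 .. T4
]

# S240: match each voice keyword with its synchronization time information
voice_tags = [{"keyword": w, "start_s": a, "end_s": b} for w, a, b in word_timings]
```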
  • The server 100 may also convert the extracted voice keyword information into text and generate the voice tag by matching the textualized voice keyword information with the synchronization time information of the voice data.
  • Furthermore, the server 100 may textualize the extracted voice keyword information, set at least one item of the textualized voice keyword information as a keyword, and set the remaining items as stop words; the voice tag is then generated by selecting only the voice data set as keywords and filtering out the voice data set as stop words.
  • For example, if the voice data corresponding to the lexical morphemes "Mongryong", "swing", "Chunhyang", "bo(da)", "love", and "pa(lo)" have been extracted as voice keyword information and textualized, and "Chunhyang" is set as the keyword, then the remaining items not set as keywords are excluded as stop words, and the voice tag is generated from the textualized voice data "Chunhyang" alone.
  • Here, a keyword means a headword selected for indexing, and a stop word means a word excluded from indexing, as in the sketch below.
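The keyword/stop-word selection just described might look like this sketch, with the example words romanized as in the text. Which items count as keywords is a configuration input here; the patent leaves the selection criterion open.

```python
def select_keywords(voice_keywords: list[str], keywords: set[str]) -> list[str]:
    """Keep only items set as keywords; the rest are filtered out as stop words."""
    return [w for w in voice_keywords if w in keywords]

extracted = ["Mongryong", "swing", "Chunhyang", "bo(da)", "love", "pa(lo)"]
print(select_keywords(extracted, keywords={"Chunhyang"}))  # -> ['Chunhyang']
```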
  • the server 100 may generate the voice tag by matching the extracted voice keyword information with the synchronization time information of the voice data and the URL address to which the multimedia content is linked.
  • Based on the voice tag of the retrieved multimedia content, the mobile terminal 200 can then receive the multimedia content linked to the URL address.
  • When the voice tag has been generated, the server 100 may add it to the multimedia content and encode the tagged content in the predetermined format, completing the tagging (S250).
  • Since the server 100 presets the format of the multimedia content, content that was originally encoded in various formats can be standardized to a single format.
  • If the multimedia content is already encoded, the server 100 may decode it, add the generated voice tag, and re-encode the tagged content in the predetermined format.
  • When the mobile terminal 200 then requests a search based on a specific search word, the server 100 may perform the requested search (S270), as in the sketch below.
  • The search word may be any natural-language word.
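One way to realize the comparison of tags against a natural-language search word is sketched below: exact matches on the tag's keyword are returned first, then near matches scored with a string-similarity ratio. Using difflib and a 0.6 threshold is an assumption; the patent does not say how similarity is computed.

```python
from difflib import SequenceMatcher

def search_tags(tags: list[dict], query: str, threshold: float = 0.6) -> list[dict]:
    """S270 sketch: exact-match voice tags first, then merely similar ones."""
    exact = [t for t in tags if t["keyword"] == query]
    similar = [t for t in tags if t["keyword"] != query
               and SequenceMatcher(None, t["keyword"], query).ratio() >= threshold]
    return exact + similar
```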
  • FIG. 9 is a flowchart illustrating a voice data-based multimedia content tagging method according to an embodiment of the present invention in more detail.
  • the server 100 may extract voice keyword information based on the multimedia content as described above (S410).
  • the server 100 may generate a voice tag based on the extracted voice keyword information (S420).
  • the server 100 may tag the generated voice tag to the multimedia content (S430).
  • After tagging the voice tag onto the multimedia content, when the mobile terminal 200 requests the server 100 to search based on a specific search word (S440), the server 100 compares the tagged voice tags with the search word received from the mobile terminal 200 to determine whether any tagged voice tag is associated with the search word (S450).
  • The server 100 performs the search by detecting such voice tags.
  • In providing the results, the server 100 preferentially provides voice tags containing voice data identical to the search word (S460), and then provides voice tags containing voice data similar to the search word (S470).
  • Moreover, among the voice tags containing the same voice data as the search word, the server 100 preferentially provides the voice tags of multimedia content with a relatively high number of download requests and real-time playback requests over the voice tags of content with relatively few such requests, as in the sketch below.
  • Here, the number of download requests and the number of real-time playback requests of multimedia content mean the number of times other mobile terminals 200 have requested the content for download and for real-time playback, respectively.
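The popularity ordering described here might be realized as in the sketch below, where each exact-match result carries its download and real-time playback request counts; summing the two counts as the sort key is an assumption, since the text only says content with relatively many requests comes first.

```python
def rank_results(results: list[dict]) -> list[dict]:
    """Order voice tags by download + real-time playback request counts, descending."""
    return sorted(results,
                  key=lambda r: r["download_requests"] + r["playback_requests"],
                  reverse=True)

results = [
    {"keyword": "Chunhyang", "download_requests": 12,  "playback_requests": 3},
    {"keyword": "Chunhyang", "download_requests": 870, "playback_requests": 442},
]
print(rank_results(results)[0]["download_requests"])  # -> 870
```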
  • In this way, a voice tag associated with a specific search word can be found among the voice tags generated based on voice data, and reliable search results can be obtained through the search.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Disclosed are a voice data-based multimedia content tagging method for generating a voice tag based on the voice data of multimedia content and tagging the generated voice tag onto that content, and a system using the method. The voice data-based multimedia content tagging method comprises the steps of: having a server generate a voice tag based on extracted voice keyword information; and having the server tag the generated voice tag onto multimedia content. Accordingly, a search service enabling a user of a mobile terminal to find desired multimedia content can be provided to the user. Furthermore, in a search on a specific search word, reliable results can be acquired by searching, among the voice tags generated from voice data, for those associated with the search word.
PCT/KR2017/001103 2016-03-25 2017-02-02 Voice data-based multimedia content tagging method and system using same WO2017164510A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2016-0036059 2016-03-25
KR1020160036059A KR101832050B1 (ko) 2016-03-25 2016-03-25 Voice data-based multimedia content tagging method and system using same

Publications (2)

Publication Number Publication Date
WO2017164510A2 (fr) 2017-09-28
WO2017164510A3 WO2017164510A3 (fr) 2018-08-02

Family

ID=59900594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/001103 WO2017164510A2 (fr) 2016-03-25 2017-02-02 Procédé de marquage de contenu multimédia basé sur des données vocales, et système l'utilisant

Country Status (2)

Country Link
KR (1) KR101832050B1 (fr)
WO (1) WO2017164510A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215657A (zh) * 2018-11-23 2019-01-15 四川工大创兴大数据有限公司 Voice robot for granary monitoring and application thereof

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102523135B1 (ko) * 2018-01-09 2023-04-21 삼성전자주식회사 Electronic device and subtitle presentation method by the electronic device
KR20220138512A (ko) 2021-04-05 2022-10-13 이피엘코딩 주식회사 Image learning and recognition method using voice tagging on a mobile device
WO2023233421A1 (fr) * 2022-05-31 2023-12-07 Humanify Technologies Pvt Ltd Système et procédé de balisage de contenu multimédia

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4802689B2 (ja) * 2005-12-08 2011-10-26 株式会社日立製作所 Information recognition device and information recognition program
KR20090062371A (ko) * 2007-12-13 2009-06-17 주식회사 그래텍 System and method for providing additional information
CA2797764A1 (fr) * 2010-04-30 2011-11-03 Now Technologies (Ip) Limited Content management apparatus
KR101356006B1 (ko) * 2012-02-06 2014-02-12 한국과학기술원 Voice-based multimedia content tagging method and apparatus with settable sections
KR20130141094A (ko) * 2012-06-15 2013-12-26 휴텍 주식회사 Web content search management method using voice tags, and computer-readable recording medium storing a web content search management program therefor

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215657A (zh) * 2018-11-23 2019-01-15 四川工大创兴大数据有限公司 Voice robot for granary monitoring and application thereof

Also Published As

Publication number Publication date
KR101832050B1 (ko) 2018-02-23
KR20170111161A (ko) 2017-10-12
WO2017164510A3 (fr) 2018-08-02

Similar Documents

Publication Publication Date Title
WO2017164510A2 (fr) Procédé de marquage de contenu multimédia basé sur des données vocales, et système l'utilisant
WO2010117213A2 (fr) Appareil et procédé destinés à fournir des informations en lien avec des programmes de radiodiffusion
WO2015119335A1 (fr) Procédé et dispositif de recommandation de contenu
WO2011059275A2 (fr) Procédé et appareil de gestion de données
WO2011053010A2 (fr) Appareil et procédé de synchronisation de contenu de livre numérique avec un contenu vidéo et système associé
WO2011084039A9 (fr) Procédé de distribution de contenus multimédia et appareil correspondant
EP3143764A1 (fr) Appareil et procédé de traitement de vidéo
WO2016017986A1 (fr) Serveur, procédé de fourniture d'informations de serveur, appareil d'affichage, procédé de commande d'appareil d'affichage et système de fourniture d'informations
WO2016035970A1 (fr) Systeme publicitaire utilisant une recherche de publicite
WO2011162446A1 (fr) Module et procédé permettant de décider une entité nommée d'un terme à l'aide d'un dictionnaire d'entités nommées combiné avec un schéma d'ontologie et une règle d'exploration
WO2015088155A1 (fr) Système interactif, serveur et procédé de commande associé
WO2015129983A1 (fr) Dispositif et procédé destinés à recommander un film en fonction de l'exploration distribuée de règles d'association imprécises
WO2021025542A1 (fr) Procédé, système et dispositif pour partager un moteur d'intelligence par de multiples dispositifs
WO2021167220A1 (fr) Procédé et système pour générer automatiquement une table des matières pour une vidéo sur la base de contenus
WO2017104863A1 (fr) Système et procédé de conversion entre des données hétérogènes stockées par l'intermédiaire d'une api de collecte de données
WO2012070766A2 (fr) Procédé destiné à générer des données de balisage vidéo sur la base d'informations d'empreintes digitales vidéo, et procédé et système de fourniture d'informations utilisant ce procédé
WO2023018150A1 (fr) Procédé et dispositif pour la recherche personnalisée de supports visuels
WO2016186326A1 (fr) Dispositif de fourniture de liste de mots de recherche et procédé associé
WO2020138608A1 (fr) Procédé et appareil de réponse à une question faisant appel à une pluralité de robots conversationnels
WO2022065537A1 (fr) Dispositif de reproduction vidéo pour assurer la synchronisation de sous-titres et son procédé de fonctionnement
WO2010076917A2 (fr) Procédé de fonctionnement de récepteur de diffusion stockant un programme de diffusion et récepteur de diffusion exploitant le procédé
WO2021091003A1 (fr) Procédé de gestion des droits d'auteur d'un contenu
WO2019160388A1 (fr) Appareil et système pour fournir des contenus sur la base d'énoncés d'utilisateur
EP2550636A2 (fr) Procédé permettant de gérer des informations de sélection concernant un contenu multimédia, et dispositif utilisateur, service et support de stockage permettant d'exécuter le procédé
WO2022119326A1 (fr) Procédé de fourniture de service de production d'un contenu de conversion multimédia à l'aide d'une adaptation de ressource d'image, et appareil associé

Legal Events

Date Code Title Description
NENP Non-entry into the national phase in:

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17770481

Country of ref document: EP

Kind code of ref document: A2

32PN Ep: public notification in the EP Bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.01.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17770481

Country of ref document: EP

Kind code of ref document: A2