WO2022085970A1 - Method for generating an image on the basis of user data text, electronic device therefor, and method for generating an image on the basis of text - Google Patents

Method for generating an image on the basis of user data text, electronic device therefor, and method for generating an image on the basis of text Download PDF

Info

Publication number
WO2022085970A1
WO2022085970A1 (PCT/KR2021/013271)
Authority
WO
WIPO (PCT)
Prior art keywords
text
image
generating
database
person
Prior art date
Application number
PCT/KR2021/013271
Other languages
English (en)
Korean (ko)
Inventor
박철민
Original Assignee
주식회사 에이아이파크
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에이아이파크
Publication of WO2022085970A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the analysis technique
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/57 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring

Definitions

  • The present disclosure relates to a method of generating an image based on user data text, an electronic device therefor, and a method of generating an image based on text.
  • TTS: text to speech
  • An object of the present invention, addressing the above problems, is to provide a method and an electronic device for generating an image based on user data text, and a method for generating an image based on text.
  • a server for achieving the above object includes a communication unit configured to communicate with a user device, a processor, and a memory, wherein the memory includes a database for generating an image based on text, and the processor may be configured to generate an image generation model based on the database, receive a first text from the user device through the communication unit, generate a first image corresponding to the first text based on the image generation model, and transmit the first image to the user device through the communication unit.
  • the database includes an audio database including a plurality of pairs of text and audio corresponding to the text, and a video image database including a plurality of pairs of audio and video images corresponding to the audio
  • to generate the image generation model, the processor may be configured to generate, based on the voice database, a voice generation model for generating a voice based on text, and to generate, based on the video image database, a video image generation model for generating a video image based on the generated voice. To generate the first image corresponding to the first text, the processor may be configured to generate a first voice corresponding to the first text based on the voice generation model and the first text, generate a first video image corresponding to the first voice based on the video image generation model and the first voice, and synthesize the first voice and the first video image to generate the first image.
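  • As a rough illustration of this two-stage pipeline, the following sketch chains a voice generation model and a video image generation model; the class names, method names, and placeholder logic are illustrative assumptions, not names from the application.

```python
# Hedged sketch of the two-stage text-to-image (audio + video) pipeline.
from dataclasses import dataclass
from typing import List


class TextToSpeechModel:
    def synthesize(self, text: str) -> List[float]:
        # Placeholder: a trained model would return an audio waveform.
        return [0.0] * (len(text) * 160)


class SpeechToVideoModel:
    def animate(self, voice: List[float]) -> List[bytes]:
        # Placeholder: a trained model would return frames synced to the voice.
        return [b"frame"] * max(1, len(voice) // 640)


@dataclass
class GeneratedImage:
    # In the application, the "image" comprises both a voice and a video image.
    voice: List[float]
    frames: List[bytes]


def generate_image(tts: TextToSpeechModel, s2v: SpeechToVideoModel, text: str) -> GeneratedImage:
    voice = tts.synthesize(text)           # first voice from the first text
    frames = s2v.animate(voice)            # first video image from the first voice
    return GeneratedImage(voice, frames)   # synthesized first image
```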
  • the database may include a plurality of person-specific databases corresponding to a plurality of persons
  • the processor may be configured to generate a plurality of person-specific image generation models based on the plurality of person-specific databases.
  • the processor may receive, from the user device through the communication unit, selection information about a first person among the plurality of persons, and may be configured to generate the first image based on the first person-specific image generation model corresponding to the first person from among the plurality of person-specific image generation models.
  • a method performed in a server may include an operation of storing a database for generating an image based on text, an operation of generating an image generation model based on the database, an operation of receiving a first text from the user device, an operation of generating a first image corresponding to the first text based on the image generation model, and an operation of transmitting the first image to the user device.
  • the database includes an audio database including a plurality of pairs of text and audio corresponding to the text, and a video image database including a plurality of pairs of audio and video images corresponding to the audio
  • the generating of the image generation model may include an operation of generating, based on the voice database, a voice generation model for generating a voice based on text, and an operation of generating, based on the video image database, a video image generation model for generating a video image based on a voice. The generating of the first image corresponding to the first text may include an operation of generating a first voice corresponding to the first text based on the voice generation model and the first text, an operation of generating a first video image corresponding to the first voice based on the video image generation model and the first voice, and an operation of synthesizing the first voice and the first video image to generate the first image.
  • the database includes a plurality of person-specific databases corresponding to a plurality of persons
  • the operation of generating an image generation model based on the database may include an operation of generating a plurality of person-specific image generation models based on the plurality of person-specific databases
  • the method further includes an operation of receiving, from the user device, selection information about a first person among the plurality of persons, and the operation of generating the first image corresponding to the first text may include an operation of generating the first image based on the first person-specific image generation model corresponding to the first person from among the person-specific image generation models.
  • a non-transitory storage medium stores instructions that, when executed by an electronic device, cause the electronic device to receive a first text, transmit the first text to a server including a database for generating an image based on text and an image generation model based on the database, receive a first image corresponding to the first text from the server, and output the first image.
  • the instructions, when executed by the electronic device, may further cause the electronic device to display a plurality of persons, receive a selection of a first person among the plurality of persons, and transmit the selection of the first person to the server, and the first image may be generated in the server based on the person-specific image generation model corresponding to the first person.
  • according to the present invention, it is possible to provide a method and an electronic device for generating an image based on user data text, and a method for generating an image based on text. Accordingly, by providing visual information through which the recipient can recognize the bearer of the information, the information can be delivered more effectively by drawing the recipient's attention.
  • FIG. 1 is a block diagram of a user device and a server according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating operations performed by a server according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating operations performed by a user device according to an embodiment of the present invention.
  • terms such as first, second, A, and B may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.
  • the term “and/or” includes any combination of a plurality of related listed items or any one of a plurality of related listed items.
  • user data refers to data transmitted to a server through a user device; such user data may include text, speech or voice, an image or video, a gesture, and the like input by the user.
  • text is described as an example of the user data, but the present invention is not limited thereto.
  • the user device 101 may include a communication unit 110 , a processor 120 , a memory 130 , an input interface 140 , and an output interface 150 .
  • the communication unit 110 may communicate with other electronic devices other than the user device 101 , including the server 106 .
  • the type of communication method or communication protocol performed by the communication unit 110 with other electronic devices is not limited.
  • the communication unit 110 of the user device 101 may transmit the first text 191 input by the user to the server 106, and may receive, from the server 106, a first image 192 corresponding to the first text 191.
  • the processor 120 may control other components of the user device 101, such as the communication unit 110, the memory 130, the input interface 140, and the output interface 150, or may receive data from those components.
  • in this specification, the processor 120 performing an operation through other components of the user device 101, such as the communication unit 110, the memory 130, the input interface 140, and the output interface 150, may mean that the processor 120 controls those components to perform the operation. Also, the processor 120 may perform operations on data received from other components of the user device 101.
  • the processor 120 of the user device 101 may transmit the first text 191 input by the user to the server 106 through the communication unit 110, and may receive, from the server 106, a first image 192 corresponding to the first text 191.
  • the memory 130 may store a result of an operation performed by the processor 120 . According to various embodiments, the memory 130 may store computer-executable instructions to perform operations performed by the user device 101 according to an embodiment of the present invention.
  • the input interface 140 may receive an input from a user of the user device 101 .
  • the input interface 140 may include at least one of a touch pad, a digitizer, a stylus pen, a microphone, a camera, a mouse, and a keyboard.
  • the processor 120 of the user device 101 may confirm input of user data such as the first text 191 from the user through the input interface 140 .
  • the output interface 150 may provide an output to a user of the user device 101 .
  • the output interface 150 may include at least one of a TV, a digital signage, a display device such as a monitor or a touch screen display, and an audio output interface such as a speaker.
  • the processor 120 of the user device 101 may output the first image 192 through the output interface 150 .
  • the server 106 may include a memory 160 , a processor 170 , and a communication unit 180 .
  • the memory 160 may include a database 161, a voice generation model 162, and an image generation model 163.
  • the database 161 may include an audio database including a plurality of pairs of text and audio corresponding to the text, and a video image database including a plurality of pairs of audio and video images corresponding to the audio.
  • the voice generation model 162 is a model for generating a voice based on text, generated based on the audio database. The image generation model 163 includes a video image generation model, generated based on the video image database, for generating a video image based on a voice produced by the voice generation model.
  • the database 161 may include an image database including a plurality of combinations of text and images including audio and video images corresponding to the text.
  • alternatively, the image generation model 163 may be a model, generated based on the image database, that generates from text an image including audio and a video image.
  • the processor 170 may control other components of the server 106, such as the communication unit 180, or receive data from those components. In this specification, the processor 170 performing an operation through other components of the server 106, such as the communication unit 180, may mean that the processor 170 controls those components to perform the operation. In addition, the processor 170 may perform operations on data received from other components of the server 106.
  • the processor 170 may generate the voice generation model 162 and the image generation model 163 based on the database 161. Also, according to various embodiments, the processor 170 may receive a first text from the user device 101 through the communication unit 180 and generate a first image corresponding to the first text based on the image generation model 163.
  • the communication unit 180 may communicate with other electronic devices other than the server 106 including the user device 101 .
  • the type of communication method or communication protocol performed by the communication unit 180 with other electronic devices is not limited.
  • the communication unit 180 may receive the first text 191 from the user device 101, and may transmit a first image 192 corresponding to the first text 191 to the user device 101.
  • FIG. 2 is a flowchart illustrating operations performed by a server according to an embodiment of the present invention.
  • the processor 170 of the server 106 may generate the voice generation model 162 based on the database 161 .
  • the processor 170 of the server 106 may generate the image generation model 163 based on the database 161.
  • the voice and image generation models 162 and 163 may each be generated through deep learning.
  • the database 161 may include an audio database including a plurality of pairs of text and audio corresponding to the text, and a video image database including a plurality of pairs of audio and video images corresponding to the audio.
  • the processor 170 may generate, based on the voice database, a voice generation model that generates a voice from text, and may generate, based on the video image database, a video image generation model that generates a video image from a voice.
  • the database 161 may include an image database including a plurality of combinations of text and images including audio and video images corresponding to the text.
  • the processor 170 may generate, based on the image database, an image generation model that generates from text an image including a voice and a video image based on that voice.
  • data for voice may include Mel-frequency cepstral coefficient (MFCC) features of the voice.
  • the data for the video image may include a face image of a person to be displayed on the video.
  • the data for the video image may include data regarding the coordinates of feature points of the lips displayed on the image.
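  • As an illustration of the MFCC features mentioned above, such coefficients could be extracted with a standard audio library such as librosa; the file path and the choice of 13 coefficients are illustrative assumptions, not taken from the application.

```python
import librosa

# Load a voice sample; the path is illustrative.
waveform, sample_rate = librosa.load("speaker_sample.wav", sr=None)

# Compute 13 Mel-frequency cepstral coefficients per audio frame.
mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=13)
print(mfcc.shape)  # (13, n_frames): one coefficient vector per frame
```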
  • the database 161 may include a plurality of person-specific databases corresponding to the plurality of persons.
  • the content of data that can be included in the database for each person is the same as described above with respect to the various embodiments.
  • the processor 170 may generate a plurality of person-specific image generation models based on the plurality of person-specific databases.
  • the processor 170 may also generate a plurality of person-specific voice generation models based on the plurality of person-specific databases.
  • the database 161 may include a plurality of situation-specific databases corresponding to a plurality of situations.
  • the plurality of situations may include, for example, at least one of an utterance between close acquaintances, an utterance in a public setting, an utterance made in anger, an utterance in an urgent situation, and an utterance made during an inquiry.
  • various situations may be set.
  • the content of data that can be included in the database for each situation is the same as described above with respect to the various embodiments.
  • the processor 170 may generate a plurality of situation-specific image generation models based on the plurality of situation-specific databases. Meanwhile, the processor 170 may also generate a plurality of situation-specific voice generation models based on the plurality of situation-specific databases.
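  • One plausible way to organize such person-specific and situation-specific models is a registry keyed by person and situation; the keying scheme and fallback behavior below are illustrative assumptions, since the application only requires that the per-person and per-situation models exist.

```python
# Hypothetical registry of generation models keyed by (person, situation).
class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, person: str, situation: str, model) -> None:
        self._models[(person, situation)] = model

    def select(self, person: str, situation: str = "default"):
        # Fall back to the person's default-situation model when no
        # situation-specific model has been registered (an assumption).
        model = self._models.get((person, situation))
        return model if model is not None else self._models[(person, "default")]
```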
  • the process of generating the voice and image generation models 162 and 163 based on the database 161 may be performed through deep learning.
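  • The application does not specify a particular architecture or training procedure; a generic deep-learning training loop of the kind that could produce such models might look like the following PyTorch sketch, in which the toy network mapping text features to acoustic features is purely illustrative.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in tensors: encoded text inputs and MFCC-like acoustic targets.
text_features = torch.randn(256, 32)
audio_targets = torch.randn(256, 13)
loader = DataLoader(TensorDataset(text_features, audio_targets), batch_size=16)

# Toy model; a real voice or video generation model would be far larger.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 13))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```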
  • the processor 170 may receive the first text 191 from the user device 101 through the communication unit 180 .
  • the processor 170 may further receive, from the user device 101 through the communication unit 180, information on a first person selected by the user from among the plurality of persons.
  • the processor 170 may further receive, from the user device 101 through the communication unit 180, information on a first situation selected by the user from among the plurality of situations.
  • the processor 170 may generate a first image corresponding to the first text based on the image generation model 163.
  • when the database 161 includes an audio database and a video image database, the image generation model 163 includes a voice generation model 162 that generates a voice and a video image generation model. In this case, the processor 170 may generate a first voice corresponding to the first text based on the voice generation model and the first text, generate a first video image corresponding to the first voice based on the video image generation model and the first voice, and synthesize the first voice and the first video image to generate the first image.
  • when the database 161 includes an image database, the image generation model 163 includes a model that generates from text an image including a voice and a video image. In this case, the processor 170 may generate, based on the image generation model and the first text, a first image including a first voice corresponding to the first text and a first video image.
  • when information on a first person selected by the user from among the plurality of persons is received from the user device 101, the processor 170 may generate a first image corresponding to the first text based on the person-specific voice and/or image generation model corresponding to the first person and the first text.
  • when information on a first situation selected by the user from among the plurality of situations is further received from the user device 101, the processor 170 may generate a first image corresponding to the first text based on the situation-specific voice and/or image generation model corresponding to the first situation and the first text.
  • the processor 170 may transmit the first image 192 to the user device 101 through the communication unit 180 .
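  • Taken together, the server-side operations (receive the first text, generate the first image, transmit it back) amount to a simple request/response service. The following Flask sketch is an illustrative assumption; the application does not prescribe a transport protocol, route, or payload format, and generate_first_image is a hypothetical placeholder.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_first_image(text: str, person: str) -> bytes:
    # Placeholder for inference with the (person-specific) image generation
    # model; a real server would return synthesized audio-visual content.
    return f"video for {person}: {text}".encode()

@app.route("/generate", methods=["POST"])
def generate():
    payload = request.get_json()
    first_text = payload["text"]                 # first text from the user device
    person = payload.get("person", "default")    # optional first-person selection
    video = generate_first_image(first_text, person)
    # Return metadata here; a real server would stream the video itself.
    return jsonify({"video_size_bytes": len(video)})

if __name__ == "__main__":
    app.run(port=8080)
```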
  • FIG. 3 is a flowchart illustrating operations performed by a user device according to an embodiment of the present invention.
  • the processor 120 of the user device 101 may confirm the input of the first text through the input interface 140 .
  • the processor 120 of the user device 101 may transmit the first text 191 to the server 106 through the communication unit 110 .
  • the processor 120 of the user device 101 may receive the first image 192 corresponding to the first text 191 from the server 106 through the communication unit 110 .
  • the processor 120 of the user device 101 may output the first image 192 through the output interface 150 .
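  • On the user-device side, the four operations of FIG. 3 reduce to one request/response exchange. This sketch uses the requests library against the illustrative /generate endpoint assumed in the server sketch above; the URL and JSON shape are assumptions, not part of the application.

```python
import requests

# Confirm input of the first text (input interface 140).
first_text = input("Enter text: ")

# Transmit the first text to the server (communication unit 110).
response = requests.post(
    "http://localhost:8080/generate",
    json={"text": first_text, "person": "default"},
    timeout=30,
)
response.raise_for_status()

# Output the received result (output interface 150).
print(response.json())
```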
  • the operation according to the embodiment of the present invention can be implemented as a computer-readable program or code on a computer-readable recording medium.
  • the computer-readable recording medium includes all types of recording devices in which data that can be read by a computer system is stored.
  • the computer-readable recording medium may be distributed in a network-connected computer system to store and execute computer-readable programs or codes in a distributed manner.
  • the computer-readable recording medium may include a hardware device specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • the program instructions may include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • although aspects of the invention have been described in the context of an apparatus, they may also represent a description of the corresponding method, where a block or apparatus corresponds to a method step or a feature of a method step. Similarly, aspects described in the context of a method may also represent a corresponding block, item, or feature of a corresponding apparatus. Some or all of the method steps may be performed by (or using) a hardware device such as, for example, a microprocessor, a programmable computer, or an electronic circuit. In some embodiments, one or more of the most important method steps may be performed by such an apparatus.
  • in some embodiments, a programmable logic device (e.g., a field programmable gate array) may be used to perform some or all of the functionality of the methods described herein.
  • the field programmable gate array may operate in conjunction with a microprocessor to perform one of the methods described herein.
  • the methods are preferably performed by some hardware device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Security & Cryptography (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to an embodiment of the present invention, a server may comprise: a communication unit configured to communicate with a user device; a processor; and a memory. The memory contains a database for generating an image on the basis of text. The processor is configured to: generate an image generation model on the basis of the database; receive a first text from the user device through the communication unit; generate a first image corresponding to the first text on the basis of the image generation model; and transmit the first image to the user device through the communication unit. Various other embodiments are possible.
PCT/KR2021/013271 2020-10-23 2021-09-28 Method for generating an image on the basis of user data text, electronic device therefor, and method for generating an image on the basis of text WO2022085970A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200138104A KR20220053863A (ko) 2020-10-23 2020-10-23 Method for generating an image based on user data text, electronic device therefor, and method for generating an image based on text
KR10-2020-0138104 2020-10-23

Publications (1)

Publication Number Publication Date
WO2022085970A1 (fr)

Family

ID=81290699

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/013271 WO2022085970A1 (fr) Method for generating an image on the basis of user data text, electronic device therefor, and method for generating an image on the basis of text

Country Status (2)

Country Link
KR (1) KR20220053863A (fr)
WO (1) WO2022085970A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040076524A * 2003-02-26 2004-09-01 주식회사 메세지 베이 아시아 Animation character production method and internet service system using the animation character
KR20200045852A * 2018-10-23 2020-05-06 스마트이어 주식회사 Platform for advertisement services within multimedia content through voice synthesis or video editing, and method for providing voice synthesis and video editing services
JP2020123817A * 2019-01-30 2020-08-13 シャープ株式会社 Image forming system, image forming apparatus, image forming method, and program
JP2020140326A * 2019-02-27 2020-09-03 みんとる합同会사 Content generation system and content generation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FILNTISIS, PANAGIOTIS PARASKEVAS; KATSAMANIS, ATHANASIOS; TSIAKOULIS, PIRROS; MARAGOS, PETROS: "Video-realistic expressive audio-visual speech synthesis for the Greek language", Speech Communication, vol. 95, 2017, NL, pages 137-152, XP085288312, ISSN: 0167-6393, DOI: 10.1016/j.specom.2017.08.011 *

Also Published As

Publication number Publication date
KR20220053863A (ko) 2022-05-02

Similar Documents

Publication Publication Date Title
US6377925B1 Electronic translator for assisting communications
WO2011074771A2 (fr) Apparatus and method for foreign language study
US10741172B2 Conference system, conference system control method, and program
WO2020256471A1 (fr) Method and device for generating speech video on the basis of machine learning
WO2021118179A1 (fr) User terminal, video call device, video call system, and control method therefor
WO2020204655A1 (fr) System and method for a context-enriched attentive memory network with global and local encoding for dialogue breakdown detection
WO2016117962A1 (fr) Method and user terminal for providing a hologram-image-based message service, and hologram image display device
WO2019164234A1 (fr) Method for learning personalized intent
WO2018021651A1 (fr) Offline character doll control apparatus and method using user emotion information
WO2021006538A1 (fr) Avatar visual transformation device for expressing a text message as a V-moji, and message transformation method
WO2019004582A1 (fr) Real-time speech recognition apparatus equipped with an ASIC chip, and smartphone
WO2020256475A1 (fr) Method and device for generating speech video using text
WO2018169276A1 (fr) Method for processing language information and electronic device therefor
WO2022092439A1 (fr) Method for providing a speech image, and computing device for executing same
WO2018182063A1 (fr) Device, method, and computer program for providing a video call
WO2013125915A1 (fr) Method and apparatus for processing image information including a face
WO2019031621A1 (fr) Method and system for recognizing emotion during a telephone call and utilizing the recognized emotion
CN113850898A Scene rendering method and apparatus, storage medium, and electronic device
WO2022085970A1 (fr) Method for generating an image on the basis of user data text, electronic device therefor, and method for generating an image on the basis of text
EP3493048A1 Translation device and translation system
WO2015037871A1 (fr) System, server, and terminal for providing a voice reading service by means of text recognition
WO2022196880A1 (fr) Avatar-based interaction service method and device
WO2022255850A1 (fr) Online chat system and provision method capable of supporting multilingual translation
WO2021118180A1 (fr) User terminal, broadcasting apparatus, broadcasting system comprising same, and control method therefor
WO2021118184A1 (fr) User terminal and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21883042
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.08.2023)
122 Ep: pct application non-entry in european phase
    Ref document number: 21883042
    Country of ref document: EP
    Kind code of ref document: A1