WO2018016760A1 - Electronic device and control method therefor - Google Patents

Electronic device and control method therefor

Info

Publication number
WO2018016760A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
short
server
short clip
received
Prior art date
Application number
PCT/KR2017/006790
Other languages
English (en)
Korean (ko)
Inventor
송영석
김한기
임동현
박해광
손준호
이우정
Original Assignee
삼성전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자 주식회사 filed Critical 삼성전자 주식회사
Priority to EP17831233.6A priority Critical patent/EP3438852B1/fr
Priority to US16/319,545 priority patent/US10957321B2/en
Publication of WO2018016760A1 publication Critical patent/WO2018016760A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • H04N21/4828End-user interface for program selection for searching program descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6581Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • GPHYSICS
    • G08SIGNALLING
    • G08CTRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C2201/00Transmission systems of control signals via wireless link
    • G08C2201/30User interface
    • G08C2201/31Voice input
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/06Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063Training
    • G10L2015/0638Interactive procedures

Definitions

  • the present invention relates to an electronic device and a control method thereof, and more particularly, to an electronic device providing a short clip and a control method thereof.
  • multimedia devices such as TVs, PCs, laptop computers, tablet PCs, smartphones, and the like are widely used in most homes.
  • when a search result includes video or audio content, the content is provided as it is.
  • the original content contains many parts that are irrelevant to the user's question, so the user is presented with search results that are largely meaningless.
  • the present invention has been made to solve the above-described problem, and an object of the present invention is to provide an electronic device and a control method thereof for providing a short clip for original content based on a keyword.
  • An electronic device according to an embodiment of the present invention includes a communication unit for communicating with a server that stores information about a plurality of short clips and keywords for each of the plurality of short clips, an output unit, an input unit for receiving a user's spoken voice, and a processor configured to, when the spoken voice is received, transmit a short clip request signal to the server based on a keyword included in the received spoken voice and information on the content output from the output unit, and to output the short clip through the output unit based on the information about the short clip received from the server according to the request signal.
  • the information on the plurality of short clips may include at least one of information on a location where the plurality of short clips are stored and information on a time interval including the keyword, and when the information about the short clip is received from the server according to the request signal, the processor may output the short clip based on the received information.
  • each of the plurality of short clips may be video content or sound content generated by editing a portion including a specific keyword in specific content.
  • the processor may provide additional information about the short clip when the additional information about the short clip is received, and the additional information about the short clip may include at least one of a title, a genre, and a broadcast time of the original content, a generation time of the short clip, broadcasting station information of the original content, and the keyword.
  • the output unit may include at least one of a display and a speaker.
  • the output unit is implemented to include only a speaker, and the processor may provide additional information about the short clip as audio through the speaker.
  • the output unit may include at least one of a display and a speaker, and the processor may additionally transmit, to the server, a short clip request signal associated with a keyword that is repeated a predetermined number of times or more for a predetermined time in the audio output through the speaker.
  • the processor may provide additional response information for the spoken voice together with the short clip based on the keyword included in the received spoken voice.
  • the processor may transmit the request signal including the keyword and the user information to the server, and receive a short clip associated with the keyword and the user information from the server.
  • the processor may transmit the received spoken voice to a voice recognition server or the server, and transmit the short clip request signal to the server based on the keyword and the information about the content received from the voice recognition server or the server.
  • a control method of an electronic device according to an embodiment may include outputting content, receiving a user's spoken voice, transmitting, when the spoken voice is received, a short clip request signal to the server based on a keyword included in the received spoken voice and information on the content, and outputting the short clip based on the information about the short clip received from the server according to the request signal.
  • the information on the plurality of short clips may include at least one of information on a location where the plurality of short clips are stored and information on a time interval including the keyword, and in the outputting, when the information about the short clip is received from the server, the short clip may be output based on the received information.
  • each of the plurality of short clips may be video content or sound content generated by editing a portion including a specific keyword in specific content.
  • the outputting of the short clip may include providing additional information about the short clip when the additional information about the short clip is received, and the additional information about the short clip may include at least one of a title, a genre, and a broadcast time of the original content, a generation time of the short clip, broadcast station information of the original content, and the keyword.
  • the outputting of the short clip may provide additional information about the short clip as audio through a speaker.
  • the electronic device may include at least one of a display and a speaker, and the transmitting may include additionally transmitting, to the server, a short clip request signal associated with a keyword that is repeated at least a predetermined number of times for a predetermined time in the audio output through the speaker.
  • the outputting of the short clip may provide additional response information for the spoken voice together with the short clip based on a keyword included in the received spoken voice.
  • the transmitting may include transmitting the request signal including the keyword and user information to the server, and the outputting of the short clip may include receiving and outputting a short clip associated with the keyword and the user information from the server.
  • the transmitting may include transmitting the received spoken voice to a voice recognition server or the server, and transmitting the short clip request signal to the server based on the keyword and the information about the content received from the voice recognition server or the server.
  • a system including an electronic device and a server according to an embodiment includes a server that generates information on a plurality of short clips based on keywords of a plurality of original contents and stores the information on the plurality of short clips and keywords for each of the plurality of short clips, and an electronic device that, when a spoken voice of a user is received, transmits a short clip request signal to the server based on the keyword included in the received spoken voice and information about the content output by the electronic device, and outputs a short clip based on the information about the short clip received from the server according to the request signal.
  • FIG. 1 is a view for explaining a system for providing a short clip according to an embodiment of the present invention.
  • FIGS. 2A and 2B are block diagrams illustrating a configuration of an electronic device according to an embodiment of the present disclosure.
  • FIG. 3 is a block diagram illustrating a configuration of a server according to an exemplary embodiment.
  • FIG. 4 is a diagram for describing a method of outputting a short clip associated with a keyword according to an exemplary embodiment.
  • FIG. 5 is a diagram for describing a method of outputting a short clip associated with output content according to an exemplary embodiment.
  • FIG. 6 is a diagram for describing a method of obtaining a keyword by analyzing an audio signal according to an exemplary embodiment.
  • FIG. 7 is a diagram for describing additional information about a short clip according to one embodiment of the present invention.
  • FIG. 8 is a diagram for describing additional response information provided with a short clip according to an exemplary embodiment.
  • FIG. 9 is a flowchart illustrating a short clip providing method according to an exemplary embodiment.
  • FIG. 10 is a flowchart illustrating a system for providing a short clip according to an exemplary embodiment.
  • FIG. 11 is a diagram for describing a method of providing a short clip through a speaker according to another embodiment of the present disclosure.
  • FIG. 1 is a view for explaining a system for providing a short clip according to an embodiment of the present invention.
  • the electronic device 100 may be implemented as various types of devices that output content using at least one of a display and a speaker. Accordingly, the electronic device 100 may be implemented as a digital TV, but is not limited thereto.
  • the electronic device 100 may be implemented as various types of devices having a display function such as a PC, a mobile phone, a tablet PC, a PMP, a PDA, a navigation device, and the like.
  • the electronic device 100 may be implemented as a sound output device having no display function. In this case, the content may be output as an audio signal through the speaker.
  • hereinafter, for convenience of description, the electronic device 100 is assumed to be implemented as a digital TV. An embodiment in which the electronic device 100 includes only a speaker without a display function will be described in detail with reference to FIG. 11.
  • the electronic device 100 may receive a spoken voice of a user and obtain a keyword included in the received spoken voice.
  • the electronic device 100 may transmit the received spoken voice to a voice recognition server (not shown) and receive a keyword included in the spoken voice from the voice recognition server.
  • the present invention is not limited thereto, and the electronic device 100 may obtain a keyword by analyzing a user's spoken voice.
  • the server 200 that provides the short clip may also serve as a voice recognition server that analyzes the spoken voice and transmits the keyword included in the spoken voice to the electronic device 100.
  • the electronic device 100 may transmit a short clip request signal to the server 200 based on the keyword included in the received speech voice and information on the content output by the electronic device 100.
  • the electronic device 100 may receive information about the short clip from the server 200 in response to the request signal, and output the short clip based on the received information.
  • the information about the short clip may be at least one of the short clip itself, information on a location where the short clip is stored, and information on a time interval including the keyword.
  • the electronic device 100 may reproduce and output only a time section including a specific keyword in the content based on this.
  • the server 200 may store information about the plurality of short clips and keywords for each of the plurality of short clips.
  • the server 200 may receive content from the content provider 300 and generate a short clip from the received content.
  • the server 200 may receive broadcast content from a broadcaster and generate a plurality of short clips from the received broadcast content.
  • the content received from the content provider 300 is referred to as the original content.
  • the short clip refers to an image obtained by editing a specific portion of the received original content, and in some cases a plurality of contents may be combined. For example, a specific part may be obtained from each of a plurality of contents, and the obtained parts may be combined to generate a short clip.
  • the server 200 may analyze the audio signal of the original content and edit the original content in units of endpoint detection (EPD).
  • EPD refers to an algorithm that detects a start point and an end point of a voice in real time by analyzing an audio signal of an original content.
  • the server 200 may obtain a keyword by analyzing the voice included in each of the edited images in EPD units. Accordingly, the server 200 may obtain and store a plurality of edited images and keywords corresponding to each of the plurality of edited images edited in EPD units from one original content. Here, at least one keyword matching the edited video may be provided.
  • the server 200 when the server 200 acquires a plurality of keywords by analyzing an audio signal included in the edited video, the plurality of keywords may be matched to one edited video and stored in the server.
  • the original content is not necessarily edited in EPD units, and the server 200 may generate a plurality of short clips by editing the original content based on various voice detection algorithms.
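  • the description leaves the concrete voice detection algorithm open; as a rough illustration only, the sketch below (not the claimed method) segments a mono audio signal into speech sections by simple frame-energy thresholding, and the function name, frame size, and thresholds are assumptions of this sketch.

```python
# Illustrative only: energy-based endpoint detection (EPD) over a mono PCM signal.
# Frame size, thresholds, and the numpy-based approach are assumptions for the sketch.
import numpy as np

def detect_endpoints(samples: np.ndarray, sample_rate: int,
                     frame_ms: int = 20, energy_threshold: float = 1e-3,
                     min_silence_ms: int = 300):
    """Return a list of (start_sec, end_sec) speech sections."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    energies = np.array([
        np.mean(samples[i * frame_len:(i + 1) * frame_len] ** 2)
        for i in range(n_frames)
    ])
    voiced = energies > energy_threshold

    sections, start, silence = [], None, 0
    max_silence_frames = min_silence_ms // frame_ms
    for i, v in enumerate(voiced):
        if v:
            if start is None:
                start = i                      # speech start point
            silence = 0
        elif start is not None:
            silence += 1
            if silence > max_silence_frames:   # speech end point
                sections.append((start * frame_ms / 1000,
                                 (i - silence + 1) * frame_ms / 1000))
                start, silence = None, 0
    if start is not None:
        sections.append((start * frame_ms / 1000, n_frames * frame_ms / 1000))
    return sections

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr * 3) / sr
    # One second of tone surrounded by silence stands in for a spoken section.
    signal = np.where((t > 1.0) & (t < 2.0), 0.1 * np.sin(2 * np.pi * 440 * t), 0.0)
    print(detect_endpoints(signal, sr))   # roughly [(1.0, 2.0)]
```

  • each (start, end) pair returned this way would correspond to one EPD-unit section that could be cut from the original content as a candidate short clip.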
  • the short clip and the keyword generation method for each short clip of the server 200 will be described in detail with reference to FIG. 3.
  • an edited video obtained from original content is referred to as a short clip for convenience of description.
  • the short clip may be an image generated by editing a specific part of the original content, for example, a part including a specific keyword, to within a predetermined length (for example, within 3 minutes).
  • the short clip is not limited to video content and may, of course, be generated by editing audio content.
  • the playback time of the short clip may be changed according to settings and the voice detection algorithm, and is of course not limited to 3 minutes.
  • the server 200 may generate and store information about the short clip at the time of generating the short clip.
  • the information on the short clip may include at least one of information on a location where the short clip is stored and information on a time interval including a specific keyword.
  • the server 200 may obtain a keyword by analyzing an audio signal included in the short clip, and store the short clip and a keyword matching the short clip. Therefore, the server 200 may store a plurality of short clips and keywords for each of the plurality of short clips.
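  • as a hedged sketch of this bookkeeping, the snippet below builds a record holding a clip's storage location, its time interval in the original content, and keywords picked from its transcript; the field names, the frequency-based keyword pick, and the example values are illustrative assumptions, and the transcription itself is assumed to come from some external recognizer.

```python
# Illustrative record keeping for generated short clips; field names are assumptions.
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Tuple

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "for", "on", "in"}

@dataclass
class ShortClipInfo:
    storage_location: str                 # e.g. URL or file path of the clip
    time_interval: Tuple[float, float]    # section of the original content (seconds)
    keywords: List[str] = field(default_factory=list)
    original_title: str = ""              # metadata carried over from the original content

def extract_keywords(transcript: str, top_n: int = 3) -> List[str]:
    """Very rough keyword pick: most frequent non-stopword tokens in the clip transcript."""
    tokens = [t.lower().strip(".,!?:") for t in transcript.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

# Example: a clip cut from a news program, transcribed by an external recognizer.
transcript = "Traffic information for the morning commute: heavy traffic on the bridge."
clip = ShortClipInfo(
    storage_location="https://clips.example.com/news_0712_0300_0420.mp4",
    time_interval=(180.0, 260.0),
    keywords=extract_keywords(transcript),
    original_title="Morning News",
)
print(clip.keywords)   # e.g. ['traffic', 'information', 'morning']
```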
  • the server 200 may store the title, genre, and broadcast time of the original content, the creation time of the short clip, broadcast station information of the original content, and the like, obtained based on metadata about the original content, together with the short clip.
  • as described above, the electronic device 100 analyzes a user's spoken voice and transmits a short clip request signal related to a keyword included in the spoken voice to the server, and the server 200 may transmit a short clip for the keyword included in the received request signal to the electronic device 100.
  • the electronic device 100 may display the received short clip and provide it to the user.
  • the electronic device 100 may transmit a user's spoken voice to a voice recognition server and receive a keyword included in the spoken voice from the voice recognition server.
  • the server 200 providing the short clip may be configured to receive the user's spoken voice and transmit the keyword included in the spoken voice to the electronic device 100. That is, the voice recognition server or the server 200 may be implemented to perform voice recognition of converting the received voice into text and acquiring a keyword from the converted text when the user's spoken voice is received.
  • FIGS. 2A and 2B are block diagrams illustrating a configuration of an electronic device according to an exemplary embodiment.
  • the electronic device 100 includes a communication unit 110, an input unit 120, an output unit 130, and a processor 140.
  • the communication unit 110 communicates with an external device according to various types of communication methods.
  • the communication unit 110 may communicate with the server 200 which stores a plurality of short clips and keywords for each of the plurality of short clips using at least one wired / wireless method.
  • the communication unit 110 may communicate with the voice recognition server.
  • the communication unit 110 may include various communication chips such as a Wi-Fi chip, a Bluetooth chip, a wireless communication chip, an NFC chip.
  • the communicator 110 may transmit the received spoken voice to the voice recognition server and receive a keyword included in the spoken voice.
  • the communication unit 110 may transmit the received spoken voice to the server 200 and receive a keyword from the server 200.
  • the present invention is not limited thereto, and the electronic device 100 may obtain a keyword by performing voice recognition on the spoken voice of the user without performing communication with the voice recognition server or the server 200.
  • the communication unit 110 may transmit a signal requesting a short clip to the server 200, and receive a short clip according to the request signal from the server 200.
  • the request signal is a signal based on information on keywords and content included in the user's spoken voice.
  • the request signal may be a signal including a keyword and information on content being output by the electronic device 100.
  • the request signal may be transmitted to the server 200 continuously or simultaneously with a separate signal including a keyword and information on content being output by the electronic device 100.
  • the request signal may be a signal including information on content displayed on the electronic device 100, a keyword repeatedly output from the content, information on a user of the electronic device 100, and the like.
  • the keyword repeatedly output from the content may mean a keyword that is repeated more than a predetermined number of times during a predetermined time in the content output from the electronic device 100.
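  • the wire format of the request signal is not fixed by the description; one plausible sketch, with assumed field names and JSON transport, is shown below.

```python
# Illustrative short clip request payload; field names and JSON transport are assumptions.
import json

def build_short_clip_request(keyword, output_content_info,
                             repeated_keyword=None, user_info=None):
    request = {
        "keyword": keyword,                      # keyword from the user's spoken voice
        "output_content": output_content_info,   # title, genre, broadcast time, station, ...
    }
    if repeated_keyword:
        request["repeated_keyword"] = repeated_keyword   # word repeated in the output audio
    if user_info:
        request["user_info"] = user_info                 # age group, preferred genre, location, ...
    return json.dumps(request)

payload = build_short_clip_request(
    keyword="traffic information",
    output_content_info={"title": "Morning News", "genre": "news", "station": "Station A"},
    user_info={"location": "Seoul"},
)
print(payload)
```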
  • hereinafter, the content displayed or otherwise output by the electronic device 100 will be referred to as output content.
  • the communication unit 110 may receive a short clip from the server 200 in response to the above-described request signal.
  • the server 200 may transmit a short clip corresponding to the request signal to the electronic device 100.
  • the server 200 may store information on a location where original content corresponding to the request signal is stored and time information corresponding to a short clip among the original content.
  • the server 200 may transmit the web address for playing the original content and the time information corresponding to the short clip among the original content to the electronic device 100.
  • the electronic device 100 may access the server where the original content is stored based on the received web address, and play the section corresponding to the time information.
  • the electronic device 100 may receive a web address for receiving specific content from the server 200 and time information on a section including a corresponding keyword in the specific content.
  • the electronic device 100 may access the received web address to receive specific content, and reproduce and output only a specific section of the specific content based on time information.
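  • a minimal sketch of that client-side behavior follows, assuming a hypothetical play_section helper standing in for whatever playback facility the device actually has.

```python
# Illustrative handling of a response that carries a web address and a time interval
# instead of the clip itself; `play_section` is a hypothetical playback helper.
from dataclasses import dataclass

@dataclass
class ShortClipLocation:
    web_address: str        # where the original content can be fetched
    start_sec: float        # section of the original content containing the keyword
    end_sec: float

def play_section(url: str, start_sec: float, end_sec: float) -> None:
    # Placeholder for the device's real player; here we only report what would be played.
    print(f"Playing {url} from {start_sec:.0f}s to {end_sec:.0f}s")

response = ShortClipLocation(
    web_address="https://vod.example.com/original/12345",
    start_sec=180.0,
    end_sec=260.0,
)
play_section(response.web_address, response.start_sec, response.end_sec)
```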
  • the input unit 120 is a component for receiving a spoken voice of a user and converting it into audio data.
  • the input unit 120 may be implemented as a microphone to receive a spoken voice of a user.
  • the present invention is not limited thereto, and the input unit 120 may be provided in a remote control device (not shown) for controlling the electronic device 100 instead of the electronic device 100 to receive a spoken voice of a user.
  • the input unit 120 may be implemented in the form of a touch screen that forms a mutual layer structure with the touch pad. In this case, the input unit 120 may receive a keyword input through a touch screen in addition to the spoken voice.
  • the output unit 130 may output at least one of various contents and short clips.
  • the output unit 130 may include at least one of a display and a speaker.
  • when the output unit 130 includes a display, the display may display various content playback screens including images, videos, text, music, and the like, application execution screens including various contents, web browser screens, graphical user interface (GUI) screens, and the like.
  • the display may be implemented as a liquid crystal display panel (LCD), organic light emitting diodes (OLED), or the like, but is not limited thereto.
  • the display may be implemented as a flexible display or a transparent display.
  • the display may display the short clip received from the server 200.
  • the output unit 130 may provide the received short clip as audio through the speaker.
  • the output unit 130 may provide additional information about the received short clip as audio together with the audio signal of the short clip, or may provide only the audio signal of the short clip.
  • the processor 140 controls the overall operation of the electronic device 100.
  • when a user's spoken voice is received, the processor 140 may transmit a signal requesting a short clip to the server 200 through the communication unit 110 based on the keyword included in the received spoken voice and information on the content, and may output the short clip received from the server 200 according to the request signal through the output unit 130.
  • the processor 140 may transmit information on the output content to the server 200.
  • the information on the output content may include a title, genre, broadcast time, broadcasting station information, and the like of the output content. Therefore, when the processor 140 transmits a short clip request signal to the server 200 based on at least one of the keyword and the information about the output content, the processor 140 may receive and provide a short clip associated with the keyword and the output content.
  • the processor 140 when the processor 140 transmits the short clip request signal to the server 200, the processor 140 may be provided with the short clip previously generated.
  • the pre-generated short clip may be a short clip generated from content different from the output content.
  • for example, the short clip may be generated from content that was broadcast before the broadcast time of the output content.
  • the present invention is not limited thereto, and a short clip generated from the corresponding output content may also be received.
  • since the server 200 also receives the content being broadcast, if a short clip of the output content has already been generated when the processor 140 transmits the request signal, that short clip may also be a target of transmission. For example, if the broadcast start time of the output content precedes the time of the user's request by more than a preset time, a short clip for the output content may already have been generated.
  • the processor 140 may receive additional information about the short clip.
  • the processor 140 may receive and provide a short clip and additional information about the short clip from the server 200.
  • the additional information about the short clip may be information including at least one of a title, a genre of the original content of the short clip, a broadcast time of the original content, a creation time of the short clip, a broadcaster of the original content, and a keyword.
  • the processor 140 may analyze the audio signal of the output content and transmit a signal for requesting a short clip associated with the keyword to the server 200 based on a keyword that is repeated more than a predetermined number of times for a predetermined time. Accordingly, the processor 140 may obtain a word repeated in the output content as a keyword, and transmit the keyword to the server 200 to receive a short clip associated with the keyword.
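  • a rough illustration of detecting a keyword repeated more than a predetermined number of times within a predetermined time window is given below; it assumes the output audio has already been transcribed into words by some recognizer, and the class name and threshold values are assumptions of the sketch.

```python
# Illustrative detection of keywords repeated more than a preset number of times
# within a preset time window of the output audio; thresholds are assumptions.
from collections import Counter, deque

class RepeatedKeywordDetector:
    def __init__(self, window_sec: float = 60.0, min_repeats: int = 3):
        self.window_sec = window_sec
        self.min_repeats = min_repeats
        self._events = deque()   # (timestamp, word) pairs inside the window

    def feed(self, timestamp: float, words):
        """Add transcribed words heard at `timestamp`; return keywords repeated enough times."""
        for w in words:
            self._events.append((timestamp, w.lower()))
        while self._events and timestamp - self._events[0][0] > self.window_sec:
            self._events.popleft()
        counts = Counter(w for _, w in self._events)
        return [w for w, c in counts.items() if c >= self.min_repeats]

detector = RepeatedKeywordDetector(window_sec=60.0, min_repeats=3)
detector.feed(0.0, ["spain", "travel"])
detector.feed(20.0, ["spain", "barcelona"])
print(detector.feed(40.0, ["spain", "barcelona", "barcelona"]))  # ['spain', 'barcelona']
```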
  • the electronic device 100 may include a storage unit (not shown) for storing user information, and the processor 140 may transmit a request signal including the user information stored in the storage unit to the server 200.
  • the processor 140 may receive and display a short clip associated with user information.
  • the user information is information about a user of the electronic device 100 and may include information including an age group, a favorite genre, a preferred content, a preferred broadcasting station, and the like. Therefore, when the electronic device 100 receives a plurality of short clips from the server 200, the electronic device 100 may receive and display a short clip more suitable for the user based on the keyword and the user information.
  • FIG. 2B is a block diagram illustrating a detailed configuration of an electronic device 100 according to another embodiment of the present disclosure.
  • the electronic device 100 includes the communication unit 110, the input unit 120, the output unit 130, the processor 140, the storage unit 150, the audio processor 160, and the video processor 170. A detailed description of parts overlapping with those shown in FIG. 2A among the elements shown in FIG. 2B will be omitted.
  • the processor 140 controls overall operations of the electronic device 100 using various programs stored in the storage 150.
  • the processor 140 may include one or more of a central processing unit (CPU), a controller, an application processor (AP), a communication processor (CP), and an ARM processor, or may be defined by the corresponding term.
  • the processor 140 may be implemented as a digital signal processor (DSP), as a system-on-chip (SoC) incorporating a content processing algorithm, or in the form of a field programmable gate array (FPGA).
  • the processor 140 includes the RAM 141, the ROM 142, the main CPU 143, the graphics processor 144, the first to nth interfaces 145-1 to 145-n, and the bus 146.
  • the RAM 141, the ROM 142, the main CPU 143, the graphics processor 144, the first to nth interfaces 145-1 to 145-n, and the like may be connected to each other through the bus 146.
  • the first to n interfaces 145-1 to 145-n are connected to the various components described above.
  • One of the interfaces may be a network interface connected to an external device via a network.
  • the main CPU 143 accesses the storage 150 and performs booting using the operating system stored in the storage 150. Then, various operations are performed using various programs, contents, data, etc. stored in the storage 150.
  • the ROM 142 stores a command set for system booting.
  • the main CPU 143 copies the O / S stored in the storage unit 150 to the RAM 141 according to the command stored in the ROM 142 and executes O / S.
  • the main CPU 143 copies various application programs stored in the storage unit 150 to the RAM 141 and executes the application programs copied to the RAM 141 to perform various operations.
  • the graphic processor 144 generates a screen including various objects such as an icon, an image, and a text by using a calculator (not shown) and a renderer (not shown).
  • the calculator (not shown) calculates attribute values, such as coordinate values, shapes, sizes, and colors, with which each object is to be displayed according to the layout of the screen, based on the received control command.
  • the renderer generates a screen having various layouts including objects based on the attribute values calculated by the calculator.
  • the screen generated by the renderer (not shown) is displayed in the display area of the outputter 130.
  • the storage unit 150 stores various data such as an operating system (O / S) software module for driving the electronic device 100, various multimedia contents, various applications, various contents input or set during application execution, and the like.
  • the storage unit 150 may store user information, for example, user preference information, age group, user profile information, and the like.
  • the audio processor 160 is a component that performs processing on audio data.
  • the audio processor 160 may perform various processing such as decoding, amplification, noise filtering, and the like on the audio data.
  • the audio processor 160 may generate and provide a feedback sound corresponding to a case where the user preference information displayed at the channel zapping satisfies a predetermined criterion.
  • the video processor 170 is a component that performs processing on video data.
  • the video processor 170 may perform various image processing such as decoding, scaling, noise filtering, frame rate conversion, resolution conversion, and the like on the video data.
  • FIG. 3 is a block diagram showing the configuration of a server 200 according to an embodiment of the present invention.
  • the server 200 includes a communication unit 210, a storage unit 220, and a processor 230.
  • the communication unit 210 communicates with an external device according to various types of communication methods.
  • the communication unit 210 may communicate with the content provider 300 using at least one of the wired and wireless methods.
  • the communication unit 210 may receive content from the content provider 300.
  • the communicator 210 may include various communication chips such as a Wi-Fi chip, a Bluetooth chip, a wireless communication chip, an NFC chip, and a tuner.
  • the communication unit 210 may communicate with the electronic device 100.
  • the communication unit 210 may receive a short clip request signal transmitted by the electronic device 100 and transmit a short clip to the electronic device 100 in response thereto.
  • the storage unit 220 stores various data such as an operating system (O / S) software module for driving the server 200, various multimedia contents, various applications, various contents input or set during application execution, and the like.
  • the storage unit 220 may store original content, a plurality of short clips generated from the original content, and a plurality of keywords for each of the short clips.
  • the server 200 when the server 200 edits original content to generate a plurality of short clips, the server 200 may obtain at least one keyword according to audio signals included in the plurality of short clips.
  • the server 200 may store the short clip and a keyword obtained from the short clip in the storage 220.
  • for example, when first and second keywords are obtained from a first short clip, the server 200 may store the first and second keywords together with the first short clip.
  • the server 200 may group and store a short clip for each keyword.
  • for example, short clips including an audio signal corresponding to the first keyword may be grouped and stored in the storage 220. Therefore, if the first keyword is included in the short clip request signal received from the electronic device 100, the server 200 may transmit the plurality of short clips grouped under the first keyword to the electronic device 100.
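  • a minimal sketch of such keyword-based grouping and lookup on the server side, with illustrative names and clip identifiers:

```python
# Illustrative keyword -> short clip grouping on the server side; names are assumptions.
from collections import defaultdict

class ShortClipIndex:
    def __init__(self):
        self._by_keyword = defaultdict(list)

    def add(self, clip_id: str, keywords):
        """Store one short clip under every keyword obtained from its audio signal."""
        for kw in keywords:
            self._by_keyword[kw.lower()].append(clip_id)

    def lookup(self, keyword: str):
        """Return all short clips grouped under the requested keyword."""
        return list(self._by_keyword.get(keyword.lower(), []))

index = ShortClipIndex()
index.add("clip_001", ["traffic information", "seoul"])
index.add("clip_002", ["traffic information"])
print(index.lookup("traffic information"))   # ['clip_001', 'clip_002']
```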
  • the processor 230 controls the overall operation of the server 200.
  • the processor 230 may analyze a spoken voice received from the electronic device 100 and obtain a keyword included in the spoken voice.
  • the server 200 may transmit a keyword to the electronic device 100.
  • the processor 230 may edit the received original content to generate a plurality of short clips.
  • the processor 230 may edit only a specific section of the original content based on the voice detection algorithm.
  • the voice detection algorithm refers to an algorithm for detecting an audio signal including at least one keyword.
  • the processor 230 may analyze the audio signal of the original content to detect a start point and an end point of the voice, and edit a section (EPD unit) between the start point and the end point to generate a short clip.
  • the server 200 may also edit the original content to create a short clip based on a preset time interval, a specific interval set by the content provider, a time interval set by the administrator of the server 200, or a user-requested time interval included in the short clip request signal.
  • the processor 230 may generate a short clip by editing the corresponding section in real time. In this case, the processor 230 may determine that the voice is terminated when the voice is not detected for more than a preset time or when a machine sound or noise is detected for more than the preset time. Thereafter, the processor 230 may store the generated short clip and the acquired keyword together in the storage 220. Therefore, the processor 230 may transmit a short clip to the electronic device 100 in response to the short clip request signal received from the electronic device 100.
  • meanwhile, without generating a short clip from the original content, the server 200 may store, as a database, a web address from which the original content can be received and time information on a section including a specific keyword.
  • in this case, the server 200 may transmit, to the electronic device 100, a web address corresponding to the short clip request signal and section information including the specific keyword in the original content. Therefore, instead of receiving the short clip itself from the server 200, the electronic device 100 may provide the short clip by outputting only the section including the specific keyword in the original content based on the web address and time information.
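  • a hedged sketch of this "no clip file" variant follows, where a database maps each keyword to the original content's web address and the time sections containing that keyword; the addresses and intervals are illustrative.

```python
# Illustrative "no clip file" variant: the server stores only the original content's
# web address plus keyword time intervals, and answers requests with that pair.
sections_db = {
    # keyword -> list of (web_address_of_original, start_sec, end_sec)
    "traffic information": [
        ("https://vod.example.com/original/12345", 180.0, 260.0),
        ("https://vod.example.com/original/67890", 35.0, 95.0),
    ],
}

def handle_short_clip_request(keyword: str):
    """Return the web address and section info the electronic device should play."""
    return sections_db.get(keyword.lower(), [])

for address, start, end in handle_short_clip_request("traffic information"):
    print(f"{address} [{start}s - {end}s]")
```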
  • FIGS. 4 to 8 illustrate, for convenience of description, an embodiment in which the electronic device 100 includes a display, and the output content and the short clip are output through the display.
  • FIG. 4 is a diagram for describing a method of displaying a short clip associated with a keyword according to an exemplary embodiment.
  • the electronic device 100 may receive a spoken voice of a user.
  • the electronic device 100 may analyze the spoken voice of the user and obtain a keyword included in the spoken voice. For example, if the received speech of the user is 'tell me the current traffic information', the electronic device 100 may obtain 'traffic information' as a keyword.
  • the electronic device 100 according to another embodiment of the present invention can also obtain a keyword included in the spoken voice by communicating with the voice recognition server or server 200.
  • the electronic device 100 may transmit a signal for requesting a short clip for the acquired keyword to the server 200.
  • the server 200 may transmit a short clip for the keyword to the electronic device 100.
  • the server 200 may transmit a specific short clip to the electronic device 100 based on the short clips generated from original content received up to the time the request signal is received from the electronic device 100 and the keyword for each short clip. For example, if the keyword included in the short clip request signal is 'traffic information', the server 200 transmits only short clips having 'traffic information' as a keyword to the electronic device 100.
  • in this case, the electronic device 100 may receive a short clip having 'traffic information' as a keyword, generated by editing a specific section of a news program transmitted from a content provider, that is, a broadcaster. Therefore, the received short clip may be image content including an audio signal corresponding to 'traffic information'.
  • the electronic device 100 may transmit a short clip request signal including user information to the server 200.
  • in this case, the server 200 may transmit a short clip related to both the keyword and the user information to the electronic device 100. For example, if the location of the electronic device 100 corresponds to 'Seoul' according to the user information, the server 200 may transmit, to the electronic device 100, a short clip satisfying both 'traffic information' and 'Seoul' from among a plurality of short clips having 'traffic information' as a keyword. Therefore, the electronic device 100 may display a short clip optimized for the user among the short clips generated in real time.
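  • as an illustration of narrowing keyword-matched clips by user information, the sketch below filters by an assumed location field and falls back to the plain keyword match when nothing also matches the location; the clip records and fields are illustrative.

```python
# Illustrative narrowing of keyword-matched clips by user information; fields are assumptions.
clips = [
    {"id": "clip_101", "keywords": {"traffic information", "seoul"}},
    {"id": "clip_102", "keywords": {"traffic information", "busan"}},
    {"id": "clip_103", "keywords": {"traffic information"}},
]

def select_clips(keyword: str, user_info: dict):
    matched = [c for c in clips if keyword in c["keywords"]]
    location = user_info.get("location", "").lower()
    preferred = [c for c in matched if location in c["keywords"]]
    # Fall back to the plain keyword match if nothing also matches the user's location.
    return preferred or matched

print([c["id"] for c in select_clips("traffic information", {"location": "Seoul"})])
# ['clip_101']
```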
  • the electronic device 100 may provide an output mode and a short clip mode.
  • the output mode may be a mode for continuously outputting only output content regardless of whether a short clip is received from the server 200.
  • the short clip mode may be a mode for displaying a short clip received from the server 200.
  • the electronic device 100 may display the short clip by switching from the output mode to the short clip mode when the output content ends (for example, while a commercial is being broadcast).
  • the present invention is not limited thereto, and switching between the output mode and the short clip mode may be performed in response to a user input. For example, when the user's spoken voice is received in the output mode, the electronic device may automatically switch to the short clip mode and display the short clip received from the server 200.
  • the output mode and the short clip mode may be executed at the same time. For example, when a short clip is received from the server 200, the received short clip may be displayed on a portion of the output unit 130 by overlapping the output content.
  • FIG. 5 is a diagram illustrating a method of displaying a short clip associated with output content according to an exemplary embodiment.
  • the electronic device 100 may include information about the output content in the short clip request signal and transmit the information to the server 200.
  • the server 200 may transmit the specific short clip to the electronic device 100 based on the keyword and the short clip request signal.
  • the information about the output content means information about the content that is output to the electronic device 100 and may be obtained from metadata about the output content.
  • the information on the output content may include a title, genre, broadcast time, broadcast station information, and the like of the output content.
  • the present invention is not limited thereto, and the information on the content may be obtained through various methods. For example, additional information may be obtained by receiving information on content from an external server or performing OCR on a screen.
  • the electronic device 100 may obtain at least one of “Team A” and “the batter” as keywords.
  • the electronic device 100 may transmit, to the server 200, a short clip request signal including the information about the output content (e.g., 'sports', 'baseball') and the keywords (e.g., 'Team A' and 'batter').
  • the server 200 may transmit, to the electronic device 100, short clips whose keywords include 'sports', 'baseball', 'Team A', and 'batter' from among the plurality of short clips.
  • for example, the electronic device 100 may receive and display an interview video related to Team A, sports news about Team A, and the like from the server 200. Meanwhile, as described above, the plurality of short clips received by the electronic device 100 may be image contents generated by editing specific sections of original content that the server 200 received from a broadcaster.
  • FIG. 6 is a diagram for describing a method of obtaining a keyword by analyzing an audio signal according to an exemplary embodiment.
  • the electronic device 100 may transmit a word repeatedly output from the output content to the server 200 by including it in the short clip request signal.
  • the electronic device 100 may transmit a keyword, which is repeated more than a predetermined number of times for a predetermined time, from the audio output through the speaker provided in the electronic device 100 to the server 200.
  • for example, the electronic device 100 may obtain, as keywords, 'Spain', 'Barcelona', and the like, which are repeatedly output, by analyzing the audio signal of the output content.
  • the server 200 may transmit a short clip matching 'Spain' and 'Barcelona' among the plurality of short clips to the electronic device 100.
  • the electronic device 100 may receive and display short clips of 'Spain' and 'Barcelona' from the server 200.
  • in this case, the electronic device 100 may include the information on the output content in the short clip request signal and transmit it to the server 200.
  • the electronic device 100 may receive a short clip generated by editing a specific section of the travel information program for 'Spain' and 'Barcelona'.
  • the electronic device 100 may display the short clip received from the server 200 as a thumbnail image.
  • the short clip corresponding to the thumbnail image selected according to the user's input may be played.
  • FIG. 7 is a diagram for describing additional information about a short clip according to one embodiment of the present invention.
  • the electronic device 100 may additionally receive information on the short clip from the server 200 and provide the received information together with the short clip.
  • the additional information about the short clip includes at least one of the title 710 of the original content, the genre, the broadcast time 720 of the original content, the station information 730 of the original content, the creation time of the short clip, and a keyword.
  • the broadcast time of the original content may mean a time when the server 200 receives the content from the content provider 300, a time for generating the original content, a time when the broadcast station transmits the original content, and the like.
  • the keyword of the short clip may mean a keyword that matches a keyword included in the short clip request signal among at least one keyword matched with the corresponding short clip.
  • additional information about the short clip may be displayed when the selected short clip is reproduced according to a user input.
  • the present invention is not limited thereto, and the electronic device 100 may display a plurality of short clips received from the server 200 as thumbnail images and simultaneously display additional information on the short clips.
  • FIG. 8 is a diagram for describing additional response information provided with a short clip according to an exemplary embodiment.
  • the electronic device 100 may receive additional response information about a keyword acquired in the spoken voice of the user from an external server and display the additional response information together with the short clip.
  • the additional response information may include a search result 810 for the keyword, information on the keyword, and the like.
  • the present invention is not limited thereto, and of course, additional response information regarding at least one of information on output content, user information, and a keyword repeated in the output content may be received and displayed from an external server.
  • for example, a search result obtained by using the genre of the output content as a search word may be received from an external server and displayed together with the short clip, and a search result based on the user information or a repeated keyword may likewise be received from an external server and displayed.
  • FIG. 9 is a flowchart illustrating a short clip providing method according to an exemplary embodiment.
  • content is output (S910).
  • when a user's spoken voice is received, a short clip request signal is transmitted to the server based on the keyword included in the received spoken voice and the information about the content (S930).
  • the short clip is output based on the information about the short clip received from the server according to the request signal (S940).
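  • read together, these steps amount to a simple client flow; the sketch below strings them together with placeholder recognizer and server calls, both of which are hypothetical stand-ins rather than components defined by the disclosure.

```python
# Illustrative client-side flow for S910-S940; the recognizer and server calls are placeholders.
def recognize_keyword(spoken_voice: str) -> str:
    # Stand-in for the voice recognition server: here we simply pick the last two words.
    return " ".join(spoken_voice.lower().split()[-2:])

def request_short_clip(keyword: str, output_content_info: dict) -> dict:
    # Stand-in for the server; a real implementation would send the request over the network.
    return {"storage_location": "https://clips.example.com/clip_001.mp4",
            "time_interval": (0.0, 120.0), "keyword": keyword}

def output_short_clip(clip_info: dict) -> None:
    print(f"Outputting {clip_info['storage_location']} for keyword '{clip_info['keyword']}'")

output_content_info = {"title": "Morning News", "genre": "news"}     # S910: content is output
spoken_voice = "tell me the current traffic information"             # spoken voice is received
keyword = recognize_keyword(spoken_voice)                             # keyword is obtained
clip_info = request_short_clip(keyword, output_content_info)          # S930: request is sent
output_short_clip(clip_info)                                          # S940: clip is output
```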
  • the information on the short clip includes at least one of information on a location where the short clip is stored and information on a time interval including the keyword.
  • when the information about the short clip is received from the server, the short clip can be output based on the received information.
  • each of the plurality of short clips may be video content or sound content generated by editing a portion including a specific keyword in specific content.
  • when additional information about the short clip is received, the additional information about the short clip is provided, and the additional information about the short clip may include at least one of a title and a genre of the original content, a broadcast time of the original content, a generation time of the short clip, broadcast station information of the original content, and the keyword.
  • additional information about the short clip may be provided as audio through a speaker.
  • the electronic device may include at least one of a display and a speaker.
  • a short clip request signal associated with a keyword that is repeated at least a predetermined number of times for a predetermined time in the audio output through the speaker can be additionally transmitted to the server.
  • additional response information regarding the spoken voice may be provided together with the short clip based on the keyword included in the received spoken voice.
  • the request signal including the keyword and the user information may be transmitted to the server.
  • a short clip related to the keyword and the user information may be received from the server and output.
  • the received spoken voice may be transmitted to the voice recognition server or the server described above, and the short clip request signal may be transmitted to the server based on the information about the keyword and the content received from the voice recognition server or the server.
  • FIG. 10 is a flowchart illustrating a system for providing a short clip according to an exemplary embodiment.
  • the server 200 receives content from the content provider 300 (S1010).
  • the content received from the content provider 300 will be referred to as the original content.
  • the server 200 may receive the content from the content provider 300 in real time. If the content provider 300 is a broadcast station, the server 200 may receive a broadcast program broadcast in real time from the broadcast station as original content.
  • the server 200 generates a plurality of short clips based on the keywords of each of the received original contents (S1020).
  • the server 200 stores a plurality of generated short clips and keywords for each of the plurality of short clips (S1030).
  • the electronic device 100 receives a user's spoken voice, and transmits a short clip request signal associated with the keyword included in the received spoken voice to the server 200 (S1050).
  • the electronic device 100 receives a short clip from the server (S1060).
  • the electronic device 100 outputs the received short clip (S1070).
  • FIG. 11 is a diagram for describing a method of providing a short clip through a speaker according to another embodiment of the present disclosure.
  • referring to FIG. 11, the electronic device 100 may include, as an output unit, only a speaker and no display.
  • the electronic device 100 may output an audio signal of a short clip received from the server 200.
  • even when the short clip, as moving image content, includes both a video signal and an audio signal, the electronic device 100 may provide only the audio signal of the received short clip.
  • a short clip may be provided that uses 'current weather' as a keyword.
  • location information of the electronic device 100 may additionally be received to provide a short clip about the current weather of a specific region (for example, the current weather in New York). Also, since the electronic device 100 may not have a display, only the audio signal of the received short clip may be output.
  • the additional information on the short clip may be converted into an audio signal and provided.
  • the additional information about the short clip and the short clip may be received from the server 200, the additional information about the short clip may be output first, and the audio signal included in the short clip may be sequentially output.
  • the electronic device 100 may output only part of the additional information about the received short clip as audio. For example, when the title, genre, broadcast time, and the like of the original content are received as additional information about the short clip, the electronic device 100 may provide only the title of the original content as an audio signal and then output the audio signal of the received short clip.
  • the electronic device 100 may sequentially provide a plurality of short clips based on a predetermined priority.
  • the electronic device 100 may output the audio signals included in the plurality of short clips through the speaker in the order in which the short clips were generated.
  • the user may thereby receive the short clip and the additional information about the short clip as an audio signal; a sketch of this speaker-only flow is given after this list.
  • the above-described methods according to various embodiments of the present disclosure may be implemented in the form of software, a program, or an application that can be installed in an existing electronic device, a server, or the like.
  • the control method of an electronic device may be implemented as computer-executable program code that is stored in various non-transitory computer-readable media and executed by a processor, and may be provided to each server or device.
  • the method for controlling an electronic device may be performed by a computer program product including a computer-readable medium that contains a computer-readable program executed by a computer device.
  • the computer readable program may be stored in a computer readable storage medium in a server, and the program may be implemented in a form downloadable to a computer device through a network.
  • the non-transitory readable medium refers to a medium that stores data semi-permanently and is readable by a device, not a medium storing data for a short time such as a register, a cache, a memory, and the like.
  • examples of the non-transitory readable medium include a CD, a DVD, a hard disk, a Blu-ray disc, a USB memory, a memory card, a ROM, and the like.
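
The device-side flow (receive a spoken voice, send a short clip request based on the keyword and the content being output, then output the clip described by the server's response) can be pictured with the minimal Python sketch below. It is a sketch under assumptions, not the disclosed implementation: the `/short-clips` endpoint, the JSON field names, and the `speech_recognizer` and `output_unit` helpers are hypothetical.

```python
import json
import urllib.request


class ShortClipClient:
    """Hypothetical device-side controller (names and endpoint are assumptions)."""

    def __init__(self, server_url, speech_recognizer, output_unit):
        self.server_url = server_url            # server storing short clips and their keywords
        self.speech_recognizer = speech_recognizer
        self.output_unit = output_unit          # display and/or speaker

    def on_spoken_voice(self, audio, current_content_info):
        # S930 / S1050: build a request from the keyword in the spoken voice and
        # information about the content currently being output.
        keyword = self.speech_recognizer.extract_keyword(audio)
        payload = json.dumps({
            "keyword": keyword,
            "content_info": current_content_info,
        }).encode("utf-8")
        request = urllib.request.Request(
            self.server_url + "/short-clips",
            data=payload,
            headers={"Content-Type": "application/json"},
        )

        # S1060: receive information about the short clip (e.g. storage location
        # and the time interval containing the keyword).
        with urllib.request.urlopen(request) as response:
            clip_info = json.load(response)

        # S940 / S1070: output the short clip based on the received information.
        interval = clip_info.get("interval", {})
        self.output_unit.play(
            clip_info["location"],
            start=interval.get("start"),
            end=interval.get("end"),
        )
```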
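The server-side steps S1010 to S1030 (receive original content, generate short clips around keyword occurrences, store the clips indexed by keyword) might look like the following sketch. The keyword-selection rule, the fixed padding around each keyword, and every name are assumptions for illustration; the disclosure leaves these details open.

```python
from dataclasses import dataclass, field


@dataclass
class ShortClip:
    original_title: str
    keyword: str
    start_sec: float      # time interval containing the keyword
    end_sec: float
    location: str         # where the edited clip is stored


@dataclass
class ShortClipServer:
    store: dict = field(default_factory=dict)   # keyword -> list of ShortClip

    def on_original_content(self, title, timed_words, clip_store_base):
        """S1020/S1030: generate and store clips from a timed transcript of (time, word) pairs."""
        for t, word in timed_words:
            if self._is_keyword(word):
                clip = ShortClip(
                    original_title=title,
                    keyword=word,
                    start_sec=max(0.0, t - 5.0),   # assumed padding around the keyword
                    end_sec=t + 5.0,
                    location=f"{clip_store_base}/{title}_{int(t)}.mp4",
                )
                self.store.setdefault(word, []).append(clip)

    def find_clips(self, keyword):
        """Look up the stored clips for a requested keyword."""
        return self.store.get(keyword, [])

    @staticmethod
    def _is_keyword(word):
        # Placeholder rule; real keyword extraction (e.g. repetition counts or
        # EPG metadata) is not specified here.
        return len(word) > 3
```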
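The speaker-only flow of FIG. 11 (speak part of the additional information first, then play only the audio track of each clip, in a predetermined priority order) could be sketched as follows. The `tts` and `audio_player` helpers and the clip dictionary fields are assumptions.

```python
def provide_clips_audio_only(clips, tts, audio_player, priority_key=None):
    """Hypothetical speaker-only provision of several short clips."""
    # Order the clips by a predetermined priority, e.g. clip generation time.
    ordered = sorted(clips, key=priority_key or (lambda clip: clip["generated_at"]))
    for clip in ordered:
        # Provide only part of the additional information (here, the title of the
        # original content) as an audio signal before the clip itself.
        extra = clip.get("additional_info", {})
        if "original_title" in extra:
            tts.speak(f"From {extra['original_title']}.")
        # The short clip may contain both video and audio; output the audio only.
        audio_player.play(clip["audio_track"])
```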

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention relates to an electronic device. The electronic device comprises: a communication unit for communicating with a server that stores information about a plurality of short clips and stores keywords for each of the plurality of short clips; an output unit; an input unit; and a processor which, when a user's spoken voice is received through the input unit, transmits a short clip request signal to the server based on a keyword included in the received spoken voice and information about the content output by the output unit, and outputs a short clip through the output unit based on the information about the short clip received from the server in response to the request signal.
PCT/KR2017/006790 2016-07-21 2017-06-27 Dispositif électronique et son procédé de commande WO2018016760A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17831233.6A EP3438852B1 (fr) 2016-07-21 2017-06-27 Dispositif électronique et son procédé de commande
US16/319,545 US10957321B2 (en) 2016-07-21 2017-06-27 Electronic device and control method thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662365076P 2016-07-21 2016-07-21
US62/365,076 2016-07-21
KR10-2017-0036304 2017-03-22
KR1020170036304A KR102403149B1 (ko) 2016-07-21 2017-03-22 전자 장치 및 그의 제어 방법

Publications (1)

Publication Number Publication Date
WO2018016760A1 true WO2018016760A1 (fr) 2018-01-25

Family

ID=60993116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/006790 WO2018016760A1 (fr) 2016-07-21 2017-06-27 Dispositif électronique et son procédé de commande

Country Status (2)

Country Link
KR (1) KR102403149B1 (fr)
WO (1) WO2018016760A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021073161A1 (fr) * 2019-10-18 2021-04-22 平安科技(深圳)有限公司 Procédé, appareil et dispositif d'enregistrement de personnes âgées basés sur la reconnaissance vocale, et support de stockage
CN114466223A (zh) * 2022-04-12 2022-05-10 深圳市天兴诚科技有限公司 一种编码技术的视频数据处理方法及系统


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9830321B2 (en) * 2014-09-30 2017-11-28 Rovi Guides, Inc. Systems and methods for searching for a media asset

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130282713A1 (en) * 2003-09-30 2013-10-24 Stephen R. Lawrence Personalization of Web Search Results Using Term, Category, and Link-Based User Profiles
US20120066226A1 (en) * 2010-09-10 2012-03-15 Verizon Patent And Licensing, Inc. Social media organizer for instructional media
KR20120038654A (ko) * 2010-10-14 2012-04-24 엘지전자 주식회사 방송 음성 인식 서비스를 제공하는 네트워크 tv와 서버 그리고 그 제어방법
KR20140028540A (ko) * 2012-08-29 2014-03-10 엘지전자 주식회사 디스플레이 디바이스 및 스피치 검색 방법
KR20150077580A (ko) * 2013-12-27 2015-07-08 주식회사 케이티 음성 인식 기반 서비스 제공 방법 및 그 장치

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3438852A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021073161A1 (fr) * 2019-10-18 2021-04-22 平安科技(深圳)有限公司 Procédé, appareil et dispositif d'enregistrement de personnes âgées basés sur la reconnaissance vocale, et support de stockage
CN114466223A (zh) * 2022-04-12 2022-05-10 深圳市天兴诚科技有限公司 一种编码技术的视频数据处理方法及系统
CN114466223B (zh) * 2022-04-12 2022-07-12 深圳市天兴诚科技有限公司 一种编码技术的视频数据处理方法及系统

Also Published As

Publication number Publication date
KR102403149B1 (ko) 2022-05-30
KR20180010955A (ko) 2018-01-31

Similar Documents

Publication Publication Date Title
WO2015099276A1 (fr) Appareil d'affichage, appareil de serveur, système d'affichage les comprenant et procédé de fourniture de contenu associé
WO2017082519A1 (fr) Dispositif de terminal utilisateur pour recommander un message de réponse et procédé associé
WO2013012107A1 (fr) Dispositif électronique et procédé de commande de celui-ci
WO2014073823A1 (fr) Appareil d'affichage, appareil d'acquisition de voix et procédé de reconnaissance vocale correspondant
WO2016076540A1 (fr) Appareil électronique de génération de contenus de résumé et procédé associé
WO2014092476A1 (fr) Appareil d'affichage, appareil de commande à distance, et procédé pour fournir une interface utilisateur les utilisant
WO2019139270A1 (fr) Dispositif d'affichage et procédé de fourniture de contenu associé
WO2018008823A1 (fr) Appareil électronique et son procédé de commande
WO2019112342A1 (fr) Appareil de reconnaissance vocale et son procédé de fonctionnement
WO2015002384A1 (fr) Serveur, procédé de commande associé, appareil de traitement d'image, et procédé de commande associé
WO2014069820A1 (fr) Appareil de réception de diffusion, serveur et procédés de commande s'y rapportant
WO2015020288A1 (fr) Appareil d'affichage et méthode associée
WO2017135776A1 (fr) Appareil d'affichage, appareil terminal d'utilisateur, système, et procédé de commande associé
WO2019039739A1 (fr) Appareil d'affichage et son procédé de commande
WO2019135553A1 (fr) Dispositif électronique, son procédé de commande et support d'enregistrement lisible par ordinateur
WO2016024824A1 (fr) Appareil d'affichage et son procédé de commande
WO2018080176A1 (fr) Appareil d'affichage d'image et procédé d'affichage d'image
WO2019184436A1 (fr) Procédé et appareil de diffusion sélective de vidéo, et support d'informations lisible par ordinateur
WO2020071870A1 (fr) Dispositif d'affichage d'images et procédé d'utilisation d'informations de programme de diffusion
WO2018016760A1 (fr) Dispositif électronique et son procédé de commande
WO2017146518A1 (fr) Serveur, appareil d'affichage d'image et procédé pour faire fonctionner l'appareil d'affichage d'image
WO2018128343A1 (fr) Appareil électronique et procédé de fonctionnement associé
WO2021040180A1 (fr) Dispositif d'affichage et procédé de commande associé
WO2020241973A1 (fr) Appareil d'affichage et son procédé de commande
WO2015190780A1 (fr) Terminal utilisateur et son procédé de commande

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2017831233

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017831233

Country of ref document: EP

Effective date: 20181031

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17831233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE