WO2015137621A1 - System and method for providing related content at low power, and computer-readable recording medium in which a program is recorded - Google Patents


Info

Publication number
WO2015137621A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound wave
sound
content
related content
mobile receiver
Prior art date
Application number
PCT/KR2015/000890
Other languages
English (en)
Korean (ko)
Inventor
김태현
Original Assignee
주식회사 사운들리
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020140059864A (external priority: KR20150106299A)
Application filed by 주식회사 사운들리
Priority to JP2016574884A (patent JP6454741B2)
Priority to US15/124,928 (patent US9794620B2)
Priority to CN201580021062.3A (patent CN106256131A)
Priority to EP15761122.9A (patent EP3118850A1)
Publication of WO2015137621A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8352 Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/018 Audio watermarking, i.e. embedding inaudible data in the audio signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H04N21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4436 Power management, e.g. shutting down unused components of the receiver
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content

Definitions

  • the present invention relates to a low-power related content providing system and method, and to a computer-readable recording medium on which a program for providing related content at low power is recorded.
  • the present invention was supported by a national research and development project of the Small and Medium Business Administration (host: SME Technology Information Agency, assignment No. S2145032, research project name: Health Diagnosis Establishment Growth Technology Development Project, research title: Sound Wave Multiple Access Technology Development).
  • Korean Patent Registration No. 10-1310943 (September 11, 2013) (System and Method for Providing Content-Related Information Related to Broadcast Content), Korean Patent Registration No. 10-1313293 (September 24, 2013) (System and Method for Providing Additional Information of Broadcast Content), and Korean Patent Registration No. 10-0893671 (April 9, 2009) (Generation and Matching of Hashes of Multimedia Content) use the sound characteristics of the content being broadcast or reproduced.
  • the present invention discloses a method of providing, through a viewer's or listener's mobile receiver, related content for an indirect product advertisement included in the content that the viewer or listener is watching or listening to.
  • the computer-readable recording medium discloses a method of providing, through a viewer's or listener's mobile receiver, related content for an indirect advertisement of a product included in the content being watched or listened to, by inserting non-audible sound waves into the broadcast or played content.
  • according to an embodiment, a low-power related content providing system may be provided that can automatically provide related content for the main content being watched or listened to, while constantly operating at low power, even without the viewer or listener actively looking for it.
  • according to an embodiment, a low-power related content providing system may be provided in which the sound wave ID included in the main content broadcast or reproduced through the content player is extracted at low power, and related content for indirect advertising products included in the main content is provided through the user's mobile receiver, without any effort by the user and without disturbing the user's immersion in the main content.
  • according to an embodiment, a method may be provided in which the sound wave ID included in the main content broadcast or reproduced through the content player is extracted at low power, and related content for an indirect advertisement product included in the main content is provided through the user's mobile receiver, without any effort by the user and without disturbing the user's immersion in the main content.
  • according to an embodiment, a recording medium may be provided on which is recorded a program for causing a computer to execute a low-power related content providing method that can automatically provide related content for the main content being watched or listened to, while constantly operating at low power, without the viewer or listener actively seeking the related content.
  • according to an embodiment, a recording medium may be provided on which is recorded a program for causing a computer to execute a method in which the sound wave ID included in the main content broadcast or reproduced through the content player is extracted at low power, and related content for an indirect advertising product included in the main content is provided without interfering with the user's immersion in the main content.
  • accordingly, a low-power related content providing system, a related content providing method, and a computer-readable recording medium recording a program may be provided.
  • according to an embodiment, a related content providing method may be provided, comprising: detecting, by a mobile receiver, the sound output from a content player playing content (referred to as 'main content'); converting the sensed sound into digital data; extracting a sound wave ID from the digital data; selecting related content corresponding to the sound wave ID; and displaying, by the mobile receiver, the related content as an image and/or voice.
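The claimed steps can be sketched as a minimal processing pipeline. This is an illustrative sketch only, not the patented implementation; the function names, the shape of the ADC output, and the table mapping sound wave IDs to related content are all hypothetical.

```python
# Illustrative sketch of the claimed pipeline: sense -> digitize -> extract
# sound wave ID -> select related content -> display. All names hypothetical.

RELATED_CONTENT_DB = {
    "ID_0042": "Ad page for the wristwatch shown in the drama",
}

def extract_sound_wave_id(digital_data):
    """Hypothetical extractor: returns a sound wave ID string or None."""
    # A real extractor would demodulate a non-audible watermark or compute
    # an audible fingerprint from the digitized sound; here a dict stands
    # in for the ADC output.
    return digital_data.get("embedded_id")

def provide_related_content(digital_data):
    sound_wave_id = extract_sound_wave_id(digital_data)
    if sound_wave_id is None:
        return None                      # no ID found; nothing to display
    return RELATED_CONTENT_DB.get(sound_wave_id)

# Example: pretend the ADC produced data carrying the ID "ID_0042".
print(provide_related_content({"embedded_id": "ID_0042"}))
```

The point of the structure is that every later stage is gated by the earlier one, which is what allows the low-power duty cycling described further below.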
  • according to an embodiment, a mobile receiver capable of providing related content may be provided, comprising a memory and a processor, and further comprising: a microphone for detecting the sound output from a content player reproducing content (called 'main content'); an ADC for converting the sound sensed by the microphone into digital data; a sound wave ID extracting unit for extracting a sound wave ID from the digital data; and a display unit for displaying the related content selected in correspondence with the sound wave ID.
  • according to an embodiment, a computer-readable medium may be provided on which is recorded a program for executing a related content providing method on a computer comprising a memory, a processor, a microphone for sensing the sound output from a content player reproducing content (called 'main content'), and an ADC for converting the sound detected by the microphone into digital data, the method comprising: extracting a sound wave ID from the digital data; and displaying the related content selected corresponding to the sound wave ID through a display unit provided in the computer.
  • according to the disclosed embodiments, the sound wave ID included in the main content broadcast or reproduced through the content player is extracted at low power, and related content for indirect advertising products included in the main content can be provided through the mobile receiver of the user, who is the viewer or listener, without any effort by the user and without disturbing the user's immersion in the main content.
  • FIG. 1 is a view for explaining a related content providing system for providing related content using a sound wave ID according to an embodiment of the present invention.
  • FIG. 2 is a view for explaining an embodiment of a mobile receiver.
  • FIG. 3 is a view for explaining another embodiment of a mobile receiver.
  • FIG. 4 is a view for explaining a method of providing related content according to an embodiment of the present invention.
  • FIG. 5 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • FIG. 6 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • FIG. 9 is a view for explaining that the acoustic recognition module is activated at a predetermined cycle to operate at low power.
  • FIG. 10 is a view for explaining that the sound recognition module is activated in accordance with the time that the sound wave ID can be extracted during the broadcast and playback time of the main content.
  • FIG. 11 is a diagram for describing the activation cycle of the acoustic recognition module when, through the sound wave ID extraction process, the existence of the sound wave ID is estimated to be at or above a preset probability.
  • FIG. 12 is a diagram for explaining that a period of an acoustic recognition module is adjusted according to an event in which another application using a microphone, which is a shared resource inside a mobile receiver, is activated.
  • FIG. 13 is a view for explaining that the period of the acoustic recognition module is adjusted according to an event in which another application using a microphone, which is a shared resource, is executed in the mobile receiver.
  • FIG. 14 is a diagram for describing methods of receiving an event of a mobile receiver to operate an acoustic recognition module at low power.
  • FIG. 15 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • 203: ADC; 205: Sound Wave ID Extraction Unit
  • 211, 411: Storage devices; 213, 413: Display units
  • - Content: information that provides educational, cultural, informational, and/or entertainment value as structured digital data in the form of video, sound, or sound and image.
  • - Content player: a device that plays content, such as a TV, radio, desktop computer, or mobile device (e.g. laptop computer, tablet computer, smartphone, wearable computer, etc.), which includes a speaker or sound output terminal for reproducing sound and may further include a monitor or an image output terminal for displaying an image.
  • -Content i) means main content or related content, or ii) collectively refers to both main content and related content.
  • -Main content Content that is output (broadcast or played) through the content player.
  • - Related content: content output through a mobile receiver, meaning information such as sound, video, documents, internet addresses, etc., associated with the substance of the main content output through the content player or with the background against which that content is delivered. Related content may include content that is difficult to include in the main content due to programming restrictions or other laws and regulations, content that induces the viewer or listener to participate, content related to an indirect advertisement that does not attract the viewer's or listener's attention, information about a product appearing in an indirect advertisement, or content about purchasing such a product.
  • - Mobile receiver: a device capable of outputting related content, which includes a microphone and an analog-to-digital converter (ADC) for analyzing the sound included in the main content. It analyzes the sound of the main content to identify the main content or a relative time position within it, and then provides related content associated with all or part of the main content to viewers or listeners in a form they can recognize visually or aurally. The viewer or listener may be the person who owns the mobile receiver, and the mobile receiver may be a mobile device such as a laptop computer, tablet computer, smartphone, or wearable computer.
  • - Indirect advertisement (product placement): an advertisement for a product, a service, a brand, a logo, a tagline, or the like that is included as part or all of the main content and is output as part of the main content's image or sound. Product indirect advertising also includes the concept of sponsored advertising, i.e. advertising provided as part of the content in return for supplying a product free of charge or at a discounted price during the production of the content.
  • - Indirect advertising product: a product or service that is output as all or part of the main content by indirect advertising; the term covers not only products having a specific physical form but also services, brands, logos, and taglines.
  • - Sound wave ID: either (1) digital data expressed as a non-audible sound wave artificially inserted into the sound of the main content (non-audible sound wave ID), or (2) condensed digital data representing the characteristics of the sound contained as part of the main content, extracted from that sound (audible fingerprint sound wave ID).
  • the artificially inserted non-audible sound wave ID is characterized in that it is modulated into non-audible sound waves, inaudible to humans, using psychoacoustics.
  • an audible fingerprint sound wave ID, which characterizes the sound, does not require artificial insertion, but can be extracted only when the main content contains sound.
  • Non-audible sound wave IDs may include serial numbers, meaningful words, or Internet links.
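The patent does not disclose a specific modulation scheme. Purely as an illustration of how digital data could ride on a non-audible sound wave, the sketch below uses frequency-shift keying in a near-ultrasonic band that ordinary speakers can still reproduce; the 18/19 kHz tones and the 50 ms bit duration are assumptions, not values from the patent.

```python
import numpy as np

FS = 44100              # sample rate (Hz)
F0, F1 = 18000, 19000   # assumed near-ultrasonic tones for bits 0 and 1
BIT_SEC = 0.05          # assumed duration of one bit

def modulate_id(bits):
    """Encode a bit string as near-ultrasonic FSK tones (illustrative)."""
    t = np.arange(int(FS * BIT_SEC)) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits])

def demodulate_id(signal):
    """Recover bits by comparing spectral energy near F0 vs F1 per bit slot."""
    n = int(FS * BIT_SEC)
    freqs = np.fft.rfftfreq(n, 1 / FS)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spec = np.abs(np.fft.rfft(signal[i:i + n]))
        e0 = spec[np.argmin(np.abs(freqs - F0))]
        e1 = spec[np.argmin(np.abs(freqs - F1))]
        bits.append("1" if e1 > e0 else "0")
    return "".join(bits)

print(demodulate_id(modulate_id("1011")))  # -> "1011"
```

In a real system the watermark would be mixed into the main content's audio and shaped psychoacoustically so that it stays below the masking threshold; the sketch omits that entirely.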
  • audible fingerprint sound wave IDs may be extracted using techniques such as Singular/Eigenvalue Decomposition (SVD/EVD), Principal Component Analysis (PCA), peak landmark detection, zero-crossing counting, and Mel Frequency Cepstral Coefficients (MFCC).
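Of the listed techniques, zero-crossing counting is the simplest to illustrate: the number of sign changes per frame grows with the dominant frequency of the sound, so a sequence of per-frame counts can serve as a crude fingerprint feature. The frame size below is an assumption; real fingerprinting systems combine several such features.

```python
import numpy as np

def zero_crossing_fingerprint(samples, frame=1024):
    """Crude audible fingerprint feature: zero-crossing count per frame."""
    counts = []
    for i in range(0, len(samples) - frame + 1, frame):
        x = samples[i:i + frame]
        # Count sign changes between consecutive samples.
        counts.append(int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))))
    return counts

fs = 8000
t = np.arange(fs) / fs                   # one second of audio
low = np.sin(2 * np.pi * 100 * t)        # 100 Hz tone
high = np.sin(2 * np.pi * 1000 * t)      # 1 kHz tone
# The higher-pitched sound yields more zero crossings per frame.
print(zero_crossing_fingerprint(low)[0], zero_crossing_fingerprint(high)[0])
```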
  • depending on context, the non-audible sound wave ID may refer to the analog form of the non-audible sound wave into which digital data is modulated, to the digital data before modulation, or to the digital data after demodulation.
  • likewise, the audible fingerprint sound wave ID may refer to the feature digital data extracted from the sound, to the portion of the main content sound from which it can be extracted, or to the feature digital data that the mobile receiver extracts from the portion of the content sound it has detected.
  • - Acoustic recognition module: a collective name for at least one component used to detect sound and extract the sound wave ID, for example at least one of a microphone, an ADC, a sound wave ID extracting unit, and a related content providing application.
  • FIG. 1 is a view for explaining a system for providing related content using a sound wave ID according to an embodiment of the present invention (hereinafter, 'related content providing system').
  • the related content providing system is a system for providing related content related to all or a part of main content output from the content player 100 through the mobile receiver 200.
  • the mobile receiver 200 may provide, for example, related content including an advertisement related to at least one of an image and a sound output as main content.
  • the mobile receiver 200 may provide related content for an indirect advertising product that is shown as an image or output as sound in the main content.
  • the mobile receiver 200 may provide related content associated with each of a plurality of indirect advertisement products that are displayed in the images output from the main content or mentioned in its sound.
  • the related content providing system may include a content player 100, a mobile receiver 200, and a server 300.
  • the content player 100 may output an image and / or sound.
  • the mobile receiver 200 may extract at least one sound wave ID from the sound output by the content player 100.
  • the sound wave ID may be mapped to related content through a related content database (DB), a DB in which sound wave IDs and related content correspond to each other.
  • the sound wave ID can be used to identify the main content, to identify a relative time position within the main content, to identify related content associated with (corresponding to) an indirect advertisement within the main content, or to identify related content associated with an indirect advertisement at a relative time position within the main content.
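A minimal sketch of how the related content DB could resolve such an ID, assuming (hypothetically) that the ID packs both a content identifier and a relative time slot; the packing format and the sample data are illustrations, not taken from the patent:

```python
# Hypothetical related content DB keyed by (main content, time slot).
# Structure and sample data are illustrative only.
RELATED_CONTENT_DB = {
    ("drama_ep7", 0): "Opening sponsor page",
    ("drama_ep7", 1): "Purchase page for the handbag at minute 12",
}

def parse_sound_wave_id(sound_wave_id):
    """Assume a 'content:slot' packing of the ID (an assumption)."""
    content, slot = sound_wave_id.split(":")
    return content, int(slot)

def select_related_content(sound_wave_id):
    """Resolve an ID to related content at its relative time position."""
    return RELATED_CONTENT_DB.get(parse_sound_wave_id(sound_wave_id))

print(select_related_content("drama_ep7:1"))
```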
  • the content player 100 may include, for example, a speaker or a sound output terminal for reproducing sound, as in a TV, a radio, a desktop computer, or a mobile device (e.g., a laptop computer, a tablet computer, a smartphone, a wearable computer, etc.), and may further include a monitor or an image output terminal for displaying an image.
  • the mobile receiver 200 may extract the sound wave ID included in the sound of the main content output from the content player 100, and may present the related content corresponding to the sound wave ID so that it can be recognized visually or aurally.
  • the mobile receiver 200 may include a mobile device such as a laptop computer, a tablet computer, a smartphone, a wearable computer, and the like.
  • the mobile receiver 200 may include a microphone (not shown) and an ADC (not shown) for receiving a sound wave ID.
  • the mobile receiver 200 extracts the sound wave ID, transmits it to the server 300, receives the related content corresponding to the sound wave ID from the server 300, and displays the related content as sound or an image.
  • the mobile receiver 200 may display the related content corresponding to the sound wave ID to the user in the form of a notification message, an image, and / or a voice.
  • the mobile receiver 200 is a mobile device such as a laptop computer, a tablet computer, a wearable computer, or a smartphone, and includes a microphone (not shown) for recognizing sound, a communication unit (not shown) for communicating with the server, a computer processor (not shown), a memory (not shown), and a program (not shown). The program is loaded into the memory under the control of the computer processor to perform operations such as recognizing the sound wave ID, transmitting the recognized sound wave ID to the server 300, and notifying the user of the related content received from the server 300.
  • the technique of inserting the sound wave ID into the main content and extracting it may use the techniques disclosed in the following patent applications. For example, the technique of inserting an ID into sound and the technique of extracting a content ID inserted into sound may use the technique disclosed in Patent Application No. 10-2012-0038120 (Position Estimation Method and Position Estimation System for Estimating the Position of a Mobile Terminal Using an Acoustic System, and Acoustic System Used Therein), filed by the present inventor with the Korean Intellectual Property Office on April 12, 2012. It is also possible to use the technique for recognizing and extracting an ID included in a sound signal disclosed in Patent Application No. 10-2012-0078410 (Including an ID in a Sound Signal), filed by the present inventor with the Korean Intellectual Property Office on July 18, 2012.
  • the technique of extracting the audible fingerprint sound wave ID corresponding to the characteristic of the sound included in the main content may use the technique disclosed in the following patent applications.
  • Korean Patent Registration No. 10-1310943 (September 11, 2013)
  • Korean Patent Registration No. 10-1313293 (September 24, 2013)
  • Korean Patent Registration No. 10-0893671 (September 9, 2009) (Generation and Matching of Hashes of Multimedia Content)
  • the server 300 may receive a sound wave ID from the mobile receiver 200, select the related content corresponding to that sound wave ID by referring to a related content DB (not shown) in which related content corresponds to each sound wave ID, and transmit the selected related content to the mobile receiver 200.
  • the server 300 may be, for example, an advertisement server that stores and manages advertisements, or a multimedia instant messenger (MIM) service server that transmits text and/or multimedia data to friends belonging to the same group.
  • when the server 300 is implemented as a MIM service server, the server 300 transmits the related content to a client program that interworks with the MIM service server.
  • FIG. 2 is a view for explaining an embodiment of a mobile receiver.
  • the mobile receiver 200 may include a microphone 201, an ADC 203, a processor 207, a memory 209, a storage device 211, a display unit 213, and a communication unit 215.
  • the microphone 201 may detect the sound output from the content player 100; that is, it may convert the received sound into an electrical signal.
  • the ADC 203 converts the sound, i.e. the analog sound wave signal detected by the microphone 201, into digital data.
  • the ADC 203 may convert into digital data a sound wave signal detected by the microphone within a preset frequency range, or within the maximum frequency range allowed by the hardware.
  • the ADC 203 may convert the sound into digital data only when the sound has an intensity at or above a preset threshold.
  • alternatively, the ADC 203 may perform the conversion only when the sound within the preset frequency range has an intensity at or above a preset threshold.
  • the threshold may be, for example, a relative value such as signal-to-noise ratio (SNR) or an absolute value such as sound intensity.
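The gating described in the preceding bullets can be sketched as follows. The 6 dB SNR threshold, the fixed noise floor, and the frame-energy estimate are assumptions for illustration; the patent fixes no particular values.

```python
import numpy as np

SNR_THRESHOLD_DB = 6.0   # assumed relative threshold; not from the patent

def snr_db(frame, noise_floor):
    """Relative measure: frame power versus an estimated noise floor."""
    p_sig = np.mean(np.square(frame))
    return 10 * np.log10(p_sig / noise_floor)

def should_convert(frame, noise_floor):
    """Gate the ADC/extractor: process only sufficiently loud frames."""
    return snr_db(frame, noise_floor) >= SNR_THRESHOLD_DB

rng = np.random.default_rng(0)
noise_floor = 1e-4
loud = 0.1 * rng.standard_normal(1024)      # strong ambient sound
quiet = 0.001 * rng.standard_normal(1024)   # near-silence
print(should_convert(loud, noise_floor), should_convert(quiet, noise_floor))
```

An absolute-intensity gate would simply compare `p_sig` against a fixed level instead of dividing by the noise floor; either variant lets the downstream sound wave ID extraction stay asleep in a quiet room.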
  • the processor 207 is a device for interpreting and executing computer instructions; it may be the central processing unit (CPU) of a general-purpose computer or, in a mobile device such as a smartphone, an application processor (AP). The processor 207 can execute the program loaded in the memory 209.
  • the memory 209 may be implemented as a volatile storage device such as random access memory (RAM), and an operating system (OS), applications, data, and the like required for driving the mobile receiver 200 may be loaded into it.
  • the storage device 211 is a nonvolatile storage device for storing data, and may be implemented as a device such as a hard disk drive (HDD), a flash memory, or a secure digital (SD) memory card.
  • the display unit 213 may display content by voice, video, and / or text to the user.
  • the display unit 213 may be implemented as a device such as a display, a sound output terminal, or a speaker connected thereto.
  • the communication unit 215 may be implemented as an apparatus for communicating with the outside.
  • the communication unit 215 may communicate with the server 300.
  • the related content providing application 223 may be loaded into the memory 209 and operated under the control of the processor 207.
  • the related content providing application 223 receives the sound using the microphone 201 and the ADC 203 and, when a sound wave ID is successfully extracted from the sound, transmits the extracted sound wave ID to the server 300 through the communication unit 215.
  • as an additional function, the related content providing application 223 may determine whether the sound sensed through the ADC 203 has an intensity at or above a preset threshold, and extract the sound wave ID only when it does.
  • as a further additional function, the related content providing application 223 may determine whether the sound sensed through the ADC 203 has an intensity at or above a threshold within a preset frequency range, and extract the sound wave ID from the sensed sound only when the sound has an intensity at or above the preset threshold within that frequency range.
  • the operation of detecting and extracting the sound wave ID may be implemented in hardware, separately from the related content providing application 223. That is, the sound wave ID extracting unit 205, indicated by a dotted line in FIG. 2, may be implemented in hardware, and the mobile receiver 200 may further include the sound wave ID extracting unit 205.
  • in this case, the sound wave ID extracting unit 205 extracts the sound wave ID from the digital data converted by the ADC 203, and the related content providing application 223 transmits the sound wave ID extracted by the sound wave ID extracting unit 205 to the server 300 through the communication unit 215.
  • at least one of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 remains deactivated, and is activated to perform its own operation when a schedule or a predetermined event occurs.
  • for example, the microphone 201 and the ADC 203 may remain deactivated and be activated to perform their operations when a schedule or a predetermined event occurs.
  • when the related content providing application 223 is implemented with the function of determining whether the sound sensed by the microphone 201 has an intensity at or above a threshold within a preset frequency range, the related content providing application 223 is also activated to perform that operation when the microphone 201 and the ADC 203 are activated.
  • alternatively, the related content providing application 223 may be activated only when a sound wave ID is successfully extracted.
  • the activated related content providing application 223 may transmit the extracted sound wave ID to the server 300, receive the related content corresponding to the sound wave ID from the server 300, and display it as voice and/or video through the display unit 213.
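The round trip between the application and the server can be sketched with plain functions standing in for the network layer; the JSON message format, the field names, and the sample DB are hypothetical illustrations, not part of the patent.

```python
import json

# Hypothetical server side: resolve a sound wave ID to related content.
SERVER_DB = {"ID_0042": {"type": "ad", "url": "https://example.com/watch"}}

def server_handle(request_json):
    """Stand-in for the server 300: look up the ID, reply as JSON."""
    req = json.loads(request_json)
    content = SERVER_DB.get(req["sound_wave_id"])
    return json.dumps({"related_content": content})

def fetch_related_content(sound_wave_id):
    """Stand-in for the application 223 + communication unit 215."""
    response = server_handle(json.dumps({"sound_wave_id": sound_wave_id}))
    return json.loads(response)["related_content"]

print(fetch_related_content("ID_0042")["url"])
```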
  • Alternatively, the activated related content providing application 223 may search for and select the related content corresponding to the extracted sound wave ID, and display the related content as audio and/or video through the display unit 213.
  • the related content database may be loaded in the memory 209, and the related content providing application 223 may search the related content database to select related content corresponding to the sound wave ID.
  • In an embodiment, the microphone 201 is deactivated and is activated when a schedule or event occurs. When the microphone 201 detects a sound, for example a sound belonging to a preset band, the other components (e.g., the ADC 203, the related content providing application 223, and the sound wave ID extracting unit 205) may be activated simultaneously or sequentially.
  • When activated sequentially, a component that determines whether the sensed sound has an intensity above a preset threshold (e.g., the related content providing application) may be activated first, and then a component that extracts the sound wave ID (e.g., the sound wave ID extracting unit) may be activated.
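The sequential activation in the items above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the threshold value is a placeholder, and a simple RMS intensity measure stands in for the preset-frequency-band check described in the text.

```python
import math

THRESHOLD = 0.2  # assumed preset intensity threshold (illustrative)

def intensity(samples):
    """Root-mean-square intensity of the sensed sound (band filtering
    omitted for brevity; the patent checks a preset frequency band)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def on_schedule_or_event(samples, extract_id):
    """Sequential activation: the cheap intensity check runs first,
    and the sound wave ID extractor is activated only if it passes."""
    if intensity(samples) < THRESHOLD:
        return None                # stay deactivated: no extraction
    return extract_id(samples)     # activate the ID-extracting component
```

The point of the ordering is that the inexpensive check gates the expensive one, so most wake-ups end before any extraction work is done.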
  • For a more detailed description of the schedule, refer to the embodiments described with reference to FIG. 4.
  • For operations activated at regular or irregular periods according to a schedule, refer to the embodiments described with reference to FIGS. 9 to 11; for operations activated when a predetermined event occurs, refer to the embodiments described with reference to FIGS. 12 to 14.
  • At least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be deactivated, and then activated at regular or irregular intervals according to a schedule created with reference to the main content schedule.
  • For example, at least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated at a predetermined period only during the playback time of the main content.
  • Alternatively, at least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated at a first period while the main content is played, and at a period longer than the first period while the main content is not played.
  • At least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated at regular or irregular intervals in accordance with the times at which a sound wave ID can be extracted. For example, it can be activated at a time when the sound wave ID can be extracted.
  • At least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated at a preset period; if extraction of the sound wave ID fails but it is determined that a sound wave ID exists, the component may be activated at a period shorter than the preset period. Thereafter, once the sound wave ID is extracted, activation may be performed at the original period.
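The adaptive period in the item above (shorten the period after a failed extraction where an ID is still judged to exist, and return to the preset period once extraction succeeds) can be sketched as follows; the concrete period values are illustrative assumptions, not values from the patent.

```python
# Hypothetical period controller for the activation schedule.
PRESET_PERIOD = 600   # seconds; assumed 10-minute default period
SHORT_PERIOD = 120    # seconds; assumed shortened retry period

def next_period(extracted, id_likely_present):
    """Choose the next activation period from the extraction result."""
    if extracted:
        return PRESET_PERIOD      # success: return to the original period
    if id_likely_present:
        return SHORT_PERIOD       # retry sooner so the ID is not missed
    return PRESET_PERIOD          # nothing detected: stay on schedule
```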
  • At least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated at a preset period, or at a shorter period when the following formula is satisfied. The inventor of the present invention determined that if the sensed intensity of the sound is smaller than the threshold but close to it, there is a possibility that a sound carrying a sound wave ID exists; accordingly, when the above formula is satisfied, the components are configured to be activated at a period shorter than the original period.
  • At least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be activated when a predetermined event occurs.
  • Here, the event may be, for example, the execution of a specific application, the activation or deactivation of the display unit of the mobile receiver 200, or the reception of a push message by the mobile receiver 200 from the outside.
  • The specific application may be, for example, an application that exclusively uses the microphone, but other applications are also possible.
  • At least one component of the microphone 201, the ADC 203, the sound wave ID extracting unit 205, the display unit 213, the communication unit 215, and the related content providing application 223 may be configured not to be executed when a predetermined event occurs while being performed at a preset period. Then, when the predetermined event ends, it is activated again according to the original period.
  • For example, the processor 207 communicates the occurrence of the event to an operating system (OS) (not shown) that may be loaded in the memory 209 of the mobile receiver 200, and the operating system transmits the occurrence of the event to filters (not shown), which may be loaded in the memory 209 and correspond respectively to the applications operating in the mobile receiver 200. Each of these filters, according to its preset configuration, either passes the occurrence of the event to its corresponding application or does not.
  • FIG. 3 is a view for explaining another embodiment of a mobile receiver.
  • Referring to FIG. 3, the mobile receiver 400 includes a microphone 401, a low power acoustic intensity measurement unit 402, an ADC 403, a sound wave pattern recognition unit 404, a processor 407, a memory 409, a storage device 411, a display unit 413, and a communication unit 415.
  • In the mobile receiver 400, the sound wave ID extractor 205 of FIG. 2 is divided into a low power acoustic intensity measurement unit 402 and a sound wave pattern recognition unit 404.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated according to a schedule, or activated when a predetermined event occurs.
  • The schedule may be written so that activation occurs at regular or irregular intervals. For a more detailed description of the schedule, refer to the embodiments described with reference to FIG. 4.
  • For example, the microphone 401, the low power acoustic intensity measurement unit 402, and the ADC 403 may be activated to perform their operation when a schedule or a predetermined event occurs. That is, the microphone 401, the low power acoustic intensity measurement unit 402, and the ADC 403 are components that remain deactivated and are activated when a schedule or predetermined event occurs.
  • As another example, the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, and the sound wave pattern recognition unit 404 may be activated to perform their operation when a schedule or a predetermined event occurs.
  • the associated content providing application 423 may be activated only when extraction of the sound wave ID is successful.
  • The activated related content providing application 423 transmits the extracted sound wave ID to the server 300, receives the related content corresponding to the sound wave ID from the server 300, and may display the related content as audio and/or video through the display unit 413.
  • Alternatively, the activated related content providing application 423 may search for and select the related content corresponding to the sound wave ID, and display the related content as audio and/or video through the display unit 413.
  • a related content database may be loaded in the memory 409, and the related content providing application 423 may search the related content database and select related content corresponding to the sound wave ID.
  • In an embodiment, the microphone 401 is deactivated and is activated when a schedule or event occurs. When the microphone 401 detects a sound, for example a sound belonging to a preset band, the other components (e.g., the ADC 403, the related content providing application 423, the low power acoustic intensity measurement unit 402, or the sound wave pattern recognition unit 404) may be activated simultaneously or sequentially.
  • When activated sequentially, the low power acoustic intensity measurement unit 402 may first be activated to determine whether the sensed sound has an intensity above a preset threshold, and then the sound wave pattern recognition unit 404, which extracts the sound wave ID, may be activated.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be deactivated, and then activated at regular or irregular intervals according to a schedule created with reference to the main content schedule.
  • For example, at least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated at a predetermined period only during the playback time of the main content.
  • Alternatively, at least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated at a first period while the main content is played, and at a period longer than the first period while the main content is not played.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated at regular or irregular intervals in accordance with the times at which a sound wave ID can be extracted. For example, it can be activated at a time when the sound wave ID can be extracted.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated at a preset period; if extraction of the sound wave ID fails but it is determined that a sound wave ID exists, it may be activated at a period shorter than the original period. Thereafter, once the sound wave ID is extracted, activation may be performed at the original period.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated at a preset period, or at a shorter period when the following formula is satisfied. The inventor of the present invention determined that if the sensed intensity of the sound is smaller than the threshold but close to it, there is a possibility that a sound carrying a sound wave ID exists; accordingly, when the above formula is satisfied, the components are configured to be activated at a period shorter than the original period.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be activated when a predetermined event occurs.
  • Here, the event may be, for example, the execution of a specific application, the activation or deactivation of the display unit of the mobile receiver 400, or the reception of a push message by the mobile receiver 400 from the outside.
  • The specific application may be, for example, an application that exclusively uses the microphone, but other applications are also possible.
  • At least one component of the microphone 401, the low power acoustic intensity measurement unit 402, the ADC 403, the sound wave pattern recognition unit 404, and the related content providing application 423 may be configured not to be executed when a predetermined event occurs while being performed at a preset period. Then, when the predetermined event ends, it is activated again according to the original period.
  • For example, the processor 407 communicates the occurrence of the event to an operating system (OS) (not shown) that may be loaded in the memory 409 of the mobile receiver 400, and the operating system transmits the occurrence of the event to filters (not shown), which may be loaded in the memory 409 and correspond respectively to the applications operating in the mobile receiver 400. Each of these filters, according to its preset configuration, either passes the occurrence of the event to its corresponding application or does not.
  • In this embodiment, the low power acoustic intensity measurement unit 402 consumes little power but is continuously activated, whereas the sound wave pattern recognition unit 404 consumes more power than the low power acoustic intensity measurement unit 402 but remains deactivated and is activated according to a schedule or when a predetermined event occurs (that is, intermittently).
  • For example, the low power acoustic intensity measurement unit 402 may detect sound within a specific frequency range and continuously determine whether the detected sound has an intensity above a preset threshold.
  • That is, the low power acoustic intensity measurement unit 402 determines whether the sound input through the microphone 401 has an intensity greater than or equal to a preset threshold within a preset frequency range. If the sound has an intensity greater than or equal to the threshold, the low power acoustic intensity measurement unit 402 may activate the ADC 403 and the sound wave pattern recognition unit 404, which had been deactivated for low power operation. The activated ADC 403 may then convert the sound into digital data and transmit it to the activated sound wave pattern recognition unit 404.
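One conventional low-cost way to measure intensity at a preset frequency is the Goertzel algorithm, which evaluates a single DFT bin without a full FFT. The sketch below only illustrates how a unit such as the low power acoustic intensity measurement unit 402 might gate the more expensive components; the frequency and threshold values are assumptions, not values from the patent.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of `samples` at `target_freq` via the Goertzel algorithm,
    far cheaper than a full FFT when only one frequency bin matters."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def above_threshold(samples, sample_rate, target_freq, threshold):
    """Low-power gate: wake the ADC/pattern recognizer only when the
    measured in-band power meets the preset threshold."""
    return goertzel_power(samples, sample_rate, target_freq) >= threshold
```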
  • The sound wave pattern recognition unit 404 receives the sound in the form of digital data from the ADC 403, and extracts the sound wave ID if a sound wave ID is included in the received sound. At this time, if the processor 407 has been deactivated for low power operation, the sound wave pattern recognition unit 404 activates the processor 407. In the present embodiment, the sound wave pattern recognition unit 404 provides the extracted sound wave ID to the related content providing application 423.
  • Alternatively, the sound wave pattern recognition unit 404 receives the sound in the form of digital data from the ADC 403, compares the received sound with at least one stored sound wave ID to determine whether a sound wave ID can be extracted from the received sound, and, if it is determined that the sound wave ID can be extracted, directly extracts the sound wave ID. According to an alternative embodiment, the sound wave pattern recognition unit 404 determines whether a sound wave ID can be extracted from the sound received from the ADC 403, and if it is determined that the sound wave ID can be extracted, provides the sound received from the ADC 403 to the related content providing application 423, which may then extract the sound wave ID.
  • The operation of detecting and extracting the sound wave ID in separate hardware may be implemented using the technology disclosed in the following patent application.
  • For example, a technique of extracting, at low power, sound waves modulated into non-audible sound is disclosed in Korean Patent Application No. 10-2013-0141504 (filed Nov. 20, 2013; "Low power sound wave receiving method and mobile device using the same") and may be used.
  • The memory 409 may be loaded with a DMB application or a video player operating under the control of the processor 407, or with a codec 221 essential for playing content through them.
  • The DMB application, the video player, or the codec 221 may be stored in the storage 411 and loaded into and operated in the memory 409 by a user's command.
  • In addition, a message processing program (not shown) may be loaded into the memory 409 to display messages through the display unit 413.
  • In the following, the term "acoustic recognition module" refers collectively to the microphone, the ADC, the sound wave ID extractor, the related content providing application, the low power acoustic intensity measurement unit, and the sound wave pattern recognition unit included in a mobile receiver.
  • In order to operate at low power, the acoustic recognition module is activated according to a schedule or when a predetermined event occurs (that is, intermittently), and when activated, detects sound and extracts the sound wave ID.
  • Here, activation means a state in which the acoustic recognition module, which otherwise performs only the minimum functions to minimize power consumption, can perform the activities required to detect and extract the sound wave ID, such as executing the necessary hardware module, running the necessary software, or requesting necessary information from the OS.
  • For example, the acoustic recognition module may be activated and operated at a 10-minute period, which is shown in FIG. 9.
  • Alternatively, the acoustic recognition module may be activated and operated at irregular intervals by utilizing the temporal characteristics of the sound wave ID to be extracted from the main content, or by considering the broadcast time of the main content including the sound wave ID to be extracted; this is shown in FIG. 10.
  • Alternatively, the acoustic recognition module may be activated and operated at irregular intervals, which is shown in FIG. 11.
  • Alternatively, the acoustic recognition module may be activated and operated at irregular intervals according to an event issued in the mobile receiver, which is illustrated in FIGS. 12 to 14. FIG. 14 shows how an event occurring inside the mobile receiver is delivered to the acoustic recognition module.
  • FIG. 9 is a view for explaining the acoustic recognition module being activated at a predetermined period in order to operate at low power.
  • FIG. 9(a) shows the acoustic recognition module being activated at the same time that the main content begins to be output, in preparation for the time when the main content is output.
  • FIG. 9(b) shows the acoustic recognition module being activated at the second half or at the completion time of the main content, in preparation for the time when the main content is output.
  • FIG. 9(c) shows the acoustic recognition module being activated a plurality of times (for example, twice) at a predetermined period, in preparation for the time when the main content is output.
  • FIG. 10 is a view for explaining the acoustic recognition module being activated in accordance with the times at which the sound wave ID can be extracted during the broadcast and playback time of the main content.
  • Referring to FIG. 10, the activation period may be determined by considering the temporal positions at which the sound wave ID can be extracted from the main content output by the content player 100 and the degree to which the sound wave ID can be extracted. For example, the sound wave ID in the content may be repeatedly extractable for two consecutive minutes, followed by four minutes in which extraction is difficult, with this pattern repeating throughout the content. In this case, if the acoustic recognition module checks whether the sound wave ID can be extracted at intervals of at most 2 minutes, it can extract every extractable sound wave ID in the content within a short time while attempting extraction only intermittently, thereby reducing power consumption.
  • When the broadcast time of content from which a sound wave ID can be extracted is known, the acoustic recognition module may be activated at a shorter period during that time. For example, assume that the period of the acoustic recognition module is set to 10 minutes. If specific TV content airs for one hour every Saturday at 6 p.m. and a sound wave ID can be extracted from that content, the acoustic recognition module may operate every 5 minutes instead of every 10 minutes from 6 p.m. to 7 p.m. every Saturday. Alternatively, the acoustic recognition module may remain deactivated at all other times and operate only from 6 p.m. to 7 p.m. every Saturday.
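The Saturday example above can be sketched as a simple period selector. The window, the default 10-minute period, and the shortened 5-minute period come from the example in the text; the function name and return convention are assumptions for illustration.

```python
from datetime import datetime

DEFAULT_PERIOD_MIN = 10   # default activation period (minutes)
WINDOW_PERIOD_MIN = 5     # shortened period inside the broadcast window

def activation_period(now: datetime) -> int:
    """Return the activation period in minutes: shortened during the
    known broadcast window (Saturday 18:00 to 19:00 in the example)."""
    in_window = now.weekday() == 5 and 18 <= now.hour < 19  # Saturday
    return WINDOW_PERIOD_MIN if in_window else DEFAULT_PERIOD_MIN
```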
  • In an embodiment, the mobile receiver 200 or 400 knows the times at which the sound wave ID can be extracted, that is, the reproduction times of sound wave ID1, ID2, and ID3 of FIG. 10. These times may be received through the server by the acoustic recognition module of the mobile receiver 200 or 400. When the playing time of sound wave ID1 is reached, the acoustic recognition module is activated to extract sound wave ID1. If extraction of sound wave ID1 fails, the acoustic recognition module may determine that the user of the mobile receiver 200 or 400 is not watching or listening to main content 1. If extraction of sound wave ID1 succeeds, the mobile receiver 200 or 400 may receive the related content corresponding to sound wave ID1 from the server 300 and display it.
  • Likewise, when the extraction of a sound wave ID succeeds, the mobile receiver 200 or 400 may receive and display the related content corresponding to the sound wave ID from the server 300. In this manner, the mobile receiver 200 or 400 may activate the acoustic recognition module at irregular intervals only at times when it is predicted that a sound wave ID may be extracted, and thereby extract the sound wave ID and display related content at low power.
  • FIG. 11 is a diagram for describing the activation period of the acoustic recognition module when the existence of a sound wave ID is estimated, through the sound wave ID extraction process, with at least a preset probability.
  • the mobile receiver 200 or 400 activates an acoustic recognition module in accordance with the originally planned period.
  • In this process, the acoustic recognition module may determine that a sound wave ID exists (hereinafter, existence determination) even though extraction fails. That is, for example, when the extracted sound wave ID has a Hamming distance from a specific sound wave ID existing in the related content DB that is less than or equal to a preset value, extraction of the sound wave ID may have failed, but the sound wave ID may be assumed to exist. Alternatively, the sound wave ID itself may include CRC (Cyclic Redundancy Check) information with which the presence of the sound wave ID can be checked and an extraction failure detected.
  • Alternatively, a signal-to-noise ratio (SNR) may be measured to determine whether it is greater than or equal to a preset threshold value, and the existence of the sound wave ID may thereby be estimated.
  • In this case, in order not to miss a time slot for extracting the sound wave ID of the main content, the acoustic recognition module cancels the activation planned after period 3, which is the period originally to be executed, and is activated with a period 4 that is shorter than period 3.
  • If extraction then succeeds, the period after period 4 can be adjusted back to the original period. By adjusting the period according to the result of each sound wave ID extraction attempt, it is possible to operate at low power while roughly maintaining the existing frequency of activation and increasing the probability of sound wave ID extraction.
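The Hamming-distance existence determination described above can be sketched as follows. The ID bit strings and the distance bound are illustrative assumptions; a real related content DB would hold the actual sound wave IDs.

```python
# Hypothetical existence check: extraction failed, but if the decoded
# bits are within a preset Hamming distance of a known sound wave ID
# in the related content DB, assume an ID exists (and shorten the period).
KNOWN_IDS = {"1011001110", "0100110001"}   # illustrative sound wave ID DB
MAX_HAMMING = 2                            # assumed preset distance bound

def hamming(a, b):
    """Number of bit positions in which two equal-length IDs differ."""
    return sum(x != y for x, y in zip(a, b))

def id_likely_present(decoded_bits):
    """Existence determination over the related content DB."""
    return any(hamming(decoded_bits, known) <= MAX_HAMMING
               for known in KNOWN_IDS)
```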
  • Alternatively, the acoustic recognition module may be activated and operated at irregular intervals according to an event issued in the mobile receiver 200 or 400. This will be described with reference to FIGS. 12 and 13.
  • FIG. 12 is a view for explaining the period of the acoustic recognition module being adjusted according to an event in which another application using the microphone 201 or 401, which is a shared resource inside the mobile receiver 200 or 400, is activated.
  • The microphone 201 or 401, which is part of the mobile receiver 200 or 400, is a resource shared by several applications, and may be shared in a first-come first-served (FCFS) manner, that is, in a manner in which it cannot be used by another application until the first application stops using it.
  • Accordingly, the intermittently activated acoustic recognition module may fail to use the microphone 201 or 401 when another application is using it. Therefore, if the acoustic recognition module detects an application capable of using the microphone 201 or 401, or if its attempt to use the microphone 201 or 401 fails, it delays activation until that application is terminated. In this way the total number of activations can be reduced, and the acoustic recognition module can operate at low power without interrupting the operation of the application using the microphone 201 or 401.
  • Referring to FIG. 12, application 1 included in the acoustic recognition module may be loaded in the memory 209 or 409 of the mobile receiver 200 or 400 at time t1.
  • the acoustic recognition module was scheduled to be activated according to the preset period 5
  • However, the acoustic recognition module was notified before time t2 of the execution of application 2, which has a high probability of using the microphone 201 or 401, or the acoustic recognition module obtained the currently running application information from the operating system at time t2 and thereby detected application 2 (the method of detecting or being notified will be described in detail later with reference to FIG. 14).
  • Accordingly, the acoustic recognition module may cancel the activation scheduled at time t2 and newly set the next period so as to be activated after t2.
  • The newly set period may be a period that activates the acoustic recognition module at the time obtained by doubling period 5 from t1.
  • Alternatively, the newly set period may be modified, at the time when the termination of application 2 is notified, so that time t3, at which the termination of application 2 is notified, becomes a time at which the acoustic recognition module is activated.
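The deferral logic of FIG. 12 can be sketched as follows. The function name and the specific fallback (doubling the delay from t1) follow the example in the text; treating the notified application end time as the next activation time corresponds to the t3 alternative. All names are illustrative assumptions.

```python
def next_activation(t1, period, mic_busy, t_app_end=None):
    """Next activation time for the acoustic recognition module.

    The activation scheduled at t1 + period is canceled while the
    shared microphone is busy; the module then retries at t1 + 2 *
    period, or immediately at t_app_end when it is notified that the
    blocking application terminated (time t3 in FIG. 12)."""
    if not mic_busy:
        return t1 + period        # normal periodic activation
    if t_app_end is not None:
        return t_app_end          # activate when the app terminates
    return t1 + 2 * period        # otherwise double the delay from t1
```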
  • For example, when the camera application is activated, that application may exclusively use a part of the acoustic recognition module for video recording. In this case, the acoustic recognition module may stop its periodic activation so that the camera application can exclusively use that part, and may be activated again when the camera application is terminated.
  • FIG. 13 is a diagram illustrating the period of the acoustic recognition module being adjusted according to an event in which another application using the microphone 201 or 401, which is a shared resource inside the mobile receiver 200 or 400, is activated.
  • Referring to FIG. 13, the operating system may notify the acoustic recognition module that application 4, which uses the microphone 201 or 401, is executed at time t5 (the method of detecting or receiving the notification will be described in detail later with reference to FIG. 14).
  • The acoustic recognition module that is notified of the execution of application 4 can use and share with application 4 the hardware and software resources needed for sound wave ID extraction, such as the microphone 201 or 401, the ADC 203 or 403, and the processor 207 or 407.
  • That is, the operating system may simultaneously provide the data obtained by converting the sound coming into the microphone 201 or 401 into digital data through the ADC 203 or 403 to both the acoustic recognition module and application 4, so that both sides can receive data on the sound.
  • In this case, compared to the acoustic recognition module being separately activated to attempt sound wave ID extraction, the sound wave ID can be extracted with lower power in terms of the total power consumption of the mobile receiver 200 or 400.
  • Accordingly, the acoustic recognition module may operate at low power by canceling the activation scheduled for the originally planned time t6.
  • For example, when a telephony application is activated, this application basically utilizes the microphone 201 or 401 and the ADC 203 or 403 and activates the processor 207 or 407; by notifying the acoustic recognition module at the same time, sound wave ID extraction is possible at low power.
  • As another example, when the mobile receiver 200 or 400 also serves as the content player, the acoustic recognition module can be activated while the content is played.
  • In an embodiment, the acoustic recognition module can estimate whether a particular application uses the microphone 201 or 401. Such estimation can be performed by checking the hardware usage permission information of each application, which can be obtained from the operating system. The list of running applications can likewise be obtained by requesting it from the operating system.
  • Although FIGS. 12 and 13 illustrate the acoustic recognition module having an irregular period according to an event inside the mobile receiver in which a specific application is executed or terminated, the type of event is not limited thereto.
  • For example, the event may be not only the execution or termination of a specific application, but also the switching of an application from the background mode to the foreground mode, or from the foreground mode to the background mode.
  • The event may also be one in which one or more internal components of the mobile receiver 200 or 400, rather than a specific application, are activated or deactivated. In more detail, this may be an event in which the display of the display unit is activated or deactivated.
  • Alternatively, the event may be the mobile receiver 200 or 400 receiving a push message from a push server.
  • FIG. 14 is a diagram for describing methods by which the mobile receiver 200 or 400 receives events in order to operate the acoustic recognition module at low power, as shown in FIGS. 12 and 13.
  • FIG. 14(a) illustrates a method in which applications running under the operating system are notified when a specific event occurs.
  • FIG. 14(b) illustrates a method in which a specific application notifies the application corresponding to the acoustic recognition module that an event has occurred.
  • FIG. 14(c) illustrates a method in which the acoustic recognition module is notified that a push server has sent a message, which allows messages to be received from the server at low power.
  • the mobile receiver 200 or 400 may include the function of receiving a call.
  • When a call is received, the operating system runs an application that receives the call, and transmits a call reception event to the filter of each application so that other applications may also respond to the incoming call.
  • For example, application 5 may be an application that handles incoming calls, and application 6 may be an application that is playing music.
  • The operating system forwards the call reception event to the filter of each application. Since application 6 must stop playing music when a call is received, the filter of application 6 may be set to pass the call reception event.
  • The filter of application 5 may also be set to pass the call reception event, since the user interface must be prepared so that a phone call can be made when a call is received.
  • The acoustic recognition module may receive the sound as digital data and perform sound wave ID extraction. Therefore, the filter of the application corresponding to the acoustic recognition module can also be configured to pass the call reception event. In other words, among the events sent by the operating system, those required for low power operation are set to pass through the filter, so that the acoustic recognition module can extract the sound wave ID at low power.
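The filter mechanism of FIG. 14(a) can be sketched as follows. The class and event names are illustrative assumptions; the point is that the operating system notifies every filter, and each filter forwards only the event types its application subscribed to.

```python
# Hypothetical event-filter dispatch: the OS sends each event to every
# application's filter; a filter hands the event to its application
# only if that event type was configured to pass.
class EventFilter:
    def __init__(self, app_name, passes):
        self.app_name = app_name
        self.passes = set(passes)   # event types this filter forwards
        self.delivered = []         # events actually handed to the app

    def on_event(self, event):
        if event in self.passes:
            self.delivered.append(event)

def os_dispatch(event, filters):
    """The operating system notifies the filter of every application."""
    for f in filters:
        f.on_event(event)
```

In the scenario of the text, the filters of application 5, application 6, and the acoustic recognition module would all pass the call reception event, while unrelated applications would block it.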
  • the acoustic recognition module may operate at low power.
  • Application 7 of FIG. 14(b) may be application 2 of FIG. 12 or application 4 of FIG. 13.
  • Application 7 may deliver an event destined for another application to the operating system.
  • This event may be an activation event of application 2 of FIG. 12 or an end event of application 4 of FIG. 13.
  • the event is passed to the operating system, which can then pass the event to all application filters.
  • the filter of the application included in the acoustic recognition module may be set to pass through the activation or termination event.
  • the acoustic recognition module can be notified of activation or termination of a particular application.
  • Rather than the activation or termination of an application, a method may be used in which the acoustic recognition module receives an event indicating the use or termination of the hardware or software that it intends to use.
  • the broadcast or playback schedule of the main content 1 and the main content 2 of FIG. 10 may be known in advance.
  • the application 7 of FIG. 14B may be the same application as the sound wave ID extraction app.
  • The sound wave ID extraction app may receive the broadcast or playback schedule of main contents 1 and 2 from a server and register the corresponding times as alarms of the operating system. When a registered time arrives, the alarm event is delivered to the filter of each application.
  • Application 5 functions independently of this alarm event, so Application 5's filter does not forward this event to Application 5.
  • The filter of the application included in the acoustic recognition module may receive the alarm event and start sound wave ID extraction. In the same way, by registering the times at which the sound wave ID sections of main contents 1 and 2 end as alarms of the operating system, the application included in the acoustic recognition module can end the sound wave ID extraction.
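The alarm-based start/stop scheme above can be sketched as follows. This is a minimal illustration: the schedule format, function names, and times are assumptions, not taken from the patent.

```python
# Sketch: register the known start/end times of the sound-wave-ID sections
# as OS alarms, so the recognition module runs only inside those windows.

def build_alarms(schedule):
    """schedule: list of (start, end) times (e.g. in seconds) during which a
    sound wave ID can be extracted. Returns a time-sorted list of alarms."""
    alarms = []
    for start, end in schedule:
        alarms.append((start, "START_EXTRACTION"))
        alarms.append((end, "STOP_EXTRACTION"))
    return sorted(alarms)

def is_extracting(alarms, t):
    """True when time t falls inside an extraction window, i.e. the most
    recent alarm at or before t was a START_EXTRACTION alarm."""
    active = False
    for when, action in alarms:
        if when > t:
            break
        active = (action == "START_EXTRACTION")
    return active

# Made-up example: main content 1 carries an ID during [10, 20),
# main content 2 during [40, 55).
alarms = build_alarms([(10, 20), (40, 55)])
```

Outside the registered windows the recognition module stays idle, which is how the schedule saves power.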
  • a mobile receiver 200 or 400 receives a message from a push server and delivers the message to an application through an operating system.
  • the application receiving the push message is Application8.
  • application 8 is a MIM service.
  • the mobile receiver 200 or 400 receives a push message from the server and delivers it to the operating system, which delivers it to the application filters.
  • the filter in Application 8 may be set to pass through a push message to perform the MIM function.
  • Application 9 operates independently of the push message, and the filter of Application 9 may be set not to pass through the push message.
  • the mobile receiver 200 or 400 may receive the push message while activating only the minimum hardware and software for receiving the push message in order to reduce power consumption.
  • the MIM service that receives the push message from the operating system takes an appropriate action for the push message.
  • the filter in the application portion of the acoustic recognition module may be set to pass this push message.
  • When various components of the mobile receiver 200 or 400, such as the AP and the display, are activated for the processing of a push message, the acoustic recognition module is activated as well. Thus, among the components of the mobile receiver 200 or 400, the hardware and software required by the acoustic recognition module can be shared with application 8. By sharing such hardware and software, the acoustic recognition module can operate at low power from the perspective of the mobile receiver 200 or 400 as a whole.
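The wakeup-sharing idea above can be sketched as follows. This is only an illustration of the accounting: the class names and the notion of counting "extra wakeups" are assumptions introduced for the sketch.

```python
# Sketch: run sound-wave-ID extraction only while the hardware (AP, display,
# audio path) is already awake for another reason, e.g. a push message.
# Piggybacking on an existing wakeup is what keeps the module "low power".

class Device:
    def __init__(self):
        self.awake = False
        self.extra_wakeups = 0   # wakeups caused solely by the recognition module

class PiggybackRecognizer:
    def __init__(self, device):
        self.device = device
        self.runs = 0

    def on_push_message(self):
        # The push path has already woken the AP; extraction shares that wakeup.
        self.device.awake = True
        self.runs += 1           # extract the sound wave ID while awake
        self.device.awake = False

    def standalone_run(self):
        # For comparison: a non-shared run must wake the device by itself.
        self.device.extra_wakeups += 1
        self.runs += 1

dev = Device()
rec = PiggybackRecognizer(dev)
for _ in range(3):
    rec.on_push_message()        # three shared wakeups, no extra power events
```

Three extractions occur without the recognition module ever waking the device on its own.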
  • FIG. 4 is a view for explaining a method of providing related content according to an embodiment of the present invention.
  • The related content providing method may include: outputting the main content by the content reproducing apparatus 100 (S101); activating the acoustic recognition module of the mobile receiver (S103); and extracting a sound wave ID (S105).
  • It may further include: receiving, by the server 300, the sound wave ID extracted in step S105 and searching for and selecting the related content corresponding to the sound wave ID (S107); and displaying, by the mobile receiver 200, the related content selected in step S107 (S109).
  • the acoustic recognition module is activated according to a schedule or activated when a predetermined event occurs.
  • the schedule may be written such that the acoustic recognition module is activated at regular and / or irregular intervals.
  • The schedule may be created, for example, with reference to a main content schedule (a schedule indicating the times when the main content is played) or a sound wave ID schedule (a schedule indicating the times at which the sound wave ID may be extracted), but this is merely illustrative.
  • The schedule does not necessarily have to be created with reference to the main content schedule or the sound wave ID schedule.
  • the schedule may be written to activate the acoustic recognition module at regular intervals or to be activated at irregular intervals.
  • Examples in which the acoustic recognition module is activated according to a schedule may be the embodiments described with reference to FIGS. 9 to 11, as will be easily understood by those skilled in the art.
  • Examples in which the acoustic recognition module is activated when an event occurs may be the embodiments described with reference to FIGS. 12 to 14, as will be easily understood by those skilled in the art.
  • When the related content providing application is used as part of the acoustic recognition module, the related content providing application operating in the memory 209 of the mobile receiver 200 transmits the extracted sound wave ID to the server 300.
  • The server 300 searches for and selects the related content corresponding to the received sound wave ID (S107). Then, the server 300 transmits the selected related content to the mobile receiver 200, and the mobile receiver 200 displays the related content to the user (S109).
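The server-side flow just described (extract an ID, send it to the server, receive and display the match) can be sketched as follows. This is a minimal illustration: the DB entries, function names, and return conventions are assumptions, not part of the patent.

```python
# Sketch of the FIG. 4 flow: the mobile receiver sends the extracted sound
# wave ID to the server, which looks it up in a related-content DB and
# returns the match for display. DB contents are made-up placeholders.

RELATED_CONTENT_DB = {            # server-side: sound wave ID -> related content
    "ID1": "coupon for main content 1",
    "ID2": "cast info for main content 2",
}

def server_select(sound_wave_id):                      # S107
    return RELATED_CONTENT_DB.get(sound_wave_id)

def mobile_receiver_flow(extracted_id, display):
    related = server_select(extracted_id)              # send ID, receive content
    if related is not None:
        display.append(related)                        # S109: show to the user
    return related

screen = []
mobile_receiver_flow("ID1", screen)
```

An unknown ID simply yields no related content, so the receiver displays nothing for it.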
  • FIG. 5 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • The related content providing method may include: outputting the main content by the content reproducing apparatus 100 (S201); activating the acoustic recognition module of the mobile receiver (S203); and extracting a sound wave ID (S207).
  • It may further include: the mobile receiver searching for and selecting the related content corresponding to the sound wave ID extracted in step S207 (S209); and the mobile receiver 200 displaying the related content selected in step S209 (S211).
  • The related content providing method may further include a step (S205) in which the server transmits update data for the related content DB to the mobile receiver, and the mobile receiver may update the related content DB it stores using the update data received from the server.
  • the related content providing application operating in the memory 209 of the mobile receiver stores the related content DB received from the server 300 in the storage device 211 or updates the previously stored DB.
  • the related content providing application operating in the memory 209 may search for and select the related content DB corresponding to the sound wave ID (S209), and display the selected related content through the display unit 213 (S211).
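The local-DB variant just described (FIG. 5: the server pushes DB updates, the receiver resolves IDs locally) can be sketched as follows. This is a minimal illustration: the class name, update format, and data values are assumptions.

```python
# Sketch of the FIG. 5 variant: the server pushes related-content DB updates
# to the mobile receiver (S205), which then resolves sound wave IDs locally
# (S209) without a per-lookup network round trip.

class LocalRelatedContentDB:
    def __init__(self):
        self.db = {}

    def apply_update(self, update):
        # S205: merge update data received from the server into local storage.
        self.db.update(update)

    def select(self, sound_wave_id):
        # S209: purely local search; no server round trip is needed.
        return self.db.get(sound_wave_id)

local = LocalRelatedContentDB()
local.apply_update({"ID1": "related content A"})
# A later update may add entries and revise existing ones.
local.apply_update({"ID2": "related content B", "ID1": "related content A v2"})
```

Because lookups are local, the radio can stay off at lookup time, which complements the low-power goal; only the occasional update transfer uses the network.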
  • In the step S203 of activating the acoustic recognition module of the mobile receiver, the acoustic recognition module may be activated according to a schedule or when a predetermined event occurs, similarly to the activation step described with reference to FIG. 4.
  • the mobile receiver 200 directly selects the related content corresponding to the sound wave ID.
  • the server 300 may send some or all of the related content DB to the mobile receiver 200 to update some or all of the related content previously stored in the mobile receiver 200.
  • the time when the server 300 transmits the related content DB is not limited to the time shown in FIG. 5.
  • That is, when the server 300 sends the related content DB to update the related content DB previously stored in the mobile receiver 200, the server 300 is not limited to transmitting at a specific time.
  • FIG. 6 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • In this embodiment, the mobile receiver 200, not the content player 100, outputs the main content, and the mobile receiver 200 is implemented to recognize the sound.
  • Since the content providing method according to the present embodiment is applied to the system of FIG. 1, the embodiment of FIG. 6 will be described based on its differences from the embodiment of FIG. 4.
  • the mobile receiver 200 outputs main content (S301), the acoustic recognition module of the mobile receiver is activated (S305), and the sound wave ID is extracted (S307).
  • The method may include: the server searching for and selecting the related content corresponding to the sound wave ID extracted in step S307 (S309); and the mobile receiver 200 displaying the related content selected in step S309 (S311).
  • In the step S305 of activating the acoustic recognition module of the mobile receiver, the acoustic recognition module may be activated according to a schedule or when a predetermined event occurs, similarly to the activation step described with reference to FIG. 4.
  • Step S307 can be implemented, for example, in the following manner.
  • A method in which the microphone 201 detects the sound output through a speaker (not shown) of the display unit 213 provided in the mobile receiver 200, and the acoustic recognition module extracts the sound wave ID from the detected sound.
  • A method in which the acoustic recognition module receives, from the internal operating system, the digital data before it is output through the speaker (not shown) of the display unit 213 provided in the mobile receiver 200, and extracts the sound wave ID from that digital data.
  • A method in which the input terminal of the microphone 201 of a headset (not shown) connected to the mobile receiver 200 is connected to the sound output terminal of the headset (not shown), and the acoustic recognition module extracts the sound wave ID from the signal delivered to the small speaker of the headset (not shown).
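The three capture paths above can be sketched as follows. This is purely illustrative: the source names, the pipeline dictionary, and the marker-based "extraction" stand in for a real sound-wave-ID decoder, and none of these names come from the patent.

```python
# Sketch of the three extraction paths for content played on the mobile
# receiver itself: (1) microphone pickup of the speaker output, (2) tapping
# the digital audio data before it reaches the speaker, (3) a wired loopback
# through a headset connector.

def extract_id(samples):
    # Placeholder for real sound-wave-ID extraction; here the "ID" is simply
    # embedded as a string marker in the sample stream.
    for s in samples:
        if isinstance(s, str) and s.startswith("ID"):
            return s
    return None

def capture(source, audio_pipeline):
    if source == "microphone":
        return audio_pipeline["speaker_output"]     # (1) acoustic pickup
    if source == "os_digital":
        return audio_pipeline["pre_speaker_data"]   # (2) tap before output
    if source == "headset_loopback":
        return audio_pipeline["headset_signal"]     # (3) wired loopback
    raise ValueError(source)

pipeline = {
    "pre_speaker_data": [0.1, "ID2", 0.3],
    "speaker_output":   [0.1, "ID2", 0.3],   # same content, after the speaker
    "headset_signal":   [0.1, "ID2", 0.3],
}
```

All three paths see the same content; path (2) avoids acoustic noise entirely, which is one motivation for tapping the digital data before output.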
  • The acoustic recognition module transmits the sound wave ID extracted in step S307 to the server 300.
  • The server 300 searches for and selects the related content corresponding to the sound wave ID by referring to the related content DB stored in the server 300 or in a separate storage device (not shown) (S309).
  • the server 300 transmits the selection result to the mobile receiver 200, and the mobile receiver 200 displays the received selection result.
  • FIG. 7 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • The related content providing method may include: outputting the main content by the content reproducing apparatus 100 (S401); activating the acoustic recognition module of the mobile receiver (S403); extracting a sound wave ID (S407); the mobile receiver searching for and selecting the related content corresponding to the sound wave ID extracted in step S407 (S409); and the mobile receiver 200 displaying the related content selected in step S409 (S411).
  • The related content providing method may further include a step (S405) in which the server transmits update data for the related content DB to the mobile receiver, and the mobile receiver may update the related content DB it stores using the update data received from the server.
  • step S407 can be implemented by the same method as the above-described step S307.
  • In the step S403 of activating the acoustic recognition module of the mobile receiver, the acoustic recognition module may be activated according to a schedule or when a predetermined event occurs, similarly to the activation step described with reference to FIG. 4.
  • the mobile receiver 200 directly selects the related content corresponding to the sound wave ID.
  • the server 300 may send some or all of the related content DB to the mobile receiver 200 to update some or all of the related content previously stored in the mobile receiver 200.
  • The time when the server 300 transmits the related content DB is not limited to the time shown in FIG. 7. That is, when the server 300 sends the related content DB to update the related content DB previously stored in the mobile receiver 200, the transmission is not limited to a specific time.
  • FIG. 8A illustrates an example of main contents, each of which is composed of a combination of an image and a sound, and a sound wave ID may be extracted at a corresponding time of the main content.
  • Assuming that main content such as that of FIG. 8(a) is output in the embodiment of FIG. 4, the operation thereof will be described.
  • The content player 100 outputs the main contents (main contents 1, 2, and 3) as time passes, and sound wave IDs may be extracted from these contents.
  • Sound wave ID1 can be extracted from sound when main content 1 is output.
  • Sound wave ID2 can be extracted from sound when main content 2 is output.
  • Sound wave ID3 can be extracted from sound when main content 3 is output.
  • the mobile receiver 200 sequentially extracts each sound wave ID, transmits the sound wave IDs to the server 300, and receives and displays related contents corresponding to the sound wave IDs from the server 300.
  • The mobile receiver 200 may perform the extraction of the sound wave ID according to a schedule or when a predetermined event occurs (that is, intermittently). As this is the same as in the above-described embodiments, it will not be described in detail.
  • FIG. 8B illustrates another example of main contents output through the content reproducing apparatus.
  • Each of the main contents includes a combination of an image and a sound, and a sound wave ID may be extracted from the sound.
  • This differs from FIG. 8(a) in that there is content from which two different sound wave IDs can be extracted.
  • Assuming that main content such as that of FIG. 8(b) is output in the embodiment of FIG. 6, the operation thereof will be described.
  • the mobile receiver 200 outputs main contents (main contents 4, 5, 6) over time, and may extract sound wave IDs from these main contents, respectively.
  • Sound wave ID1 can be extracted from sound when main content 4 is outputted.
  • Sound waves ID2 and ID3 can be extracted from sound when main content 5 is outputted.
  • Sound wave ID4 can be extracted from sound when main content 6 is outputted.
  • the mobile receiver 200 sequentially extracts each sound wave ID, transmits these IDs to the server 300, and receives and displays related contents corresponding to the sound wave IDs from the server 300.
  • The time for displaying the related content through the mobile receiver may be a time when the viewer's or listener's immersion in the main content is not required, such as during an advertisement or the ending credits of the main content.
  • For example, even if the sound wave ID from which the related content can be obtained is located in the middle of a drama, displaying the related content on the mobile receiver can be postponed to the point when the drama ends and the ending credits are displayed.
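The deferred-display idea above can be sketched as follows. This is a minimal illustration: the class name, the tick-based timeline, and the window encoding are assumptions introduced for the sketch.

```python
# Sketch: related content found mid-drama is queued and only shown once a
# time that does not break immersion (e.g. the ending credits) is reached.

class DeferredDisplay:
    def __init__(self):
        self.queue = []
        self.shown = []

    def on_related_content(self, content, now, ok_times):
        if any(start <= now < end for start, end in ok_times):
            self.shown.append(content)      # advertisement / credits: show now
        else:
            self.queue.append(content)      # mid-drama: postpone display

    def on_time(self, now, ok_times):
        if any(start <= now < end for start, end in ok_times):
            self.shown.extend(self.queue)   # credits reached: flush the queue
            self.queue.clear()

credits = [(60, 70)]                        # ending credits at t in [60, 70)
disp = DeferredDisplay()
disp.on_related_content("drama merchandise", now=30, ok_times=credits)  # queued
disp.on_time(now=65, ok_times=credits)                                  # shown
```

The ID is extracted when it occurs, but the user only sees the related content once the non-immersive window begins.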
  • a computer-readable recording medium having recorded thereon a program for executing the steps constituting the method described above with reference to FIGS. 4 to 7 may be provided.
  • FIG. 15 is a view for explaining a method for providing related content according to an embodiment of the present invention.
  • the related content providing method may include: detecting, by the mobile receiver 200, a sound output from a content player for playing content (S501); Converting the sensed sound into digital data by the mobile receiver 200 (S503); Extracting, by the mobile receiver 200, sound wave ID from the digital data (S505); Searching for and selecting related content corresponding to the extracted sound wave ID (S507); And displaying, by the mobile receiver 200, the selected related content as an image and / or a voice (S509).
  • The related content providing method may further include determining whether the sound sensed in step S501 has an intensity greater than or equal to a preset threshold; the determining step may be performed between steps S501 and S505.
  • the determining may be performed between steps S501 and S503.
  • the step S503 of converting the sound sensed in step S501 into digital data may be performed when it is determined that the sound sensed in step S501 has an intensity greater than or equal to a preset threshold.
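The threshold gate just described can be sketched as follows. This is only an illustration: the RMS intensity measure, the threshold value, and the stand-in for ADC output are assumptions, not specified by the patent.

```python
# Sketch: only when the sensed sound's intensity reaches a preset threshold
# is it converted (S503) and passed on to ID extraction, saving the power
# that would otherwise be spent processing silence.

def rms(samples):
    # Root-mean-square intensity of a short frame of samples.
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def gated_convert(samples, threshold=0.1):
    """Return digitized samples only if the sound is loud enough; otherwise
    skip conversion entirely and stay idle."""
    if rms(samples) < threshold:
        return None                        # too quiet: save power, do nothing
    return [round(s, 3) for s in samples]  # stand-in for the real ADC output

quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.5, -0.4, 0.45, -0.5]
```

Quiet frames are dropped before conversion, so the downstream extraction components never have to wake for them.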
  • Either the mobile receiver 200 or the server 300 may be implemented to perform the step (S507) of searching for and selecting the related content.
  • When the server 300 is configured to perform the step of searching for and selecting the related content (S507), the related content providing method according to an embodiment of the present invention may further include transmitting the sound wave ID extracted by the mobile receiver 200 in step S505 to the server 300, and transmitting the related content selected by the server 300 to the mobile receiver 200. The transmitting of the sound wave ID to the server 300 may be performed between steps S505 and S507, and the transmitting of the related content selected by the server 300 to the mobile receiver 200 may be performed between steps S507 and S509.
  • The related content providing method may further include an activating step (S504).
  • the activating step S504 may be a step of activating a component that performs S501, a component that performs S502, and a component that performs S503.
  • While the components that perform S501, S502, and S503 are deactivated and then activated, the components that perform S505 and S507 may already be activated.
  • The activating step may include: activating the component that performs S501 and the component that performs S502; and, if the execution of step S502 succeeds, activating the component that performs S503 and the component that performs S507.
  • The activating step (S504) may be configured to include: activating the component performing step S501; and, when sound is detected as a result of performing step S501 (for example, when sound belonging to a preset band is detected), activating the component performing step S502 and the component performing step S503.
  • In this case, the component performing step S507 and the component performing step S509 may be configured to be activated in various ways, for example, to be already activated, or to be activated when step S503 succeeds.
  • The activating step S504 may also include: activating the component that performs step S502 when sound is detected as a result of performing step S501; and, if it is determined in step S502 that the sound sensed in step S501 has an intensity greater than or equal to a preset threshold, activating the component that performs step S503 and the component that performs step S505.
  • the component performing the step S507 and the component performing the step S509 may be activated simultaneously or sequentially.
  • the activating step S504 may be performed according to a schedule or may be performed when a predetermined event occurs.
  • For the schedule, please refer to the embodiments described with reference to FIG. 4.
  • The activating step S504 may be performed according to a schedule as in the embodiments described with reference to FIGS. 9 to 11, or may be performed when a predetermined event occurs as in the embodiments described with reference to FIGS. 12 to 14.
  • the activating step S504 may be performed at regular intervals and / or at irregular intervals according to a schedule created by referring to the main content schedule.
  • the activating step S504 may be performed at a predetermined period only during the time when the main content is played.
  • the activating step (S504) may be performed at a first predetermined period during the time when the main content is played, and at a second period longer than the first period during the time when the main content is not played.
  • the activating step S504 may be performed at regular intervals and / or at irregular intervals according to a schedule created with reference to the sound wave ID schedule.
  • the activating step S504 may be performed at a time when the sound wave ID may be extracted.
  • The activating step (S504) may be performed at a preset third period; however, when the extraction of the sound wave ID has failed but it is determined that a sound wave ID is present, the activating step (S504) may be performed at a fourth period shorter than the third period. After that, once the sound wave ID is extracted, the activating step (S504) may again be performed at the third period.
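The adaptive third/fourth-period behavior above can be sketched as follows. The concrete period values and function names are made-up assumptions for the sketch.

```python
# Sketch: poll at a slow "third period" normally, drop to a faster "fourth
# period" while an ID is believed present but extraction keeps failing, and
# return to the slow period once extraction succeeds.

THIRD_PERIOD = 10.0   # normal activation interval in seconds (made-up value)
FOURTH_PERIOD = 2.0   # faster retry interval (made-up value)

def next_period(extraction_failed, id_believed_present):
    if extraction_failed and id_believed_present:
        return FOURTH_PERIOD   # retry quickly: an ID should be there
    return THIRD_PERIOD        # success, or nothing expected: back off
```

The fast period is only used while there is evidence an ID is being missed, so the average activation rate, and hence the power draw, stays low.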
  • The activating step (S504) may be performed at a preset fifth period, and when the execution result of S502 satisfies the following equation, the activating step (S504) is performed at a sixth period shorter than the fifth period.
  • activating step S504 may be executed when a predetermined event occurs.
  • The event may be, for example, the execution of a specific application, the activation or deactivation of the display unit of the mobile receiver 200, or the reception of a push message from the outside by the mobile receiver 200.
  • The specific application may be, for example, an application that occupies the microphone exclusively, but it may also be another application.
  • The activating step (S504) may be configured to be performed at a preset seventh period and not to be executed while a predetermined event is occurring. When the predetermined event ends, the activating step (S504) is performed again according to the preset period (for example, the seventh period).
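The seventh-period-with-suppression behavior above can be sketched as follows. The tick-based timeline, period value, and window encoding are illustrative assumptions.

```python
# Sketch: activation runs at a preset "seventh period" but is suppressed
# while a predetermined event (e.g. another app holding the microphone) is
# active, and resumes on the same period when the event ends.

SEVENTH_PERIOD = 5  # ticks between activations (made-up value)

def activation_ticks(total_ticks, event_windows):
    """Return the tick indices at which S504 runs. event_windows is a list
    of (start, end) tick ranges during which activation is suppressed."""
    ticks = []
    for t in range(0, total_ticks, SEVENTH_PERIOD):
        suppressed = any(start <= t < end for start, end in event_windows)
        if not suppressed:
            ticks.append(t)
    return ticks

# A suppressing event (say, exclusive microphone use) during ticks [8, 17).
runs = activation_ticks(30, [(8, 17)])
```

Activations at ticks 10 and 15 fall inside the event window and are skipped; the schedule resumes unchanged afterwards.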
  • When the activating step is performed upon the occurrence of a predetermined event, the fact that the event has occurred is delivered to the operating system (OS) (not shown) of the mobile receiver 200; the operating system delivers the fact that the event has occurred to filters provided corresponding to the respective applications operating in the mobile receiver 200; and each filter, as set in advance, either delivers or does not deliver the fact that the event has occurred to the application corresponding to it.
  • the component performing the step of detecting sound may be, for example, a microphone, and the component performing the step of converting the sound into digital data may be an ADC.
  • The component performing the step of extracting the sound wave ID may be a sound wave ID extractor or a related content providing application, and the component performing the step of searching for and selecting the related content may be the mobile receiver (for example, a related content providing application) or a server.
  • The component performing the step of displaying the related content as video and/or audio may be a display unit.
  • The related content providing method described with reference to FIG. 15 has been described as a mobile receiver detecting the sound output from a content player, but this is merely an example; it may also be implemented as a configuration in which the mobile receiver detects the sound output from the mobile receiver itself, as in the embodiment of FIG. 6 or 7.
  • That is, the method may include: sensing, by the mobile receiver, the sound output from the mobile receiver; converting, by the mobile receiver, the sensed sound into digital data; extracting, by the mobile receiver, the sound wave ID from the digital data; selecting the related content corresponding to the extracted sound wave ID; and displaying, by the mobile receiver, the selected related content as an image and/or audio.
  • The method may further include: determining whether the sensed sound has an intensity greater than or equal to a preset threshold; and activating, by the mobile receiver, at least one of a component for converting the detected sound into digital data, a component for extracting the sound wave ID from the digital data, a component for selecting the related content corresponding to the extracted sound wave ID, and a component for displaying the selected related content as video and/or audio.
  • For the activating step, please refer to the above-described embodiments.
  • A computer-readable medium having recorded thereon a program for executing the related content providing method may also be provided.
  • The related content providing method may be implemented as a program executed in a mobile receiver having a memory, a processor, a microphone, and an ADC (for purposes of explanation, referred to as a program for executing the related content providing method). That is, a mobile receiver having a memory, a processor, a microphone, and an ADC functions as a computer, and thus the related content providing method can be implemented in the form of a program executed in such a mobile receiver.
  • The program for executing the related content providing method may include a related content providing application.
  • the program for executing the related content providing method may be executed after being loaded into a memory under the control of a processor.
  • The program for executing the related content providing method may, for example, execute one of the embodiments described with reference to FIG. 15.
  • the program for executing the related content providing method may execute, in each of the steps described with reference to FIG. 15, steps that can be implemented as a program in the mobile receiver.
  • The program for executing the related content providing method may cause the mobile receiver to execute steps S502, S503, S504, S505, S507, and/or S509. That is, steps S502, S503, S504, S505, S507, and S509 may all be implemented as a program, or some of them may be implemented in hardware with the remaining steps implemented in the program. For details of steps S502, S503, S504, S505, S507, and S509, refer to the embodiments described with reference to FIG. 15.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A related content providing system is provided, comprising the steps of: detecting a sound output of a content reproducing apparatus for reproducing content (referred to as "main content"); converting the detected sound into digital data; extracting a sound wave ID from the digital data; selecting related content corresponding to the sound wave ID; and displaying the related content as an image and/or speech on a mobile receiver.
PCT/KR2015/000890 2014-03-11 2015-01-28 Système et procédé de fourniture d'un contenu apparenté à faible puissance et support d'enregistrement lisible par ordinateur dans lequel est enregistré un programme WO2015137621A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2016574884A JP6454741B2 (ja) 2014-03-11 2015-01-28 低電力関連コンテンツ提供システム、方法、及びプログラムを記録したコンピューターで読むことができる記録媒体
US15/124,928 US9794620B2 (en) 2014-03-11 2015-01-28 System and method for providing related content at low power, and computer readable recording medium having program recorded therein
CN201580021062.3A CN106256131A (zh) 2014-03-11 2015-01-28 用于在低功率下提供相关内容的系统和方法以及其中记录有程序的计算机可读记录介质
EP15761122.9A EP3118850A1 (fr) 2014-03-11 2015-01-28 Système et procédé de fourniture d'un contenu apparenté à faible puissance et support d'enregistrement lisible par ordinateur dans lequel est enregistré un programme

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20140028614 2014-03-11
KR10-2014-0028614 2014-03-11
KR1020140059864A KR20150106299A (ko) 2014-03-11 2014-05-19 저 전력 연관 콘텐츠 제공 시스템, 방법, 및 프로그램을 기록한 컴퓨터로 읽을 수 있는 기록매체
KR10-2014-0059864 2014-05-19

Publications (1)

Publication Number Publication Date
WO2015137621A1 true WO2015137621A1 (fr) 2015-09-17

Family

ID=54072020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/000890 WO2015137621A1 (fr) 2014-03-11 2015-01-28 Système et procédé de fourniture d'un contenu apparenté à faible puissance et support d'enregistrement lisible par ordinateur dans lequel est enregistré un programme

Country Status (1)

Country Link
WO (1) WO2015137621A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017142111A1 (fr) * 2016-02-19 2017-08-24 주식회사 트리니티랩 Système de télécommande de dispositif intelligent au moyen d'un signal audio de bande de fréquences audibles
WO2017150746A1 (fr) * 2016-02-29 2017-09-08 주식회사 트리니티랩 Procédé de fourniture d'informations de faible puissance et procédé de commande à distance de dispositif intelligent pour signal audio en bande de fréquences audio

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100952894B1 (ko) * 2007-01-30 2010-04-16 후지쯔 가부시끼가이샤 음향 판정 방법 및 음향 판정 장치
KR20120064582A (ko) * 2010-12-09 2012-06-19 한국전자통신연구원 멀티미디어 컨텐츠 검색 방법 및 장치
KR20120129347A (ko) * 2011-05-19 2012-11-28 엘지전자 주식회사 무선 충전에 따른 음향 출력 상태의 제어
KR20130107340A (ko) * 2011-01-07 2013-10-01 야마하 가부시키가이샤 정보 제공 시스템, 휴대용 단말 장치, 서버 및 프로그램
JP2013232182A (ja) * 2012-04-02 2013-11-14 Yamaha Corp コンテンツ配信システム、サーバ装置及び端末装置




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15761122
Country of ref document: EP
Kind code of ref document: A1

ENP Entry into the national phase
Ref document number: 2016574884
Country of ref document: JP
Kind code of ref document: A

WWE Wipo information: entry into national phase
Ref document number: 15124928
Country of ref document: US

NENP Non-entry into the national phase
Ref country code: DE

REEP Request for entry into the european phase
Ref document number: 2015761122
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2015761122
Country of ref document: EP