KR101768692B1 - Electronic display apparatus, method, and computer readable recoding medium - Google Patents


Info

Publication number
KR101768692B1
Authority
KR
South Korea
Prior art keywords
data
auditory
recognition
auditory data
output
Prior art date
Application number
KR1020150154769A
Other languages
Korean (ko)
Other versions
KR20170052391A (en)
Inventor
김주윤
김지호
박현철
Original Assignee
주식회사 닷 (Dot Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 닷 (Dot Inc.)
Priority to KR1020150154769A
Publication of KR20170052391A
Application granted
Publication of KR101768692B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F9/00Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A61F9/08Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
    • G06F17/289
    • G06F17/30

Abstract

Disclosed is an electronic output apparatus including a plurality of protrusions; an input control unit for controlling auditory data to be sensed; an analysis unit for analyzing the auditory data to determine a type of the auditory data and extracting recognition data corresponding to the auditory data in consideration of the type of the auditory data; a processing unit for generating first output data corresponding to the recognition data; and an output control unit for controlling the plurality of protrusions to be driven according to the first output data.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an electronic output apparatus, a method, and a computer-readable recording medium.

More particularly, the present invention relates to an electronic output apparatus, a method, and a computer-readable recording medium that analyze sensed auditory data, extract or search for recognition data recognized from the auditory data, and control output data corresponding to the recognition data to be output through a plurality of protrusions or a vibration motor.

Thanks to the development of the IT industry, smartphones are rapidly replacing conventional feature phones.

A smartphone provides basic call and text-messaging functions and is equipped with an OS, a camera, a GPS, and a vibration sensor. As a result, a smartphone can provide a variety of functions through application programs.

Since the output device of a smartphone includes a panel, a speaker, and a vibrator, it can output a video signal, various sound signals, and a vibration signal. The input device is usually implemented as a touch panel and can process various types of input.

On the other hand, when a visually impaired user uses a smartphone, the output through the speaker and the vibrator can be perceived, but such output cannot convey the exact text and meaning intended to be transmitted.

Korean Patent Laid-Open Publication No. 2013-0065887

Embodiments of the present invention provide an electronic output apparatus, a method, and a computer-readable recording medium that search for or extract recognition data corresponding to input auditory data and output the output data corresponding to the recognition data through a plurality of protrusions and/or a vibration motor.

An electronic output apparatus according to embodiments of the present invention includes a plurality of protrusions; an input control unit for controlling auditory data to be sensed; an analysis unit for analyzing the auditory data to determine a type of the auditory data and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data; a processing unit for generating first output data corresponding to the recognition data; and an output control unit for controlling the plurality of protrusions to be driven according to the first output data.

The analysis unit may generate text data included in the auditory data as recognition data when the type of the auditory data is a person.

The analyzing unit may translate the text data included in the auditory data into a language input by a user, and generate translated text data as recognition data.

If the type of the auditory data is a musical instrument, the analyzing unit may search for a title of the music included in the auditory data, and generate the title of the music as the recognition data.

The analysis unit may search for weather information corresponding to the auditory data as the recognition data and generate the weather information as the recognition data when the type of the auditory data is inanimate and the auditory data includes a thunder sound.

The processing unit may further generate second output data corresponding to beats included in the auditory data, and the electronic output apparatus may further include a vibration motor and a vibration control unit for controlling the vibration motor to generate vibration corresponding to the second output data.

The type of the auditory data may be set in consideration of the subject that generates the auditory data.

The analysis unit may generate recognition data corresponding to the notification sound as recognition data of the auditory data when the auditory data includes auditory data corresponding to a notification sound set by the user.

An electronic output method according to embodiments of the present invention, performed by an electronic output apparatus including a plurality of protrusions and a vibration motor, includes: sensing, by the electronic output apparatus, auditory data; analyzing the auditory data, determining a type of the auditory data, and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data; generating first output data corresponding to the recognition data; and controlling the plurality of protrusions to be driven according to the first output data.

The generating of the recognition data may generate text data included in the auditory data as the recognition data when the type of the auditory data is a person.

The generating of the recognition data may translate the text data included in the auditory data into a language input by a user and generate the translated text data as the recognition data.

The generating of the recognition data may search for the title of the music included in the auditory data and generate the title of the music as the recognition data when the type of the auditory data is a musical instrument.

The generating of the recognition data may search for weather information corresponding to the auditory data as the recognition data and generate the weather information as the recognition data when the type of the auditory data is inanimate and the auditory data includes a thunder sound.

An electronic output method according to embodiments of the present invention may further include: generating second output data corresponding to beats included in the auditory data; and controlling the vibration motor to generate vibration corresponding to the second output data.

A computer program according to an embodiment of the present invention may be stored in a medium and executed by a computer to perform any one of the electronic output methods according to embodiments of the present invention.

In addition, other methods and systems for implementing the present invention, and a computer-readable recording medium storing a computer program for executing the methods, are further provided.

Other aspects, features, and advantages other than those described above will become apparent from the following drawings, claims, and the detailed description of the invention.

According to embodiments of the present invention, recognition data corresponding to input auditory data can be searched for or extracted, and output data corresponding to the recognition data can be controlled to be output through a plurality of protrusions and a vibration motor.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the structure of an electronic output apparatus according to embodiments of the present invention.
FIG. 2 is a block diagram showing the structure of the control unit 120.
FIG. 3 is a block diagram showing the structure of the output control unit 124 and the output unit 150.
FIG. 4 is a view for explaining various aspects of the electronic output apparatus.
FIGS. 5 and 6 are flowcharts illustrating an electronic output method according to embodiments of the present invention.
FIGS. 7 and 8 are views for explaining examples in which the electronic output apparatus is used.

DETAILED DESCRIPTION

The present invention is capable of various modifications and may have various embodiments; specific embodiments are illustrated in the drawings and described in detail below. The effects and features of the present invention, and methods of achieving them, will become apparent with reference to the embodiments described in detail below in conjunction with the drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Like reference numerals refer to like or corresponding components throughout the drawings, and duplicate descriptions thereof will be omitted.

In the following embodiments, terms such as first and second are used to distinguish one element from another, not in a limiting sense.

In the following examples, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise.

In the following embodiments, terms such as 'include' or 'have' mean that a feature or element described in the specification is present, and do not preclude the possibility that one or more other features or elements may be added.

Where an embodiment can be implemented differently, a particular process sequence may be performed differently from the described sequence. For example, two processes described in succession may be performed substantially concurrently, or in the reverse order of the order described.

In the following embodiments, the term "circuit" refers to any circuitry, circuitry, and / or circuitry, including, for example, hardwired circuitry, programmable circuitry, state machine circuitry, and / or firmware that stores instructions executed by a programmable circuit, either alone or in any combination . The application may be implemented as code or instructions that may be executed on a programmable circuit, such as a host processor or other programmable circuit. A module, as used in any of the embodiments herein, may be implemented as a circuit. The circuitry may be implemented as an integrated circuit, such as an integrated circuit chip.

In the following embodiments, when a component is described as "comprising" another component, it may further include other components rather than excluding them, unless specifically stated otherwise. Terms such as "unit" and "module" in the specification mean a unit for processing at least one function or operation, which may be implemented as hardware, software, or a combination of hardware and software.

FIG. 1 is a diagram showing an electronic output apparatus 100 according to an embodiment of the present invention.

Referring to FIG. 1, an electronic output apparatus 100 according to an embodiment of the present invention may include a communication unit 110, a control unit 120, a memory 130, an input unit 140, and an output unit 150.

The electronic output apparatus 100 according to an embodiment of the present invention detects auditory data generated by nearby persons, animals, objects, and the like, extracts recognition data from the sensed auditory data, and controls output data corresponding to the recognition data to be output through a plurality of protrusions or a vibration motor. The electronic output apparatus 100 can control text included in the sensed auditory data, or information related to music included in the auditory data, to be output through the plurality of protrusions, and can control vibration to be generated according to the beat of the auditory data.

In another embodiment, the electronic output apparatus 100 may communicate with a user's terminal, or may detect a sound generated by the user's terminal, and thereby generate a notification of a text message or voice call received by the user terminal. The notification generated at this time may include the contents of the text message, or the calling number of the voice call and information about the caller.

In another embodiment, the electronic output apparatus 100 can extract recognition data capturing the meaning contained in auditory data generated in the surroundings. For example, when the electronic output apparatus 100 detects auditory data indicating a dangerous situation occurring nearby, a change in the weather, a risk of an accident, or the like, it can generate the dangerous situation, the weather change, the accident risk, and the like as output data.

In another embodiment, the electronic output apparatus 100 can perform a function of storing auditory data, such as a news broadcast, generated in the vicinity. The electronic output apparatus 100 can convert text included in such auditory data, including announcement broadcasts generated nearby, into output data and store the output data.

Here, the electronic output apparatus 100 may be a personal computer or a portable terminal of a user, and any terminal equipped with an application capable of web browsing may be employed without limitation.

The electronic output apparatus 100 may be a computer (e.g., a desktop, a laptop, a tablet, etc.), a media computing platform (e.g., a cable or satellite set-top box, a digital video recorder), a handheld computing device (e.g., a PDA, an email client, etc.), any form of mobile phone, or any other type of computing or communication platform, but the present invention is not limited thereto.

The communication unit 110 may include one or more components that enable communication between the electronic output apparatus 100 and an external server (a search service providing server, a portal service providing server, a news providing server, and the like), or between the electronic output apparatus 100 and the user's mobile phone.

The communication unit 110 may include a Bluetooth communication unit, a Bluetooth Low Energy (BLE) communication unit, a near field communication (NFC) unit, a WLAN (Wi-Fi) communication unit, a Zigbee communication unit, an infrared data association (IrDA) communication unit, a Wi-Fi Direct (WFD) communication unit, an ultra wideband (UWB) communication unit, an Ant+ communication unit, and the like.

Here, the communication unit 110 may be a device including the hardware and software necessary for transmitting and receiving signals, such as control signals or data signals, through a wired or wireless connection with another network device.

The control unit 120 controls the overall operation of the communication unit 110, the memory 130, the input unit 140, and the output unit 150.

Here, the control unit 120 may include all kinds of devices capable of processing data, such as a processor. The term "processor" may refer to a data processing apparatus embedded in hardware, having circuitry physically structured to perform a function represented by code or instructions contained in a program. Examples of such a data processing apparatus include a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), and a field programmable gate array (FPGA), but the scope of the present invention is not limited thereto.

The memory 130 may store a program for the processing and control performed by the control unit 120 and may store input/output data (e.g., a plurality of menus, a plurality of first-level submenus corresponding to each menu, and a plurality of second-level submenus corresponding to each first-level submenu).

The memory 130 may store in advance type-specific sample data of auditory data and characteristic information of the sample data. The memory 130 may retain the characteristic information of sensed auditory data for a predetermined time, for example, one week, and may delete the auditory data after the predetermined time has elapsed since it was sensed.

The memory 130 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, a magnetic disk, and an optical disc. The electronic output apparatus 100 may also operate with web storage or a cloud server on the Internet that performs the storage function of the memory 130.

Programs stored in the memory 130 can be classified into a plurality of modules according to their functions, for example, a UI module, a touch screen module, a notification module, and the like.

The input unit 140 may include a microphone for receiving auditory data. The microphone receives an external acoustic signal and processes it into electrical voice data. For example, the microphone may receive an acoustic signal from an external device or speaker. The microphone can use various noise-cancellation algorithms to remove noise generated while receiving the external acoustic signal.
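
The patent does not name a particular noise-cancellation algorithm. Purely as an illustration of the idea, the following Python sketch implements spectral gating, one common approach: the per-frequency noise floor is estimated from a noise-only clip, and frequency bins that do not rise clearly above that floor are attenuated. The function name, thresholds, and frame parameters are assumptions for this sketch, not part of the disclosure.

```python
import numpy as np

def spectral_gate(signal, noise_profile, reduction_db=12.0, frame=1024, hop=512):
    """Attenuate stationary background noise with a simple spectral gate.

    noise_profile: a clip containing only background noise, used to
    estimate the per-frequency noise floor.
    """
    window = np.hanning(frame)
    # Per-frequency noise floor, averaged over the noise-only clip
    floor = np.mean(
        [np.abs(np.fft.rfft(window * noise_profile[i:i + frame]))
         for i in range(0, len(noise_profile) - frame, hop)], axis=0)

    out = np.zeros(len(signal))
    gain_min = 10 ** (-reduction_db / 20)     # attenuation for gated bins
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(window * signal[i:i + frame])
        # Keep bins well above the noise floor; attenuate the rest
        gain = np.where(np.abs(spec) > 2.0 * floor, 1.0, gain_min)
        out[i:i + frame] += np.fft.irfft(gain * spec) * window
    return out

# Usage: clean a tone buried in synthetic noise
rng = np.random.default_rng(1)
noise_only = 0.05 * rng.normal(size=16000)
noisy = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000) + 0.05 * rng.normal(size=16000)
cleaned = spectral_gate(noisy, noise_only)
```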

The output unit 150 may include a plurality of protrusions 151 and a vibration motor 152. A more detailed description thereof is given in the description of FIG. 3.

FIG. 2 is a block diagram showing the structure of the control unit 120.

Referring to FIG. 2, the control unit 120 may include an input control unit 121, an analysis unit 122, a processing unit 123, and an output control unit 124.

The input control unit 121 performs a function of controlling auditory data to be sensed. The input control unit 121 controls auditory data to be sensed from external objects and can detect music, voices, sounds, and the like generated in the surroundings.

The analysis unit 122 may analyze the auditory data, determine the type of the auditory data, and extract recognition data corresponding to the auditory data in consideration of the type. The analysis unit 122 removes noise included in the sensed auditory data, converts the auditory data into the frequency domain, extracts characteristic information from the converted auditory data, and determines from the characteristic information the type of subject that generated the auditory data, for example, a person, an animal, a musical instrument, an automobile, and the like. The analysis unit 122 may store in advance the characteristic information of sounds generated by a person, an animal, a musical instrument, an automobile, etc., and compare the stored characteristic information with the extracted characteristic information of the auditory data to determine the type of the auditory data. Here, the type of the auditory data is set according to the subject generating the auditory data and may be a person, an animal, a thunder sound, a siren sound, a karaoke sound, a telephone sound, and the like. For example, the user can record a 'ringtone' in the electronic output apparatus 100 in advance, and the apparatus can extract and store the characteristic information of the 'ringtone'. The electronic output apparatus 100 according to an embodiment of the present invention can record various kinds of auditory data in advance, extract the characteristic information of each recorded piece of auditory data, and store the characteristic information of each piece of auditory data matched with its subject information.
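
As a concrete illustration of this store-and-compare scheme, the sketch below computes a simple feature vector (normalized log band energies) for each pre-recorded sample, stores it keyed by the subject that generated it, and classifies a sensed clip by nearest-neighbor distance. The feature set and distance measure are assumptions for this sketch; the patent prescribes neither.

```python
import numpy as np

def band_features(clip, n_bands=16):
    """Characteristic information: normalized log energy per frequency band."""
    spectrum = np.abs(np.fft.rfft(clip * np.hanning(len(clip))))
    energy = np.array([np.sum(band ** 2)
                       for band in np.array_split(spectrum, n_bands)])
    feats = np.log(energy + 1e-10)
    return feats / np.linalg.norm(feats)      # scale-invariant

class TypeClassifier:
    """Store per-type sample features in advance (cf. memory 130) and
    determine the type of sensed auditory data by comparison."""

    def __init__(self):
        self.stored = {}                      # subject name -> feature vector

    def register(self, subject, sample_clip):
        self.stored[subject] = band_features(sample_clip)

    def classify(self, clip):
        feats = band_features(clip)
        return min(self.stored,
                   key=lambda s: np.linalg.norm(self.stored[s] - feats))

# Usage: random stand-ins for real recorded samples of each subject
rng = np.random.default_rng(0)
clf = TypeClassifier()
clf.register("person", rng.normal(size=16000))
clf.register("instrument", rng.normal(size=16000))
print(clf.classify(rng.normal(size=16000)))   # -> "person" or "instrument"
```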

The analyzer 122 can extract the recognition data corresponding to the auditory data in consideration of the type of auditory data. Here, the type of auditory data may be set differently depending on the subject that generates auditory data.

If the type of the auditory data is a person, the analysis unit 122 extracts a text portion included in the auditory data as recognition data corresponding to the auditory data; the text included in the auditory data can be obtained using voice recognition technology. More specifically, when human speech is detected, the analysis unit 122 may generate the text corresponding to the detected speech as recognition data. When a person's singing is detected, the analysis unit 122 may generate text corresponding to the lyrics included in the auditory data as recognition data. In an alternative embodiment, the analysis unit 122 may translate the text data corresponding to the auditory data into a language input by the user and generate the translated text data as recognition data.

In particular, when a human voice is detected, the analysis unit 122 not only extracts the text corresponding to the auditory data but can also analyze the emotion of the person who generated the auditory data, the authenticity of the utterance, and the like. When an angry voice is detected, the analysis unit 122 may generate text such as 'angry person' or 'anger' as recognition data corresponding to the auditory data. When a sad voice is detected, the analysis unit 122 may generate text such as 'sad person' or 'sadness' as recognition data corresponding to the auditory data. When the voice of a perplexed person is detected, the analysis unit 122 may generate text such as 'perplexed person' as recognition data corresponding to the auditory data.

In addition, when 'next stop information' announced in a bus, a subway, or the like is detected, the analysis unit 122 can generate the sensed stop information as recognition data corresponding to the auditory data. For example, because an announcement sounds once and then disappears, when a stop name such as 'Nakseongdae Entrance' is detected, the analysis unit 122 can generate the stop name 'Nakseongdae Entrance' as recognition data so that the information is not lost.

If the type of the auditory data is an effect sound or a notification sound, the analysis unit 122 may extract the meaning of the sound included in the auditory data as recognition data corresponding to the auditory data. For example, when the siren of a fire engine is detected, the analysis unit 122 may generate text such as 'fire engine', 'fire', or 'a fire engine is passing' as recognition data. When a 'thunder sound' associated with a change in the weather is detected, the analysis unit 122 may generate text conveying the meaning of the sound, such as 'rain' or 'rain falling', as recognition data. When a non-verbal human sound is detected, the analysis unit 122 may generate text such as 'danger' or 'scream' as recognition data.

The analysis unit 122 may set recognition data for a notification sound as the recognition data of the auditory data when auditory data matching a notification sound set by the user is sensed. For example, when the user stores a text-message alert sound in association with 'text message received', the analysis unit 122 sets 'text message received' as the recognition data upon detecting auditory data corresponding to that alert sound. When the user stores a call alert sound in association with 'incoming call', the analysis unit 122 may set 'incoming call' as the recognition data when it senses auditory data that matches the call alert sound.
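
Gathering the branches above into one place, a minimal dispatch sketch follows. Every helper here is a hypothetical stub standing in for the speech-recognition, translation, music-title-search, weather-lookup, and alert-matching back ends, which the patent leaves unspecified.

```python
# Hypothetical stubs for back ends the patent does not specify:
def match_alert(clip, alerts):          # user-set notification sounds (cf. claim 8)
    return next((label for label, ref in alerts.items() if clip == ref), None)
def speech_to_text(clip):               # voice recognition back end
    return "hello"
def translate(text, lang):              # translation back end
    return f"[{lang}] {text}"
def search_music_title(clip):           # external music-title search server
    return "Unknown Title"
def search_weather_info(clip):          # e.g. thunder sound -> weather info
    return "rain"

def generate_recognition_data(clip, sound_type, user_alerts, language=None):
    """Branching of the analysis unit 122: the recognition data generated
    depends on the determined type of the auditory data."""
    label = match_alert(clip, user_alerts)
    if label is not None:                   # registered alert sound wins
        return label
    if sound_type == "person":
        text = speech_to_text(clip)         # speech or lyrics as text
        return translate(text, language) if language else text
    if sound_type == "instrument":
        return search_music_title(clip)
    if sound_type == "inanimate":
        return search_weather_info(clip)    # thunder -> weather information
    return None

print(generate_recognition_data(b"...", "person", {}, language="ko"))
```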

In another embodiment, when auditory data that is music is sensed, the analysis unit 122 may extract the beats included in the auditory data and generate vibration data according to those beats.

If the type of the sensed auditory data is a musical instrument or a performance, the analysis unit 122 may extract the characteristic information of the auditory data and search for the title of the music in consideration of the characteristic information. The analysis unit 122 can access an external search service providing server (not shown) to acquire the title of the sensed auditory data.

In an alternative embodiment, the analysis unit 122 may generate recognition data corresponding to the auditory data and retrieve detailed information about the recognition data. The analysis unit 122 generates recognition data corresponding to the auditory data, such as 'rain', 'fire engine', or 'Nakseongdae Entrance', and transmits the recognition data to a portal server or a search server to retrieve detailed information such as related news.

The processing unit 123 may generate first output data corresponding to the recognition data extracted from the auditory data. Here, the first output data represents one or more pieces of text included in the recognition data and may include braille data expressing the text.

The processing unit 123 may generate the first output data so that the recognition data obtained from the auditory data can be conveyed through the plurality of protrusions and/or the vibration motor. The processing unit 123 can generate the first output data so that the text of the recognition data can be displayed through sets of protrusions, each set including six protrusions.

The first output data may refer to data for controlling the height, protrusion state, protrusion time, protrusion period, and the like of the first through sixth protrusions according to each character of the recognition data. The electronic output apparatus 100 according to an embodiment of the present invention may include protrusions in multiples of six, and the first output data generated by the processing unit 123 may control the height, protrusion state, protrusion time, protrusion period, and the like of one or more sets of protrusions.
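
For illustration, here is one way first output data could be structured for six-dot cells. The dot patterns are standard Grade 1 English braille for the four letters used; the height and hold-time fields are assumed control parameters, since the patent names the controllable quantities without fixing a data format.

```python
# Dot numbering of a six-dot cell (two columns, three rows):
#   1 4
#   2 5
#   3 6
BRAILLE = {                     # Grade 1 English braille, a few letters
    "a": (1,),
    "i": (2, 4),
    "n": (1, 3, 4, 5),
    "r": (1, 2, 3, 5),
}

def first_output_data(text, height=1.0, hold_ms=1500):
    """One frame per character: which of pins 1..6 are raised, plus
    illustrative drive parameters (raised height, hold time)."""
    frames = []
    for ch in text:
        dots = BRAILLE.get(ch.lower(), ())
        frames.append({
            "raised": [d in dots for d in range(1, 7)],
            "height": height,
            "hold_ms": hold_ms,
        })
    return frames

# Usage: drive pattern for the recognition data 'rain'
for frame in first_output_data("rain"):
    print("".join("*" if up else "." for up in frame["raised"]))
```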

The processing unit 123 may generate second output data for controlling the vibration time, vibration intensity, vibration frequency, and the like of the vibration motor according to the beat of the recognition data. The second output data causes vibration to be generated according to the beat of the auditory data and can control the vibration time, vibration intensity, vibration frequency, and the like of the vibration motor according to the frequency, intensity, and other properties of the auditory data.
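
A sketch of one way such second output data could be derived: treat local peaks in short-time energy as beats and emit one vibration pulse per beat, with pulse intensity tracking the beat's energy. The peak-picking beat detector and the pulse fields are assumptions for this sketch.

```python
import numpy as np

def second_output_data(signal, rate, frame=1024):
    """Vibration pulses derived from the beat of the auditory data."""
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    threshold = energy.mean() + energy.std()
    pulses = []
    for i in range(1, n - 1):
        # A local energy peak above the threshold counts as a beat
        if energy[i] > threshold and energy[i - 1] <= energy[i] > energy[i + 1]:
            pulses.append({
                "time_s": i * frame / rate,       # when to vibrate
                "intensity": float(energy[i] / energy.max()),
                "duration_ms": 80,                # pulse length
            })
    return pulses

# Usage: a tone gated on and off twice per second, i.e. a 2 Hz "beat"
rate = 16000
t = np.arange(2 * rate) / rate
beats = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 2 * t) > 0.95)
print(second_output_data(beats, rate))
```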

For example, when auditory data including beats and lyrics is detected, the processing unit 123 can generate first output data for controlling the text to be represented according to the lyrics of the auditory data, and second output data for controlling vibration according to the beats.

Accordingly, the electronic output apparatus 100 according to embodiments of the present invention can convey information, danger, situations, environment, and the like related to the sensed auditory data to the user.

When a person's scream is detected, the electronic output apparatus 100 according to embodiments of the present invention can convey, through the protrusions, the fact that a scream has occurred nearby. When speech is detected, the electronic output apparatus 100 can express the text included in the speech through the protrusions. When music is detected, the electronic output apparatus 100 can cause vibration according to the beats included in the music.

The output control unit 124 controls the plurality of protrusions and/or the vibration motor to be driven according to the output data.

The electronic output apparatus 100 according to embodiments of the present invention may further include an important data processing unit 125 for monitoring auditory data corresponding to important data set by the user.

The important data processing unit 125 performs a function of monitoring auditory data that the user must not miss. The important data processing unit 125 may monitor whether auditory data corresponding to important data stored in advance by the user is sensed. The important data processing unit 125 allows the user to store important data such as text, sounds, and voices. For example, the user may store the name of a close acquaintance, or that person's voice itself, and the important data processing unit 125 can monitor whether auditory data corresponding to the stored name or voice is detected. The user may also store a destination stop, and the important data processing unit 125 can monitor whether auditory data corresponding to the destination stop is detected. Accordingly, with the electronic output apparatus 100 according to embodiments of the present invention, the user can set a destination as important data before traveling by public transportation and be helped not to miss the destination.
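
A minimal sketch of this watchlist mechanism, assuming matching is done on the text of the recognition data; matching on a stored voice would follow the same pattern with a feature-distance test instead of substring search. All names here are illustrative.

```python
class ImportantDataMonitor:
    """Watch recognition data for entries the user registered in advance
    (a destination stop, a person's name, ...) and return the output
    data stored for each match."""

    def __init__(self):
        self.watchlist = {}                  # important text -> output data

    def register(self, text, output_data):
        self.watchlist[text.lower()] = output_data

    def check(self, recognition_data):
        """Call on every piece of recognition data; a returned entry is
        sent to the protrusions and/or the vibration motor."""
        return [out for key, out in self.watchlist.items()
                if key in recognition_data.lower()]

# Usage: alert the user when the destination stop is announced
monitor = ImportantDataMonitor()
monitor.register("Nakseongdae", {"vibrate": True, "braille_text": "Nakseongdae"})
print(monitor.check("The next stop is Nakseongdae Entrance"))
```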

FIG. 3 is a diagram for explaining the structure and operation of the output control unit 124 and the output unit 150.

Referring to FIG. 3, the output control unit 124 includes a protrusion control unit 1241 and a vibration control unit 1242, and the output unit 150 may include a first protrusion 151a, a second protrusion 151b, a third protrusion 151c, a fourth protrusion 151d, a fifth protrusion 151e, a sixth protrusion 151f, and a vibration motor 152.

The protrusion control unit 1241 controls the protrusions 151 to be driven according to the first output data generated by the processing unit 123.

Here, the plurality of protrusions 151 may be arranged, for example, in two columns and three rows. In FIG. 3, six protrusions are shown for convenience, but the apparatus may include protrusions in any multiple of six (12, 18, 24, etc.). The vibration control unit 1242 controls the vibration motor 152 to generate vibration according to the second output data generated by the processing unit 123. The plurality of protrusions 151 are electrically connected to the protrusion control unit 1241, and the protrusion control unit 1241 may receive the first output data from the processing unit 123 and linearly move one or more selected protrusions 151. A selected protrusion can be moved linearly to a position higher than the unselected protrusions.

The vibration motor 152 can output a vibration signal. For example, the vibration motor 152 may output a vibration signal corresponding to the second output data.

FIG. 4 is a schematic view for explaining the appearance of an electronic output apparatus according to embodiments of the present invention.

Referring to FIG. 4A, the independent electronic output apparatus 100 including the protrusions 151 and the vibration motor 152 is shown in the form of a smart watch with a wristband. The processing unit 123 and the output control unit 124 are provided in the electronic output apparatus 100 implemented as a smart watch. The electronic output apparatus 100 may be a braille display device that displays the first output data through linear movement of the protrusions 151. With the electronic output apparatus 100 shown in FIG. 4A, the user can recognize the output data corresponding to the auditory data by directly touching the protrusions 151.

Referring to FIG. 4B, there is shown a user terminal implemented as a smart watch interlocked with a smartphone and combined with an electronic output apparatus 100 including the protrusions 151 and the vibration motor 152. In FIG. 4B, the upper part of the smart watch is implemented as an organic light-emitting display unit, an inorganic light-emitting display unit, or a liquid crystal display unit, and the lower part of the smart watch is implemented with the protrusions 151 of the electronic output apparatus 100. In the case of the user terminal shown in FIG. 4B, the part of the user's body wearing the smart watch senses the linear motion of the protrusions 151, so that the user can recognize the output data corresponding to the auditory data.

Referring to FIG. 4C, there is shown a user terminal implemented as a smart watch interlocked with a smartphone and combined with an electronic output apparatus 100 including the protrusions 151 and the vibration motor 152. FIG. 4C shows that the display unit and the protrusions 151 of the electronic output apparatus 100 can both be implemented on the upper part of the smart watch. In the case of the user terminal T1 shown in FIG. 4C, the user can directly touch the protrusions 151 by hand to recognize the output data corresponding to the auditory data.

Referring to FIG. 4D, a user terminal implemented as a smartphone and an electronic output apparatus 100 including the protrusions 151 and the vibration motor 152 are shown.

FIGS. 5 and 6 are flowcharts illustrating an electronic output method according to embodiments of the present invention.

Referring to FIG. 5, an electronic output method according to embodiments of the present invention includes sensing auditory data (S110), extracting recognition data (S120), generating output data (S130), and driving the plurality of protrusions or the vibration motor (S140).

In S110, the electronic output apparatus 100 performs a function of controlling auditory data to be sensed. The electronic output apparatus 100 controls auditory data to be sensed from external objects and can detect music, voices, sounds, and the like generated in the surroundings.

In S120, the electronic output apparatus 100 may analyze the auditory data, determine the type of the auditory data, and extract recognition data corresponding to the auditory data in consideration of the type. The electronic output apparatus 100 removes noise included in the sensed auditory data, converts the auditory data into the frequency domain, extracts characteristic information from the converted auditory data, and determines from the characteristic information the type of subject that generated the auditory data, for example, a person, an animal, a musical instrument, or an automobile. The electronic output apparatus 100 may store characteristic information of sounds generated by a person, an animal, a musical instrument, an automobile, etc., and compare the stored characteristic information with the extracted characteristic information of the auditory data to determine the type of the auditory data. Here, the type of the auditory data is set according to the subject generating the auditory data and may be a person, an animal, a thunder sound, a siren sound, a karaoke sound, a telephone sound, and the like. The electronic output apparatus 100 can extract the recognition data corresponding to the auditory data in consideration of the type. If the type of the auditory data is a person, the electronic output apparatus 100 extracts a text portion included in the auditory data as recognition data corresponding to the auditory data and can obtain the embedded text using voice recognition technology. More specifically, when human speech is sensed, the electronic output apparatus 100 can generate the text corresponding to the sensed speech as recognition data. When a person's singing is detected, the electronic output apparatus 100 can generate text corresponding to the lyrics included in the auditory data as recognition data. In particular, when a human voice is sensed, the electronic output apparatus 100 not only extracts the text corresponding to the auditory data but also analyzes the emotion and authenticity of the person who generated the auditory data. When an angry voice is detected, the electronic output apparatus 100 can generate text such as 'angry person' or 'anger' as recognition data corresponding to the auditory data. When a sad voice is sensed, the electronic output apparatus 100 may generate text such as 'sad person' or 'sadness' as recognition data corresponding to the auditory data. When the voice of a perplexed person is sensed, the electronic output apparatus 100 can generate text such as 'perplexed person' as recognition data corresponding to the auditory data.

When 'next stop information' announced in a bus, a subway, or the like is sensed, the electronic output apparatus 100 can generate the sensed stop information as recognition data corresponding to the auditory data. For example, because an announcement sounds once and then disappears, when a stop name such as 'Nakseongdae Entrance' is sensed, the electronic output apparatus 100 can generate the stop name as recognition data so that the information is not lost.

If the type of the auditory data is an effect sound or a notification sound, the electronic output apparatus 100 can extract the meaning of the sound included in the auditory data as recognition data corresponding to the auditory data. For example, when the siren of a fire engine is detected, the electronic output apparatus 100 can generate text such as 'fire engine', 'fire', or 'a fire engine is passing' as recognition data. When a thunder sound is detected, the electronic output apparatus 100 can generate text conveying the meaning of the sound, such as 'rain' or 'rain falling', as recognition data. When a non-verbal human sound is detected, the electronic output apparatus 100 can generate text such as 'danger' or 'scream' as recognition data. In another embodiment, when the type of the sensed auditory data is music, the electronic output apparatus 100 may extract the beats included in the auditory data and generate vibration data according to those beats. When the type of the sensed auditory data is music, the electronic output apparatus 100 can also extract the characteristic information of the auditory data and search for the title of the music in consideration of the characteristic information. The electronic output apparatus 100 can connect to an external search service providing server (not shown) to obtain the title of the sensed auditory data. In an alternative embodiment, the electronic output apparatus 100 may generate recognition data corresponding to the auditory data and retrieve detailed information about the recognition data. The electronic output apparatus 100 generates recognition data corresponding to the auditory data, such as 'rain', 'fire engine', or 'Nakseongdae Entrance', and transmits the recognition data to a portal server or a search server so that detailed information such as related news can be retrieved.

In S130, the electronic output apparatus 100 can generate first output data corresponding to the recognition data extracted from the auditory data. Here, the first output data represents one or more pieces of text included in the recognition data and may include braille data expressing the text.

The electronic output apparatus 100 can generate the first output data so that the recognition data obtained from the auditory data can be conveyed through the plurality of protrusions and/or the vibration motor. The electronic output apparatus 100 can generate the first output data so that the text of the recognition data can be displayed through sets of protrusions, each set including six protrusions. The electronic output apparatus 100 may also generate second output data for controlling the vibration time, vibration intensity, vibration frequency, and the like of the vibration motor according to the beat of the recognition data. The second output data causes vibration to be generated according to the beat of the auditory data and can control the vibration time, vibration intensity, vibration frequency, and the like of the vibration motor according to the frequency, intensity, and other properties of the auditory data. For example, when auditory data including beats and lyrics is detected, the electronic output apparatus 100 can generate first output data for controlling the text to be displayed according to the lyrics of the auditory data and second output data for controlling vibration according to the beats.

In S140, the electronic output apparatus 100 drives the plurality of protrusions according to the first output data or drives the vibration motor according to the second output data.

Referring to FIG. 6, an electronic output method according to embodiments of the present invention includes an input step (S210), an auditory data sensing step (S220), a recognition data extraction step (S230), a determination step (S240), and a driving step (S250).

In S210, the electronic output apparatus 100 allows the user to input important data and output data for the important data. Here, the important data refers to text or auditory data, and may be a text or a voice that the user wants to monitor. For example, the electronic output apparatus 100 allows the user to input important data such as a 'destination station' or a 'person's name', together with the output data to be produced when each piece of important data is detected. Here, the output data for the important data refers to data output through the vibration motor or through the plurality of protrusions; it may be data matching the important data itself, but is not limited thereto and can be freely input by the user.

In S220, the electronic output apparatus 100 senses auditory data through the input unit 140.

In S230, the electronic output apparatus 100 extracts recognition data corresponding to the auditory data. Since the operation of S230 is the same as that of S120 described above, a detailed description is omitted.

In S240, the electronic output apparatus 100 determines whether the recognition data or the auditory data matches the important data. When the important data is text, the electronic output apparatus 100 determines whether the important data and the recognition data coincide. When the important data is auditory data, the electronic output apparatus 100 determines whether the characteristic information of the sensed auditory data and the characteristic information of the important data coincide.

In S250, if the data match, the electronic output apparatus 100 drives the plurality of protrusions or the vibration motor according to the output data stored for the important data.

In this way, the electronic output apparatus 100 according to embodiments of the present invention can notify the user of auditory data containing important information, so that the user does not miss it.

FIGS. 7 and 8 are diagrams for explaining examples of use of the electronic output apparatus 100.

As shown in FIG. 7, the electronic output apparatus 100 can detect auditory data corresponding to, for example, bird sounds, rain sounds, and automobile sounds occurring in the surroundings.

When a bird sound is detected, the electronic output apparatus 100 generates output data according to the rhythm of the bird sound and controls vibration to be generated according to the output data.

The electronic output apparatus 100 extracts 'rain' as recognition data from a sensed rain sound, generates output data corresponding to the recognition data, and drives the plurality of protrusions according to the output data 'rain'. The electronic output apparatus 100 can also search for news related to 'rain', generate the contents of the news as output data, and control the plurality of protrusions to be driven according to the output data.

The electronic output apparatus 100 generates output data corresponding to motions of an automobile, such as moving and stopping, according to a detected automobile sound, and controls the plurality of protrusions to be driven according to the output data.

As shown in FIG. 8, when a user on the subway wears the electronic output apparatus 100, the apparatus senses auditory data such as announcement broadcasts and arrival station information, generates output data, and controls the plurality of protrusions to be driven according to the output data.

The embodiments of the present invention described above can be implemented in the form of a computer program executable through various components on a computer, and the computer program can be recorded on a computer-readable medium. The medium may be a magnetic medium such as a hard disk, a floppy disk, or magnetic tape; an optical recording medium such as a CD-ROM or DVD; a magneto-optical medium such as a floptical disk; or a hardware device specifically configured to store and execute program instructions, such as ROM, RAM, or flash memory.

Meanwhile, the computer program may be specifically designed and configured for the present invention, or may be known to and usable by those skilled in the computer software field. Examples of computer programs include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.

The specific operations described in the present invention are examples and do not limit the scope of the invention in any way. For brevity, descriptions of conventional electronic configurations, control systems, software, and other functional aspects of such systems may be omitted. The connections or connecting members of the lines between the components shown in the figures illustrate functional and/or physical or circuit connections, which in an actual device may be replaced or supplemented by a variety of functional, physical, or circuit connections. Unless a component is specifically described with terms such as 'essential' or 'important', it may not be a necessary component for the application of the present invention.

The use of the term 'the' and similar referential terms in the specification of the present invention (particularly in the claims) may cover both the singular and the plural. When a range is described in the present invention, it includes inventions to which the individual values within the range are applied (unless stated otherwise), which is the same as describing each individual value constituting the range in the detailed description. Unless the order of the steps constituting a method according to the present invention is explicitly stated, or stated otherwise, the steps may be performed in any suitable order; the present invention is not necessarily limited to the described order of the steps. The use of all examples or exemplary terms (e.g., 'etc.') in the present invention is merely for describing the present invention in detail, and the scope of the present invention is not limited by these examples or exemplary terms unless limited by the claims. Those skilled in the art will appreciate that various modifications, combinations, and alterations may be made according to design conditions and factors within the scope of the appended claims or their equivalents.

Claims (17)

1. An electronic output apparatus comprising:
a plurality of protrusions;
an input control unit for controlling auditory data to be sensed;
an analysis unit for analyzing the auditory data to determine a type of the auditory data and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data;
a processing unit for generating first output data corresponding to the recognition data and, when the sensed auditory data includes a music sound, further generating second output data for controlling at least one of a vibration time, a vibration intensity, and a vibration frequency of a vibration motor according to a beat of the recognition data;
an output control unit for controlling the plurality of protrusions to be driven according to the first output data; and
a vibration control unit for controlling the vibration motor to generate vibration corresponding to the second output data.

2. (Deleted)

3. The apparatus according to claim 1, wherein the analysis unit generates text data included in the auditory data as the recognition data when the type of the auditory data is a person.

4. The apparatus according to claim 3, wherein the analysis unit translates the text data included in the auditory data into a language input by a user and generates the translated text data as the recognition data.

5. The apparatus according to claim 1, wherein the analysis unit searches for the title of the music included in the auditory data and generates the title of the music as the recognition data when the type of the auditory data is a musical instrument.

6. The apparatus according to claim 1, wherein the analysis unit searches for weather information corresponding to the auditory data as the recognition data and generates the weather information as the recognition data when the type of the auditory data is inanimate and the auditory data includes a thunder sound.

7. (Deleted)

8. The apparatus according to claim 1, wherein the analysis unit generates recognition data corresponding to a notification sound as the recognition data of the auditory data when the auditory data includes auditory data corresponding to a notification sound set by the user.

9. An electronic output method of an electronic output apparatus including a plurality of protrusions and a vibration motor, the method comprising:
sensing, by the electronic output apparatus, auditory data;
analyzing the auditory data, determining a type of the auditory data, and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data;
generating first output data corresponding to the recognition data and, when the sensed auditory data includes a music sound, further generating second output data for controlling at least one of a vibration time, a vibration intensity, and a vibration frequency of the vibration motor according to a beat of the recognition data;
controlling the plurality of protrusions to be driven according to the first output data; and
controlling the vibration motor to generate vibration corresponding to the second output data.

10. (Deleted)

11. The method of claim 9, wherein the generating of the recognition data generates text data included in the auditory data as the recognition data when the type of the auditory data is a person.

12. The method of claim 11, wherein the generating of the recognition data translates the text data included in the auditory data into a language input by a user and generates the translated text data as the recognition data.

13. The method of claim 9, wherein the generating of the recognition data searches for the title of the music included in the auditory data and generates the title of the music as the recognition data when the type of the auditory data is a musical instrument.

14. The method of claim 9, wherein the generating of the recognition data searches for weather information corresponding to the auditory data as the recognition data and generates the weather information as the recognition data when the type of the auditory data is inanimate and the auditory data includes a thunder sound.

15. The method of claim 9, wherein the type of the auditory data is set in consideration of a subject that generates the auditory data.

16. The method of claim 9, wherein the generating of the recognition data sets recognition data corresponding to a notification sound as the recognition data of the auditory data when the auditory data includes auditory data corresponding to a notification sound set by the user.

17. A computer-readable recording medium on which a program for executing the method according to claim 9 is recorded.
KR1020150154769A 2015-11-04 2015-11-04 Electronic display apparatus, method, and computer readable recoding medium KR101768692B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150154769A KR101768692B1 (en) 2015-11-04 2015-11-04 Electronic display apparatus, method, and computer readable recoding medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150154769A KR101768692B1 (en) 2015-11-04 2015-11-04 Electronic display apparatus, method, and computer readable recoding medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020170099143A Division KR20170094527A (en) 2017-08-04 2017-08-04 Electronic display apparatus, method, and computer readable recoding medium

Publications (2)

Publication Number Publication Date
KR20170052391A KR20170052391A (en) 2017-05-12
KR101768692B1 true KR101768692B1 (en) 2017-08-17

Family

ID=58740317

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150154769A KR101768692B1 (en) 2015-11-04 2015-11-04 Electronic display apparatus, method, and computer readable recoding medium

Country Status (1)

Country Link
KR (1) KR101768692B1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100916817B1 (en) * 2008-03-26 2009-09-14 엔에이치엔(주) Method and system for providing web contents matching with data of mobile terminal

Also Published As

Publication number Publication date
KR20170052391A (en) 2017-05-12

Similar Documents

Publication Publication Date Title
US20210216276A1 (en) Speech recognition method and apparatus with activation word based on operating environment of the apparatus
KR102103057B1 (en) Voice trigger for a digital assistant
US10455342B2 (en) Sound event detecting apparatus and operation method thereof
JP7160967B2 (en) Keyphrase detection with audio watermark
KR102228455B1 (en) Device and sever for providing a subject of conversation and method for providing the same
US20120035924A1 (en) Disambiguating input based on context
US20210210086A1 (en) Method for processing voice signals of multiple speakers, and electronic device according thereto
CN104604274A (en) Method and apparatus for connecting service between user devices using voice
US20150193199A1 (en) Tracking music in audio stream
US11830501B2 (en) Electronic device and operation method for performing speech recognition
US20180052658A1 (en) Information processing device and information processing method
CN112259076B (en) Voice interaction method, voice interaction device, electronic equipment and computer readable storage medium
KR101774236B1 (en) Apparatus and method for context-awareness of user
KR20200005476A (en) Retroactive sound identification system
US11455178B2 (en) Method for providing routine to determine a state of an electronic device and electronic device supporting same
JP2013254395A (en) Processing apparatus, processing system, output method and program
US10162898B2 (en) Method and apparatus for searching
KR101768692B1 (en) Electronic display apparatus, method, and computer readable recoding medium
KR20170081418A (en) Image display apparatus and method for displaying image
KR20170094527A (en) Electronic display apparatus, method, and computer readable recoding medium
KR101899021B1 (en) Method for providing filtered outside sound and voice transmitting service through earbud
KR101862337B1 (en) Apparatus, method and computer readable recoding medium for offering information
KR20200040562A (en) System for processing user utterance and operating method thereof
KR102371563B1 (en) Device and sever for providing a subject of conversation and method for providing the same
KR102338445B1 (en) Apparatus, method and computer readable recoding medium for offering information

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right
A107 Divisional application of patent
GRNT Written decision to grant