KR101768692B1 - Electronic display apparatus, method, and computer readable recoding medium
Electronic display apparatus, method, and computer readable recoding medium
- Publication number
- KR101768692B1 (application KR1020150154769A)
- Authority
- KR
- South Korea
- Prior art keywords
- data
- auditory
- recognition
- auditory data
- output
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/08—Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
-
- G06F17/289—
-
- G06F17/30—
Abstract
This embodiment includes a plurality of protrusions; an input control unit that controls sensing of auditory data; an analysis unit that analyzes the auditory data to determine a type of the auditory data and extracts recognition data corresponding to the auditory data in consideration of that type; a processing unit that generates first output data corresponding to the recognition data; and an output controller that controls the plurality of protrusions to be driven according to the first output data.
Description
The present invention relates to an electronic output apparatus, a method, and a computer-readable recording medium. More particularly, the present invention relates to an electronic output apparatus, a method, and a computer-readable recording medium that analyze sensed auditory data, extract or search for recognition data recognized from the auditory data, and control output data corresponding to the recognition data to be output by a plurality of protrusions or a vibration motor.
Thanks to the development of the IT industry, smartphone-based mobile phones are rapidly replacing conventional feature phones.
A smartphone provides basic call and text functions and also includes an operating system, a camera, a GPS receiver, and a vibration sensor, so it can offer a wide variety of functions depending on the application programs installed on it.
Since the output device includes a panel, a speaker, and a vibrator, it can output a video signal, various sound signals, and a vibration signal. The input device is usually implemented as a touch panel, so various types of input can be processed.
On the other hand, when a visually impaired user uses a smartphone, the output through the speaker and the vibrator can be perceived, but such output cannot convey the exact text and meaning to be transmitted.
Embodiments of the present invention provide an electronic output device that searches for or extracts recognition data corresponding to inputted auditory data and outputs output data corresponding to the recognition data through a plurality of protrusions and/or a vibration motor, as well as a corresponding method and a computer-readable recording medium.
An electronic output device according to embodiments of the present invention includes a plurality of protrusions; an input control unit that controls sensing of auditory data; an analysis unit that analyzes the auditory data to determine a type of the auditory data and generates recognition data corresponding to the auditory data in consideration of that type; a processing unit that generates first output data corresponding to the recognition data; and an output controller that controls the plurality of protrusions to be driven according to the first output data.
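As a purely illustrative aid, and not part of the patent itself, the following minimal Python sketch mirrors the unit structure just listed. All class, method, and parameter names are assumptions, and every unit is stubbed.

```python
# Minimal structural sketch of the apparatus described above (input control unit,
# analysis unit, processing unit, output controller, plural protrusions).
# All names are illustrative assumptions; every unit is a stub.
from dataclasses import dataclass
from typing import List


@dataclass
class RecognitionResult:
    audio_type: str        # e.g. "person", "instrument", "inanimate", "notification"
    recognition_data: str  # recognized text, music title, weather information, ...


class ElectronicOutputDevice:
    def __init__(self, num_protrusions: int = 6):
        self.num_protrusions = num_protrusions

    def sense_auditory_data(self) -> bytes:
        """Input control unit: capture raw audio samples (stubbed)."""
        return b""

    def analyze(self, audio: bytes) -> RecognitionResult:
        """Analysis unit: determine the type of the auditory data and build recognition data."""
        return RecognitionResult("person", "")  # stub

    def make_first_output(self, result: RecognitionResult) -> List[List[int]]:
        """Processing unit: turn recognition data into per-character pin patterns."""
        return [[0] * self.num_protrusions for _ in result.recognition_data]  # stub

    def drive_protrusions(self, frames: List[List[int]]) -> None:
        """Output controller: drive the protrusions frame by frame."""
        for frame in frames:
            print("pins:", frame)
```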
The analysis unit may generate text data included in the auditory data as recognition data when the type of the auditory data is a person.
The analyzing unit may translate the text data included in the auditory data into a language input by a user, and generate translated text data as recognition data.
If the type of the auditory data is a musical instrument, the analyzing unit may search for a title of the music included in the auditory data, and generate the title of the music as the recognition data.
When the type of the auditory data is inanimate and the auditory data includes a thunder sound, the analysis unit may search for weather information corresponding to the auditory data and generate the weather information as the recognition data.
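The three behaviours above (speech to text with optional translation, music-title lookup, and weather lookup for thunder) amount to a dispatch on the type of the auditory data. The sketch below is a hedged illustration of that dispatch only; every helper is a hypothetical stub, not an API named in the patent.

```python
# Hedged sketch of the analysis unit's type-based dispatch. A real device would
# call actual speech-to-text, translation, music-identification, and weather
# services at the points stubbed out here.
from typing import Optional


def classify_audio(audio: bytes) -> str:
    """Classify the subject that produced the sound: 'person', 'instrument', 'inanimate', ..."""
    return "person"  # placeholder classifier


def speech_to_text(audio: bytes) -> str:
    return "next stop: city hall"  # placeholder speech recognition result


def translate(text: str, language: str) -> str:
    return text  # placeholder translation into the user-selected language


def search_music_title(audio: bytes) -> str:
    return "unknown title"  # placeholder music lookup


def contains_thunder(audio: bytes) -> bool:
    return False  # placeholder thunder detector


def lookup_weather() -> str:
    return "thunderstorm expected"  # placeholder weather search


def generate_recognition_data(audio: bytes, target_language: Optional[str] = None) -> str:
    """Produce recognition data in consideration of the type of the auditory data."""
    audio_type = classify_audio(audio)
    if audio_type == "person":
        text = speech_to_text(audio)
        return translate(text, target_language) if target_language else text
    if audio_type == "instrument":
        return search_music_title(audio)
    if audio_type == "inanimate" and contains_thunder(audio):
        return lookup_weather()
    return ""
```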
The processing unit may generate second output data corresponding to beats included in the auditory data, and the electronic output device may further include a vibration motor and a vibration control unit for controlling the vibration motor to generate a vibration corresponding to the second output data.
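As a rough illustration of the second output data, the following sketch converts a list of beat times, assumed to come from some beat tracker that the text does not specify, into wait/pulse pairs for a vibration motor.

```python
# Illustrative only: converting beat times into a vibration pattern (second output
# data). Beat detection itself is assumed to happen elsewhere; the times are given.
from typing import List, Tuple


def beats_to_vibration(beat_times_s: List[float], pulse_ms: int = 80) -> List[Tuple[int, int]]:
    """Return (wait_ms, vibrate_ms) pairs: pause until the next beat, then pulse the motor."""
    pattern = []
    previous = 0.0
    for t in beat_times_s:
        wait_ms = max(int(round((t - previous) * 1000)) - pulse_ms, 0)
        pattern.append((wait_ms, pulse_ms))
        previous = t
    return pattern


# Beats every 0.5 s (120 BPM) -> [(420, 80), (420, 80), (420, 80), (420, 80)]
print(beats_to_vibration([0.5, 1.0, 1.5, 2.0]))
```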
The type of the auditory data may be set in consideration of the subject that generates the auditory data.
The analysis unit may generate recognition data corresponding to the notification sound as recognition data of the auditory data when the auditory data includes auditory data corresponding to a notification sound set by the user.
An electronic output method according to embodiments of the present invention, performed by an electronic output apparatus including a plurality of protrusions and a vibration motor, includes the steps of: detecting auditory data by the electronic output apparatus; analyzing the auditory data, determining a type of the auditory data, and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data; generating first output data corresponding to the recognition data; and controlling the plurality of protrusions to be driven according to the first output data.
The generating of the recognition data may generate text data included in the auditory data as recognition data when the type of the auditory data is a person.
In the generating of the recognition data, the text data included in the auditory data may be translated into a language input by a user, and the translated text data may be generated as recognition data.
The generating of the recognition data may search for the title of the music included in the auditory data and generate the title of the music as the recognition data when the type of the auditory data is a musical instrument.
In the generating of the recognition data, when the type of the auditory data is inanimate and the auditory data includes a thunder sound, weather information corresponding to the auditory data may be searched for and generated as the recognition data.
An electronic output method according to embodiments of the present invention further includes: generating second output data corresponding to beats included in the auditory data; and controlling the vibration motor so that a vibration corresponding to the second output data occurs.
A computer program according to an embodiment of the present invention may be stored in a medium in order to execute, using a computer, any one of the electronic output methods according to the embodiments of the present invention.
In addition to this, another method for implementing the present invention, another system, and a computer-readable recording medium for recording a computer program for executing the method are further provided.
Other aspects, features, and advantages other than those described above will become apparent from the following drawings, claims, and the detailed description of the invention.
According to the present invention, recognition data corresponding to inputted auditory data can be searched for or extracted, and output data corresponding to the recognition data can be controlled to be output through a plurality of protrusions and a vibration motor.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the structure of an electronic output apparatus according to embodiments of the present invention.
FIG. 2 is a block diagram showing the structure of the
FIG. 3 is a block diagram showing the structure of the
FIG. 4 is a view for explaining various aspects of the electronic output device.
FIGS. 5 and 6 are flowcharts illustrating an electronic output method according to embodiments of the present invention.
FIGS. 7 and 8 are views for explaining examples in which the electronic output device is used.
The present invention is capable of various modifications and can have various embodiments, and specific embodiments are illustrated in the drawings and described in detail in the detailed description. The effects and features of the present invention and methods of achieving them will become apparent with reference to the embodiments described in detail below together with the drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Like reference numerals refer to like or corresponding components throughout the drawings, and duplicate descriptions thereof will be omitted.
In the following embodiments, the terms first, second, and the like are used for the purpose of distinguishing one element from another, not in a limitative sense.
In the following examples, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise.
In the following embodiments, terms such as "include" or "have" mean that a feature or element described in the specification is present, and do not preclude the possibility that one or more other features or elements may be added.
If certain embodiments can be implemented differently, a particular process sequence may be performed differently from the described sequence. For example, two processes described in succession may be performed substantially concurrently, or in the reverse of the described order.
In the following embodiments, the term "circuit" refers, for example, to hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry, either alone or in any combination. An application may be implemented as code or instructions executable on a programmable circuit such as a host processor or another programmable circuit. A module, as used in any of the embodiments herein, may be implemented as circuitry. The circuitry may be implemented as an integrated circuit, such as an integrated circuit chip.
In the following embodiments, when a component is referred to as "comprising" an element, it means that the component may further include other elements, rather than excluding other elements, unless specifically stated otherwise. In addition, terms such as "unit" and "module" used in the specification mean a unit that processes at least one function or operation, and may be implemented by hardware, software, or a combination of hardware and software.
FIG. 1 is a diagram showing an
Referring to FIG. 1, an
The
In another embodiment, the
In another embodiment, the
In another embodiment, the
Here, the
The
The
The
Here, the
The
Here, the
The
The
The
Programs stored in the
The
The
FIG. 2 is a block diagram showing the structure of the
Referring to FIG. 2, the
The
The
The
If the type of the auditory data is a person, the
In particular, when a human voice is detected, the
In addition, when 'next stop information' in a bus, a subway, or the like is detected, the analyzing
If the type of the auditory data is an effect sound or a notification sound, the
The
In another embodiment, the
The
In an alternative embodiment, the
The
The
The first output data may refer to data for controlling the height, protrusion state, protrusion time, protrusion period, and the like of the first through sixth protrusions according to each character of the text in the recognition data. The
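The text does not spell out how characters map onto the six protrusions, but six pins naturally suggest a six-dot braille cell; the sketch below assumes that interpretation and uses a deliberately tiny, hypothetical dot table.

```python
# Assumption: the six protrusions form a standard 6-dot braille cell, numbered
#   1 4
#   2 5
#   3 6
# Only a few letters are included; a real table would cover the full alphabet
# (and, for Korean text, a Hangul braille mapping instead).
BRAILLE_DOTS = {
    "a": (1,),
    "b": (1, 2),
    "c": (1, 4),
    "d": (1, 4, 5),
    "e": (1, 5),
}


def text_to_pin_frames(text: str, hold_ms: int = 500):
    """First output data: one frame per character; each frame holds 6 pin states and a hold time."""
    frames = []
    for ch in text.lower():
        dots = BRAILLE_DOTS.get(ch, ())
        pins = [1 if dot + 1 in dots else 0 for dot in range(6)]
        frames.append({"pins": pins, "hold_ms": hold_ms})
    return frames


# 'bad' -> frames for b (dots 1,2), a (dot 1), d (dots 1,4,5), each held for 500 ms
print(text_to_pin_frames("bad"))
```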
The
For example, when detecting auditory data including bits and lyrics, the
Accordingly, the
When a screaming sound of a person is detected, the
The
The
The important
FIG. 3 is a diagram for explaining the structure and operation of the
Referring to FIG. 3, the
The
Here, the plurality of
The
FIG. 4 is a schematic view for explaining an appearance of an electronic output device according to an embodiment of the present invention.
Referring to FIG. 4A, the external appearance of the independent
Referring to FIG. 4B, there is shown an external view of a user terminal as a smart watch interlocked with a smartphone, and an
Referring to FIG. 4C, there is shown an external view of a user terminal as a smart watch interlocked with a smart phone, and an
Referring to FIG. 4D, a user terminal as a smartphone and an
FIGS. 5 and 6 are flowcharts illustrating an electronic output method according to embodiments of the present invention.
Referring to FIG. 5, an electronic output method according to embodiments of the present invention includes a step of sensing auditory data (S110), a step of extracting recognition data (S120), a step of generating output data (S130), and a step of driving the vibration motor (S140).
In step S110, the
In S120, the
When the next stop information in the bus, subway, or the like is sensed, the
If the type of the auditory data is an effect sound or a notification sound, the
In step S130, the
The
In step S140, the
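Taken together, steps S110 to S140 form a simple sense-recognize-render loop. The self-contained sketch below mirrors only that control flow; every step is a stub and the data formats are assumptions rather than definitions from the patent.

```python
# Self-contained sketch of the FIG. 5 flow; every step is a stub and the step
# numbers in the comments refer to the flowchart described above.

def sense_auditory_data() -> bytes:
    """S110: sense auditory data (microphone input, stubbed)."""
    return b"...pcm samples..."


def extract_recognition_data(audio: bytes) -> str:
    """S120: determine the type of the auditory data and extract recognition data."""
    return "hello"  # e.g. speech recognized as text


def generate_output_data(text: str) -> list:
    """S130: build first output data, here one placeholder 6-pin frame per character."""
    return [[1, 0, 0, 0, 0, 0] for _ in text]


def drive_outputs(frames: list) -> None:
    """S140: drive the protrusions (and, for music, the vibration motor)."""
    for frame in frames:
        print("pins:", frame)


drive_outputs(generate_output_data(extract_recognition_data(sense_auditory_data())))
```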
Referring to FIG. 6, an electronic output method according to embodiments of the present invention includes an input step (S210), an auditory data sensing step (S220), a recognition data extraction step (S230), a determination step (S240), and a step (S250).
In S210, the
In S220, the
In S230, the
In S240, the
If it is determined in step S250 that the values match, the
The
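FIG. 6 adds a register-and-match flow: a notification sound registered by the user (S210) is later compared against sensed audio. The patent does not say how the comparison is performed; the sketch below assumes a fingerprint-plus-threshold approach with a deliberately naive placeholder fingerprint.

```python
# Hedged sketch of the FIG. 6 matching step. audio_fingerprint() is a naive
# placeholder (byte 4-grams); a real system might use spectral peak hashing.
from typing import Set


def audio_fingerprint(audio: bytes) -> Set[bytes]:
    return {audio[i:i + 4] for i in range(0, max(len(audio) - 3, 0), 4)}


def matches_registered(sensed: bytes, registered: bytes, threshold: float = 0.6) -> bool:
    """Decide whether the sensed audio matches the registered notification sound."""
    a, b = audio_fingerprint(sensed), audio_fingerprint(registered)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold  # Jaccard similarity of the fingerprints


# True here: 2 of the 4 distinct 4-grams overlap (Jaccard 0.5 >= 0.4). On a match,
# the device would drive the protrusions with the recognition data associated
# with that notification sound.
print(matches_registered(b"ding-dong-ding-dong", b"ding-dong", threshold=0.4))
```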
FIGS. 7 and 8 are diagrams for explaining an example of use of the
As shown in FIG. 7, the
According to the detected new sound, the
The
The
Referring to FIG. 8, when a user on the subway wearing the
The embodiments of the present invention described above can be implemented in the form of a computer program that can be executed through various components on a computer, and the computer program can be recorded on a computer-readable medium. The medium may include a magnetic medium such as a hard disk, a floppy disk, or a magnetic tape; an optical recording medium such as a CD-ROM or a DVD; a magneto-optical medium such as a floptical disk; and a hardware device specially configured to store and execute program instructions, such as a ROM, a RAM, or a flash memory.
Meanwhile, the computer program may be one specially designed and configured for the present invention, or one known to and usable by those skilled in the computer software field. Examples of the computer program include not only machine language code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
The specific implementations described in the present invention are examples and do not limit the scope of the invention in any way. For brevity of description, descriptions of conventional electronic configurations, control systems, software, and other functional aspects of such systems may be omitted. In addition, the connections or connecting members of the lines between the components shown in the figures illustrate functional connections and/or physical or circuit connections; in an actual apparatus, they may be replaced with, or supplemented by, various other functional, physical, or circuit connections. Furthermore, unless an element is specifically described with wording such as "essential" or "important", it may not be a necessary component for the application of the present invention.
The use of the term "the" and similar referring expressions in the specification of the present invention (particularly in the claims) may cover both the singular and the plural. In addition, when a range is described in the present invention, the invention to which the individual values within the range are applied is included (unless stated otherwise), which is the same as if each individual value constituting the range were described in the detailed description. Finally, the steps constituting the method according to the present invention may be performed in any suitable order unless the order is explicitly stated or stated otherwise; the present invention is not necessarily limited to the described order of the steps. The use of any and all examples or exemplary language (e.g., "such as") in the present invention is merely intended to describe the present invention in detail, and the scope of the present invention is not limited by those examples or that language unless limited by the claims. Those skilled in the art will also appreciate that various modifications, combinations, and alterations may be made according to design conditions and factors within the scope of the appended claims or their equivalents.
Claims (17)
An input control unit for controlling to sense auditory data;
An analysis unit for analyzing the auditory data to determine a type of the auditory data and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data;
A processing unit for generating first output data corresponding to the recognition data and, when the sensed auditory data includes a music sound, further generating second output data for controlling at least one of a vibration time, a vibration intensity, and a vibration frequency of the vibration motor according to a beat of the recognition data;
An output controller for controlling the plurality of protrusions to be driven according to the first output data;
And a vibration control unit that controls the vibration motor to cause vibration corresponding to the second output data to be generated.
The analyzer
And generates text data included in the auditory data as recognition data when the type of the auditory data is a person.
The analyzer
Wherein the text data included in the auditory data is translated into a language input by a user and the translated text data is generated as recognition data.
The analyzer
And searches the title of the music included in the auditory data and generates the title of the music as the recognition data when the type of the auditory data is a musical instrument.
The analyzer
An electronic output device wherein, when the type of the auditory data is inanimate and the auditory data includes a thunder sound, the analysis unit searches for weather information corresponding to the auditory data and generates the weather information as the recognition data.
The analyzer
And generates recognition data corresponding to the notification sound as recognition data of the auditory data when the auditory data includes auditory data corresponding to a notification sound set by the user.
Sensing the auditory data by the electronic output device;
Analyzing the auditory data, determining a type of the auditory data, and generating recognition data corresponding to the auditory data in consideration of the type of the auditory data;
Generating first output data corresponding to the recognition data and, when the sensed auditory data includes a music sound, further generating second output data for controlling at least one of a vibration time, a vibration intensity, and a vibration frequency of the vibration motor according to a beat of the recognition data;
Controlling the plurality of protrusions to be driven according to the first output data;
And controlling the vibration motor to cause vibration corresponding to the second output data to be generated.
The step of generating the recognition data
And generates text data included in the auditory data as recognition data when the type of the auditory data is a person.
The step of generating the recognition data
Wherein the text data included in the auditory data is translated into a language input by a user and the translated text data is generated as recognition data.
The step of generating the recognition data
Searching for the title of the music included in the auditory data and generating the title of the music as the recognition data when the type of the auditory data is a musical instrument.
The step of generating the recognition data
An electronic output method wherein, when the type of the auditory data is inanimate and the auditory data includes a thunder sound, weather information corresponding to the auditory data is retrieved and generated as the recognition data.
The type of the auditory data is
Set in consideration of a subject that generates the auditory data.
The step of generating the recognition data
And sets the recognition data corresponding to the notification sound as recognition data of the auditory data when the auditory data includes auditory data corresponding to a notification sound set by the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150154769A KR101768692B1 (en) | 2015-11-04 | 2015-11-04 | Electronic display apparatus, method, and computer readable recoding medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150154769A KR101768692B1 (en) | 2015-11-04 | 2015-11-04 | Electronic display apparatus, method, and computer readable recoding medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170099143A Division KR20170094527A (en) | 2017-08-04 | 2017-08-04 | Electronic display apparatus, method, and computer readable recoding medium |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170052391A KR20170052391A (en) | 2017-05-12 |
KR101768692B1 true KR101768692B1 (en) | 2017-08-17 |
Family
ID=58740317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150154769A KR101768692B1 (en) | 2015-11-04 | 2015-11-04 | Electronic display apparatus, method, and computer readable recoding medium |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101768692B1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100916817B1 (en) * | 2008-03-26 | 2009-09-14 | 엔에이치엔(주) | Method and system for providing web contents matching with data of mobile terminal |
-
2015
- 2015-11-04 KR KR1020150154769A patent/KR101768692B1/en active IP Right Grant
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100916817B1 (en) * | 2008-03-26 | 2009-09-14 | 엔에이치엔(주) | Method and system for providing web contents matching with data of mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
KR20170052391A (en) | 2017-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210216276A1 (en) | Speech recognition method and apparatus with activation word based on operating environment of the apparatus | |
KR102103057B1 (en) | Voice trigger for a digital assistant | |
US10455342B2 (en) | Sound event detecting apparatus and operation method thereof | |
JP7160967B2 (en) | Keyphrase detection with audio watermark | |
KR102228455B1 (en) | Device and sever for providing a subject of conversation and method for providing the same | |
US20120035924A1 (en) | Disambiguating input based on context | |
US20210210086A1 (en) | Method for processing voice signals of multiple speakers, and electronic device according thereto | |
CN104604274A (en) | Method and apparatus for connecting service between user devices using voice | |
US20150193199A1 (en) | Tracking music in audio stream | |
US11830501B2 (en) | Electronic device and operation method for performing speech recognition | |
US20180052658A1 (en) | Information processing device and information processing method | |
CN112259076B (en) | Voice interaction method, voice interaction device, electronic equipment and computer readable storage medium | |
KR101774236B1 (en) | Apparatus and method for context-awareness of user | |
KR20200005476A (en) | Retroactive sound identification system | |
US11455178B2 (en) | Method for providing routine to determine a state of an electronic device and electronic device supporting same | |
JP2013254395A (en) | Processing apparatus, processing system, output method and program | |
US10162898B2 (en) | Method and apparatus for searching | |
KR101768692B1 (en) | Electronic display apparatus, method, and computer readable recoding medium | |
KR20170081418A (en) | Image display apparatus and method for displaying image | |
KR20170094527A (en) | Electronic display apparatus, method, and computer readable recoding medium | |
KR101899021B1 (en) | Method for providing filtered outside sound and voice transmitting service through earbud | |
KR101862337B1 (en) | Apparatus, method and computer readable recoding medium for offering information | |
KR20200040562A (en) | System for processing user utterance and operating method thereof | |
KR102371563B1 (en) | Device and sever for providing a subject of conversation and method for providing the same | |
KR102338445B1 (en) | Apparatus, method and computer readable recoding medium for offering information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E90F | Notification of reason for final refusal | ||
E701 | Decision to grant or registration of patent right | ||
A107 | Divisional application of patent | ||
GRNT | Written decision to grant |