KR101649252B1 - Smart Input Apparatus for Analyzing Target - Google Patents
Info
- Publication number
- KR101649252B1 (application KR1020150104225A)
- Authority
- KR
- South Korea
- Prior art keywords
- barrel
- character
- word
- shutter
- information
- Prior art date
Links
Images
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06K9/344—
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
A smart input device for target analysis according to the present invention comprises: a cylindrical barrel forming a body; a shutter part extending in the longitudinal direction of the barrel at the tip end of the barrel and having a touch sensor formed at its end; a photographing part formed to extend in the lengthwise direction of the barrel at the tip end of the barrel so as to be connected to the barrel-side portion of the shutter part, and shorter than the shutter part; an image generation module that receives a touch signal from the touch sensor and controls the camera module to generate and store a shot image; and a character extraction module that extracts a target character from the shot image and generates target character information.
Description
The present invention relates to a smart input device for target analysis in which a visual clearance space that extends the viewing angle of the camera lens is provided by a novel physical structure formed of a barrel, a shutter part and a photographing part, so that a target character can be photographed stably without being cut off.
In recent years, as advanced network technology has developed rapidly, modern people can obtain information more conveniently and quickly. With the development of mobile devices and the Internet, convenient and quick methods such as dictionary searches using applications installed on mobile devices and Internet searches over Wi-Fi, 3G, and 4G networks have come into use as means of obtaining information. That is, a specific character string can be entered by typing on a touch-type or button-type keyboard, and a dictionary search, an Internet search, and the like can then be performed.
However, in an environment where a mobile device is not already in use, such as while reading a book, the process of turning on the device, navigating to the desired web page or application, typing, and obtaining the result can feel troublesome. Even while a mobile device is in use, finding the right application or web page and then typing a query may be inconvenient.
To eliminate this hassle and enable instant retrieval, there has been a need for a recognition device capable of immediately recognizing a specific character printed in a physical document such as a book or shown on the screen of a mobile device. To this end, Korean Patent Laid-Open No. 10-2004-0025416 and Korean Patent No. 10-0448038 were published.
Korean Patent Laid-Open No. 10-2004-0025416, "Character recognition pen", discloses a pen-shaped writing instrument that includes a digital camera for photographing handwriting, an LCD information window for displaying information according to the selected option, a PCB IC for storing the captured image, a function selection button for selecting a function of the character recognition pen, an input port coupled to the PCB IC for transmitting the captured image to a computer, and a power supply unit.
In the above technique, a digital camera for photographing handwriting is built into a writing instrument so that characters written by the user can be recognized in real time. Consequently, only handwritten characters can be recognized, and the device cannot be used when the user wants to recognize characters already printed in a document. In addition, since the camera is positioned close to the writing tip, its field of view is very narrow, and there is a high possibility that it captures a line other than the one being written.
Korean Patent No. 10-0448038, "Pen-type input device with a camera", discloses a pen-type input device in which the camera is arranged so that the center of the image photographed by the camera is located to the left or right of the tip of the pen, improving convenience in use.
This prior art attaches a camera to a pen-type input device as a component separated from the pen body. Although a certain distance from the target surface is guaranteed so that characters can be recognized, the separated camera has the fatal disadvantages that it is easily damaged, that adjusting the connection part by hand is troublesome, and that many errors can occur in selecting the exact position of the character to be recognized.
Although a variety of apparatuses for electronically recognizing characters, including the prior art described above, have been disclosed, they suffer from various problems in accurately recognizing characters in practice, and it is difficult for the user to obtain the desired information.
Therefore, there is a need for a new and advanced smart input device for target analysis that remedies the problems described above, recognizes the desired character more intuitively and simply, and quickly acquires information from it.
SUMMARY OF THE INVENTION The present invention is conceived to overcome the problems of the prior art, and it is an object of the present invention to provide a visual clearance space that expands the viewing angle of the camera lens by means of a new physical structure formed of a barrel, a shutter part, and a photographing part, so that a target character can be stably photographed.
Another object of the present invention is to provide a fitting slit on one side of a smart touch device to facilitate storage.
Yet another object of the present invention is to efficiently determine a target character among various word bundles photographed in a photographed image, thereby grasping a user's needs without error.
Yet another object of the present invention is to make the position of the camera lens variable according to the pen usage habit of the user, thereby preventing the characters in the shot image from being damaged.
According to an aspect of the present invention, there is provided a smart input device for analyzing a target, comprising: a cylindrical barrel forming the body of the smart touch device; a shutter part extending in the longitudinal direction of the barrel at the tip end of the barrel and having a touch sensor formed at its end; a photographing part formed to extend in the lengthwise direction of the barrel at the tip end of the barrel so as to be connected to the barrel-side portion of the shutter part, and shorter than the shutter part; an image generation module that receives a touch signal from the touch sensor and controls the camera module to generate and store a shot image; and a character extraction module that extracts a target character from the shot image and generates target character information.
In addition, the photographing part is rounded from the tip end of the barrel toward the shutter part. In its longitudinal cross section, a first inflection point, a second inflection point formed at a position spaced a certain distance in the longitudinal direction from the first inflection point, and a third inflection point spaced apart from the second inflection point in the longitudinal direction are formed; between the first and second inflection points an extending portion rounded upward is formed, between the second and third inflection points a connecting portion extends smoothly, and from the third inflection point a convex portion is formed, convexly rounded upward with its end oriented perpendicular to the shutter part.
In addition, the photographing part may include a rotation groove formed elongated at a position spaced a predetermined distance from the tip end of the photographing part toward the barrel, and the camera module may include a lens forming part accommodating a camera lens, a rotation control part connected to the lens forming part, extending along the inner surface at one end side of the photographing part and then bent perpendicularly outside the barrel so as to protrude through the rotation groove, and a guide part guiding the pivotal position of the lens forming part so that the lens forming part, moved according to the pressure applied to the rotation control part, rotates along the tip of the photographing part.
In addition, the photographing part may further include a fitting slit formed in a direction toward the shutter part at the first inflection point portion.
In addition, the character extraction module may include a word bundle selector that recognizes inter-character margins and leading margins to selectively extract word bundles and generate at least one piece of word bundle information; a distance calculation unit that selects the word bundle information closest to the touch point; and an optical reading unit that analyzes the selected word bundle information by an optical character reading method and stores it as target character information, so that the word bundle closest to the touch sensor is extracted as the target character.
In addition, the controller may further include a character damage determination unit that determines whether the word bundle in the word bundle information selected by the distance calculation unit is clipped and generates a character damage signal when the characters are damaged, and a lens angle control unit that receives the character damage signal and provides power to the rotation control part.
A smart input device for target analysis according to the present invention provides the following effects:
1) By providing a field-of-view securing space that expands the viewing angle of the camera lens through the new physical structure formed by the barrel, the shutter part and the photographing part, the target character can be photographed stably without being cut off;
2) The physical structure of the photographing part and the shutter part gives the device an attractive appearance;
3) A fitting slit formed in the photographing part makes the device easy to store;
4) The target character is determined efficiently among the various word bundles captured in the shot image, so the user's needs are grasped without error; and
5) The position of the camera lens can be varied according to the user's pen-holding habit, preventing characters in the shot image from being cut off.
Brief Description of the Drawings
FIG. 1 is a perspective view showing the overall configuration of a smart input device for target analysis according to the present invention.
FIG. 2 is a perspective view illustrating an embodiment of a smart input device for target analysis according to the present invention.
FIG. 3A is a side view of a first embodiment of a photographing part according to the present invention.
FIG. 3B is a side view of a second embodiment of a photographing part according to the present invention.
FIG. 3C is a side view of a third embodiment of a photographing part according to the present invention.
FIG. 4 is a conceptual diagram illustrating an embodiment of a photographed image according to the present invention.
FIG. 5 is a perspective view of a position adjustment assembly according to the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The accompanying drawings are not drawn to scale, and like reference numerals in the various drawings refer to like elements.
FIG. 1 is a perspective view showing an overall configuration of a smart input device for target analysis according to the present invention, and FIG. 2 is a perspective view illustrating an embodiment of a smart input device for target analysis according to the present invention.
Referring to FIGS. 1 and 2, the smart touch device of the present invention can be implemented in the shape of a pen so that the user can use it conveniently and familiarly.
The smart input device for target analysis of the present invention comprises a barrel (10), a shutter part (20) and a photographing part (30).
FIGS. 3A to 3C are side views of respective embodiments of the photographing part according to the present invention.
Any of the above embodiments may be used in consideration of the viewing angle, aesthetics, and the like.
In addition, the photographing part may further include a fitting slit formed in a direction toward the shutter part at the first inflection point portion.
Since the fitting slit 36 is formed, the smart touch device of the present invention can be stored more stably, thereby enhancing the convenience of the user.
The drawings and the description above address the external configuration of the photographing part; the camera module housed inside is described next.
The camera module may include a known lens, an image sensor, a memory, and the like, and photographs the target character in response to the signal transmitted from the touch sensor upon the user's touch, thereby generating a shot image.
That is, the camera module generates a captured image according to the transmitted touch signal.
FIG. 4 is a conceptual diagram showing an embodiment of a photographed image according to the present invention.
Referring to FIG. 4, the shot image is basically captured by the camera module at the moment the user presses the touch sensor against the target.
As a method of extracting the target character from the shot image, the character extraction module recognizes the inter-character margins and the leading margins to extract word bundles.
At this time, the character extracted as the target character is preferably the character located at the end of the shutter part on the captured image, that is, just above the touch-sensor-side portion.
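The margin-based extraction described here can be sketched in code: word bundles in a binarized text line are separated wherever a run of blank pixel columns is wider than the inter-character spacing (a minimal illustration; the bitmap representation and the gap threshold are assumptions, not taken from the patent):

```python
def segment_word_bundles(bitmap, min_gap=3):
    """Split a binarized text line (rows of 0/1 pixels) into word
    bundles: runs of ink columns separated by blank gaps wider than
    min_gap columns are treated as distinct bundles."""
    width = len(bitmap[0])
    # Columns that contain at least one ink pixel.
    ink_cols = [x for x in range(width) if any(row[x] for row in bitmap)]
    bundles = []
    for x in ink_cols:
        # A gap of min_gap or fewer columns is an inter-character
        # margin; anything wider is an inter-word (leading) margin.
        if bundles and x - bundles[-1][1] <= min_gap:
            bundles[-1][1] = x                 # extend current bundle
        else:
            bundles.append([x, x])             # start a new bundle
    return [tuple(b) for b in bundles]

# One text row: two "words" separated by a four-column blank gap.
line = [[1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1]]
print(segment_word_bundles(line))  # [(0, 4), (9, 10)]
```

On a real shot image the same idea would be applied after binarization, and vertically to detect line (leading) margins.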
The word bundle selector recognizes the inter-character margins and leading margins of the shot image and selectively extracts word bundles, generating at least one piece of word bundle information.
The distance calculation unit selects, among the pieces of word bundle information, the one closest to the touch point.
The word bundle information selected by the distance calculation unit is analyzed by the optical reading unit using an optical character reading method and stored as target character information.
As described above, by configuring the character extraction module in this way, the target character located nearest the touch sensor on the shot image can be extracted.
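The distance calculation unit's selection step can be illustrated as picking, among the bounding boxes of the extracted word bundles, the one whose centre lies nearest the projected touch point (a sketch; the box format and the centre-distance metric are assumptions):

```python
import math

def select_nearest_bundle(bundles, touch_point):
    """Return the word-bundle box (x0, y0, x1, y1) whose centre is
    closest to the touch point (x, y) on the shot image."""
    tx, ty = touch_point

    def centre_distance(box):
        x0, y0, x1, y1 = box
        return math.hypot((x0 + x1) / 2 - tx, (y0 + y1) / 2 - ty)

    return min(bundles, key=centre_distance)

bundles = [(0, 0, 10, 5), (100, 0, 120, 5), (40, 30, 70, 35)]
print(select_nearest_bundle(bundles, (105, 20)))  # (100, 0, 120, 5)
```

Only the selected bundle then needs to go through optical character reading, rather than the whole image.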
The information transmission module transmits the generated target character information to a separately provided target character analysis system.
The target character analysis system will be briefly described. Most preferably, it is an application that can be installed in a mobile device such as a smartphone. It can process the transferred target character information as memo information or build it into a searchable database, and when the target character information is a sentence, it can of course generate and provide translation information.
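One way such a companion application might route the received target character information, consistent with the memo, dictionary-search and translation behaviours described above, is sketched below (the function names and the single-word versus sentence heuristic are illustrative assumptions):

```python
def handle_target_info(target_text, dictionary, translate=None):
    """Process target character information as the companion app is
    described to: keep it as a memo, look single words up in a
    dictionary, and translate multi-word sentences when possible."""
    memo = {"text": target_text}           # every query is kept as a memo
    words = target_text.split()
    if len(words) == 1:
        memo["definition"] = dictionary.get(target_text, "not found")
    elif translate is not None:
        memo["translation"] = translate(target_text)
    return memo

print(handle_target_info("apple", {"apple": "a round fruit"}))
# {'text': 'apple', 'definition': 'a round fruit'}
```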
In order to supply power for operating the device, a power supply such as a battery may also be provided.
An embodiment of a process for intuitively and conveniently acquiring information desired by the user, using the target character analysis system provided separately from the components of the smart touch device of the present invention, will now be described.
As an additional method of generating a shot image, besides having the camera module photograph at the moment the touch sensor is actuated, it is possible to keep the camera module in an active state (buffering about 10 seconds of video in volatile memory) and, when a touch signal is received from the touch sensor, extract from the pre-recorded footage the frame corresponding to that moment. Since keeping the module always active incurs a large power loss, the active state may also be entered only when motion is detected by a separately provided gyro sensor.
Using this method, images without shaking can be obtained continuously.
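The pre-buffering variant, keeping the camera active with roughly ten seconds of footage in volatile memory and pulling out the frame at the touch instant, can be sketched as a ring buffer (the ten-second window comes from the text; everything else is an assumption):

```python
from collections import deque

class FrameBuffer:
    """Ring buffer of recent frames so the frame at the exact touch
    instant can be retrieved after the touch signal arrives."""

    def __init__(self, window=10.0):
        self.window = window               # retention window, seconds
        self.frames = deque()              # (timestamp, frame) pairs

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))
        # Discard frames older than the volatile-memory window.
        while self.frames and timestamp - self.frames[0][0] > self.window:
            self.frames.popleft()

    def frame_at(self, touch_time):
        """Return the buffered frame closest to the touch instant."""
        return min(self.frames, key=lambda tf: abs(tf[0] - touch_time))[1]

buf = FrameBuffer()
for t in range(6):                         # frames at t = 0..5 s
    buf.push(float(t), f"frame{t}")
print(buf.frame_at(2.2))                   # frame2
```

A gyro-gated variant would only start pushing frames while motion is detected, as the text suggests, to limit power loss.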
Although the above-described configuration has no difficulty in realizing the object of the present invention, the photographing part may additionally be provided with a position adjustment assembly.
Referring to FIG. 2, when the user grasps and uses the device, the pen-holding habit differs from user to user, so the position of the camera lens may need to vary accordingly.
FIG. 5 is a perspective view of a position adjustment assembly according to the present invention.
In order to adjust the position of the camera lens, a position adjustment assembly and a pivoting groove are provided in the photographing part.
The pivoting groove is formed elongated at a position spaced a predetermined distance from the tip end of the photographing part toward the barrel.
The position adjusting assembly includes a
The angle of the camera lens is adjusted by applying pressure to the rotation control part protruding through the pivoting groove; the lens forming part then moves and rotates along the tip of the photographing part, guided by the guide part.
In addition, the position adjustment assembly may be automatically controlled by the controller.
The character damage determination unit determines whether the word bundle in the word bundle information selected by the distance calculation unit is clipped, and generates a character damage signal when the characters are damaged.
The lens angle control unit receives the character damage signal and provides power to the rotation control part, pivoting the camera lens.
That is, it is possible to determine from the shot image whether the characters in the word bundle information are damaged, and to control the angle of the camera lens so that an undamaged image can be retaken.
In addition, it is also possible to provide a notification to the user through LED lights, vibration devices, and the like, which are additionally provided in the smart input device of the present invention, when the control is completed so as to prevent the character from being damaged.
When such an automatic control method is used, the position of the camera lens adapts to the user's pen-holding habit without manual adjustment, so that characters in the shot image are not cut off.
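A minimal sketch of this automatic loop, assuming the damage test is simply whether the selected bundle's box runs into the image border, and that the lens pivots a fixed step toward the clipped side (margin, step size and sign convention are all assumptions):

```python
def is_clipped(box, image_size, margin=2):
    """A word bundle counts as damaged when its bounding box touches
    the edge of the shot image, i.e. characters were cut off."""
    x0, y0, x1, y1 = box
    w, h = image_size
    return x0 < margin or y0 < margin or x1 > w - margin or y1 > h - margin

def lens_step(box, image_size, step_deg=5):
    """Return a pivot step (degrees) that moves the lens toward the
    clipped bundle, or 0 when no correction is needed."""
    if not is_clipped(box, image_size):
        return 0
    x0, _, x1, _ = box
    w, _ = image_size
    # Pivot toward whichever side of the frame the bundle sits on.
    return -step_deg if (x0 + x1) / 2 < w / 2 else step_deg

print(lens_step((0, 10, 40, 30), (100, 60)))   # -5 (bundle clipped at left edge)
print(lens_step((30, 10, 70, 30), (100, 60)))  # 0  (fully inside the frame)
```

After a correction step the image would be retaken and the test repeated until the bundle sits fully inside the frame.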
Although the smart input device for target analysis according to the present invention has been described above and illustrated in the drawings, the spirit of the present invention is not limited to the above description and drawings, and it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
1: Target character 2: Jagged space
3: Leading margin 10: Barrel
10a:
20: Shutter part 21: Touch sensor
30: Photographing part
30b:
30d(1):
30d(3):
31a: First inflection point 31b: Second inflection point
31c: Third inflection point 32: Camera lens
33: Lens forming part 34: Rotation control part
34a: Rotation groove 35: Guide part
36: Fitting slit 37: Field of view securing space
40: Controller 41: Image generation module
42:
42b:
43: Lens
43b: Lens angle control unit 44: Information transmission module
100: Smart touch device
Claims (7)
A cylindrical barrel defining a body of the smart input device;
A shutter part extending in the longitudinal direction of the barrel at a tip end of the barrel and having a touch sensor formed at an end thereof;
A photographing part formed to extend in the lengthwise direction of the barrel at the tip end of the barrel so as to be connected to the barrel-side portion of the shutter part, the length of the photographing part being shorter than that of the shutter part;
An image generation module for generating and storing a shot image by receiving a touch signal from the touch sensor and controlling the camera module;
And a character extraction module for extracting a target character from the captured image and generating target character information,
The character extraction module,
A word bundle sorting unit for extracting the word bundles by recognizing the inter-character space and the inter-line margins of the photographed image to generate at least one or more word bundle information;
A distance calculation unit for selecting the word bundle information closest to the touch point among the plurality of pieces of word bundle information, and
An optical reading unit for analyzing the selected word bundle information by an optical character reading method and storing the analyzed word bundle information as target character information,
And extracts a target character located nearest to the touch sensor on the captured image.
The photographing part,
Wherein the photographing part is rounded from the tip end of the barrel toward the shutter part, and the circumference of the photographing part lengthens so that the inner space expands as it extends toward the shutter part.
In the longitudinal cross section of the photographing part,
A first inflection point, a second inflection point formed at a position spaced a predetermined distance in the longitudinal direction from the first inflection point, and a third inflection point spaced apart from the second inflection point in the longitudinal direction are formed,
An extending portion rounded upward between the first and second inflection points,
A connecting portion smoothly extended between the second and third inflection points,
And a convex portion formed in a convexly rounded shape upward from the third inflection point and having an end formed in a direction perpendicular to the shutter portion.
The photographing part,
And a pivoting groove formed at a position spaced apart from the tip end side of the photographing part by a predetermined distance from the barrel side,
The camera module includes:
A lens forming section for housing a camera lens,
A rotation control part connected to the lens forming part and extending along the inner surface at one end side of the shooting part and then being vertically bent outside the barrel and protruding through the rotation groove,
And a guide part for guiding the pivotal position of the lens forming part so that the lens forming part, which is moved according to the pressure applied to the rotation control part, rotates along the tip of the photographing part.
The photographing part,
Further comprising a fitting slit formed in a direction toward the shutter part at the first inflection point portion.
The controller comprising:
A character damage determination unit for determining whether the word bundle in the word bundle information selected by the distance calculation unit is clipped, and generating a character damage signal when the characters are damaged; and
And a lens angle control unit that receives the character damage signal and provides power to the rotation control unit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150104225A KR101649252B1 (en) | 2015-07-23 | 2015-07-23 | Smart Input Apparatus for Analyzing Target |
PCT/KR2016/008018 WO2017014594A1 (en) | 2015-07-23 | 2016-07-22 | Smart input device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150104225A KR101649252B1 (en) | 2015-07-23 | 2015-07-23 | Smart Input Apparatus for Analyzing Target |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101649252B1 true KR101649252B1 (en) | 2016-08-23 |
Family
ID=56875638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150104225A KR101649252B1 (en) | 2015-07-23 | 2015-07-23 | Smart Input Apparatus for Analyzing Target |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101649252B1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120134964A (en) * | 2011-06-04 | 2012-12-12 | 제노젠(주) | Input device having digitizer ability based on camera |
KR101368444B1 (en) * | 2014-01-10 | 2014-02-28 | 주식회사 아하정보통신 | Electronic touch pen for iwb touch senser |
- 2015-07-23 KR KR1020150104225A patent/KR101649252B1/en active IP Right Grant
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120134964A (en) * | 2011-06-04 | 2012-12-12 | 제노젠(주) | Input device having digitizer ability based on camera |
KR101368444B1 (en) * | 2014-01-10 | 2014-02-28 | 주식회사 아하정보통신 | Electronic touch pen for iwb touch senser |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103916592B (en) | For shooting the device and method of portrait in the portable terminal with camera | |
KR102434865B1 (en) | Rollable mobile terminal and control method thereof | |
JP6657593B2 (en) | Biological imaging apparatus, biological imaging method, and biological imaging program | |
KR102352683B1 (en) | Apparatus and method for inputting note information into an image of a photographed object | |
US9513711B2 (en) | Electronic device controlled by a motion and controlling method thereof using different motions to activate voice versus motion recognition | |
US20130120548A1 (en) | Electronic device and text reading guide method thereof | |
KR20120080074A (en) | Display apparatus controled by a motion, and motion control method thereof | |
US20140168054A1 (en) | Automatic page turning of electronically displayed content based on captured eye position data | |
US20130120430A1 (en) | Electronic device and text reading guide method thereof | |
US20230199102A1 (en) | Mobile terminal and control method therefor | |
KR20150034257A (en) | Input device, apparatus, input method, and recording medium | |
JP5989479B2 (en) | Character recognition device, method for controlling character recognition device, control program, and computer-readable recording medium on which control program is recorded | |
CN103390155B (en) | Picture and text identification method and picture and text identification device | |
CN110059678A (en) | A kind of detection method, device and computer readable storage medium | |
JP5929364B2 (en) | Nail printing apparatus and printing control method | |
US8983132B2 (en) | Image recognition apparatus and image recognition method | |
CN111144414A (en) | Image processing method, related device and system | |
KR101649252B1 (en) | Smart Input Apparatus for Analyzing Target | |
JP2008250823A (en) | Image forming apparatus | |
EP2793458B1 (en) | Apparatus and method for auto-focusing in device having camera | |
US20180225438A1 (en) | Biometric authentication apparatus, biometric authentication method, and non-transitory computer-readable storage medium for storing program for biometric authentication | |
JP5149744B2 (en) | Image search device, image search system, image search method and program | |
US9430702B2 (en) | Character input apparatus and method based on handwriting | |
KR20170005975A (en) | Digital mirror | |
JP2009205203A (en) | Iris authentication device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment |
Payment date: 20190909 Year of fee payment: 4 |