KR101649252B1 - Smart Input Apparatus for Analyzing Target - Google Patents


Info

Publication number
KR101649252B1
Authority
KR
South Korea
Prior art keywords
barrel
character
word
shutter
information
Prior art date
Application number
KR1020150104225A
Other languages
Korean (ko)
Inventor
이철호
Original Assignee
이철호
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이철호 filed Critical 이철호
Priority to KR1020150104225A priority Critical patent/KR101649252B1/en
Priority to PCT/KR2016/008018 priority patent/WO2017014594A1/en
Application granted granted Critical
Publication of KR101649252B1 publication Critical patent/KR101649252B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06K9/344

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

A smart input device for target analysis according to the present invention comprises: a cylindrical barrel forming a body; a shutter part extending from the tip end of the barrel in the longitudinal direction of the barrel and having a touch sensor formed at its end; a photographing part extending from the tip end of the barrel toward the shutter part, connected around the barrel side, shorter than the shutter part, and housing a camera module in its internal space; a controller for receiving a touch signal from the touch sensor and controlling the camera module to generate and store a shot image; and a character extraction module for extracting a target character from the shot image and generating target character information.

Description

Smart Input Apparatus for Analyzing Target

The present invention relates to a smart input device for target analysis in which a view securing space for extending the viewing angle of a camera lens is provided by a novel physical structure formed of a barrel, a shutter part, and a photographing part, so that a target character can be photographed stably and without being clipped.

As network technology has advanced rapidly in recent years, modern users have come to obtain information more conveniently and quickly. With the development of mobile devices and the Internet, convenient and fast methods such as dictionary searches using applications installed on mobile devices such as mobile phones, and Internet searches over Wi-Fi, 3G, and 4G networks, have become common means of obtaining information. That is, a specific character string can be entered by typing on a touch-type or button-type keyboard, and a dictionary search, an Internet search, and the like can then be performed.

However, in an environment where a mobile device is not already in use, such as while reading a book, the process of turning on the device, navigating to the desired web page or application, typing, and obtaining the result can be troublesome. Even while a mobile device is in use, finding the right application or web page and then typing the query can be inconvenient.

To eliminate this hassle and enable instant retrieval, there has been a need for a recognition device capable of immediately recognizing a specific character printed in a physical document such as a book, or shown on the screen of a mobile device. Korean Patent Laid-Open No. 10-2004-0025416 and Korean Patent No. 10-0448038 disclose such devices.

Korean Patent Laid-Open No. 10-2004-0025416, "Character recognition pen", discloses a pen-shaped writing instrument that includes a digital camera for photographing handwriting, an LCD information window for displaying information according to the selected option, a PCB IC for processing and storing the captured image, a function selection button for selecting a function of the character recognition pen, an input port coupled to the PCB IC for transmitting the captured image to a computer, and a power supply unit.

In the above technique, a digital camera is built into the writing instrument in order to recognize characters as the user writes them. In this configuration, only handwritten characters can be recognized; the device cannot be used to capture characters that are already printed. In addition, since the camera is positioned close to the writing tip, it has a very narrow field of view, and there is a high chance of recognizing a line other than the one being written.

Korean Patent No. 10-0448038, "Pen-type input device with a camera", discloses a pen-type input device in which the device configuration is rearranged to improve convenience of use: the camera is mounted on the pen so that the center of the image it captures falls to the left or right of the pen tip.

This prior art attaches the camera to the pen-type input device as a separate, articulated part. Although a certain distance from the page is guaranteed so that characters can be recognized, the separated camera is highly likely to be damaged, adjusting the connecting part by hand is troublesome, and selecting the exact position of the desired character is error-prone.

Although a variety of apparatuses for electronically recognizing characters, including the prior art described above, have been disclosed, they suffer from various problems in accurately recognizing characters in practice, making it difficult for users to obtain the information they want.

Therefore, there is a need for a new and advanced smart input device for target analysis that remedies the problems described above, recognizes the desired character more intuitively and simply, and quickly retrieves information from it.

SUMMARY OF THE INVENTION

The present invention is conceived to overcome the problems of the prior art, and an object of the present invention is to provide a view securing space for expanding the viewing angle of the camera lens by means of a new physical structure formed of a barrel, a shutter part, and a photographing part, so that the target character can be photographed stably.

Another object of the present invention is to provide a fitting slit on one side of a smart touch device to facilitate storage.

Yet another object of the present invention is to efficiently determine a target character among various word bundles photographed in a photographed image, thereby grasping a user's needs without error.

Yet another object of the present invention is to make the position of the camera lens variable according to the pen usage habit of the user, thereby preventing the characters in the shot image from being damaged.

According to an aspect of the present invention, there is provided a smart input device for analyzing a target, comprising: a cylindrical barrel forming the body of the smart touch device; a shutter part extending from the tip end of the barrel in the longitudinal direction of the barrel and having a touch sensor formed at its end; a photographing part extending from the tip end of the barrel toward the shutter part, connected around the barrel side, shorter than the shutter part, and housing a camera module in its internal space; a controller for receiving a touch signal from the touch sensor and controlling the camera module to generate and store a shot image; and a character extraction module for extracting a target character from the shot image and generating target character information.

In addition, the photographing part is rounded from the tip end of the barrel toward the shutter part, so that the internal space 30e becomes wider as it extends toward the shutter part side.

In addition, the photographing part may include: a first inflection point formed where the tip end of the barrel meets the photographing part; a second inflection point formed at a position spaced a certain distance from the first inflection point in the longitudinal direction; a third inflection point spaced from the second inflection point in the longitudinal direction; an extending portion rounded upward between the first and second inflection points; a connecting portion smoothly extended between the second and third inflection points; and a convex portion convexly rounded from the third inflection point toward the shutter part.

In addition, the photographing part may include a rotation groove formed elongated at a position spaced a predetermined distance from the barrel side toward the tip end of the photographing part, and the camera module may include: a lens forming part accommodating a camera lens; a rotation control part connected to the lens forming part, extending along the inner surface at one end of the photographing part, then bent perpendicularly toward the outside of the barrel and protruding through the rotation groove; and a guide part for guiding the pivotal position of the lens forming part so that it can rotate along the tip of the photographing part.

In addition, the photographing part may further include a fitting slit formed in a direction toward the shutter part at the first inflection point portion.

In addition, the character extraction module may include: a word bundle sorting unit for selectively extracting word bundles by recognizing inter-word margins and line margins, generating at least one piece of word bundle information; a distance calculation unit for selecting the word bundle information closest to the touch sensor; and an optical reading unit for analyzing the selected word bundle information by an optical character reading method and storing it as target character information.

In addition, the controller may further include: a character damage determination unit for determining whether a word bundle in the word bundle information selected by the distance calculation unit is clipped, and generating a character damage signal when the character is damaged; and a lens angle control unit which, upon receiving the character damage signal, provides power to the rotation control part.
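As a hedged illustration of the character damage determination described above, a word bundle can be treated as "clipped" when its bounding box touches the border of the captured frame. All names and the pixel margin are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the character damage determination unit:
# a bundle whose bounding box reaches the frame edge may have had
# part of its characters cut off, so a damage signal is warranted.

def is_word_bundle_clipped(bbox, frame_size, margin=2):
    """bbox = (x, y, w, h); frame_size = (width, height).

    Returns True when the bundle lies within `margin` pixels of any
    frame edge, i.e. part of the character may be missing."""
    x, y, w, h = bbox
    fw, fh = frame_size
    return (x <= margin or y <= margin or
            x + w >= fw - margin or y + h >= fh - margin)
```

A True result here would correspond to the character damage signal that prompts the lens angle control unit to re-aim the camera lens.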

The smart input device for target analysis according to the present invention provides the following effects:

1) By providing a view securing space for expanding the viewing angle of the camera lens through the new physical structure formed by the barrel, the shutter part, and the photographing part, the target character can be photographed stably without being clipped;

2) The physical structure of the shooting part and the shutter part gives the device an attractive appearance;

3) The fitting slit formed in the shooting part makes the device easy to store;

4) It is possible not only to grasp the user's needs without error by efficiently judging the target character among the various word bundles photographed in the photographed image,

5) By changing the position of the camera lens according to the user's habits of using the pen, characters in the shot image are not damaged.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a perspective view showing the overall configuration of a smart input device for target analysis according to the present invention.
FIG. 2 is a perspective view illustrating an embodiment of a smart input device for target analysis according to the present invention.
FIG. 3A is a side view of a first embodiment of a photographing part according to the present invention.
FIG. 3B is a side view of a second embodiment of a photographing part according to the present invention.
FIG. 3C is a side view of a third embodiment of a photographing part according to the present invention.
FIG. 4 is a conceptual diagram illustrating an embodiment of a photographed image according to the present invention.
FIG. 5 is a perspective view of a position adjusting assembly according to the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The accompanying drawings are not drawn to scale and wherein like reference numerals in the various drawings refer to like elements.

FIG. 1 is a perspective view showing an overall configuration of a smart input device for target analysis according to the present invention, and FIG. 2 is a perspective view illustrating an embodiment of a smart input device for target analysis according to the present invention.

Referring to FIGS. 1 and 2, the smart touch device of the present invention can be implemented in the shape of a pen so that the user can use it conveniently and familiarly. In particular, the device is constructed so that the field of view of the camera lens 32 is sufficiently secured.

The smart input device for target analysis of the present invention comprises a barrel (10), a shutter part (20) and a photographing part (30).

The barrel 10 forms the entire body of the smart touch device of the present invention, providing space for the user to hold the device and supporting all of its components. The barrel 10 may have a cylindrical shape, a square pillar shape, a pentagonal prism shape, or the like; in the following description, a cylindrical shape is taken as an example.

The shutter part 20 is tapered from one end of the barrel 10 and has a touch-type touch sensor 21 interlocked with the camera module.

The shutter part 20 extends from a part of the circumference of the barrel 10 in the longitudinal direction. Here, "a part of the circumference" refers to an arc whose length is 50% or less of the circumferential length. The shutter part is tapered in the longitudinal direction of the barrel 10 from the tip end 10a of the barrel 10, and its tip is formed into a point with a very small cross-sectional area. The shutter part 20 is tapered so that its end can touch the target character area 1 on the page or screen relatively accurately and locally, while the taper also lends the shutter part 20 a clean appearance.

The touch sensor 21 is a pressure-sensitive sensor formed at the tip of the shutter part 20. When the user presses the touch sensor 21 against the page or the screen, the applied pressure generates a touch signal sensed by the sensor. The camera module of the photographing part 30 is driven by this touch signal, thereby photographing the target character 1 on the page or on the screen.

The photographing part 30 extends from the tip end 10b of the barrel 10 toward the shutter part 20, is shorter than the shutter part 20, and houses a camera module in its internal space 30e.

In other words, the shooting part 30 covers the entire periphery of the barrel 10 on the barrel side of the shutter part 20, and may appear to protrude in a dome, ellipsoid, or hemispherical shape. The shooting part 30 is hollow on the shutter part 20 side to secure the internal space 30e, and the camera module is installed in this internal space 30e.

The shooting part 30 may be rounded so that the internal space 30e gradually expands from the tip end 10b of the barrel 10 toward the shutter part 20, as shown in the drawing. That is, its cross section widens, and its diameter increases, from the barrel 10 side toward the shutter part 20 side. The shooting part 30 is rounded to have a larger circumference than the barrel 10 so that the camera module has a wider field of view when photographing the target character 1 on the page or on the screen, and so that sufficient space is provided to accommodate the camera module.

FIGS. 3A to 3C are side views of respective embodiments of the photographing part 30 according to the present invention.

As shown in FIG. 3A, the photographing part 30 includes first, second, and third inflection points 31a, 31b, and 31c, which divide it into an extension 30a, a connecting portion 30b, and a convex portion 30c.

The first inflection point 31a is formed at the point where the distal end of the barrel 10 meets the photographing part 30; unlike the barrel 10, which is linear in side view, the surface begins to curve here.

The second inflection point 31b is formed at a position spaced a predetermined distance from the first inflection point 31a in the extending direction of the shutter part 20, that is, in the longitudinal direction, and is curved at a point lower than the first inflection point 31a. The third inflection point 31c is spaced from the second inflection point 31b in the longitudinal direction, but its curvature point is higher than that of the second inflection point 31b.

Between these points are the extension 30a, rounded upward between the first and second inflection points 31a and 31b; the connecting portion 30b, extended smoothly between the second and third inflection points 31b and 31c; and the convex portion 30c, convexly rounded from the third inflection point 31c with its end converging slightly downward.

That is, the extension 30a is rounded so as to bend at the tip end of the barrel 10 and expand the internal space 30e; the connecting portion 30b has a relatively gentle slope from the second inflection point 31b; and the convex portion 30c presents a convex shape with its end slightly converging.

The extension 30a can support the hand when the user grips the smart touch device of the present invention like a pen, providing a more stable grip, and the convex portion 30c enlarges the field of view of the camera module while allowing the device to be stored more stably and giving it an overall smooth appearance.

The end portion of the convex portion 30c, that is, the front end of the shooting part 30, may be embodied in various forms, as shown in FIGS. 3A to 3C.

As shown in FIG. 3A, the front end of the shooting part 30 may be formed as a vertical part 30d(1) cut in the vertical direction toward the shutter part 20. Alternatively, as shown in FIG. 3B, it may be formed as a cover 30d(2) that extends further in the longitudinal direction from the top of the tip along the perimeter, that is, a shape that surrounds the view securing space 37.

Alternatively, the front end may be formed as a concave portion 30d(3) that is recessed toward the inside of the shooting part 30, as shown in FIG. 3C.

Any of the above embodiments may be chosen in consideration of the viewing angle of the camera lens 32, aesthetics, and the like, and other modified embodiments are also possible.

In addition, the photographing part 30 may further include a fitting slit 36. The fitting slit 36 is a slit that allows the smart touch device of the present invention to be clipped onto a bag, paper, or the like, formed in the extending direction of the shooting part 30, i.e., the direction in which the shutter part 20 is provided.

Since the fitting slit 36 is formed, the smart touch device of the present invention can be stored more stably, thereby enhancing the convenience of the user.

In the drawings and the description, the external configuration of the shooting part 30 and its function have been described by way of example, but it goes without saying that various modifications are possible.

The camera module may include a known lens, an image sensor, a memory, and the like, and photographs the target character 1 in response to the signal transmitted from the touch sensor by the user's touch, thereby generating a shot image. The camera lens 32 of the camera module is provided inside the tip portion of the shooting part 30 and may be installed at the center of the hollow; however, in order to provide a wider view, it may be preferable to position it at the upper center of the inner side of the tip end of the shooting part 30.

That is, the camera module can generate a captured image according to a signal transmitted from the touch sensor 21, which will be described in more detail in the controller 40 described later.

The smart touch device of the present invention is characterized in that a view securing space 37 for extending the viewing angle of the camera lens 32 is provided by the physical configuration of the shutter part 20 and the shooting part 30.

The view securing space 37 extends along the length of the shutter part 20 from the front end of the shooting part 30 so as to secure the field of view of the camera lens 32. That is, the view securing space 37 is the space surrounded by the shutter part 20 and the photographing part 30, and provides the physical clearance through which the camera module formed inside the photographing part 30 can smoothly photograph the target character 1. By providing the view securing space 37, the present invention offers ease of use different from existing technologies.

The controller 40 controls the shutter part 20 and the shooting part 30 of the present invention to generate a shot image and transmits it in conjunction with a separate application installed on the mobile device. The controller 40 basically includes an image generation module 41, a character extraction module 42, and an information transmission module 44.

The image generation module 41 receives the touch signal generated when the user applies pressure to the touch sensor 21, operates the camera module to photograph the page or screen at the point where the touch signal was generated, and creates and saves the shot image. The image generation module 41 may be interlocked with the camera module and may include a separate memory or the like.

In this case, the touch sensor 21 may generate a touch signal at the start of a sentence, or as it underlines the sentence, and generate another touch signal at the end of the sentence, continuously photographing the surface between the touch signals to form a video image. The smart touch device may be designed to switch easily between a word mode and a sentence mode using a simple button or the like; in this case, the image generation module 41 may include a word image generating unit and a sentence image generating unit.

As in the above-described embodiment, when the touch sensor 21 generates a single touch signal, the word image generating unit photographs the corresponding point and stores it as a shot image; when the touch sensor 21 generates a continuous touch signal, the sentence image generating unit photographs a continuous sequence of images using the camera module and stores them as shot images.
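The two capture modes above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the `camera` object, its `capture()` and `touch_held()` methods, and the frame rate are all assumptions.

```python
# Hedged sketch of the word-mode / sentence-mode behaviour:
# a single tap yields one still image, while press-drag-release
# over a sentence yields frames captured while the sensor is held.

import time

class ImageGenerationModule:
    def __init__(self, camera):
        self.camera = camera   # assumed interface: .capture(), .touch_held()
        self.mode = "word"     # toggled by a button on the barrel

    def on_touch_down(self):
        if self.mode == "word":
            return [self.camera.capture()]   # one shot at the tap point
        # sentence mode: record frames until the pen is lifted
        frames = []
        while self.camera.touch_held():
            frames.append(self.camera.capture())
            time.sleep(0.05)                 # ~20 fps while underlining
        return frames
```

In word mode the module returns a single frame; in sentence mode it returns the frame sequence that the character extraction module would later assemble in time series.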

The character extraction module 42 extracts a specific character to be searched in the captured image.

4 is a conceptual diagram showing an embodiment of a photographed image according to the present invention.

Referring to FIG. 4, the photographed image is captured by the camera module at the moment the user presses the touch sensor 21 onto the page, so the target character 1 and a part of the shutter part 20 appear together. The character extraction module 42 extracts only the character portion, excluding the part of the shutter part 20, and generates target character information for those characters.

As a method of extracting the target character 1, it is most preferable for the character extraction module 42 to use optical character recognition (OCR). That is, the target character 1 is extracted from the photographed image using an optical character reading method and stored as target character information.

At this time, the character extracted as the target character 1 is preferably the character located at the end of the shutter part 20 on the captured image, that is, just above the touch sensor 21 side. For this purpose, the character extraction module 42 may further include a word bundle sorting unit 42a, a distance calculation unit 42b, and an optical reading unit 42c.

The word bundle sorting unit 42a selects the word bundles included in the captured image: it recognizes the inter-word margins 2 formed by spacing and the line margins 3 formed by line breaks, and extracts the word bundles that exist between them, generating at least one piece of word bundle information.
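A hedged sketch of this margin-based grouping: given character bounding boxes on one text line, characters whose horizontal gap exceeds a spacing threshold are split into separate bundles, mimicking the inter-word margins 2. The box format, function name, and threshold are illustrative assumptions, not taken from the patent.

```python
# Illustrative word bundle sorting: a gap wider than `gap_threshold`
# pixels is treated as an inter-word margin and starts a new bundle.

def group_word_bundles(char_boxes, gap_threshold=8):
    """char_boxes: list of (x, y, w, h) boxes on one line.
    Returns a list of bundles, each a list of character boxes."""
    bundles = []
    for box in sorted(char_boxes, key=lambda b: b[0]):
        if bundles:
            last = bundles[-1][-1]
            gap = box[0] - (last[0] + last[2])   # space between characters
            if gap <= gap_threshold:
                bundles[-1].append(box)
                continue
        bundles.append([box])                    # margin found: new bundle
    return bundles
```

Line margins 3 would be handled analogously by first grouping boxes into lines by their vertical positions.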

The distance calculation unit 42b selects the word bundle information closest to the touch point from among the plural pieces of word bundle information. Since the touch sensor 21 itself appears in the image taken by the smart touch device of the present invention, the imaged portion of the touch sensor 21 can serve as the touch point. That is, the distance between this point and each piece of word bundle information can be calculated by recognizing the touch sensor 21 in the captured image. Alternatively, since the position of the touch point in the image is always the same, the touch point may simply be set in advance.

The word bundle information selected by the distance calculation unit 42b is analyzed by the optical reading unit 42c using the above-described optical character recognition (OCR) and stored as target character information.

With the character extraction module 42 configured as described above, the smart touch device of the present invention recognizes the target character 1 located nearest the touch point, that is, the word the user wants to search, without error, preventing wrong information from being provided at the source. In addition, even for a photographed image of a sentence, the closest characters can be selected in real time and the target characters 1 arranged in time series to construct a sentence, forming target character information for the whole sentence.
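The distance calculation and optical reading steps can be combined into one small pipeline sketch: pick the bundle whose center is nearest the (fixed or detected) touch point, then OCR only that bundle. `run_ocr` stands in for any optical character reader and is passed in as an assumed dependency; no specific OCR API from the patent is implied.

```python
# Illustrative end-to-end character extraction: nearest bundle, then OCR.

def bundle_center(bundle):
    """Center of the bounding box enclosing all (x, y, w, h) boxes."""
    xs = [x for x, y, w, h in bundle] + [x + w for x, y, w, h in bundle]
    ys = [y for x, y, w, h in bundle] + [y + h for x, y, w, h in bundle]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

def extract_target_character(bundles, touch_point, run_ocr):
    """Pick the bundle closest to the touch point and OCR only that crop."""
    def dist2(b):
        cx, cy = bundle_center(b)
        tx, ty = touch_point
        return (cx - tx) ** 2 + (cy - ty) ** 2
    nearest = min(bundles, key=dist2)
    return run_ocr(nearest)   # stored as target character information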

The information transmission module 44 transmits the target character information formed by the character extraction module 42 to a target character analysis system, such as an application installed on the mobile device. A wireless scheme such as Bluetooth pairing or Wi-Fi is most preferable as the transmission method. The transmitted target character information is searched and analyzed by the target character analysis system, and the result is provided to the user.
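The radio link itself (Bluetooth pairing or Wi-Fi) depends on the hardware stack, so as a transport-agnostic sketch the information transmission module can be modeled as serializing the target character information before handing it to whatever link connects the pen to the mobile application. The JSON framing and field names here are purely illustrative assumptions.

```python
# Hypothetical serialisation of target character information for the
# pen-to-application hand-off; the transport layer is out of scope.

import json

def encode_target_info(text, mode="word"):
    """Serialise target character information for transmission."""
    payload = {"type": "target_character", "mode": mode, "text": text}
    return json.dumps(payload).encode("utf-8")

def decode_target_info(raw):
    """Inverse operation on the mobile-device side."""
    payload = json.loads(raw.decode("utf-8"))
    return payload["text"], payload["mode"]
```

The decoded text would then drive the dictionary search, Internet search, or translation performed by the target character analysis system.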

The target character analysis system will be described briefly. It is most preferably an application installable on a mobile device such as a smartphone. It is also possible to build a search database by processing the transferred target character information as memo information. In addition, when the target character information is a sentence, translation information can of course be generated and provided.

To supply power for operating the controller 40, a battery may be housed inside the barrel 10 as the power supply means of the recognition apparatus of the present invention. The battery may be a disposable battery, a rechargeable battery, or the like.

In addition, the controller 40 of the present invention can also control electric devices that may be included in the smart touch device, such as an LED or a display panel.

An embodiment of the process by which a user intuitively and conveniently acquires desired information using the target character analysis system, provided separately from the components of the smart touch device of the present invention, is as follows. The user touches the touch sensor 21 of the smart touch device to a peripheral portion of the target word, most preferably just below it; the image generation module 41 of the controller 40 then receives the touch signal and operates the camera module to photograph that point. At this time, the view securing space 37 provided by the structural features of the shutter part 20 and the shooting part 30 allows the desired target word to be photographed smoothly, without the view being obstructed.

Then, the character extracting module 42 of the controller 40 extracts target character information from the photographed image, which can be transmitted to the separately provided target character analysis system.

As an additional method of generating a shot image, instead of triggering the camera module at the moment the touch sensor operates, the camera module can be kept in an active state (holding roughly the last 10 seconds of frames in volatile memory) and, when a touch signal is received from the touch sensor, the frame from the moment the signal was received can be extracted from the video already secured. Since keeping the camera always active increases power consumption, the active state may be entered only when motion is detected by a separately provided gyro sensor.

Using this method, images without shaking can be obtained continuously.
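The always-recording alternative reduces to a bounded ring buffer of timestamped frames from which the frame nearest the touch signal is retrieved. Buffer size, frame rate, and all names here are illustrative assumptions sketching that idea.

```python
# Sketch of the volatile-memory frame buffer: old frames are discarded
# automatically, and the frame closest in time to the touch signal is
# pulled out after the fact.

from collections import deque

class FrameBuffer:
    def __init__(self, fps=20, seconds=10):
        self.frames = deque(maxlen=fps * seconds)   # oldest frames drop off

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def frame_at_touch(self, touch_time):
        """Return the frame captured closest to the touch signal."""
        return min(self.frames, key=lambda tf: abs(tf[0] - touch_time))[1]
```

Because the chosen frame predates the mechanical disturbance of pressing the pen down, this is one way the device could obtain shake-free images.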

Although the configuration described above is sufficient to realize the object of the present invention, the photographing part 30 according to the present invention may further include a position adjusting assembly.

Referring to FIG. 2, when the user grips and uses the smart touch device 100 of the present invention like a pen, it is common to hold the device at a slightly oblique angle with respect to the target character 1. In this case, the target character 1 in the image photographed by the camera module appears inclined with respect to the image frame, and sometimes a part of the target character 1 cannot be photographed at all.

FIG. 5 is a perspective view of a position adjusting assembly according to the present invention.

In order to adjust the position of the camera lens 32, the photographing part 30 of the present invention further includes a rotation groove 34a and a position adjusting assembly.

The rotation groove 34a is elongated at a position spaced a predetermined distance from the barrel 10 toward the tip end of the shooting part 30, securing a space within which the rotation control part 34 can move.

The position adjusting assembly includes a lens forming part 33, a rotation control part 34, and a guide part 35. The lens forming part 33 is the mechanical component in which the camera lens 32 is mounted; it is formed separately from the photographing part 30 so that it can be rotated within it.

The rotation control part 34 is connected to the lens forming part 33, extends along the inner surface on the upper side of the photographing part 30, and is then bent vertically upward so as to protrude through the rotation groove 34a provided in the photographing part 30. The user may push the rotation control part 34 in the lateral direction along the rotation groove 34a, thereby varying the position of the lens forming part 33 connected to it.

The guide part 35 is provided so that the lens forming part 33, moved by the pressure applied to the rotation control part 34, rotates along the upper line of the hollow tip end of the photographing part 30; it guides the rotational position of the lens forming part 33 so that it rotates along this line without departing from it. As shown in FIG. 5, the guide part 35 may be a housing covering the space in which the lens forming part 33 is located, or the lens forming part 33 and the guide part 35 may be interlocked with each other in a rack-and-pinion manner.

The operation of adjusting the angle of the camera lens 32 using the above-described configuration is as follows. Depending on the user's pen-holding habit and whether the user is left-handed or right-handed, the user applies pressure to the rotation control part 34 with a finger or the like so that the rotation control part 34 rotates left or right along the rotation groove 34a. At this time, the lens forming part 33 connected to the rotation control part 34 moves with the applied pressure while being guided by the guide part 35. Through this process, the angle of the camera lens 32 is adjusted to suit the user's pen-holding habit, so that the target character 1 can be photographed more accurately and reliably. In addition, a display panel (not shown) that displays the generated shot image may be provided on one side of the barrel 10, so that the user can watch the image in real time while adjusting the angle of the camera lens 32 with the position adjusting assembly, allowing the position of the camera lens 32 to be set more precisely.

In addition, the position adjustment assembly may be automatically controlled by the controller 40, and the controller 40 may further include a lens angle setting module 43.

The lens angle setting module 43 includes a character damage determination unit 43a and a lens angle control unit 43b.

The character damage determination unit 43a determines whether the word bundle in the word bundle information selected by the distance calculation unit 42b is clipped, and generates a character damage signal when a character is damaged. The lens angle control unit 43b receives the character damage signal and supplies power to the rotation control part 34.

That is, whether the characters in the word bundle information are damaged can be determined from the photographed image, and the angle of the camera lens 32 can be controlled accordingly. The angle of the camera lens 32 is preferably controlled until no character damage remains, at which point the operation of the lens angle control unit 43b is stopped.
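The damage-detection feedback loop of the lens angle setting module 43 can be sketched as follows. This is an illustrative sketch only: `capture`, `locate_bundle`, and `rotate` are hypothetical callbacks standing in for the camera module, the word bundle selected by the distance calculation unit 42b, and the powered rotation control part 34, and the margin and step values are assumptions.

```python
def is_clipped(bbox, img_w, img_h, margin=2):
    """Character damage check: a word bundle whose bounding box touches
    the image border is considered clipped. bbox = (x, y, w, h)."""
    x, y, w, h = bbox
    return (x <= margin or y <= margin
            or x + w >= img_w - margin or y + h >= img_h - margin)

def adjust_lens_angle(capture, locate_bundle, rotate, max_steps=20):
    """Feedback loop in the spirit of the lens angle setting module 43:
    while the selected word bundle is clipped (character damage signal),
    step the lens angle, and stop once no damage remains."""
    for _ in range(max_steps):
        img_w, img_h, bbox = locate_bundle(capture())
        if not is_clipped(bbox, img_w, img_h):
            return True   # no character damage: stop control, notify user
        rotate(1)          # damage signal -> power the rotation control part
    return False           # could not remove damage within max_steps
```

The `max_steps` bound plays the role of stopping the lens angle control unit 43b; on success, the completion notification (LED, vibration) would be triggered.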

In addition, when the control that prevents character damage is completed, a notification may be provided to the user through an LED light, a vibration device, or the like additionally provided in the smart input device of the present invention.

When such an automatic control method is used, the rotation control part 34 may be formed so as not to protrude from the surface of the photographing part 30.

In addition, the photographing part 30 and the shutter part 20 may be modularized so as to be detachable from the barrel 10. That is, the photographing part 30 and the shutter part 20 can be separated from the barrel 10, and the separated module can be used as a cap.

Although the smart input device for target analysis according to the present invention has been described above and illustrated in the drawings, the spirit of the present invention is not limited to this description and these drawings; it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
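The character extraction pipeline described earlier (segment word bundles from inter-character spaces and inter-line margins, select the bundle nearest the touch point, then read it optically) can be sketched as follows. The pre-segmented `word_bundles` structure is a hypothetical stand-in for the word bundle sorting unit 42a's output, and a real device would run optical character reading on the bundle's pixels rather than returning stored text.

```python
import math

def extract_target(word_bundles, touch_point):
    """Select the word bundle nearest the touch point (as the distance
    calculation unit 42b does) and return its text (where the optical
    reading unit 42c would run OCR on the bundle's image region)."""
    tx, ty = touch_point
    nearest = min(word_bundles,
                  key=lambda b: math.hypot(b["center"][0] - tx,
                                           b["center"][1] - ty))
    return nearest["text"]
```
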

1: Target character 2: Inter-character space
3: Inter-line margin 10: Barrel
10a: barrel end opening 10b: barrel end opening
20: Shutter part 21: Touch sensor
30: Shooting part 30a: Extension part
30b: connection portion 30c: convex portion
30d (1): vertical portion 30d (2): covering portion
30d (3): concave portion 30e: space inside the shooting part
31a: first inflection point 31b: second inflection point
31c: third inflection point 32: camera lens
33: lens forming part 34: rotation control part
34a: rotation groove 35: guide portion
36: insertion slit 37: field of view securing
40: controller 41: image generation module
42: Character extraction module 42a: Word collection selection module
42b: distance calculating section 42c: optical reading section
43: lens angle setting module 43a: character damage determination unit
43b: lens angle control unit 44: information transmission module
100: Smart input device

Claims (7)

A smart input device for target analysis,
A cylindrical barrel defining a body of the smart input device;
A shutter part extending in the longitudinal direction of the barrel at a tip end of the barrel and having a touch sensor formed at an end thereof;
A photographing part formed to extend in the lengthwise direction of the barrel at the tip end of the barrel so as to be connected to the barrel-side portion of the shutter part, the photographing part being shorter than the shutter part;
An image generation module for generating and storing a shot image by receiving a touch signal from the touch sensor and controlling the camera module;
And a character extraction module for extracting a target character from the captured image and generating target character information,
The character extraction module,
A word bundle sorting unit for recognizing the inter-character spaces and inter-line margins of the photographed image to extract word bundles and generate one or more pieces of word bundle information;
A distance calculation unit for selecting, from among the pieces of word bundle information, the word bundle information closest to the touch point; and
An optical reading unit for analyzing the selected word bundle information by an optical character reading method and storing the analyzed word as target character information,
Wherein the smart input device extracts the target character located nearest to the touch sensor on the captured image.
The smart input device according to claim 1,
The photographing part,
Wherein the photographing part is rounded from the tip end of the barrel toward the shutter part, and its circumference grows longer so that its inner space expands as it extends toward the shutter part.
The smart input device according to claim 2,
In the longitudinal cross section of the photographing part,
A first inflection point, a second inflection point formed at a position spaced a predetermined distance in the longitudinal direction from the first inflection point, and a third inflection point spaced a predetermined distance in the longitudinal direction from the second inflection point are formed, the photographing part comprising:
An extension portion extending upward between the first and second inflection points,
A connecting portion extending smoothly between the second and third inflection points,
And a convex portion formed in a convexly rounded shape upward from the third inflection point and having an end formed in a direction perpendicular to the shutter part.
The smart input device according to claim 1,
The photographing part,
Further comprising a rotation groove formed at a position spaced a predetermined distance toward the barrel from the tip end side of the photographing part,
The camera module includes:
A lens forming section for housing a camera lens,
A rotation control part connected to the lens forming part, extending along the inner surface at one end side of the photographing part, and then bent vertically outside the barrel so as to protrude through the rotation groove,
And a guide portion for guiding the rotational position of the lens forming portion so that the lens forming portion, which is moved according to the pressure applied to the rotation control portion, rotates along the tip end of the photographing part.
The smart input device according to claim 3,
The photographing part,
Further comprising a fitting slit formed in a direction toward the shutter part at the first inflection point portion.
The smart input device according to claim 4,
The controller comprising:
A character damage judging unit for judging whether a word bundle in the word bundle information selected by the distance calculating unit is clipped, and generating a character damage signal when a character is damaged;
And a lens angle control unit that receives the character damage signal and provides power to the rotation control unit.

delete
KR1020150104225A 2015-07-23 2015-07-23 Smart Input Apparatus for Analyzing Target KR101649252B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020150104225A KR101649252B1 (en) 2015-07-23 2015-07-23 Smart Input Apparatus for Analyzing Target
PCT/KR2016/008018 WO2017014594A1 (en) 2015-07-23 2016-07-22 Smart input device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150104225A KR101649252B1 (en) 2015-07-23 2015-07-23 Smart Input Apparatus for Analyzing Target

Publications (1)

Publication Number Publication Date
KR101649252B1 true KR101649252B1 (en) 2016-08-23

Family

ID=56875638

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150104225A KR101649252B1 (en) 2015-07-23 2015-07-23 Smart Input Apparatus for Analyzing Target

Country Status (1)

Country Link
KR (1) KR101649252B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120134964A (en) * 2011-06-04 2012-12-12 제노젠(주) Input device having digitizer ability based on camera
KR101368444B1 (en) * 2014-01-10 2014-02-28 주식회사 아하정보통신 Electronic touch pen for iwb touch senser

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment (payment date: 20190909; year of fee payment: 4)