WO2018194227A1 - Three-dimensional touch recognition device using deep learning and three-dimensional touch recognition method using the same

Three-dimensional touch recognition device using deep learning and three-dimensional touch recognition method using the same

Info

Publication number
WO2018194227A1
Authority
WO
WIPO (PCT)
Prior art keywords
marker
deep learning
brightness
dimensional
touch recognition
Prior art date
Application number
PCT/KR2017/011272
Other languages
English (en)
Korean (ko)
Inventor
이수웅
권순오
안희경
이강원
이종일
Original Assignee
한국생산기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국생산기술연구원 filed Critical 한국생산기술연구원
Publication of WO2018194227A1


Classifications

    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/04166 Details of scanning methods, e.g. sampling time, grouping of sub areas or time sharing with display driving
    • G06F3/042 Digitisers characterised by opto-electronic transducing means
    • G06F3/0425 Digitisers characterised by opto-electronic transducing means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G06N3/08 Learning methods (computing arrangements based on biological models; neural networks)
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06V20/64 Three-dimensional objects (image or video recognition or understanding; scenes; scene-specific elements)
    • G08B5/22 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied, using electric transmission; using electromagnetic transmission

Definitions

  • The present invention relates to a three-dimensional touch recognition device using deep learning and a three-dimensional touch recognition method using the same, and more particularly, to a three-dimensional touch recognition device that allows user input to be performed on a flexible, soft material and that uses deep learning to recognize, determine, and process a variety of user inputs, and to a three-dimensional touch recognition method using the same.
  • Commonly used input devices include the mouse, keyboard, touch pad, and trackball. To use them, the user must grab or touch the casing and main parts of the device with an appropriate force and then move or click them, which requires fairly sophisticated operation.
  • Korean Patent No. 10-1719278 (title of the invention: deep learning framework and image recognition method for visual content-based image recognition) discloses an integrated GUI framework in which deep learning technology is modularized. The framework is equipped with a content-based deep learning analysis tool that automates the extraction of in/out parameter properties, training data set analysis, and deep learning scenarios for each module; a parameter property interworking module that links the in/out parameter properties between modules through the content-based deep learning analysis tool; a dynamic call interface interworking module that enables interworking between modules; a standard API interface integration module between the modules; a one-pass integration module and deep learning analysis tool that integrate the task analysis, results, and confirmation of the modules into one; and an analysis result repository that stores the analysis results.
  • An object of the present invention, devised to solve the above problems, is to enable user inputs such as pressing and moving without requiring complicated operation.
  • An object of the present invention is to analyze a three-dimensional pattern input to a device using a deep learning algorithm.
  • The configuration of the present invention for achieving the above objects includes: an input unit provided with a sheet that is restored to its initial shape when a user input applied to its outer surface is released;
  • a plurality of markers arranged along the inner surface of the sheet;
  • an imaging unit that collects marker images, i.e., images of the markers as they change in response to the user input;
  • an illumination unit that irradiates light toward the inner surface of the sheet; and
  • an analysis unit that generates a three-dimensional pattern by analyzing the marker images and outputs a deep learning result value through an iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.
  • Here, the deep learning algorithm may be any one of a deep neural network, a convolutional neural network, and a recurrent neural network.
  • Further, the analysis unit may determine the three-dimensional pattern from a position change or a brightness change of the size point marker, which is the marker recognized as having the largest size among the plurality of markers.
  • Further, the analysis unit may determine the three-dimensional pattern from a position change or a brightness change of the size point marker together with a change in the size or shape of the size auxiliary markers positioned within a predetermined range around the size point marker.
  • Further, the analysis unit may determine the three-dimensional pattern from a position change or a brightness change of the brightness point marker, which is the marker recognized as having the highest brightness among the plurality of markers.
  • Further, the analysis unit may determine the three-dimensional pattern from a position change or a brightness change of the brightness point marker together with a change in the size or shape of the brightness auxiliary markers positioned within a predetermined range around the brightness point marker.
  • the marker may be formed in a circular shape.
  • The device may further include a start notification unit configured to provide a visual notification when a start input, which is the first of the user inputs, acts on the outer surface of the sheet.
  • The device may further include a control unit that receives the deep learning result value from the analysis unit and transmits a control signal corresponding to the deep learning result value to external equipment.
  • The configuration of the present invention for achieving the above object includes: (i) a step in which a user input acts on the sheet; (ii) a step of transferring the marker images, captured by photographing the inner surface of the sheet per unit time, to the analysis unit; (iii) a step in which the analysis unit analyzes the marker images and generates a three-dimensional pattern from position changes or brightness changes of the markers; and (iv) a step of inputting data of the three-dimensional pattern into a deep learning algorithm and outputting a deep learning result value.
  • In step (iii), the analysis unit may determine the three-dimensional pattern from a position change of the size point marker, which is the marker recognized as having the largest size among the plurality of markers.
  • Alternatively, in step (iii), the analysis unit may determine the three-dimensional pattern from a position change of the brightness point marker, which is the marker recognized by the imaging unit as having the highest brightness among the plurality of markers.
  • In step (iii), the horizontal component of the three-dimensional displacement of a marker may be measured from the position change of the marker.
  • In step (iii), the vertical component of the three-dimensional displacement of a marker may be measured from the brightness change of the marker.
  • An effect of the present invention is that, by analyzing and determining the three-dimensional pattern created on the device by a user input using a deep learning algorithm, recognition accuracy can be improved for a wide variety of user patterns.
  • FIG. 1 is a perspective view of a three-dimensional touch recognition device according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a 3D touch recognition device according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram showing a first user input acting on the sheet according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram showing a pressing operation acting on the sheet according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram showing a movement operation in one direction on the sheet according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing a movement operation in another direction on the sheet according to an embodiment of the present invention.
  • FIG. 7 is an image of a three-dimensional pattern according to an embodiment of the present invention.
  • The three-dimensional touch recognition device using deep learning according to an embodiment of the present invention includes: an input unit provided with a sheet that is restored to its initial shape when a user input applied to its outer surface is released;
  • a plurality of markers arranged along the inner surface of the sheet;
  • an imaging unit that collects marker images, i.e., images of the markers as they change in response to the user input;
  • an illumination unit that irradiates light toward the inner surface of the sheet; and
  • an analysis unit that generates a three-dimensional pattern by analyzing the marker images and outputs a deep learning result value through an iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.
  • FIG. 1 is a perspective view of a three-dimensional touch recognition device according to an embodiment of the present invention
  • Figure 2 is a block diagram of a three-dimensional touch recognition device according to an embodiment of the present invention.
  • Referring to FIGS. 1 and 2, the three-dimensional touch recognition device using deep learning of the present invention includes: an input unit 100 provided with a sheet 110 that is restored to its initial shape when a user input applied to its outer surface is released;
  • a plurality of markers 120 arranged along the inner surface of the sheet 110;
  • an imaging unit 200 that collects marker images, i.e., images of the markers 120 as they change in response to the user input;
  • an illumination unit 300 that irradiates light toward the inner surface of the sheet 110; and
  • an analysis unit 400 that generates a three-dimensional pattern by analyzing the marker images and outputs a deep learning result value through an iterative operation of inputting data of the three-dimensional pattern into the deep learning algorithm.
  • the marker 120 may be formed in a circular shape.
  • the marker 120 is described as being circular, but is not necessarily limited thereto, and may be formed in various shapes such as an ellipse, a square, and a polygon.
  • The marker 120 may be a figure drawn only in outline, or a figure whose interior is filled with color.
  • The plurality of markers 120 may be installed in a pattern having a predetermined number of rows and columns, and various patterns may be considered, such as equal spacing between rows and between columns or an arrangement in concentric circles.
  • the plurality of markers 120 may form a square array.
  • the change in the marker image may be generated by a change in the spacing between the columns or rows of the array formed by the marker 120 or a change in the shape of the array.
  • the change in the marker image may be generated by a change in the size or shape of the marker 120 itself.
  • A user input may be performed by holding still or moving a body part of the user while that body part is in contact with the outer surface of the sheet 110.
  • A body part of the user may mean any part of the body, such as a hand or finger, a foot or toe, or an elbow or knee, capable of pressing, releasing, moving, and the like while in contact with one surface of the sheet 110.
  • the user who generates the user input may include not only a person but also a machine, a robot, and other devices.
  • the pressing pressure and the time for holding the pressing are not limited.
  • the time required for releasing the pressing is not limited to a specific range.
  • Movement means that the user moves from one point of one surface of the sheet 110 to another while maintaining the pressing state.
  • The movement path is not limited to a specific one and may include, but is not limited to, straight lines, curves, circles, ellipses, arcs, splines, and the like.
  • the imaging unit 200 may include an element such as a CCD, but is not limited thereto.
  • the imaging unit 200 may include a wide-angle lens to enable photographing of the entire photographing surface.
  • the sheet 110 may be made of an elastic material.
  • The remaining portion of the input unit 100 other than the sheet 110 may be formed of a flexible material like the sheet 110, or may be formed of a material having no elasticity, unlike the sheet 110.
  • The input unit 100 may have a shape in which its internal space is open to the outside or, as shown in FIG. 2, a shape in which its internal space is closed from the outside.
  • the lighting unit 300 may include an LED lamp.
  • A configuration in which the illumination is automatically turned off, for example, may also be adopted.
  • FIG. 3 is a schematic diagram showing a first user input acting on the sheet 110 according to an embodiment of the present invention,
  • FIG. 4 is a schematic diagram showing a pressing operation acting on the sheet 110 according to an embodiment of the present invention,
  • FIG. 5 is a schematic diagram showing a movement operation in one direction on the sheet 110 according to an embodiment of the present invention,
  • FIG. 6 is a schematic diagram showing a movement operation in another direction on the sheet 110 according to an embodiment of the present invention, and
  • FIG. 7 is an image of a three-dimensional pattern according to an embodiment of the present invention.
  • FIGS. 3(a), 4(a), 5(a), and 6(a) may be cross-sectional views of the input unit on which the respective user inputs act.
  • FIGS. 3(b), 4(b), 5(b), and 6(b) may be plan views of the inner surface of the sheet on which the respective user inputs act.
  • reference numeral 121 may indicate a size point marker 121 or a brightness point marker 121.
  • reference numeral 122 may indicate a size assist marker 122 or a brightness assist marker 122.
  • The analysis unit 400 may determine the three-dimensional pattern from a position change or a brightness change of the size point marker 121, which is the marker recognized by the imaging unit 200 as having the largest size among the plurality of markers 120. Alternatively, the analysis unit 400 may determine the three-dimensional pattern from a position change or a brightness change of the brightness point marker 121, which is the marker recognized by the imaging unit 200 as having the highest brightness among the plurality of markers 120.
  • That is, the size point marker 121 and the brightness point marker 121 denote, respectively, the largest marker 120 and the brightest marker 120 in the marker image recognized by the imaging unit 200. In particular, in the case of the brightness point marker 121, because the imaging unit 200 and the lighting unit 300 are located in the same direction, the brightness of a marker 120 increases as its position approaches the imaging unit 200.
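  • For illustration only, the following minimal sketch shows one way the size point marker and the brightness point marker could be picked out of a grayscale marker image using OpenCV; the threshold value and the assumption that the markers appear as bright blobs on a darker background are not taken from the present disclosure.

```python
import cv2
import numpy as np

def find_point_markers(gray: np.ndarray, thresh: int = 60):
    """Return (size_point, brightness_point) pixel coordinates from one marker image.

    Assumes the markers appear as bright blobs on a darker background; the
    threshold value is illustrative only.
    """
    # Binarize so that each marker becomes a connected blob.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None

    # Size point marker: centroid of the blob with the largest area
    # (the marker closest to the imaging unit looks biggest).
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    size_point = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else None

    # Brightness point marker: location of the brightest pixel; with the light
    # source beside the camera, the marker nearest the camera is also brightest.
    _, _, _, brightness_point = cv2.minMaxLoc(cv2.GaussianBlur(gray, (5, 5), 0))

    return size_point, brightness_point
```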
  • The three-dimensional pattern may be a change in position composed of a two-dimensional horizontal displacement and a vertical displacement perpendicular to that horizontal displacement.
  • the two-dimensional horizontal displacement may be expressed as a displacement on the x-y plane
  • the vertical displacement may be expressed as a displacement with respect to the z axis perpendicular to the x-y plane.
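  • As a purely illustrative sketch (the present disclosure does not specify a calibration model), the x-y coordinates could be taken from the point marker's pixel position and the z coordinate estimated from how much brighter the marker is than at rest; the scale factors below are hypothetical calibration constants.

```python
def to_3d_coordinate(pixel_xy, brightness, base_brightness,
                     mm_per_pixel=0.5, mm_per_brightness=0.2):
    """Map one point-marker observation to an (x, y, z) coordinate in millimetres.

    mm_per_pixel and mm_per_brightness are hypothetical calibration constants:
    the horizontal displacement comes from the pixel position on the x-y plane,
    the vertical displacement from the brightness increase over the resting value.
    """
    x = pixel_xy[0] * mm_per_pixel
    y = pixel_xy[1] * mm_per_pixel
    z = (brightness - base_brightness) * mm_per_brightness  # press depth along z
    return (x, y, z)
```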
  • Referring to FIG. 3, when a user input is first applied to the outer surface of the sheet 110 by hand, the size point marker 121 may be recognized. Referring to FIGS. 4 to 6, when a moving operation is performed after a pressing operation on the outer surface of the sheet 110 by hand, the position of the size point marker 121 recognized by the imaging unit 200 as having the largest size may change due to the pressing and moving.
  • The movement of the size point marker 121 may be recognized as a change in which of the plurality of markers 120 is recognized by the imaging unit 200 as having the largest size. That is, the markers 120 themselves do not move; rather, the position change of the size point marker 121 is recognized, and the movement of the size point marker 121 is thereby determined.
  • The analysis unit 400 may determine the three-dimensional pattern from the position change or brightness change of the size point marker 121 together with the change in the size or shape of the size auxiliary markers 122 located within a predetermined range around the size point marker 121.
  • The predetermined range may be the range of a matrix forming an N × M array (N and M being arbitrary integers) around the size point marker 121, or the range of a circle including L markers (L being an arbitrary integer) around the size point marker 121, but the present invention is not limited thereto.
  • The predetermined range may also be the entire inner surface of the sheet 110. (The predetermined range is indicated by the dotted line in FIGS. 5 and 6.)
  • When the position of the size point marker 121 changes, the size or shape of the size auxiliary markers 122 located within the predetermined range also changes, and data about the size auxiliary markers 122 may be stored in advance in the analysis unit 400 or may be learned by deep learning and stored in the analysis unit 400.
  • Accordingly, the analysis unit 400 may determine the three-dimensional pattern by analyzing not only images of the position change or brightness change of the size point marker 121 but also images of the change in size or shape of each of the individual size auxiliary markers 122.
  • For example, when the size point marker 121 undergoes a horizontal displacement, i.e., a position change, the size or shape of the size auxiliary markers 122 within a predetermined range such as a 3 × 3 array changes, and the three-dimensional pattern may be determined by comprehensively considering the movement of the size point marker 121 and the change in the size or shape of the size auxiliary markers 122.
  • As described above, the two-dimensional horizontal displacement of the hand can be measured from the position change of the size point marker 121, and the vertical displacement of the hand can be measured from the brightness change of the size point marker 121, so that a three-dimensional pattern can be determined for a user input made by hand.
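  • The selection of auxiliary markers around the point marker can be pictured with the following sketch, which assumes, purely for illustration, that the markers are indexed on a row/column grid and that the predetermined range is an N × M window, e.g. 3 × 3, centred on the point marker.

```python
def auxiliary_markers(marker_grid, point_rc, n=3, m=3):
    """Return the markers inside an n x m window centred on the point marker.

    marker_grid maps (row, col) grid indices to per-marker observations
    (e.g. centroid, area, brightness); point_rc is the (row, col) index of the
    size or brightness point marker. The grid indexing is an assumption made
    for this sketch.
    """
    r0, c0 = point_rc
    window = {}
    for dr in range(-(n // 2), n // 2 + 1):
        for dc in range(-(m // 2), m // 2 + 1):
            key = (r0 + dr, c0 + dc)
            if key != point_rc and key in marker_grid:
                window[key] = marker_grid[key]
    return window
```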
  • Referring to FIG. 3, when a user input is first applied to the outer surface of the sheet 110 by hand, the brightness point marker 121 may be recognized. Referring to FIGS. 4 to 6, when a moving operation is performed after a pressing operation on the outer surface of the sheet 110 by hand, the position of the brightness point marker 121 recognized by the imaging unit 200 as having the highest brightness may change due to the pressing and moving.
  • The movement of the brightness point marker 121 may be recognized as a change in which of the plurality of markers 120 is recognized by the imaging unit 200 as having the highest brightness. That is, the markers 120 themselves do not move; rather, the position change of the brightness point marker 121 is recognized, and the movement of the brightness point marker 121 is thereby determined.
  • The analysis unit 400 may determine the three-dimensional pattern from the position change or brightness change of the brightness point marker 121 together with the change in the size or shape of the brightness auxiliary markers 122 positioned within a predetermined range around the brightness point marker 121.
  • The predetermined range may be the range of a matrix forming an N × M array (N and M being arbitrary integers) around the brightness point marker 121, or the range of a circle including L markers (L being an arbitrary integer) around the brightness point marker 121, but the present invention is not limited thereto.
  • The predetermined range may also be the entire inner surface of the sheet 110. (The predetermined range is indicated by the dotted line in FIGS. 5 and 6.)
  • When the position of the brightness point marker 121 changes, the size or shape of the brightness auxiliary markers 122 positioned within the predetermined range also changes, and data about the brightness auxiliary markers 122 may be stored in advance in the analysis unit 400 or may be learned by deep learning and stored in the analysis unit 400.
  • Accordingly, the analysis unit 400 may determine the three-dimensional pattern by analyzing not only images of the position change or brightness change of the brightness point marker 121 but also images of the change in size or shape of each of the individual brightness auxiliary markers 122.
  • For example, when the brightness point marker 121 undergoes a horizontal displacement, i.e., a position change, the size or shape of the brightness auxiliary markers 122 within a predetermined range such as a 3 × 3 array changes, and the three-dimensional pattern may be determined by comprehensively considering the movement of the brightness point marker 121 and the change in the size or shape of the brightness auxiliary markers 122.
  • As described above, the two-dimensional horizontal displacement of the hand can be measured from the position change of the brightness point marker 121, and the vertical displacement of the hand can be measured from the brightness change of the brightness point marker 121, so that a three-dimensional pattern can be determined for a user input made by hand.
  • Referring to FIG. 3, a marker 120 located at the portion of the inner surface of the sheet 110 where the pressing operation acts moves closer to the imaging unit 200, and its brightness may therefore increase.
  • the marker 120 having increased brightness may be recognized as the brightness point marker 121, and the first point P1 may be set as a start coordinate.
  • When a pressing operation with a stronger force is applied, the brightness of the brightness point marker 121 may be recognized as further increased, and the vertical displacement caused by the user input may be measured from this brightness change of the brightness point marker 121. That is, it can be measured by recognizing that the three-dimensional coordinates have changed from the first point P1 to the second point P2.
  • A vertical displacement may also be formed in the direction opposite to that shown in FIG. 4.
  • When a movement operation is performed in one direction, the marker 120 having the greatest brightness changes, and this change may be recognized as the brightness point marker 121 moving from the second point P2 to the third point P3.
  • Likewise, when a movement operation is performed in another direction, the marker 120 having the greatest brightness changes again, and the brightness point marker 121 may be recognized as moving from the third point P3 to the fourth point P4.
  • A three-dimensional pattern as shown in FIG. 7 may be formed by the combination of the horizontal displacements and vertical displacements described above.
  • the three-dimensional touch recognition apparatus using the deep learning of the present invention can detect not only a three-dimensional pattern but also a force in a vertical direction or a horizontal direction.
  • the vertical force may be measured by multiplying the vertical displacement by the elastic modulus of the sheet 110.
  • the sheet 110 may be manufactured so that the elastic modulus of the sheet 110 is the same at each point of the sheet 110.
  • The horizontal force can be measured as a force corresponding to the frictional force calculated as the product of the vertical force N applied to the sheet 110 by the pressing operation of the user's hand and the friction coefficient (μ) of the sheet 110.
  • the friction coefficient of the sheet 110 may be determined by referring to the friction coefficient reference data table stored in the analysis unit 400.
  • The friction coefficient of the sheet 110 used here may differ from the surface friction coefficient that is a property of the surface of the sheet 110.
  • The friction coefficient of the sheet 110 used in the horizontal force detection method of the present invention is determined in consideration of the state in which the shape of the sheet 110 is bent by the pressing operation, and may be determined from data obtained through mechanical experiments.
  • The force in the horizontal direction may be calculated by multiplying the friction coefficient of the sheet 110 determined through the above process by the vertical force N.
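  • The force relations described above (a vertical force obtained from the vertical displacement and an elastic constant of the sheet, and a horizontal force obtained as that vertical force times a friction coefficient looked up from a reference table) can be sketched as follows; all names and numeric values are hypothetical, and treating the elastic modulus as a linear spring constant is an assumption of this sketch.

```python
def estimate_forces(vertical_displacement_mm, elastic_constant_n_per_mm,
                    friction_table, press_state):
    """Estimate vertical and horizontal forces from the measured vertical displacement.

    elastic_constant_n_per_mm stands in for the elastic modulus of the sheet
    (treated as a linear spring constant, an assumption of this sketch), and
    friction_table stands in for the friction-coefficient reference data table
    held by the analysis unit, keyed here by a press-state label for illustration.
    """
    normal_force = vertical_displacement_mm * elastic_constant_n_per_mm  # N = k * dz
    mu = friction_table[press_state]        # coefficient for the bent sheet shape
    horizontal_force = mu * normal_force    # F_h = mu * N
    return normal_force, horizontal_force

# Example with made-up numbers: a 4 mm press on a sheet behaving like k = 0.8 N/mm.
vertical_n, horizontal_n = estimate_forces(4.0, 0.8, {"deep": 0.6, "shallow": 0.4}, "deep")
```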
  • The deep learning algorithm may be any one of a deep neural network, a convolutional neural network, and a recurrent neural network.
  • the deep learning algorithm used in the 3D touch recognition device using the deep learning of the present invention may be a known technique.
  • the neural network described above is used as the deep learning algorithm, but the present invention is not limited thereto.
  • each three-dimensional pattern may be represented by a change in three-dimensional coordinates, and as shown in FIG. 7, each three-dimensional pattern may be represented and stored as a three-dimensional image, respectively.
  • The analysis unit 400 performs learning using a deep learning algorithm for each three-dimensional pattern, and can analyze and determine a three-dimensional pattern produced by a user input based on the learned data to derive a deep learning result value.
  • Even when users intend to input the same three-dimensional pattern, the three-dimensional coordinates of the input may differ from one attempt to another; nevertheless, the analysis unit 400 can recognize the user input as the intended three-dimensional pattern.
  • Accordingly, even if there is a difference in the three-dimensional coordinates of the pattern input by the user, the analysis unit 400 may determine that the pattern generated by the user input is the same as a previously stored three-dimensional pattern and output the corresponding deep learning result value.
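  • The present disclosure leaves the network architecture open (deep, convolutional, or recurrent neural network). As one hedged illustration, a small recurrent network in PyTorch could classify a fixed-length sequence of (x, y, z) coordinates into one of several stored three-dimensional patterns; the layer sizes and number of pattern classes below are hypothetical.

```python
import torch
import torch.nn as nn

class PatternClassifier(nn.Module):
    """Toy recurrent classifier: a sequence of (x, y, z) samples -> pattern class."""

    def __init__(self, num_patterns: int = 8, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_patterns)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (batch, time, 3) sequence of three-dimensional coordinates
        _, h = self.rnn(coords)
        return self.head(h[-1])  # logits over the stored patterns

# Training would follow the usual supervised loop (cross-entropy loss with
# backpropagation / gradient descent); the input below is made-up data.
model = PatternClassifier()
logits = model(torch.randn(1, 120, 3))   # one gesture of 120 samples
predicted_pattern = logits.argmax(dim=1)
```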
  • the 3D touch recognition apparatus using the deep learning of the present invention may further include a start notification unit that performs visual notification when a start input, which is the first user input among the user inputs, acts on the outer surface of the sheet 110.
  • Since formation of the three-dimensional pattern starts with the first user input, the user may need a way to check whether the first user input has been recognized.
  • The start notification unit may emit light so that the user can confirm that formation of the three-dimensional pattern has started.
  • However, the start notification unit is not limited thereto, and it may instead use sound or vibration to allow the user to confirm the start of formation of the three-dimensional pattern.
  • The three-dimensional touch recognition device using deep learning of the present invention may further include a control unit 500 that receives the deep learning result value from the analysis unit 400 and transmits a control signal corresponding to the deep learning result value to external equipment.
  • Each three-dimensional pattern can be matched to a specific command. Specifically, if the three-dimensional pattern shown in FIG. 7 is matched to starting the operation of the external equipment, then when that pattern is input, the analysis unit 400 analyzes and determines it and outputs the corresponding deep learning result value to the control unit 500, and the control unit 500 may transmit the control signal matching the deep learning result value to the external equipment to start its operation.
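  • The matching of deep learning result values to control signals can be pictured with the following sketch; the command names and the send_to_equipment callable are hypothetical stand-ins, since the present disclosure does not specify the command set or the transport to the external equipment.

```python
# Hypothetical mapping from deep learning result values to control signals.
COMMANDS = {
    0: "START_OPERATION",   # e.g. the pattern of FIG. 7
    1: "STOP_OPERATION",
    2: "MOVE_LEFT",
}

def dispatch(result_value: int, send_to_equipment) -> None:
    """Control-unit behaviour: translate a result value into a control signal."""
    signal = COMMANDS.get(result_value)
    if signal is not None:
        send_to_equipment(signal)
```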
  • a computer having a three-dimensional touch recognition device using the deep learning of the present invention can be manufactured.
  • The deep learning result value for the three-dimensional pattern processed by the analysis unit 400 may be transmitted to the computer through the control unit 500, and a command corresponding to the three-dimensional pattern may be transmitted to the computer so that the computer executes the command.
  • the robot having the 3D touch recognition device using the deep learning of the present invention can be manufactured.
  • Since the end of the robot can be moved to perform the work corresponding to the three-dimensional pattern, the three-dimensional touch recognition device using deep learning of the present invention can function as a steering device for controlling the robot.
  • user input may act on the sheet 110.
  • Next, the imaging unit 200 may photograph the inner surface of the sheet 110 per unit time and transfer the captured marker images to the analysis unit 400.
  • the unit time may be in milliseconds (ms), and the photographing may be performed in units smaller than milliseconds (ms) in order to improve accuracy.
  • The analysis unit 400 may then analyze the marker images and generate a three-dimensional pattern from the position changes or brightness changes of the markers 120.
  • Here, the analysis unit 400 may determine the three-dimensional pattern from a position change of the size point marker 121, which is the marker recognized by the imaging unit 200 as having the largest size among the plurality of markers 120.
  • Alternatively, the analysis unit 400 may determine the three-dimensional pattern from a position change of the brightness point marker 121, which is the marker recognized by the imaging unit 200 as having the highest brightness among the plurality of markers 120.
  • The horizontal component of the three-dimensional displacement of a marker 120 can be measured from the position change of the marker 120.
  • The vertical component of the three-dimensional displacement of a marker 120 can be measured from the brightness change of the marker 120.
  • the deep learning result may be output by inputting data of the 3D pattern to the deep learning algorithm.
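  • Putting steps (i) to (iv) together, a per-frame processing loop could look like the sketch below; camera.read(), analyzer.update(), analyzer.result(), and controller.send() are hypothetical interfaces standing in for the imaging unit, analysis unit, and control unit, and the millisecond period follows the unit time mentioned above.

```python
import time

def recognition_loop(camera, analyzer, controller, period_s=0.001):
    """Illustrative main loop for steps (i) to (iv): capture, analyse, classify, dispatch."""
    while True:
        frame = camera.read()          # (ii) photograph the inner surface per unit time
        analyzer.update(frame)         # (iii) update the three-dimensional pattern
        result = analyzer.result()     # (iv) deep learning result value, if one is ready
        if result is not None:
            controller.send(result)    # forward the matching control signal
        time.sleep(period_s)
```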

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Position Input By Displaying (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to one embodiment, the present invention relates to: a three-dimensional touch recognition device that allows a user input to be performed on a soft, elastic material and allows various user inputs to be recognized, determined, and processed by using deep learning; and a three-dimensional touch recognition method using the same. The three-dimensional touch recognition device using deep learning according to an embodiment of the present invention comprises: an input unit provided with a sheet to whose outer surface a user input is applied and which has a function of being restored to its initial shape when the user input is released; a plurality of markers arranged along the inner surface of the sheet; an imaging unit for collecting marker images, which are images of the markers changing according to the user input; an illumination unit for irradiating light toward the inner surface of the sheet; and an analysis unit for generating a three-dimensional pattern by analyzing the marker images and outputting a deep learning result value through an iterative operation of inputting data relating to the three-dimensional pattern into a deep learning algorithm.
PCT/KR2017/011272 2017-04-20 2017-10-12 Dispositif de reconnaissance tactile tridimensionnel utilisant un apprentissage profond et procédé de reconnaissance tactile tridimensionnel utilisant ledit dispositif WO2018194227A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0051174 2017-04-20
KR1020170051174A KR101921140B1 (ko) 2017-04-20 2017-04-20 딥러닝을 이용한 3차원 터치 인식 장치 및 이를 이용한 3차원 터치 인식 방법

Publications (1)

Publication Number Publication Date
WO2018194227A1 true WO2018194227A1 (fr) 2018-10-25

Family

ID=63855937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/011272 WO2018194227A1 (fr) 2017-04-20 2017-10-12 Dispositif de reconnaissance tactile tridimensionnel utilisant un apprentissage profond et procédé de reconnaissance tactile tridimensionnel utilisant ledit dispositif

Country Status (2)

Country Link
KR (1) KR101921140B1 (fr)
WO (1) WO2018194227A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102163143B1 (ko) * 2019-01-07 2020-10-08 한림대학교 산학협력단 딥러닝 기반의 터치센서 측정 오류 보정 장치 및 방법
KR102268003B1 (ko) 2019-12-11 2021-06-21 한림대학교 산학협력단 딥러닝을 이용한 이종 다변량 멀티 모달 데이터 기반 표면 인식 방법


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080044690A (ko) * 2006-11-17 2008-05-21 실리콤텍(주) 촬상 센서를 이용한 입력장치 및 그 방법
JP2009087264A (ja) * 2007-10-02 2009-04-23 Alps Electric Co Ltd 中空型スイッチ装置及びこれを備えた電子機器
KR20110084028A (ko) * 2010-01-15 2011-07-21 삼성전자주식회사 이미지 데이터를 이용한 거리 측정 장치 및 방법
KR20120060548A (ko) * 2010-12-02 2012-06-12 전자부품연구원 마커 기반 3차원 입체 시스템
KR101396203B1 (ko) * 2013-03-13 2014-05-19 한국생산기술연구원 에어 쿠션 동작 감지 장치 및 방법

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796708A (zh) * 2020-06-02 2020-10-20 南京信息工程大学 一种在触摸屏上再现图像三维形状特征的方法
CN111796708B (zh) * 2020-06-02 2023-05-26 南京信息工程大学 一种在触摸屏上再现图像三维形状特征的方法

Also Published As

Publication number Publication date
KR101921140B1 (ko) 2018-11-22
KR20180117952A (ko) 2018-10-30

Similar Documents

Publication Publication Date Title
WO2018194227A1 (fr) Dispositif de reconnaissance tactile tridimensionnel utilisant un apprentissage profond et procédé de reconnaissance tactile tridimensionnel utilisant ledit dispositif
WO2013183938A1 (fr) Procédé et appareil d'interface utilisateur basés sur une reconnaissance d'emplacement spatial
WO2018128355A1 (fr) Robot et dispositif électronique servant à effectuer un étalonnage œil-main
WO2016200185A1 (fr) Système de balayage tridimensionnel et mécanisme cible pour l'alignement de laser à lignes correspondant
WO2014109498A1 (fr) Afficheur facial réalisant un étalonnage du regard, et procédé de commande associé
WO2014142596A1 (fr) Dispositif de détection de fonctionnement d'un coussin de sécurité gonflable et procédé associé
WO2016028097A1 (fr) Dispositif pouvant être porté
EP3359043A1 (fr) Appareil d'imagerie à rayons x, procédé de commande associé et détecteur de rayons x
WO2016182181A1 (fr) Dispositif portable et procédé permettant de fournir une rétroaction d'un dispositif portable
WO2017034283A1 (fr) Appareil destiné à générer fournir une sensation tactile
WO2020218644A1 (fr) Procédé et robot permettant de redéfinir l'emplacement d'un robot à l'aide de l'intelligence artificielle
WO2016171335A1 (fr) Dispositif de transmission tactile et système d'interface d'utilisateur doté de celui-ci
WO2018203590A1 (fr) Algorithme de mesure de position et de profondeur de contact pour une reconnaissance tactile tridimensionnelle
WO2020242087A1 (fr) Dispositif électronique et procédé de correction de données biométriques sur la base de la distance entre le dispositif électronique et l'utilisateur, mesurée à l'aide d'au moins un capteur
WO2017082496A1 (fr) Procédé d'alignement de tranche et équipement d'alignement utilisant celui-ci
WO2022050668A1 (fr) Procédé de détection du mouvement de la main d'un dispositif de réalité augmentée vestimentaire à l'aide d'une image de profondeur, et dispositif de réalité augmentée vestimentaire capable de détecter un mouvement de la main à l'aide d'une image de profondeur
WO2018080112A1 (fr) Dispositif d'entrée et dispositif d'affichage le comprenant
WO2022045497A1 (fr) Dispositif d'authentification d'utilisateur et son procédé de commande
WO2019135462A1 (fr) Procédé et appareil d'étalonnage entre un dispositif de reconnaissance de mouvement et un visiocasque utilisant un algorithme de réglage de faisceau
WO2021045481A1 (fr) Système et procédé de reconnaissance d'objets
WO2019074201A1 (fr) Dispositif d'imagerie à rayons x, détecteur de rayons x et système d'imagerie à rayons x
WO2023163305A1 (fr) Procédé de détection d'un type de démarche basé sur un apprentissage profond et programme informatique le mettant en oeuvre
WO2022080549A1 (fr) Dispositif de suivi de déplacement de structure de capteur lidar double
WO2023003157A1 (fr) Dispositif électronique et procédé d'acquisition d'informations d'empreinte digitale d'un dispositif électronique
WO2021020883A1 (fr) Dispositif et procédé de balayage tridimensionnel

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17906574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17906574

Country of ref document: EP

Kind code of ref document: A1