US20230288723A1 - Input device - Google Patents
- Publication number
- US20230288723A1 (U.S. application Ser. No. 18/016,785)
- Authority
- US
- United States
- Prior art keywords
- aerial
- image
- display region
- image display
- displayed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G02B30/56 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, the image being built up from image elements distributed over a 3D volume, e.g. voxels, by projecting aerial or floating images
- G06F21/602 — Providing cryptographic facilities or services
- G06F21/6245 — Protecting personal data, e.g. for financial or medical purposes
- G06F21/83 — Protecting input devices, e.g. keyboards, mice or controllers thereof
- G06F3/041 — Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0418 — Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
- G06F3/042 — Digitisers characterised by opto-electronic transducing means
- G06F3/0488 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886 — Interaction techniques using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
- G06V40/172 — Human faces: classification, e.g. identification
- G06F2203/04803 — Split screen, i.e. subdividing the display area or the window area into separate subareas
- G06F3/0428 — Digitisers using opto-electronic means, by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface, which may be virtual
Definitions
- the present invention relates to an input device for users to input information such as PINs using their fingertips.
- the aerial-image display device includes an aerial-image forming mechanism and a display portion.
- the PIN display and input portion includes a PIN display portion and a PIN input portion.
- a keypad for inputting the PIN is displayed on the display portion.
- the aerial-image forming mechanism projects the keypad displayed on the display portion into a space so as to form an aerial image and to display it on the PIN display portion.
- the PIN input portion includes a detection mechanism which detects an operation performed by the user on the aerial image of the keypad displayed in the PIN display portion.
- the detection mechanism is, for example, an infrared sensor, a camera or the like that detects the position of the user's fingertip in a plane containing the aerial image of the keypad displayed on the PIN display portion.
- the PIN is input on the basis of a detection result of the detection mechanism by the user moving his/her fingertip sequentially to a predetermined position of the aerial image of the keypad displayed on the PIN display portion.
- Patent Literature 1 Japanese Unexamined Patent Application Publication 2020-134843
- the inventor of this application has developed an input device for users to input information such as PINs using an aerial image displayed in a space, as in the automatic transaction device described in the Patent Literature 1.
- the inventor of this application is also examining addition of an information reading function to this input device in order to optically read predetermined information such as bar codes and two-dimensional codes.
- This input device is preferably capable of optically reading predetermined information easily with a simple configuration.
- an object of the present invention is to provide an input device that can optically read predetermined information easily with a simple configuration in an input device for a user to input PINs and the like by using an aerial image displayed in an aerial-image display region.
- an input device of at least an embodiment of the present invention is an input device inputting information using a fingertip of a user, including a display mechanism having a display surface for displaying an image, an aerial-image forming mechanism which forms an aerial image by projecting an image displayed on the display surface into a space, a detection mechanism which detects a fingertip of the user in an aerial-image display region, which is a region in which the aerial image is displayed, and a camera module disposed at a position where the aerial-image display region can be photographed, in which the aerial-image display region serves as an input portion for the user to input information, and the aerial-image forming mechanism includes a beam splitter that reflects a part of light emitted from the display surface and a retroreflective material to which the light reflected by the beam splitter is incident and which reflects the incident light in the same direction as the incident direction toward the beam splitter, in which an aerial image is formed in the aerial-image display region by the light transmitted through the beam splitter after being reflected by the retroreflective material, and the camera module is disposed on a side where the beam splitter is disposed with respect to the aerial-image display region.
- the camera module is disposed on the side where the beam splitter is disposed with respect to the aerial-image display region and is also disposed at a position where the aerial-image display region can be photographed. Therefore, in this aspect, when the user holds a medium or a device on which predetermined information is recorded or displayed to the aerial image displayed in the aerial-image display region, the predetermined information can be photographed by the camera module. In other words, in this aspect, when the user holds the medium or the device on which the predetermined information is recorded or displayed to the aerial image displayed in the aerial-image display region, the predetermined information can be optically read by the camera module. Therefore, in this aspect, the predetermined information can be optically read easily.
- the camera module can optically read the predetermined information even if the input device does not include lighting that illuminates the medium or the device in which the predetermined information is recorded or displayed. Therefore, in this aspect, the predetermined information can be optically read with a simple configuration.
- the space formed between the beam splitter and the aerial-image display region in order to form an aerial image in the aerial-image display region can be effectively used as a space for optically reading the predetermined information by the camera module.
- the display mechanism displays, on the display surface, a frame indicating a region to which a photographed portion, in which the information photographed by the camera module is recorded or displayed, is to be held
- the aerial-image forming mechanism displays the frame displayed on the display surface as an aerial image in the aerial-image display region.
- the user can recognize to which part of the aerial-image display region the photographed portion is to be held.
- the display mechanism displays, on the display surface, a guidance prompting the user to hold the photographed portion, in which information to be photographed by the camera module is recorded or displayed, to the aerial-image display region
- the aerial-image forming mechanism displays the guidance displayed on the display surface as an aerial image on the aerial-image display region.
- the user can hold the photographed portion in accordance with the guidance displayed in the aerial-image display region.
- in an input device for a user to input a PIN or the like using an aerial image displayed in the aerial-image display region, predetermined information can be optically read easily with a simple configuration.
- FIG. 1 is a schematic diagram for explaining a configuration of an input device according to an embodiment of the present invention.
- FIG. 2 is a schematic diagram for explaining a configuration of the input device shown in FIG. 1 .
- FIGS. 3A and 3B are diagrams illustrating an example of an aerial image displayed in the aerial-image display region shown in FIG. 1.
- FIG. 1 is a schematic diagram for explaining a configuration of an input device 1 according to an embodiment of the present invention.
- FIG. 2 is a schematic diagram for explaining a configuration of the input device 1 shown in FIG. 1 .
- FIGS. 3A and 3B are diagrams illustrating an example of an aerial image displayed in the aerial-image display region R shown in FIG. 1.
- the input device 1 in this embodiment is a device for inputting information using a user's fingertips and is used by being mounted on a higher-level device such as an ATM, an authentication device for credit-card and other payments, an automatic ticketing machine, a vending machine, or an access control device, for example.
- a PIN is input.
- the input device 1 of this embodiment can optically read predetermined information.
- the input device 1 has an aerial-image display device 2 which displays an aerial image in a three-dimensional space, a detection mechanism 3 for detecting a position of the user's fingertip in the aerial-image display region R, which is a region in which the aerial image is displayed, a camera module 4 for optically reading predetermined information, and an enclosure 5 in which these configurations are accommodated.
- the aerial-image display device 2 has a display mechanism 6 having a display surface 6 a for displaying images, and an aerial-image forming mechanism 7 that projects the image displayed on the display surface 6 a into a space so as to form an image as an aerial image.
- the aerial-image forming mechanism 7 has a beam splitter 8 and a retroreflective material 9 .
- a Y-direction in FIG. 1, which is orthogonal to an up-down direction (vertical direction), is referred to as a left-right direction, and a direction orthogonal to the up-down direction and the left-right direction is referred to as a front-back direction.
- a user standing on a front side of the input device 1 inputs predetermined information on the front side of the input device 1 .
- the display mechanism 6 is, for example, a liquid crystal display or an organic EL display, and the display surface 6 a is a display screen.
- the display surface 6 a faces diagonally rearward and upward.
- the beam splitter 8 is formed having a flat plate shape.
- the beam splitter 8 is disposed on a rear side of the display mechanism 6 .
- the beam splitter 8 reflects a part of light emitted from the display surface 6 a. That is, a surface on one side of the beam splitter 8 is a reflective surface 8 a that reflects a part of the light emitted from the display surface 6 a.
- the reflective surface 8 a faces diagonally forward and downward.
- the retroreflective material 9 is formed having a flat plate shape.
- the retroreflective material 9 is disposed on a rear side of the display mechanism 6 and is disposed on a lower side of the beam splitter 8 .
- the light reflected by the beam splitter 8 is incident on the retroreflective material 9 .
- the retroreflective material 9 reflects the incident light in the same direction as an incident direction toward the beam splitter 8 .
- a surface on one side of the retroreflective material 9 is a retroreflective surface 9 a, to which the light reflected by the beam splitter 8 is incident, and also reflects the incident light in the same direction as the incident direction toward the beam splitter 8 .
- a quarter-wavelength plate is attached to the retroreflective surface 9 a.
- the retroreflective surface 9 a faces diagonally forward and upward.
- a part of the light emitted from the display surface 6 a of the display mechanism 6 is reflected by the reflective surface 8 a of the beam splitter 8 and enters the retroreflective surface 9 a of the retroreflective material 9 .
- the light reflected by the reflective surface 8 a is directed diagonally rearward and downward.
- the light incident to the retroreflective surface 9 a is reflected in the same direction as the incident direction of the light to the retroreflective surface 9 a.
- the light reflected by the retroreflective surface 9 a goes diagonally forward and upward and passes through the beam splitter 8 .
- the light transmitted through the beam splitter 8 forms an aerial image in the aerial-image display region R.
- the aerial-image display region R is formed on an upper side of the beam splitter 8 .
- the aerial image formed in the aerial-image display region R is recognized by a user standing in front of the input device 1 as an image slightly inclined downward as it moves toward the front side.
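The optical path described above amounts to mirror imaging: because the retroreflective material 9 returns each ray along its incident direction, the aerial image forms at the position plane-symmetric to the display surface with respect to the beam splitter. As a minimal numerical sketch of that symmetry (illustrative only; the coordinates and the beam-splitter plane orientation below are assumptions, not taken from the patent):

```python
# Sketch: in a beam-splitter/retroreflector system, the aerial image of a display
# point forms at the point's mirror image across the beam-splitter plane.

def reflect_across_plane(point, plane_point, normal):
    """Mirror a 3D point across the plane through plane_point with unit normal."""
    # Signed distance from the point to the plane along the normal.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, normal))
    # Move the point twice that distance back through the plane.
    return tuple(p - 2 * d * n for p, n in zip(point, normal))

# Assume the beam splitter lies in the z = 0 plane and a display point sits
# 50 mm below it; the corresponding aerial-image point is 50 mm above it.
display_point = (10.0, 20.0, -50.0)
aerial_point = reflect_across_plane(display_point, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(aerial_point)  # (10.0, 20.0, 50.0)
```

This is why the aerial-image display region R appears on the opposite side of the beam splitter 8 from the display surface 6 a, at the same distance.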
- the enclosure 5 has a frame body 11 that surrounds the aerial-image display region R.
- An inner peripheral side of the frame body 11 is an opening portion 11 a that leads to an inside of the enclosure 5 .
- the opening portion 11 a is formed having a rectangular or square shape.
- the aerial-image display region R is formed in the opening portion 11 a.
- the aerial-image display region R serves as an input portion 12 for the user to input information using the fingertips.
- the user standing in front of the input device 1 inputs the PIN and the like in the input portion 12 from the upper side of the input portion 12 .
- the detection mechanism 3 detects the position of the user's fingertip in the aerial-image display region R, as described above. In other words, the input portion 12 is included in a detection range of the detection mechanism 3 .
- the detection mechanism 3 is an optical sensor. Specifically, the detection mechanism 3 is an infrared sensor. In addition, the detection mechanism 3 is a line sensor.
- the detection mechanism 3 is a reflection-type optical sensor including a light emitting portion which emits infrared light and a light receiving portion to which the infrared light emitted from the light emitting portion and reflected by the user's fingertip is incident.
- the detection mechanism 3 is disposed on the side of the opening portion 11 a.
- the detection mechanism 3 detects the position of the user's fingertip in a plane containing the aerial-image display region R (that is, in the plane containing the input portion 12 ).
- the detection mechanism 3 may be a transmission-type optical sensor.
- the camera module 4 is fixed inside the enclosure 5 .
- the camera module 4 is disposed at a position where the aerial-image display region R can be photographed.
- the camera module 4 is disposed below the aerial-image display region R. That is, the camera module 4 is disposed at a position where the aerial-image display region R can be photographed from below.
- the aerial-image display region R is formed on the upper side of the beam splitter 8
- the beam splitter 8 is disposed on a lower side of the aerial-image display region R. That is, the camera module 4 is disposed on a side where the beam splitter 8 is disposed with respect to the aerial-image display region R.
- the camera module 4 is disposed on a rear side of the beam splitter 8 and further to the rear than the aerial-image display region R.
- the camera module 4 is disposed at a position where the light transmitted through the beam splitter 8 after being reflected by the retroreflective material 9 is not shielded and is disposed at a position where the aerial-image display region R can be photographed from a diagonally rear and lower side.
- an optical axis of the camera module 4 is inclined upward as it moves toward the front side.
- predetermined information can be photographed by the camera module 4 by holding a photographing target 15 (see FIG. 1 ) such as a medium, a device or the like in which the predetermined information is recorded or displayed to the aerial image displayed in the aerial-image display region R from above.
- the predetermined information can be optically read by the camera module 4 by holding the photographing target 15 in which the predetermined information is recorded or displayed to the aerial image displayed in the aerial-image display region R from above.
- a mobile device such as a smartphone on which a two-dimensional code is displayed on the screen is the photographing target 15 , and by holding this mobile device to the aerial image displayed in the aerial-image display region R from above, the two-dimensional code displayed on the screen of the mobile device can be optically read by the camera module 4 .
- a medium such as a card with a barcode recorded is the photographing target 15 , and by holding this medium to the aerial image displayed in the aerial-image display region R from above, the barcode recorded in the medium can be optically read by the camera module 4 .
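The patent does not name a particular symbology, but a reader of this kind typically validates what the camera module decodes. As a hedged illustration only (EAN-13 is chosen here merely as a common example of a bar code recorded on a card, not something the patent specifies):

```python
def ean13_is_valid(code: str) -> bool:
    """Verify the check digit of a 13-digit EAN bar code string.

    Digits at even 0-based positions are weighted 1 and at odd positions
    weighted 3; the weighted sum of all 13 digits must be a multiple of 10.
    """
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code))
    return total % 10 == 0

print(ean13_is_valid("4006381333931"))  # True  (valid check digit)
print(ean13_is_valid("4006381333932"))  # False (check digit off by one)
```

A decoded string that fails this kind of check would be rejected and the user prompted to hold the medium to the aerial image again.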
- the input device 1 is a code reader that reads two-dimensional codes and bar codes.
- a PIN can be input as described above.
- the display mechanism 6 displays the keypad for inputting the PIN on the display surface 6 a
- the aerial-image forming mechanism 7 displays the keypad displayed on the display surface 6 a as an aerial image in the aerial-image display region R (see FIG. 3 A ).
- the user inputs the PIN by using the keypad displayed in the aerial-image display region R. Specifically, the user inputs the PIN by sequentially moving the fingertip to a position of a predetermined key (number) in the keypad displayed in the aerial-image display region R. In other words, the user inputs the PIN by sequentially moving the fingertip on the input portion 12 .
- the PIN input in the input portion 12 is recognized on the basis of a detection result of the detection mechanism 3 (that is, the detection result of the user's fingertip position).
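Recognizing the PIN from the detection result can be pictured as a hit test: each fingertip coordinate detected in the plane of the input portion 12 is mapped to the key region it falls in. The grid layout, key sizes, and coordinates below are hypothetical illustrations, not taken from the patent:

```python
# Sketch: recognizing PIN digits from detected fingertip positions.
# A 4-row keypad is assumed to span a 30x40 unit area of the input portion;
# all geometry here is an assumption for illustration.

KEYS = [["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"],
        ["*", "0", "#"]]
KEY_W, KEY_H = 10.0, 10.0  # assumed size of one key region

def key_at(x: float, y: float):
    """Return the key under fingertip position (x, y), or None if outside."""
    col, row = int(x // KEY_W), int(y // KEY_H)
    if 0 <= row < len(KEYS) and 0 <= col < len(KEYS[0]):
        return KEYS[row][col]
    return None

def recognize_pin(positions):
    """Map a sequence of detected fingertip positions to the entered PIN."""
    return "".join(k for k in (key_at(x, y) for x, y in positions) if k is not None)

print(recognize_pin([(5, 5), (25, 15), (15, 35), (5, 25)]))  # prints "1607"
```

Positions that fall outside every key region (or outside the input portion) are simply ignored, which mirrors the fact that only fingertip positions within the detection range contribute to the input.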
- information such as two-dimensional codes can be optically read by the camera module 4 , as described above.
- when information is to be optically read by the camera module 4 , the display mechanism 6 displays, on the display surface 6 a, a frame F indicating a region to which a photographed portion 15 a, in which the information to be photographed by the camera module 4 is recorded or displayed (for example, a part of the screen of a mobile device on which a two-dimensional code is displayed, or a part of a medium on which a bar code is recorded), is to be held, and a guidance G prompting the user to hold the photographed portion 15 a to the aerial-image display region R; the aerial-image forming mechanism 7 displays the frame F and the guidance G displayed on the display surface 6 a as an aerial image in the aerial-image display region R (see FIG. 3B).
- the frame F is, for example, a rectangular frame.
- the guidance G is, for example, a guidance message, operation instructions or an arrow pointing to the frame F.
- the information recorded or displayed in the photographed portion 15 a is read by the camera module 4 .
- the parts of the aerial image displayed in the aerial-image display region R that do not interfere with the reading of the information by the camera module 4 shine brightly.
- the aerial image displayed in the aerial-image display region R serves as illumination that illuminates the photographed portion 15 a with light.
- the camera module 4 is disposed on the side where the beam splitter 8 is disposed with respect to the aerial-image display region R and is also disposed at a position where it can photograph the aerial-image display region R, and by holding the photographing target 15 on which the predetermined information is recorded or displayed to the aerial image displayed in the aerial-image display region R from the opposite side of the beam splitter 8 , the predetermined information can be optically read by the camera module 4 . Therefore, in this embodiment, the predetermined information can be optically read easily.
- the aerial image displayed in the aerial-image display region R serves as illumination that illuminates the photographed portion 15 a with light.
- the predetermined information can be read optically by the camera module 4 even if the input device 1 does not include the illumination that illuminates the photographed portion 15 a with light. Therefore, in this embodiment, the predetermined information can be optically read with a simple configuration.
- the space formed between the beam splitter 8 and the aerial-image display region R in order to form an aerial image in the aerial-image display region R can be effectively used as a space for optically reading the information by the camera module 4 .
- the frame F for indicating a region to which the photographed portion 15 a is to be held is displayed as an aerial image in the aerial-image display region R.
- the user can recognize to which part of the aerial-image display region R the photographed portion 15 a is to be held.
- the guidance G for prompting the user to hold the photographed portion 15 a to the aerial-image display region R is displayed as an aerial image in the aerial-image display region R, when the information is to be optically read by the camera module 4 , the user can hold the photographed portion 15 a in accordance with the guidance G displayed in the aerial-image display region R.
- information other than the PIN may be input in the input device 1 .
- the user's signature may be input, for example.
- the frame in which the signature is to be input is displayed as an aerial image in the aerial-image display region R.
- the detection mechanism 3 may be a capacitance sensor or a motion sensor.
- the detection mechanism 3 may be constituted by two cameras.
- a user standing behind the input device 1 may input predetermined information on the rear surface side of the input device 1 .
Abstract
An input device has a beam splitter that reflects a part of light emitted from a display surface of a display mechanism, a retroreflective material to which the light reflected by the beam splitter is incident and which reflects the incident light toward the beam splitter, a detection mechanism for detecting the user's fingertip in an aerial-image display region in which an aerial image is displayed, and a camera module disposed at a position where the aerial-image display region can be photographed. In the input device, an aerial image is formed in the aerial-image display region by the light transmitted through the beam splitter after being reflected by the retroreflective material. The camera module is disposed on a side where the beam splitter is disposed with respect to the aerial-image display region.
Description
- This is the U.S. national stage of application No. PCT/JP2021/027020, filed on Jul. 19, 2021. Priority under 35 U.S.C. § 119(a) is claimed from PCT/JP2021/020744, filed on May 31, 2021, and priority under 35 U.S.C. § 119(e) is claimed to U.S. Provisional Application No. 63/054,799, filed Jul. 22, 2020, the disclosures of which are incorporated herein by reference.
- The present invention relates to an input device for users to input information such as PINs using their fingertips.
- Conventionally, automated transaction devices such as ATMs (Automated Teller Machines) including an aerial-image display device and a PIN (Personal Identification Number) display and input portion are known (see,
Patent Literature 1, for example). In the automatic transaction device described inPatent Literature 1, the aerial-image display device includes an aerial-image forming mechanism and a display portion. The PIN display and input portion includes a PIN display portion and a PIN input portion. On the display portion, a keypad for inputting the PIN is displayed. The aerial-image forming mechanism projects the keypad displayed on the display portion into a space so as to form an aerial image and to display it on the PIN display portion. - In the automatic transaction device in
Patent Literature 1, the PIN input portion includes a detection mechanism which detects an operation performed by the user on the aerial image of the keypad displayed in the PIN display portion. The detection mechanism is, for example, an infrared sensor, a camera or the like that detects the position of the user's fingertip in a plane containing the aerial image of the keypad displayed on the PIN display portion. In the automatic transaction device of Patent Literature 1, the user inputs the PIN by sequentially moving his/her fingertip to predetermined positions of the aerial image of the keypad displayed on the PIN display portion, and the PIN is recognized on the basis of the detection result of the detection mechanism. - [Patent Literature 1] Japanese Unexamined Patent Application Publication No. 2020-134843
- The inventor of this application has developed an input device with which users input information such as PINs using an aerial image displayed in a space, as in the automatic transaction device described in Patent Literature 1. The inventor is also examining the addition of an information reading function to this input device so that predetermined information such as bar codes and two-dimensional codes can be read optically. Such an input device is preferably capable of reading the predetermined information easily and with a simple configuration. - Therefore, an object of the present invention is to provide an input device, for a user to input PINs and the like by using an aerial image displayed in an aerial-image display region, that can optically read predetermined information easily with a simple configuration.
- In order to solve the above problem, an input device of at least an embodiment of the present invention is an input device inputting information using a fingertip of a user, including a display mechanism having a display surface for displaying an image, an aerial-image forming mechanism which forms an aerial image by projecting an image displayed on the display surface into a space, a detection mechanism which detects a fingertip of the user in an aerial-image display region, which is a region in which the aerial image is displayed, and a camera module disposed at a position where the aerial-image display region can be photographed, in which the aerial-image display region serves as an input portion for the user to input information, and the aerial-image forming mechanism includes a beam splitter that reflects a part of light emitted from the display surface and a retroreflective material to which the light reflected by the beam splitter is incident and which reflects the incident light in the same direction as the incident direction toward the beam splitter, in which an aerial image is formed in the aerial-image display region by the light transmitted through the beam splitter after being reflected by the retroreflective material, and the camera module is disposed on a side where the beam splitter is disposed with respect to the aerial-image display region.
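The imaging principle of the aerial-image forming mechanism summarized above (a beam splitter combined with a retroreflective material) can be captured in a short geometric sketch. The coordinates below are an idealization introduced here for illustration; they do not appear in the application itself:

```latex
% Take the beam-splitter plane as z = 0 and a point P on the display side:
%   P = (x_0, y_0, -d).
% A ray from P with direction (u, v, w) is mirror-reflected at the splitter
% to (u, v, -w); the retroreflective material returns it along the exact
% reverse, -(u, v, -w); transmitted back through the splitter, the ray
% retraces toward the mirror point of P:
\[
  P = (x_0,\, y_0,\, -d) \;\longmapsto\; P' = (x_0,\, y_0,\, +d)
\]
% Every ray bundle emitted from a display point therefore reconverges at the
% point plane-symmetric to it with respect to the beam splitter, forming a
% real aerial image at distance d on the viewer's side (the aerial-image
% display region).
```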
- In the input device of this aspect, the camera module is disposed on the side where the beam splitter is disposed with respect to the aerial-image display region and is also disposed at a position where the aerial-image display region can be photographed. Therefore, in this aspect, when the user holds a medium or a device, on which predetermined information is recorded or displayed, to the aerial image displayed in the aerial-image display region, the predetermined information can be photographed by the camera module. In other words, in this aspect, when the user holds the medium or the device, on which the predetermined information is recorded or displayed, to the aerial image displayed in the aerial-image display region, the predetermined information can be optically read by the camera module. Therefore, in this aspect, the predetermined information can be optically read easily.
- In addition, in this aspect, by brightening the aerial image displayed in the aerial-image display region, the camera module can optically read the predetermined information even if the input device does not include lighting that illuminates the medium or the device in which the predetermined information is recorded or displayed. Therefore, in this aspect, the predetermined information can be optically read with a simple configuration. In addition, in this aspect, since the camera module is disposed on the side where the beam splitter is disposed with respect to the aerial-image display region, the space formed between the beam splitter and the aerial-image display region in order to form an aerial image in the aerial-image display region can be effectively used as a space for optically reading the predetermined information by the camera module.
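One way to realize the "aerial image as illumination" idea above is to compose the frame buffer sent to the display so that everything outside the reading region is driven bright while the region in front of the code stays dark. This is a hypothetical sketch; the function name, buffer layout, and rectangle convention are assumptions for illustration, not details from the application:

```python
# Hypothetical sketch: build a grayscale buffer whose projection as an aerial
# image doubles as illumination. Pixels outside frame_rect are driven to
# `bright` (they light the held medium); pixels inside stay `dark` so the
# aerial image does not glare onto the code being photographed.

def illumination_frame(width, height, frame_rect, bright=255, dark=0):
    """Return a row-major grayscale buffer (list of rows).
    frame_rect = (x0, y0, x1, y1), half-open pixel coordinates."""
    x0, y0, x1, y1 = frame_rect
    return [
        [dark if (x0 <= x < x1 and y0 <= y < y1) else bright
         for x in range(width)]
        for y in range(height)
    ]

buf = illumination_frame(8, 6, (2, 1, 6, 5))
```

Here `buf[0][0]` is a border pixel (bright, value 255) and `buf[3][4]` lies inside the frame (dark, value 0).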
- In this aspect, for example, the display mechanism displays, on the display surface, a frame for indicating a region to which a photographed portion, in which the information photographed by the camera module is recorded or displayed, is to be held, and the aerial-image forming mechanism displays the frame displayed on the display surface as an aerial image in the aerial-image display region. In this case, the user can recognize to which part of the aerial-image display region the photographed portion is to be held.
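A reading device built around such a frame would typically check whether the photographed portion has actually been held inside the frame region before accepting a decode. The following is a hypothetical sketch of that check; the names and the assumption that both boxes share the same camera pixel coordinates are illustrative, not from the application:

```python
# Hypothetical sketch: decide whether the bounding box of a code detected in
# the camera image lies entirely within the frame region, both expressed in
# the same (assumed) camera pixel coordinates.

def inside_frame(code_box, frame_box, margin=0):
    """True if code_box = (x0, y0, x1, y1) lies entirely within frame_box,
    optionally shrunk by `margin` pixels on every side."""
    cx0, cy0, cx1, cy1 = code_box
    fx0, fy0, fx1, fy1 = frame_box
    return (cx0 >= fx0 + margin and cy0 >= fy0 + margin and
            cx1 <= fx1 - margin and cy1 <= fy1 - margin)
```

For example, `inside_frame((120, 90, 200, 170), (100, 80, 220, 180))` is true, while a box poking past the left edge of the frame is rejected.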
- In this aspect, for example, the display mechanism displays, on the display surface, a guidance for prompting the user to hold the photographed portion, in which the information photographed by the camera module is recorded or displayed, to the aerial-image display region, and the aerial-image forming mechanism displays the guidance displayed on the display surface as an aerial image in the aerial-image display region. In this case, the user can hold the photographed portion in accordance with the guidance displayed in the aerial-image display region.
- As described above, in at least an embodiment of the present invention, in an input device for a user to input a PIN or the like using an aerial image displayed in the aerial-image display region, predetermined information can be optically read easily with a simple configuration.
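For the PIN-input side of the device, the detection mechanism reports a fingertip position in the plane of the aerial keypad, which then has to be mapped to a key. A minimal sketch of that mapping follows; the 3×4 layout, key pitch, and coordinate origin are assumptions for illustration, not values from the application:

```python
# Hypothetical sketch: map a detected fingertip position (x, y) in the plane
# of the aerial keypad to the key it falls on.

KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["*", "0", "#"]]

def key_at(x, y, origin=(0.0, 0.0), key_w=30.0, key_h=30.0):
    """Return the key under fingertip position (x, y), or None if the
    position falls outside the keypad area."""
    col = int((x - origin[0]) // key_w)   # floor division: column index
    row = int((y - origin[1]) // key_h)   # floor division: row index
    if 0 <= row < len(KEYPAD) and 0 <= col < len(KEYPAD[0]):
        return KEYPAD[row][col]
    return None

# A PIN is then accumulated key by key from successive detections:
pin = "".join(key_at(x, y) for x, y in [(45, 15), (15, 45), (75, 75), (45, 105)])
```

With the assumed 30-unit key pitch, the four sample detections above land on "2", "4", "9", and "0" in turn.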
- Embodiments will now be described, by way of example only, with reference to the accompanying drawings which are meant to be exemplary, not limiting, and wherein like elements are numbered alike in several Figures, in which:
-
FIG. 1 is a schematic diagram for explaining a configuration of an input device according to an embodiment of the present invention. -
FIG. 2 is a schematic diagram for explaining a configuration of the input device shown in FIG. 1. -
FIG. 3A and FIG. 3B are diagrams illustrating an example of an aerial image displayed in the aerial-image display region shown in FIG. 1. - In the following, embodiments of the present invention will be explained with reference to the drawings.
-
FIG. 1 is a schematic diagram for explaining a configuration of an input device 1 according to an embodiment of the present invention. FIG. 2 is a schematic diagram for explaining a configuration of the input device 1 shown in FIG. 1. FIGS. 3A and 3B are diagrams illustrating an example of an aerial image displayed in the aerial-image display region R shown in FIG. 1. - The
input device 1 in this embodiment is a device inputting information using a user's fingertips and is used by being mounted on, for example, ATMs, authentication devices for credit card and other payments, automatic ticketing machines, vending machines, or other higher-level devices such as access control devices. In the input device 1 of this embodiment, for example, a PIN is input. Moreover, the input device 1 of this embodiment can optically read predetermined information. The input device 1 has an aerial-image display device 2 which displays an aerial image in a three-dimensional space, a detection mechanism 3 for detecting a position of the user's fingertip in the aerial-image display region R, which is a region in which the aerial image is displayed, a camera module 4 for optically reading predetermined information, and an enclosure 5 in which these configurations are accommodated. - The aerial-
image display device 2 has a display mechanism 6 having a display surface 6a for displaying images, and an aerial-image forming mechanism 7 that projects the image displayed on the display surface 6a into a space so as to form an aerial image. The aerial-image forming mechanism 7 has a beam splitter 8 and a retroreflective material 9. In the following explanation, a Y-direction in FIG. 1, which is orthogonal to an up-down direction (vertical direction), is referred to as a left-right direction, and a direction orthogonal to the up-down direction and the left-right direction is referred to as a front-back direction. An X1-direction side in FIG. 1, which is one side in the front-back direction, is assumed to be a "front" side, and an X2-direction side in FIG. 1, which is a side opposite to that, is assumed to be a "rear" side. In this embodiment, a user standing on a front side of the input device 1 inputs predetermined information on the front side of the input device 1. - The
display mechanism 6 is, for example, a liquid crystal display or an organic EL display, and the display surface 6a is a display screen. The display surface 6a faces diagonally rearward and upward. The beam splitter 8 is formed in a flat plate shape. The beam splitter 8 is disposed on a rear side of the display mechanism 6. The beam splitter 8 reflects a part of light emitted from the display surface 6a. That is, a surface on one side of the beam splitter 8 is a reflective surface 8a that reflects a part of the light emitted from the display surface 6a. The reflective surface 8a faces diagonally forward and downward. - The
retroreflective material 9 is formed in a flat plate shape. The retroreflective material 9 is disposed on a rear side of the display mechanism 6 and on a lower side of the beam splitter 8. The light reflected by the beam splitter 8 is incident on the retroreflective material 9. The retroreflective material 9 reflects the incident light toward the beam splitter 8 in the same direction as the incident direction. In other words, a surface on one side of the retroreflective material 9 is a retroreflective surface 9a, on which the light reflected by the beam splitter 8 is incident and which reflects the incident light toward the beam splitter 8 in the same direction as the incident direction. A quarter-wavelength plate is attached to the retroreflective surface 9a. The retroreflective surface 9a faces diagonally forward and upward. - A part of the light emitted from the
display surface 6a of the display mechanism 6 is reflected by the reflective surface 8a of the beam splitter 8 and enters the retroreflective surface 9a of the retroreflective material 9. The light reflected by the reflective surface 8a is directed diagonally rearward and downward. The light incident on the retroreflective surface 9a is reflected in the same direction as its incident direction. The light reflected by the retroreflective surface 9a goes diagonally forward and upward and passes through the beam splitter 8. The light transmitted through the beam splitter 8 forms an aerial image in the aerial-image display region R. The aerial-image display region R is formed on an upper side of the beam splitter 8. The aerial image formed in the aerial-image display region R is recognized by a user standing in front of the input device 1 as an image inclined slightly downward toward the front side. - The
enclosure 5 has a frame body 11 that surrounds the aerial-image display region R. An inner peripheral side of the frame body 11 is an opening portion 11a that leads to an inside of the enclosure 5. The opening portion 11a is formed in a rectangular or square shape. The aerial-image display region R is formed in the opening portion 11a. The aerial-image display region R serves as an input portion 12 for the user to input information using the fingertips. The user standing in front of the input device 1 inputs the PIN and the like in the input portion 12 from the upper side of the input portion 12. - The
detection mechanism 3 detects the position of the user's fingertip in the aerial-image display region R, as described above. In other words, the input portion 12 is included in a detection range of the detection mechanism 3. The detection mechanism 3 is an optical sensor, specifically, an infrared sensor, and more specifically, a line sensor. The detection mechanism 3 is a reflection-type optical sensor including a light emitting portion which emits infrared light and a light receiving portion on which the infrared light emitted from the light emitting portion and reflected by the user's fingertip is incident. The detection mechanism 3 is disposed on the side of the opening portion 11a. The detection mechanism 3 detects the position of the user's fingertip in a plane containing the aerial-image display region R (that is, in the plane containing the input portion 12). The detection mechanism 3 may instead be a transmission-type optical sensor. - The
camera module 4 is fixed inside the enclosure 5. The camera module 4 is disposed at a position where the aerial-image display region R can be photographed. Moreover, the camera module 4 is disposed below the aerial-image display region R. That is, the camera module 4 is disposed at a position where the aerial-image display region R can be photographed from below. As described above, the aerial-image display region R is formed on the upper side of the beam splitter 8, and the beam splitter 8 is disposed on a lower side of the aerial-image display region R. That is, the camera module 4 is disposed on a side where the beam splitter 8 is disposed with respect to the aerial-image display region R. - Moreover, the
camera module 4 is disposed on a rear side of the beam splitter 8, further to the rear than the aerial-image display region R. In other words, the camera module 4 is disposed at a position where it does not shield the light transmitted through the beam splitter 8 after being reflected by the retroreflective material 9, and at a position where the aerial-image display region R can be photographed from a diagonally rear and lower side. When viewed from the left-right direction, an optical axis of the camera module 4 is inclined upward toward the front side. - In this embodiment, predetermined information can be photographed by the
camera module 4 by holding a photographing target 15 (see FIG. 1), such as a medium or a device in which the predetermined information is recorded or displayed, to the aerial image displayed in the aerial-image display region R from above. In other words, in this embodiment, the predetermined information can be optically read by the camera module 4 by holding the photographing target 15, in which the predetermined information is recorded or displayed, to the aerial image displayed in the aerial-image display region R from above. - For example, when the photographing target 15 is a mobile device such as a smartphone whose screen displays a two-dimensional code, holding this mobile device to the aerial image displayed in the aerial-image display region R from above allows the two-dimensional code displayed on the screen of the mobile device to be optically read by the camera module 4. Alternatively, when the photographing target 15 is a medium such as a card on which a barcode is recorded, holding this medium to the aerial image displayed in the aerial-image display region R from above allows the barcode recorded on the medium to be optically read by the camera module 4. In other words, the input device 1 is a code reader that reads two-dimensional codes and bar codes. - In the
input device 1, a PIN can be input as described above. When a PIN is to be input in the input device 1, the display mechanism 6 displays the keypad for inputting the PIN on the display surface 6a, and the aerial-image forming mechanism 7 displays the keypad displayed on the display surface 6a as an aerial image in the aerial-image display region R (see FIG. 3A). The user inputs the PIN by using the keypad displayed in the aerial-image display region R. Specifically, the user inputs the PIN by sequentially moving the fingertip to the positions of predetermined keys (numbers) in the keypad displayed in the aerial-image display region R. In other words, the user inputs the PIN by sequentially moving the fingertip on the input portion 12. The PIN input in the input portion 12 is recognized on the basis of the detection result of the detection mechanism 3 (that is, the detection result of the user's fingertip position). - In addition, in the
input device 1, information such as two-dimensional codes can be optically read by the camera module 4, as described above. When information is to be optically read by the camera module 4, the display mechanism 6 displays, on the display surface 6a, a frame F for indicating a region to which a photographed portion 15a, in which the information to be photographed by the camera module 4 is recorded or displayed (for example, a part of the screen of a mobile device on which a two-dimensional code is displayed, or a part of a medium on which a bar code is recorded), is to be held, and a guidance G for prompting the user to hold the photographed portion 15a to the aerial-image display region R. The aerial-image forming mechanism 7 displays the frame F and the guidance G displayed on the display surface 6a as an aerial image in the aerial-image display region R, for example (see FIG. 3B). The frame F is, for example, a rectangular frame. The guidance G is, for example, a guidance message, operation instructions, or an arrow pointing to the frame F. - When the user holds the photographed
portion 15a to the frame F displayed in the aerial-image display region R, the information recorded or displayed in the photographed portion 15a is read by the camera module 4. When the information is read by the camera module 4, the portions of the aerial image displayed in the aerial-image display region R that do not interfere with the reading of the information by the camera module 4 shine. In other words, when the information is read by the camera module 4, the aerial image displayed in the aerial-image display region R serves as illumination that illuminates the photographed portion 15a with light. - As explained above, in this embodiment, the
camera module 4 is disposed on the side where the beam splitter 8 is disposed with respect to the aerial-image display region R and at a position where it can photograph the aerial-image display region R, and by holding the photographing target 15, on which the predetermined information is recorded or displayed, to the aerial image displayed in the aerial-image display region R from the side opposite to the beam splitter 8, the predetermined information can be optically read by the camera module 4. Therefore, in this embodiment, the predetermined information can be optically read easily. - In this embodiment, when the information is optically read by the
camera module 4, the aerial image displayed in the aerial-image display region R serves as illumination that illuminates the photographed portion 15a with light. Thus, in this embodiment, the predetermined information can be optically read by the camera module 4 even if the input device 1 does not include lighting that illuminates the photographed portion 15a. Therefore, in this embodiment, the predetermined information can be optically read with a simple configuration. In addition, in this embodiment, since the camera module 4 is disposed on the side where the beam splitter 8 is disposed with respect to the aerial-image display region R, the space formed between the beam splitter 8 and the aerial-image display region R in order to form an aerial image in the aerial-image display region R can be effectively used as a space for optically reading the information by the camera module 4. - In this embodiment, when the information is optically read by the
camera module 4, the frame F for indicating the region to which the photographed portion 15a is to be held is displayed as an aerial image in the aerial-image display region R. Thus, in this embodiment, the user can recognize to which part of the aerial-image display region R the photographed portion 15a is to be held. In addition, in this embodiment, since the guidance G for prompting the user to hold the photographed portion 15a to the aerial-image display region R is displayed as an aerial image in the aerial-image display region R, when the information is to be optically read by the camera module 4, the user can hold the photographed portion 15a in accordance with the guidance G displayed in the aerial-image display region R. - The embodiment described above is an example of a preferred embodiment of the present invention, but the present invention is not limited thereto, and various modifications can be made within a range not changing the gist of the present invention.
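Since the input device 1 reads bar codes with the camera module 4, a decoded result would normally be validated before use. As a self-contained illustration of that validation step (standard bar-code practice, not something this application specifies), an EAN-13 check digit can be verified like this:

```python
# EAN-13 check digit verification: counting positions from the left starting
# at 1, odd positions weigh 1 and even positions weigh 3; over all 13 digits
# (check digit included) the weighted sum must be a multiple of 10.

def ean13_is_valid(code):
    """Return True if `code` is a 13-digit string with a correct check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(code))
    return total % 10 == 0

assert ean13_is_valid("4006381333931")       # a well-known valid example
assert not ean13_is_valid("4006381333930")   # corrupted check digit
```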
- In the embodiment described above, information other than the PIN may be input in the
input device 1. For example, the user's signature may be input in the input device 1. In this case, for example, a frame in which the signature is to be input is displayed as an aerial image in the aerial-image display region R. In addition, in the embodiment described above, at least one of the frame F and the guidance G does not have to be displayed in the aerial-image display region R when the information is to be optically read by the camera module 4. Furthermore, in the embodiment described above, the detection mechanism 3 may be a capacitance sensor or a motion sensor. In addition, the detection mechanism 3 may be constituted by two cameras. Moreover, in the embodiment described above, a user standing behind the input device 1 may input predetermined information on the rear surface side of the input device 1. The above description relates to specific examples according to the present invention, and various modifications are possible without departing from the spirit of the present invention. The appended claims are intended to cover such applications within the true scope and spirit of the invention. -
- Reference Signs List
- 1 Input device
- 3 Detection mechanism
- 4 Camera module
- 6 Display mechanism
- 6a Display surface
- 7 Aerial-image forming mechanism
- 8 Beam splitter
- 9 Retroreflective material
- 12 Input portion
- 15a Photographed portion
- F Frame
- G Guidance
- R Aerial-image display region
Claims (4)
1. An input device inputting information using a fingertip of a user, the input device comprising:
a display mechanism having a display surface for displaying an image, an aerial-image forming mechanism which forms an aerial image by projecting an image displayed on the display surface into a space, a detection mechanism which detects the fingertip of the user in an aerial-image display region, which is a region in which the aerial image is displayed, and a camera module disposed at a position where the aerial-image display region can be photographed, wherein
the aerial-image display region serves as an input portion for the user to input information;
the aerial-image forming mechanism includes a beam splitter that reflects a part of light emitted from the display surface and a retroreflective material to which the light reflected by the beam splitter is incident and which reflects the incident light in the same direction as a direction of the incidence toward the beam splitter;
the aerial image is formed in the aerial-image display region by light transmitted through the beam splitter after being reflected by the retroreflective material; and
the camera module is disposed on a side where the beam splitter is disposed with respect to the aerial-image display region.
2. The input device according to claim 1, wherein
the display mechanism displays, on the display surface, a frame for indicating a region to which a photographed portion, in which information photographed by the camera module is recorded or displayed, is held; and
the aerial-image forming mechanism displays the frame displayed on the display surface as the aerial image in the aerial-image display region.
3. The input device according to claim 1, wherein
the display mechanism displays a guidance on the display surface for prompting the user to hold a photographed portion, in which information photographed by the camera module is recorded or displayed, to the aerial-image display region, and the aerial-image forming mechanism displays the guidance displayed on the display surface as the aerial image in the aerial-image display region.
4. The input device according to claim 2, wherein
the display mechanism displays on the display surface a guidance for prompting the user to hold a photographed portion, in which information photographed by the camera module is recorded or displayed, to the aerial-image display region, and the aerial-image forming mechanism displays the guidance displayed on the display surface as the aerial image in the aerial-image display region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/016,785 US20230288723A1 (en) | 2020-07-22 | 2021-07-19 | Input device |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063054799P | 2020-07-22 | 2020-07-22 | |
WOPCT/JP2021/020744 | 2021-05-31 | ||
PCT/JP2021/020744 WO2022018973A1 (en) | 2020-07-22 | 2021-05-31 | Input device and information processing device |
US18/016,785 US20230288723A1 (en) | 2020-07-22 | 2021-07-19 | Input device |
PCT/JP2021/027020 WO2022019279A1 (en) | 2020-07-22 | 2021-07-19 | Input device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230288723A1 true US20230288723A1 (en) | 2023-09-14 |
Family
ID=79728618
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/016,925 Pending US20240036678A1 (en) | 2020-07-22 | 2021-05-31 | Input device and input device control method |
US18/016,927 Pending US20230351804A1 (en) | 2020-07-22 | 2021-05-31 | Input device and information processing device |
US18/016,785 Pending US20230288723A1 (en) | 2020-07-22 | 2021-07-19 | Input device |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/016,925 Pending US20240036678A1 (en) | 2020-07-22 | 2021-05-31 | Input device and input device control method |
US18/016,927 Pending US20230351804A1 (en) | 2020-07-22 | 2021-05-31 | Input device and information processing device |
Country Status (3)
Country | Link |
---|---|
US (3) | US20240036678A1 (en) |
JP (4) | JPWO2022018973A1 (en) |
WO (4) | WO2022018971A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7402265B2 (en) | 2021-06-28 | 2023-12-20 | 日立チャネルソリューションズ株式会社 | information processing system |
JP2023131250A (en) * | 2022-03-09 | 2023-09-22 | アルプスアルパイン株式会社 | Method for manufacturing optical element, optical element, aerial picture display device, and aerial input device |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5949348A (en) * | 1992-08-17 | 1999-09-07 | Ncr Corporation | Method and apparatus for variable keyboard display |
JPH07210299A (en) * | 1994-01-20 | 1995-08-11 | Sumitomo Wiring Syst Ltd | Detecting method for input position of optical input device |
JP2000181632A (en) * | 1998-12-17 | 2000-06-30 | Funai Electric Co Ltd | Touch input device for video equipment |
JP2002236944A (en) * | 2001-02-08 | 2002-08-23 | Hitachi Ltd | Information providing terminal |
JP4064647B2 (en) * | 2001-08-24 | 2008-03-19 | 富士通株式会社 | Information processing apparatus and input operation apparatus |
JP2004178584A (en) * | 2002-11-26 | 2004-06-24 | Asulab Sa | Input method of security code by touch screen for accessing function, device or specific place, and device for executing the method |
WO2012032842A1 (en) * | 2010-09-06 | 2012-03-15 | シャープ株式会社 | Display system and detection method |
JP5602668B2 (en) * | 2011-03-24 | 2014-10-08 | 日本電産サンキョー株式会社 | Medium processing apparatus and flexible cable |
US8978975B2 (en) * | 2011-07-18 | 2015-03-17 | Accullink, Inc. | Systems and methods for authenticating near field communcation financial transactions |
US20130301830A1 (en) * | 2012-05-08 | 2013-11-14 | Hagai Bar-El | Device, system, and method of secure entry and handling of passwords |
JP2015060296A (en) * | 2013-09-17 | 2015-03-30 | 船井電機株式会社 | Spatial coordinate specification device |
JP6364994B2 (en) * | 2014-06-20 | 2018-08-01 | 船井電機株式会社 | Input device |
US9531689B1 (en) * | 2014-11-10 | 2016-12-27 | The United States Of America As Represented By The Secretary Of The Navy | System and method for encryption of network data |
JP6927554B2 (en) * | 2015-12-07 | 2021-09-01 | 国立大学法人宇都宮大学 | Display device |
JP6733731B2 (en) * | 2016-06-28 | 2020-08-05 | 株式会社ニコン | Control device, program and control method |
JP6822473B2 (en) * | 2016-06-28 | 2021-01-27 | 株式会社ニコン | Display device |
JP6725371B2 (en) * | 2016-09-07 | 2020-07-15 | シャープ株式会社 | Display device, display system, and display method |
EP3319069B1 (en) * | 2016-11-02 | 2019-05-01 | Skeyecode | Method for authenticating a user by means of a non-secure terminal |
JP6913494B2 (en) * | 2017-03-30 | 2021-08-04 | 日本電産サンキョー株式会社 | Flexible printed circuit board and card reader |
JP6974032B2 (en) * | 2017-05-24 | 2021-12-01 | シャープ株式会社 | Image display device, image forming device, control program and control method |
JP2019109636A (en) * | 2017-12-18 | 2019-07-04 | コニカミノルタ株式会社 | Non-contact input device |
JP2019133284A (en) * | 2018-01-30 | 2019-08-08 | コニカミノルタ株式会社 | Non-contact input device |
JP7128720B2 (en) * | 2018-10-25 | 2022-08-31 | 日立チャネルソリューションズ株式会社 | Input/output device and automatic transaction device |
JP7164405B2 (en) * | 2018-11-07 | 2022-11-01 | 日立チャネルソリューションズ株式会社 | Image reader and method |
JP7251301B2 (en) * | 2019-05-10 | 2023-04-04 | 京セラドキュメントソリューションズ株式会社 | Image processing system, image processing method, and image forming apparatus |
-
2021
- 2021-05-31 WO PCT/JP2021/020742 patent/WO2022018971A1/en active Application Filing
- 2021-05-31 US US18/016,925 patent/US20240036678A1/en active Pending
- 2021-05-31 WO PCT/JP2021/020743 patent/WO2022018972A1/en active Application Filing
- 2021-05-31 JP JP2022538617A patent/JPWO2022018973A1/ja active Pending
- 2021-05-31 JP JP2022538615A patent/JPWO2022018971A1/ja active Pending
- 2021-05-31 US US18/016,927 patent/US20230351804A1/en active Pending
- 2021-05-31 JP JP2022538616A patent/JPWO2022018972A1/ja active Pending
- 2021-05-31 WO PCT/JP2021/020744 patent/WO2022018973A1/en active Application Filing
- 2021-07-19 JP JP2022538006A patent/JPWO2022019279A1/ja active Pending
- 2021-07-19 US US18/016,785 patent/US20230288723A1/en active Pending
- 2021-07-19 WO PCT/JP2021/027020 patent/WO2022019279A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20240036678A1 (en) | 2024-02-01 |
JPWO2022018972A1 (en) | 2022-01-27 |
WO2022018973A1 (en) | 2022-01-27 |
JPWO2022018971A1 (en) | 2022-01-27 |
US20230351804A1 (en) | 2023-11-02 |
WO2022019279A1 (en) | 2022-01-27 |
JPWO2022018973A1 (en) | 2022-01-27 |
WO2022018972A1 (en) | 2022-01-27 |
JPWO2022019279A1 (en) | 2022-01-27 |
WO2022018971A1 (en) | 2022-01-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230288723A1 (en) | Input device | |
US8191787B2 (en) | Image capturing apparatus, method for capturing an image shown in a display and arrangement of an image capturing unit and of a display as well as use of the image capturing apparatus and use of the arrangement | |
JP5053396B2 (en) | Code symbol reader and its control program | |
US8226005B2 (en) | Code symbol reading apparatus | |
JP7164405B2 (en) | Image reader and method | |
JP5482522B2 (en) | Display control apparatus, display control method, and program | |
US10055627B2 (en) | Mobile imaging barcode scanner | |
JP7437661B2 (en) | payment terminal | |
JP2012073822A (en) | Form reading device | |
WO2022018927A1 (en) | Aerial image display device and input apparatus | |
US10803267B2 (en) | Illuminator for a barcode scanner | |
US11093723B2 (en) | Coaxial aimer for imaging scanner | |
JP6362282B2 (en) | Scanner unit and image scanner device | |
JP6650374B2 (en) | Reader | |
US8366006B2 (en) | Combined laser and imaging scanner | |
JP6407739B2 (en) | Code reader unit and fuel oil sales apparatus including the same | |
US9509958B2 (en) | Image pick-up device and POS system including the same | |
JP2023079463A (en) | Non-contact type information processing device | |
US11568163B1 (en) | Barcode reader with transflective mirror | |
EP4293637A1 (en) | Biometric authentication device | |
WO2021085567A1 (en) | Stationary terminal device capable of displaying information and performing payment process | |
JP2012128759A (en) | Information reading device | |
JPH10173875A (en) | Image reading device | |
CN116794665A (en) | Positioning sensor and input terminal device | |
CN107622217A (en) | Image scanning instrument with positioning and display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NIDEC SANKYO CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MIYAZAWA, SHINYA;TAKEUCHI, JUNRO;FUJIMOTO, MASAYA;AND OTHERS;SIGNING DATES FROM 20230113 TO 20230116;REEL/FRAME:062417/0249 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |