WO2022181412A1 - Input assisting mechanism and input system - Google Patents


Info

Publication number
WO2022181412A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
input
user
distance
display
Prior art date
Application number
PCT/JP2022/006132
Other languages
French (fr)
Japanese (ja)
Inventor
健志 加畑
Original Assignee
有限会社アドリブ
株式会社Eggs
Priority date
Filing date
Publication date
Application filed by 有限会社アドリブ and 株式会社Eggs
Publication of WO2022181412A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means

Definitions

  • The present invention relates to a technique for assisting a user who performs input via an input device attached to a target surface so that hover input can be performed on the target surface.
  • a typical example of a target surface is a display that does not allow hover input, whether it is a touch panel or not.
  • Liquid crystal displays, organic EL displays and other displays are widely used.
  • a display may be used simply for the purpose of displaying an image, but is very often used for the purpose of allowing a user to perform some kind of input according to the image displayed on the display.
  • the user is often required to perform input via an accessory that is separate from the personal computer and the display, such as a mouse and trackball.
  • By using a touch panel as the display, it is becoming common for input to be made without accessories, with the user touching the screen of the display directly; in smartphones, tablets, and the like, the user in most cases performs input by touching a display that is a touch panel.
  • Input via a touch panel, realized by using the display as a touch panel, does not require the preparation or operation of accessories and allows intuitive operation, so it is convenient. Under these circumstances, input via a touch panel realized in this way is widely used on the displays of devices for public use, such as ATMs (automated teller machines), ticket vending machines, and convenience-store cash registers.
  • Although input via a touch panel as described above, realized by using a touch panel as the display of a device for public use, is convenient, it is not well received by some users.
  • Beyond general concerns about cleanliness, users are appearing who are extremely reluctant to touch a touch panel in order to avoid infection with the novel coronavirus or other viruses.
  • the number of users who wish to perform input without touching the touch panel is certainly increasing.
  • A hover input is an input mechanism for making input via a touch panel by placing a finger in the space a predetermined distance in front of the screen of the touch panel.
  • A display that allows hover input, that is, non-contact input, is also known as a touchless touch panel. Interest in it has grown rapidly in recent years with the spread of the novel coronavirus, since it matches very well the demand for input without touching the panel.
  • Some displays that allow hover input have a sensor built into the display that detects a slight change in capacitance between the display and the user's finger, thereby detecting the coordinates of the user's finger over the display and allowing input to the display.
  • Such displays are manufactured, sold, or released by Nissha Co., Ltd. and Japan Display Co., Ltd., for example.
  • An apparatus is also disclosed for retrofitting a display that is a touch panel, one that has already been shipped and is in use, so that hover input can be performed on it after the fact.
  • AirBar (trademark), a device sold in Japan by Techwind Co., Ltd. for the purpose of making a display that is not a touch panel function as a touch panel, can also be applied for the purpose of enabling hover input on a display that is a touch panel.
  • An object of the present invention is to provide a technique that allows a user performing input on a display capable of hover input to intuitively understand how close the finger must be brought to the display for input to be performed.
  • The present invention is an input support mechanism used in combination with: a display having a target surface, which is the surface on which display is performed and on which the user operates; an information processing device that receives operation data, which is data about an operation performed by the user on the target surface, and executes predetermined information processing based on the operation data; and an input device that detects the position coordinates on the target surface of a user's fingertip located, when viewed from the front of the target surface, over the target surface and closer to it than a first distance, which is a predetermined distance from the target surface, and that outputs position coordinate data, which is data about those position coordinates, to the information processing device.
  • In other words, the input support mechanism of the present invention is used in combination with a device capable of hover input that includes a display having a target surface on which display is performed, an input device that generates operation data about an operation performed by the user on the display, and an information processing device that receives the operation data from the input device and executes predetermined information processing.
  • the display and the information processing device are usually integrated devices.
  • the input device may be integrated with the display and the information processing device, or may be used after being attached to the integrated display and information processing device.
  • The input support mechanism includes a light source that emits light, and a diffusing member that generates a light plane, which is planar light, by spreading the light from the light source over a plane parallel to the target surface at a distance from the target surface equal to or smaller than the first distance, across a range covering at least a predetermined portion of the target surface when the target surface is viewed from the front.
  • The input support mechanism is configured so that a user who sees his or her fingertip illuminated by light from the light source when it crosses the light plane can recognize that the fingertip is closer to the target surface than the first distance.
  • If such an input support mechanism exists, when a user who intends to perform hover input brings a fingertip close to the target surface of the display, the fingertip eventually crosses the light plane. Even if the light plane itself is invisible in the air, the fingertip crossing it reflects or scatters the light from the light source and shines. Since the light plane is parallel to the target surface and located at a distance from the target surface equal to or smaller than the first distance, an illuminated fingertip is necessarily at a distance from the target surface equal to or smaller than the first distance.
  • The input device outputs the position coordinate data of the fingertip to the information processing device whenever the fingertip is closer to the target surface of the display than the first distance. Therefore, by moving the fingertip along the target surface of the display while confirming that it is shining, the user ensures that the input device reliably outputs to the information processing device the position coordinates of the fingertip located closer to the target surface than the first distance. In other words, with such an input support mechanism, hover input is possible whenever the fingertip is illuminated: an illuminated fingertip has approached the target surface of the display closely enough for hover input.
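The gating behavior described in the bullets above can be sketched in a few lines of Python. This is an illustrative model only, not the patented implementation; the threshold value, the sample format, and the function name are all hypothetical.

```python
# Illustrative model of the behavior described above: the input device
# forwards on-surface (x, y) coordinates to the information processing
# device only while the fingertip's distance z from the target surface is
# smaller than the first distance. The threshold value is an assumption.

FIRST_DISTANCE_MM = 30.0  # hypothetical first distance


def gate_hover_input(samples, first_distance_mm=FIRST_DISTANCE_MM):
    """Yield (x, y) coordinates for fingertip samples within hover range.

    Each sample is an (x_mm, y_mm, z_mm) tuple, z being the distance from
    the fingertip to the target surface.
    """
    for x, y, z in samples:
        if z < first_distance_mm:
            # The fingertip has crossed the light plane (and is therefore
            # visibly illuminated), so its coordinates are reported.
            yield (x, y)


if __name__ == "__main__":
    track = [(10, 10, 80), (12, 11, 40), (14, 12, 25), (15, 12, 5)]
    print(list(gate_hover_input(track)))  # -> [(14, 12), (15, 12)]
```

Note that a sample exactly at the first distance is not reported, matching the "closer than the first distance" wording above.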
  • the input support mechanism of the present application enables a user who performs input to a display on which hover input can be performed to intuitively understand how close the finger must be to the target surface of the display to perform hover input.
  • the input support mechanism of the present invention includes a light source and a diffusing member.
  • the light source has a function of emitting light.
  • the diffusion member has a function of generating a light plane, which is planar light, by spreading the light from the light source.
  • the light source and diffusion member may be configured in any way.
  • The light source can be a linear light source that emits linear light. An example of linear light is a laser beam, and an example of a linear light source is a laser device.
  • The diffusion member can be a cylindrical lens or a cylindrical mirror. The light plane generated by combining a linear light source with a cylindrical lens or cylindrical mirror exists as planar light at every instant during the period in which it is generated.
  • The light plane according to the present application also covers the case where, at any single instant during the period in which the light plane is generated, no planar light exists and only linear light exists.
  • That is, if linear light sweeps along a plane at sufficiently high speed, for example by oscillating or rotating, so that over a certain length of time it effectively exists as a plane, that too is a light plane.
  • Such a light plane can be generated by combining a linear light source, typically a laser device, with a mirror that oscillates back and forth within a certain angle, or a mirror that rotates about an axis (both of which can be realized, for example, with Micro Electro Mechanical Systems (MEMS) technology).
  • a straight line of light is applied to an oscillating or rotating mirror perpendicular to the axis of oscillation or rotation, a planar light surface along a plane will be generated.
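As a rough numeric illustration of the swept plane described above (not part of the patent text): a beam reflected off a mirror is deflected by twice the mirror's tilt angle, and if the sweep repeats faster than the eye's flicker-fusion threshold, the moving line is perceived as a continuous plane. The 60 Hz figure below is a common rule of thumb, not a value from the source.

```python
# Sketch of the swept light plane: a mirror tilted by theta deflects the
# reflected beam by 2 * theta, so a mirror oscillating +/-15 degrees sweeps
# the beam through 60 degrees. If the sweep repeats above roughly the
# flicker-fusion rate, the line reads as a solid plane. Values illustrative.

def beam_deflection_deg(mirror_tilt_deg):
    """Optical deflection of a reflected beam for a given mirror tilt."""
    return 2.0 * mirror_tilt_deg


def sweep_width_deg(mirror_amplitude_deg):
    """Total angular width of the plane swept by an oscillating mirror."""
    return beam_deflection_deg(mirror_amplitude_deg) - beam_deflection_deg(-mirror_amplitude_deg)


def appears_continuous(sweep_hz, fusion_hz=60.0):
    """True if the sweep is fast enough to be perceived as a solid plane."""
    return sweep_hz >= fusion_hz


if __name__ == "__main__":
    print(sweep_width_deg(15.0))      # -> 60.0
    print(appears_continuous(120.0))  # -> True
```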
  • Other optical elements such as mirrors and prisms may also be placed in the optical path before the light reaches, or after it leaves, the oscillating or rotating mirror or the cylindrical lens or mirror.
  • The input device used in combination with the input support mechanism of the present invention may detect not only the position coordinates of the user's fingertip on the target surface but also a distance coordinate, which is the coordinate of the distance from the target surface. In this case, the input device may generate a proximity signal when it detects, based on the distance coordinate, that the user's fingertip has approached the target surface to a distance shorter than a second distance, which is a predetermined distance shorter than the distance from the light plane to the target surface, and may send the generated proximity signal to the light source.
  • The light source in the input support mechanism used in combination with such an input device is capable of changing its illumination state, and changes the illumination state when it receives the proximity signal.
  • Said change in the state of illumination of light carried out by the light source can be, for example, blinking of light, change of wavelength of light in the visible light range, change of intensity of light.
  • a change in the wavelength of light in the visible light region is essentially a change in the color of the light.
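The proximity-signal handshake in the preceding bullets can be sketched as follows. The distances, the color change, and the class and method names are all illustrative assumptions; the patent lists blinking, wavelength change, or intensity change as possible state changes.

```python
# Sketch of the proximity-signal behavior: when the fingertip's distance
# coordinate drops below the second distance (chosen smaller than the light
# plane's distance from the target surface), the input device sends a
# proximity signal and the light source changes its illumination state --
# here modeled as a color change. All names and values are illustrative.

LIGHT_PLANE_MM = 25.0      # hypothetical light-plane distance from surface
SECOND_DISTANCE_MM = 10.0  # hypothetical second distance (< LIGHT_PLANE_MM)


class LightSource:
    def __init__(self):
        self.color = "green"   # normal illumination state

    def on_proximity_signal(self):
        self.color = "red"     # changed state: fingertip is very close


def report_distance(light, z_mm, second_distance_mm=SECOND_DISTANCE_MM):
    """Send a proximity signal to the light source when z < second distance."""
    if z_mm < second_distance_mm:
        light.on_proximity_signal()
```

With these values, a fingertip at 20 mm leaves the light in its normal state, while one at 5 mm changes it, warning the user that the finger is about to touch the screen.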
  • the input support mechanism includes a light source and a diffusing member, as described above.
  • a plurality of sets of the light source and the diffusion member may be provided.
  • the second set of light source and diffusion member may be the same as the light source and diffusion member described above.
  • the second set of light source and diffusion member is referred to as an auxiliary light source and auxiliary diffusion member.
  • An input support mechanism having an auxiliary light source and an auxiliary diffusion member includes an auxiliary light source that emits light, and an auxiliary diffusion member that generates an auxiliary light plane, which is planar light, by spreading the light from the auxiliary light source over a plane parallel to the target surface at a predetermined distance from the target surface smaller than the first distance, across a range covering at least a predetermined portion of the target surface when the target surface is viewed from the front.
  • When the fingertip crosses the auxiliary light plane, the user can recognize that the fingertip is even closer to the target surface of the display than the position where hover input becomes possible.
  • the user can recognize that the fingertip will touch the display if the fingertip is brought closer to the target plane of the display.
  • the wavelength of the light from the light source and the wavelength of the light from the auxiliary light source may both be wavelengths in the visible light region and may be different from each other. As a result, the color of the light reflected or scattered by the fingertip differs between when it traverses the light plane and when it traverses the auxiliary light plane.
  • the light from the light source and the auxiliary light source may have the same wavelength but different intensities.
  • In the above example, the number of pairs of light source and diffusion member is two, but it is naturally possible to have three or more pairs.
  • a plurality of auxiliary light planes may exist in parallel.
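With several parallel light planes, the set of planes a fingertip has crossed encodes its depth. A minimal sketch, assuming hypothetical plane distances and colors:

```python
# Sketch: a main light plane plus auxiliary planes at decreasing distances
# from the target surface. The colors of the planes the fingertip has
# crossed tell the user how close it is. Distances and colors are
# illustrative assumptions, not values from the patent.

PLANES = [(25.0, "green"), (12.0, "yellow"), (5.0, "red")]  # (mm, color)


def crossed_plane_colors(z_mm, planes=PLANES):
    """Return the colors of every light plane the fingertip has crossed."""
    return [color for dist_mm, color in planes if z_mm < dist_mm]


if __name__ == "__main__":
    print(crossed_plane_colors(8.0))  # -> ['green', 'yellow']
```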
  • The input support mechanism may comprise a sensor for detecting that the user is attempting to make an input using the input device, and the light source may emit light only when the sensor makes that detection.
  • A light source consumes power to emit light, and when the light source is a laser device in particular, the power consumption is large. With such a sensor, the light source emits light only while the user is attempting to make an input using the input device, so power consumption can be suppressed.
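The power-saving gating above reduces to a one-state sketch; the sensor type and all names below are hypothetical.

```python
# Sketch of sensor-gated emission: the light source (e.g. a power-hungry
# laser) emits only while a sensor reports that a user is attempting input.
# The class and method names are illustrative, not from the patent.

class GatedLightSource:
    def __init__(self):
        self.emitting = False  # dark by default to save power

    def on_sensor_update(self, user_detected: bool):
        # Emit only while the sensor detects an input attempt.
        self.emitting = user_detected
```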
  • the input support mechanism in the present invention is used in combination with the display, the information processing device, and the input device.
  • the display and the information processing device are integrated devices, and the input device may or may not be integrated with the display and the information processing device.
  • the input device is retrofitted to the display and the information processing device.
  • The input support mechanism can also be as follows. That is, the input support mechanism in the present invention may be separate from the display and the information processing device and integrated with the input device that is attached to the integrated display and information processing device. In other words, the input support mechanism may be part of an input device attached to the display and the information processing device.
  • In that case, by attaching the input device to the display and the information processing device, the input support mechanism of the present application is automatically attached or incorporated as well.
  • the input device may comprise a frame surrounding the target surface, and the light source and diffusion member may be attached to the frame. According to this, when the frame is correctly positioned with respect to the target surface of the display, the light source and the diffusion member in the input support mechanism are also automatically correctly positioned with respect to the target surface of the display.
  • the inventor of the present application also proposes, as one aspect of the present invention, an input system in which the above-described input support mechanism is combined with a device capable of hover input, including a display, an information processing device, and an input device.
  • the effect of such an input system is equivalent to the effect of the input assist mechanism built into the input system.
  • An example input system includes: a display having a target surface, which is the surface on which display is performed and on which the user operates; an information processing device that receives operation data, which is data about an operation performed by the user at a position close to the target surface, and executes predetermined information processing based on the operation data; an input device that detects the position coordinates on the target surface of a user's fingertip located, when viewed from the front of the target surface, over the target surface and closer to it than a first distance, which is a predetermined distance from the target surface; and an input support mechanism.
  • The input support mechanism in the input system includes a light source that emits light, and a diffusing member that generates a light plane, which is planar light, by spreading the light from the light source over a plane parallel to the target surface at a distance from the target surface equal to or smaller than the first distance, across a range covering at least a predetermined portion of the target surface when the target surface is viewed from the front, wherein a user who sees the fingertip illuminated by light from the light source when it crosses the light plane can recognize that the fingertip is closer to the target surface than the first distance.
  • FIG. 1 is a conceptual diagram of an ATM to which the input device including the input support mechanism of the first embodiment is attached.
  • FIG. 2 is a perspective view of the input device including the input support mechanism of the first embodiment.
  • FIG. 3 is a cross-sectional view of the input device shown in FIG. 2, taken along line A-A in FIG. 2.
  • FIG. 4 is a cross-sectional view of the input device shown in FIG. 2, taken along line B-B in FIG. 2.
  • FIG. 5 is a plan view of the input device shown in FIG. 2 with the front plate removed.
  • Views showing examples of the patterns formed by the light emitters on the inner surfaces of the second long side plate and the second short side plate of the input device shown in FIG. 2.
  • A hardware configuration diagram of the computer included in the input device shown in FIG. 2.
  • A functional block diagram showing the functional blocks generated in the computer included in the input device shown in FIG. 2.
  • A diagram showing a path of image light from a finger to the camera when the input device shown in FIG. 2 is used.
  • Another diagram showing a path of image light from the finger to the camera when the input device shown in FIG. 2 is used.
  • A cross-sectional view of the input device according to modification 2 at the same position as in FIG. 3.
  • A cross-sectional view of the input device according to modification 2 at the same position as in FIG. 4.
  • A plan view of the input device according to modification 2 with the front plate removed.
  • A principle diagram of the case where a light plane is created using an oscillating mirror as the diffusing member.
  • an input support mechanism according to the present invention is incorporated in an input device that is retrofitted to an existing device including a display and an information processing device.
  • Existing devices for public use are, for example, but not limited to, ATMs, ticket vending machines, and convenience-store cash registers. More specifically, the device need only include a display and an information processing device integrated directly or via other parts; it does not need to be a device for public use, and it does not even have to be an existing device. However, since all such devices may be known or well known, the details of their construction will not be described.
  • the display has a function of displaying on its screen information necessary for allowing the user to input operation data.
  • the display may or may not be a touch panel.
  • If the display is a touch panel, the user inputs operation data by touching the display.
  • If it is not, operation data is input from a predetermined input device, such as a push button provided near the display of the device.
  • the existing device is an ATM already installed in town (see FIG. 1), and the display is a touch panel.
  • the ATM has a display 110 .
  • The display 110 is attached to a known table 131 provided in the ATM in such a manner that the screen 111 is exposed, and is connected via a signal line 132 to the information processing device 120 incorporated inside the ATM. The exposed portion of the screen 111 is the target surface referred to in the present application.
  • FIG. 1 conceptually illustrates the ATM; in the figure, only the screen 111 of the display 110 and the table 131 are drawn with solid lines, while the ATM as a whole, the information processing device 120, and the signal line 132 are drawn with dashed lines.
  • The screen 111 of the display 110, which corresponds to the target surface referred to in the invention of the present application, is rectangular in this embodiment, although it need not be.
  • The four corners of the screen 111 may be chamfered; since the screen 111 does not need to be rectangular in the first place, the definition of a rectangle need not be strict. In this embodiment the four corners are chamfered, but it is still fair to say that the screen 111 of the display 110 is substantially rectangular.
  • the information processing device 120 is a general computer.
  • The information processing apparatus 120 has a function of displaying, on the screen 111 of the display 110, information necessary for input to a user who inputs operation data. If the device provided with the display 110 is an ATM, the image displayed on the screen 111 may be, for example, an image prompting the user to enter the personal identification number of a cash card, or an image for selecting the procedure the user wishes the ATM to perform.
  • The information processing device 120 generates image data for the image displayed on the screen 111 of the display 110 and sends the image data to the display 110 via the signal line 132, as is known or well known. As a result, an image corresponding to the image data is displayed on the screen 111 of the display 110.
  • The information processing device 120 also executes information processing based on the input operation data. For example, when the device is an ATM, the information processing executed by the information processing device 120 includes user authentication based on the personal identification number entered by the user, and the processing selected by the user from among withdrawal, deposit, and transfer.
  • Originally, the operation data was sent from the display 110 to the information processing device 120 via the signal line 132.
  • In this embodiment, however, the operation data is sent from the input device to the information processing device 120, as will be described later.
  • the input device in this embodiment is used in combination with the ATM explained above. Specifically, the input device is used by being fixed to a table 131 to which the display 110 is attached while being properly positioned with respect to the screen 111 of the display 110 . Thereby, the user can perform hover input on the screen 111 of the display 110 .
  • A perspective view of the input device 200 is shown in FIG. 2. FIG. 3 is an A-A cross-sectional view of the input device 200 in FIG. 2, FIG. 4 is a B-B cross-sectional view of the same, and FIG. 5 is a cross-sectional view of the input device 200 in FIG. 2 cut at the middle in the thickness direction.
  • the input device 200 has a frame 210 .
  • the frame 210 in this embodiment has, but is not limited to, a flat rectangular parallelepiped shape that does not have the bottom and top surfaces in FIG. 2, or that has open bottom and top surfaces.
  • the frame 210 is made of metal or opaque resin, for example.
  • the frame 210 is rectangular.
  • Of the four vertical plates making up the frame 210, one long plate is called a first long side plate 211 and the long plate facing it is called a second long side plate 212; the vertical plate located on the left side is called a first short side plate 213 and the vertical plate facing the first short side plate 213 is called a second short side plate 214.
  • The first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 are all rectangular, and their lengths in the height direction in FIG. 2 are all equal.
  • The first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 together form a rectangle when viewed from above, which in this embodiment may be referred to as the "virtual rectangle".
  • Although the front plate 215 is a single plate in this embodiment, this is not the only option.
  • For example, each of the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 may be formed as a so-called angle having an L-shaped cross section, and the front plate may be constituted by combining the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 so formed.
  • The hole 215A of the front plate 215 substantially matches the shape and size of the screen 111 of the display 110 used in combination with the input device 200. The hole 215A does not have to match the shape and size of the screen 111 completely; depending on circumstances, the hole 215A may be smaller than the screen 111.
  • The frame 210 is attached to the table 131 so that the virtual rectangle formed by the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 surrounds the screen 111 of the display 110; at that time, it is permissible for the inner edge of the hole 215A of the front plate 215 to slightly overlap the edge of the screen 111 when viewed from the front.
  • The front plate 215 covers the areas near the inner surfaces of the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 (the inner surfaces of the frame 210; the same applies hereinafter). This is to prevent stray light from the external environment from entering through the hole 215A, but it is not essential.
  • Mirrors 221 and 222 are attached to the inner surface of the first long side plate 211 and the inner surface of the first short side plate 213, respectively.
  • the mirror 221 attached to the inner surface of the first long side plate 211 is rectangular, covers substantially the entire inner surface of the first long side plate 211, and has a mirror surface.
  • the mirror 222 attached to the inner surface of the first short side plate 213 is rectangular, covers substantially the entire inner surface of the first short side plate 213, and the inner surface is a mirror surface.
  • The mirror 221 may cover the entire inner surface of the first long side plate 211, but in this embodiment it leaves uncovered margins at the top, bottom, left, and right of that inner surface.
  • Likewise, the mirror 222 may cover the entire inner surface of the first short side plate 213, but in this embodiment it leaves uncovered margins at the top, bottom, left, and right of that inner surface. That is, the mirrors 221 and 222 lie along two adjacent sides of the virtual rectangle described above, but are somewhat shorter than those sides. In this way, the lengths of the line segments formed by the mirror surfaces of the mirrors 221 and 222 may be shorter than the lengths of the sides of the virtual rectangle along which they lie.
  • However, when the virtual rectangle is viewed from above in FIG. 2, it is preferable that the rectangular region where the line segment formed by the mirror surface of the mirror 221 and the line segment formed by the mirror surface of the mirror 222 intersect, when each is translated toward the opposite, parallel side of the virtual rectangle, has a shape and size that allows the screen 111 of the display 110 as the target surface to fit within it. In this embodiment, the ranges in the height direction in FIG. 2 where the mirrors 221 and 222 are present are the same. The range in the height direction where the mirrors 221 and 222 exist may be determined according to the distance from the screen 111 at which the user's finger is to be detected.
  • Light emitters 231 and 232 are provided on the inner surfaces of the second long side plate 212 and the second short side plate 214, respectively, although not limited thereto.
  • the light emitters 231 and 232 have a role of increasing the contrast between the finger and its background in the captured image when the finger is imaged by a camera, which will be described later.
  • The light emitters 231 and 232 can be implemented with known or well-known light-emitting components such as EL wire or LEDs, which emit light with power supplied from a power source (not shown). The light emitters 231 and 232 may cover the entire inner surfaces of the second long side plate 212 and the second short side plate 214.
  • In this embodiment, however, the light emitters 231 and 232 cover only part of the inner surfaces of the second long side plate 212 and the second short side plate 214; more specifically, although not limited to this, they form a predetermined continuous pattern on those inner surfaces.
  • An example of a continuous pattern is shown in FIG.
  • the light emitters 231 and 232 form vertical stripes in FIG. 2 on the entire inner surface of the second long side plate 212 or the second short side plate 214 .
  • the width of the light emitters 231 and 232 and the width of the portion without the light emitters 231 and 232 are the same, but this is not necessarily the case.
  • the light emitters 231 and 232 form horizontal stripes in the horizontal direction in FIG.
  • the width of the light emitters 231 and 232 and the width of the portion without the light emitters 231 and 232 are the same, but this is not necessarily the case.
  • the light emitters 231 and 232 form a checkered pattern on the entire inner surface of the second long side plate 212 or the second short side plate 214 .
  • The color of the portions where the light emitters 231 and 232 do not exist is, for example, black.
  • More generally, it is a color of low brightness such that the contrast between the portions where the light emitters exist and the portions where they do not exist is high when the image is captured by the camera described later.
  • Even if the light emitters 231 and 232 are not provided on the inner surfaces of the second long side plate 212 and the second short side plate 214, it is preferable to lower the brightness of the color of those inner surfaces.
  • A camera 240 is provided on the inner surface of the frame 210, in this embodiment at the intersection of the second long side plate 212 and the second short side plate 214 (FIG. 6). However, any position is acceptable as long as, when the frame 210 is attached to the table 131 having the target surface, the camera 240 is located outside the target surface and can capture the entirety of both the mirror surface of the mirror 221 and the mirror surface of the mirror 222.
  • a dashed line in FIG. 6 indicates the angle of view of the camera 240 .
  • the height in FIG. 2 at which the camera 240 is mounted can be selected as appropriate. For example, the camera 240 can be mounted in the middle of the frame 210 in the height direction in FIG.
  • The camera 240 may be any device that can capture moving images and output moving image data, which is data about the moving images. Such cameras are, of course, publicly known or well-known and commercially available, and a suitable one may be selected from among them and used as the camera 240. However, it is advantageous for the camera 240 to have a wide angle of view so that the entire mirror surface of the mirror 221 and the entire mirror surface of the mirror 222 can be captured, so this point should also be taken into consideration when selecting the camera 240.
  • a computer (not shown) is arranged in the frame 210 .
  • the computer is connected to the camera 240, and receives moving image data generated by the camera 240 from the camera 240 substantially in real time by a known technique.
  • the method by which the computer receives moving image data from the camera 240 may be wired or wireless.
  • the computer need not be attached to the frame 210, and may be provided inside the ATM, for example.
  • the information processing device 120 described above may also serve as the computer.
  • it is assumed that a small computer is mounted on frame 210, and more particularly in the space within frame 210 above or below camera 240 in FIG.
  • The computer has a function of generating operation data, which will be described later, based on moving image data and outputting it to the information processing device 120. As long as it has that function, the computer may be a publicly known or well-known, commercially available one.
  • the computer has hardware as shown in FIG.
  • the computer in this embodiment comprises a CPU (Central Processing Unit) 311, a ROM (Read Only Memory) 312, a RAM (Random Access Memory) 313, an interface 315, and a bus 316 connecting them.
  • a CPU 311 is an arithmetic device and controls the entire computer. The CPU 311 executes the processing described below by executing the computer program.
  • the ROM 312 is a non-rewritable memory, and stores a computer program for operating the CPU 311, data required when the computer executes the following processes, and the like.
  • a RAM 313 is a rewritable memory and provides a work area for the CPU 311 to execute the following processes. For example, part of moving image data and operation data may be temporarily written to the RAM 313 .
  • The interface 315 serves as a window connecting the CPU 311, ROM 312, and RAM 313 to the outside of the hardware shown in the figure; through the interface 315, data can be exchanged with the outside of the computer having this hardware.
  • In this embodiment, the interface 315 is connected to a terminal for a cable (not shown) that connects the camera 240 and the computer and carries the moving image data. As a result, the CPU 311 and the like in the computer can receive moving image data from the camera 240 via the cable, terminal, and interface 315. If the computer instead receives moving image data wirelessly from the camera 240, a receiver for receiving the moving image data from a transmitter provided in the camera 240 is connected to the interface 315, and the CPU 311 and the like can receive the moving image data via the receiver and the interface 315.
  • Since the computer needs to transmit the generated operation data to the information processing device 120, components necessary for transmitting such data are also connected to the interface 315.
  • If the operation data is transmitted by wire, the interface 315 has a terminal for connecting a cable (for example, a USB cable) to the information processing apparatus 120.
  • If the operation data is transmitted wirelessly, a transmitter (for example, one conforming to the Wi-Fi (trademark) or Bluetooth (trademark) standard) that transmits the operation data to a receiver included in the information processing device 120 is connected to the interface 315.
  • An input unit 321, an operation data generation unit 322, a coordinate data recording unit 323, and an output unit 324 are generated inside the computer.
  • the input section 321 is connected to the interface 315 and receives input from the interface 315 .
  • Data received by the input unit 321 from the interface 315 is moving image data sent from the camera 240 .
  • Moving image data is sent from the input unit 321 to the operation data generation unit 322 .
  • the operation data generator 322 has a function of generating operation data based on the moving image data received from the input unit 321 .
  • the operation data generation unit 322 reads the coordinate data recorded in the coordinate data recording unit 323 when generating operation data, and uses it.
  • the method by which the operation data generator 322 generates operation data will be described later.
  • When the operation data generator 322 generates operation data, it sends the operation data to the output unit 324.
  • Coordinate data is recorded in the coordinate data recording section 323 as described above. Details of the coordinate data will be described later.
  • the output unit 324 has a function of outputting the operation data generated by the operation data generation unit 322 from the computer to the outside.
  • the output unit 324 outputs operation data to the interface 315 .
  • the operation data received by the interface 315 is sent to the information processing device 120 by wire or wirelessly.
  • The input device 200, more specifically the frame 210, is provided with an input support mechanism according to the present invention.
  • The input support mechanism is for forming, at a position at a first distance, which is a predetermined distance, or a shorter distance from the screen 111 of the display 110, a light plane, which is a plane of light parallel to the screen 111 and will be described later.
  • The input support mechanism in this embodiment includes a light source 11 that emits light, and a diffusion member 12, described later, that spreads the light from the light source 11 into a plane parallel to the screen 111 of the display 110 at the first distance or a shorter distance from the screen 111.
  • the first distance is the distance from display 110 at which input device 200 generates operation data when the user's fingertip is positioned closer to screen 111 of display 110 than that distance.
  • the light emitted by the light source 11 in this embodiment is light with a wavelength in the visible light region.
  • the light source 11 in this embodiment is, but not limited to, a linear light source that emits linear light. More specifically, the light source 11 in this embodiment is a laser device that emits a laser. Laser devices that emit lasers are, of course, publicly known or well-known, and are also commercially available, so the light source 11 can be appropriately selected from such laser devices.
  • The light source 11, which is a laser device in this embodiment, is positioned inside the frame 210, more specifically at the intersection of the first long side plate 211 and the first short side plate 213 that constitute the frame 210.
  • the mounted position of the light source 11 need not be at such a position as long as the light plane can be produced at such a position as described above.
  • the light source 11 may be attached to the middle of the first long side plate 211 or the second long side plate 212 in the length direction.
  • The laser, which is the linear light emitted from the light source 11 in this embodiment, is irradiated generally along a diagonal of the frame 210, that is, toward the portion where the second long side plate 212 and the second short side plate 214 intersect. Further, the light emitted from the light source 11 travels in a direction parallel to the screen 111 of the display 110 when the input device 200 is attached to the table 131.
  • Light emitted from the light source 11 passes through the diffusion member 12, which is arranged slightly inward of the light source 11 within the frame 210.
  • the light passing through the diffusing member 12 spreads parallel to the screen 111 of the display 110 with a predetermined thin thickness, for example ideally a thickness corresponding to the diameter of a straight line of light.
  • a cylindrical lens is employed as the diffusion member 12 for realizing such diffusion of light.
  • a cylindrical mirror can be used as the diffusing member 12 instead of the cylindrical lens, but when the cylindrical mirror is used as the diffusing member 12, the positional relationship between the diffusing member 12 and the light source 11 is reversed.
  • Cylindrical lenses and cylindrical mirrors usable as the diffusing member 12 are publicly known or well-known and commercially available, so the diffusing member 12 can be appropriately selected from among them.
  • the light spread thinly after passing through the diffusion member 12 is the light plane P (see FIGS. 3 and 4).
  • the light plane P is formed, for example, so as to cover the entire screen 111 of the display 110 when viewed from the front. Although not limited to this, in this embodiment, the light plane P extends all the way inside the frame 210 . Also, in this embodiment, although not limited to this, the distance from the light plane P to the screen 111 of the display 110 is made equal to the first distance.
  • the light plane P itself is invisible, but when an object that reflects or scatters light crosses the light plane P, the part of the object that crosses the light plane P glows, so the user can see the screen of the display 110 within the frame 210. By moving the finger along 111, it can be seen that there is a light plane P at a predetermined distance away from the display 110.
  • Note that the light plane P according to the present application also covers the case where, focusing on any one moment within the time period in which the light plane is generated, planar light does not exist and only linear light exists: if the linear light is swept at high speed along a plane, for example by swinging or rotating, so that it fills the plane over a certain length of time, that too is the light plane P.
  • Such a light plane P can be created by combining a linear light source, typically a laser device, with a mirror that reciprocates within a certain angle or a mirror that rotates around a certain axis.
  • As shown in FIG. 15, the light X1, which is the linear light (for example, a laser) emitted from the light source 11, strikes the portion corresponding to the swing axis of the planar mirror 19, which swings between the angles ±θ.
  • When the light X1 strikes that portion, it is reflected in the direction X2 when the mirror 19 is in the position indicated by the solid line, and in the direction X3 when the mirror 19 is in the position indicated by the two-dot chain line (here, the mirror 19 corresponds to the diffusion member 12). Therefore, if the light X1 continues to be emitted, the reflected light fills the plane between the light X2 and the light X3 (the shaded plane) while sweeping from the position of the solid line to the position of the two-dot chain line.
  • Thus, if the light source 11 that emits linear light and the oscillating mirror 19 are used in combination, it is possible to create a planar light plane P.
  • In the example shown in FIG. 15, the angle θ is set small for convenience of drawing.
  • the combination of the light source 11 and the mirror 19 can generate the light plane P having the same range as the light plane P created by using the light source 11 and the diffusion member 12 which is a cylindrical lens.
  • The swinging speed of the mirror 19 can be set to a speed at which the sweeping linear light cannot be perceived as a moving line by the naked eye, for example, 30 Hz (30 reciprocations/second).
  • Alternatively, by using a mirror that rotates around a certain axis instead of the oscillating mirror 19, the same kind of light plane P as described above can be created, and it is likewise possible to generate a light plane P over the same range as the light plane P created using the light source 11 and the diffusion member 12, which is a cylindrical lens.
  • the rotation speed of the mirror can be, for example, 60 revolutions/second.
  • the oscillating and rotating mirror can be composed of, for example, a MEMS mirror, but is not limited to this.
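  • The sweep geometry described above can be illustrated numerically. The sketch below is an editorial illustration only, not part of the embodiment: it models the mirror 19 as a triangle-wave oscillation between ±45° at 30 Hz (both values assumed) and uses the fact that a beam reflected from a mirror rotated by θ is deviated by 2θ.

```python
def reflected_angle(t, freq_hz=30.0, theta_max_deg=45.0):
    """Direction (degrees) of the beam reflected by an oscillating mirror.

    The mirror angle follows a triangle wave between -theta_max_deg and
    +theta_max_deg at freq_hz reciprocations per second; the reflected
    beam turns by twice the mirror angle.
    """
    phase = (t * freq_hz) % 1.0                            # position in one cycle
    tri = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase  # in [-1, 1]
    mirror_deg = theta_max_deg * tri                       # instantaneous mirror angle
    return 2 * mirror_deg                                  # reflection doubles it
```

Over one half-cycle the reflected beam sweeps through four times the mirror's maximum angle, which is why even a modest mirror swing can fan the linear light across the whole frame.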
  • When the input device 200 is used, it is fixed to the table 131 of the ATM.
  • the input device 200 is fixed to the table 131 with the lower portion of FIG. 2 in contact with the upper surface of the table 131 . If the positioning described later is achieved, the input device 200 can function even if it is simply placed on the table 131 and not fixed. However, when the input device 200 is used, it must be positioned at a predetermined position with respect to the display 110 as will be described later. In this embodiment, the input device 200 is fixed to the table 131 because there is a possibility that the input device 200 may deviate from the set position.
  • the fixing of the input device 200 to the table 131 may be performed by a known or well-known technique such as screwing, adhesion, or the like.
  • The first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 surround the screen 111 of the display 110 that is exposed on the upper surface of the table 131.
  • the frame 210 of the input device 200 and the screen of the display 110 are relatively fixed so as to have a predetermined positional relationship (see FIG. 9).
  • In this state, operation data can be transmitted from the computer of the input device 200 to the information processing device 120. If operation data is transmitted and received wirelessly between the two, at least one of the information processing device 120 and the input device 200 is set up so that the operation data can be transmitted and received wirelessly. Hover input then becomes possible in the space above the screen 111 of the ATM (roughly, the space below the light plane P within the space above the screen 111 beside which the mirrors 221 and 222 exist). It is preferable to inform the user in some way beforehand that hover input can be performed without touching the screen 111 with the fingertip, by bringing the finger closer to the screen 111 of the display 110 than the position where the fingertip shines.
  • camera 240 is constantly capturing moving images, and moving image data is constantly being sent from camera 240 to the computer.
  • However, the camera 240 may capture a moving image and send the moving image data to the computer only during a period in which the user may perform hover input, for example, from when the user starts a transaction (for example, after a cash card is inserted into the ATM) to when the transaction ends.
  • the light source 11 may also irradiate light only from the time the user starts trading until the end of the trading.
  • If the target for which the input support mechanism is used is not an ATM, the light source 11 may be made to emit light only when a publicly known or well-known human sensor detects that a person has approached the device on which hover input is performed.
  • the light source 11 that has been turned off until then may emit light when the finger is captured in the camera 240 .
  • Such control of the light source 11, in which the light source 11 that has been turned off until then emits light when a finger is captured by the camera 240, can of course also be applied to the case where the target of the input support mechanism is an ATM.
  • When the operation data generator 322 performs such processing, it is necessary to connect the interface 315 of the computer provided in the input device 200 and the light source 11 with a cable (not shown).
  • a light plane P is generated as already explained. The light plane P remains constant in this embodiment until the light source 11 is extinguished.
  • Moving image data generated by capturing moving images by the camera 240 is sent from the camera 240 to the computer.
  • moving image data passes from the camera 240 through a cable (not shown) and through the terminal and interface 315 to reach the input unit 321 in the computer.
  • the moving image data is sent from the input section 321 to the operation data generation section 322 .
  • The imaging range of the moving image captured by the camera 240 includes, for example, all of the space within the frame 210 that is within the first distance from the screen 111, which is the space where the above-described hover input on the screen 111 can be performed.
  • In this embodiment the imaging range also includes the screen 111 itself.
  • The imaging range further includes the entire mirror surfaces of the mirrors 221 and 222.
  • How the user's finger appears in an image in the moving image specified by the moving image data when the finger enters the space in the frame 210 through the hole 215A of the frame 210 will be described taking FIG. 9 and FIG. 10 as examples.
  • In FIGS. 9 and 10, illustration of the input support mechanism (the light source 11 and the diffusion member 12) is omitted.
  • the user's finger is denoted by F in FIG.
  • The image reflects the finger through the image lights F11 and F12 from the finger F that reach the camera 240 after being reflected once by the mirror surface of the mirror 221 or the mirror 222. Furthermore, the image reflects the finger through the image lights F21 and F22 from the finger F that reach the camera 240 after being reflected twice, once each by the mirror surfaces of the mirror 221 and the mirror 222. In other words, the image captured by the camera 240 includes five images of the finger F, captured from different directions. When the finger F is at the position shown in FIG. 10, however, only the image of the finger formed by the image light L0 traveling directly from the finger F to the camera 240 appears in the image.
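  • Although the embodiment identifies the finger position by matching images against recorded coordinate data, the multiple mirror views described above could, in principle, also be exploited geometrically: reflecting the camera position across a mirror surface yields a "virtual camera", and intersecting the direct ray with the virtual camera's ray locates the finger. The following is a hedged 2D sketch of that general idea; all function names, coordinates, and the mirror line are hypothetical, and this is not the embodiment's actual method.

```python
def mirror_point(p, a, b):
    """Reflect point p across the infinite line through a and b (2D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy          # foot of the perpendicular
    return (2 * fx - px, 2 * fy - py)

def intersect_rays(o1, d1, o2, d2):
    """Intersection point of rays o1 + s*d1 and o2 + u*d2 (2D, not parallel)."""
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    s = ((o2[0] - o1[0]) * (-d2[1]) + d2[0] * (o2[1] - o1[1])) / det
    return (o1[0] + s * d1[0], o1[1] + s * d1[1])

# Example: a camera at the origin sees the finger directly along (2, 1)
# and, via a mirror lying along the line y = 2, along (2, 3). Reflecting
# the camera across the mirror gives a virtual camera at (0, 4); the
# direct ray and the virtual camera's ray meet at the finger position.
camera = (0.0, 0.0)
virtual_camera = mirror_point(camera, (0.0, 2.0), (1.0, 2.0))
finger = intersect_rays(camera, (2.0, 1.0), virtual_camera, (2.0, -3.0))
```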
  • The operation data generation unit 322 can specify the position of the user's finger from the state of the fingers reflected in an image at a certain time in the moving image specified by the moving image data received from the camera 240 (the number of finger images, the position of each, and, if necessary, their orientation), by using the coordinate data recorded in the coordinate data recording unit 323.
  • The coordinate data is the above-described data, recorded in the coordinate data recording unit 323, that associates coordinates with examples of images in which the finger is reflected.
  • the coordinate data for the combination of the X coordinate and the Y coordinate for the position of the user's finger is constantly generated by the operation data generator 322 while the user is performing input using the input device 200.
  • When the user's finger moves closer to the screen 111 of the display 110 than the first distance, stays there for a predetermined time, for example 0.3 seconds or more, and then leaves the screen 111, the operation data generation unit 322 treats this as the user having touched the location on the screen 111 corresponding to the place where the finger was resting.
  • When the finger moves away from the screen 111 of the display 110 as described above, the operation data generation unit 322 generates operation data as a set comprising the coordinate data indicating the position at which the finger remained and information indicating that the user touched the location specified by that coordinate data.
  • This operation data can be exactly the same as the set of coordinate data plus data indicating that the user touched those coordinates, which can be input using a general touch panel.
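  • The touch decision described above (finger closer than the first distance, resting for 0.3 seconds or more, then withdrawn) can be sketched as a small state machine. This is an illustrative assumption of one possible implementation, not the embodiment's actual code; the distance value is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float
    y: float

class DwellDetector:
    """Emit a touch event when the fingertip comes within the first
    distance of the screen, dwells there for at least dwell_s seconds,
    and is then withdrawn (distances in hypothetical millimetres)."""

    def __init__(self, first_distance_mm=20.0, dwell_s=0.3):
        self.first_distance = first_distance_mm
        self.dwell_s = dwell_s
        self.enter_t = None          # time the fingertip entered the zone
        self.last_xy = None          # last X/Y seen inside the zone

    def update(self, t, x, y, z):
        """Feed one (time, X, Y, Z) sample; return a TouchEvent or None."""
        if z <= self.first_distance:
            if self.enter_t is None:
                self.enter_t = t
            self.last_xy = (x, y)
            return None
        event = None
        if self.enter_t is not None and t - self.enter_t >= self.dwell_s:
            event = TouchEvent(*self.last_xy)   # dwell satisfied, finger left
        self.enter_t = None
        return event
```

The emitted event pairs the dwell position with the implicit "touched" meaning, mirroring the coordinate-plus-touch set that a general touch panel would produce.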
  • the operation data generator 322 detects the Z coordinate of the fingertip position all the time while the coordinate data of the fingertip position is being generated. Thereby, the operation data generator 322 can detect that the fingertip has left the screen 111 of the display 110 .
  • When the fingertip traverses the light plane P, the fingertip shines. Therefore, by bringing the fingertip closer to the screen 111 of the display 110 while visually confirming that the fingertip is shining, keeping the finger still there for 0.3 seconds, and then moving it away from the screen 111, the user can reliably perform input, in other words, cause the operation data generation unit 322 to generate the operation data.
  • the operation data is sent from the operation data generator 322 to the output unit 324, and sent from the output unit 324 to the information processing apparatus 120 via the interface 315, terminals and cables (not shown).
  • the information processing device 120 that receives the operation data executes necessary information processing based on the operation data.
  • The operation data, which is a combination of the X and Y coordinates on the screen and data indicating that those coordinates have been touched, can be exactly the same as the data that would be sent from the display 110 to the information processing apparatus 120 if the display 110 were a touch panel as described above. Therefore, the information processing device 120 can perform conventional information processing based on the operation data input from the input device 200, without changing the image displayed on the display 110 at all, or rather, without modifying the ATM at all apart from connecting the input device 200. After the user finishes inputting with the input device 200, the light source 11 is turned off.
  • In the above description, the operation data generation unit 322 specified the Z coordinate of the tip of the finger from the image of the finger reflected at that time by the image light directly reaching the camera 240 from the user's finger, and from the distance from that image to the screen 111. However, when the light emitters 231 and 232 are as shown in FIGS. 5(B) and 5(C), it is also possible to specify the Z coordinate of the tip of the finger at that time by which of the light emitters 231 and 232 are hidden behind the image of the finger (the image formed by the image light directly reaching the camera 240 from the user's finger).
  • That is, the operation data generating unit 322 can also specify the Z coordinate of the tip of the user's finger based on the positions and numbers of the light emitters 231 and 232 appearing in the image of each frame of the moving image specified by the moving image data.
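  • As a hedged illustration of the stripe-occlusion idea, the following sketch infers a fingertip height from which horizontal stripes are hidden behind the finger silhouette. The stripe pitch, indexing, and function name are assumptions for illustration, not values from the embodiment.

```python
def fingertip_z_from_stripes(stripe_visible, stripe_pitch_mm=5.0):
    """Estimate fingertip height above the screen from stripe occlusion.

    stripe_visible[i] is True when horizontal stripe i (index 0 nearest
    the screen) is visible in the camera image; the fingertip is taken
    to sit at the lowest occluded stripe.
    """
    for i, visible in enumerate(stripe_visible):
        if not visible:                 # lowest stripe hidden by the finger
            return i * stripe_pitch_mm
    return None                         # finger hides no stripe
```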
  • The coordinate data recorded in the coordinate data recording unit 323 can also be generated as trained data produced by having a computer perform deep learning, using as teacher data, for example, combinations of a set of the X coordinate and Y coordinate at which the finger is positioned and the image captured by the camera 240 at that point in time in which the finger appears. Of course, it is assumed that the image data used for the deep learning is obtained in a situation where the screen 111 and the frame 210 are in the predetermined relative positional relationship. In that case, the operation data generation unit 322 functions as artificial intelligence, and can specify the X, Y, and Z coordinates of the tip of the finger at that time from the moving image data.
  • Modification 1 will also be described in which the input device 200 is attached to the ATM as in the first embodiment. There is almost no difference between the ATM and the input device 200 between Modification 1 and the first embodiment. The difference between the first embodiment and Modification 1 is the function of the operation data generator 322 and the manner of lighting the light source 11 .
  • the light plane P is kept constant while the user is making an input, in other words, while the light source 11 is on.
  • In the first embodiment, while the user is inputting, in other words while the light source 11 is on, the operation data generator 322 keeps detecting the X coordinate, the Y coordinate, and the Z coordinate of the fingertip.
  • The user, by visually recognizing the fingertip shining as it crosses the light plane P generated by the light from the always-on light source 11, can confirm that the fingertip has approached the screen 111 of the display 110 to the extent that operation data can be input from the input device 200 to the information processing apparatus 120.
  • However, the fingertip may come too close to the screen 111 and touch it.
  • Therefore, there may be cases where it is desirable to notify the user that the fingertip is closer to the screen 111 of the display 110 than the light plane P, in other words, that the distance between the fingertip and the screen 111 of the display 110 has become shorter than the first distance.
  • In Modification 1, such a notification is made when the user's fingertip comes closer to the screen 111 than a second distance (for example, 1/2 to 1/3 of the first distance) that is shorter than the first distance.
  • In Modification 1, the operation data generator 322 continues to detect the X-coordinate, Y-coordinate, and Z-coordinate of the fingertip in the same manner as in the first embodiment, and generates proximity data when it detects, from the Z coordinate of the fingertip position, that the distance between the fingertip and the screen 111 of the display 110 has become shorter than the second distance. This proximity data is sent to the light source 11 via the output unit 324, the interface 315 of the computer provided in the input device 200, and a cable (not shown) connecting the interface 315 and the light source 11.
  • Upon receiving the proximity data, the light source 11 changes the state of light irradiation.
  • changes in the state of illumination of light from the light source 11 can be flickering of light, change of wavelength of light in the visible light region, or change of intensity of light. If the light blinks, the light plane P blinks, and the light generated at the fingertip when the fingertip crosses the light plane P blinks. If the wavelength of light is changed, the color of the light generated at the fingertip and visually recognized by the user when the fingertip crosses the light plane P changes. Further, if the intensity of light is changed, the intensity of light generated at the fingertip when the fingertip crosses the light plane P is changed.
  • In Modification 1, when the user's fingertip is closer to the screen 111 of the display 110 than the first distance, the fingertip shines in a constant state; when the fingertip then comes closer to the screen 111 of the display 110 than the second distance, the fingertip glows in a different manner.
  • the user can intuitively recognize the distance between the fingertip and the screen 111 of the display 110 by visually recognizing such a change in the state of light occurring at the fingertip.
  • In Modification 1, two distances, the first distance and the second distance, are set for the distance between the fingertip and the screen 111 of the display 110, and the light emission method of the light source 11 is switched between two corresponding types.
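  • The two-step behaviour of Modification 1 can be summarised in a few lines. The functions below are purely illustrative; the distance values and the returned state names are assumptions, not part of the disclosure.

```python
def fingertip_light_state(z_mm, first_distance_mm=20.0, second_distance_mm=8.0):
    """State of the light irradiation for a fingertip at height z_mm.

    "normal" keeps the constant light plane P; "changed" stands for the
    altered irradiation (blinking, different wavelength, or different
    intensity) triggered by the proximity data.
    """
    if z_mm <= second_distance_mm:
        return "changed"                 # proximity data was generated
    return "normal"                      # constant irradiation

def fingertip_glow(z_mm, first_distance_mm=20.0, second_distance_mm=8.0):
    """How the fingertip appears to the user at height z_mm."""
    if z_mm > first_distance_mm:
        return "dark"                    # fingertip above the light plane P
    if z_mm > second_distance_mm:
        return "steady glow"             # crosses the light plane P only
    return "altered glow"                # within the second distance
```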
  • Modification 2 will also be described in which the input device 200 is attached to the ATM as in the first embodiment. There is almost no difference between the modification 2 and the first embodiment in both the ATM and the input device 200 .
  • The difference between the first embodiment and Modification 2 is that in the first embodiment the input support mechanism is composed of one set, the light source 11 and the diffusion member 12, whereas in Modification 2 it is composed of two sets: the light source 11 and the diffusion member 12, and the light source 13 and the diffusion member 14.
  • the light source 13 and the diffusion member 14 generate a light plane P1 parallel to the light plane P, different from the light plane P, at a position closer to the screen 111 of the display 110 than the light plane P created by the light source 11 and the diffusion member 12. (see FIGS.
  • The light plane P1, like the light plane P, is formed, for example, so as to cover the entire screen 111 of the display 110 when viewed from the front, and, although not limited to this, in this embodiment spreads all the way inside the frame 210.
  • The light source 13 and the diffusion member 14 can be designed as appropriate; for example, they can be the same as the light source 11 and the diffusion member 12.
  • the light source 13 and the diffusion member 14 in Modification 2 are the same as the light source 11 and the diffusion member 12.
  • Although in this embodiment the wavelengths of the light emitted by the light source 13 and the light source 11 both belong to the visible light region, this is not necessarily required. Although not required either, the wavelengths of the two are assumed here to be different.
  • the light sources 11 and 13 may irradiate light of the same wavelength, but the intensity of the emitted light may be different.
  • the light source 13 and the diffusion member 14 are provided directly below the light source 11 and the diffusion member 12, for example, at positions corresponding to the second distance described in the first modification.
  • In Modification 2, the way the user's fingertip shines changes as the fingertip approaches the screen 111 of the display 110.
  • First, crossing the light plane P causes the user's fingertip to shine.
  • Then, when the distance from the user's fingertip to the screen 111 of the display 110 becomes shorter than the second distance, the fingertip crosses both the light plane P and the light plane P1, so the way the fingertip shines differs from the case where it crosses only the light plane P.
  • the input device 200 is combined with the ATM as in the first embodiment.
  • the input device 200 attached to the ATM incorporates an input support mechanism configured in the same manner as in the case of the first embodiment.
  • the input device 200 of the first embodiment uses the camera 240 and the two mirrors 221 and 222 to specify the X and Y coordinates of the position of the fingertip during input.
  • The input device 200 of the second embodiment likewise specifies the X and Y coordinates of the position of the fingertip during input.
  • However, the principle and mechanism for specifying the X and Y coordinates are different from those of the first embodiment.
  • the input device 200 of the second embodiment has a frame 210 .
  • the frame 210 in the second embodiment is configured similarly to the frame 210 in the first embodiment. However, inside the frame 210 of the second embodiment, neither the camera 240 nor the mirrors 221, 222 nor the light emitters 231, 232 that were present in the first embodiment are present. Instead, a large number of light emitting units 291 and light receiving units 292 are provided on the inner surface of the frame 210 in the input device 200 of the second embodiment (FIGS. 13 and 14).
  • a large number of light emitting portions 291 are provided inside the first long side plate 211 and inside the first short side plate 213. All of the light emitting portions 291 provided on the first long side plate 211 are attached at regular intervals along the length of the first long side plate 211 so as to be positioned at the same height, taking the vertical direction in FIG. 13 as the height direction. Likewise, all of the light emitting portions 291 provided on the first short side plate 213 are attached at equal intervals along the length of the first short side plate 213 so as to be positioned at the same height. The light emitting portions 291 on the first long side plate 211 and those on the first short side plate 213 do not necessarily have to be at the same height as each other, but in this embodiment they are.
  • the light emitting unit 291 is a light source that emits linear light.
  • the light emitting unit 291 emits light with wavelengths in the invisible region, for example infrared light.
  • each light emitting part 291 emits a straight beam of light toward the facing second long side plate 212 or second short side plate 214, in a direction perpendicular to the inner surface of the first long side plate 211 or first short side plate 213 to which it is attached.
  • a large number of light receiving portions 292 are provided inside the second long side plate 212 and inside the second short side plate 214.
  • the number of light receiving units 292 provided on the second long side plate 212 is the same as the number of light emitting units 291 provided on the first long side plate 211, and the light receiving units 292 correspond one-to-one to the light emitting units 291.
  • the light-emitting portions 291 and the light-receiving portions 292 in one-to-one correspondence are located at exactly corresponding positions when viewed from the rear surface of the first long side plate 211.
  • the infrared rays emitted from the light emitting portions 291 provided on the first long side plate 211 are received by the corresponding light receiving portions 292 provided on the second long side plate 212.
  • the number of light receiving units 292 provided on the second short side plate 214 is the same as the number of light emitting units 291 provided on the first short side plate 213, and the light receiving units 292 correspond one-to-one to the light emitting units 291.
  • the light-emitting portions 291 and the light-receiving portions 292 in one-to-one correspondence are located at exactly corresponding positions when viewed from the rear surface of the first short side plate 213.
  • the infrared light emitted from each light emitting portion 291 provided on the first short side plate 213 is received by the light receiving portion 292 on the second short side plate 214 that corresponds one-to-one to that light emitting portion 291.
  • Each light receiving unit 292 detects whether or not the light emitted by the light emitting unit 291 is received at that time, and generates data indicating the light reception state.
  • when a fingertip blocks an infrared ray R, the light receiving unit 292 that had been receiving that ray R no longer receives it. The light receiving unit 292, which had been generating data indicating that light (the infrared ray R) was being received, now generates data indicating that light is not being received. Therefore, by detecting which of the light receiving portions 292 on the second long side plate 212 and which of the light receiving portions 292 on the second short side plate 214 are not receiving light, it becomes possible to detect the position of the fingertip with respect to the screen of the display 110, that is, the X and Y coordinates of the fingertip.
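The blocked-beam logic above can be illustrated with a short sketch (not part of the patent; the function name, parameter names, and the emitter pitch are assumptions for illustration). The receivers 292 that report "no light" give the fingertip's X coordinate from the long-side plate and its Y coordinate from the short-side plate:

```python
def fingertip_xy(blocked_x, blocked_y, pitch_mm=5.0):
    """Estimate fingertip X/Y from blocked infrared beams.

    blocked_x: indices of light receiving units 292 on the second long
               side plate 212 that report 'no light received'.
    blocked_y: the same for the second short side plate 214.
    pitch_mm:  assumed spacing between adjacent light emitting units 291.
    Returns (x, y) in millimetres, or None if no beam is blocked.
    """
    if not blocked_x or not blocked_y:
        return None  # the fingertip has not reached the infrared matrix
    # A fingertip usually blocks a few adjacent beams; take their centre.
    x = sum(blocked_x) / len(blocked_x) * pitch_mm
    y = sum(blocked_y) / len(blocked_y) * pitch_mm
    return (x, y)
```

For instance, beams 10-12 blocked horizontally and 4-5 blocked vertically would place the fingertip at the centre of those runs.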
  • the input device 200 in the second embodiment is also provided with an input support mechanism.
  • the input support mechanism in the second embodiment includes a light source 11 and a diffusion member 12 (see FIG. 14), as in the first embodiment. Note that the illustration of the input support mechanism is omitted in FIG. 13.
  • the input support mechanism in the second embodiment generates a light plane P parallel to the screen 111 of the display 110 provided in the device to which the input device 200 is attached, as in the first embodiment.
  • the light plane P in the second embodiment extends all the way inside the frame 210 like the light plane P in the first embodiment.
  • the X and Y coordinates of the fingertip can be detected according to which of the vertical and horizontal infrared rays R, arranged in a matrix, are blocked by the fingertip. Therefore, unless the fingertip is brought at least as close to the screen of the display 110 as the position of the matrix of infrared rays R, the user cannot perform input using the input device 200 according to the second embodiment.
  • the first distance in the invention of the present application is the distance from the position where the infrared rays R arranged in a matrix exist to the screen 111 of the display 110.
  • the light plane P is provided at a position whose distance from the screen 111 of the display 110 is equal to or shorter than the first distance. In the example shown in FIG. 14, the light plane P is positioned slightly closer to the screen 111 of the display 110 than the position at the first distance from the screen 111.
  • when the fingertip crosses the light plane P, it shines as in the case of the first embodiment, so the user can visually confirm that the fingertip is shining.
  • whenever the fingertip glows, the user's fingertip has necessarily crossed the matrix of infrared rays R in front of it.
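The guarantee stated in the last two items — that a glowing fingertip is always within detection range — follows purely from placing the light plane P at or inside the first distance. A minimal sketch with illustrative distances in millimetres (the patent fixes no concrete values):

```python
def states(finger_dist, plane_p_dist=18.0, first_dist=20.0):
    """Return (glows, detectable) for a fingertip finger_dist mm from
    the screen 111. plane_p_dist and first_dist are assumed values:
    plane P sits at or inside the first distance (the IR matrix)."""
    glows = finger_dist <= plane_p_dist      # fingertip crossed plane P
    detectable = finger_dist <= first_dist   # fingertip crossed the matrix
    return glows, detectable

# Because plane_p_dist <= first_dist, glows == True can never occur
# together with detectable == False.
```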
  • an input support device 500 will be described.
  • the input support device 500 according to the third embodiment is used in combination with a display, such as those manufactured and sold, or announced, by Nissha Co., Ltd. and Japan Display Co., Ltd., that has the ability to detect the on-screen coordinates of a user's finger attempting to provide input to the display by detecting slight changes in capacitance.
  • the display used with the input support device 500 in this embodiment includes the input device 200 of the first embodiment itself, or the function of the input device 200.
  • the input support device 500 of the third embodiment is used to solve such problems.
  • the input support device 500 includes an irradiation section 510 and a fixing section 520.
  • the irradiation unit 510 incorporates the light source 11 and the diffusion member 12 described in the first embodiment, and has a function of emitting planar light, that is, light corresponding to the light plane P in the first embodiment.
  • the fixing section 520 has a function of fixing the irradiation section 510 to the display directly or indirectly via another member while the irradiation section 510 is positioned with respect to the screen 111 of the display.
  • the fixing section 520 is, for example, rod-shaped.
  • the input support device 500 is used with the irradiation unit 510 positioned appropriately with respect to the screen 111 and fixed to the display by means of the fixing unit 520.
  • the irradiation unit 510 forms a light plane P at the position where the display can detect the position of the fingertip when the fingertip is brought close to the screen 111 of the display, or at a position closer to the screen 111 than that.
  • the light plane P is formed, for example, over a range that covers the entire screen 111 when the screen 111 of the display is viewed from the front. If the user crosses the light plane P, confirms that the fingertip is shining, and then brings the fingertip close to the screen 111 of the display to make an input, the display reliably specifies the position of the fingertip, so input errors do not occur.

Abstract

The present invention enables a user to intuitively understand how near a finger must be brought to a display on which hover input is possible in order for input to be made. In the present invention, an input device 200 is attached to a display on which hover input is not possible. The input device 200 is provided with a light source 11 for radiating a laser, and a spread member 12 for expanding the laser radiated from the light source 11 parallel to the screen of the display. The laser that has passed through the spread member 12 forms a planar light plane P. The input device 200 generates position information pertaining to a user's fingertip that has passed the light plane P. A fingertip that has crossed the light plane P shines. Therefore, when the user moves the fingertip while confirming that the fingertip is shining, the input device 200 can reliably generate position information pertaining to the fingertip.

Description

Input assisting mechanism and input system
TECHNICAL FIELD The present invention relates to a technique for assisting a user who performs input through an input device that, by being attached to a target surface, makes the target surface capable of accepting hover input.
A typical example of the target surface is a display on which hover input is not possible, whether or not the display is a touch panel.
Liquid crystal displays, organic EL displays, and other displays are widely used. A display is sometimes used simply to show images, but it is also very often used to have the user make some input in response to the images shown on it.
With a desktop personal computer, for example, the user is typically made to provide input through an accessory such as a mouse or trackball, a product separate from the computer and its display. With notebook personal computers, however, it is becoming common to make the built-in display a touch panel so that input can be performed without an accessory, by the user touching the screen of the display; and on smartphones, tablets, and the like, the user almost always provides input by touching the touch-panel display.
Input through a touch panel, realized by making the display a touch panel, requires neither the preparation nor the operation of accessories and, above all, allows intuitive operation, since the user can provide input simply by touching buttons and other items shown on the screen of the display; it is therefore convenient.
For these reasons, input through a touch panel, achieved by making the display a touch panel, has also been widely adopted for the displays of public-facing machines such as ATMs (automated/automatic teller machines), ticket vending machines, and convenience store registers.
Although input through the touch-panel display of a public-facing machine is convenient, it is not welcomed by all users. Some users find it unsanitary to have to touch a panel that an unspecified number of other people may also touch.
In particular, now that infection with the novel coronavirus has spread, some users, going beyond the question of clean or unclean, are extremely reluctant to touch a touch panel in order to avoid infection with the novel coronavirus or other viruses. And even among users without such a strong aversion, the number who would prefer, if possible, to provide input without touching the touch panel is certainly increasing.
Of course, among touch-panel displays there are some, including some already in practical use, that allow hover input. Hover input is an input mechanism in which input through the touch panel is performed by touching, with a finger, the space a predetermined distance in front of the touch panel's screen. Displays capable of hover input, which allow contactless input and are also called touchless touch panels, match very well the demand, which has grown sharply since the spread of the novel coronavirus, for providing input without touching the touch-panel display.
Among displays that enable hover input there are, for example, those in which a sensor built into the display detects the slight change in capacitance that arises between the display and the user's finger, thereby detecting the on-screen coordinates of the finger of a user attempting to provide input to the display. Such displays are manufactured and sold, or have been announced, by NISSHA Co., Ltd. and Japan Display Co., Ltd., for example.
Japanese Patent Application Laid-Open No. 2007-310441, and Japanese Patent Application No. 2020-145788 filed earlier by the present applicant, disclose devices for turning a touch-panel display that has already been shipped, and may already be in use, into a display capable of hover input after the fact.
In addition, "AirBar (trademark)", a device sold in Japan by Techwind Co., Ltd. for the exclusive purpose of making a non-touch-panel display function as a touch panel, can also be applied to making a touch-panel display capable of hover input.
As described above, various techniques already exist for displays capable of hover input, or for turning a touch-panel display into one capable of hover input after the fact.
These techniques, however, share a common difficulty.
Since a user performing hover input naturally provides the input without touching the display, it is difficult to make the user intuitively understand how close the finger must be brought to the display for input to be possible (that is, for the input-receiving mechanism on the display side to respond).
Of course, once an input has been made, it is conceivable to notify the user of that fact, for example by an indication on the display or a sound emitted from a speaker. The difficulty described above, however, concerns not the moment after an input has been made but the time during which the input is being made; what we want is for a user who is moving a finger in some direction over the display to understand that input becomes possible once the finger is brought within a certain distance of the display, so such notification is of little help.
An object of the present invention is to provide a technique that enables a user who provides input to a display capable of hover input, regardless of whether that capability was added after the fact, to intuitively understand how close the finger must be brought to the display for input to be possible.
To solve the above problem, the inventor of the present application proposes the following invention.
The invention of the present application is an input support mechanism used in combination with: a target surface, which is the surface of a display on which display is performed and which is the object of the user's operations; an information processing device that receives operation data, which is data about operations performed by the user on the target surface, and executes predetermined information processing based on the operation data; and an input device that detects the position coordinates, on the target surface, of a user's fingertip located, as seen from the front of the target surface, over the target surface and closer to it than a first distance, which is a predetermined distance, and outputs position coordinate data, which is data about those position coordinates, to the information processing device.
In other words, the input support mechanism of the present invention is used in combination with an apparatus enabled for hover input that includes a display having a target surface on which display is performed, an input device that generates operation data about the operations the user performs on the display, and an information processing device that receives the operation data from the input device and executes predetermined information processing. Here, the display and the information processing device are usually an integrated apparatus. The input device, on the other hand, may be integrated with the display and the information processing device, or may be retrofitted to the integrated display and information processing device.
The input support mechanism includes a light source that emits light, and a diffusion member that generates a light plane, which is planar light, by spreading the light from the light source over a range that, on a plane parallel to the target surface and separated from it by a distance equal to or smaller than the first distance, covers at least a predetermined portion of the target surface as seen from the front. The input support mechanism is configured so that a user who sees a fingertip shine on receiving the light from the light source by crossing the light plane can recognize that the fingertip has come closer to the target surface than the first distance.
With such an input support mechanism, when a user who intends to perform hover input brings a fingertip toward the target surface of the display, the fingertip eventually crosses the light plane. Then, even if the light plane itself, hanging in the air, is invisible to the user, the fingertip crossing it reflects or scatters the light from the light source and shines. Since the light plane is parallel to the target surface and located no farther from it than the first distance, a shining fingertip is guaranteed to be within the first distance of the target surface. As described above, the input device outputs the fingertip's position coordinate data to the information processing device when the fingertip is closer to the target surface of the display than the first distance. Therefore, if the user moves the fingertip along the target surface of the display while confirming that it is shining, the input device reliably outputs to the information processing device the position coordinates of the user's fingertip, which is located closer to the target surface than the first distance.
That is, if an input support mechanism such as the one described above exists, then by letting the user know in advance that hover input is possible while the fingertip is shining (that while the fingertip is shining, it is close enough to the target surface of the display for hover input), the user can recognize, by seeing the fingertip shine, that its current position (its distance from the target surface of the display) is one at which hover input is possible. In short, the input support mechanism of the present application enables a user who provides input to a hover-capable display to intuitively understand how close the finger must be brought to the target surface of the display for hover input to be possible.
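The cooperation just described between the input device and the input support mechanism can be summarized in a short sketch (function names and distances are illustrative assumptions, not the actual implementation): the device forwards coordinates only inside the first distance, and the fingertip shines once it crosses the light plane placed at or inside that distance.

```python
# Illustrative distances in millimetres; the patent fixes no concrete values.
FIRST_DISTANCE = 20.0    # fingertips closer than this are reported
PLANE_P_DISTANCE = 18.0  # light plane P: at or inside the first distance

def process_fingertip(x, y, dist, send_to_info_device):
    """Sketch of the input device's role: forward position coordinate
    data to the information processing device only while the fingertip
    is closer than the first distance; report whether it shines."""
    shines = dist <= PLANE_P_DISTANCE  # fingertip has crossed plane P
    if dist < FIRST_DISTANCE:
        send_to_info_device((x, y))    # hover input is accepted
    return shines
```

A shining fingertip therefore always corresponds to coordinates being delivered, which is exactly the feedback the mechanism gives the user.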
As described above, the input support mechanism of the present invention includes a light source and a diffusion member. The light source has the function of emitting light. The diffusion member has the function of generating a light plane, which is planar light, by spreading the light from the light source. As long as those functions are guaranteed, the light source and the diffusion member may be configured in any way.
The light source can be a linear light source that emits linear light. An example of linear light is a laser, and an example of a linear light source is a laser device. The diffusion member can be a cylindrical lens or a cylindrical mirror.
A light plane generated by the combination of a linear light source and a cylindrical lens or cylindrical mirror is present at every instant of the period during which it is being generated. Of course, other optical elements such as mirrors and prisms may also be placed in the optical path before or after the cylindrical lens or cylindrical mirror.
On the other hand, in the present application, the term light plane also covers the case where, at any single instant of the period during which the plane is being generated, no planar light exists and only a line of light exists, provided that the line of light sweeps along a plane at reasonably high speed, for example in a wiper-like motion or by rotating, so that over a certain length of time it is present as a plane. Such a light plane can typically be produced by combining a linear light source, typically a laser device, with a mirror that oscillates back and forth within a certain angle or a mirror that rotates about an axis (both realizable, for example, with MEMS (Micro Electro Mechanical Systems) technology). For example, if linear light is directed at the oscillating or rotating mirror perpendicular to its axis of oscillation or rotation, a light plane lying along a plane is generated. Of course, other optical elements such as mirrors and prisms may also be placed in the optical path before or after the oscillating or rotating mirror.
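For the swept-plane variant, one design fact worth noting is that rotating a mirror by an angle t turns the reflected beam by 2t (law of reflection), so the mirror's mechanical swing need only be half the desired fan angle. A small sketch of this relationship, purely illustrative and not drawn from the patent:

```python
def reflected_angle_deg(beam_fixed_deg, mirror_rotation_deg):
    """Direction of a fixed incoming beam after reflection from a mirror
    rotated by mirror_rotation_deg: rotating the mirror by t degrees
    rotates the reflected beam by 2*t degrees (law of reflection)."""
    return beam_fixed_deg + 2.0 * mirror_rotation_deg

def mechanical_swing_deg(required_fan_deg):
    """Mirror oscillation amplitude needed to sweep the light line over a
    fan of required_fan_deg: half, because the beam moves twice as far."""
    return required_fan_deg / 2.0
```

So a MEMS mirror covering a 90-degree fan over the frame interior needs only a 45-degree mechanical swing.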
The input device used in combination with the input support mechanism of the present invention may detect, in addition to the position coordinates of the user's fingertip on the target surface, a distance coordinate, which is the coordinate of the fingertip's distance from the target surface. In this case, the input device may generate a proximity signal when it detects, based on the distance coordinate, that the user's fingertip has approached the target surface to within a second distance, which is a predetermined distance shorter than the distance from the light plane to the target surface, and may send the generated proximity signal to the light source.
The light source in an input support mechanism used in combination with such an input device may be capable of changing the state of its light emission, and may change that state when it receives the proximity signal.
In this way, by observing the change in how the fingertip crossing the light plane shines, which accompanies the change in the state of the light emitted from the light source, the user can recognize that the fingertip has come even closer to the target surface of the display than the position at which hover input becomes possible. Depending on how the second distance is set, the user can recognize that bringing the fingertip any closer to the target surface of the display would cause it to touch the display.
The change in the state of light emission performed by the light source can be, for example, blinking of the light, a change of the light's wavelength within the visible region, or a change of the light's intensity. A change of wavelength within the visible region is, in short, a change in the color of the light.
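The proximity-signal behaviour described above can be sketched as follows (class, method, and state names are illustrative assumptions; the state change could equally be a colour or intensity change rather than blinking):

```python
class LightSource:
    """Sketch of a light source 11 whose emission state changes when it
    receives a proximity signal from the input device."""
    def __init__(self):
        self.state = "steady"  # normal emission while plane P is formed

    def on_proximity_signal(self):
        # The change may be blinking, a colour change (wavelength within
        # the visible region), or an intensity change; blinking here.
        self.state = "blinking"

def check_distance(dist, second_distance, light_source):
    """Sketch of the input device's role: when the distance coordinate
    falls below the second distance, generate a proximity signal and
    send it to the light source."""
    if dist < second_distance:
        light_source.on_proximity_signal()
```

The user then sees the glow on the fingertip start to blink, signalling that the fingertip is about to reach (or touch) the target surface.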
 入力支援機構は、上述したように、光源と拡散部材とを備える。光源と拡散部材は、複数組であっても良い。2組目の光源と拡散部材は上述した光源と拡散部材と同じもので良い。本願発明では、2組目の光源と拡散部材とを、補助光源と補助拡散部材と称する。
 補助光源と補助拡散部材を有する入力支援機構は、光を照射する補助光源と、前記補助光源からの光を、前記対象面から前記第1距離よりも小さい所定の距離だけ離れた前記対象面と平行な平面上の、前記対象面を正面から見た場合における前記対象面の所定の部分を少なくとも覆う範囲に広げることにより面状の光である補助光面を生成する補助拡散部材と、を備えており、前記補助光面を横切ることにより前記補助光源からの光を受けて光った指先を視認したユーザが、ユーザの指先が前記第1距離よりも前記対象面に更に近づいたことを認識できるようになっている。
 このような入力支援機構が存在すれば、ユーザが、指先をディスプレイの対象面に近づけて行くと、指先はまず光面を横切り、やがては補助光面を横切ることになる。光面を横切ったときにまず指先が光り、補助光面を横切ったときには、光面を横切るとともに補助光面を横切ることにより指先が2箇所で光ることになる。そのような指先の光り具合の変化を視認することにより、ユーザは、指先が、ホバー入力が可能な位置よりもディスプレイの対象面により近づいたことを認識することができることになる。補助光面の対象面からの距離の設定の仕方によっては、ユーザは、それ以上指先をディスプレイの対象面に近づけると指先がディスプレイに触れてしまうということを認識できることになる。
 前記光源からの光の波長と、前記補助光源からの光の波長とは、いずれも可視光領域の波長であり、且つ互いに異なるものとすることができる。そうすると、光面を横切ったときと、補助光面を横切ったときとで、指先で反射乃至散乱する光の色に差が生じるので、ユーザは、指先の光の色を視認することにより、指先が光面のみを横切ったのか、光面と補助光面の双方を横切ったのかを容易に判別することが可能となる。また、光源と補助光源からの光は、同じ波長で強度が異なっていても良い。
 なお、上述した例では、光源と拡散部材との組は2組であったが、それを3組以上とすることも当然に可能である。つまり、補助光面は平行に複数存在していても構わない。
The input support mechanism includes a light source and a diffusing member, as described above. A plurality of sets of the light source and the diffusion member may be provided. The second set of light source and diffusion member may be the same as the light source and diffusion member described above. In the present invention, the second set of light source and diffusion member is referred to as an auxiliary light source and auxiliary diffusion member.
An input support mechanism having an auxiliary light source and an auxiliary diffusion member includes an auxiliary light source that irradiates light, and the light from the auxiliary light source to the target surface separated from the target surface by a predetermined distance smaller than the first distance. an auxiliary diffusion member that generates an auxiliary light surface that is planar light by expanding a range covering at least a predetermined portion of the target surface when the target surface is viewed from the front on a parallel plane. By crossing the auxiliary light surface, the user who visually recognizes the fingertip illuminated by the light from the auxiliary light source can recognize that the user's fingertip is closer to the target surface than the first distance. It's like
If such an input support mechanism exists, when the user brings the fingertip closer to the target surface of the display, the fingertip will first cross the light plane and eventually cross the auxiliary light plane. When the finger crosses the light plane, the fingertip shines first, and when the finger crosses the fill light plane, the fingertip shines at two points by crossing the light plane and the fill light plane. By visually recognizing such a change in the brightness of the fingertip, the user can recognize that the fingertip is closer to the target surface of the display than the position where the hover input is possible. Depending on how the distance of the auxiliary light plane from the target plane is set, the user can recognize that the fingertip will touch the display if the fingertip is brought closer to the target plane of the display.
The wavelength of the light from the light source and the wavelength of the light from the auxiliary light source may both be wavelengths in the visible light region and may be different from each other. As a result, the color of the light reflected or scattered by the fingertip differs between when it traverses the light plane and when it traverses the auxiliary light plane. It is possible to easily determine whether the crossing crosses only the light plane or crosses both the light plane and the auxiliary light plane. Also, the light from the light source and the auxiliary light source may have the same wavelength but different intensities.
In the example above, there were two pairs of a light source and a diffusion member, but three or more pairs are naturally possible. In other words, a plurality of auxiliary light planes may exist in parallel.
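The staged feedback described above can be sketched as a small classifier. This is only an illustration of the idea, not part of the disclosure: the function name, the distance values, and the assumption that a fingertip-to-surface distance is available are all hypothetical.

```python
def classify_fingertip(distance_mm, plane_mm=30.0, aux_plane_mm=10.0):
    """Classify fingertip proximity from which light planes it has crossed.

    distance_mm: fingertip-to-target-surface distance (hypothetical value).
    plane_mm: height of the light plane, at most the first distance at which
              hover input becomes possible.
    aux_plane_mm: height of the auxiliary light plane, closer to the surface.
    """
    crossed_plane = distance_mm <= plane_mm    # fingertip glows at one point
    crossed_aux = distance_mm <= aux_plane_mm  # fingertip glows at two points
    if crossed_aux:
        return "near-touch"   # both planes crossed
    if crossed_plane:
        return "hover"        # only the light plane crossed
    return "out-of-range"     # no plane crossed

print(classify_fingertip(50))  # out-of-range
print(classify_fingertip(20))  # hover
print(classify_fingertip(5))   # near-touch
```

With three or more planes, the same scheme extends to a sorted list of plane heights, each crossed plane adding one more glowing point on the fingertip.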
The input support mechanism according to the present invention may include a sensor that detects that the user is about to make an input with the input device, and the light source may emit light only when the sensor makes that detection.
A light source consumes power to emit light, and the consumption is particularly large when the light source is a laser device. With such a sensor, the light source emits light only when the user attempts to make an input with the input device, so power consumption can be suppressed.
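The sensor-gated behavior can be sketched as follows. The class, the injectable clock, and the hold time (so the laser does not flicker off mid-input) are illustrative assumptions; the text only requires that the light source emit while an input attempt is detected.

```python
import time

class GatedLightSource:
    """Sketch of a sensor-gated light source (names hypothetical): the laser
    emits only while a user has recently been detected by the sensor, with a
    short hold time, which is an illustrative assumption, so that the light
    does not switch off in the middle of an input."""

    def __init__(self, hold_s=2.0, clock=time.monotonic):
        self.hold_s = hold_s
        self.clock = clock
        self._last_seen = None  # time the sensor last reported a user

    def on_sensor(self, user_detected: bool):
        # Called by the proximity sensor whenever it samples.
        if user_detected:
            self._last_seen = self.clock()

    def is_emitting(self):
        # The power-hungry light source runs only inside the hold window.
        return (self._last_seen is not None
                and self.clock() - self._last_seen < self.hold_s)

src = GatedLightSource()
print(src.is_emitting())  # False: no user detected yet, laser stays off
src.on_sensor(True)
print(src.is_emitting())  # True: user about to make an input, laser on
```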
As described above, the input support mechanism in the present invention is used in combination with a display, an information processing device, and an input device. Here, the display and the information processing device form an integrated device, while the input device may or may not be integrated with them. When the input device is not integrated with the display and the information processing device, it is retrofitted to them.
When the input device is not integrated with the display and the information processing device, the input support mechanism can be configured as follows. That is, the input support mechanism in the present invention may be separate from the display and the information processing device and integrated with the input device that is attached to the integrated display and information processing device. In other words, the input support mechanism may form part of an input device retrofitted to the display and the information processing device. With this configuration, attaching the input device to the display and the information processing device automatically attaches to them, or incorporates into them, the input support mechanism of the present application.
When the input support mechanism is part of the input device, the input device may include a frame surrounding the target surface, and the light source and the diffusion member may be attached to the frame. With this configuration, once the frame is correctly positioned with respect to the target surface of the display, the light source and the diffusion member in the input support mechanism are automatically positioned correctly with respect to the target surface as well.
The inventor of the present application also proposes, as one aspect of the present invention, an input system in which the input support mechanism described above is combined with a device capable of hover input that includes a display, an information processing device, and an input device. The effect of such an input system equals the effect of the input support mechanism incorporated into it.
An example input system includes: a target surface, which is the surface of a display on which display is performed and which the user operates; an information processing device that accepts operation data, which is data about an operation performed by the user at a position close to the target surface, and executes predetermined information processing based on the operation data; an input device that detects the position coordinates, on the target surface, of a user's fingertip that, when the target surface is viewed from the front, lies over the target surface at a position closer to it than a first distance, which is a predetermined distance, and that outputs operation data including data about those position coordinates to the information processing device; and an input support mechanism.
The input support mechanism in the input system includes a light source that emits light, and a diffusion member that spreads the light from the light source, over a plane parallel to the target surface and separated from it by a distance equal to or smaller than the first distance, into a range that covers at least a predetermined portion of the target surface when the target surface is viewed from the front, thereby generating a light plane, which is planar light. A user who sees a fingertip lit by the light from the light source as it crosses the light plane can thereby recognize that the fingertip has come closer to the target surface than the first distance.
Fig. 1 is a perspective view schematically showing the structure of an ATM used in combination with an input device including the input support mechanism of the first embodiment.
Fig. 2 is a perspective view of an input device including the input support mechanism of the first embodiment.
Fig. 3 is a cross-sectional view of the input device shown in Fig. 2, taken along line A-A in Fig. 2.
Fig. 4 is a cross-sectional view of the input device shown in Fig. 2, taken along line B-B in Fig. 2.
Fig. 5 shows examples of patterns formed by light emitters on the inner surfaces of the second long side plate and the second short side plate of the input device shown in Fig. 2.
Fig. 6 is a plan view of the input device shown in Fig. 2 with its front plate removed.
Fig. 7 is a hardware configuration diagram of a computer included in the input device shown in Fig. 2.
Fig. 8 is a functional block diagram showing functional blocks generated in the computer included in the input device shown in Fig. 2.
Fig. 9 is a diagram showing a path of image light from a finger to the camera while the input device shown in Fig. 2 is in use.
Fig. 10 is another diagram showing a path of image light from a finger to the camera while the input device shown in Fig. 2 is in use.
Fig. 11 is a cross-sectional view of an input device according to Modification 2, taken at the same position as Fig. 3.
Fig. 12 is a cross-sectional view of the input device according to Modification 2, taken at the same position as Fig. 4.
Fig. 13 is a plan view of the input device according to Modification 2 with its front plate removed.
Fig. 14 is a cross-sectional view, taken at the same position as Fig. 3, of an input device in the second embodiment.
Fig. 15 is a perspective view showing an input support device in the third embodiment attached to a display.
Fig. 16 is a principle diagram of creating a light plane using an oscillating mirror as a diffusion member.
Hereinafter, the first to third embodiments of the present invention and modifications thereof will be described with reference to the drawings.
In the description of the embodiments and modifications, common objects are denoted by common reference numerals, and overlapping descriptions may be omitted where appropriate. The contents described in each embodiment and modification can also be combined with those of the other embodiments and modifications, as long as no contradiction arises from the combination.
<<First Embodiment>>
The first embodiment describes an example in which the input support mechanism of the present invention is incorporated into an input device that is retrofitted to an existing device including a display and an information processing device.
Existing devices intended for public use include, for example, ATMs, ticket vending machines, and convenience store registers, but are not limited to these. More generally, the device need only include a display and an information processing device that are integrated either directly or via other parts; it need not be a device for public use, nor need it be an existing device. In any case, since all such devices may be known or well known, the details of their construction are not described here.
The display has a function of showing on its screen the information a user needs in order to input operation data. The display may or may not be a touch panel. When the display is a touch panel, the user inputs operation data by touching the display; when it is not, operation data is input from a predetermined input device, such as push buttons, provided near the display of the device.
In this embodiment, the existing device is an ATM already installed in town (see Fig. 1), and the display is a touch panel.
The ATM includes a display 110. The display 110 is attached to a known table 131 provided in the ATM, with its screen 111 exposed, and is connected via a signal line 132 to an information processing device 120 built into the ATM. The exposed portion of the screen 111 is the target surface referred to in the present application.
Fig. 1 conceptually illustrates the ATM; in the figure, only the screen 111 of the display 110 and the table 131 are drawn with solid lines, while the ATM as a whole, the information processing device 120, and the signal line 132 are drawn with chain lines. In this embodiment, the screen 111 of the display 110, which corresponds to the target surface of the present invention, is rectangular, although it need not necessarily be so. The four corners of the screen 111 may be chamfered, but since the screen 111 need not be rectangular in the first place, the definition of "rectangular" need not be strict; in this embodiment the display 110 may be called rectangular even if its four corners are chamfered.
The information processing device 120 is a general computer. It has a function of displaying, on the screen 111 of the display 110, the information that a user who inputs operation data needs for that input. When the device including the display 110 is an ATM, the images displayed on the screen 111 are known or well-known images, for example an image prompting the user to enter the PIN of a cash card, or one for letting the user select the procedure the ATM is to perform, such as withdrawal, deposit, or transfer. As is known or well known, the information processing device 120 generates image data for the image to be displayed on the screen 111 of the display 110 and sends that image data to the display 110 via the signal line 132. As a result, an image corresponding to the image data is displayed on the screen 111 of the display 110.
The information processing device 120 also executes information processing based on the input operation data. When the device is an ATM, for example, the information processing executed by the information processing device 120 includes authentication of the user based on the PIN the user entered, or one of the processes the user selected from withdrawal, deposit, transfer, and so on. When the display 110 of the ATM is a touch panel, operation data is originally sent from the display 110 to the information processing device 120 via the signal line 132. In this embodiment, however, as described later, operation data is sent to the information processing device 120 from the input device. Of course, while operation data is basically input to the information processing device 120 from the input device, it is also possible to allow operation data to be input from the display 110 as well (keeping the display 110's function of inputting operation data alive even after the input device is retrofitted). In that case, even if the input device fails or develops some other fault, operation data can still be input from the display 110 as before.
The input device in this embodiment is used in combination with the ATM described above. Specifically, the input device is used fixed to the table 131 to which the display 110 is attached, appropriately positioned with respect to the screen 111 of the display 110. The user can thereby perform hover input on the screen 111 of the display 110.
Fig. 2 shows a perspective view of the input device 200. Fig. 3 is a cross-sectional view of the input device 200 taken along line A-A in Fig. 2, Fig. 4 is a cross-sectional view taken along line B-B in Fig. 2, and Fig. 5 is a cross-sectional view of the input device 200 cut at about the middle in the thickness direction in Fig. 2.
The input device 200 includes a frame 210. The frame 210 in this embodiment has, although it is not limited to this, a flat rectangular parallelepiped shape whose bottom and top surfaces in Fig. 2 are absent, or open. The frame 210 is made of, for example, metal or an opaque resin.
Viewed from above in Fig. 2, the frame 210 is rectangular. For convenience, the vertical plate of the frame 210 located at the back in Fig. 2 is called the first long side plate 211, the vertical plate facing it the second long side plate 212, the vertical plate located on the left in Fig. 2 the first short side plate 213, and the vertical plate facing it the second short side plate 214.
The first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 are all rectangular, and their lengths in the height direction in Fig. 2 are, in this embodiment for example, all equal. When the input device 200 in Fig. 2 is viewed in plan, the four plates form a rectangle, which in this embodiment may be called the "virtual rectangle".
Above the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 in Fig. 2 is a front plate 215 with a hole 215A. In this embodiment the front plate 215 is a single plate, but this is not restrictive. For example, each of the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 may be made a so-called angle with an L-shaped cross section, and the front plate may be formed by combining the upper parts of those four plates.
The hole 215A of the front plate 215 substantially matches the shape and size of the screen 111 of the display 110 used in combination with the input device 200. The size of the hole 215A need not match the shape and size of the screen 111 exactly; for example, if it is clear from the outset that the user will not make inputs at the edges of the screen 111, the hole 215A may be smaller than the screen 111. That is, as described later, the frame 210 is attached to the table 131 so that the virtual rectangle formed by the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 surrounds the screen 111 of the display 110; at that time, the inner edge of the hole 215A of the front plate 215 may slightly overlap the edge of the screen 111 as viewed from the front. The front plate 215 serves to prevent stray light from the external environment from entering through the hole 215A into the vicinity of the inner surfaces of the four plates (the surfaces on the inside of the frame 210; the same applies hereinafter), but it is not necessarily essential.
Mirrors 221 and 222 are attached to the inner surface of the first long side plate 211 and the inner surface of the first short side plate 213, respectively.
The mirror 221 attached to the inner surface of the first long side plate 211 is rectangular, covers substantially the entire inner surface of the first long side plate 211, and its inner surface is a mirror surface.
The mirror 222 attached to the inner surface of the first short side plate 213 is rectangular, covers substantially the entire inner surface of the first short side plate 213, and its inner surface is a mirror surface.
The mirror 221 may cover the entire inner surface of the first long side plate 211, but in this embodiment it leaves that inner surface uncovered at its top, bottom, left, and right. Likewise, the mirror 222 may cover the entire inner surface of the first short side plate 213, but in this embodiment it leaves that inner surface uncovered at its top, bottom, left, and right. That is, the mirrors 221 and 222 lie along two adjacent sides of the virtual rectangle described above, but each is somewhat shorter than the side it lies along. In this way, the lengths of the mirrors 221 and 222 in the direction of the sides of the virtual rectangle they lie along may be shorter than those sides. However, when the virtual rectangle is viewed from above in Fig. 2, the rectangular region over which the line segment formed by the mirror surface of the mirror 221 and the line segment formed by the mirror surface of the mirror 222 overlap, when each is translated toward the far side of the virtual rectangle parallel to it, preferably has a shape and size that can contain the screen 111 of the display 110 serving as the target surface.
The height-direction ranges in Fig. 2 over which the mirrors 221 and 222 are present are shown as identical in Figs. 2, 3, 4, and so on, but they do not necessarily have to be identical. The height ranges over which the mirrors 221 and 222 are present may be determined according to the distance from the screen 111 within which the user's finger is to be detected.
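This passage does not spell out how coordinates are computed from the camera and mirrors, but the role of a side mirror can be illustrated with elementary 2D geometry: a view in a flat mirror is equivalent to a view from a "virtual camera" mirrored across the mirror line, so a fingertip seen both directly and in the mirror lies at the intersection of the direct ray and the mirrored ray. The coordinates, geometry, and helper function below are purely illustrative and are not the patent's actual algorithm.

```python
def intersect(p1, d1, p2, d2):
    """Intersection of rays p1 + t*d1 and p2 + s*d2 in 2D (Cramer's rule)."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Hypothetical geometry: mirror along the line y = 100 (inner face of one side
# plate), camera at the corner origin. Mirroring the camera across y = 100
# gives the virtual camera at (0, 200).
camera = (0.0, 0.0)
virtual_camera = (0.0, 200.0)

finger = (60.0, 40.0)  # ground truth, used here only to construct the rays
direct_dir = (finger[0] - camera[0], finger[1] - camera[1])
# The fingertip's mirror image is seen along the ray from the virtual camera
# through the fingertip.
mirrored_dir = (finger[0] - virtual_camera[0], finger[1] - virtual_camera[1])

x, y = intersect(camera, direct_dir, virtual_camera, mirrored_dir)
print(x, y)  # 60.0 40.0 — the fingertip position is recovered
```

In practice the two ray directions would come from the pixel positions of the direct and reflected fingertip images; this sketch only shows why one camera plus mirrors along two sides can suffice.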
Light emitters 231 and 232 are provided on the inner surfaces of the second long side plate 212 and the second short side plate 214, respectively, although this is not restrictive. The light emitters 231 and 232 serve to increase the contrast between a finger and its background in an image captured by the camera described later. The light emitters 231 and 232 can be made of known or well-known materials such as EL wire or LEDs. They emit light with power supplied from a power source, not shown.
Both light emitters 231 and 232 may cover the entire inner surfaces of the second long side plate 212 and the second short side plate 214, but in this embodiment each covers only part of the corresponding inner surface; more specifically, although this is not restrictive, they form a continuous predetermined pattern on those inner surfaces.
Fig. 5 shows examples of such continuous patterns.
In the example shown in Fig. 5(A), the light emitters 231 and 232 form vertical stripes, running in the vertical direction of Fig. 2, over the entire inner surface of the second long side plate 212 or the second short side plate 214. In this example the width of the light emitters 231 and 232 equals the width of the portions without them, but this is not necessarily the case.
In the example shown in Fig. 5(B), the light emitters 231 and 232 form horizontal stripes, running in the horizontal direction of Fig. 2, over the entire inner surface of the second long side plate 212 or the second short side plate 214. In this example, too, the width of the light emitters 231 and 232 equals the width of the portions without them, but this is not necessarily the case.
In the example shown in Fig. 5(C), the light emitters 231 and 232 form a checkered pattern over the entire inner surface of the second long side plate 212 or the second short side plate 214.
Of the inner surfaces of the second long side plate 212 and the second short side plate 214, the portions where the light emitters 231 and 232 are not present preferably have a low-lightness color, such as black, so that the contrast with the portions where the light emitters are present is large when imaged by the camera described later.
When the light emitters 231 and 232 are not provided on the inner surfaces of the second long side plate 212 and the second short side plate 214, it is preferable to keep the lightness of the color of those inner surfaces low.
A camera 240 is provided on the inner surface of the frame 210, in this embodiment at the part where the second long side plate 212 and the second short side plate 214 meet (Fig. 6). That said, the camera 240 may be placed at any position that, when the frame 210 is attached to the table 131 bearing the target surface, lies outside the target surface and allows the camera to image the whole of both the mirror surface of the mirror 221 and the mirror surface of the mirror 222. The chain lines in Fig. 6 indicate the angle of view of the camera 240. The height in Fig. 2 at which the camera 240 is mounted can be chosen as appropriate; for example, the camera 240 can be mounted at about the middle of the frame 210 in the height direction of Fig. 2.
The camera 240 need only be capable of capturing moving images and outputting moving image data, which is data about those moving images. Such cameras are of course known or well known and commercially available, and a suitable one may be selected and used as the camera 240. However, a wide angle of view is advantageous for allowing the camera 240 to image the whole of both the mirror surface of the mirror 221 and the mirror surface of the mirror 222, so this point should also be considered when selecting the camera 240.
A computer, not shown, is arranged in the frame 210. The computer is connected to the camera 240 and receives the moving image data generated by the camera 240 from the camera 240 substantially in real time, using a known or well-known technique. The computer may receive the moving image data from the camera 240 either by wire or wirelessly.
The computer need not be attached to the frame 210; it may, for example, be provided inside the ATM. When the computer is provided inside the ATM, the information processing device 120 described above may double as the computer. In this embodiment, although this is not restrictive, a small computer is assumed to be mounted on the frame 210, more specifically in the space inside the frame 210 above or below the camera 240 in Fig. 2.
 コンピュータは、動画像データに基づいて後述する操作データを生成し、それを情報処理装置120へと出力する機能を有する。それが可能であるなら、コンピュータは公知の或いは周知のもので良く、市販のもので良い。
 コンピュータは、図7に示したようなハードウエアを備えている。この実施形態におけるコンピュータは、CPU(Central Processing Unit)311、ROM(Read Only Memory)312、RAM(Random Access Memory)313、インタフェイス315、及びこれらを接続するバス316を備えている。
 CPU311は、演算装置であり、コンピュータ全体の制御を行う。CPU311は、コンピュータプログラムを実行することで、以下に説明するような処理を実行する。
 ROM312は、書換不可能なメモリであり、CPU311を動作させるためのコンピュータプログラム、及びコンピュータが以下の処理を実行する際に必要なデータなどを記憶している。
 RAM313は、書換可能なメモリであり、CPU311が以下の処理を実行する場合のワーク領域を提供する。RAM313には、例えば、動画像データの一部や操作データが一時的に書き込まれることがある。
 インタフェイス315は、CPU311、ROM312、RAM313とコンピュータの図7で示したハードウエアの外部とを繋ぐ窓口となるものであり、CPU311、ROM312、RAM313は、インタフェイス315を介して図7に示したハードウエアを持つコンピュータの外部と、データ交換を行えるようになっている。例えば、コンピュータがカメラ240から有線で動画像データを受取る場合であれば、インタフェイス315はカメラ240とコンピュータとを結ぶ動画像データを送信するための図示せぬ導線と接続される接続端子が接続されている。これにより、コンピュータ内のCPU311等は、ケーブルと端子とインタフェイス315とを介して、カメラ240から動画像データを受取れるようになる。コンピュータがカメラ240から無線で動画像データを受取る場合であれば、動画像データを送信するためにカメラ240に設けられた送信機から動画像データを受信するための受信機が接続されている。これにより、コンピュータ内のCPU311等は、受信機とインタフェイス315とを介して、カメラ240から動画像データを受取れるようになる。同様に、コンピュータは生成した操作データを情報処理装置120へと送信する必要があるので、かかるデータの送信に必要な部品が、インタフェイス315には接続されている。コンピュータから情報処理装置120に有線で操作データを送信するなら、インタフェイス315には、情報処理装置120に有線で接続するケーブル(例えば、USBケーブル)を接続するための端子が、コンピュータから情報処理装置120に無線で操作データを送信するなら、インタフェイス315には、情報処理装置120が備える受信機に操作データを送信する送信機(例えば、Wi-Fi(商標)やBlutooth(商標)の規格の送信機)が接続されることになる。
The computer has a function of generating operation data, which will be described later, based on moving image data and outputting it to the information processing device 120 . If so, the computer may be known or well-known and commercially available.
The computer has hardware as shown in FIG. The computer in this embodiment comprises a CPU (Central Processing Unit) 311, a ROM (Read Only Memory) 312, a RAM (Random Access Memory) 313, an interface 315, and a bus 316 connecting them.
A CPU 311 is an arithmetic device and controls the entire computer. The CPU 311 executes the processing described below by executing the computer program.
The ROM 312 is a non-rewritable memory and stores the computer program that runs the CPU 311, the data the computer needs when executing the processes described below, and so on.
The RAM 313 is a rewritable memory and provides the work area the CPU 311 uses when executing the processes described below. For example, part of the moving image data and the operation data may be temporarily written to the RAM 313.
The interface 315 is the gateway connecting the CPU 311, the ROM 312, and the RAM 313 to whatever lies outside the computer hardware shown in FIG. 7, and through it the CPU 311, the ROM 312, and the RAM 313 can exchange data with the outside of the computer having that hardware. For example, if the computer receives the moving image data from the camera 240 by wire, a connection terminal joined to a conductor (not shown) that links the camera 240 to the computer and carries the moving image data is connected to the interface 315. The CPU 311 and the other components in the computer can thereby receive the moving image data from the camera 240 via the cable, the terminal, and the interface 315. If the computer receives the moving image data from the camera 240 wirelessly, a receiver that receives the moving image data from a transmitter provided on the camera 240 for sending it is connected instead, and the CPU 311 and the other components in the computer receive the moving image data from the camera 240 via the receiver and the interface 315. Similarly, since the computer must transmit the operation data it generates to the information processing device 120, the components needed for transmitting such data are connected to the interface 315: if the operation data is sent from the computer to the information processing device 120 by wire, a terminal for connecting a cable (for example, a USB cable) to the information processing device 120 is connected to the interface 315; if it is sent wirelessly, a transmitter (for example, one conforming to the Wi-Fi (trademark) or Bluetooth (trademark) standards) that sends the operation data to a receiver of the information processing device 120 is connected to the interface 315.
When the CPU 311 executes the above computer program, the various functional blocks shown in FIG. 8 are generated inside the computer.
An input unit 321, an operation data generation unit 322, a coordinate data recording unit 323, and an output unit 324 are generated inside the computer.
The input unit 321 is connected to the interface 315 and receives input from it. The data the input unit 321 receives from the interface 315 is the moving image data sent from the camera 240. The moving image data is passed from the input unit 321 to the operation data generation unit 322.
The operation data generation unit 322 has the function of generating operation data based on the moving image data received from the input unit 321. When generating operation data, it reads and uses the coordinate data recorded in the coordinate data recording unit 323. How the operation data generation unit 322 generates operation data is described later. After generating operation data, the operation data generation unit 322 sends it to the output unit 324.
Coordinate data is recorded in the coordinate data recording unit 323 as described above. Details of the coordinate data are given later.
The output unit 324 has the function of outputting the operation data generated by the operation data generation unit 322 to the outside of the computer. The output unit 324 outputs the operation data to the interface 315, and the interface 315 sends the received operation data to the information processing device 120 by wire or wirelessly.
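The flow of data among these functional blocks can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation; all class names, function names, and the pattern-based lookup are hypothetical stand-ins.

```python
# Illustrative sketch of the functional blocks of FIG. 8 (hypothetical names).
# Frames arrive at the input unit, the operation data generation unit consults
# the coordinate data recording unit, and the output unit forwards the result.

class CoordinateDataStore:
    """Stands in for the coordinate data recording unit 323."""
    def __init__(self, table):
        self.table = table          # e.g. {observed pattern: (x, y)}

    def lookup(self, pattern):
        return self.table.get(pattern)

class OperationDataGenerator:
    """Stands in for the operation data generation unit 322."""
    def __init__(self, store):
        self.store = store

    def process(self, frame_pattern):
        xy = self.store.lookup(frame_pattern)
        if xy is None:
            return None
        # Operation data: screen coordinates plus a "touched" flag.
        return {"x": xy[0], "y": xy[1], "touched": True}

def run_pipeline(frames, store):
    """Input unit -> generator -> output unit (here: collected into a list)."""
    generator = OperationDataGenerator(store)
    out = []
    for frame in frames:                 # the input unit receives each frame
        op = generator.process(frame)
        if op is not None:
            out.append(op)               # the output unit would send this on
    return out
```

In an actual device the output list would instead be transmitted through the interface 315 to the information processing device 120.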
In this embodiment, the input device 200, and more specifically the frame 210, is provided with the input assisting mechanism according to the present invention.
When the frame 210 is fixed to the table 131 as described later, the input assisting mechanism forms a light plane (described later), a plane of light parallel to the screen 111, at a position separated from the screen 111 of the display 110 by a predetermined first distance, or by a shorter distance.
The input assisting mechanism in this embodiment comprises a light source 11 that emits light, and a diffusion member 12 that generates the light plane by spreading the light from the light source 11 into a plane parallel to the screen 111 of the display 110 at the first distance, or at a shorter distance, from the screen 111. The first distance is the distance from the display 110 such that the input device 200 generates operation data when the user's fingertip comes closer to the screen 111 of the display 110 than that distance.
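The relationship between the fingertip, the light plane, and the first distance amounts to two simple threshold comparisons, sketched below. The numeric values are illustrative assumptions, not figures from the patent; the fingertip's distance from the screen is taken as z.

```python
# The light plane sits at plane_distance from the screen; the device treats
# the fingertip as "in range" when it is closer than first_distance.
# All values are hypothetical.

def fingertip_glows(z, plane_distance):
    """The fingertip crosses (or is inside) the light plane when z <= plane_distance."""
    return z <= plane_distance

def input_active(z, first_distance):
    """The input device generates operation data when z < first_distance."""
    return z < first_distance

FIRST_DISTANCE = 20.0      # mm, hypothetical
PLANE_DISTANCE = 20.0      # in this embodiment the plane is at the first distance

# A fingertip at z = 15 mm both glows and lies in the input region;
# at z = 30 mm it does neither.
```

Placing the plane exactly at the first distance, as in this embodiment, makes the glow coincide with the input region's boundary.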
The light emitted by the light source 11 in this embodiment has a wavelength in the visible region. The light source 11 in this embodiment is, though not limited to this, a linear light source that emits a straight beam of light. More specifically, the light source 11 in this embodiment is a laser device that emits a laser beam. Laser devices are of course known or well known and commercially available, so the light source 11 can be selected from such laser devices as appropriate.
The light source 11, which in this embodiment is a laser device, is mounted inside the frame 210, more specifically at the predetermined height, shown in FIG. 2, of the portion where the first long side plate 211 and the second short side plate 213 meet. However, the light source 11 need not be mounted at that position as long as the light plane can be generated at the position described above; for example, the light source 11 may be attached midway along the length of the first long side plate 211 or the second long side plate 212. The laser, which is the straight beam emitted from the light source 11 in this embodiment, is directed, though not necessarily, generally along the diagonal of the frame 210, that is, toward the portion where the second long side plate 212 and the second short side plate 214 meet. In addition, when the input device 200 is attached to the table 131, the light emitted from the light source 11 travels in a direction parallel to the screen 111 of the display 110.
The light emitted from the light source 11 passes through the diffusion member 12, which is arranged slightly inward of the light source 11 with respect to the frame 210. The light that has passed through the diffusion member 12 spreads parallel to the screen 111 of the display 110 at a small predetermined thickness, ideally a thickness corresponding to the diameter of the straight beam. In this embodiment, a cylindrical lens is employed as the diffusion member 12 to achieve such spreading of the light. A cylindrical mirror may be used as the diffusion member 12 instead of a cylindrical lens; in that case, the positional relationship between the diffusion member 12 and the light source 11 is reversed. Cylindrical lenses and cylindrical mirrors are both known or well known and commercially available, so the cylindrical lens or cylindrical mirror used as the diffusion member 12 can be selected from among them as appropriate. The light that has passed through the diffusion member 12 and spread out thinly is the light plane P (see FIGS. 3 and 4). The light plane P is formed, for example, so as to cover the entire screen 111 of the display 110 when viewed from the front. Though not limited to this, in this embodiment the light plane P extends across the full inside of the frame 210, and the distance from the light plane P to the screen 111 of the display 110 is made equal to the first distance.
The light plane P itself is invisible, but when an object that reflects or scatters light crosses it, the part of the object crossing the light plane P glows. By moving a finger along the screen 111 of the display 110 within the frame 210, the user can therefore recognize that the light plane P exists at a predetermined distance from the display 110.
On the other hand, the light plane P according to the present application also covers the case where, at any given instant while the plane is being generated, no planar light exists and only a straight beam exists, but the straight beam sweeps along a plane at high speed, for example in a wiper-like motion or by rotation, so that over a certain length of time the light exists as a plane. Such a light plane P can typically be produced with a linear light source, typically a laser device, combined with a mirror that swings back and forth within a certain angle or a mirror that rotates about a certain axis.
For example, as shown in FIG. 15, suppose the light X1, a straight beam (for example, a laser) emitted from the light source 11, strikes the part corresponding to the swing axis of a planar mirror 19 that swings through the angle α. The light X1 is reflected in the direction X2 when the mirror 19 is at the solid-line position, and in the direction X3 when the mirror 19 is at the two-dot chain-line position (in this case, the mirror 19 corresponds to the diffusion member 12). Therefore, if the straight beam X1 continues to be emitted, it fills the plane between the light X2 and the light X3 (the shaded plane) while the mirror 19 moves from the solid-line position to the two-dot chain-line position. In other words, combining the light source 11, which emits a straight beam, with the swinging mirror 19 makes it possible to create a planar light plane P. Of course, in the example shown in FIG. 15, α is drawn small for convenience, so the central angle of the fan-shaped light plane P created by the light X1 between the lights X2 and X3 is small; if α is made large, however, the combination of the light source 11 and the mirror 19 can generate a light plane P covering the same range as the light plane P created with the light source 11 and the cylindrical-lens diffusion member 12 described above. In this case, the swing speed of the mirror 19 can be set fast enough for the moving straight beam to be perceived by the naked eye as a plane, for example 30 Hz (30 round trips per second).
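The sweep rate needed for the moving beam to appear as a continuous plane can be checked with simple arithmetic; the 30 Hz figure is the example value from the text, and the two-sweeps-per-round-trip assumption is the obvious reading of a back-and-forth swing.

```python
# At 30 round trips per second, the reflected beam paints the whole fan once
# on the outward swing and once on the return swing, i.e. once every 1/60 s,
# which is faster than the eye can resolve, so a solid plane is perceived.

def repaint_interval(round_trips_per_second):
    """Seconds between successive full sweeps of the fan
    (two sweeps per round trip of the swinging mirror)."""
    return 1.0 / (2 * round_trips_per_second)

interval = repaint_interval(30)   # 1/60 s per full sweep
```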
Similarly, a light plane P covering the same range as the light plane P created with the light source 11 and the cylindrical-lens diffusion member 12 can also be generated by directing the straight beam at, for example, the vicinity of the rotation axis of a rotating mirror. In this case, the rotation speed of the mirror can be, for example, 60 revolutions per second.
The swinging or rotating mirror can be constituted by, for example, a MEMS mirror, but is not limited to this.
The usage and operation of the ATM fitted with the input device 200 described above will now be explained.
When the input device 200 is used, it is fixed to the table 131 of the ATM, with the lower portion in FIG. 2 abutting the upper surface of the table 131. Provided that the positioning described later is achieved, the input device 200 would function even if merely placed on the table 131 without being fixed. In use, however, the input device 200 must be positioned at the predetermined position relative to the display 110, as described later, and if it were merely placed on the table 131 it might shift from that position; in this embodiment the input device 200 is therefore fixed to the table 131. The input device 200 may be fixed to the table 131 by a known or well-known technique such as screwing or adhesion.
When the input device 200 is fixed to the table 131, the first long side plate 211, the second long side plate 212, the first short side plate 213, and the second short side plate 214 are made to surround the screen 111 of the display 110 exposed on the upper surface of the table 131. The frame 210 of the input device 200 and the screen of the display 110 are fixed so that, relative to each other, they assume the predetermined positional relationship (see FIG. 9).
In addition, the computer of the input device 200 and the information processing device 120 of the ATM are connected, for example by a cable (not shown), so that operation data can be transmitted from the computer of the input device 200 to the information processing device 120. If the operation data is to be exchanged wirelessly, the necessary settings are made on at least one of the information processing device 120 and the input device 200 so that the operation data can be sent and received wirelessly.
In this state, the user can perform hover input in the space above the screen 111 of the ATM (roughly, the part of the space above the screen 111 flanked by the mirrors 221 and 222 that lies below the light plane P). It is advisable to inform the user in advance, by some means, that hover input can be performed without the fingertip touching the screen 111 by bringing the finger closer to the screen 111 of the display 110 than the position at which the fingertip glows.
In this embodiment, the camera 240 constantly captures a moving image, and the moving image data is constantly sent from the camera 240 to the computer. However, the camera 240 may instead capture a moving image and send the moving image data to the computer only during periods in which the user may perform hover input, for example from when the user starts a transaction (for example, when a cash card is inserted into the ATM) until the transaction ends. The same applies to the light source 11, which may likewise emit light only from the start of a transaction until its end. Further, when the input assisting mechanism is applied to something other than an ATM, the light source 11 may, for example, emit light only when a known or well-known human presence sensor detects a person approaching the device on which hover input is performed, or the light source 11, previously off, may be made to emit light when a finger appears in the image captured by the camera 240. This last form of control, in which the previously off light source 11 is lit when a finger appears in the camera 240's image, can of course also be applied when the input assisting mechanism is used on an ATM. For example, the operation data generation unit 322 can be made to handle the turning on and off of the light source 11. Of course, if the operation data generation unit 322 performs such processing, the interface 315 of the computer provided in the input device 200 and the light source 11 will need to be connected by a cable (not shown).
In any case, when the light source 11 emits light, the light plane P is generated as already explained. In this embodiment, the light plane P remains constant until the light source 11 is turned off.
The moving image data generated by the camera 240 capturing the moving image is sent from the camera 240 to the computer. Though not limited to this, in this embodiment the moving image data travels from the camera 240 through a cable (not shown), a terminal, and the interface 315 to reach the input unit 321 in the computer, and is then sent from the input unit 321 to the operation data generation unit 322.
The imaging range of the moving image captured by the camera 240 includes, for example, the entire space within the frame 210 that is no farther than the first distance from the screen 111, which is the space above the screen 111 in which the hover input described above can be performed; in this embodiment it also includes the screen 111 itself. The imaging range further includes the entire mirror surfaces of the mirrors 221 and 222.
How the image in the moving image specified by the moving image data appears when the user's finger enters the space within the frame 210 through the hole 215A of the frame 210 will be explained taking FIGS. 9 and 10 as examples. Note that FIGS. 9 and 10 omit the input assisting mechanism (the light source 11 and the diffusion member 12).
For example, the element labeled F in FIG. 9 is the user's finger. When the finger F is at that position, the image contains the finger formed by the image light L0 traveling from the finger F directly to the camera 240. The image also contains the fingers formed by the image lights F11 and F12, which reach the camera 240 after being reflected once by the mirror surface of the mirror 221 or the mirror 222. Further, the image contains the fingers formed by the image lights F21 and F22, which reach the camera 240 after being reflected twice, once each by the mirror surfaces of the mirrors 221 and 222. In other words, five images of the finger F appear in the image captured by the camera 240, and each of them is captured from a different direction.
When the finger F is at the position shown in FIG. 10, on the other hand, only the finger formed by the image light L0 traveling from the finger F directly to the camera 240 appears in the image, because the image light reflected by the mirrors 221 and 222 is blocked by the finger F, which is close to the camera 240, and cannot reach the camera 240.
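The number of finger images in one frame follows from counting the optical paths described above; the sketch below merely illustrates that counting, with the occlusion case of FIG. 10 reduced to a boolean flag (a simplification, not the patent's detection logic).

```python
# One direct path, two single-reflection paths (via mirror 221 or mirror 222),
# and two double-reflection paths (221 then 222, or 222 then 221) give five
# finger images, unless a finger close to the camera blocks the reflected
# paths, in which case only the direct image remains (FIG. 10).

def expected_finger_images(reflected_paths_blocked):
    direct = 1
    single_reflection = 2      # via mirror 221, via mirror 222
    double_reflection = 2      # 221 -> 222 and 222 -> 221
    if reflected_paths_blocked:
        return direct
    return direct + single_reflection + double_reflection
```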
As is clear from the above description, the position of the finger F over the screen 111 and the state of the finger captured in each image (the still image of each frame) constituting the moving image specified by the moving image data are in one-to-one correspondence. That is, the combination of the X and Y coordinates of the finger corresponds one-to-one with the state of the finger captured in each image of the moving image.
It is a precondition here that the screen 111 and the frame 210 are in the predetermined relative positional relationship. Given that they are, suppose that for each coordinate of the screen 111 (each combination of X and Y coordinates), an example (or model) of the image in which the finger appears when a finger is at that coordinate is recorded in the coordinate data recording unit 323. The operation data generation unit 322 can then compare the state of the finger appearing in an image of the moving image specified by the moving image data received from the camera 240 at a given moment (the number of fingers, the position of each finger, and, if necessary, its orientation) with the recorded data, and thereby identify the coordinate on the screen 111 directly above which the finger is located at that moment, that is, the X and Y coordinates of the finger. The data to be recorded in the coordinate data recording unit 323, which ties each coordinate to the example image in which the finger appears, is the coordinate data.
In this way, coordinate data on the combination of the X and Y coordinates of the position of the user's finger is generated continuously by the operation data generation unit 322 for as long as the user performs input using the input device 200.
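The comparison of an observed finger state against the recorded coordinate data amounts to a best-match lookup. The sketch below uses a toy similarity score over tuple "patterns" as a hypothetical stand-in for real image comparison; none of these names or data structures come from the patent.

```python
# Toy best-match lookup: each reference entry ties a recorded finger-state
# "pattern" (here just a tuple) to the (x, y) screen coordinate at which it
# was recorded. The similarity function is a placeholder for image matching.

def similarity(a, b):
    """Count of matching elements; a stand-in for real image comparison."""
    return sum(1 for p, q in zip(a, b) if p == q)

def locate_finger(observed, coordinate_data):
    """Return the (x, y) whose recorded pattern best matches the observation."""
    best_xy, best_score = None, -1
    for pattern, xy in coordinate_data:
        s = similarity(observed, pattern)
        if s > best_score:
            best_xy, best_score = xy, s
    return best_xy

coordinate_data = [
    ((1, 0, 0, 1), (10, 20)),
    ((0, 1, 1, 0), (30, 40)),
]
# locate_finger((0, 1, 1, 1), coordinate_data) -> (30, 40)
```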
In this embodiment, when the user's finger comes closer to the screen 111 of the display 110 than the first distance, stays there for a predetermined time, for example 0.3 seconds or more, and then moves away from the screen 111, the finger is deemed to have touched the point on the screen 111 corresponding to where it stayed. When a finger leaves the screen 111 of the display 110 in this way, the operation data generation unit 322 generates, as operation data, a set consisting of the coordinate data indicating the position where the finger stayed and information indicating that the user touched the location specified by that coordinate data. This operation data can be made exactly the same as the pair, consisting of coordinate data plus data indicating that the user touched those coordinates, that can be input using a common touch panel.
To detect that the finger has left the screen 111 of the display 110, the Z coordinate must be detected, taking the distance from the screen 111 as the Z coordinate. The Z coordinate can, however, easily be identified from the distance, in the picture, between the tip of the finger image formed by the image light traveling directly from the user's finger to the camera 240 and the screen 111 appearing in the picture. In this embodiment, the operation data generation unit 322 detects the Z coordinate of the fingertip position the whole time the coordinate data of the fingertip position is being generated, and can thereby detect that the fingertip has left the screen 111 of the display 110.
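The touch decision just described (come within the first distance, dwell for at least 0.3 seconds, then withdraw) can be sketched as a small state machine over timestamped fingertip samples. The sample format, threshold values, and function name below are illustrative assumptions, not the patent's implementation.

```python
# Dwell-then-release tap detector: a touch is reported when the fingertip has
# been closer to the screen than FIRST_DISTANCE for at least DWELL seconds
# and then moves away again.

FIRST_DISTANCE = 20.0   # mm, hypothetical
DWELL = 0.3             # seconds, the example value from the text

def detect_taps(samples):
    """samples: list of (t, x, y, z) tuples in time order.
    Returns [(x, y)] for each completed dwell-and-release touch."""
    taps = []
    enter_t = None
    last_xy = None
    for t, x, y, z in samples:
        if z < FIRST_DISTANCE:
            if enter_t is None:
                enter_t = t          # finger just entered the input region
            last_xy = (x, y)
        else:
            if enter_t is not None and (t - enter_t) >= DWELL:
                taps.append(last_xy)  # operation data: touched coordinates
            enter_t = None
    return taps

samples = [(0.0, 5, 5, 30), (0.1, 5, 5, 10), (0.5, 5, 5, 10), (0.6, 5, 5, 30)]
# detect_taps(samples) -> [(5, 5)]
```

A real device would emit each tap immediately as operation data rather than collecting a list, but the dwell-and-release condition is the same.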
Here, when the user brings a finger to within the first distance, the fingertip crosses the light plane P and therefore glows. Accordingly, by bringing the fingertip toward the screen 111 of the display 110 while visually confirming that the fingertip is glowing, holding the finger still there for, for example, 0.3 seconds, and then moving the finger away from the screen 111, the user can reliably perform input, that is, cause the operation data generation unit 322 to generate operation data.
The operation data is sent from the operation data generation unit 322 to the output unit 324, and from the output unit 324 through the interface 315, a terminal, and a cable (neither shown) to the information processing device 120. On receiving the operation data, the information processing device 120 executes the necessary information processing based on it.
As described above, the operation data, a combination of the X and Y coordinates on the screen and data indicating that those coordinates were touched, can be made exactly the same as the data that would be input from the display 110 to the information processing device 120 if the display 110 were a touch panel, as mentioned earlier. The information processing device 120 can therefore carry out its conventional information processing based on the operation data input from the input device 200 without any change to the images displayed on the display 110, indeed without any modification to the ATM as a whole other than connecting it to the input device 200.
After the user finishes inputting with the input device 200, the light source 11 is turned off.
In the example described above, the operation data generation unit 322 identified the Z coordinate of the fingertip from the distance between the screen 111 and the finger image, among the finger images appearing in the picture at that moment, formed by the image light traveling directly from the user's finger to the camera 240.
However, when the light emitters 231 and 232 are as shown in FIGS. 5(B) and 5(C), the Z coordinate of the fingertip at a given moment can also be identified from which of the light emitters 231 and 232 are hidden by a finger image (not necessarily the finger image formed by the image light traveling directly from the user's finger to the camera 240).
Therefore, by adding data on the patterns in which the light emitters 231 and 232 appear to the coordinate data recorded in the coordinate data recording unit 323 as described above, the operation data generation unit 322 can also identify the Z coordinate of the tip of the user's finger based on the positions and numbers of the light emitters 231 and 232 appearing in the image of each frame of the moving image specified by the moving image data.
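The idea of bounding the Z coordinate from which emitters the finger hides can be sketched as follows. The emitter heights and the "lowest hidden emitter" rule are illustrative assumptions for one plausible geometry, not the patent's actual procedure.

```python
# If emitters sit at known heights above the screen, the set of emitters the
# finger occludes bounds the fingertip's Z coordinate: under the assumed
# geometry, the finger body hides every emitter at or above the fingertip,
# so the fingertip is no higher than the lowest hidden emitter.
# All heights are hypothetical.

EMITTER_HEIGHTS = [5.0, 10.0, 15.0, 20.0]   # mm above the screen, illustrative

def z_upper_bound(hidden):
    """hidden: list of booleans, one per emitter (True = occluded in the image).
    Returns an upper bound on the fingertip's Z, or None if nothing is hidden."""
    hidden_heights = [h for h, occ in zip(EMITTER_HEIGHTS, hidden) if occ]
    if not hidden_heights:
        return None     # nothing hidden: finger above all emitters, or absent
    return min(hidden_heights)

# A finger hiding the 5 mm and 10 mm emitters is no higher than 5 mm:
# z_upper_bound([True, True, False, False]) -> 5.0
```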
The coordinate data recorded in the coordinate data recording unit 323 can also be generated as trained data produced by having a computer perform deep learning, using as teacher data combinations of, for example, the pair of X and Y coordinates at which the finger is located and the image of the finger captured by the camera 240 at that point in time. Naturally, the image data used for the deep learning is assumed to have been obtained with the screen 111 and the frame 210 in their intended relative positions.
In that case, the operation data generation unit 322 functions as artificial intelligence, and becomes able to specify the X, Y, and Z coordinates of the tip of the finger at each point in time from the moving image data.
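The supervised idea described above, pairing captured finger images with known coordinates, can be illustrated in miniature with a 1-nearest-neighbour predictor standing in for the deep-learning model. The feature vectors, function names, and data layout here are all hypothetical.

```python
import math
from typing import List, Sequence, Tuple

Sample = Tuple[Sequence[float], Tuple[int, int]]

def train(samples: Sequence[Sample]) -> List[Sample]:
    """A 1-nearest-neighbour 'model' simply memorises its teacher data:
    (image feature vector, (x, y)) pairs captured with the screen 111 and
    frame 210 in their intended relative positions."""
    return list(samples)

def predict(model: List[Sample], features: Sequence[float]) -> Tuple[int, int]:
    """Return the (x, y) of the stored sample whose features are closest."""
    best = min(model, key=lambda s: math.dist(s[0], features))
    return best[1]
```

A real implementation would replace the memorised table with a trained network, but the input/output contract (image in, coordinates out) is the same.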
<Modification 1>
Modification 1, like the first embodiment, describes an input device 200 attached to an ATM. Both the ATM and the input device 200 are almost identical in Modification 1 and the first embodiment. What differs between them is the function of the operation data generation unit 322 and the manner in which the light source 11 is lit.
In the first embodiment, the light plane P was kept in a constant state for as long as the user was making an input, in other words, for as long as the light source 11 was lit. Also, in the input device 200 of the first embodiment, the operation data generation unit 322 continued to detect the X, Y, and Z coordinates of the fingertip while the user was making an input, or while the light source 11 was lit.
Further, in the first embodiment, by seeing the fingertip glow as it crosses the light plane P produced by the constantly lit light source 11, the user can confirm that the fingertip is close enough to the screen 111 of the display 110 for the input device 200 to cause operation data to be input to the information processing device 120.
While the user is making an input using the input device 200, the fingertip may come too close to the screen 111 and end up touching it. There may also be cases, for example when the data to be input is varied according to the distance between the fingertip and the screen 111 of the display 110, where it is desirable to notify the user that the fingertip has come closer to the screen 111 of the display 110 than the light plane P, or that the distance between the fingertip and the screen 111 of the display 110 has become shorter than the first distance.
To make this possible, in this modification, when the user's fingertip comes closer to the screen 111 than a second distance that is shorter than the first distance (for example, 1/2 to 1/3 of the first distance), the state of light irradiation from the light source 11 is changed.
In Modification 1, as in the first embodiment, the operation data generation unit 322 continues to detect the X, Y, and Z coordinates of the fingertip. When the operation data generation unit 322 of Modification 1 detects, from the Z coordinate of the fingertip position, that the distance between the fingertip and the screen 111 of the display 110 has become shorter than the second distance, it generates proximity data. This proximity data is sent to the light source 11 via the output unit 324, the interface 315 of the computer provided in the input device 200, and a cable (not shown) connecting the interface 315 and the light source 11. The light source 11 then changes the state of its light irradiation. Examples of such a change are blinking of the light, a change of the wavelength of the light within the visible region, or a change of the intensity of the light. If the light blinks, the light plane P blinks, and the glow produced at the fingertip when the fingertip crosses the light plane P blinks as well. If the wavelength of the light is changed, the colour of the glow produced at the fingertip, which the user sees when the fingertip crosses the light plane P, changes. If the intensity of the light is changed, the brightness of the glow produced at the fingertip when it crosses the light plane P changes.
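The two-threshold behaviour of Modification 1 can be sketched as a simple state function. The distance values and state names are hypothetical, and the actual embodiment sends proximity data to the light source 11 over a cable rather than returning a string.

```python
def light_irradiation_state(z: float, first_distance: float, second_distance: float) -> str:
    """State of the glow at the fingertip for a fingertip at distance z
    from the screen 111.

    second_distance is shorter than first_distance (e.g. 1/2 to 1/3 of it).
    """
    if z < second_distance:
        return "changed"   # proximity data sent: blink, new wavelength, or new intensity
    if z < first_distance:
        return "steady"    # fingertip glows steadily inside the light plane P
    return "outside"       # fingertip has not yet reached the light plane P
```

Adding a third threshold, as the text suggests is possible, would simply add another branch.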
Thus, according to Modification 1, when the user's fingertip comes closer to the screen 111 of the display 110 than the first distance, the fingertip first glows in a constant state, and when the fingertip then comes closer to the screen 111 than the second distance, it glows in a different state. By seeing this change in the glow at the fingertip, the user can intuitively recognize the distance between the fingertip and the screen 111 of the display 110.
In Modification 1, two distances, the first distance and the second distance, are set for the distance between the fingertip and the screen 111 of the display 110, and the light source 11 is made to emit light in two different ways; it is of course also possible to set three or more distances and three or more ways of emitting light from the light source 11.
<Modification 2>
Modification 2, like the first embodiment, describes an input device 200 attached to an ATM. Both the ATM and the input device 200 are almost identical in Modification 2 and the first embodiment. The difference is that the input assisting mechanism, which in the first embodiment consisted of one set of a light source 11 and a diffusion member 12, consists in Modification 2 of two sets: light sources 11 and 13 and diffusion members 12 and 14.
The light source 13 and the diffusion member 14 generate a light plane P1, separate from and parallel to the light plane P, at a position closer to the screen 111 of the display 110 than the light plane P created by the light source 11 and the diffusion member 12 (see FIGS. 11 and 12). Like the light plane P, the light plane P1 is formed, for example, so as to cover the entire screen 111 of the display 110 when viewed from the front; although not limited to this, in Modification 2 it spreads across the full inside of the frame 210.
The light source 13 and the diffusion member 14 can be designed in any suitable way; for example, they can be the same as the light source 11 and the diffusion member 12. In Modification 2 they are the same as the light source 11 and the diffusion member 12, except that, although the wavelengths of the light emitted by the light source 13 and the light source 11 both lie in the visible region, the two wavelengths are made different (this is not strictly necessary). Alternatively, the light sources 11 and 13 may emit light of the same wavelength but at different intensities. The light source 13 and the diffusion member 14 are provided directly below the light source 11 and the diffusion member 12, for example at a position corresponding to the second distance described in Modification 1.
When the user makes an input with an input device 200 equipped with these two sets of light sources 11 and 13 and diffusion members 12 and 14, the user's fingertip glows by crossing the light plane P once its distance to the screen 111 of the display 110 becomes shorter than the first distance as the fingertip approaches the screen. When the distance from the user's fingertip to the screen 111 of the display 110 becomes shorter than the second distance, the fingertip crosses both the light plane P and the light plane P1, so the glow at the fingertip differs from the glow when the fingertip was crossing only the light plane P.
Thus, according to Modification 2, the state of the glow at the fingertip changes between when the user's fingertip comes closer to the screen 111 of the display 110 than the first distance and when it comes closer than the second distance. By seeing this change in the glow at the fingertip, the user can intuitively recognize the distance between the fingertip and the screen 111 of the display 110, as in Modification 1. Moreover, in Modification 2 the colours of the glow produced at the fingertip on crossing the light plane P and the light plane P1 differ, which makes this easier for the user to see.
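Unlike Modification 1, Modification 2 needs no proximity signal: which planes the fingertip has crossed follows purely from its distance to the screen. A sketch with hypothetical names:

```python
from typing import List

def planes_crossed(z: float, first_distance: float, second_distance: float) -> List[str]:
    """Which light planes a fingertip at distance z from the screen has crossed.

    Plane P lies at the first distance; plane P1 (a different visible colour
    or intensity) lies at the shorter second distance.  Crossing both changes
    the glow seen at the fingertip without any signal being sent anywhere.
    """
    crossed = []
    if z < first_distance:
        crossed.append("P")
    if z < second_distance:
        crossed.append("P1")
    return crossed
```

The fingertip's glow is the union of the planes it currently intersects, which is what makes the depth cue entirely passive.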
<<Second Embodiment>>
In the second embodiment too, an input device 200 is combined with an ATM, as in the first embodiment.
The input device 200 attached to the ATM incorporates an input assisting mechanism configured in the same manner as in the first embodiment.
The input device 200 of the first embodiment used the camera 240 and the two mirrors 221 and 222 to specify the X and Y coordinates of the fingertip position during input. The input device 200 of the second embodiment likewise specifies the X and Y coordinates of the fingertip position during input, but the principle and mechanism by which it does so differ from those of the first embodiment.
Like the input device 200 of the first embodiment, the input device 200 of the second embodiment has a frame 210, configured in the same way as the frame 210 of the first embodiment.
However, inside the frame 210 of the second embodiment there is neither the camera 240, nor the mirrors 221 and 222, nor the light emitters 231 and 232 that were present in the first embodiment. Instead, a large number of light emitting units 291 and light receiving units 292 are provided on the inner surfaces of the frame 210 of the input device 200 of the second embodiment (FIGS. 13 and 14).
A large number of light emitting units 291 are provided on the inner side of the first long side plate 211 and on the inner side of the first short side plate 213. The light emitting units 291 on the first long side plate 211 are all mounted at the same height (taking the vertical direction in FIG. 14 as height) and at equal intervals along the length of the first long side plate 211. Likewise, the light emitting units 291 on the first short side plate 213 are all mounted at the same height and at equal intervals along the length of the first short side plate 213. The light emitting units 291 on the first long side plate 211 and those on the first short side plate 213 need not be at the same height as each other, but in this embodiment they are.
Each light emitting unit 291 is a light source that emits a straight beam of light, for example infrared light, whose wavelength lies in the invisible region. Each light emitting unit 291 emits its straight beam in the direction perpendicular to the inner surface of the first long side plate 211 or first short side plate 213 to which it is attached, toward the facing second long side plate 212 or second short side plate 214.
A large number of light receiving units 292 are provided on the inner side of the second long side plate 212 and on the inner side of the second short side plate 214.
The number of light receiving units 292 on the second long side plate 212 equals the number of light emitting units 291 on the first long side plate 211, and the two correspond one to one. Each corresponding pair of light emitting unit 291 and light receiving unit 292 is located at exactly matching positions when viewed from behind the first long side plate 211. The infrared light emitted from a light emitting unit 291 on the first long side plate 211 is therefore received by the light receiving unit 292 on the second long side plate 212 that corresponds one to one to that light emitting unit 291.
The number of light receiving units 292 on the second short side plate 214 equals the number of light emitting units 291 on the first short side plate 213, and the two likewise correspond one to one. Each corresponding pair is located at exactly matching positions when viewed from behind the first short side plate 213, so the infrared light emitted from a light emitting unit 291 on the first short side plate 213 is received by the corresponding light receiving unit 292 on the second short side plate 214.
Each light receiving unit 292 detects whether or not it is currently receiving the light emitted by its light emitting unit 291, and generates data indicating the light reception state.
When the user makes an input using the input device 200, all the light emitting units 291 emit infrared light toward their corresponding light receiving units 292. A matrix of infrared beams running vertically and horizontally is thereby formed inside the frame 210 as seen in plan view. In FIGS. 13 and 14, the straight infrared beams are denoted by R, and the arrows attached to them indicate their direction of travel.
When the fingertip of a user making an input with the input device 200 enters the frame 210 and interrupts infrared beams R, for example one vertical and one horizontal, the light receiving units 292 that were receiving the interrupted beams R stop receiving them. The light receiving units 292 that had until then been generating data indicating that light (infrared beam R) was being received now generate data indicating that light is not being received.
Therefore, by detecting which of the light receiving units 292 on the second long side plate 212 and which of the light receiving units 292 on the second short side plate 214 have stopped receiving light, the position of the fingertip relative to the screen of the display 110, that is, the X and Y coordinates of the fingertip, can be detected.
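The X/Y recovery from the interrupted beams can be sketched as follows, assuming (hypothetically) equally spaced beams with a known pitch and taking the centre of the run of blocked receivers on each axis as the coordinate. The names are illustrative.

```python
from typing import Optional, Sequence, Tuple

def fingertip_xy(blocked_long: Sequence[int],
                 blocked_short: Sequence[int],
                 pitch: float) -> Optional[Tuple[float, float]]:
    """Recover the fingertip's X/Y position from the interrupted IR beams.

    blocked_long / blocked_short are the indices of the light receiving
    units 292 on the second long / short side plates that have stopped
    receiving light; pitch is the (assumed uniform) spacing between
    adjacent beams.
    """
    if not blocked_long or not blocked_short:
        return None  # beams must be interrupted on both axes
    x = sum(blocked_long) / len(blocked_long) * pitch
    y = sum(blocked_short) / len(blocked_short) * pitch
    return (x, y)
```

When a fingertip is wide enough to interrupt two adjacent beams, averaging their indices places the estimate between them, giving sub-pitch resolution for free.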
The input device 200 of the second embodiment is also provided with an input assisting mechanism. As in the first embodiment, the input assisting mechanism of the second embodiment includes the light source 11 and the diffusion member 12 (see FIG. 14). The input assisting mechanism is omitted from FIG. 13.
As in the first embodiment, the input assisting mechanism of the second embodiment generates a light plane P parallel to the screen 111 of the display 110 provided in the device to which the input device 200 is attached. Although not limited to this, the light plane P of the second embodiment, like that of the first embodiment, spreads across the full inside of the frame 210.
As described above, the input device 200 of the second embodiment can detect the X and Y coordinates of the fingertip according to which of the vertical and horizontal infrared beams R of the matrix are interrupted by the fingertip. The user therefore cannot make an input with the input device 200 of the second embodiment unless the fingertip is brought toward the screen of the display 110 at least as far as the position of the matrix of infrared beams R. In this case, the distance from the position of the matrix of infrared beams R to the screen 111 of the display 110 is the first distance of the present invention.
The light plane P is provided at a position whose distance from the screen 111 of the display 110 is equal to or shorter than the first distance. In the example shown in FIG. 14, the light plane P is located slightly closer to the screen 111 than the position at the first distance from the screen 111 of the display 110. When the user's fingertip crosses the light plane P, the fingertip glows as in the first embodiment, so the user can see the glow. Whenever the fingertip is glowing, the user's finger has necessarily crossed the matrix of infrared beams R in front of it. Therefore, if the user operates the input device 200 while confirming that the fingertip is glowing, reliable input can be achieved without the erroneous inputs or input failures that would arise if the fingertip did not reach the matrix of infrared beams R.
<<Third Embodiment>>
The third embodiment describes an input assisting device 500.
The input assisting device 500 of the third embodiment is used in combination with a display, such as those manufactured and sold, or announced, by NISSHA Co., Ltd. and Japan Display Inc., that has the function of detecting the on-screen coordinates of the finger of a user who is about to make an input to the display, by detecting the slight change in capacitance that arises between the screen 111 of the display and the user's finger. In other words, the display with which the input assisting device 500 of this embodiment is used is itself, or incorporates the function of, the input device 200 of the first embodiment.
However, even when trying to make an input using such a display, the user cannot readily tell how close the finger must be brought to the screen 111 before the display can detect the finger's coordinates.
The input assisting device 500 of the third embodiment is used to solve this problem.
The input assisting device 500 includes an irradiation unit 510 and a fixing unit 520.
The irradiation unit 510 incorporates the light source 11 and the diffusion member 12 described in the first embodiment, and has the function of emitting planar light, that is, light corresponding to the light plane P of the first embodiment.
The fixing unit 520 has the function of fixing the irradiation unit 510, positioned with respect to the screen 111 of the display, to the display either directly or indirectly via another member. The fixing unit 520 is, for example, rod shaped. It is also desirable to configure the fixing unit 520 so that the distance from the screen 111 to the irradiation unit 510 is variable.
The input assisting device 500 is used with the irradiation unit 510 fixed to the display by the fixing unit 520 so that the irradiation unit 510 is in an appropriate position relative to the screen 111. In that state, the irradiation unit 510 forms a light plane P at the position at which the display can detect the position of a fingertip brought toward the screen 111, or at a position closer to the screen 111 than that. The light plane P is formed, for example, over a range that covers the entire screen 111 when the screen 111 of the display is viewed from the front.
If the user brings the fingertip toward the screen 111 of the display and makes an input while confirming that the fingertip is glowing from crossing the light plane P, the display can reliably identify the position of the fingertip, so no erroneous inputs or input failures occur.

Claims (10)

  1.  An input assisting mechanism used in combination with: a target surface, which is the surface of a display on which display is performed and on which a user performs operations; an information processing device that receives operation data, which is data about an operation performed by the user at a position close to the target surface, and executes predetermined information processing based on the operation data; and an input device that detects the position coordinates, on the target surface, of a user's fingertip that, when the target surface is viewed from the front, is located over the target surface at a position closer to the target surface than a first distance, which is a predetermined distance, and outputs operation data including data about those position coordinates to the information processing device,
     the input assisting mechanism comprising:
     a light source that emits light; and
     a diffusion member that generates a light plane, which is planar light, by spreading the light from the light source over a range on a plane parallel to the target surface and separated from the target surface by a distance equal to or smaller than the first distance, the range covering at least a predetermined portion of the target surface when the target surface is viewed from the front,
     wherein a user who sees a fingertip glowing from the light of the light source as the fingertip crosses the light plane can thereby recognize that the fingertip has come closer to the target surface than the first distance.
  2.  The input assisting mechanism according to claim 1, wherein the light source is a linear light source that emits a straight beam of light, and the diffusion member is a cylindrical lens or a cylindrical mirror.
  3.  The input assisting mechanism according to claim 1 or 2, wherein the input device detects, in addition to the position coordinates of the user's fingertip on the target surface, a distance coordinate, which is the coordinate of the distance from the target surface, and, when it detects on the basis of the distance coordinate that the user's fingertip has approached the target surface to a distance shorter than a second distance, which is a predetermined distance shorter than the distance from the light plane to the target surface, generates a proximity signal and sends the generated proximity signal to the light source, and
     wherein the light source is capable of changing the state of its light irradiation, and changes the state of its light irradiation upon receiving the proximity signal.
  4.  The input assisting mechanism according to claim 3, wherein the change in the state of light irradiation is blinking of the light or a change of the wavelength of the light within the visible region.
  5.  The input assisting mechanism according to claim 1, comprising:
     an auxiliary light source that emits light; and
     an auxiliary diffusion member that generates an auxiliary light plane, which is planar light, by spreading the light from the auxiliary light source over a range on a plane parallel to the target surface and separated from the target surface by a predetermined distance smaller than the first distance, the range covering at least a predetermined portion of the target surface when the target surface is viewed from the front,
     wherein a user who sees a fingertip glowing from the light of the auxiliary light source as the fingertip crosses the auxiliary light plane can thereby recognize that the fingertip has come still closer to the target surface than the first distance.
  6.  The input assisting mechanism according to claim 5, wherein the wavelength of the light from the light source and the wavelength of the light from the auxiliary light source are both wavelengths in the visible region and differ from each other.
    The input support mechanism comprises a sensor that detects that the user is about to make an input with the input device,
    and the light source emits light only when the sensor detects that the user is about to make an input with the input device.
    7. The input support mechanism according to claim 1.
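    Claim 7 gates the light source on a sensor that detects an imminent input. A toy sketch of that gating (class and attribute names are illustrative, not from the patent):

```python
# Sketch of claim 7: the light source irradiates only while a sensor
# reports that the user is about to make an input with the input device.

class ApproachSensor:
    """Stands in for e.g. a motion or presence sensor near the display."""
    def __init__(self):
        self.user_detected = False

class GatedLightSource:
    def __init__(self, sensor):
        self.sensor = sensor

    @property
    def emitting(self):
        # Light is emitted only while the sensor reports an approaching user.
        return self.sensor.user_detected

sensor = ApproachSensor()
light = GatedLightSource(sensor)
assert not light.emitting      # idle: no light plane is generated
sensor.user_detected = True    # sensor sees a hand approaching the display
assert light.emitting
```

    Gating the light plane this way keeps it dark when nobody is near the display, which is the practical point of the claim.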
    The input support mechanism is a separate body from the display and the information processing device, and is integrated with the input device, which is configured to be attached to the display and information processing device that are integrated with each other.
    8. The input support mechanism according to claim 1.
    The input device comprises a frame surrounding the target surface, and the light source and the diffusion member are attached to the frame.
    9. The input support mechanism according to claim 8.
    10. An input system comprising: a target surface, which is the surface of a display on which display is performed and which is the target of the user's operations; an information processing device that receives operation data, which is data about an operation performed by the user at a position close to the target surface, and executes predetermined information processing based on the operation data; an input device that detects the positional coordinates, on the target surface, of the user's fingertip when the fingertip is located over the target surface as viewed from the front of the target surface and at a position closer to the target surface than a first distance, which is a predetermined distance, and outputs operation data including data about the positional coordinates to the information processing device; and an input support mechanism,
    wherein the input support mechanism comprises:
    a light source that emits light; and
    a diffusion member that generates a light plane, which is planar light, by spreading the light from the light source over a range that covers at least a predetermined portion of the target surface, as viewed from the front of the target surface, on a plane parallel to the target surface and separated from the target surface by a distance equal to or smaller than the first distance,
    whereby a user who visually recognizes a fingertip that glows upon receiving the light from the light source when crossing the light plane can recognize that the user's fingertip has come closer to the target surface than the first distance.
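    The claim-10 system flow can be sketched end to end: the input device reports a fingertip only when it is nearer than the first distance, and the information processing device consumes the resulting operation data. This is a hypothetical illustration, not the patented implementation; all names and values are assumptions.

```python
# Sketch of the claim-10 flow: input device -> operation data ->
# information processing device.

FIRST_DISTANCE = 30.0  # hypothetical, mm

class InformationProcessingDevice:
    def __init__(self):
        self.received = []

    def accept(self, operation_data):
        # The real device would run its predetermined information
        # processing here; we just record what arrived.
        self.received.append(operation_data)

def input_device_sample(x, y, distance, ipd):
    """Report one detection to the information processing device, but only
    when the fingertip is closer to the target surface than the first
    distance."""
    if distance < FIRST_DISTANCE:
        ipd.accept({"x": x, "y": y})  # operation data with position coords

ipd = InformationProcessingDevice()
input_device_sample(50, 60, 40.0, ipd)  # fingertip too far: ignored
input_device_sample(50, 60, 12.0, ipd)  # within first distance: reported
assert ipd.received == [{"x": 50, "y": 60}]
```

    The light plane of the input support mechanism sits at or inside the same first distance, so the glow on the fingertip tells the user exactly when samples like these start being reported.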
PCT/JP2022/006132 2021-02-24 2022-02-16 Input assisting mechanism and input system WO2022181412A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-027415 2021-02-24
JP2021027415A JP2022128932A (en) 2021-02-24 2021-02-24 Input support mechanism and input system

Publications (1)

Publication Number Publication Date
WO2022181412A1 true WO2022181412A1 (en) 2022-09-01

Family

ID=83049289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/006132 WO2022181412A1 (en) 2021-02-24 2022-02-16 Input assisting mechanism and input system

Country Status (2)

Country Link
JP (1) JP2022128932A (en)
WO (1) WO2022181412A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009008161A1 (en) * 2007-07-11 2009-01-15 Access Co., Ltd. Portable information terminal
JP2010012158A (en) * 2008-07-07 2010-01-21 Fujifilm Corp Input device and medical appliance
JP2014149643A (en) * 2013-01-31 2014-08-21 Seiko Epson Corp Position detector, method for controlling position detector, and irradiation device
JP2014219938A (en) * 2013-05-10 2014-11-20 株式会社ゲッシュ Input assistance device, input assistance method, and program
JP6200553B1 * 2016-04-28 2017-09-20 Rndplus Co., Ltd. Touch screen device and control method thereof, and display device
JP2018206149A (en) * 2017-06-06 2018-12-27 オムロン株式会社 Input apparatus
JP2022022568A (en) * 2020-06-26 2022-02-07 沖電気工業株式会社 Display operation unit and device

Also Published As

Publication number Publication date
JP2022128932A (en) 2022-09-05

Similar Documents

Publication Publication Date Title
JP6822473B2 (en) Display device
JP6724987B2 (en) Control device and detection method
JP6078884B2 (en) Camera-type multi-touch interaction system and method
WO2012124730A1 (en) Detection device, input device, projector, and electronic apparatus
JP6270898B2 (en) Non-contact input method
KR100974894B1 (en) 3d space touch apparatus using multi-infrared camera
JP6757779B2 (en) Non-contact input device
JP5509391B1 (en) Method and apparatus for detecting a designated position of a reproduced image in a non-contact manner
US20130127705A1 (en) Apparatus for touching projection of 3d images on infrared screen using single-infrared camera
US20180348960A1 (en) Input device
JP2016154035A5 (en)
JP6721875B2 (en) Non-contact input device
JP2017142726A (en) Electronic blackboard system, display device, and display method
KR100936666B1 (en) Apparatus for touching reflection image using an infrared screen
KR100977558B1 (en) Space touch apparatus using infrared rays
WO2022181412A1 (en) Input assisting mechanism and input system
KR101002072B1 (en) Apparatus for touching a projection of images on an infrared screen
JP2014087067A (en) Electronic device, in particular telecommunication device including projection unit and method for operating electronic device
US20220172392A1 (en) Device and method for non-contact optical imaging of a selected surface area of a hand
JP6663736B2 (en) Non-contact display input device and method
JP2014048565A (en) Image display device
JP5856357B1 (en) Non-contact input device and method
JP2014233005A (en) Image display apparatus
JP2003186621A (en) Touch panel and apparatus with the touch panel
JP5957611B1 (en) Non-contact input device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22759451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22759451

Country of ref document: EP

Kind code of ref document: A1