WO2022215346A1 - Electronic device - Google Patents

Electronic device

Info

Publication number
WO2022215346A1
Authority
WO
WIPO (PCT)
Prior art keywords
operator
virtual operation
operation space
space
electronic device
Prior art date
Application number
PCT/JP2022/005227
Other languages
French (fr)
Japanese (ja)
Inventor
順以 山口
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corporation (シャープ株式会社)
Priority to JP2023512839A (JPWO2022215346A1)
Publication of WO2022215346A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Definitions

  • Embodiments according to the present invention relate to electronic devices. This application claims priority to Japanese Patent Application No. 2021-065575 filed in Japan on April 8, 2021, the contents of which are incorporated herein.
  • Cited Document 1 describes a technique that uses a small distance sensor to detect gestures of an operator in a space, and recognizes the movement as an operation for performing predetermined information processing.
  • The configuration of Cited Document 1 raises the problem that, when the operator's finger motion is recognized, a motion for switching the character key type may be erroneously recognized as an actual character selection motion, for example.
  • However, erroneous input in character selection and similar operations can also occur depending on, for example, the relative position of the information processing device and the operator, or the relationship between the touch panel of the information processing device and the operator's viewpoint. Cited Document 1 does not disclose anything about such problems.
  • According to one aspect of the present disclosure, it is possible to provide an electronic device capable of suppressing erroneous input.
  • An electronic device according to one aspect of the present disclosure is an electronic device capable of accepting non-contact information input by an operator, and comprises: a first detection unit that detects an inclination of the electronic device with respect to a predetermined reference axis; a measurement unit that measures the distance to the operator; a second detection unit that detects the position of the operator's eyes; and an acquisition unit that acquires, based on the inclination detected by the first detection unit, the distance measured by the measurement unit, and the eye position detected by the second detection unit, the position of a virtual operation space for recognizing non-contact information input operations by the operator.
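  • The claimed structure above can be pictured as three detectors feeding one acquisition step. The following sketch is purely illustrative and not part of the publication; every name and the numeric mapping are assumptions standing in for the determination table described later.

```python
# Illustrative sketch only: the three detection outputs combined by an acquisition step.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class OperationSpacePosition:
    distance_from_display_cm: float            # Z offset of the operation surface
    surface_offset_xy_cm: Tuple[float, float]  # X/Y shift toward the operator's line of sight
    tilt_deg: float                            # inclination of the operation surface

def acquisition_unit(tilt_deg: float,
                     operator_distance_cm: float,
                     eye_position_px: Tuple[int, int]) -> OperationSpacePosition:
    """Combine tilt (first detection unit), distance (measurement unit) and eye
    position (second detection unit) into a virtual-operation-space position.
    The mapping below is a hypothetical placeholder, not the patented method."""
    dx = (eye_position_px[0] - 320) * 0.01   # assumed 640x480 image, 0.01 cm per pixel
    dy = (eye_position_px[1] - 240) * 0.01
    z = max(5.0, min(15.0, operator_distance_cm / 3.0))
    return OperationSpacePosition(z, (dx, dy), tilt_deg)

print(acquisition_unit(30.0, 20.0, (320, 160)))
```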
  • FIG. 1 is an external view of the smartphone according to the first embodiment.
  • FIG. 2A is an external view of the smartphone according to the first embodiment, and FIG. 2B is a cross-sectional view of the smartphone according to the first embodiment.
  • FIGS. 3A and 3B are hardware configuration diagrams of the smartphone according to the first embodiment.
  • FIG. 3C is a functional block diagram of the CPU according to the first embodiment.
  • FIG. 4A is a flowchart showing the operation of the smartphone according to the first embodiment.
  • FIG. 4B is a flowchart showing the operation of the smartphone according to a modification of the first embodiment.
  • FIG. 5 is a conceptual diagram of the determination table according to the first embodiment.
  • FIG. 6A is a conceptual diagram of the imaging region according to the first embodiment, and FIG. 6B is a conceptual diagram showing the inclination of the smartphone according to the first embodiment.
  • FIGS. 7A to 7C are conceptual diagrams showing the method of determining the position of the virtual operation space in the smartphone according to the first embodiment.
  • FIGS. 8A and 8B are hardware configuration diagrams of the smartphone according to the second embodiment.
  • FIG. 8C is a functional block diagram of the CPU according to the second embodiment.
  • FIG. 9 is a flowchart showing the operation of the smartphone according to the second embodiment.
  • FIGS. 10A and 10B are conceptual diagrams showing the method of calibrating the virtual operation space in the smartphone according to the second embodiment.
  • FIG. 1 is an external perspective view of a smartphone according to this embodiment.
  • As shown in FIG. 1, the smartphone 10 according to this embodiment includes, for example, a power button 11, a display 12, a camera 13, a ToF (Time of Flight) sensor 14, a speaker 15, a microphone 16, and a USB (Universal Serial Bus) terminal 17.
  • The power button 11 is used by the operator of the smartphone 10 to power on the smartphone 10.
  • The display 12 has a touch panel function.
  • The display 12 displays various application screens and, through the touch panel function, accepts various information input by the operator, such as character input and selection operations. The operator can also input information such as characters and numbers without touching the display 12; the details will be described later.
  • Note that FIG. 1 shows, as an example, a case where a numeric keypad is displayed on the display 12 as a display for inputting information from the user.
  • The camera 13 is, for example, a front-facing camera for imaging the operator or the like.
  • The ToF sensor 14 measures the distance to an object using, for example, infrared light. Note that the camera 13 and the ToF sensor 14 may be integrated.
  • The speaker 15 outputs voice during a call, and the microphone 16 receives the operator's voice during a call.
  • The USB terminal 17 is used for charging the smartphone 10 and transferring information.
  • FIG. 2A is an external perspective view of the smartphone 10, similar to FIG. 1, and FIG. 2B is a cross-sectional view of the smartphone 10; they show an example in which the operator inputs numbers on the numeric keypad by a non-contact operation.
  • As shown in FIGS. 2A and 2B, a virtual operation space 19 is provided above the display 12 of the smartphone 10 (that is, in the space between the display 12 and the operator).
  • The virtual operation space 19 is provided at a position separated from the display 12 by a certain distance, and a tap motion by the operator's finger in the virtual operation space 19 is accepted by the smartphone 10 as information input to the numeric keypad 18.
  • This virtual operation space is provided at an appropriate position according to various conditions. The method will be described in detail later.
  • Note that the virtual operation space 19 may or may not actually be displayed in the space so that the operator can recognize it; in other words, it may or may not actually be visible to the operator.
  • FIG. 3A is a hardware configuration diagram showing the internal configuration of the smartphone 10 according to this embodiment.
  • As illustrated, the smartphone 10 includes a CPU (Central Processing Unit) 101, a ROM (Read Only Memory) 102, a RAM (Random Access Memory) 103, a gyro sensor 104, a gyro data processing unit (first detection unit) 105, a ToF sensor 106 (the ToF sensor 14 described in FIG. 1), a ToF data processing unit (measurement unit) 107, a camera 108 (the camera 13 described in FIG. 1), an imaging data processing unit (second detection unit) 109, a position determination unit (acquisition unit) 110, and an input determination unit 111.
  • The CPU 101 controls the operation of the smartphone 10 as a whole.
  • Various processors can be used as the CPU 101; the processor is not necessarily limited to a CPU.
  • The ROM 102 holds various programs and data, such as a program 120 and a determination table 121 for operating the smartphone 10.
  • The RAM 103 functions as a work area for the CPU 101 and holds various programs and data.
  • The gyro sensor 104 detects changes in the rotation angle and orientation of the smartphone 10 as angular velocity and transfers the result to the gyro data processing unit 105.
  • The gyro data processing unit 105 calculates, for example, the angle of the smartphone 10 with respect to the horizontal axis based on the data obtained from the gyro sensor 104.
  • Note that the angle of the smartphone 10 is not necessarily calculated with respect to the horizontal axis; any predetermined reference axis or reference plane may be used, as long as, for example, the relative inclination of the smartphone 10 with respect to the operator can be calculated.
  • The ToF sensor 106 detects the distance to an object positioned on the display 12 side of the smartphone 10 and transfers the result to the ToF data processing unit 107.
  • The ToF data processing unit 107 calculates the distance to the face, for example the eyes, of the operator of the smartphone 10 based on the data obtained from the ToF sensor 106.
  • The camera 108 captures an image of an object located on the display 12 side of the smartphone 10 at a certain angle of view and transfers the result to the imaging data processing unit 109.
  • The imaging data processing unit 109 detects the position of the operator's eyes captured by the camera 108 and calculates the relative positional relationship between the smartphone 10 and the operator. The imaging data processing unit 109 also detects the position and movement of the operator's finger captured by the camera 108.
  • The position determination unit 110 calculates an appropriate position (coordinates) of the virtual operation space 19 described with reference to FIGS. 2A and 2B, based on the angle obtained by the gyro data processing unit 105, the distance to the face obtained by the ToF data processing unit 107, the eye position obtained by the imaging data processing unit 109, and the like.
  • The input determination unit 111 determines whether the position of the operator's finger is within the virtual operation space 19 obtained by the position determination unit 110 and whether the motion of the operator's finger is within the virtual operation space 19. If it is within the virtual operation space 19, the input determination unit 111 accepts an input operation by the operator based on the motion of the finger.
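  • As a hedged illustration of the role of the input determination unit 111 (not an implementation taken from the publication), the sketch below checks whether a detected fingertip lies inside a box-shaped virtual operation space and, if so, maps its X/Y position to a key of a displayed numeric keypad. The box representation, the coordinate conventions, and the key-layout mapping are assumptions.

```python
# Hedged sketch of the input determination step: inside-the-space test plus key mapping.
from typing import Optional, Tuple

def point_in_box(p, box_min, box_max) -> bool:
    return all(lo <= v <= hi for v, lo, hi in zip(p, box_min, box_max))

def determine_input(fingertip_xyz: Tuple[float, float, float],
                    box_min: Tuple[float, float, float],
                    box_max: Tuple[float, float, float],
                    keypad_rows: int = 4, keypad_cols: int = 3) -> Optional[str]:
    """Return the tapped key label, or None if the fingertip is outside the space."""
    if not point_in_box(fingertip_xyz, box_min, box_max):
        return None
    # Normalise X/Y inside the operation surface and map to a keypad cell
    # (top-left origin assumed; a 4x3 numeric keypad layout assumed).
    x = (fingertip_xyz[0] - box_min[0]) / (box_max[0] - box_min[0])
    y = (fingertip_xyz[1] - box_min[1]) / (box_max[1] - box_min[1])
    labels = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "*", "0", "#"]
    col = min(int(x * keypad_cols), keypad_cols - 1)
    row = min(int(y * keypad_rows), keypad_rows - 1)
    return labels[row * keypad_cols + col]

# Example: a fingertip near the centre of the operation surface selects "5".
print(determine_input((3.0, 5.0, 12.0), (0.0, 0.0, 10.0), (6.0, 10.0, 14.0)))
```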
  • Note that some of the hardware in FIG. 3A may be implemented in software. FIG. 3B is a hardware configuration diagram showing the internal configuration of the smartphone 10 according to a modification of FIG. 3A, and FIG. 3C is a functional block diagram of the CPU 101 when the program 120 is executed.
  • As illustrated, the gyro data processing unit 105, the ToF data processing unit 107, the position determination unit 110, and the input determination unit 111 described in FIG. 3A may be realized not as dedicated hardware but by the CPU 101 executing the program 120 and functioning as these processing units.
  • The same applies to the imaging data processing unit 109; however, as shown in FIG. 3B, the imaging data processing unit 109 may be realized by a processor different from the CPU (a GPU (Graphics Processing Unit) 112 in the example of FIG. 3B).
  • FIG. 4A is a flowchart showing the operation of the smartphone 10.
  • A method of determining the position of the virtual operation space 19 of the smartphone 10 and a method of detecting the operator's motion in the virtual operation space 19 are described below.
  • As shown in the figure, the gyro sensor 104 first acquires angular velocity data of the smartphone 10 (step S10).
  • Next, the gyro data processing unit 105 detects, for example, the tilt θ1 of the smartphone 10 with respect to the horizontal axis based on the angular velocity data obtained in step S10 and stores it in the RAM 103 (step S11).
  • The ToF sensor 106 also acquires distance data to objects within the imaging range (step S12).
  • Next, the ToF data processing unit 107 calculates the distance L to the operator's face, for example the eyes, based on the distance data obtained in step S12 and stores it in the RAM 103 (step S13).
  • Furthermore, the camera 108 captures an image of objects within its angle of view θ2 (step S14). Based on the imaging data obtained in step S14, the imaging data processing unit 109 detects the operator's face, for example the eyes, calculates the relative positional relationship between the smartphone 10 and the operator, and stores it in the RAM 103 (step S15).
  • The above steps S10, S12, and S14 are executed, for example, in parallel. Then, based on θ1 obtained in step S11, the distance L to the operator's face obtained in step S13, and the relative positional relationship between the smartphone 10 and the operator's face obtained in step S15, the position determination unit 110 calculates an appropriate position for the virtual operation space 19 and stores it in the RAM 103 (step S16). The appropriate position of the virtual operation space 19 in step S16 can be calculated using, for example, a determination table 121 prepared in advance. The determination table will be described later with reference to FIG. 5.
  • The camera 108 also captures an image of objects within its angle of view θ2 (step S17). This operation may be performed in step S14. The imaging data processing unit 109 then determines whether or not the operator's finger is detected in the imaging data obtained in step S17 (step S18).
  • When the operator's finger is detected in step S18 (step S19, YES), the ToF data processing unit 107 calculates the distance to the operator's finger (step S20). The camera 108 then transfers the imaging data obtained in step S17 (or S14) to the input determination unit 111, and the ToF data processing unit 107 transfers the calculation result of step S20 to the input determination unit 111 (step S21).
  • Of course, the calculation result obtained in step S20 may instead be temporarily held in the RAM 103 and read from the RAM 103 by the input determination unit 111.
  • Based on the position information regarding the virtual operation space 19 obtained in step S16 and the position information of the operator's finger obtained in step S21, the input determination unit 111 determines whether the position and movement of the operator's finger are within the virtual operation space 19 (step S22). If it is determined in step S22 that the operator's finger is within the virtual operation space 19 (step S23, YES), the input determination unit 111 interprets the detected movement of the operator's finger as an operation command to the smartphone. The CPU 101 then performs processing based on that operation command (step S24).
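  • The flow of steps S10 to S24 can be summarized in the minimal sketch below. None of the helper names appear in the publication; the sensor reads are stubbed with fixed values so that the example runs as written, and the table format is an assumption.

```python
# Hedged end-to-end sketch of the FIG. 4A flow (steps S10-S24).
from typing import Optional, Tuple

def read_tilt_deg() -> float:                 # steps S10-S11: gyro sensor + gyro data processing unit
    return 30.0                               # fixed stand-in value

def read_eye_distance_cm() -> float:          # steps S12-S13: ToF sensor + ToF data processing unit
    return 20.0

def read_eye_region() -> str:                 # steps S14-S15: camera + imaging data processing unit
    return "B"

def detect_fingertip() -> Optional[Tuple[float, float, float]]:   # steps S17-S21
    return (2.0, 5.0, 12.0)                   # X, Y on the operation surface and Z distance (cm)

def run_once(determination_table: dict) -> Optional[str]:
    # Step S16: look up the virtual operation space from the three measurements.
    key = (read_eye_region(),
           "a" if read_tilt_deg() < 30 else "b" if read_tilt_deg() < 60 else "c",
           min(4, 1 + int(read_eye_distance_cm() // 15)))
    space = determination_table.get(key)
    finger = detect_fingertip()
    if space is None or finger is None:
        return None
    z_min, z_max = space["z_range_cm"]
    if z_min <= finger[2] <= z_max:           # steps S22-S23: is the fingertip inside the space?
        return "tap at x=%.1f, y=%.1f" % (finger[0], finger[1])   # step S24: treat as a command
    return None

# Example with a single table entry for region B, 30-60 degrees, 15-30 cm:
print(run_once({("B", "b", 2): {"z_range_cm": (10.0, 14.0)}}))    # -> tap at x=2.0, y=5.0
```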
  • Note that the ToF data processing unit 107 may use the imaging data acquired by the camera 108 to identify a specific part of the operator, for example the face, or a finer part such as the eyes, and calculate the distance to the identified part.
  • An example of this case will be described with reference to FIG. 4B. FIG. 4B is a modification of the method described with reference to FIG. 4A and relates to the case where the ToF data processing unit 107 measures the distance to a specific part using the imaging data.
  • As shown in the figure, when the imaging data processing unit 109 detects, for example, the operator's eyes based on the imaging data, as in step S15 described with reference to FIG. 4A, it transmits data on the position of the eyes to the ToF data processing unit 107 (step S30).
  • The ToF data processing unit 107 then calculates the distance to the eyes based on the data regarding the eye position received from the imaging data processing unit 109 (step S31).
  • In this way, the ToF data processing unit 107 may make use of the imaging data.
  • Note that the camera 108 may be a ToF three-dimensional image sensor; for example, the ToF data processing unit 107 and the imaging data processing unit 109 described with reference to FIG. 3A may be integrated. The configuration is not limited as long as it can calculate the distance to a predetermined part of the operator, for example the face, and identify the position of a predetermined part of the operator, for example the eyes.
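  • As a hedged sketch of the FIG. 4B variation (not taken from the publication), the eye position found in the camera image (step S30) can be mapped onto a ToF depth map and the distance at that point returned (step S31). The scaling between the camera image and the depth map is an assumption.

```python
# Hedged sketch: read the measured distance at the pixel where the eyes were detected.
from typing import List, Tuple

def eye_distance_cm(depth_map: List[List[float]],
                    eye_px: Tuple[int, int],
                    image_size: Tuple[int, int]) -> float:
    """Map the eye pixel from camera-image coordinates to depth-map coordinates
    and return the measured distance (cm) at that point."""
    rows, cols = len(depth_map), len(depth_map[0])
    img_w, img_h = image_size
    col = min(cols - 1, eye_px[0] * cols // img_w)
    row = min(rows - 1, eye_px[1] * rows // img_h)
    return depth_map[row][col]

# Example: a 4x4 depth map and an eye detected near the image centre.
depth = [[25.0] * 4 for _ in range(4)]
print(eye_distance_cm(depth, eye_px=(320, 240), image_size=(640, 480)))  # -> 25.0
```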
  • A specific example of the above operation will be described below together with the concept of the determination table 121. First, the determination table is described with reference to FIG. 5. FIG. 5 is a conceptual diagram of the determination table 121.
  • As illustrated, the determination table 121 holds information on the position (spatial coordinates) of the virtual operation space 19 according to the position of the face (in this example, the eyes) within the imaging range of the imaging data obtained by the imaging data processing unit 109, the angle θ1 of the smartphone 10 with respect to, for example, the horizontal axis obtained by the gyro data processing unit 105, and the distance L to the operator's face (in this example, the eyes) obtained by the ToF data processing unit 107.
  • As shown in FIG. 6A, the position of the eyes within the imaging range is classified into three regions A, B, and C along the height direction (for example, the longitudinal direction of the housing of the smartphone 10) of the imaging range 130 obtained by the camera 108.
  • For example, region A is the upper third of the imaging range 130, region B is the central third of the imaging range 130, and region C is the lower third of the imaging range 130.
  • Note that FIG. 6A is only an example: the range need not be divided into three in the height direction, and may instead be divided into two, or into four or more; it may also be divided into a plurality of regions in the width direction instead of the height direction, or in both the height direction and the width direction.
  • The angle θ1 of the smartphone 10 is, for example, the angle of the smartphone 10 with respect to the horizontal axis, as shown in FIG. 6B. More specifically, it is the angle formed between the horizontal axis and the back surface of the smartphone 10.
  • In the example of FIG. 5, the inclination θ1 is classified into three ranges: 0° or more and less than 30°, 30° or more and less than 60°, and 60° or more and 90° or less. At θ1 = 0° the smartphone 10 is parallel to the horizontal axis, and at θ1 = 90° the smartphone 10 stands upright with respect to the horizontal axis, as shown in FIG. 6B.
  • In the example of FIG. 5, the range of the angle θ1 is 0° or more and 90° or less, but the range may be narrower or wider than this.
  • The reference axis for the angle θ1 is also not limited to the horizontal axis.
  • The distance L is the distance to the operator's face measured by the ToF sensor 106, and may be, for example, the distance to the operator's eyes. In the example of FIG. 5, the distance L is classified into four ranges: 0 cm or more and less than 15 cm, 15 cm or more and less than 30 cm, 30 cm or more and less than 45 cm, and 45 cm or more.
  • The determination table 121 holds data on the position (spatial coordinates) of the virtual operation space 19 for each combination of these conditions. For example, when the eye position is within region A and the angle θ1 is within the range 0° ≤ θ1 < 30°, the following position data is held according to the distance L:
    0 cm ≤ L < 15 cm: "Aa-1"
    15 cm ≤ L < 30 cm: "Aa-2"
    30 cm ≤ L < 45 cm: "Aa-3"
    45 cm ≤ L: "Aa-4"
    Further, when θ1 is within the range 30° ≤ θ1 < 60°, corresponding position data is held according to the distance L in the same way.
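  • A determination table of this kind could be represented in software as a simple lookup keyed by (eye region, angle band, distance band). The sketch below is a hedged illustration, not the publication's implementation; only the region-A, 0° to 30° row quoted above is filled in, and the strings "Aa-1" to "Aa-4" stand for the spatial-coordinate records described next.

```python
# Hedged sketch of the determination table 121 as a dictionary lookup.
def angle_band(theta1_deg: float) -> str:
    return "a" if theta1_deg < 30 else "b" if theta1_deg < 60 else "c"

def distance_band(distance_cm: float) -> int:
    if distance_cm < 15: return 1
    if distance_cm < 30: return 2
    if distance_cm < 45: return 3
    return 4

DETERMINATION_TABLE = {
    ("A", "a", 1): "Aa-1",
    ("A", "a", 2): "Aa-2",
    ("A", "a", 3): "Aa-3",
    ("A", "a", 4): "Aa-4",
    # ... remaining combinations of region (A/B/C), angle band (a/b/c) and distance band (1-4)
}

def lookup(eye_region: str, theta1_deg: float, distance_cm: float) -> str:
    return DETERMINATION_TABLE[(eye_region, angle_band(theta1_deg), distance_band(distance_cm))]

print(lookup("A", 10.0, 20.0))  # -> "Aa-2"
```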
  • The information indicating the position of the virtual operation space 19 is, for example, "Aa-1" shown in double quotation marks above. More specifically, when the display surface of the display 12 is taken as a two-dimensional plane of the X axis and the Y axis and the direction perpendicular to the display surface is taken as the Z axis, "Aa-1" includes information such as the distance between the display 12 and the virtual operation space 19 along the Z axis, the X-axis and Y-axis coordinates of the operation surface for the operator in the virtual operation space 19, and, for example, the angle of the virtual operation space 19 with respect to the horizontal axis.
  • The above position data is coordinate information chosen so that the operator's eyes, a predetermined position (for example, near the center) of the virtual operation space 19, and a predetermined position (for example, near the center of the numeric keypad) of the actual input screen displayed on the display 12 form a straight line.
  • For example, when the numeric keypad 18 is displayed on the display 12 as shown in FIG. 2A, the coordinate information is chosen so that the operator's eyes are aligned with the area that receives the number "5" or "8" on the numeric keypad 18.
  • Although FIG. 2A shows the case of the numeric keypad 18, a QWERTY-layout keyboard may also be used.
  • In that case, the coordinate information may be chosen so that the operator's eyes, the area in the virtual operation space 19 that receives the key "G", "H", "J", or "K", and the area of the key "G", "H", "J", or "K" on the display 12 are aligned.
  • The screen displayed on the display 12 may also be one on which the operator selects between two areas such as "YES" and "NO", or selects a single area such as "OK".
  • In such cases as well, the virtual operation space 19 may be provided so as to be aligned with the position of the operator's eyes. In other words, it suffices that the coordinate information realizes a positional relationship such that, when the operator taps a certain area in the virtual operation space 19, the operator can recognize that the area intended by the operator on the display screen of the display 12 has correctly received the input. By using the determination table 121 in this way, the appropriate position of the virtual operation space 19 can easily be obtained in step S16.
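  • The alignment condition described above (operator's eyes, a point of the virtual operation space, and the center of the input screen on one straight line) can be written as a small piece of geometry. The sketch below is an assumption-laden illustration: a display-fixed frame is used with the Z axis perpendicular to the display surface, and the operation surface is placed at a given height z.

```python
# Hedged geometric sketch: intersect the eye-to-display-centre line with the plane Z = z_cm.
from typing import Tuple

def operation_point(eye: Tuple[float, float, float],
                    display_center: Tuple[float, float, float],
                    z_cm: float) -> Tuple[float, float, float]:
    ex, ey, ez = eye
    dx, dy, dz = display_center           # typically (x, y, 0) on the display surface
    t = (z_cm - dz) / (ez - dz)           # 0 at the display, 1 at the eye
    return (dx + t * (ex - dx), dy + t * (ey - dy), z_cm)

# Example: eye 20 cm above the display and shifted 5 cm along Y; operation surface at 8 cm.
print(operation_point(eye=(0.0, 5.0, 20.0), display_center=(0.0, 0.0, 0.0), z_cm=8.0))
# -> (0.0, 2.0, 8.0): the operation point shifts toward the eye as the eye moves.
```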
  • FIG. 7A is a cross-sectional view of the smartphone 10 according to this embodiment, showing the position of the virtual operation space 19A corresponding to the angle of the smartphone 10 with respect to the horizontal axis and to the position of the operator's eyes.
  • In the illustrated case, the smartphone 10 has an angle θ1 of 30° with respect to the horizontal axis, the distance L measured by the ToF sensor 14 is 20 cm,
  • and the position of the operator's eye 200 is within region B of the imaging range 130 of the camera 13.
  • This corresponds to the position data "Bb-2" in the determination table 121 described with reference to FIG. 5. The position determination unit 110 therefore determines the position of the virtual operation space 19A based on the position data "Bb-2", and the input determination unit 111 detects the operator's finger in the virtual operation space 19A.
  • Note that the area of the operation surface in the virtual operation space 19 may be set larger than the display area of the numeric keypad 18 on the display 12 in order to facilitate key input by the operator.
  • FIG. 7B shows a case where the position of the operator's eyes 200 has moved to region C of the imaging range 130 relative to FIG. 7A.
  • The virtual operation space 19A determined in the case of FIG. 7A is indicated by a dashed line.
  • In this example, the distance L measured by the ToF sensor 14 is 35 cm.
  • This corresponds to the position data "Cb-3" in the determination table 121 described with reference to FIG. 5. The position determination unit 110 therefore determines the position of the virtual operation space 19B based on the position data "Cb-3", and the input determination unit 111 detects the operator's finger in the virtual operation space 19B.
  • Comparing the examples of FIGS. 7A and 7B, suppose the virtual operation space 19A remained at the position shown in FIG. 7A regardless of the eye position shown in FIG. 7B.
  • When the operator then moves a finger to the position of the virtual operation space 19A intending to tap, for example, the central portion in the Y direction of the numeric keypad 18 on the display 12, the operator's line of sight is shifted compared with the case of FIG. 7A. As a result, in the virtual operation space 19A, a portion below the central portion would be determined to have been tapped, causing an erroneous input.
  • For example, the smartphone 10 might determine that the symbol "0" located below was selected even though the operator intended to select the number "4" on the numeric keypad.
  • In this embodiment, therefore, the position of the virtual operation space 19 is changed according to the angle θ1, the distance L, and the position of the operator's eyes.
  • As a result, the virtual operation space 19B is provided at a position corresponding to the shift of the line of sight, which can reduce the occurrence of erroneous input by the operator.
  • FIG. 7C shows another example.
  • In this example, the smartphone 10 has an angle θ1 of 60° with respect to the horizontal axis, and initially the virtual operation space 19C indicated by broken lines in FIG. 7C is provided.
  • Assume that the position of the operator's eye 200 then changes, so that the distance L measured by the ToF sensor 14 is 45 cm and the position of the operator's eye 200 is in region A of the imaging range 130 of the camera 13.
  • This corresponds to the position data "Ac-4" in the determination table 121 described with reference to FIG. 5.
  • The position determination unit 110 therefore determines the position of the virtual operation space 19D based on the position data "Ac-4". That is, since the operator now looks down on the smartphone 10 from above rather than viewing it from the front, the virtual operation space 19D is provided above the virtual operation space 19C along the Y direction in accordance with this positional difference. In the example of FIG. 7C, the distance between the display 12 and the virtual operation space 19D along the Z axis also differs from the distance between the display 12 and the virtual operation space 19C. These values are determined by the determination table 121, but are not particularly limited as long as the virtual operation space 19 is provided at a position where input is easier from the operator's viewpoint.
  • The virtual operation space is preferably positioned at a distance of, for example, 10 cm from the smartphone 10.
  • The angle of the virtual operation space 19D with respect to the horizontal axis may also differ from that of the virtual operation space 19C. The same applies to the example of FIG. 7B.
  • FIG. 8A is a hardware configuration diagram showing the internal configuration of the smartphone 10 according to this embodiment, and corresponds to FIG. 3A described in the first embodiment.
  • The configuration according to this embodiment differs from FIG. 3A described in the first embodiment in that the smartphone 10 further includes a calibration control unit (adjustment unit) 140 and that calibration data 122 is held in the RAM 103.
  • The calibration control unit 140 calculates correction data for adjusting, based on the operator's parallax, the position of the virtual operation space 19 determined by the method described in the first embodiment.
  • The calibration data 122 is data for adjusting the position of the virtual operation space 19 based on the correction data calculated by the calibration control unit 140.
  • Some of the hardware in FIG. 8A may be implemented in software, as in the first embodiment.
  • FIGS. 8B and 8C are, respectively, a hardware configuration diagram showing the internal configuration of the smartphone 10 according to a modification of FIG. 8A and a functional block diagram of the CPU 101 when the program 120 is executed. As illustrated, the functions of the calibration control unit 140 may be realized by the CPU 101 executing the program 120.
  • FIG. 9 is a flowchart showing the operation of the smartphone 10 according to this embodiment. The following description focuses on the operation of adjusting the position of the virtual operation space 19 by the calibration control unit 140.
  • The operations of FIG. 9 may be performed, for example, when the smartphone 10 is powered on by the operator for the first time, every time the power is turned on, and/or in response to a setting command within the smartphone.
  • First, the calibration control unit 140 notifies the operator of the start of calibration by displaying a message (for example, a message such as "The input surface will now be adjusted") on the display 12 (step S30). Subsequently, the calibration control unit 140 displays data for calibration on the display 12 (step S31). The calibration control unit 140 further prompts the operator to bring the angle of the smartphone 10 and the position of the operator's face to appropriate positions (step S32). Specifically, a message is displayed on the display 12 asking the operator to position themselves so that, for example, the operator's eyes lie on a line extending substantially perpendicularly from the center of the calibration data on the display 12, in other words, so that the operator's line of sight faces the center of the calibration data.
  • When the calibration control unit 140 determines, based on the detection results from the gyro sensor 104, the ToF sensor 106, and the camera 108, that the angle of the smartphone 10 and the position of the operator's face are appropriate, it prompts the operator to tap a specific portion of the calibration data in the virtual operation space (step S33). At this time as well, the calibration control unit 140 displays a message on the display 12, for example a message asking the operator to tap a specific place.
  • In addition, steps S10 to S21 described with reference to FIGS. 4A and 4B in the first embodiment are executed.
  • Step S16 is completed at least before step S33 is executed.
  • When the operator taps the virtual operation space 19 in response to step S33 (steps S17 to S21), the calibration control unit 140 detects the difference between the position obtained in step S16 and the position in the virtual operation space 19 that the operator actually tapped. This difference is stored in the RAM 103 as the calibration data 122 (step S34). The calibration control unit 140 then notifies the operator of the completion of the calibration (step S35). Thereafter, for example, the position determination unit 110 of the smartphone 10 corrects the virtual operation space 19 obtained in step S16 using the calibration data 122 and determines the position of the virtual operation space 19 (step S36).
  • FIGS. 10A and 10B are external perspective views of the smartphone 10 during execution of the calibration operation.
  • First, the calibration control unit 140 displays the start of calibration on the display 12, and then displays calibration data 20 on the display 12 as shown in FIG. 10A.
  • In the example of FIG. 10A, the letters "A" to "E" are displayed at the four corners and the center of a (3 × 3) display area, but the calibration data 20 is not limited to this example.
  • Next, the calibration control unit 140 displays on the display 12 a message 21 prompting the operator to bring the smartphone 10 and the operator into appropriate positions, as in the example of FIG. 10A.
  • The operator's finger motion is then accepted in the virtual operation space 19E. Note that the area of the operation surface in the virtual operation space 19 may be set larger than the display area of the calibration data on the display 12 in order to facilitate key input by the operator.
  • If the position actually tapped by the operator deviates from the current virtual operation space 19E, the calibration control unit 140 recognizes that there is a deviation, caused by parallax, between the actual virtual operation space 19E and the position recognized by the operator. The calibration control unit 140 therefore detects the amount of deviation Δ between the actually tapped position and the current virtual operation space, as well as its direction. These data are stored in the RAM 103 as the calibration data 122. Thereafter, when the smartphone 10 receives input from the operator in the virtual operation space 19, it sets the position of the virtual operation space 19 to a position shifted leftward by Δ from the position obtained in step S16.
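  • A hedged sketch of the calibration described here and in FIG. 9: the deviation Δ between the position the operator actually tapped and the expected position is stored as calibration data (step S34) and later added when the position of the virtual operation space is determined (step S36). A two-dimensional offset on the operation surface, with leftward as negative X, is an assumption made only for illustration.

```python
# Hedged sketch of storing and applying the calibration offset.
from typing import Tuple

def measure_delta(tapped_xy: Tuple[float, float],
                  expected_xy: Tuple[float, float]) -> Tuple[float, float]:
    """Step S34: deviation between the tapped position and the expected position."""
    return (tapped_xy[0] - expected_xy[0], tapped_xy[1] - expected_xy[1])

def apply_calibration(space_center_xy: Tuple[float, float],
                      delta_xy: Tuple[float, float]) -> Tuple[float, float]:
    """Step S36: shift the virtual operation space by the stored calibration data."""
    return (space_center_xy[0] + delta_xy[0], space_center_xy[1] + delta_xy[1])

delta = measure_delta(tapped_xy=(-1.5, 0.0), expected_xy=(0.0, 0.0))  # tap 1.5 cm to the left
print(apply_calibration((0.0, 0.0), delta))  # -> (-1.5, 0.0): space shifted leftward by delta
```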
  • As described above, according to this embodiment, erroneous input during non-contact operation of the smartphone 10 can be suppressed.
  • Although the smartphone 10 was taken as an example in the above embodiments, the present disclosure is widely applicable to other electronic devices.
  • For example, it can be applied to tablet PCs, televisions, automatic ticket vending machines for trains and movies, automatic check-in machines at airports, cash registers at restaurants, and the like.
  • In such devices, the angle of the display is likely to be constant in many cases, and the angle θ1 may then not need to be taken into account in determining the position of the virtual operation space.
  • However, even in such devices, the angle of the electronic device itself may change when, for example, the device is mounted on a ship or an aircraft, and in such cases it is preferable to take the angle into account.
  • In the above embodiments, the case where the operator's input is performed with a finger and the finger is detected by the camera 108 has been described as an example.
  • However, input by the operator is not limited to a finger; any member capable of designating a specific area, such as a touch pen (stylus pen), may be used.
  • If the imaging data processing unit 109 is set in advance to detect such members, detection of the member may be treated as "a finger has been detected" in step S19 of FIGS. 4A and 4B.
  • Furthermore, the imaging data may contain a plurality of human faces.
  • In this case, the imaging data processing unit 109 may select one face using, for example, face authentication processing. That is, the imaging data processing unit 109 may cause the RAM 103 to hold face data of the operator photographed in advance by the camera 13; when a plurality of faces are recognized in step S14, authentication processing may be performed using the face data held in the RAM 103, and the position of the authenticated face may be calculated in step S15.
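  • A hedged sketch of how the operator's face could be selected when several faces appear in the imaging data: each detected face is compared against face data of the operator registered in advance, and the closest match is kept. The embedding representation and the distance threshold are assumptions; the publication only states that face authentication processing may be used.

```python
# Hedged sketch: keep the detected face that best matches the registered operator.
from typing import List, Optional, Sequence, Tuple

def squared_distance(a: Sequence[float], b: Sequence[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_operator_face(detected: List[Tuple[Tuple[int, int], Sequence[float]]],
                         registered_embedding: Sequence[float],
                         threshold: float = 0.6) -> Optional[Tuple[int, int]]:
    """Return the image position of the face matching the registered operator, if any.
    `detected` is a list of (eye position in pixels, face embedding) pairs."""
    best_pos, best_d = None, threshold
    for position, embedding in detected:
        d = squared_distance(embedding, registered_embedding)
        if d < best_d:
            best_pos, best_d = position, d
    return best_pos

faces = [((100, 80), [0.1, 0.9]), ((400, 90), [0.8, 0.2])]
print(select_operator_face(faces, registered_embedding=[0.75, 0.25]))  # -> (400, 90)
```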
  • The operations described in the first and second embodiments can be implemented by executing, for example, the program 120.
  • The program 120 can be downloaded via the Internet or the like and held in the ROM 102 or the RAM 103, so that the operations described in the first and second embodiments can be realized even after the purchase of the electronic device.

Abstract

The electronic device according to the embodiment is capable of receiving information input from an operator in a non-contact manner, and comprises: a first detection unit that detects the inclination of the electronic device with respect to a predefined reference axis; a measurement unit that measures the distance to the operator; a second detection unit that detects the eye position of the operator; and an acquisition unit that acquires, on the basis of the inclination detected by the first detection unit, the distance measured by the measurement unit, and the eye position detected by the second detection unit, the position of a virtual operation space for recognizing information input operations performed by the operator in the non-contact manner.

Description

Electronic device
Embodiments according to the present invention relate to electronic devices. This application claims priority to Japanese Patent Application No. 2021-065575, filed in Japan on April 8, 2021, the contents of which are incorporated herein.
Cited Document 1 describes a technique that uses a small distance sensor to detect gestures made by an operator in a space and recognizes the movement as an operation for performing predetermined information processing.
JP 2017-058818 A
In the configuration of Cited Document 1, there is a problem that, when the operator's finger motion is recognized, a motion for switching the character key type may be erroneously recognized as an actual character selection motion, for example.
However, with respect to character selection operations and the like, erroneous input may also occur depending on, for example, the relative position of the information processing device and the operator, or the relationship between the touch panel of the information processing device and the operator's viewpoint. Cited Document 1 does not disclose anything about such problems.
According to one aspect of the present disclosure, it is possible to provide an electronic device capable of suppressing erroneous input.
An electronic device according to one aspect of the present disclosure is an electronic device capable of accepting non-contact information input by an operator, and comprises: a first detection unit that detects an inclination of the electronic device with respect to a predetermined reference axis; a measurement unit that measures the distance to the operator; a second detection unit that detects the position of the operator's eyes; and an acquisition unit that acquires, based on the inclination detected by the first detection unit, the distance measured by the measurement unit, and the eye position detected by the second detection unit, the position of a virtual operation space for recognizing non-contact information input operations by the operator.
FIG. 1 is an external view of the smartphone according to the first embodiment. FIG. 2A is an external view and FIG. 2B is a cross-sectional view of the smartphone according to the first embodiment. FIGS. 3A and 3B are hardware configuration diagrams, and FIG. 3C is a functional block diagram of the CPU, according to the first embodiment. FIG. 4A is a flowchart showing the operation of the smartphone according to the first embodiment, and FIG. 4B is a flowchart showing the operation of the smartphone according to a modification of the first embodiment. FIG. 5 is a conceptual diagram of the determination table according to the first embodiment. FIG. 6A is a conceptual diagram of the imaging region, and FIG. 6B is a conceptual diagram showing the inclination of the smartphone, according to the first embodiment. FIGS. 7A to 7C are conceptual diagrams showing the method of determining the position of the virtual operation space in the smartphone according to the first embodiment. FIGS. 8A and 8B are hardware configuration diagrams, and FIG. 8C is a functional block diagram of the CPU, according to the second embodiment. FIG. 9 is a flowchart showing the operation of the smartphone according to the second embodiment. FIGS. 10A and 10B are conceptual diagrams showing the method of calibrating the virtual operation space in the smartphone according to the second embodiment.
The present embodiment will be described below with reference to the drawings. In the drawings, the same or equivalent elements are denoted by the same reference numerals, and overlapping descriptions are omitted. The embodiment described below does not unduly limit the content described in the claims, and not all of the configurations described in the present embodiment are necessarily essential constituent elements of the present disclosure.
<First embodiment>
First, the electronic device according to the first embodiment will be described below, taking a smartphone as an example. FIG. 1 is an external perspective view of the smartphone according to this embodiment.
As shown in FIG. 1, the smartphone 10 according to this embodiment includes, for example, a power button 11, a display 12, a camera 13, a ToF (Time of Flight) sensor 14, a speaker 15, a microphone 16, and a USB (Universal Serial Bus) terminal 17.
The power button 11 is used by the operator of the smartphone 10 to power on the smartphone 10. The display 12 has a touch panel function. The display 12 displays various application screens and, through the touch panel function, accepts various information input by the operator, such as character input and selection operations. The operator can also input information such as characters and numbers without touching the display 12; the details will be described later. Note that FIG. 1 shows, as an example, a case where a numeric keypad is displayed on the display 12 as a display for inputting information from the user. The camera 13 is, for example, a front-facing camera for imaging the operator or the like. The ToF sensor 14 measures the distance to an object using, for example, infrared light. Note that the camera 13 and the ToF sensor 14 may be integrated. The speaker 15 outputs voice during a call, and the microphone 16 receives the operator's voice during a call. The USB terminal 17 is used for charging the smartphone 10 and transferring information.
Next, the concept of non-contact information input on the display 12 described above will be explained with reference to FIGS. 2A and 2B. FIG. 2A is an external perspective view of the smartphone 10, similar to FIG. 1, and FIG. 2B is a cross-sectional view of the smartphone 10; they show an example in which the operator inputs numbers on the numeric keypad by a non-contact operation.
As shown in FIGS. 2A and 2B, a virtual operation space 19 is provided above the display 12 of the smartphone 10 (that is, in the space between the display 12 and the operator). The virtual operation space 19 is provided at a position separated from the display 12 by a certain distance, and a tap motion by the operator's finger in the virtual operation space 19 is accepted by the smartphone 10 as information input to the numeric keypad 18. This virtual operation space is provided at an appropriate position according to various conditions; the method will be described in detail later. Note that the virtual operation space 19 may or may not actually be displayed in the space so that the operator can recognize it; in other words, it may or may not actually be visible to the operator.
FIG. 3A is a hardware configuration diagram showing the internal configuration of the smartphone 10 according to this embodiment. As illustrated, the smartphone 10 includes a CPU (Central Processing Unit) 101, a ROM (Read Only Memory) 102, a RAM (Random Access Memory) 103, a gyro sensor 104, a gyro data processing unit (first detection unit) 105, a ToF sensor 106 (the ToF sensor 14 described in FIG. 1), a ToF data processing unit (measurement unit) 107, a camera 108 (the camera 13 described in FIG. 1), an imaging data processing unit (second detection unit) 109, a position determination unit (acquisition unit) 110, and an input determination unit 111.
The CPU 101 controls the operation of the smartphone 10 as a whole. Various processors can be used as the CPU 101; the processor is not necessarily limited to a CPU. The ROM 102 holds various programs and data, such as a program 120 and a determination table 121 for operating the smartphone 10. The RAM 103 functions as a work area for the CPU 101 and holds various programs and data. The gyro sensor 104 detects changes in the rotation angle and orientation of the smartphone 10 as angular velocity and transfers the result to the gyro data processing unit 105. The gyro data processing unit 105 calculates, for example, the angle of the smartphone 10 with respect to the horizontal axis based on the data obtained from the gyro sensor 104. Note that the angle of the smartphone 10 is not necessarily calculated with respect to the horizontal axis; any predetermined reference axis or reference plane may be used, as long as, for example, the relative inclination of the smartphone 10 with respect to the operator can be calculated. As described above, the ToF sensor 106 detects the distance to an object positioned on the display 12 side of the smartphone 10 and transfers the result to the ToF data processing unit 107. The ToF data processing unit 107 calculates the distance to the face, for example the eyes, of the operator of the smartphone 10 based on the data obtained from the ToF sensor 106. As described above, the camera 108 captures an image of an object located on the display 12 side of the smartphone 10 at a certain angle of view and transfers the result to the imaging data processing unit 109. The imaging data processing unit 109 detects the position of the operator's eyes captured by the camera 108 and calculates the relative positional relationship between the smartphone 10 and the operator. The imaging data processing unit 109 also detects the position and movement of the operator's finger captured by the camera 108. The position determination unit 110 calculates an appropriate position (coordinates) of the virtual operation space 19 described with reference to FIGS. 2A and 2B, based on the angle obtained by the gyro data processing unit 105, the distance to the face obtained by the ToF data processing unit 107, the eye position obtained by the imaging data processing unit 109, and the like. The input determination unit 111 determines whether the position of the operator's finger is within the virtual operation space 19 obtained by the position determination unit 110 and whether the motion of the operator's finger is within the virtual operation space 19. If it is within the virtual operation space 19, the input determination unit 111 accepts an input operation by the operator based on the motion of the finger.
Note that some of the hardware in FIG. 3A may be implemented in software. FIG. 3B is a hardware configuration diagram showing the internal configuration of the smartphone 10 according to a modification of FIG. 3A, and FIG. 3C is a functional block diagram of the CPU 101 when the program 120 is executed.
As illustrated, the gyro data processing unit 105, the ToF data processing unit 107, the position determination unit 110, and the input determination unit 111 described in FIG. 3A may be realized not as dedicated hardware but by the CPU 101 executing the program 120 and functioning as these processing units. The same applies to the imaging data processing unit 109; however, as shown in FIG. 3B, the imaging data processing unit 109 may be realized by a processor different from the CPU (a GPU (Graphics Processing Unit) 112 in the example of FIG. 3B).
FIG. 4A is a flowchart showing the operation of the smartphone 10. A method of determining the position of the virtual operation space 19 of the smartphone 10 and a method of detecting the operator's motion in the virtual operation space 19 are described below.
As shown in the figure, the gyro sensor 104 first acquires angular velocity data of the smartphone 10 (step S10). Next, the gyro data processing unit 105 detects, for example, the tilt θ1 of the smartphone 10 with respect to the horizontal axis based on the angular velocity data obtained in step S10 and stores it in the RAM 103 (step S11).
The ToF sensor 106 also acquires distance data to objects within the imaging range (step S12). Next, the ToF data processing unit 107 calculates the distance L to the operator's face, for example the eyes, based on the distance data obtained in step S12 and stores it in the RAM 103 (step S13).
Furthermore, the camera 108 captures an image of objects within its angle of view θ2 (step S14). Based on the imaging data obtained in step S14, the imaging data processing unit 109 detects the operator's face, for example the eyes, calculates the relative positional relationship between the smartphone 10 and the operator, and stores it in the RAM 103 (step S15).
The above steps S10, S12, and S14 are executed, for example, in parallel. Then, based on θ1 obtained in step S11, the distance L to the operator's face obtained in step S13, and the relative positional relationship between the smartphone 10 and the operator's face obtained in step S15, the position determination unit 110 calculates an appropriate position for the virtual operation space 19 and stores it in the RAM 103 (step S16). The appropriate position of the virtual operation space 19 in step S16 can be calculated using, for example, a determination table 121 prepared in advance. The determination table will be described later with reference to FIG. 5.
The camera 108 also captures an image of objects within its angle of view θ2 (step S17). This operation may be performed in step S14. The imaging data processing unit 109 then determines whether or not the operator's finger is detected in the imaging data obtained in step S17 (step S18).
When the operator's finger is detected in step S18 (step S19, YES), the ToF data processing unit 107 calculates the distance to the operator's finger (step S20). The camera 108 then transfers the imaging data obtained in step S17 (or S14) to the input determination unit 111, and the ToF data processing unit 107 transfers the calculation result of step S20 to the input determination unit 111 (step S21). Of course, the calculation result obtained in step S20 may instead be temporarily held in the RAM 103 and read from the RAM 103 by the input determination unit 111.
Based on the position information regarding the virtual operation space 19 obtained in step S16 and the position information of the operator's finger obtained in step S21, the input determination unit 111 determines whether the position and movement of the operator's finger are within the virtual operation space 19 (step S22). If it is determined in step S22 that the operator's finger is within the virtual operation space 19 (step S23, YES), the input determination unit 111 interprets the detected movement of the operator's finger as an operation command to the smartphone. The CPU 101 then performs processing based on that operation command (step S24).
Note that the ToF data processing unit 107 may use the imaging data acquired by the camera 108 to identify a specific part of the operator, for example the face, or a finer part such as the eyes, and calculate the distance to the identified part. An example of this case will be described with reference to FIG. 4B. FIG. 4B is a modification of the method described with reference to FIG. 4A and relates to the case where the ToF data processing unit 107 measures the distance to a specific part using the imaging data.
As shown in the figure, when the imaging data processing unit 109 detects, for example, the operator's eyes based on the imaging data, as in step S15 described with reference to FIG. 4A, it transmits data on the position of the eyes to the ToF data processing unit 107 (step S30). The ToF data processing unit 107 then calculates the distance to the eyes based on the data regarding the eye position received from the imaging data processing unit 109 (step S31). In this way, the ToF data processing unit 107 may make use of the imaging data. Note that the camera 108 may be a ToF three-dimensional image sensor; for example, the ToF data processing unit 107 and the imaging data processing unit 109 described with reference to FIG. 3A may be integrated. The configuration is not limited as long as it can calculate the distance to a predetermined part of the operator, for example the face, and identify the position of a predetermined part of the operator, for example the eyes.
A specific example of the above operation will be described below together with the concept of the determination table 121. First, the determination table will be described with reference to FIG. 5. FIG. 5 is a conceptual diagram of the determination table 121. As illustrated, the determination table 121 holds information on the position (spatial coordinates) of the virtual operation space 19 according to the position of the face (in this example, the eyes) within the imaging range of the imaging data obtained by the imaging data processing unit 109, the angle θ1 of the smartphone 10 with respect to, for example, the horizontal axis obtained by the gyro data processing unit 105, and the distance L to the operator's face (in this example, the eyes) obtained by the ToF data processing unit 107.
As shown in FIG. 6A, the eye position within the imaging range is classified, along the height direction (for example, the longitudinal direction of the housing of the smartphone 10) of the imaging range 130 obtained by the camera 108, into three regions A, B, and C. For example, region A is the upper third of the imaging range 130, region B is the middle third of the imaging range 130, and region C is the lower third of the imaging range 130. Note that FIG. 6A is merely an example: the range need not be divided into three in the height direction and may be divided into two, or into four or more; it may also be divided into a plurality of regions in the width direction instead of the height direction, or in both the height direction and the width direction.
As shown in FIG. 6B, the angle θ1 of the smartphone 10 is, for example, the angle of the smartphone 10 with respect to the horizontal axis; more specifically, it is the angle formed between the horizontal axis and the back surface of the smartphone 10. In the example of FIG. 5, the inclination θ1 is classified into three ranges: 0° or more and less than 30°, 30° or more and less than 60°, and 60° or more and 90° or less. θ1 = 0° corresponds to the smartphone 10 lying parallel to the horizontal axis, and θ1 = 90° corresponds to the smartphone 10 standing upright with respect to the horizontal axis as shown in FIG. 6B. In the example of FIG. 5, the range of the angle θ1 is 0° or more and 90° or less, but a narrower or wider range may be used. Furthermore, the reference axis for the angle θ1 is not limited to the horizontal axis.
The distance L is the distance to the operator's face measured by the ToF sensor 106, and may be, for example, the distance to the operator's eyes. In the example of FIG. 5, the distance L is classified into four ranges: 0 cm or more and less than 15 cm, 15 cm or more and less than 30 cm, 30 cm or more and less than 45 cm, and 45 cm or more.
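A minimal sketch of how the three table keys of FIG. 5 could be derived is shown below. The thresholds (thirds of the frame, 30°/60°, and 15/30/45 cm) follow the example values given in the text; the function and enum names, and the assumption that the image y coordinate grows downward, are illustrative only.

```kotlin
// Illustrative classification into the three keys of determination table 121.
enum class EyeRegion { A, B, C }

fun eyeRegion(eyeY: Int, frameHeight: Int): EyeRegion = when {
    eyeY < frameHeight / 3     -> EyeRegion.A   // upper third of imaging range 130
    eyeY < 2 * frameHeight / 3 -> EyeRegion.B   // middle third
    else                       -> EyeRegion.C   // lower third
}

fun angleBucket(theta1Deg: Float): Int = when {
    theta1Deg < 30f -> 0   // 0° <= θ1 < 30°   ("a")
    theta1Deg < 60f -> 1   // 30° <= θ1 < 60°  ("b")
    else            -> 2   // 60° <= θ1 <= 90° ("c")
}

fun distanceBucket(distanceCm: Float): Int = when {
    distanceCm < 15f -> 0   // 0 cm <= L < 15 cm
    distanceCm < 30f -> 1   // 15 cm <= L < 30 cm
    distanceCm < 45f -> 2   // 30 cm <= L < 45 cm
    else             -> 3   // 45 cm <= L
}
```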
The determination table 121 then holds, for each combination of these conditions, data on the position (spatial coordinates) of the virtual operation space 19. For example, when the eye position is within region A and the angle θ1 is within the range 0° ≦ θ1 < 30°, the following position data is held according to the distance L.
・0 cm ≦ L < 15 cm: "A-a-1"
・15 cm ≦ L < 30 cm: "A-a-2"
・30 cm ≦ L < 45 cm: "A-a-3"
・45 cm ≦ L: "A-a-4"
When θ1 is within the range 30° ≦ θ1 < 60°, the following position data is held according to the distance L.
・0 cm ≦ L < 15 cm: "A-b-1"
・15 cm ≦ L < 30 cm: "A-b-2"
・30 cm ≦ L < 45 cm: "A-b-3"
・45 cm ≦ L: "A-b-4"
Further, when θ1 is within the range 60° ≦ θ1 ≦ 90°, the following position data is held according to the distance L.
・0 cm ≦ L < 15 cm: "A-c-1"
・15 cm ≦ L < 30 cm: "A-c-2"
・30 cm ≦ L < 45 cm: "A-c-3"
・45 cm ≦ L: "A-c-4"
The same applies to regions B and C. The entries shown above in double quotation marks, such as "A-a-1", are the information indicating the position of the virtual operation space 19. More specifically, taking the display surface of the display 12 as a two-dimensional plane with X and Y axes and the direction perpendicular to the display surface as the Z axis, "A-a-1" includes information such as the distance along the Z axis between the display 12 and the virtual operation space 19, the X and Y coordinates of the surface operated by the operator within the virtual operation space 19, and the angle the virtual operation space 19 forms with, for example, the horizontal axis. For example, the position data is coordinate information such that the operator's eyes, a predetermined position of the virtual operation space 19 (for example, near its center), and a predetermined position of the actual input screen displayed on the display 12 (for example, near the center of a numeric keypad) lie on a straight line.
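The table itself can be pictured as a plain map from the three keys to a position entry. The sketch below uses the label strings ("A-a-1" and so on) as stand-ins for the spatial coordinates that the real entries would carry; the data structures are assumptions made for this example, not the patent's implementation.

```kotlin
// Hypothetical stand-in for determination table 121.
data class TableKey(val region: Char, val angleBucket: Int, val distanceBucket: Int)

val determinationTable: Map<TableKey, String> = buildMap {
    for (region in listOf('A', 'B', 'C'))
        for ((angleIndex, angleLetter) in listOf('a', 'b', 'c').withIndex())
            for (d in 0..3)
                put(TableKey(region, angleIndex, d), "$region-$angleLetter-${d + 1}")
}

fun lookupPositionData(region: Char, angleBucket: Int, distanceBucket: Int): String? =
    determinationTable[TableKey(region, angleBucket, distanceBucket)]
```

With the readings used in FIG. 7A below (region B, θ1 = 30°, L = 20 cm, i.e. angle bucket 1 and distance bucket 1), lookupPositionData('B', 1, 1) returns "B-b-2", matching the entry described there.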
For example, when the numeric keypad 18 shown in FIG. 2A is displayed on the display 12, the position data is coordinate information such that the operator's eyes, the region of the virtual operation space 19 that accepts the number "5" or "8", and the region of the numeric keypad 18 on the display 12 that accepts the number "5" or "8" lie on a straight line. Although the example of FIG. 2A shows the numeric keypad 18, a QWERTY-layout keyboard may also be used. In that case, coordinate information may be selected such that the operator's eyes, the region of the virtual operation space 19 that accepts the key "G", "H", "J", or "K", and the region on the display 12 that accepts the key "G", "H", "J", or "K" lie on a straight line. The screen displayed on the display 12 may also be one that lets the operator choose between two regions, "YES" and "NO", or one that lets the operator select a single region such as "OK". Whatever kind of screen is displayed on the display 12, it suffices that the virtual operation space 19 is provided such that a particular region of the screen displayed on the display 12, the region of the virtual operation space 19 corresponding to that particular region, and the position of the operator's eyes lie on a straight line. In other words, the coordinate information only needs to realize a positional relationship such that, when the operator taps a certain region of the virtual operation space 19, the region the operator intended on the display screen of the display 12 is recognized as having correctly accepted the input. By using the determination table 121 in this way, the appropriate position of the virtual operation space 19 can easily be obtained in step S16.
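The collinearity condition described above can also be written as a small geometric calculation: the point of the virtual operation space assigned to a key is taken on the straight line joining the operator's eye and that key on the display, at a chosen height above the display surface. The sketch below assumes the display surface is the z = 0 plane with +z pointing toward the operator; all names are illustrative.

```kotlin
// Hypothetical sketch: place the virtual counterpart of a displayed key on the
// eye–key line, at the plane z = planeZ in front of the display (z = 0).
data class Vec3(val x: Float, val y: Float, val z: Float)

fun virtualPointForKey(eye: Vec3, keyOnDisplay: Vec3, planeZ: Float): Vec3 {
    // Parameter of the eye–key line at which the z coordinate equals planeZ.
    val t = (planeZ - keyOnDisplay.z) / (eye.z - keyOnDisplay.z)
    return Vec3(
        keyOnDisplay.x + t * (eye.x - keyOnDisplay.x),
        keyOnDisplay.y + t * (eye.y - keyOnDisplay.y),
        planeZ
    )
}
```

For example, the region of the virtual operation space that accepts the key "5" would be centered on virtualPointForKey(eye, centerOfKey5, planeZ) for whatever plane height the position data specifies; centerOfKey5 and planeZ are hypothetical placeholders here.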
Several specific examples of the method of determining the position of the virtual operation space 19 using the determination table 121 will now be described, starting with FIG. 7A. FIG. 7A is a cross-sectional view of the smartphone 10 according to the present embodiment and shows the position of the virtual operation space 19A corresponding to the angle of the smartphone 10 with respect to the horizontal axis and the position of the operator's eyes.
In the example of FIG. 7A, the smartphone 10 is at an angle θ1 of 30° with respect to the horizontal axis, the distance L measured by the ToF sensor 14 is 20 cm, and the operator's eyes 200 are located in region B of the imaging range 130 of the camera 13. In this case, the position data "B-b-2" applies according to the determination table 121 described with reference to FIG. 5. The position determination unit 110 therefore determines the position of the virtual operation space 19A based on the position data "B-b-2", and the input determination unit 111 detects the operator's finger within this virtual operation space 19A. Note that, to make key input easier for the operator, the area of the operation surface in the virtual operation space 19 may be set larger than the display area of the numeric keypad 18 on the display 12.
The example of FIG. 7B shows the case where, relative to FIG. 7A, the position of the operator's eyes 200 has moved to region C of the imaging range 130. In FIG. 7B, the virtual operation space 19A determined in the case of FIG. 7A is indicated by a dashed line for reference.
As illustrated, the distance L measured by the ToF sensor 14 is 35 cm. In this case, the position data "C-b-3" applies according to the determination table 121 described with reference to FIG. 5. The position determination unit 110 therefore determines the position of the virtual operation space 19B based on the position data "C-b-3", and the input determination unit 111 detects the operator's finger within this virtual operation space 19B.
Comparing the examples of FIGS. 7A and 7B above, suppose that the virtual operation space 19A remained at the position shown in FIG. 7A even though the operator's eyes are at the position shown in FIG. 7B. As shown in FIG. 7B, if the operator moves a finger to the position of the virtual operation space 19A intending to tap, for example, the central portion in the Y direction of the numeric keypad 18 on the display 12, the operator's line of sight is shifted compared with the case of FIG. 7A. It would therefore be determined that a portion below the central portion of the virtual operation space 19A was tapped, causing erroneous input. Specifically, although the operator intended to select the number "4" on the numeric keypad, the smartphone 10 might recognize that the symbol "0" located below it was selected. In the present embodiment, however, the position of the virtual operation space 19 is changed according to the angle θ1, the distance L, and the position of the operator's eyes. In the example of FIG. 7B, the virtual operation space 19B is provided at a position corresponding to the shift in the line of sight. This reduces the occurrence of erroneous input by the operator.
FIG. 7C shows yet another example. As illustrated, the smartphone 10 is at an angle θ1 of 60° with respect to the horizontal axis. Suppose that, when the operator's eyes 200 are directly in front of the numeric keypad 18 on the display 12, the virtual operation space 19C indicated by the dashed line in FIG. 7C is provided. Now suppose the position of the operator's eyes 200 changes, the distance L measured by the ToF sensor 14 is 45 cm, and the operator's eyes 200 are located in region A of the imaging range 130 of the camera 13. In this case, the position data "A-c-4" applies according to the determination table 121 described with reference to FIG. 5. The position determination unit 110 therefore determines the position of the virtual operation space 19D based on the position data "A-c-4". That is, since the operator is looking down at the smartphone 10 from above rather than viewing it from the front, the virtual operation space 19D is provided above the virtual operation space 19C along the Y direction in accordance with this positional difference. Note that in the example of FIG. 7C, the distance in the Z-axis direction between the display 12 and the virtual operation space 19D also differs from the distance between the display 12 and the virtual operation space 19C. This is determined by the determination table 121, but there is no limitation as long as the virtual operation space 19 is provided at a position that is easier to operate from the operator's viewpoint. In the case of the smartphone 10, however, the virtual operation space is preferably located within a distance of, for example, 10 cm from the smartphone 10. Of course, the angle of the virtual operation space 19D with respect to the horizontal axis may also differ from that of the virtual operation space 19C. The same applies to the example of FIG. 7B.
In this example as well, if the operator were to tap with respect to the virtual operation space 19C, erroneous input could result. However, by changing the virtual operation space 19C to the virtual operation space 19D in accordance with the change in the position of the operator's eyes, the occurrence of erroneous input can be suppressed.
<Second embodiment>
Next, an electronic device according to a second embodiment will be described. This embodiment adjusts the position of the virtual operation space 19 by focusing on individual differences in the operator's parallax (differences in vision between the left eye and the right eye). Only the points that differ from the first embodiment will be described below.
FIG. 8A is a hardware configuration diagram showing the internal configuration of the smartphone 10 according to the present embodiment, and corresponds to FIG. 3A described in the first embodiment. The configuration according to the present embodiment differs from that of FIG. 3A in that the smartphone 10 further includes a calibration control unit (adjustment unit) 140 and that calibration data 122 is held in the RAM 103.
The calibration control unit 140 calculates correction data for adjusting the position of the virtual operation space 19 determined by the method described in the first embodiment, based on the operator's parallax as described above. The calibration data 122 is adjustment data for the position of the virtual operation space 19 based on the correction data calculated by the calibration control unit 140. These will be described in detail later using a specific example.
As in the first embodiment, some of the hardware in FIG. 8A may be implemented by software. FIGS. 8B and 8C are, respectively, a hardware configuration diagram showing the internal configuration of the smartphone 10 according to the modification of FIG. 3A and a functional block diagram of the CPU 101 when the program 120 is executed. As illustrated, the functions of the calibration control unit 140 may be realized by the CPU 101 when the program 120 is executed.
FIG. 9 is a flowchart showing the operation of the smartphone 10 according to the present embodiment. The following description focuses on the operation of adjusting the position of the virtual operation space 19 by the calibration control unit 140. The operation of FIG. 9 may be executed, for example, when the operator of the smartphone 10 first powers it on, each time the power is turned on, and/or in response to a setting command within the smartphone.
As illustrated, the calibration control unit 140 notifies the operator that calibration is to start, for example by displaying a message on the display 12 (such as "The input surface will now be adjusted") (step S30). The calibration control unit 140 then displays calibration data on the display 12 (step S31). The calibration control unit 140 further prompts the operator so that the angle of the smartphone 10 and the position of the operator's face become appropriate (step S32). Specifically, a message for the operator is displayed on the display 12 so that, for example, the operator's eyes lie approximately on the perpendicular from the center of the calibration data on the display 12, in other words, so that the operator's line of sight squarely faces the center of the calibration data. When the calibration control unit 140 determines, from the detection results of the gyro sensor 104, the ToF sensor 106, and the camera 108, that the angle of the smartphone 10 and the position of the operator's face are appropriate, it prompts the operator to tap a specific portion of the calibration data in the virtual operation space (step S33). At this time as well, the calibration control unit 140 displays a message on the display 12 such as, for example, a request to tap a specific location.
In parallel with the processing of steps S30 to S33 above, steps S10 to S21 described with reference to FIGS. 4A and 4B in the first embodiment are executed (in the case of the processing of FIG. 4B, the processing of steps S30 and S31 of FIG. 4B is executed instead of steps S13 and S15). Note that step S16 is completed at least before step S33 is executed.
After step S33, when the operator taps the virtual operation space 19 (steps S17 to S21), the calibration control unit 140 detects the difference, within the virtual operation space 19, between the position obtained in step S16 and the position actually tapped by the operator, and stores it in the RAM 103 as the calibration data 122 (step S34). The calibration control unit 140 then notifies the operator that the calibration has ended (step S35). Thereafter, the position determination unit 110 of the smartphone 10, for example, corrects the virtual operation space 19 obtained in step S16 using the calibration data 122 and determines the position of the virtual operation space 19 (step S36).
A specific example of the above calibration operation will be described with reference to FIGS. 10A and 10B. FIGS. 10A and 10B are external perspective views of the smartphone 10 while the calibration operation is being executed.
First, after indicating on the display 12 that calibration is to start, the calibration control unit 140 displays the calibration data 20 on the display 12 as shown in FIG. 10A. In the example of FIG. 10A, the letters "A" to "E" are displayed at the four corners and the center of a (3×3) display area, but the calibration data 20 is not limited to this example. After the calibration control unit 140 then displays on the display 12 a message prompting the operator to bring the smartphone 10 and the operator into an appropriate positional relationship, it displays on the display 12 a message 21 prompting the operator to tap, in the example of FIG. 10A, "E" in the calibration data 20. Based on this, the virtual operation space 19E accepts the operator's finger motion. Note that, to make key input easier for the operator, the area of the operation surface in the virtual operation space 19 may be set larger than the display area of the calibration data 20 on the display 12.
At this time, as shown in FIG. 10A, suppose that the operator's tap position is shifted to the left (in the X direction) of the actual "E" region of the virtual operation space 19E. Then, as shown in FIG. 10B, the calibration control unit 140 recognizes that, due to the operator's parallax, there is a deviation between the actual virtual operation space 19E and the position perceived by the operator. The calibration control unit 140 therefore detects the amount of deviation δ between the actually tapped position and the current virtual operation space, together with its direction, and stores these data in the RAM 103 as the calibration data 122. From then on, when accepting the operator's input in the virtual operation space 19, the smartphone 10 sets the position of the virtual operation space 19 to a position shifted leftward by δ from the position obtained in step S16.
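A minimal sketch of this correction is given below: the offset δ between the tapped position and the expected region is computed once, stored, and then added whenever the position of the virtual operation space is determined in step S36. The two-dimensional representation and all names are assumptions made for this example, not the patent's implementation.

```kotlin
// Hypothetical sketch of calibration data 122: a stored 2-D offset applied to the
// virtual operation space obtained from the determination table.
data class CalibrationOffset(val dx: Float, val dy: Float)

fun computeCalibrationOffset(expectedX: Float, expectedY: Float,
                             tappedX: Float, tappedY: Float): CalibrationOffset =
    CalibrationOffset(tappedX - expectedX, tappedY - expectedY)

// In the example of FIG. 10 the tap lands to the left of the expected "E" region,
// so dx is negative and the whole operation plane is shifted leftward by δ.
fun applyCalibration(spaceOriginX: Float, spaceOriginY: Float,
                     cal: CalibrationOffset): Pair<Float, Float> =
    Pair(spaceOriginX + cal.dx, spaceOriginY + cal.dy)
```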
The above calibration operation can suppress erroneous input caused by the operator's parallax and the like.
<Modifications, etc.>
As described above, the configuration according to the embodiments can suppress erroneous input on the non-contact smartphone 10. Although the smartphone 10 was taken as an example in the above embodiments, the configuration is widely applicable to other electronic devices, for example tablet PCs, televisions, automatic ticket vending machines for railways and movie theaters, automatic check-in machines at airports, and cash registers at restaurants and the like. When the positions of such electronic devices and their displays are fixed, the angle of the display is often constant; in that case the gyro sensor 104 is unnecessary, and the angle θ1 need not be considered in determining the position of the virtual operation space 19. However, even when the electronic device is fixed, its angle may change if it is mounted on, for example, a ship or an aircraft, so it is preferable to take the angle θ1 obtained by the gyro sensor 104 into account.
In the above embodiments, the case where the operator's input is performed with a finger and the finger is detected by the camera 108 has been described as an example. However, the operator's input is not limited to a finger; any member capable of designating a specific region, such as a touch pen (stylus pen), may be used. When the imaging data processing unit 109 detects such a member, it may simply recognize in step S19 of FIGS. 4A and 4B that "a finger has been detected".
In step S14 of FIGS. 4A and 4B, the imaging data may contain a plurality of human faces. In that case, the imaging data processing unit 109 may select one face using, for example, face authentication processing. That is, the imaging data processing unit 109 may have the RAM 103 hold face data of the operator captured in advance by the camera 13, and when a plurality of faces are recognized in step S14, it may perform authentication processing using the face data held in advance in the RAM 103 and calculate, in step S15, the position of the authenticated face.
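As a rough sketch of such a selection, the pre-registered face data can be compared with each detected face and the closest match kept; everything below (embedding vectors, the distance measure, the function names) is a hypothetical illustration rather than the patent's method.

```kotlin
// Hypothetical sketch: pick the detected face closest to the operator's registered face data.
fun selectOperatorFace(detected: List<FloatArray>, registered: FloatArray): Int? =
    detected.withIndex()
        .minByOrNull { (_, embedding) -> squaredDistance(embedding, registered) }
        ?.index

fun squaredDistance(a: FloatArray, b: FloatArray): Float {
    var sum = 0f
    for (i in a.indices) {
        val d = a[i] - b[i]
        sum += d * d
    }
    return sum
}
```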
The operations described in the first and second embodiments above can be implemented, for example, by executing the program 120. Even if, for example, the program 120 is not held in the electronic device at the time of purchase by the user, the operations described in the first and second embodiments can be realized after purchase by downloading the program 120 via, for example, the Internet and holding it in the ROM 102 or the RAM 103.
Although the present embodiments have been described in detail above, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matters and effects of the embodiments. All such modifications are therefore intended to be included within the scope of the present disclosure. For example, a term that appears at least once in the specification or drawings together with a different, broader, or synonymous term can be replaced by that different term anywhere in the specification or drawings. All combinations of the embodiments and the modifications are also included within the scope of the present disclosure. Likewise, the configuration and operation of the electronic devices described in the above embodiments are not limited to those described here, and various modified implementations are possible.

Claims (6)

  1.  An electronic device capable of accepting non-contact information input by an operator, the electronic device comprising:
     a first detection unit that detects an inclination of the electronic device with respect to a predetermined reference axis;
     a measurement unit that measures a distance to the operator;
     a second detection unit that detects a position of the operator's eyes; and
     an acquisition unit that acquires a position of a virtual operation space for recognizing the non-contact information input operation by the operator, based on the inclination detected by the first detection unit, the distance measured by the measurement unit, and the eye position detected by the second detection unit.
  2.  The electronic device according to claim 1, further comprising a display capable of displaying an image for prompting the operator to input the information,
     wherein the virtual operation space is a space separated from the display, in which detection of the information input operation of the operator allows it to be determined that the information input has been made on the image displayed on the display.
  3.  The electronic device according to claim 2, wherein the information input operation of the operator is detected from the position of the operator's finger within the virtual operation space.
  4.  The electronic device according to claim 2 or 3, wherein the acquisition unit acquires the position of the virtual operation space such that a first region for inputting first information in the image displayed on the display, a second region corresponding to input of the first information in the virtual operation space, and the position of the operator's eyes lie on a straight line.
  5.  The electronic device according to any one of claims 1 to 4, wherein the acquisition unit acquires the position of the virtual operation space based on a table that holds a relationship of the inclination, the distance, and the eye position to the position of the virtual operation space.
  6.  The electronic device according to any one of claims 1 to 5, further comprising an adjustment unit that prompts the operator to perform a selection operation on a specific region in the virtual operation space acquired by the acquisition unit, and adjusts the position of the virtual operation space based on a relationship between the region selected by the selection operation of the operator and the specific region.

PCT/JP2022/005227 2021-04-08 2022-02-10 Electronic device WO2022215346A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023512839A JPWO2022215346A1 (en) 2021-04-08 2022-02-10

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021065575 2021-04-08
JP2021-065575 2021-04-08

Publications (1)

Publication Number Publication Date
WO2022215346A1 true WO2022215346A1 (en) 2022-10-13

Family

ID=83546306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/005227 WO2022215346A1 (en) 2021-04-08 2022-02-10 Electronic device

Country Status (2)

Country Link
JP (1) JPWO2022215346A1 (en)
WO (1) WO2022215346A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011175623A (en) * 2010-01-29 2011-09-08 Shimane Prefecture Image recognition apparatus, operation determination method, and program
WO2016103521A1 (en) * 2014-12-26 2016-06-30 株式会社ニコン Detection device and program


Also Published As

Publication number Publication date
JPWO2022215346A1 (en) 2022-10-13

Similar Documents

Publication Publication Date Title
US8854320B2 (en) Mobile type image display device, method for controlling the same and information memory medium
JP5802667B2 (en) Gesture input device and gesture input method
US20160299604A1 (en) Method and apparatus for controlling a mobile device based on touch operations
EP2950180B1 (en) Method for determining screen display mode and terminal device
JP6371475B2 (en) Eye-gaze input device, eye-gaze input method, and eye-gaze input program
US9007321B2 (en) Method and apparatus for enlarging a display area
US9544556B2 (en) Projection control apparatus and projection control method
JP2013061848A (en) Noncontact input device
JP2014021596A (en) Tablet terminal, operation receiving method, and operation receiving program
US11928291B2 (en) Image projection device
JP2008186247A (en) Face direction detector and face direction detection method
US20180314326A1 (en) Virtual space position designation method, system for executing the method and non-transitory computer readable medium
US9671881B2 (en) Electronic device, operation control method and recording medium
JP2022188192A (en) Head-mounted display device, and control method therefor
JP6428020B2 (en) GUI device
WO2022215346A1 (en) Electronic device
JP6792721B2 (en) Electronics, programs, controls and control methods
JP2013165334A (en) Mobile terminal device
JP7069887B2 (en) Display control method for mobile terminal devices and mobile terminal devices
JP7035662B2 (en) Display control method for mobile terminal devices and mobile terminal devices
JP2014049023A (en) Input device
JP7179334B2 (en) GESTURE RECOGNITION DEVICE AND PROGRAM FOR GESTURE RECOGNITION DEVICE
JP7087494B2 (en) Display control method for mobile terminal devices and mobile terminal devices
JP2017157135A (en) Input device and input method
JP2014238750A (en) Input device, program therefor, and image display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22784331; Country of ref document: EP; Kind code of ref document: A1)

ENP Entry into the national phase (Ref document number: 2023512839; Country of ref document: JP; Kind code of ref document: A)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 22784331; Country of ref document: EP; Kind code of ref document: A1)