WO2024100935A1 - Dispositif d'entrée et procédé d'entrée (Input device and input method)


Info

Publication number
WO2024100935A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
user
gaze position
gaze
processor
Prior art date
Application number
PCT/JP2023/026902
Other languages
English (en)
Japanese (ja)
Inventor
匡夫 濱田
毅 吉原
智広 森川
要介 田中
Original Assignee
パナソニックIpマネジメント株式会社
Priority date
Filing date
Publication date
Application filed by パナソニックIpマネジメント株式会社
Publication of WO2024100935A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry

Definitions

  • This disclosure relates to an input device and an input method.
  • Patent Document 1 discloses an information processing device that acquires a history of information indicating the correspondence between the position of a user's gaze point and the position of an index indicating an operation position operated by the user, detects the user's gaze point, and controls the display position of the index based on the acquired history of information indicating the correspondence so that the index is displayed at a position corresponding to the detected current position of the gaze point.
  • the gaze point calibration process requires the accumulation of information indicating the correspondence between the gaze point position and the operation position. Therefore, in an environment where information indicating the correspondence cannot be accumulated for each user, such as an Automatic Teller Machine (ATM) used outside the home, the information processing device may not be able to perform sufficient calibration, which may result in an error in the operation position, making it difficult to perform input operations based on the gaze position.
  • the present disclosure has been devised in consideration of the above-mentioned conventional situation, and aims to provide an input device and input method that make it more efficient to calibrate the gaze position in gaze input.
  • the present disclosure provides an input device capable of accepting an input operation based on a user's gaze position, the input device comprising: a display unit that displays an input screen capable of accepting the input operation; a camera that images the user; a calculation unit that calculates a correction parameter for calibrating the gaze position of the first user with respect to the input screen based on the gaze position of the first user shown in a first captured image; and a processor that detects the gaze position of a second user shown in a second captured image taken after calculating the correction parameter, calibrates the gaze position of the second user using the correction parameter, and accepts the input operation with respect to the input screen based on the calibrated gaze position of the second user.
  • the present disclosure also provides an input method performed by an input device capable of accepting an input operation based on a user's gaze position, the input method including: displaying an input screen capable of accepting the input operation; acquiring a first captured image of the user; calculating a correction parameter for calibrating the gaze position of the first user with respect to the input screen based on the gaze position of the first user shown in the first captured image; acquiring a second captured image of the user; calibrating the gaze position of a second user shown in the second captured image using the correction parameter; and accepting the input operation with respect to the input screen based on the calibrated gaze position of the second user.
  • the present disclosure also provides an input device capable of accepting an input operation based on a user's gaze position, the input device including a processor that detects the gaze position of a first user shown in a first captured image captured by a camera and the gaze position of a second user shown in a second captured image captured by the camera, and accepts the input operation on an input screen capable of accepting the input operation based on the gaze position of the first user and the gaze position of the second user.
  • This disclosure makes it possible to more efficiently calibrate gaze position during gaze input.
  • FIG. 1 is a block diagram showing an example of an internal configuration of a gaze input device according to a first embodiment;
  • FIG. 2 is a diagram for explaining an example of an operation procedure of the eye-gaze input device according to the first embodiment;
  • FIG. 3 is a diagram for explaining an example of an operation procedure of the eye-gaze input device according to the first embodiment;
  • FIG. 4 is a diagram for explaining an example of a method for calibrating a gaze position;
  • FIG. 5 is a diagram for explaining a method for calculating the movement direction of the gaze position;
  • FIG. 6 is a diagram for explaining a method of accepting an eye-gaze input operation based on the moving direction of the eye-gaze position;
  • FIG. 7 is a diagram showing an example of an angle at which an eye-gaze input operation can be accepted based on the moving direction of the eye-gaze position;
  • FIG. 8 is a diagram for explaining an example of a dead region;
  • FIG. 9 is a diagram for explaining a first example of an eye-gaze input operation procedure;
  • FIG. 10 is a diagram for explaining a first example of an eye-gaze input operation procedure;
  • FIG. 11 is a diagram for explaining a second example of an eye-gaze input operation procedure;
  • FIG. 12 is a diagram for explaining a second example of an eye-gaze input operation procedure;
  • FIG. 13 is a diagram showing another example of an input screen;
  • FIG. 14 is a diagram showing another example of an input screen.
  • Fig. 1 is a block diagram showing an example of the internal configuration of the gaze input device P1 according to embodiment 1. It should be noted that the gaze input device P1 shown in Fig. 1 is merely an example, and needless to say, the present invention is not limited to this.
  • the gaze input device P1 is equipped with a camera 13 capable of capturing an image of the face of a user looking at a display 14, and is realized, for example, by a personal computer (hereinafter referred to as "PC"), a notebook PC, a tablet terminal, a smartphone, etc.
  • the gaze input device P1 is capable of accepting gaze input operations based on the user's gaze position.
  • the gaze input device P1 is a system capable of accepting input operations based on the user's gaze, and includes a processor 11, a memory 12, a camera 13, a display 14, and a database DB1. Note that the database DB1 may be configured separately from the gaze input device P1.
  • the camera 13 and the display 14 may also be configured separately from the gaze input device P1.
  • The processor 11, which is an example of a calculation unit, is configured using, for example, a Central Processing Unit (CPU) or a Field Programmable Gate Array (FPGA), and performs various processes and controls in cooperation with the memory 12. Specifically, the processor 11 references the programs and data stored in the memory 12 and executes the programs to realize the functions of each unit.
  • the processor 11 outputs the calibration screen SC0 (see FIG. 4, an example of an input screen) to the display 14 and displays it.
  • the processor 11 then causes the camera 13 to capture an image of the user looking at the calibration screen SC0, and calculates correction parameters (e.g., a transformation matrix, etc.) for correcting (transforming) the positional deviation of the user's gaze position detected using the captured image of the user output from the camera 13.
  • After calculating the correction parameters, the processor 11 displays the input screen SC1 (see Figs. 6, 7, and 8) and starts accepting gaze input operations by the user on the input screen SC1.
  • the processor 11 uses the correction parameters to correct and store the gaze position of the user detected based on the captured image output from the camera 13, and accepts input operations of input information (e.g., PIN code, symbols, pictograms, stroke order, password, etc.) based on the time-series changes in the gaze position.
  • the processor 11 performs image analysis on the captured image output from the camera 13 and generates measurement environment information. Based on the generated measurement environment information, the processor 11 determines a threshold value for each variability of the detected gaze positions.
  • the measurement environment information here includes, for example, any one of the following information: the size of the display 14, the brightness of the user's face area shown in the face image, the distance between the display 14 and the user (imaging distance), the angle of the user's face, etc.
  • the processor 11 compares the input information based on the received input operation result with the user's registration information (e.g., PIN code, password, etc.) registered in the database DB1.
  • the user's registration information registered in the database DB1 may be registered in the memory 12 instead of the database DB1.
  • Memory 12 has, for example, a Random Access Memory (hereinafter referred to as "RAM”) as a working memory used when executing each process of processor 11, and a Read Only Memory (hereinafter referred to as "ROM”) that stores programs and data that define the operation of processor 11. Data or information generated or acquired by processor 11 is temporarily stored in RAM. Programs that define the operation of processor 11 are written in ROM. Memory 12 stores the size of display 14, etc. Memory 12 may also store the installation position and distance of camera 13 relative to display 14, etc.
  • the camera 13 is configured to have at least an image sensor (not shown) and a lens (not shown).
  • the image sensor is, for example, a solid-state imaging element such as a Charged-Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS), and converts the optical image formed on the imaging surface into an electrical signal.
  • The display 14, which is an example of a display unit, is configured using, for example, a Liquid Crystal Display (LCD) or an organic electroluminescence (EL) display.
  • the display 14 displays the calibration screen SC0 (see FIG. 4), the input screen SC1 (see FIG. 6, FIG. 7, FIG. 8), etc., output from the processor 11.
  • Database DB1 is a so-called storage device, and is configured using a storage medium such as a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • Database DB1 stores user registration information (e.g., PIN code, password, etc.) in a manner that allows it to be managed for each user.
  • Figure 2 is a diagram for explaining an example of the operation procedure of the eye-gaze input device P1 according to embodiment 1.
  • Figure 3 is a diagram for explaining an example of the operation procedure of the eye-gaze input device P1 according to embodiment 1.
  • FIG. 4 is a diagram for explaining an example of a method for calibrating the eye-gaze position.
  • the calibration screen SC0 shown in FIG. 4 shows an example that is similar to the input screen SC1 (see FIG. 6) for accepting input operations for input information, but is not limited to this.
  • the processor 11 outputs the calibration screen SC0 (see FIG. 4) including the center point "A" to the display 14 and displays it (St11).
  • the calibration screen SC0 includes a center point "A."
  • the center point "A" is located approximately in the center of the calibration screen SC0.
  • the processor 11 outputs the calibration screen SC0 to the display 14 for display.
  • the processor 11 requests the user to look at (gaze at) the center point "A" included in the calibration screen SC0. This request may be made by displaying a message requesting the user to gaze at the center point "A” on the display 14, or by outputting the message requesting the user to gaze at the center point "A” as audio from a speaker (not shown).
  • the camera 13 captures an image of the user gazing at the calibration screen SC0 (St12). The camera 13 outputs the captured image to the processor 11.
  • the processor 11 detects the face of the user (person) from the captured image output from the camera 13, and detects the user's gaze position on the calibration screen SC0 using a gaze detection algorithm (St13).
  • the processor 11 stores information on the detected gaze position (coordinates) in the memory 12 (St13).
  • the processor 11 calculates the amount of positional blur of the gaze position Pt0 for a predetermined time (e.g., 0.3 seconds, 0.5 seconds, etc.) based on each of the accumulated gaze positions Pt0 for a predetermined time (St14).
  • the amount of positional blur is the standard deviation value that indicates the variation of each of the detected gaze positions.
  • the processor 11 determines whether the calculated amount of positional blur of the gaze position Pt0 for a predetermined time period is less than a threshold value (St15).
  • the threshold value is a predetermined fixed value that is determined based on the measurement environment information.
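  • As a hedged illustration only (not the patented implementation): the Python sketch below computes the "amount of positional blur" of steps St14 and St15 as the standard deviation of the gaze samples accumulated over the predetermined window and compares it with a threshold; the helper names positional_blur and gaze_is_stable, the sample coordinates, and the 5-pixel threshold are assumptions made for the example.

```python
import math

def positional_blur(samples):
    """Standard deviation of 2-D gaze samples [(x, y), ...] about their mean,
    read here as the 'amount of positional blur' of step St14 (hypothetical helper)."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in samples) / n
    return math.sqrt(var)

def gaze_is_stable(samples, threshold_px):
    """St15: the blur must fall below a threshold (chosen from the measurement
    environment information) before the centre position PtC0 is computed (St16)."""
    return positional_blur(samples) < threshold_px

# Gaze samples collected over a ~0.5 s window while the user fixates point "A".
window = [(402.0, 298.5), (399.8, 301.2), (401.1, 300.4), (400.3, 299.1)]
print(gaze_is_stable(window, threshold_px=5.0))  # True -> proceed to step St16
```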
  • If the processor 11 determines in the processing of step St15 that the calculated amount of positional blur of the gaze position Pt0 for the specified time period is less than the threshold value (St15, YES), it calculates the center position PtC0 of the gaze position Pt0 for the specified time period (St16).
  • If the processor 11 determines in the processing of step St15 that the calculated amount of positional blur of the gaze position Pt0 for the specified time period is not less than the threshold value (St15, NO), it returns to the processing of step St12.
  • The processor 11 calculates a correction parameter (transformation matrix DR0) for coordinate transformation of the center position PtC0 to the center position Ct of the center point "A" (i.e., the center position Ct of the area ARA) based on the calculated center position PtC0 and the center position Ct (see FIG. 8) of the center point "A" (St17).
  • the correction parameter calculated here is a parameter for transforming (correcting) the user's gaze position (i.e., the center position PtC0 of the gaze position Pt0) to the gaze input position by the user.
  • the processor 11 can correct the positional shift of the user's gaze position, for example, by converting the gaze position Pt0 for a predetermined time into an input position Pt0' and the gaze position Pt1 for a predetermined time into an input position Pt1'.
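  • As a minimal sketch of the correction-parameter idea in steps St16 and St17 (the disclosure speaks of a transformation matrix; a pure translation is the simplest instance, and the names calc_correction and apply_correction as well as the coordinates are assumptions):

```python
def calc_correction(center_of_gaze, center_of_target):
    """St17: derive a correction that maps the measured fixation centre PtC0
    onto the known centre Ct of the centre point "A" (here a pure translation)."""
    dx = center_of_target[0] - center_of_gaze[0]
    dy = center_of_target[1] - center_of_gaze[1]
    return dx, dy

def apply_correction(gaze, correction):
    """Convert a raw gaze position (e.g. Pt0) into a calibrated input position (Pt0')."""
    dx, dy = correction
    return gaze[0] + dx, gaze[1] + dy

# PtC0 measured while the user gazed at "A"; Ct is the true centre of point "A".
correction = calc_correction(center_of_gaze=(390.0, 310.0), center_of_target=(400.0, 300.0))
print(apply_correction((450.0, 250.0), correction))  # (460.0, 240.0)
```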
  • After calculating the correction parameters, the processor 11 ends the calibration process and outputs and displays the input screen SC1 for accepting input of input information on the display 14 (St18).
  • the input screen SC1 includes a center point “A” and input keys “1", “2", “3", and “4" corresponding to four numbers.
  • the center point “A” is located approximately in the center of the input screen SC1.
  • Each of the input keys “1” to “4" is located at approximately equal intervals on concentric circles centered on the center point "A".
  • the camera 13 captures an image of the user gazing at the input screen SC1 (St19). The camera 13 outputs the captured image to the processor 11.
  • Processor 11 detects the face of the user (person) from the captured image output from camera 13, and detects the user's gaze position on input screen SC1 using a gaze detection algorithm (St20). Processor 11 accumulates information on the detected gaze position (coordinates) for a predetermined period of time (e.g., 0.3 seconds, 0.5 seconds, etc.) in memory 12 in chronological order (St20). Processor 11 may also accumulate information on the detected gaze position in association with imaging time information of the captured image in which this gaze position was detected.
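  • As a small, hypothetical illustration of this chronological accumulation (the class name GazeBuffer, the 0.5 s window, and the use of a deque are assumptions, not part of the disclosure):

```python
from collections import deque
import time

class GazeBuffer:
    """St20-St21: keep roughly the last window (e.g. 0.5 s) of detected gaze
    positions in chronological order, each tagged with its capture time."""

    def __init__(self, window_s=0.5):
        self.window_s = window_s
        self.samples = deque()  # entries are (t, x, y)

    def add(self, x, y, t=None):
        t = time.monotonic() if t is None else t
        self.samples.append((t, x, y))
        # Drop samples that have fallen out of the accumulation window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def positions(self):
        return [(x, y) for _, x, y in self.samples]
```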
  • the processor 11 calculates an approximation line based on the accumulated gaze positions for a predetermined time period and the center position Ct (see FIG. 8) of the center point "A" which is the starting point of the gaze position movement (St22).
  • the processor 11 calculates the angle of the calculated approximation line and accumulates it in the memory 12 in chronological order (St23).
  • the processor 11 calculates an angular blur amount (e.g., the angular blur amount ⁇ A shown in FIG. 5, the angular blur amount ⁇ B shown in FIG. 6, etc.) indicating the amount of blur in the user's gaze position based on the angles of the multiple approximate straight lines accumulated by at least two approximate straight line calculation processes Rp1 (St24).
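  • The sketch below is one hedged reading of steps St22 to St24: the approximation line is reduced to the direction from the centre Ct of point "A" to the mean of the accumulated gaze positions (a stand-in for a true least-squares fit), and the "angular blur amount" is the spread of the angles obtained from successive windows; the names line_angle and angular_blur and the sample coordinates are assumptions.

```python
import math

def line_angle(gaze_points, center):
    """St22-St23: angle (degrees) of the approximation line from the centre Ct
    of point "A" towards the accumulated gaze positions (mean point used here)."""
    mx = sum(p[0] for p in gaze_points) / len(gaze_points)
    my = sum(p[1] for p in gaze_points) / len(gaze_points)
    return math.degrees(math.atan2(my - center[1], mx - center[0]))

def angular_blur(angles):
    """St24: spread of the accumulated line angles (e.g. thetaA, thetaB).
    Differences are wrapped to (-180, 180] so that 359 deg and 1 deg count as close."""
    ref = angles[0]
    diffs = [((a - ref + 180.0) % 360.0) - 180.0 for a in angles]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

center_ct = (400.0, 300.0)
windows = [[(430.0, 296.0)], [(455.0, 295.0)], [(480.0, 291.0)]]  # e.g. Pt21', Pt22', Pt23'
angles = [line_angle(w, center_ct) for w in windows]
print(angles, angular_blur(angles))  # small blur -> the movement direction is stable (St25)
```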
  • the processor 11 determines whether the amount of angular blur is less than a threshold value corresponding to a specific input key (St25).
  • The threshold value referred to here is, for example, the threshold value θ1 shown in FIG. 7, or the threshold values θ2 and θ3 shown in FIG. 13 and FIG. 14, and is a threshold value for determining whether an input key has been input among the center point "A", the number "1", ..., which are input keys displayed on the input screens SC1, SC2, and SC3, based on the direction of movement of the user's gaze position (i.e., the angle of the approximated line).
  • If the processor 11 determines in the processing of step St25 that the amount of angular blur is less than the threshold value (St25, YES), it confirms the input content (input key) based on the input keys (e.g., center point "A", number "1", ..., etc.) arranged in the direction of movement of the user's gaze position indicated by the approximation line, and accepts an input operation based on the user's gaze (St26).
  • If the processor 11 determines in the processing of step St25 that the amount of angular blur is not less than the threshold value (St25, NO), it returns to the processing of step St19.
  • the processor 11 determines whether or not input operations for a predetermined number of digits (e.g., three digits, four digits, etc.) have been completed based on the number of input contents (input keys) accepted as input operations (St27).
  • If the processor 11 determines in the process of step St27 that the input operation for the predetermined number of digits has been completed (St27, YES), the operation procedure shown in FIG. 3 ends. The processor 11 then acquires input information based on the input contents (input keys) for the predetermined number of digits, and proceeds to a process of comparing the input information with the registration information previously registered in the database DB1.
  • If the processor 11 determines in the processing of step St27 that the input operation for the predetermined number of digits has not been completed (St27, NO), it returns to the processing of step St19.
  • the eye-gaze input device P1 can accept an input operation based on the direction of movement of the user's eye-gaze position (i.e., changes over time) even when the user's eye-gaze position is not detected in the areas ARA, AR1, AR2, AR3, AR4 of each input key, or when the user's eye-gaze position detected in the areas ARA (an example of a first input section) and AR1 to AR4 (an example of a second input section) of each input key does not satisfy a predetermined condition for accepting an input operation (for example, the user's eye-gaze position is continuously detected within areas ARA, AR1 to AR4 for a predetermined period of time or more).
  • the eye-gaze input device P1 can accept input key input operations without the user having to gaze at the areas ARA, AR1 to AR4 of each input key, so the time required for each user to perform input operations can be more effectively reduced.
  • the eye-gaze input device P1 can receive input operations based on the user's gaze with a high degree of accuracy. Therefore, even if the eye-gaze input device P1 is used by an unspecified number of users without recording and storing correction parameters for each user in advance, it can calculate correction parameters with low calibration accuracy by gazing at at least one point (for example, the central point "A"), and therefore can more effectively reduce the time required to calculate the correction parameters for each user.
  • FIG. 5 is a diagram explaining the method of calculating the moving direction of the gaze position.
  • the processor 11 executes the approximate line calculation process Rp1 three times.
  • the processor 11 accumulates the gaze positions for a predetermined time period (St21), and calculates the approximate line DR21 based on the accumulated gaze positions Pt21' for the predetermined time period and the center position Ct of the center point "A", which is the starting point of the gaze position movement (St22).
  • the processor 11 calculates the angle of the approximate line DR21, and accumulates it in chronological order in the memory 12 (St23).
  • processor 11 returns to the process of step St19, further accumulates gaze positions for a predetermined time period (St21), and calculates approximate line DR22 based on the accumulated gaze positions Pt22' for the predetermined time period and the center position Ct of center point "A", which is the starting point of the gaze position movement (St22).
  • Processor 11 calculates the angle of approximate line DR22 and accumulates it in chronological order in memory 12 (St23).
  • processor 11 returns to the process of step St19, further accumulates gaze positions for a predetermined time period (St21), and calculates approximate line DR23 based on the accumulated gaze positions Pt23' for the predetermined time period and the center position Ct of center point "A", which is the starting point of the gaze position movement (St22).
  • Processor 11 calculates the angle of approximate line DR23 and accumulates it in chronological order in memory 12 (St23).
  • the processor 11 calculates the angular blur amount ⁇ A, which indicates the amount of blur in the user's gaze position, based on the angles of the three approximate lines DR21 to DR23 accumulated by the approximate line calculation process Rp1 (St24).
  • the eye gaze input device P1 can calculate the direction in which the user's eye gaze position moves and the amount of blur in the direction in which the user's eye gaze position moves.
  • Fig. 6 is a diagram explaining a method of accepting an eye-gaze input operation based on the moving direction of the eye-gaze position.
  • Fig. 7 is a diagram showing an example of an angle at which an eye-gaze input operation based on the moving direction of the eye-gaze position can be accepted.
  • the processor 11 calculates four approximate straight lines DR31 to DR34 corresponding to each of the accumulated gaze positions based on each of the four gaze positions and the center position Ct of the center point "A" (see FIG. 8) (St22).
  • the processor 11 calculates the angle of each of the calculated four approximate straight lines DR31 to DR34 and accumulates them in the memory 12 (St23).
  • the processor 11 calculates the angular blur amount ⁇ B, which indicates the amount of blur in the user's gaze position, based on the angles of the approximated lines DR31 to DR34 accumulated by the approximated line calculation process Rp1 (St24).
  • the processor 11 determines whether the calculated angular blur amount ⁇ B is less than the threshold value ⁇ 1 (St25).
  • the threshold value ⁇ 1 may be an angle of 90° or less.
  • If processor 11 determines that the calculated angular blur amount θB is less than the threshold θ1, it determines in which of the angle regions θ11, θ12, θ13, or θ14 the angle of the approximated line falls, in the chronological order in which the angles of the approximated lines DR31 to DR34 were calculated. If processor 11 determines, based on the determination result, that the angle of the approximated line falls within the same angle region a predetermined number of times (e.g., two times, five times, etc.) or more consecutively in the chronological order, it accepts an input operation in which the input content is the input key that corresponds to this angle region and is positioned in the direction of movement of the user's gaze position indicated by the approximated line (St26).
  • each angle region ⁇ 11 shown in FIG. 7 is a region that is -45° or more and less than +45°, with the position of the input key "1" as the reference (0 (zero)°).
  • the angle region ⁇ 12 is a region that is +45° or more and less than +135°, with the position of the input key “1” as the reference (0 (zero)°).
  • the angle region ⁇ 13 is a region that is +135° or more and less than +225°, with the position of the input key "1” as the reference (0 (zero)°).
  • the angle region ⁇ 12 is a region that is +225° or more and less than +315° (i.e., less than -45°), with the position of the input key "1" as the reference (0 (zero)°).
  • Processor 11 determines whether the angle of approximate line DR31 is included in any of the angle regions ⁇ 11 to ⁇ 14. After determining that the angle of approximate line DR31 is included in angle region ⁇ 11, processor 11 determines whether the angle of approximate line DR32, which is calculated next to approximate line DR31, is included in the same angle region ⁇ 11 as the angle of approximate line DR31.
  • the processor 11 determines whether the angle of the approximated straight line DR33, which is calculated next to the approximated straight line DR32, is included in the same angle region ⁇ 11 as the angles of the approximated straight lines DR31 to DR32.
  • the processor 11 determines whether the angle of the approximated straight line DR34, which is calculated next to the approximated straight line DR33, is included in the same angle region ⁇ 11 as the angles of the approximated straight lines DR31 to DR33.
  • After determining that the angle of the approximated straight line DR34 is included in the angle region θ11, processor 11 detects that it has determined that each of the angles of the approximated straight lines DR31 to DR34 is included in the same angle region θ11 four times in succession in time series (i.e., a predetermined number of times). Processor 11 accepts an input operation in which the input content is the input key "1" corresponding to the angle region θ11 that includes the angles of the approximated straight lines DR31 to DR34.
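  • A compact, hypothetical rendering of this St25-St26 decision for the four-key layout of FIG. 7 (sector_for_angle and accept_key are invented names; the four-in-a-row rule mirrors the DR31 to DR34 example above):

```python
def sector_for_angle(angle_deg, num_keys=4):
    """Map the approximation-line angle (0 deg = direction of key "1") to a key.
    With four keys each sector spans 90 deg, e.g. theta11 = [-45 deg, +45 deg)."""
    half = 360.0 / (2 * num_keys)
    return int(((angle_deg + half) % 360.0) // (360.0 / num_keys)) + 1  # keys "1".."num_keys"

def accept_key(angles, required=4, num_keys=4):
    """St25-St26: accept an input only when the last `required` line angles all
    fall in the same angle region (four consecutive times in the example)."""
    recent = [sector_for_angle(a, num_keys) for a in angles[-required:]]
    return recent[0] if len(recent) == required and len(set(recent)) == 1 else None

print(accept_key([3.0, -8.0, 12.0, 5.0]))   # 1    -> input key "1" is accepted
print(accept_key([3.0, 95.0, 12.0, 5.0]))   # None -> keep observing (St25, NO)
```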
  • the eye-gaze input device P1 can estimate the input content that the user is about to input based on the moving direction of the user's gaze position, and can accept it as an input operation before the user gazes at the input keys. This makes it possible to more efficiently reduce the time required for each user to input information. Furthermore, because the eye-gaze input device P1 can more accurately estimate the moving direction of the user's gaze position based on the amount of angular deviation of the user's gaze position, it is possible to more effectively prevent erroneous input of input information even when calibration accuracy is low.
  • FIG. 8 is a diagram illustrating an example of the insensitive area ARN. Note that setting the insensitive area ARN is not essential and may be optional.
  • the insensitive area ARN is an area that disables the estimation process of the input content based on the angle of the approximate line, and is an area outside the area ARA of the center point "A" and within a radius R1 from the center position Ct of the area ARA of the center point "A".
  • the insensitive area ARN may be set in other shapes (e.g., ellipse, diamond, etc.) based on the aspect ratio and size of the display 14 or the arrangement of the input keys.
  • If the processor 11 determines that the detected gaze position of the user is within the insensitive area ARN, it does not use the gaze position of the user located within the insensitive area ARN in the calculation of the approximate line and the angle of the approximate line. In other words, the processor 11 calculates the approximate line and the angle of the approximate line based on the gaze position of the user detected outside the insensitive area ARN, estimates the input content that the user is about to input based on the calculated angle of the approximate line, and accepts it as an input operation before the user gazes at the input keys.
  • the gaze input device P1 uses the gaze position detected at a position that is at least a predetermined distance (radius R1) away from the center point "A" to estimate the input content, and can eliminate gaze positions that are detected near the center point "A" where the variation in the angle of the approximated line is likely to be small and that are likely to result in erroneous determination of the input content. Therefore, the gaze input device P1 can more effectively suppress erroneous determination of the input content in the process of estimating the input content based on the movement direction of the user's gaze position.
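  • A minimal sketch of the dead-zone filter, assuming a circular insensitive area ARN of radius R1 around the centre Ct (the helper name outside_dead_zone and the sample values are assumptions):

```python
import math

def outside_dead_zone(gaze, center, radius_r1):
    """FIG. 8: gaze samples inside the insensitive area ARN (within radius R1 of
    the centre Ct of point "A") are ignored when fitting the approximation line."""
    return math.hypot(gaze[0] - center[0], gaze[1] - center[1]) >= radius_r1

center_ct, r1 = (400.0, 300.0), 40.0
samples = [(405.0, 302.0), (445.0, 310.0), (470.0, 318.0)]
usable = [p for p in samples if outside_dead_zone(p, center_ct, r1)]
print(usable)  # the first sample is discarded; the remaining ones feed the line fit
```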
  • Fig. 9 is a diagram for explaining an example of the first gaze input operation procedure.
  • Fig. 10 is a diagram for explaining an example of the first gaze input operation procedure.
  • the first gaze input operation procedure is a gaze input operation procedure in which input operations for the center point "A" and input operations for the input keys "1" to "4" are alternately accepted, and corresponds to the processing of each of steps St19 to St26 shown in Fig. 3.
  • the input screens SC41, SC42, SC43, SC44, SC45, SC46, SC47, and SC48 shown in Figures 9 and 10, respectively, are merely examples and are not limited to these.
  • the number of numeric input keys is not limited to four.
  • the arrangement of the numeric input keys is not limited to the arrangement shown in input screens SC41 to SC48, and they may be rotated by any angle (for example, 45°) and arranged.
  • the processor 11 displays the input screen SC41 on the display 14, instructs the user to look at the center point "A" (by outputting voice, outputting a message, etc.), and enables (makes acceptable) only the input operation for the center point "A" out of the five input keys.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located based on the detected user's gaze position. Processor 11 may generate correction parameters for calibrating the user's gaze position at this timing.
  • If processor 11 determines, based on the detected user's gaze position, that the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located, it displays input screen SC42 on display 14 and instructs the user to gaze at one of the number input keys. On input screen SC42, processor 11 disables input operations for center point "A" and enables input operations for each of the four input keys "1" to "4" arranged around center point "A".
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing at area AR1 of the four input keys "1", area AR2 of input key “2", area AR3 of input key “3”, or area AR4 of input key "4" based on the detected user's gaze position or the movement direction DR41 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of the number input key "1" on input screen SC42.
  • After accepting an input operation of any of the numeric input keys (here, input key "1"), the processor 11 displays the input screen SC43 on the display 14, instructs the user to gaze at the center point "A" again, and enables (accepts) only the input operation for the center point "A" out of the five input keys.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located based on the detected user's gaze position or the movement direction DR42 of the user's gaze position (angle of the approximated line).
  • If processor 11 determines, based on the detected user's gaze position, that the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located, it displays input screen SC44 on display 14 and instructs the user to gaze at one of the number input keys. On input screen SC44, processor 11 disables input operations for center point "A" and enables input operations for each of the four input keys "1" to "4" arranged around center point "A".
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing at area AR1 of the four input keys "1", area AR2 of input key “2", area AR3 of input key “3”, or area AR4 of input key "4" based on the detected user's gaze position or the movement direction DR43 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of the number input key "2" on input screen SC44.
  • After accepting an input operation of any of the numeric input keys (here, input key "2"), the processor 11 displays the input screen SC45 on the display 14, instructs the user to gaze at the center point "A" again, and enables only the input operation for the center point "A" out of the five input keys.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing within area ARA of center point "A" or in the direction in which center point "A” is located based on the detected user's gaze position or the movement direction DR44 of the user's gaze position (angle of the approximated line).
  • If processor 11 determines, based on the detected user's gaze position, that the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located, it displays input screen SC46 on display 14 and instructs the user to gaze at one of the number input keys. On input screen SC46, processor 11 disables input operations for center point "A" and enables input operations for each of the four input keys "1" to "4" arranged around center point "A".
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing at area AR1 of the four input keys "1", area AR2 of input key “2", area AR3 of input key “3”, or area AR4 of input key "4" based on the detected user's gaze position or the movement direction DR45 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of the number input key "3" on input screen SC46.
  • After accepting an input operation of any of the numeric input keys (here, input key "3"), the processor 11 displays the input screen SC47 on the display 14, instructs the user to gaze at the center point "A" again, and enables only the input operation for the center point "A" out of the five input keys.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located based on the detected user's gaze position or the movement direction DR46 of the user's gaze position (angle of the approximated line).
  • If processor 11 determines, based on the detected user's gaze position, that the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located, it displays input screen SC48 on display 14 and instructs the user to gaze at one of the number input keys. On input screen SC48, processor 11 disables input operations for center point "A" and enables input operations for each of the four input keys "1" to "4" arranged around center point "A".
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing at area AR1 of the four input keys "1," AR2 of the input key “2,” AR3 of the input key “3,” or area AR4 of the input key "4" based on the detected user's gaze position or the movement direction DR46 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of the number input key "4" on the input screen SC48.
  • After accepting the input of the four numbers "1," "2," "3," and "4," the processor 11 compares the input numbers (input information) with the registration information previously registered in the database DB1 and performs user authentication.
  • the eye-gaze input device P1 can more effectively suppress erroneous input of input information by alternately accepting an input operation of input information (here, a number) and an input operation of the center point "A" in the first eye-gaze input operation procedure. Furthermore, by alternately accepting an input operation for the center point "A” and an input operation for the input keys "1" to "4" (i.e., input information), when the user continuously inputs the same input information (for example, when the number "1" is input two or more times in succession), the eye-gaze input device P1 can more accurately accept the input operation of the same input information.
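  • The toy state machine below only illustrates this alternation (Figs. 9 and 10); run_first_procedure and the read_selection callback are invented for the example and are not the disclosed control flow.

```python
def run_first_procedure(code_length, read_selection):
    """Alternate between an enabled centre "A" and enabled digit keys.
    read_selection(enabled) is a hypothetical callback returning which of the
    currently enabled targets the gaze-direction logic selected (St19-St26)."""
    digits = []
    while len(digits) < code_length:
        # Phase 1: only the centre point accepts input; the digit keys are disabled.
        assert read_selection({"A"}) == "A"
        # Phase 2: the centre is disabled and the four digit keys are enabled.
        digits.append(read_selection({"1", "2", "3", "4"}))
    return digits

# Toy usage: a scripted gaze that looks at "A", then "1", then "A", then "2", ...
script = iter(["A", "1", "A", "2", "A", "3", "A", "4"])
print(run_first_procedure(4, lambda enabled: next(script)))  # ['1', '2', '3', '4']
```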
  • the eye-gaze input device P1 can prevent the center point "A” and any of the input keys “1” to “4" from being positioned on the same straight line. This makes it possible to accept input operations based on the time-series changes in the direction of movement of the user's eye-gaze position (the angle of the approximated straight line), and thus makes it possible to more effectively suppress erroneous inputs.
  • the input screens SC42, SC44, SC46, and SC48 shown in Figures 9 and 10 may display four of the five input keys, "1" to "4," for which input operations are enabled, in a solid line or with emphasis (for example, a thick line or a red frame, etc.), and only the center point "A,” for which input operations are disabled, in a dashed line or with a suppressed display (for example, a thin line or a gray frame, etc.).
  • When the processor 11 receives an input operation for input information, it may enlarge and display the input key corresponding to the input information, or change the luminance of the lighting on the input screens SC41 to SC48 to make them flash. This allows the eye-gaze input device P1 to notify the user that the acceptance of the input operation has been completed.
  • Fig. 11 is a diagram for explaining an example of the second gaze input operation procedure.
  • Fig. 12 is a diagram for explaining an example of the second gaze input operation procedure.
  • the second gaze input operation procedure is a gaze input operation procedure when input operations of the input keys "1" to "4" are accepted consecutively, and corresponds to the processing of each of steps St19 to St26 shown in Fig. 3.
  • the input screens SC51, SC52, SC53, SC54, and SC55 shown in Figures 11 and 12, respectively, are merely examples and are not limited to these.
  • the number of numeric input keys is not limited to four.
  • the arrangement of the numeric input keys is not limited to the arrangement shown in input screens SC51 to SC55, and may be rotated by any angle (for example, 45°) and arranged.
  • the processor 11 displays the input screen SC51 on the display 14, instructs the user to gaze at the center point "A" (outputs voice and message), and enables (accepts) only the input operation for the center point "A" out of the five input keys.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines whether the user is gazing within area ARA of center point "A" or in the direction in which center point "A" is located based on the detected user's gaze position. Processor 11 may generate correction parameters for calibrating the user's gaze position at this timing.
  • If the processor 11 determines, based on the detected user's gaze position, that the user is gazing within the area ARA of the center point "A" or in the direction in which the center point "A" is located, it displays the input screen SC52 on the display 14 and instructs the user to gaze at one of the number input keys. On the input screen SC52, the processor 11 disables input operations for the center point "A" and enables input operations for each of the four input keys "1" to "4" arranged around the center point "A".
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines which of area AR1 of input key "1" to area AR4 of input key "4" the user is gazing at based on the detected user's gaze position or the movement direction DR51 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of the number input key "1" on input screen SC52.
  • After accepting the input operation of any number input key (here, input key "1"), the processor 11 displays the input screen SC53 on the display 14 and instructs the user to look at the next number input key. It goes without saying that this instruction is not essential and may be omitted.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines which of area AR1 of input key "1" to area AR4 of input key "4" the user is gazing at based on the detected user's gaze position or the movement direction DR52 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of number input key "2" on input screen SC53.
  • After accepting an input operation of any numeric input key (here, input key "2"), the processor 11 displays the input screen SC54 on the display 14 and instructs the user to focus on the next numeric input key.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines which of area AR1 of input key "1" to area AR4 of input key "4" the user is gazing at based on the detected user's gaze position or the movement direction DR53 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of number input key "3" on input screen SC54.
  • After accepting an input operation of any numeric input key (here, input key "3"), the processor 11 displays the input screen SC55 on the display 14 and instructs the user to focus on the next numeric input key.
  • Processor 11 detects the user's gaze position based on the captured image. Processor 11 determines which of area AR1 of input key "1" to area AR4 of input key "4" the user is gazing at based on the detected user's gaze position or the movement direction DR54 of the user's gaze position (angle of the approximated line). Processor 11 accepts the input operation of number input key "4" on input screen SC55.
  • After accepting the input of the four numbers "1," "2," "3," and "4," the processor 11 compares the input numbers (input information) with the registration information previously registered in the database DB1 and performs user authentication.
  • the eye-gaze input device P1 can more effectively suppress erroneous input of input information by continuously accepting input operations of a predetermined number of digits of input information (here, numbers) after accepting an input operation of the center point "A" in the second eye-gaze input operation procedure. Furthermore, because the eye-gaze input device P1 does not alternately accept input operations for the center point "A” and input operations for the input keys "1" to "4" (i.e., input information), it is possible to shorten the time required for the input operation for the center point "A" between each piece of input information, and more efficiently shorten the eye-gaze input operation time per user.
  • the input screen SC51 shown in Figures 11 and 12 may display only the center point "A" of the five input keys for which input operation is enabled, using a solid line or highlighting (for example, displaying a thick line or a red frame, etc.), and each of the four input keys "1" to "4" for which input operation is disabled, using a dashed line or suppressing (for example, displaying a thin line or a gray frame, etc.).
  • the four input keys "1" to "4" for which input operations are enabled among the five input keys may be displayed with a solid line or highlighted (for example, a thick line, a red frame, etc.), and only the center point "A" for which input operations are disabled may be displayed with a dashed line or suppressed (for example, a thin line, a gray frame, etc.).
  • When the processor 11 receives an input operation for input information, it may enlarge and display the input key corresponding to the input information, or change the illuminance of the lighting on the input screens SC51 to SC55 to make them flash. This allows the eye-gaze input device P1 to notify the user that the acceptance of the input operation has been completed.
  • Fig. 13 is a diagram showing another example of an input screen.
  • Fig. 14 is a diagram showing another example of an input screen. It goes without saying that the input screens SC2 and SC3 shown in Fig. 13 and Fig. 14 are merely examples and are not limited to these.
  • the input screen SC2 includes a center point "A” and eight input keys corresponding to the numeric input keys "1" to "8".
  • the processor 11 sets the threshold ⁇ 2 of the angular blur of the approximated line to 45° or less in gaze input using the input screen SC2. In such a case, the processor 11 accepts the input operation of the input key "1" when the threshold ⁇ 2 of the angular blur of the approximated line is 45° or less and the angle of the approximated line is -22.5° or more and less than +22.5° with the position of the input key "1" as the reference (0 (zero)°).
  • the input screen SC3 includes a center point "A” and ten input keys corresponding to the numeric input keys "0" to “9.”
  • the processor 11 sets the threshold ⁇ 3 of the angular deviation of the approximated line to 36° or less. In such a case, the processor 11 accepts the input operation of the input key "0" when the threshold ⁇ 3 of the angular deviation of the approximated line is 36° or less and the angle of the approximated line is -18° or more and less than +18° with the position of the input key "0" as the reference (0 (zero)°).
  • The gaze input device P1 (an example of an input device) according to the first embodiment is capable of receiving an input operation based on the gaze position of a user, and includes a display 14 (an example of a display unit) that displays the calibration screen SC0 (an example of an input screen) and the input screen SC1 capable of receiving the input operation, a camera 13 that captures an image of the user, and a processor 11 (an example of a calculation unit) that calculates a correction parameter for calibrating the gaze position of the first user relative to the input screen SC1 based on the gaze position of the first user shown in the captured first image, detects the gaze position of the second user shown in the second captured image captured after calculating the correction parameter, calibrates the gaze position of the second user using the correction parameter, and receives an input operation relative to the input screen SC1 based on the calibrated gaze position of the second user.
  • the eye-gaze input device P1 can more efficiently calibrate the eye-gaze position and more efficiently accept input operations based on the user's eye gaze, even when correction parameters for each user are not recorded and stored in advance and the device is used by an unspecified number of users.
  • In the eye-gaze input device P1 according to the first embodiment, the input screen SC1 includes an area ARA (an example of a first input section) of a center point "A" that accepts a first input operation, and areas AR1 to AR4 (an example of a second input section) corresponding to a plurality of input keys "1" to "4" that accept a second input operation different from the first input operation.
  • This allows the eye-gaze input device P1 according to the first embodiment to accept an input operation for calibration and an input operation for input information on a single input screen SC1.
  • the first captured image in the gaze input device P1 according to embodiment 1 is an image captured of a user looking at area ARA of center point "A", and the processor 11 calculates correction parameters based on the gaze position of the first user and the position of area ARA of center point "A".
  • the gaze input device P1 according to embodiment 1 can more efficiently accept input operations based on the user's gaze, even when correction parameters for each user are not recorded and stored in advance and the device is used by an unspecified number of users.
  • the second captured image in the eye-gaze input device P1 according to the first embodiment is an image captured of a user looking at the areas AR1 to AR4 corresponding to any one of the input keys "1" to "4".
  • the processor 11 accepts an input operation based on the eye gaze position of the second user and the positions of the areas AR1 to AR4 corresponding to the multiple input keys "1" to "4".
  • the eye-gaze input device P1 according to the first embodiment can more effectively suppress erroneous input of input information by continuously accepting input operations of a predetermined number of digits of input information (here, numbers) after accepting an input operation of the center point "A" in the second eye-gaze input operation procedure.
  • the eye-gaze input device P1 does not alternately accept an input operation for the center point "A” and an input operation for the input keys "1" to "4" (i.e., input information), the time required for the input operation for the center point "A” between each piece of input information can be shortened, and the eye-gaze input operation time per user can be more efficiently shortened.
  • the second captured image in the eye-gaze input device P1 according to the first embodiment is an image captured of a user looking at the area ARA of the center point "A" or the areas AR1 to AR4 corresponding to any one of the input keys "1" to "4".
  • the processor 11 alternately accepts the first input operation and the second input operation.
  • the eye-gaze input device P1 according to the first embodiment can more effectively suppress erroneous input of input information by alternately repeating the acceptance of the input operation of the input information (here, a number) and the acceptance of the input operation of the center point "A" in the first eye-gaze input operation procedure.
  • the eye-gaze input device P1 can more accurately accept the input operation of the same continuous input information because it accepts the input of a different center point "A" while inputting the same input information.
  • the processor 11 in the eye-gaze input device P1 according to embodiment 1 activates the area ARA of the center point "A" on the input screen SC1, disables the areas AR1 to AR4 corresponding to the multiple input keys “1” to “4", and accepts a first input operation, and activates the areas AR1 to AR4 corresponding to the multiple input keys "1” to "4" on the input screen SC1, disables the area ARA of the center point "A", and accepts a second input operation.
  • the eye-gaze input device P1 according to embodiment 1 can more effectively suppress erroneous inputs by only activating (accepting) the input operation of either the center point "A” or the input keys "1" to "4".
  • When the processor 11 in the eye-gaze input device P1 according to embodiment 1 accepts a first input operation or a second input operation, it highlights the area ARA of the center point "A" or the areas AR1 to AR4 corresponding to the input keys "1" to "4" that correspond to the accepted first input operation or second input operation. This allows the eye-gaze input device P1 according to embodiment 1 to notify the user that acceptance of the input operation has been completed.
  • the area ARA of the center point "A" in the eye-gaze input device P1 according to embodiment 1 is located approximately in the center of the input screen SC1.
  • Areas AR1 to AR4 corresponding to the multiple input keys "1" to "4" are each located at approximately the same distance from the area ARA of the center point "A". This allows the eye-gaze input device P1 according to embodiment 1 to more clearly distinguish between the first input operation and the second input operation, and to more accurately accept the second input operation.
  • the areas AR1 to AR4 corresponding to the multiple input keys "1" to "4" in the eye-gaze input device P1 according to embodiment 1 are arranged at approximately equal intervals on a circumference centered on the area ARA at the center point "A". This allows the eye-gaze input device P1 according to embodiment 1 to more accurately receive the second input operation.
  • the input screen SC1 of the eye-gaze input device P1 has an insensitive area ARN around the area ARA of the center point "A" that disables input operations.
  • when the processor 11 determines that the gaze position of the second user is within the insensitive area ARN, it omits the acceptance of input operations based on the gaze position of the second user.
  • the eye-gaze input device P1 estimates the input content using only gaze positions detected outside the insensitive area ARN, that is, at least a predetermined distance (radius R1) away from the center point "A". By eliminating gaze positions that are prone to causing erroneous determination of the input content, erroneous determination can be suppressed more effectively (a short sketch of this distance test is given after this list).
  • the processor 11 in the gaze input device P1 detects the gaze position of the second user in each of a plurality of second captured images captured by the camera 13, accumulates these gaze positions in chronological order, calculates the movement direction of the user's gaze position from the accumulated gaze positions of the second user, and accepts an input operation based on that movement direction.
  • the gaze input device P1 can accept an input operation based on the movement direction of the user's gaze position even when the user's gaze position is not detected within the areas ARA, AR1 to AR4 of the input keys, or when the gaze position detected in the areas ARA, AR1 to AR4 does not satisfy a predetermined condition for accepting an input operation (for example, that the gaze position is detected continuously within the areas ARA, AR1 to AR4 for at least a predetermined time).
  • the processor 11 in the eye-gaze input device P1 according to the first embodiment calculates and accumulates the movement direction of the user's gaze position based on the gaze positions of the second user accumulated over a predetermined time, calculates the amount of blur in the movement direction of the user's gaze position from the accumulated movement directions, and accepts an input operation based on the movement direction when it determines that the calculated amount of blur is equal to or less than a threshold.
  • the eye-gaze input device P1 according to the first embodiment can estimate the input content that the user is about to input from the movement direction of the user's gaze position and accept it as an input operation before the user gazes at the input key, so that the time required to input information per user can be shortened more efficiently.
  • the eye-gaze input device P1 can more accurately estimate the movement direction of the user's gaze position based on the amount of angular blur of the gaze position, so that erroneous input of input information can be suppressed more effectively even when the calibration accuracy is low (a sketch of this direction-and-blur estimation is given after this list).
  • the input method performed by the gaze input device P1, which can accept an input operation based on a user's gaze position, includes: displaying a calibration screen SC0 (an example of an input screen) and an input screen SC1 capable of accepting an input operation; acquiring a first captured image of a user; calculating a correction parameter for calibrating the gaze position of the first user relative to the input screen based on the gaze position of the first user shown in the first captured image; acquiring a second captured image of a user; calibrating the gaze position of the second user shown in the second captured image using the correction parameter; and accepting an input operation on the input screen based on the calibrated gaze position of the second user.
  • the eye-gaze input device P1 can thus calibrate the gaze position more efficiently and accept input operations based on the user's gaze more efficiently, even when correction parameters are not recorded and stored in advance for each user and the device is used by an unspecified number of users (a minimal sketch of this calibration flow is given after this list).
  • the present disclosure is useful as an input device and input method that makes calibration of eye gaze input more efficient.
  • 11 Processor
  • 12 Memory
  • 13 Camera
  • 14 Display
  • DB1 Database
  • P1 Eye-gaze input device
  • SC0 Calibration screen
  • SC1, SC2, SC3, SC41, SC42, SC43, SC44, SC45, SC46, SC47, SC48, SC51, SC52, SC53, SC54, SC55 Input screens
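
The screen layout and the alternating acceptance described in the list above can be illustrated with a minimal sketch. The class names, the hit radius, the key radius, and the starting coordinates below are assumptions made for this example and are not taken from the embodiment; the sketch only shows four key areas placed at equal angular intervals on a circumference around a central area, with only one group of areas enabled at a time.

```python
import math


class GazeKeypadLayout:
    """Sketch of the input screen SC1: a central area for the center point "A"
    and four key areas "1" to "4" placed at equal angular intervals on a
    circumference around it (radii and hit size are assumed values)."""

    def __init__(self, center=(320.0, 240.0), key_radius=180.0, hit_radius=60.0):
        self.center = center
        self.hit_radius = hit_radius
        # Key areas AR1 to AR4 at 0, 90, 180 and 270 degrees around area ARA.
        self.keys = {
            str(i + 1): (center[0] + key_radius * math.cos(i * math.pi / 2),
                         center[1] + key_radius * math.sin(i * math.pi / 2))
            for i in range(4)
        }

    def hit(self, gaze, targets):
        """Return the name of the target area containing the gaze point, if any."""
        for name, pos in targets.items():
            if math.dist(gaze, pos) <= self.hit_radius:
                return name
        return None


class AlternatingAcceptor:
    """Alternates between the first input operation (gaze on the center point
    "A") and the second input operation (gaze on one of the keys "1" to "4");
    the group of areas that is not expected is treated as disabled."""

    def __init__(self, layout):
        self.layout = layout
        self.expect_center = True   # a first input operation is required first
        self.accepted = []          # accepted input information (key names)

    def feed(self, gaze):
        if self.expect_center:
            # Only area ARA is active; the key areas are disabled.
            if self.layout.hit(gaze, {"A": self.layout.center}) == "A":
                self.expect_center = False
                return "A"          # caller would highlight area ARA here
        else:
            # Only areas AR1 to AR4 are active; area ARA is disabled.
            key = self.layout.hit(gaze, self.layout.keys)
            if key is not None:
                self.accepted.append(key)
                self.expect_center = True
                return key          # caller would highlight the accepted key
        return None
```

Because the two groups are never active at the same time in this sketch, a gaze drifting from "A" toward a key can only be accepted as that key, which mirrors the suppression of erroneous input described above.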
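
The insensitive area ARN reduces, in such a sketch, to a distance test around the center point "A". The radius value and coordinates are placeholders; the embodiment only states that gaze positions within a predetermined distance (radius R1) of the center point are not used to estimate the input content.

```python
import math


def usable_for_estimation(gaze, center_a, r1):
    """Return True when a gaze position lies outside the insensitive area ARN,
    i.e. at least distance r1 away from the center point "A", and may therefore
    be used to estimate the input content."""
    return math.dist(gaze, center_a) >= r1


# Illustrative use with made-up coordinates: samples inside the dead zone are dropped.
center_a = (320.0, 240.0)
samples = [(322.0, 243.0), (450.0, 238.0), (330.0, 250.0)]
usable = [g for g in samples if usable_for_estimation(g, center_a, r1=80.0)]
```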
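
Acceptance based on the movement direction of the gaze position and its amount of blur can be sketched as follows. The window size, the blur threshold, and the use of a circular mean with the maximum angular deviation as the "blur amount" are assumptions made for this example; the embodiment only states that movement directions are accumulated and that an input operation is accepted when the blur amount is at or below a threshold.

```python
import math
from collections import deque


class DirectionEstimator:
    """Accumulates gaze positions in chronological order, computes the movement
    direction of the gaze, and accepts an input key once the angular spread
    ("blur amount") of the recent directions falls below a threshold."""

    def __init__(self, key_angles, window=10, blur_threshold_deg=15.0):
        self.key_angles = key_angles            # e.g. {"1": 0.0, "2": 90.0, "3": 180.0, "4": 270.0}
        self.positions = deque(maxlen=window)   # accumulated gaze positions
        self.blur_threshold = blur_threshold_deg

    def _directions(self):
        """Movement directions (degrees) between consecutive accumulated positions."""
        pts = list(self.positions)
        return [math.degrees(math.atan2(y1 - y0, x1 - x0))
                for (x0, y0), (x1, y1) in zip(pts, pts[1:])
                if (x0, y0) != (x1, y1)]

    def feed(self, gaze):
        self.positions.append(gaze)
        dirs = self._directions()
        if len(dirs) < 3:
            return None
        # Circular mean of the directions, and the maximum deviation as the blur amount.
        mean_dir = math.degrees(math.atan2(
            sum(math.sin(math.radians(d)) for d in dirs),
            sum(math.cos(math.radians(d)) for d in dirs)))
        blur = max(abs((d - mean_dir + 180.0) % 360.0 - 180.0) for d in dirs)
        if blur > self.blur_threshold:
            return None
        # Accept the key whose direction from the center is closest to the mean direction.
        return min(self.key_angles,
                   key=lambda k: abs((self.key_angles[k] - mean_dir + 180.0) % 360.0 - 180.0))
```

In this sketch a key can be accepted while the gaze is still moving toward it, before it dwells inside the key area, which corresponds to the time saving described above.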
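
Finally, the overall calibration flow, computing a correction parameter from a first user's gaze on the calibration screen SC0 and applying it to the gaze of a later user, can be sketched as below. The per-axis scale-and-offset form of the correction parameter and the least-squares fit are assumptions made for this example; the embodiment does not specify the form of the parameter.

```python
class GazeCalibrator:
    """Fits a correction parameter from known calibration targets and the gaze
    positions measured while a first user looks at them, then applies it to
    raw gaze positions of a later user (assumed per-axis linear model)."""

    def __init__(self):
        self.scale = [1.0, 1.0]   # correction parameter: per-axis scale
        self.offset = [0.0, 0.0]  # correction parameter: per-axis offset

    def fit(self, target_points, measured_points):
        """Least-squares fit per axis so that target ≈ scale * measured + offset."""
        for axis in (0, 1):
            t = [p[axis] for p in target_points]
            m = [p[axis] for p in measured_points]
            n = len(m)
            mean_t, mean_m = sum(t) / n, sum(m) / n
            var_m = sum((v - mean_m) ** 2 for v in m)
            cov = sum((mv - mean_m) * (tv - mean_t) for mv, tv in zip(m, t))
            self.scale[axis] = cov / var_m if var_m else 1.0
            self.offset[axis] = mean_t - self.scale[axis] * mean_m

    def apply(self, gaze):
        """Calibrate a raw gaze position with the stored correction parameter."""
        return (self.scale[0] * gaze[0] + self.offset[0],
                self.scale[1] * gaze[1] + self.offset[1])


# Illustrative flow with made-up coordinates: fit on the calibration screen SC0,
# then calibrate a later gaze sample before hit-testing on the input screen SC1.
calibrator = GazeCalibrator()
calibrator.fit(target_points=[(100, 100), (540, 100), (320, 380)],
               measured_points=[(112, 95), (548, 103), (330, 371)])
corrected_gaze = calibrator.apply((300, 360))
```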

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention relates to an input device comprising: a display unit that displays an input screen image capable of accepting an input operation based on the gaze position of a user; a camera that captures an image of the user; a calculation unit that calculates, based on the gaze position of a first user appearing in the captured first image, a correction parameter for calibrating the gaze position of the first user relative to the input screen image; and a processor that calibrates the gaze position of a second user using the correction parameter and accepts, based on the calibrated gaze position of the second user, an input operation on the input screen image.
PCT/JP2023/026902 2022-11-11 2023-07-21 Dispositif d'entrée et procédé d'entrée WO2024100935A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-181386 2022-11-11
JP2022181386 2022-11-11

Publications (1)

Publication Number Publication Date
WO2024100935A1 true WO2024100935A1 (fr) 2024-05-16

Family

ID=91032109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/026902 WO2024100935A1 (fr) 2022-11-11 2023-07-21 Dispositif d'entrée et procédé d'entrée

Country Status (1)

Country Link
WO (1) WO2024100935A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007136000A (ja) * 2005-11-21 2007-06-07 Nippon Telegr & Teleph Corp <Ntt> Gaze detection device, gaze detection method, and gaze detection program
JP2020502628A (ja) * 2016-10-27 2020-01-23 Alibaba Group Holding Limited User interface for information input in a virtual reality environment
JP2022169043A (ja) * 2021-04-27 2022-11-09 パナソニックIpマネジメント株式会社 Authentication device and authentication system

Similar Documents

Publication Publication Date Title
WO2020114037A1 (fr) Procédé et système basés sur la vision artificielle pour surveiller la vitesse de retrait de lentille intestinale en temps réel
RU2672181C2 (ru) Способ и устройство для генерирования команды
JP4755490B2 (ja) ブレ補正方法および撮像装置
KR20150108303A (ko) 직선검출방법, 장치, 프로그램 및 저장매체
  • WO2013089190A1 (fr) Dispositif d'imagerie et procédé d'imagerie, et support de stockage destiné à stocker un programme de suivi qui peut être traité par un ordinateur
JP6502511B2 (ja) 計算装置、計算装置の制御方法および計算プログラム
US20050163498A1 (en) User interface for automatic red-eye removal in a digital image
JP2012029245A (ja) 撮像装置
WO2011058807A1 (fr) Dispositif de traitement vidéo et procédé de traitement vidéo
CN111080571B (zh) 摄像头遮挡状态检测方法、装置、终端和存储介质
US20170193322A1 (en) Optical reading of external segmented display
JP2011164742A (ja) 表示制御装置、表示制御方法、及び表示制御プログラム、並びに記録媒体
US9569838B2 (en) Image processing apparatus, method of controlling image processing apparatus and storage medium
  • WO2024100935A1 (fr) Dispositif d'entrée et procédé d'entrée
JP6564136B2 (ja) 画像処理装置、画像処理方法、および、プログラム
US11330191B2 (en) Image processing device and image processing method to generate one image using images captured by two imaging units
JP2013255226A (ja) 画像取込装置の改良された制御
JPH11296304A (ja) 画面表示入力装置及び視差補正方法
  • WO2023098249A1 (fr) Procédé d'assistance d'aide au balayage pendant une tomographie en cohérence optique, et terminal du type pc, support de stockage et système
JP2008015979A (ja) パターン検出方法、パターン検出プログラム、パターン検出装置、及び撮像装置
CN112637587B (zh) 坏点检测方法及装置
US11373312B2 (en) Processing system, processing apparatus, terminal apparatus, processing method, and program
JP2008211534A (ja) 顔検知装置
JP2007206963A (ja) 画像処理装置及び画像処理方法及びプログラム及び記憶媒体
US20200186766A1 (en) Information processing apparatus, image capturing apparatus, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23888290

Country of ref document: EP

Kind code of ref document: A1