WO2022191276A1 - Operation input device, operation input method, and program - Google Patents

Operation input device, operation input method, and program

Info

Publication number
WO2022191276A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
distance
camera
touch panel
dimensional data
Prior art date
Application number
PCT/JP2022/010548
Other languages
English (en)
Japanese (ja)
Inventor
堪亮 坂本
Original Assignee
株式会社ネクステッジテクノロジー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ネクステッジテクノロジー
Priority to JP2023505628A (granted as JP7452917B2)
Publication of WO2022191276A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means

Definitions

  • the present invention relates to an operation input device, an operation input method, and a program for inputting information related to operator's operations.
  • Non-contact operation input devices are attracting attention as a means of lightening the operator's operation burden and of allowing an information processing device to be operated even during work at a surgical site, a cooking site, or the like.
  • Such an operation input device includes, for example, a camera that photographs an indicator such as an operator's finger, measures the three-dimensional position of the indicator, and performs an operation input to an information processing apparatus based on that three-dimensional position.
  • The interactive projector described in Patent Document 1 uses two cameras; it detects contact of a self-luminous indicator with the projection screen based on its light-emission pattern, and detects contact of a non-luminous indicator with the projection screen by means of a position detection unit. It is explained that the detection accuracy of contact of the indicator with the screen surface can thereby be improved.
  • the present invention has been made in view of the above circumstances, and aims to provide an operation input device, an operation input method, and a program capable of accurately detecting non-contact operation input with a simple configuration.
  • The operation input device of the present invention includes: a three-dimensional data generation unit that generates three-dimensional data based on output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to the pointing direction of an indicator;
  • a coordinate conversion unit that assumes a virtual camera whose line-of-sight direction is a second straight line obtained by rotating, by a predetermined angle, a first straight line extending in the line-of-sight direction of the real camera about a point on the first straight line located at a predetermined first distance from the real camera, and that transforms the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
  • a contact determination unit that calculates, based on the virtual three-dimensional data, a third distance that is the shortest distance from the virtual camera to the indicator, compares it with a second distance from the virtual camera to a virtual touch panel, and determines that the indicator has come into contact with the virtual touch panel when the third distance is equal to or less than the second distance;
  • and an operation input signal generation unit that generates an operation input signal based on the virtual three-dimensional data when the contact determination unit determines that the indicator has made contact.
  • FIG. 2 is a block diagram showing the hardware configuration of the operation input system according to Embodiment 1;
  • FIG. 2 is a functional block diagram showing the functional configuration of the operation input device according to Embodiment 1;
  • FIG. 3 is a schematic side view of a display means, a real camera, and a virtual touch panel according to Embodiment 1;
  • FIG. 4 is a diagram showing the positional relationship between a real camera and a virtual camera; FIG. 5 is a flowchart of operation input processing according to Embodiment 1;
  • FIG. 6 is a diagram showing installation positions of real cameras;
  • FIG. 7 is a schematic side view of a display means, a real camera, a virtual touch panel, and a hover surface according to Embodiment 2;
  • FIG. 8 is a perspective view showing a schematic configuration of a display means, a real camera, a virtual touch panel, and an instruction effective area according to Embodiment 3; FIG. 9 is a flowchart of operation input processing according to Embodiment 3; FIG. 10 is a diagram showing the arrangement of real cameras according to Embodiment 4; FIG. 11 is a schematic side view of a display means, a transparent plate, and a virtual touch panel according to another embodiment.
  • the operation input system 1 is an information processing system that performs processing based on an operation input signal generated by determining an operation using an operator's finger or other indicator.
  • FIG. 1 is a block diagram showing the hardware configuration of an operation input system 1 according to the first embodiment.
  • the operation input system 1 comprises a real camera 10 and an operation input device 20, as shown in FIG.
  • the real camera 10 is a 3D camera (3 Dimensional Camera) that outputs data representing 3D information. Any conventional method may be used to obtain the three-dimensional information, and for example, a stereo method may be used.
  • the real camera 10 is preferably capable of wide-angle photography, and is equipped with a wide-angle FOV 84° lens, for example.
  • a dedicated camera built into the operation input device 20 may be used.
  • The operation input device 20 is an arbitrary information processing terminal, and may be a general-purpose information processing terminal such as a personal computer, a smartphone, or a tablet terminal in which a program for operation input processing is installed, or a dedicated terminal. As shown in FIG. 1, the operation input device 20 includes a CPU (Central Processing Unit) 21, a RAM (Random Access Memory) 22, a ROM (Read Only Memory) 23, a communication interface 24, a storage unit 25, and a display means 26.
  • The operation input system 1 detects, based on the output data of the real camera 10, the operator's virtual touch operation on a virtual touch panel assumed in front of the display surface of the display means 26 of the operation input device 20.
  • the display means 26 of the operation input device 20 is an arbitrary display device that displays information such as images and characters.
  • The display means 26 may be a tangible display, such as a liquid crystal display or an organic EL (Electro-Luminescence) display built into or external to a personal computer, a smartphone, a tablet terminal, or the like, or it may be an intangible display device such as an aerial display.
  • the indicator is an object that can be moved by the operator in order to specify the position on the display screen of the display means 26, and has an elongated shape extending in the pointing direction.
  • the indicator is, for example, an operator's finger or a pointing stick.
  • the real camera 10 is fixed with respect to the display surface of the display means 26 of the operation input device 20 .
  • the line-of-sight direction of the real camera 10 (the direction toward the center of the field of view) is inclined with respect to the pointing direction of the indicator.
  • For example, when the display means 26 is a liquid crystal display of a personal computer and the real camera 10 is fixed to an edge of the liquid crystal display, the line-of-sight direction of the real camera 10 is directed toward the area in front of the center of the display surface of the display means 26 and is therefore inclined with respect to the direction perpendicular to the display surface.
  • In the following, a case where the display means 26 is a liquid crystal display of a personal computer and the real camera 10 is fixed to the central upper end portion of the liquid crystal display will be described.
  • the line-of-sight direction of the real camera 10 is obliquely downward with respect to the direction perpendicular to the display surface of the display means 26 .
  • the CPU 21 of the operation input device 20 controls each component of the operation input device 20, and executes each process including the operation input process by executing the programs stored in the ROM 23 and the storage unit 25.
  • the RAM 22 is a memory in which data can be read and written at high speed, and temporarily stores data output from the real camera 10 and data read from the storage unit 25 for data processing executed by the CPU 21 .
  • the ROM 23 is a read-only memory that stores programs for processing executed by the CPU 21, setting values, and the like.
  • the communication interface 24 is an interface through which data is transmitted and received with the real camera 10 .
  • the storage unit 25 is a large-capacity storage device, and is composed of a flash memory or the like.
  • the storage unit 25 stores the output data of the real camera 10 and the data generated by the processing of the CPU 21 .
  • the storage unit 25 further stores programs executed by the CPU 21 .
  • the display means 26 displays information such as images and characters generated by the CPU 21 .
  • By executing the operation input processing program stored in the storage unit 25, the CPU 21 and the RAM 22 function, as shown in FIG. 2, as a data acquisition unit 211, a three-dimensional data generation unit 212, a parameter acquisition unit 213, a coordinate conversion unit 214, a pointer detection unit 215, a contact determination unit 216, and an operation input signal generation unit 217.
  • the data acquisition unit 211 acquires output data of the real camera 10 .
  • the three-dimensional data generation unit 212 develops the output data acquired by the data acquisition unit 211 in a three-dimensional coordinate space to generate three-dimensional data.
  • The output data format of the real camera 10 may be any format, and the three-dimensional data generation unit 212 executes data processing according to the output data format of the real camera 10 in order to generate three-dimensional data in a predetermined format.
  • the output data of the real camera 10 may be directly used as three-dimensional data.
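  • As one possibility (not specified by the document), if the real camera outputs a depth map with known pinhole intrinsics, back-projecting it gives the three-dimensional data; a minimal sketch, with the depth-map format and the intrinsics fx, fy, cx, cy as assumptions:

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into 3D points in the real
    camera's coordinate space using a pinhole model. The actual output format
    of the real camera 10 is not specified in the text."""
    h, w = depth_m.shape
    v, u = np.mgrid[0:h, 0:w]                   # pixel row/column indices
    z = depth_m
    x = (u - cx) * z / fx                       # horizontal axis of the camera
    y = (v - cy) * z / fy                       # vertical axis of the camera
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]             # drop pixels with no depth
```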
  • the parameter acquisition unit 213 acquires parameters related to the position and orientation of the real camera 10 and the position and size of the virtual touch panel.
  • the parameters are determined in advance by an initial setting input by the administrator of the operation input device 20 .
  • the coordinate conversion unit 214 converts the three-dimensional data generated based on the output data of the real camera 10 into virtual three-dimensional data corresponding to the virtual touch panel.
  • a virtual camera is assumed in which the real camera 10 is virtually moved for conversion into virtual three-dimensional data. That is, the coordinate transformation unit 214 transforms the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera.
  • the pointer detection unit 215 detects the point closest to the virtual camera as the tip of the pointer based on the virtual three-dimensional data.
  • the contact determination unit 216 determines whether or not the tip of the pointer detected by the pointer detection unit 215 has come closer to the display unit 26 than the virtual touch panel.
  • FIG. 3 is a schematic diagram of the real camera 10, the display means 26, and the virtual touch panel 311 viewed from the side.
  • FIG. 4 is a diagram showing the positional relationship between the real camera 10 and the virtual camera 312. As shown in FIG.
  • the 3D data generation unit 212 generates 3D data based on the output data of the real camera 10 acquired by the data acquisition unit 211 .
  • the three-dimensional data generator 212 can generate three-dimensional data including the pointer 321.
  • the plane on which the virtual touch panel 311 extends is defined as an extension plane P
  • the plane parallel to the extension plane P that passes through a reference point that is the reference of the three-dimensional coordinates of the real camera 10 is defined as a reference plane B.
  • the distance between the extension plane P and the reference plane B is Pz.
  • the distance Pz is a value within a range that is determined according to specifications such as the viewing angle of the real camera 10 or the three-dimensional coordinate measurable range, and is determined by the administrator's input of initial settings.
  • the real camera 10 is fixed to the central upper end portion of the display means 26, and is oriented obliquely downward.
  • the line-of-sight direction of the real camera 10 is E
  • the direction passing through the center of the virtual touch panel 311 and perpendicular to the extension plane P is V
  • The angle formed by the direction E and the direction V is θ.
  • The angle θ is a parameter input by the administrator according to the orientation of the real camera 10. The administrator assumes the position and size of the virtual touch panel 311 and decides the distance Pz from the reference plane B to the virtual touch panel 311 together with the angle θ.
  • Transformation from the three-dimensional data of the real camera 10 to virtual three-dimensional data is performed assuming a virtual camera 312 in which the real camera 10 is rotated and translated, as shown in FIG.
  • the line-of-sight direction of the virtual camera 312 is defined as a direction V that passes through the center of the virtual touch panel 311 and is perpendicular to the virtual touch panel 311 .
  • the reference point for determining the virtual three-dimensional data of the virtual camera 312 is on the reference plane B.
  • Let S be the distance between the reference point of the real camera 10 and the specific point 313, where the specific point 313 is the intersection of the direction E and the direction V.
  • The parameters that the parameter acquisition unit 213 acquires through the administrator's input are the angle θ for defining the line-of-sight direction of the real camera 10, the distance Pz for defining the position of the virtual touch panel 311 shown in FIGS. 3 and 4, and the distance S for defining the specific point 313 about which the rotation from the real camera 10 to the virtual camera 312 is centered.
  • the administrator also considers the performance of the real camera 10 and determines the parameters. By setting these parameters, it is possible to specify a conversion method from three-dimensional data to virtual three-dimensional data.
  • That is, a virtual camera 312 is assumed whose line-of-sight direction V is the second straight line obtained by rotating the first straight line by the angle θ. Then, the three-dimensional data in the coordinate space of the real camera 10 is coordinate-transformed into virtual three-dimensional data in the coordinate space of the virtual camera 312.
  • the virtual touch panel 311 is separated from the reference point of the virtual camera 312 by Pz (second distance) in a direction parallel to the second straight line.
  • the shape of the virtual touch panel 311 may be any shape as long as it can correspond to the display surface of the display means 26 .
  • For example, the second straight line passes through the center point of the rectangular virtual touch panel 311, and the virtual touch panel 311 extends in a direction perpendicular to the second straight line.
  • the second straight line passes through the reference point of virtual camera 312 and the center of virtual touch panel 311 .
  • the coordinate conversion performed by the coordinate conversion unit 214 will be described.
  • The three-dimensional data generated by the three-dimensional data generation unit 212 based on the output data of the real camera 10 is, as shown in FIG. 4, represented by coordinates (x, y, z), with the line-of-sight direction E of the real camera 10 as the z-axis and the horizontal direction of the virtual touch panel as the x-axis.
  • When the coordinate space of the real camera 10 is rotated about a straight line passing through the specific point 313 and parallel to the x-axis, a coordinate space represented by the coordinates (x, y', z') is obtained, in which the line-of-sight direction V of the virtual camera 312 is the z'-axis and the horizontal direction of the virtual touch panel remains the x-axis.
  • After this rotation, the camera position is located at B', behind the reference plane B, so the camera position is translated along the direction V onto the reference plane B.
  • The translation distance is Dz = S − S·cosθ.
  • In this way, the three-dimensional data in the coordinate space (x, y, z) based on the output data of the real camera 10 can be converted, by this rotation and translation, into virtual three-dimensional data in the coordinate space (x, y', z'') of the virtual camera 312.
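  • As a concrete illustration of this rotation-and-translation, the following sketch (our own formulation, with the real camera's z-axis along E, its x-axis horizontal, and the angle in degrees as assumptions) converts points from the real camera's coordinate space into the virtual camera's coordinate space:

```python
import numpy as np

def real_to_virtual(points, theta_deg, S):
    """Convert points (N x 3, real camera space: x horizontal, z along E)
    into the virtual camera space (z'' along V). The virtual camera is the
    real camera rotated by theta about the line through the specific point 313
    parallel to the x-axis, then translated along V by Dz = S - S*cos(theta)
    onto the reference plane B. Axis conventions are assumptions."""
    t = np.radians(theta_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t),  np.cos(t)]])   # rotation of the camera frame (E -> V)
    C = np.array([0.0, 0.0, S])                   # specific point 313 on the real camera axis
    V = Rx @ np.array([0.0, 0.0, 1.0])            # line-of-sight direction of the virtual camera
    Dz = S - S * np.cos(t)                        # translation back onto reference plane B
    O_virtual = (C - Rx @ C) + Dz * V             # virtual camera reference point (lies on B)
    # Express every point in the virtual camera's coordinate space.
    return (np.asarray(points, dtype=float) - O_virtual) @ Rx
```

  • With θ = 0 this reduces to the identity, and the specific point 313 maps onto the virtual camera's axis at distance S·cosθ, which is consistent with the geometry of FIG. 4.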
  • the pointer detection unit 215 detects the point closest to the virtual camera 312 as the tip of the pointer 321 based on the virtual three-dimensional data converted by the coordinate conversion unit 214 .
  • the tip of the indicator 321 is the pointing position 322 of the operator.
  • the pointer detection unit 215 outputs virtual three-dimensional coordinates of the pointing position 322 based on the virtual three-dimensional data.
  • The contact determination unit 216 calculates the distance Vz (third distance) from the virtual camera 312 to the pointing position 322 in the z′′ direction (the direction V), and compares the distance Vz with the distance Pz (second distance) in the z′′ direction from the virtual camera 312 to the virtual touch panel.
  • In other words, the contact determination unit 216 compares the distance Vz, which is the shortest distance from the reference plane B passing through the reference point of the virtual camera 312 to the pointing position 322 of the indicator 321, with the distance Pz from the reference plane B to the virtual touch panel.
  • If Vz ≤ Pz, the pointing position 322 of the indicator 321 is at the same position as the virtual touch panel 311 or closer to the virtual camera 312 than the virtual touch panel 311, so it is determined that the indicator 321 has touched the virtual touch panel 311.
  • If Vz > Pz, it is determined that there is no input from the indicator 321 to the virtual touch panel 311.
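  • The pointer detection and contact determination described above amount to taking the minimum z'' value and comparing it with Pz; a minimal sketch, assuming column 2 of the point array holds the z'' coordinate:

```python
import numpy as np

def determine_contact(virtual_points, Pz):
    """Detect the tip of the indicator as the point nearest to the virtual
    camera along z'' (direction V) and decide contact with the virtual touch
    panel by comparing Vz with Pz."""
    pts = np.asarray(virtual_points)
    tip = pts[np.argmin(pts[:, 2])]    # pointing position 322
    Vz = tip[2]                        # third distance
    return Vz <= Pz, tip               # (contact?, virtual 3D coordinates of the tip)
```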
  • When the contact determination unit 216 determines that contact has occurred, the operation input signal generation unit 217 generates an operation input signal indicating the operator's operation, based on the virtual three-dimensional coordinates of the pointing position 322 at that time.
  • the operation input signal generation unit 217 generates an operation input signal for displaying a cursor at a position corresponding to the virtual three-dimensional coordinates of the pointing position 322 on the display screen of the display means 26 .
  • After that, the operation input signal generation unit 217 generates an operation input signal for moving the cursor, selecting, moving, and so on, according to the information on the temporal change of the pointing position 322 output by the contact determination unit 216. Further, in an information processing terminal including the operation input device 20 in which an arbitrary application is installed in advance, it generates an operation input signal for instructing execution of an application indicated by an icon displayed on the display means 26.
  • FIG. 5 is a flowchart showing operation input processing.
  • the operation input process starts when the administrator of the operation input system 1 executes the operation input program.
  • the parameter acquisition unit 213 displays a parameter input screen on the display means 26.
  • the administrator hypothesizes the position and orientation of the virtual touch panel 311 and inputs parameters for specifying the virtual touch panel 311 .
  • the parameter acquisition unit 213 acquires parameters input in advance by the administrator (step S101).
  • The first parameter acquired by the parameter acquisition unit 213 is the angle θ of the line-of-sight direction E of the real camera 10 with respect to the direction V perpendicular to the extension plane P of the virtual touch panel 311.
  • The second parameter is the distance S from the reference point of the real camera 10 to the specific point 313, where the specific point 313 is the intersection of the second straight line passing through the center of the virtual touch panel 311 and extending in the direction V and the first straight line extending in the line-of-sight direction E of the real camera 10.
  • the third parameter is the distance Pz from the reference plane B to the virtual touch panel 311 when the plane passing through the reference point of the real camera 10 and parallel to the virtual touch panel 311 is taken as the reference plane B.
  • the parameters acquired by parameter acquisition unit 213 may include other parameters for specifying virtual touch panel 311 .
  • Next, based on the acquired parameters, the coordinate transformation unit 214 determines a method of coordinate transformation from the three-dimensional data in the coordinate space of the real camera 10 to the virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102).
  • Here, the virtual camera 312 is a camera virtually placed at the position obtained by rotating the real camera 10 by the angle θ around the specific point 313 and translating it along the direction V onto the reference plane B.
  • the data acquisition unit 211 acquires the output data of the real camera 10 (step S103).
  • Next, the three-dimensional data generator 212 develops the output data of the real camera 10 in a three-dimensional coordinate space to generate three-dimensional data (step S104: three-dimensional data generation step).
  • The coordinate transformation unit 214 transforms the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 using the coordinate transformation method determined in step S102 (step S105: coordinate conversion step).
  • the pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointed position 322 of the pointer 321 based on the virtual three-dimensional data, and calculates the virtual three-dimensional coordinates of the pointed position 322 .
  • the contact determination unit 216 calculates the shortest distance Vz from the reference plane B to the indicated position 322 based on the virtual three-dimensional coordinates of the indicated position 322 (step S106).
  • the contact determination unit 216 compares the distance Pz from the reference plane B to the virtual touch panel 311 with the distance Vz to the indicated position 322 (step S107), and determines whether or not the distance Vz is equal to or less than the distance Pz (step S108: contact determination step).
  • If the distance Vz is equal to or less than the distance Pz (step S108: Yes), the coordinates on the virtual touch panel 311 are calculated based on the virtual three-dimensional coordinates of the pointing position 322 obtained in step S106 and are output to the operation input signal generation unit 217 (step S109). If the distance Vz is greater than the distance Pz (step S108: No), the process returns to step S103.
  • After the coordinates on the virtual touch panel 311 obtained in step S109 are output, the process ends if the administrator issues an instruction to end the operation input process (step S110: Yes). If there is no end command (step S110: No), the process returns to step S103.
  • the operation input signal generation unit 217 generates an operation input signal based on the time change of the coordinates on the virtual touch panel 311 obtained in step S109 (operation input signal generation step) and outputs it.
  • the operation input signal is a signal for instructing selection, movement, or the like by moving a cursor, or a signal for instructing execution of an application installed in advance.
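  • Put together, the flow of FIG. 5 can be sketched as the following loop; every argument is a placeholder callable standing in for one of the units described above (the names are ours, not the document's):

```python
import numpy as np

def operation_input_loop(read_frame, to_points, to_virtual, Pz, emit, should_stop):
    """Sketch of steps S103-S110. read_frame stands in for the data acquisition
    unit 211, to_points for the 3D data generation unit 212, to_virtual for the
    coordinate conversion unit 214, and emit for the operation input signal
    generation unit 217."""
    while not should_stop():                       # S110: repeat until an end command
        pts = to_virtual(to_points(read_frame()))  # S103-S105
        tip = pts[np.argmin(pts[:, 2])]            # S106: pointing position 322 and Vz
        if tip[2] <= Pz:                           # S107-S108: contact determination
            emit(tip)                              # S109: pass coordinates to signal generation
```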
  • As described above, the operation input device 20 converts the three-dimensional data based on the output data of the real camera 10 into virtual three-dimensional data of the virtual camera 312 and, by comparing the pointing position 322 with the position of the virtual touch panel 311, detects the touch operation of the indicator 321 on the virtual touch panel 311.
  • the real camera 10 is fixed with respect to the display means 26 with the line-of-sight direction directed in a direction inclined with respect to the pointing direction of the indicator 321 .
  • a three-dimensional data generation unit 212 of the operation input device 20 generates three-dimensional data based on the output data of the real camera 10 .
  • The coordinate transformation unit 214 assumes a virtual camera 312 whose line-of-sight direction is the direction V obtained by rotating the line-of-sight direction E of the real camera 10 by the angle θ about the specific point 313, and converts the three-dimensional data of the real camera 10 into virtual three-dimensional data of the virtual camera 312.
  • The pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointing position 322, and the contact determination unit 216 assumes, as the virtual touch panel 311, a surface separated from the virtual camera 312 by the distance Pz in the direction V.
  • The contact determination unit 216 compares the distance Vz from the virtual camera 312 to the pointing position 322 in the direction V with the distance Pz from the virtual camera 312 to the virtual touch panel 311, and determines that the indicator 321 has touched the virtual touch panel 311 when the distance Vz is equal to or less than the distance Pz. This makes it possible to accurately detect non-contact operation input with a simple configuration in which the real camera 10 is installed with its line-of-sight direction inclined.
  • When the coordinate transformation unit 214 determines the coordinate transformation method, by changing the direction of rotation from the real camera 10 to the virtual camera 312, three-dimensional data from a real camera installed at another location can also be converted into virtual three-dimensional data of the virtual camera 312.
  • In the present embodiment, the rotation angle θ from the real camera 10 to the virtual camera 312 is set with a straight line parallel to the x-axis as the rotation axis, and the coordinate transformation represented by this rotation is performed.
  • When the real camera is installed at the position of the real camera 11 shown in FIG. 6, a rotation angle with a straight line parallel to the y-axis as the rotation axis is set, and the coordinate transformation represented by this rotation is performed.
  • Alternatively, a rotation angle with a straight line parallel to the x-axis as the rotation axis and a rotation angle with a straight line parallel to the y-axis as the rotation axis may both be set, and coordinate transformation represented by the rotation about the straight line parallel to the x-axis and the rotation about the straight line parallel to the y-axis may be performed.
  • That is, the rotation from the real camera 10 to the virtual camera 312 is a rotation about a straight line passing through the specific point 313 and parallel to the extending direction of the virtual touch panel 311, and coordinate transformation represented by such a rotation is performed. Thereby, a virtual touch operation on the virtual touch panel 311 can be detected regardless of the installation position of the real camera 10.
  • As long as the real camera 10 can capture the indicator 321 from a direction inclined with respect to the pointing direction of the indicator 321, its position may be above, below, to the left of, to the right of, or behind the outer periphery of the virtual touch panel 311 as seen from the operator operating toward the virtual touch panel 311.
  • By setting the angle θ to a value of 90° or more, coordinate conversion can be performed as appropriate even in such cases, and a virtual touch operation on the virtual touch panel 311 can be detected.
  • FIG. 7 is a schematic side view of the display unit 26, the real camera 10, the virtual touch panel 311, and the hover surface 315 according to the second embodiment.
  • the operation input system 1 according to the second embodiment has the same configuration as that of the first embodiment, and executes the same operation input process. The difference is that the hover plane 315 is assumed.
  • the hover surface 315 is a surface parallel to the virtual touch panel 311 and is located at a predetermined distance Hz from the virtual touch panel 311 .
  • the distance Hz between the virtual touch panel 311 and the hover surface 315 can be set by the administrator, and is a value between 5 cm and 10 cm, for example.
  • the functions and operations of the data acquisition unit 211, the three-dimensional data generation unit 212, the coordinate conversion unit 214, and the pointer detection unit 215 are the same as in the first embodiment.
  • the parameter acquisition unit 213 acquires the distance Hz from the virtual touch panel 311 to the hover plane 315 in addition to the parameter angle ⁇ , the distance S, and the distance Pz in the first embodiment.
  • In addition to the determination of contact with the virtual touch panel 311, the contact determination unit 216 compares the distance Vz from the reference plane B to the pointing position 322 with the distance from the reference plane B to the hover surface 315 (fourth distance: Pz + Hz).
  • When the distance Vz becomes equal to or less than the distance (Pz + Hz), the operation input signal generation unit 217 generates an operation signal for performing a display indicating that the pointing position is approaching the virtual touch panel 311.
  • For example, the operation input signal generator 217 may cause the display means 26 to display a cursor when the distance Vz becomes equal to or less than the distance (Pz + Hz). This allows the operator to recognize the current pointing position. Further, the size or shape of the cursor may be changed so that the distance to the virtual touch panel 311 can be recognized. For example, as the pointing position 322 crosses the hover surface 315 and approaches the virtual touch panel 311, the cursor may be made smaller step by step, or its color may be made darker step by step.
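  • One way to realize this hover feedback is a three-way classification of Vz plus a simple cursor-scaling rule; the labels and the scaling curve below are illustrative choices, not taken from the document:

```python
import numpy as np

def hover_state(Vz, Pz, Hz):
    """Classify the pointing position relative to the virtual touch panel 311
    (distance Pz) and the hover surface 315 (distance Pz + Hz)."""
    if Vz <= Pz:
        return "touch"       # at or past the virtual touch panel
    if Vz <= Pz + Hz:
        return "hover"       # between the hover surface and the panel
    return "far"             # not yet past the hover surface

def cursor_scale(Vz, Pz, Hz, max_scale=1.0, min_scale=0.4):
    """Shrink the cursor as the pointing position approaches the panel."""
    t = np.clip((Pz + Hz - Vz) / Hz, 0.0, 1.0)   # 0 at the hover surface, 1 at the panel
    return max_scale - t * (max_scale - min_scale)
```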
  • In the case of a physical touch panel, the operator can visually recognize the distance to the touch panel, but in the case of the virtual touch panel 311 set in space, the distance to the virtual touch panel 311 cannot be visually recognized.
  • In the present embodiment, before the operator touches the virtual touch panel 311, a display is performed when the pointing position 322 passes the hover surface 315 and approaches the virtual touch panel 311.
  • This allows the operator to recognize the area where the cursor is located and the distance to the virtual touch panel.
  • As a result, the positional error when the virtual touch panel 311 is touched can be reduced, and operability in space can be improved.
  • As described above, in the second embodiment, the hover surface 315 is assumed to be spaced apart from the virtual touch panel 311 in the direction opposite to the display surface of the display means 26, and the contact determination unit 216 determines that the pointing position 322 has passed the hover surface 315 before it touches the virtual touch panel 311. Then, the operation input signal generation unit 217 generates an operation signal for displaying that the pointing position 322 is approaching the virtual touch panel 311. This makes it possible for the operator to recognize the area where the pointing position 322 exists and the distance to the virtual touch panel 311.
  • FIG. 8 is a perspective view showing a schematic configuration of display means 26, real camera 10, virtual touch panel 311, and instruction effective area 330 according to the third embodiment.
  • FIG. 9 is a flowchart of operation input processing according to the third embodiment.
  • the operation input system 1 according to Embodiment 3 has the same configuration as Embodiments 1 and 2, but the functions of the parameter acquisition section 213 and the pointer detection section 215 are partially different.
  • In the third embodiment, the operation input device 20 assumes an instruction effective area 330 in which an instruction by the indicator 321 is valid; the indicator 321 is detected only inside the instruction effective area 330 and is not detected outside the instruction effective area 330.
  • The instruction effective area 330 is an area limited from the spatial area in front of the display means 26, as shown in FIG. 8.
  • The shape of the boundary of the instruction effective area 330 is arbitrary; for example, it may be a rectangular parallelepiped as shown in FIG. 8, a cylindrical or elliptic-cylindrical shape whose side surface is perpendicular to the virtual touch panel 311, or a shape whose cross-section expands or narrows with the distance from the virtual touch panel 311. In the present embodiment, a case where the boundary of the instruction effective area 330 is a rectangular parallelepiped will be described.
  • A hover surface 315 similar to that in the second embodiment may further be assumed inside the instruction effective area 330, and it may be determined whether the indicator 321 has passed through the hover surface 315 before the pointing position 322 of the indicator 321 touches the virtual touch panel 311.
  • the parameter acquisition unit 213 acquires parameters for specifying the instruction effective area 330 in addition to parameters for specifying the virtual touch panel 311 .
  • The parameters for specifying the virtual touch panel 311 are, as in Embodiment 1, the angle θ of the line-of-sight direction E of the real camera 10, the distance S from the reference point of the real camera 10 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
  • the parameter for specifying the instruction effective area 330 is an arbitrary parameter indicating the boundary of the instruction effective area 330 in the coordinate space of the virtual camera 312.
  • For example, it is the upper and lower bounds of the virtual three-dimensional coordinate values indicating the boundary of the instruction effective area 330, or the coefficients of a three-dimensional function representing the boundary surface of the instruction effective area 330.
  • the parameters acquired by parameter acquisition section 213 may include other parameters for specifying virtual touch panel 311 or instruction effective area 330 .
  • The pointer detection unit 215 detects, as the pointing position 322 of the indicator 321, the point closest to the virtual camera 312 based on the virtual three-dimensional data that lies within the instruction effective area 330 specified by the parameters, among the virtual three-dimensional data output by the coordinate conversion unit 214.
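  • For the rectangular-parallelepiped boundary used in this embodiment, excluding data outside the instruction effective area 330 reduces to an axis-aligned box test in the virtual camera space; a minimal sketch, where the lower/upper corner arguments are the assumed form of the boundary parameters:

```python
import numpy as np

def filter_effective_area(virtual_points, lower, upper):
    """Keep only virtual 3D points inside the instruction effective area 330,
    modeled as an axis-aligned box with corners `lower` and `upper` given in
    the virtual camera's coordinate space."""
    pts = np.asarray(virtual_points)
    inside = np.all((pts >= np.asarray(lower)) & (pts <= np.asarray(upper)), axis=1)
    return pts[inside]
```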
  • Other configurations of the operation input device 20 are the same as in the first and second embodiments. Operation input processing of the operation input device 20 configured in this manner will be described with reference to the flowchart shown in FIG. 9. In FIG. 9, the processes assigned the same reference numerals as those in FIG. 5 are the same as those in the first embodiment.
  • the parameter acquisition unit 213 displays a parameter input screen on the display means 26.
  • the administrator hypothesizes the position and orientation of the virtual touch panel 311 and inputs parameters for specifying the virtual touch panel 311 and parameters for specifying the instruction effective area 330 .
  • the parameter acquisition unit 213 acquires parameters input in advance by the administrator (step S201).
  • The parameters for specifying the virtual touch panel 311 acquired by the parameter acquisition unit 213 are, as in Embodiment 1, the angle θ of the line-of-sight direction E of the real camera 10, the distance S from the reference point of the real camera 10 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
  • a parameter for specifying the instruction effective area 330 is a parameter indicating the boundary of the instruction effective area 330 in the coordinate space of the virtual camera 312 .
  • the parameters acquired by parameter acquisition section 213 may include other parameters for specifying virtual touch panel 311 or instruction effective area 330 .
  • Next, based on the acquired parameters, the coordinate transformation unit 214 determines a method of coordinate transformation from the three-dimensional data in the coordinate space of the real camera 10 to the virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102).
  • the data acquisition unit 211 acquires the output data of the real camera 10 (step S103).
  • Next, the three-dimensional data generator 212 develops the output data of the real camera 10 in a three-dimensional coordinate space to generate three-dimensional data (step S104: three-dimensional data generation step).
  • Then, the coordinate conversion unit 214 transforms the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 using the coordinate transformation method determined in step S102 (step S105: coordinate conversion step).
  • Next, the pointer detection unit 215 excludes, from the virtual three-dimensional data coordinate-transformed in step S105, the data outside the instruction effective area 330 specified based on the parameters acquired in step S201 (step S202).
  • From the remaining data, the pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointing position 322 of the indicator 321 and calculates its virtual three-dimensional coordinates. Then, the contact determination unit 216 calculates the shortest distance Vz from the reference plane B to the pointing position 322 based on the virtual three-dimensional coordinates of the pointing position 322 (step S106).
  • the contact determination unit 216 compares the distance Pz from the reference plane B to the virtual touch panel 311 with the distance Vz to the indicated position 322 (step S107), and determines whether or not the distance Vz is equal to or less than the distance Pz (step S108: contact determination step).
  • If the distance Vz is equal to or less than the distance Pz (step S108: Yes), the coordinates on the virtual touch panel 311 are calculated based on the virtual three-dimensional coordinates of the pointing position 322 obtained in step S106 and are output to the operation input signal generation unit 217 (step S109). If the distance Vz is greater than the distance Pz (step S108: No), the process returns to step S103.
  • After the coordinates on the virtual touch panel 311 obtained in step S109 are output, the process ends if the administrator issues an instruction to end the operation input process (step S110: Yes). If there is no end command (step S110: No), the process returns to step S103.
  • the detailed processing of steps S106 to S110 is the same as that of the first embodiment.
  • the operation input signal generation unit 217 generates an operation input signal based on the time change of the coordinates on the virtual touch panel 311 obtained in step S109 (operation input signal generation step) and outputs it.
  • the operation input signal is a signal for instructing selection, movement, or the like by moving a cursor, or a signal for instructing execution of an application installed in advance.
  • As described above, the operation input device 20 extracts the data within the instruction effective area 330 from the virtual three-dimensional data obtained by converting the three-dimensional data based on the output data of the real camera 10 into the coordinate space of the virtual camera 312, and detects the touch operation of the indicator 321 on the virtual touch panel 311 by comparing the pointing position 322 with the position of the virtual touch panel 311.
  • In some configurations, only a part of the display means 26 is targeted for instruction operation, and other operation units 331 such as a card slot or a bar-code reader are provided nearby. In such a case, the operation input device 20 must avoid erroneously detecting an operation on the other operation unit 331 as an instruction operation.
  • In the third embodiment, the instruction effective area 330 is set and only the pointing position 322 of an indicator 321 existing inside the instruction effective area 330 is detected, so a non-contact operation input can be detected accurately without erroneously detecting an operation on another operation unit as an instruction operation.
  • As described above, in the third embodiment, the pointer detection unit 215 of the operation input device 20 detects, as the pointing position 322, the point closest to the virtual camera 312 based on the data within the instruction effective area 330 of the virtual three-dimensional data of the virtual camera 312, and the contact determination unit 216 compares the distance Vz from the virtual camera 312 to the pointing position 322 with the distance Pz from the virtual camera 312 to the virtual touch panel 311 and determines that the indicator 321 has touched the virtual touch panel 311 when the distance Vz is equal to or less than the distance Pz. This makes it possible to accurately detect a non-contact operation input without erroneously detecting another operation performed by the operator outside the instruction effective area 330.
  • FIG. 10 is a diagram showing the arrangement of real cameras according to the fourth embodiment.
  • the operation input system 1 according to Embodiment 4 executes the same operation input processing as in Embodiment 1, but differs in that two or more real cameras including real cameras 12 and 13 are used. Although the number of real cameras may be any number of two or more, in the following description, a configuration including real cameras 12 and 13 will be described.
  • the operator may perform operation input using two or more indicators. For example, by touching the touch panel with two fingers at the same time, various operation inputs are possible depending on the distance between the two fingers or the moving direction and moving distance of the two fingers.
  • the operation input device 20 according to the fourth embodiment can reliably detect an operation input to the virtual touch panel 311 even when there are two or more pointers 321 .
  • the first real camera 12 and the second real camera 13 are fixed at locations separated from each other.
  • the first real camera 12 is installed at the upper right end of the display means 26 and the second real camera 13 is installed at the upper left end of the display means 26 . That is, the first real camera 12 is positioned at the upper right side of the virtual touch panel 311 , and the second real camera 13 is positioned at the upper left side of the virtual touch panel 311 .
  • the line-of-sight directions of the real cameras 12 and 13 are directions inclined with respect to the pointing direction of the indicator 321 and different from each other.
  • the configuration of the operation input device 20 is the same as that of the first embodiment.
  • An administrator of the operation input device 20 selects a master camera from among the two or more real cameras 12 and 13, and sets parameters for the master camera.
  • For the real camera 12, which is the master camera, the parameter acquisition unit 213 obtains an angle θx about a straight line passing through the specific point 313 and parallel to the x-axis, an angle θy about a straight line passing through the specific point 313 and parallel to the y-axis, the distance S from the real camera 12 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
  • the 3D data generation unit 212 generates 3D data based on the output data of the two or more real cameras 12 and 13 and the position information of the two or more real cameras 12 and 13 . Note that the three-dimensional data generation unit 212 performs calibration for generating a single three-dimensional data for two or more real cameras 12 and 13 in advance.
  • the three-dimensional data generation unit 212 expands the output data of the real camera 12, which is the master camera, into a three-dimensional coordinate space to generate three-dimensional data.
  • The three-dimensional data generator 212 also generates three-dimensional data based on the output data of the real camera 13, which is a slave camera; this is three-dimensional data that has been calibrated and corrected based on the positional relationship between the two cameras and the output data.
  • The three-dimensional data generator 212 supplements the three-dimensional data based on the output data of the real camera 12 with the calibrated three-dimensional data based on the output data of the real camera 13 to generate a single set of three-dimensional data.
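  • A minimal sketch of this supplementing step, assuming the calibration is available as a rotation matrix and translation vector that map the slave camera's coordinates into the master camera's coordinates:

```python
import numpy as np

def merge_point_clouds(master_pts, slave_pts, R_slave_to_master, t_slave_to_master):
    """Express the slave camera's points in the master camera's coordinate space
    and append them to the master camera's points, yielding a single set of
    three-dimensional data. The extrinsics come from the prior calibration the
    text mentions; their exact form here is an assumption."""
    slave_in_master = (np.asarray(slave_pts) @ np.asarray(R_slave_to_master).T
                       + np.asarray(t_slave_to_master))
    return np.vstack([np.asarray(master_pts), slave_in_master])
```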
  • The coordinate conversion unit 214 assumes a virtual camera 312 whose line-of-sight direction V is the direction obtained by rotating the line-of-sight direction E of the real camera 12 around the specific point 313 by the angle θx about a straight line parallel to the x-axis and by the angle θy about a straight line parallel to the y-axis, and whose reference point is on the reference plane B. Then, the coordinate conversion unit 214 determines a coordinate transformation method for transforming the three-dimensional data based on the output data of the real cameras 12 and 13 generated by the three-dimensional data generation unit 212 into virtual three-dimensional data of the virtual camera 312.
  • the coordinate transformation unit 214 transforms the three-dimensional data generated by the three-dimensional data generation unit 212 into virtual three-dimensional data using the determined coordinate transformation method.
  • the contact determination unit 216 hypothesizes a surface that is separated from the virtual camera 312 by a distance Pz in the direction V and that is perpendicular to the direction V as the virtual touch panel 311 . Then, the distance Vz from the virtual camera 312 to the pointers 323 and 324 in the direction V is calculated based on the virtual three-dimensional data coordinate-transformed from the three-dimensional data from the real cameras 12 and 13 .
  • Specifically, the contact determination unit 216 detects, as one or more distances Vz (third distances), the local minimum values of the distance in the direction V from the virtual camera 312 to the indicators 323 and 324 calculated based on the virtual three-dimensional data. In other words, the contact determination unit 216 plots the distances from the virtual camera 312 to points in the three-dimensional space based on the virtual three-dimensional data and detects the points showing minimal distances as the pointing positions of the indicators 323 and 324.
  • The contact determination unit 216 then compares each of the one or more distances Vz with the distance Pz from the virtual camera 312 to the virtual touch panel 311, and determines that the corresponding indicator 323 or 324 has touched the virtual touch panel when its distance Vz is equal to or less than the distance Pz.
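  • Detecting two or more contacting indicators can be sketched as a simple suppression scheme: collect every point at or past the panel (z'' ≤ Pz), then keep the nearest point of each group of neighbors. The greedy grouping and the 5 cm separation threshold are illustrative, not taken from the document:

```python
import numpy as np

def contact_positions(virtual_points, Pz, separation=0.05):
    """Return one position per contacting indicator (e.g. two fingers give two
    positions). Points with z'' <= Pz are contact candidates; candidates closer
    than `separation` in the x-y plane are treated as the same indicator."""
    pts = np.asarray(virtual_points)
    candidates = pts[pts[:, 2] <= Pz]
    positions = []
    for p in candidates[np.argsort(candidates[:, 2])]:   # nearest to the camera first
        if all(np.linalg.norm(p[:2] - q[:2]) >= separation for q in positions):
            positions.append(p)                          # a new indicator tip
    return positions
```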
  • the effect of using two or more real cameras 12 and 13 will be described.
  • When viewed from the real camera 12 alone, the indicator 324 may be hidden behind the indicator 323 and cannot be detected.
  • In the present embodiment, the three-dimensional data is supplemented by the output data of the real camera 13, which has a different installation position and line-of-sight direction, so the virtual three-dimensional coordinates of the pointing position of the indicator 324 can also be output.
  • Because the virtual three-dimensional coordinates of the pointing positions are output based on the output data of the two or more real cameras 12 and 13, the problem of failing to detect a pointing position can be avoided.
  • the operation input signal generation unit 217 generates an operation input signal based on the time change of the coordinates on the virtual touch panel of the pointing position determined by the contact determination unit 216 to be in contact. At this time, when contact at two or more different designated positions is determined, operation input signals corresponding to two or more preset contacts are generated.
  • As described above, the operation input system 1 according to the fourth embodiment includes two or more real cameras 12 and 13 that are fixed apart from each other and whose line-of-sight directions are inclined with respect to the pointing directions of the indicators 323 and 324 and different from each other.
  • The three-dimensional data generation unit 212 generates three-dimensional data based on the output data of the two or more real cameras 12 and 13, and the coordinate conversion unit 214 converts it into virtual three-dimensional data using a coordinate transformation method based on the parameters related to the real camera 12, which is the master camera.
  • The contact determination unit 216 acquires one or more distances Vz, which are the local minimum values of the distance in the direction V from the virtual camera 312 to the indicators 323 and 324 calculated based on the virtual three-dimensional data, and thereby detects the pointing positions of one or more indicators 323 and 324.
  • As a result, an indicator 324 that cannot be detected by one real camera 12 can still be detected by the other real camera 13, enabling reliable operation input by a plurality of touches.
  • In the above embodiments, the real cameras 10-13 are installed at an edge of the display means 26 and the administrator sets the parameters related to the coordinate transformation, but the real cameras 10-13 may instead be built into the operation input device 20. In that case, the line-of-sight direction of the real cameras 10-13 is tilted with respect to the direction perpendicular to the display surface of the display means 26 when the operation input device 20 is assembled, and a coordinate transformation method determined according to that tilt angle may be used.
  • A transparent plate 316 may be further installed in front of the display means 26 and the real camera 10, as shown in FIG. 11.
  • the transparent plate 316 is an arbitrary transparent plate that transmits the light output by the display means 26, such as a glass plate or an acrylic plate.
  • the display means 26 and the real camera 10 can be protected, and the operator can be urged to operate in a space a certain distance away.
  • In the above embodiments, one rectangular and flat virtual touch panel 311 corresponding to one rectangular display surface of the display means 26 is used to detect an operation input, but the shape and size of the virtual touch panel are arbitrary and it does not have to be flat.
  • For example, two or more virtual touch panels 311 may be assumed, the three-dimensional data obtained by the real cameras 10-13 may be coordinate-transformed by two or more virtual cameras 312, and an operation input to each virtual touch panel 311 may be detected.
  • In this case, the parameter acquisition unit 213 acquires parameters including the shape and size of the virtual touch panel 311, and the coordinate conversion unit 214 may perform coordinate transformation that includes enlargement, reduction, or deformation of the coordinate values of the three-dimensional data according to the parameters representing the shape and size of the virtual touch panel 311.
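  • When the virtual touch panel is the simple flat, rectangular panel centered on the virtual camera's axis, mapping a contacting pointing position to display coordinates is a linear scaling of its x, y components; a minimal sketch (the pixel convention and the linear mapping are assumptions):

```python
def to_display_pixels(pointing_pos, panel_w, panel_h, screen_w, screen_h):
    """Map the x, y components of the pointing position 322 (virtual camera
    space, panel centered on the z'' axis) to pixel coordinates on the display
    surface of the display means 26."""
    x, y = pointing_pos[0], pointing_pos[1]
    u = (x / panel_w + 0.5) * screen_w     # 0 .. screen_w across the panel width
    v = (y / panel_h + 0.5) * screen_h     # 0 .. screen_h across the panel height
    return u, v
```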
  • In Embodiment 4, two or more real cameras 12 and 13 are used to detect one or more pointing positions, but even with a single real camera 10, one or more distances Vz that are local minimum values of the distance from the virtual camera 312 to the indicator 321 may be acquired, and the pointing positions of one or more indicators 321 may be detected.
  • The program may be stored and distributed in a computer-readable recording medium such as a CD-ROM (Compact Disc Read-Only Memory), a DVD (Digital Versatile Disc), or an MO (Magneto-Optical Disc), and a computer capable of realizing each of the above functions may be configured by installing the program in the computer.
  • When each function is realized by sharing between an OS (Operating System) and an application, or by cooperation between the OS and an application, only the portions other than the OS may be stored in the recording medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

In the present invention, a real camera 10 is fixed in place such that its line-of-sight direction is inclined with respect to the pointing direction of an indicator 321. A three-dimensional data generation unit 212 of an operation input device 20 generates three-dimensional data on the basis of output data from the real camera 10. A coordinate conversion unit 214 assumes a virtual camera 312 whose line-of-sight direction is rotated by an angle θ from the line-of-sight direction of the real camera 10 about a specific point 313 as the center, and converts the three-dimensional data of the real camera 10 into virtual three-dimensional data of the virtual camera 312. A contact determination unit 216 determines that the indicator 321 has come into contact with a virtual touch panel 311 if the shortest distance from the virtual camera 312 to the indicator 321, calculated on the basis of the virtual three-dimensional data, is not greater than the distance from the virtual camera 312 to the virtual touch panel 311.
PCT/JP2022/010548 2021-03-10 2022-03-10 Operation input device, operation input method, and program WO2022191276A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023505628A JP7452917B2 (ja) 2021-03-10 2022-03-10 操作入力装置、操作入力方法及びプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021038462 2021-03-10
JP2021-038462 2021-03-10

Publications (1)

Publication Number Publication Date
WO2022191276A1 (fr)

Family

ID=83228067

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010548 WO2022191276A1 (fr) 2021-03-10 2022-03-10 Operation input device, operation input method, and program

Country Status (2)

Country Link
JP (1) JP7452917B2 (fr)
WO (1) WO2022191276A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011175623A (ja) * 2010-01-29 2011-09-08 Shimane Prefecture Image recognition device, operation determination method, and program
JP2012137989A (ja) * 2010-12-27 2012-07-19 Sony Computer Entertainment Inc Gesture operation input processing device and gesture operation input processing method
WO2014034031A1 (fr) * 2012-08-30 2014-03-06 パナソニック株式会社 Information input device and information display method
JP2016134022A (ja) * 2015-01-20 2016-07-25 エヌ・ティ・ティ アイティ株式会社 Virtual touch panel pointing system
JP2019219820A (ja) * 2018-06-18 2019-12-26 チームラボ株式会社 Video display system, video display method, and computer program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4608326B2 (ja) 2005-01-26 2011-01-12 株式会社竹中工務店 Pointing action recognition device and pointing action recognition program


Also Published As

Publication number Publication date
JP7452917B2 (ja) 2024-03-19
JPWO2022191276A1 (fr) 2022-09-15

Similar Documents

Publication Publication Date Title
US9001208B2 (en) Imaging sensor based multi-dimensional remote controller with multiple input mode
JP5921835B2 (ja) Input device
KR100851977B1 (ko) Method and apparatus for controlling a user interface of an electronic device using a virtual plane
JP5808712B2 (ja) Video display device
US10318152B2 (en) Modifying key size on a touch screen based on fingertip location
US20140002348A1 (en) Measuring device and measuring method
EP3032375B1 (fr) Input operation system
JP2014197380A (ja) Image projection device, system, image projection method, and program
JP6176013B2 (ja) Coordinate input device and image processing device
EP3037936B1 (fr) Image projection apparatus and system using interactive input-output capability
US20110193969A1 (en) Object-detecting system and method by use of non-coincident fields of light
JP2015212927A (ja) Input operation detection device, image display device provided with input operation detection device, and projector system
JP2016103137A (ja) User interface device, image processing device, and control program
JP2014115876A (ja) Remote operation method of an operated terminal using a three-dimensional touch panel
JP2014115876A5 (fr)
US20120056808A1 (en) Event triggering method, system, and computer program product
WO2022191276A1 (fr) Operation input device, operation input method, and program
EP3032380B1 (fr) Image projection apparatus and system using interactive input-output capability
JP2018018308A (ja) Information processing apparatus, control method therefor, and computer program
JP6898021B2 (ja) Operation input device, operation input method, and program
JP6555958B2 (ja) Information processing apparatus, control method therefor, program, and storage medium
KR101573287B1 (ko) Method and apparatus for displaying a touch position in an electronic device
EP3059664A1 (fr) Method for controlling a device by gestures and system for controlling a device by gestures
JP2016024518A (ja) Coordinate detection system, coordinate detection method, information processing apparatus, and program
JP6476626B2 (ja) Indicator determination device, coordinate input device, indicator determination method, coordinate input method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22767230

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023505628

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22767230

Country of ref document: EP

Kind code of ref document: A1