WO2019130900A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program Download PDF

Info

Publication number
WO2019130900A1
WO2019130900A1 (PCT/JP2018/042527)
Authority
WO
WIPO (PCT)
Prior art keywords
correction
display
user
unit
depth
Prior art date
Application number
PCT/JP2018/042527
Other languages
French (fr)
Japanese (ja)
Inventor
浩丈 市川
諒介 村田
広幸 安賀
俊逸 小原
Original Assignee
ソニー株式会社
Priority date
Filing date
Publication date
Application filed by ソニー株式会社
Publication of WO2019130900A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 Details of the operation on graphic patterns
    • G09G5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • G09G5/38 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory, with means for controlling the display position
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/64 Constructional details of receivers, e.g. cabinets or dust covers

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a program.
  • AR Augmented Reality
  • various types of information for example, a virtual object
  • a virtual object can be presented to the user in association with the position of the user in the real space.
  • a gesture area associated with at least one operation target is set based on the reference positions of a plurality of operation targets operable by the user, and then, via the gesture interface in the gesture area It is described to control the operation of each operation target.
  • the present disclosure proposes a novel and improved information processing apparatus, an information processing method, and a program that can highly conveniently correct parameters related to display of a virtual object based on the result of depth sensing.
  • an acquisition unit for acquiring a result of depth sensing of a real object corresponding to a viewpoint position of a user, and a correction for correcting a parameter related to display of a virtual object based on the result of depth sensing of the real object
  • An information processing apparatus comprising: a display control unit configured to display an object for display on a display unit corresponding to the user.
  • an information processing method including a processor displaying a correction object on a display unit corresponding to the user.
  • an acquisition unit that acquires a result of depth sensing of a real object corresponding to a viewpoint position of a user, and a parameter related to display of a virtual object based on the result of depth sensing of the real object
  • parameters relating to display of a virtual object based on the result of depth sensing can be corrected with high convenience.
  • the effect described here is not necessarily limited, and may be any effect described in the present disclosure.
  • FIG. 5 is a diagram schematically illustrating an example of correction of a parameter related to the display position of a virtual object based on movement of a depth object in the screen coordinate system according to the first embodiment.
  • FIGS. 6A to 6D are diagrams showing a specific example of the flow of a correction start instruction and a correction end instruction.
  • FIGS. 7A and 7B are views showing a display example of a depth object and four correction objects according to the first embodiment.
  • FIG. 8 is a diagram showing the flow of processing according to the first embodiment.
  • FIGS. 9A to 9C are diagrams showing a specific example of correction of a parameter related to the display position of a virtual object according to Application Example 1 of the first embodiment.
  • FIGS. 10A and 10B are diagrams showing a specific example of correction of a parameter related to the display position of a virtual object according to Application Example 2 of the first embodiment.
  • FIG. 11 is a diagram schematically illustrating an example of correction of a parameter related to the display position of a virtual object based on movement of a depth object in the camera coordinate system according to Application Example 4 of the first embodiment.
  • FIG. 12 is a diagram showing an example of classification of the types of cases in which the accuracy of a depth sensing result is low.
  • FIG. 13 is a block diagram showing an example of the functional configuration of the eyewear 10 according to the second embodiment.
  • A diagram showing a specific example of a parameter set according to the second embodiment.
  • FIG. 15A is a diagram showing a display example of animations corresponding to the respective parameter sets according to the second embodiment.
  • An enlarged view of the moving image 90c shown in FIG. 15A.
  • Diagrams each showing a part of the flow of processing according to the second embodiment.
  • A diagram showing an example of the hardware configuration of the eyewear 10 common to the embodiments.
  • In the present specification and the drawings, a plurality of components having substantially the same functional configuration may be distinguished by appending different letters to the same reference numeral.
  • For example, a plurality of components having substantially the same functional configuration are distinguished as the display unit 124a and the display unit 124b as necessary. When there is no particular need to distinguish such components, only the same reference numeral is used; for example, each of the display unit 124a and the display unit 124b is simply referred to as the display unit 124.
  • an information processing system common to the embodiments includes an eyewear 10, a server 20, and a communication network 22.
  • The eyewear 10 is an example of an information processing apparatus according to the present disclosure. The eyewear 10 can control the display of content that includes one or more virtual objects. For example, the eyewear 10 causes the display unit 124 described later to display one or more virtual objects while keeping the real objects around the user wearing the eyewear 10 (for example, the user's hand) visible to the user.
  • the content is, for example, AR content or VR (Virtual Reality) content.
  • the virtual object may be a 2D object or a 3D object.
  • The eyewear 10 may receive the content from an external device such as the server 20 via the communication network 22 described later, or the content may be stored in advance in the eyewear 10 itself.
  • The eyewear 10 may be a head-mounted device that includes the display unit 124. For example, the eyewear 10 may be AR glasses, a video see-through HMD (Head Mounted Display), or a shielded (occlusive) HMD.
  • the eyewear 10 includes, as the display unit 124, a right side display unit 124a and a left side display unit 124b described later.
  • the eyewear 10 can display the predetermined content on the right side display unit 124a and the left side display unit 124b.
  • In this case, the eyewear 10 first generates an image for the right eye and an image for the left eye based on the predetermined content, displays the image for the right eye on the right side display unit 124a, and displays the image for the left eye on the left side display unit 124b.
  • (Right side display unit 124a, left side display unit 124b) As shown in FIG. 1, the right side display unit 124a and the left side display unit 124b may each be configured as a transmissive display device.
  • the right side display unit 124a can project an image by using at least a partial area of the right-eye lens (or the goggle-type lens) included in the eyewear 10 as a projection plane.
  • the left display unit 124b can project an image by using at least a partial area of the left eye lens (or the goggle type lens) included in the eyewear 10 as a projection plane.
  • the display unit 124 may be configured as a non-transmissive display device.
  • In this case, the right side display unit 124a and the left side display unit 124b may be configured to include an LCD (Liquid Crystal Display), an OLED (Organic Light Emitting Diode), and the like.
  • the eyewear 10 has a camera, and can sequentially display on the display unit 124 an image in front of the user captured by the camera.
  • the user can view the scenery in front of the user through the video displayed on the display unit 124.
  • FIG. 2 is a view showing an example in which a virtual object 40 is displayed on the display screen 30 of the display unit 124.
  • The eyewear 10 has a depth sensor and can sense individual real objects in the real space (in the example shown in FIG. 2, the user's hand 2 and the like) using the depth sensor. The eyewear 10 then determines whether each real object overlaps the virtual object 40, that is, the presence or absence of occlusion of the virtual object 40 by each real object, based on a comparison of the result of the depth sensing by the depth sensor with the position information in the real space corresponding to the display position of the virtual object 40.
  • For example, if it is determined that the hand 2 and the virtual object 40 overlap (in other words, that part of the virtual object 40 is occluded by the hand 2), the eyewear 10 hides the portion of the virtual object 40 that corresponds to the overlapping area, as shown in FIG. 2. In the example illustrated in FIG. 2, in the virtual object 40, which is a plane, the area where the user's hand 2 overlaps is hidden.
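  • This per-object occlusion decision can be illustrated as a per-pixel depth comparison. The following is a minimal sketch and not code from the patent; the array names, image size, and depth values are assumptions made purely for illustration.

```python
import numpy as np

def occlusion_mask(real_depth, virtual_depth):
    """Return a boolean mask of pixels where the virtual object should be hidden.

    real_depth:    per-pixel depth of real objects from the depth sensor (meters).
    virtual_depth: per-pixel depth of the rendered virtual object (meters),
                   with np.inf where the virtual object is not drawn.
    """
    # The virtual object is occluded wherever a real object lies in front of it.
    return real_depth < virtual_depth

# Toy example: a "hand" at 0.4 m in the lower-left corner, a virtual plane at 0.6 m.
real_depth = np.full((4, 4), 10.0)
real_depth[2:, :2] = 0.4
virtual_depth = np.full((4, 4), 0.6)

print(occlusion_mask(real_depth, virtual_depth))  # True where the plane would be hidden by the hand
```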
  • the server 20 is a device that manages various types of content (such as AR content and VR content). Also, the server 20 can communicate with other devices via the communication network 22. For example, when receiving an acquisition request for content from the eyewear 10, the server 20 transmits the content corresponding to the acquisition request to the eyewear 10.
  • the communication network 22 is a wired or wireless transmission path of information transmitted from a device connected to the communication network 22.
  • the communication network 22 may include a telephone network, the Internet, a public network such as a satellite communication network, various LANs (Local Area Network) including Ethernet (registered trademark), a WAN (Wide Area Network), etc.
  • the communication network 22 may include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
  • FIG. 3 is a view showing an example in which the shielded area of the virtual object 40 by the user's hand 2 is shifted and displayed for some reason in the same situation as the example shown in FIG.
  • One cause of such a shift is an error related to the calibration of depth sensing by the eyewear 10.
  • The error may arise, for example, when the accuracy of the sensing by the depth sensor of the eyewear 10 (described later) is low.
  • the error may occur because an incorrect value is set as the value of the internal parameter of the depth sensor.
  • the error may occur in the case where the calibration of the display unit 124 is not properly performed.
  • the installation position of the depth sensor in the eyewear 10 and the installation position of the display unit 124 may be different. Therefore, the error may occur if calibration (correction or the like) is not appropriately performed according to the difference between the installation position of the depth sensor and the installation position of the display unit 124.
  • In addition, the viewpoint position of the user wearing the eyewear 10 differs from user to user. For example, the interocular distance is different for each user, and this individual difference in interocular distance may cause the error.
  • The inward rotation angle of the eyes may also be different for each user. For example, while the human eye tends to turn inward when looking at an object, the magnitude of this tendency may differ from user to user, and this individual difference in the inward rotation angle may cause the error.
  • The error may also occur due to slippage of the eyewear 10 (that is, when the user is not wearing the eyewear 10 properly).
  • The eyewear 10 according to each embodiment has been created in view of the circumstances described above. The eyewear 10 according to each embodiment acquires the result of depth sensing of at least one real object corresponding to the viewpoint position of the user wearing the eyewear 10, and can display, on the display unit 124, at least one correction object for correcting a parameter related to the display of at least one virtual object based on the result of the depth sensing. Therefore, the user can easily and appropriately correct the value of the parameter related to the display of the at least one virtual object.
  • FIG. 4 is a block diagram showing an example of the functional configuration of the eyewear 10.
  • the eyewear 10 includes a control unit 100, a communication unit 120, a sensor unit 122, a display unit 124, an input unit 126, and a storage unit 128.
  • Redundant descriptions of components that have already been described are omitted.
  • the sensor unit 122 may include, for example, a depth sensor (for example, a stereo camera or a time of flight sensor), an image sensor (camera), a microphone, and the like.
  • the depth sensor may be a left camera that performs depth sensing on the front left side of the user wearing the eyewear 10 and a right camera that performs depth sensing on the front right side of the user.
  • the left camera is an example of the left depth camera according to the present disclosure.
  • the right camera is an example of the right depth camera according to the present disclosure.
  • The individual sensors included in the sensor unit 122 may sense constantly, may sense periodically, or may sense only in a specific case (for example, when an instruction is given from the control unit 100).
  • the input unit 126 receives various inputs from the user wearing the eyewear 10.
  • the input unit 126 can be configured to include an input device 160 described later.
  • the input unit 126 includes one or more physical buttons.
  • (Control unit 100) The control unit 100 can be configured to include, for example, processing circuits such as a central processing unit (CPU) 150 and a graphics processing unit (GPU) described later.
  • the control unit 100 centrally controls the operation of the eyewear 10. Further, as illustrated in FIG. 4, the control unit 100 includes a sensing result acquisition unit 102, a correction unit 104, and a display control unit 106.
  • the sensing result acquisition unit 102 is an example of an acquisition unit according to the present disclosure.
  • the sensing result acquisition unit 102 acquires the result of depth sensing of one or more real objects corresponding to the viewpoint position of the user wearing the eyewear 10, for example, by reception or readout processing.
  • the sensing result acquisition unit 102 acquires the result of the depth sensing of the one or more real objects by the depth sensor included in the sensor unit 122 by reading from the sensor unit 122.
  • Alternatively, the sensing result acquisition unit 102 may acquire the result of the depth sensing by one or more external depth sensors by receiving it from the one or more depth sensors.
  • the sensing result acquisition unit 102 first causes the communication unit 120 to transmit a sensing result acquisition request to at least one of the one or more depth sensors. Then, when the communication unit 120 described later receives the result of the depth sensing from the one or more depth sensors, the sensing result acquisition unit 102 may acquire the result of the depth sensing from the communication unit 120.
  • the one or more real objects may basically be real objects located in a space corresponding to the field of view of the user.
  • However, the present disclosure is not limited to this example, and the one or more real objects may include one or more real objects located in a predetermined space outside the field of view of the user (for example, behind the user).
  • (Correction unit 104) When predetermined instruction information of the user wearing the eyewear 10 is acquired, the correction unit 104 corrects the value of the parameter related to the display of one or more virtual objects based on the instruction information. For example, the correction unit 104 corrects the value of the parameter related to the display of the one or more virtual objects based on the instruction information of the user acquired after the depth sensing result is acquired by the sensing result acquisition unit 102.
  • the parameters related to the display of the one or more virtual objects include the parameters related to the display position of the one or more virtual objects.
  • For example, the correction unit 104 corrects the parameter related to the display position of the one or more virtual objects based on the instruction information of the user with respect to one or more correction objects displayed on the display unit 124 under the control of the display control unit 106 described later.
  • the instruction information of the user may be information acquired while the result of the depth sensing is acquired and the one or more correction objects are displayed on the display unit 124.
  • the correction unit 104 may correct the parameter related to the display position of the one or more virtual objects based on the instruction information of the user. The specific contents of the user's instruction information will be described later.
  • For example, the correction unit 104 may move, in the screen coordinate system, a depth object indicating the shape of one or more real objects specified based on the result of the depth sensing acquired by the sensing result acquisition unit 102 in accordance with the instruction information of the user, and may correct the value of the parameter related to the display position of the one or more virtual objects accordingly.
  • Here, the depth object may be an object that indicates the entire shape of each of the one or more real objects, or an object that emphasizes the shape of the outer peripheral portion (such as an outline) of each of the one or more real objects.
  • the depth object may be an object that emphasizes and indicates a part of edges extracted from a captured image of the one or more real objects.
  • FIG. 5 is a diagram schematically showing an example in which the depth object 50 is moved in the screen coordinate system in order to correct the value of the parameter regarding the display position of the one or more virtual objects.
  • the depth object 50 is an object indicating the shape of the user's hand specified based on the result of depth sensing.
  • In this case, the correction unit 104 corrects the value of the parameter related to the display position of the one or more virtual objects by the movement amount of the depth object 50 indicated by the instruction information of the user.
  • The values of the parameters related to the display position of the one or more virtual objects may also be corrected separately for the left and right display units. In this case, the correction unit 104 first translates and scales the depth object 50 separately in each of the left and right screen coordinate systems based on the corresponding instruction information of the user. Then, the correction unit 104 separately corrects the values of the parameters related to the display position of the one or more virtual objects based on the result of the translation and scaling of the depth object 50.
  • More specifically, the correction unit 104 performs coordinate conversion of each vertex of the depth object 50 on the basis of Equation (1) below, and thereby corrects the values of the parameters related to the display position of the one or more virtual objects separately.
  • v' = P · V · M · v   (1)
  • v is a vertex position in the local coordinate system. That is, the result of the depth sensing may be the position of the vertex represented in the three-dimensional coordinate system with the corresponding depth sensor (right or left camera) as the origin.
  • M is a model matrix. Specifically, M may be a matrix for translating, rotating, or scaling the result of the depth sensing.
  • V is a viewing transformation. Specifically, V may be a matrix for conversion to the camera coordinate system of the virtual camera in order to display the vertex v on the right side display unit 124a or the left side display unit 124b.
  • P is a projection matrix. Specifically, P may be a matrix for normalizing three directions (vertical, horizontal, and depth) to generate a screen coordinate system.
  • v' is the vertex position (coordinates) after conversion. Specifically, v' may be the position (coordinates) of v in the coordinate system in which the three directions are normalized.
  • For example, the correction unit 104 separately corrects the values of one or more parameters in the matrix P in Equation (1) based on the instruction information of the user, thereby correcting the value of the parameter related to the display position of the one or more virtual objects.
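  • As a concrete illustration of Equation (1) and of absorbing a screen-space correction into P, the sketch below transforms a vertex with M, V, and P and then adds a user-instructed shift to the translation terms of P. This is only a schematic example with 4x4 homogeneous matrices chosen for illustration; the actual matrix layouts and parameter values used by the eyewear 10 are not specified in this document.

```python
import numpy as np

def transform_vertex(P, V, M, v):
    """Equation (1): v' = P * V * M * v, with v given in the depth sensor's local coordinates."""
    v_h = np.append(v, 1.0)            # homogeneous coordinates
    v_clip = P @ V @ M @ v_h
    return v_clip[:3] / v_clip[3]      # normalized (screen coordinate system) position

M = np.eye(4)                          # model matrix (translate/rotate/scale the sensing result)
V = np.eye(4)                          # viewing transformation to the virtual camera
P = np.eye(4)                          # stand-in for the projection matrix

v = np.array([0.1, 0.2, 1.0])          # a vertex of the depth object 50
print(transform_vertex(P, V, M, v))

# Correction in the screen coordinate system: the user's instruction shifts the depth
# object by (dx, dy), which is absorbed here into the corresponding terms of P.
dx, dy = 0.03, -0.01
P_corrected = P.copy()
P_corrected[0, 3] += dx
P_corrected[1, 3] += dy
print(transform_vertex(P_corrected, V, M, v))
```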
  • While correction is being performed for one of the right side display unit 124a and the left side display unit 124b, the display control unit 106 described later may set the display color of the entire display area of the display unit 124 on the other side to a predetermined color (for example, black), or may display a predetermined image on the entire display area of the display unit 124 on the other side.
  • Alternatively, with regard to the display unit 124 on the other side, the display control unit 106 may control one or more light control elements of that display unit 124 such that the entire display area appears black.
  • Alternatively, when each of the two display units 124 is provided with a shutter unit (a physical blindfold) that can be opened and closed in a predetermined direction (such as the vertical direction), the shutter units may be controlled so that, while the display control unit 106 is performing correction for one of the two display units 124, the shutter unit of that display unit 124 is open and the shutter unit of the display unit 124 on the other side is closed.
  • The instruction information of the user may include a correction start instruction for starting correction of a parameter related to the display of the one or more virtual objects, a correction end instruction for ending the correction of the parameter, and/or an instruction of the correction amount of the parameter when the correction is performed by the correction unit 104. In other words, the instruction information of the user may include one or more of these three types of instructions.
  • For example, the correction start instruction and/or the correction end instruction may be the detection of a predetermined operation of the user on the input unit 126 (for example, on a predetermined physical button). In this case, when the predetermined operation is detected, the correction unit 104 acquires the detection result as the correction start instruction or the correction end instruction.
  • Alternatively, the correction start instruction and/or the correction end instruction may be the result of speech recognition of a predetermined utterance of the user. For example, when it is recognized that the user has issued a predetermined voice command for starting the correction (for example, "Start calibration."), the correction unit 104 acquires the recognition result as the correction start instruction. Likewise, when it is recognized that the user has issued a predetermined voice command for ending the correction (for example, "End calibration."), the correction unit 104 may acquire the recognition result as the correction end instruction. According to this method, the user can instruct the start and end of correction hands-free.
  • (Movement of a hand) Alternatively, the correction start instruction and/or the correction end instruction may be a recognition result of the user's hand movement. For example, when a predetermined gesture of the user is recognized, the correction unit 104 acquires the recognition result as the correction start instruction or the correction end instruction.
  • The predetermined gesture may be, for example, closing the hand into a fist ("rock") and then opening it into a flat hand ("paper"), performed twice in total.
  • the predetermined gesture may be to align the palms of both hands, or may be to align the fingertips of predetermined fingers of both hands.
  • Such predetermined gestures are significantly different from hand movements for other operations, and thus have the advantage that the eyewear 10 is easy to recognize.
  • Furthermore, when the predetermined gesture is the correction start instruction, extending or withdrawing the hand that performed the gesture in a predetermined direction immediately afterwards may be further defined as an instruction of the correction amount. Thereby, the user can indicate the correction amount with a highly continuous operation.
  • Alternatively, the correction start instruction and/or the correction end instruction may be an instruction acquired based on the line-of-sight information of the user. For example, when a predetermined change in the user's line of sight is recognized, the correction unit 104 may acquire the recognition result as the correction start instruction or the correction end instruction.
  • The line-of-sight information of the user may be information acquired based on, for example, a captured image of the user's eyeball, or may be information acquired based on the detection result of the posture (orientation or the like) of the eyewear 10. In the latter case, the change in the user's line of sight may be identified indirectly based on the detection result.
  • Alternatively, when the depth sensor is configured as a stereo camera, the correction start instruction and/or the correction end instruction may be holding a hand in front of at least one of the stereo cameras (that is, hiding the camera with the hand). It is usually easy to recognize whether a hand is held in front of the camera, so according to this method, misrecognition of the start and end instructions of correction can be prevented with high accuracy.
  • the correction start instruction and / or the correction end instruction may be a combination of any two or more of the plurality of types of operations described above.
  • For example, these instructions may be closing the hand into a fist and then opening it into a flat hand while uttering a predetermined voice command.
  • these instructions may be to align the palms of both hands while uttering a predetermined voice command.
  • these instructions may be that the user closes one eye while uttering a predetermined voice command.
  • these instructions may be that the user stretches at least one hand ahead of the user while closing one's eyes.
  • these instructions may be to utter a predetermined voice command while holding a hand in front of one of the stereo cameras.
  • these instructions may be to extend the other hand forward of the user while holding one hand in front of one of the stereo cameras.
  • According to such combinations, the eyewear 10 can distinguish these instructions from other operations (normal operations) with higher accuracy.
  • In particular, by using a combination of holding a hand in front of the camera and another operation as the instruction, misrecognition can be prevented with even higher accuracy.
  • FIG. 6A is a diagram showing an example in which the user wearing the eyewear 10 performs a normal operation (that is, an operation other than these instructions) using the left hand 2a.
  • Subsequently, the user shifts the right hand 2b away from the front of the right camera 122a and keeps holding it only in front of the right side display unit 124a. Thereafter, as shown in FIG. 6D, the user holds the right hand 2b again in front of both the right camera 122a and the right side display unit 124a as the correction end instruction.
  • In this case, the correction unit 104 first calculates the difference between the detection result of the position of the left hand 2a at the timing shown in FIG. 6B and the detection result of the position of the left hand 2a at the timing of the correction end instruction. Then, the correction unit 104 may determine the correction amount of the parameter related to the display position of the one or more virtual objects according to the difference.
  • the display control unit 106 controls the display of the display unit 124.
  • the display control unit 106 displays, on the display unit 124, one or more correction objects for correcting a parameter related to the display position of one or more virtual objects based on the result of depth sensing acquired by the sensing result acquisition unit 102.
  • the one or more correction objects may be displayed by the number of different correction directions with respect to the display position of the one or more virtual objects.
  • Each of the one or more correction objects may be an object for the user to designate the correction amount of the display position of the one or more virtual objects in the correction direction corresponding to that correction object.
  • the display control unit 106 can cause the display unit 124 to display a depth object based on the result of depth sensing acquired by the sensing result acquisition unit 102 in association with the one or more correction objects. For example, the display control unit 106 causes the display unit 124 to simultaneously display the depth object and the one or more correction objects.
  • FIG. 7A is a diagram showing an example in which the depth object based on the result of the depth sensing of the user's right hand and the four correction objects are displayed on the display unit 124 simultaneously.
  • As shown in FIG. 7A, the display control unit 106 causes the display screen 30 to display the depth object 52, which indicates the contour of the right hand 2b specified based on the result of the depth sensing of the right hand 2b, together with four correction objects 54 with which the user instructs the correction amount of the display position of the depth object 52.
  • More specifically, the display control unit 106 causes the depth object 52 to be displayed at the display position in the display screen 30 corresponding to the result of the depth sensing of the contour portion of the right hand 2b. Furthermore, for each of the four directions (up, down, left, and right) in the display screen 30, the display control unit 106 displays, in the vicinity of the depth object 52, one correction object 54 with which the user designates the correction amount of the display position of the depth object 52 in that direction.
  • When an operation on any of the correction objects 54 is detected, the correction unit 104 may correct the parameter related to the display position of the virtual object according to the operated correction object 54 and the content of the operation. For example, when it is detected that a predetermined physical button (included in the input unit 126) has been pressed for a predetermined time or longer, the correction unit 104 acquires the detection result as a correction start instruction and switches the current mode from the normal mode to the correction mode. Subsequently, as shown in FIG. 7A, the display control unit 106 causes the display screen 30 to simultaneously display the depth object 52 and the four correction objects 54.
  • Subsequently, the user performs a predetermined operation (for example, tapping with the left hand 2a) on one or more of the four correction objects 54 as many times as necessary so that the display position of the depth object 52 and the position of the outline of the right hand 2b come to coincide with each other. For example, each time one of the four correction objects 54 is tapped, the offset amount of the display position of the depth object 52 in the correction direction corresponding to that correction object 54 is increased by a predetermined value.
  • In this way, the display position of the depth object 52 and the position of the contour of the right hand 2b can be made to substantially coincide. Thereafter, the user presses and holds the predetermined physical button. If it is detected that the predetermined physical button has been pressed for the predetermined time or longer, the correction unit 104 acquires the detection result as a correction end instruction and switches the current mode from the correction mode back to the normal mode. That is, the correction mode ends.
  • Note that the depth sensor included in the sensor unit 122 can perform depth sensing in real time. Each time a new result of the depth sensing is obtained, the display control unit 106 may sequentially update the display position of the depth object 52 based on the newly obtained result and the latest offset amount of the display position of the depth object 52 in each direction (see the sketch below). Thereby, during the correction mode, the user can check in detail whether the offset amount in each direction is appropriate by freely moving the right hand 2b.
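  • A minimal sketch of the tap-driven offset accumulation described above. The step size, direction names, and data structures are assumptions made for illustration and are not taken from the patent.

```python
STEP = 2.0  # assumed increment (pixels) per tap on a correction object 54

DIRECTIONS = {
    "up":    (0.0, -STEP),
    "down":  (0.0,  STEP),
    "left":  (-STEP, 0.0),
    "right": ( STEP, 0.0),
}

offset = [0.0, 0.0]  # current offset of the display position of the depth object 52

def on_correction_object_tapped(direction):
    """Each tap adds a fixed increment to the offset in the tapped correction direction."""
    dx, dy = DIRECTIONS[direction]
    offset[0] += dx
    offset[1] += dy

def depth_object_screen_position(sensed_position):
    """Latest depth sensing result + current offset = displayed position of the depth object."""
    return (sensed_position[0] + offset[0], sensed_position[1] + offset[1])

# The user taps "right" three times and "up" once.
for d in ["right", "right", "right", "up"]:
    on_correction_object_tapped(d)

print(offset)                                       # [6.0, -2.0]
print(depth_object_screen_position((120.0, 80.0)))  # re-evaluated for every new depth frame
```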
  • As a modified example, the display position of the depth object 52 may be changed based on the recognition result of the user's speech. For example, when it is recognized that the user has uttered "right" once, the display control unit 106 may control the display so that the depth object 52 moves to the right in the display screen 30 at a first speed. In this case, when it is recognized that the user has uttered "right" once more, the display control unit 106 may change the moving speed of the depth object 52 to a second speed faster than the first speed. In addition, when it is recognized that the user has uttered "stop", the display control unit 106 may stop the movement of the depth object 52.
  • the one or more correction objects may be objects for correcting parameter values in a binary search.
  • For example, the one or more correction objects may include an object (hereinafter also referred to as a first object) obtained by shifting the display position of the depth object on the display unit 124 toward one end side by a predetermined distance or more with respect to a predetermined direction (for example, the horizontal direction of the display unit 124), and an object (hereinafter also referred to as a second object) obtained by shifting the display position of the depth object toward the opposite end side by the predetermined distance or more with respect to the predetermined direction.
  • the display control unit 106 first causes the display unit 124 to simultaneously display the first object and the second object.
  • Then, the display control unit 106 makes the user select which of the first object and the second object deviates less from the target position (that is, for which of the two the shift of the position of the area where the corresponding virtual object is occluded by the corresponding real object (for example, a hand) is smaller). After that, the display control unit 106 repeats this process several times while changing, for example, the distance between the first object and the second object and the direction in which they are shifted from the initial display position each time.
  • According to the binary-search correction method described above, the parameter values related to the display position of the one or more virtual objects can be corrected more efficiently. For example, the number of operations the user must perform until the parameter value is corrected to an appropriate value can be expected to decrease, which may reduce the load on the user.
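  • The binary-search style narrowing can be sketched as follows. The candidate offset range, the number of iterations, and the user-choice callback are all assumptions made for illustration.

```python
def binary_search_offset(ask_user, lo=-20.0, hi=20.0, iterations=5):
    """Narrow down a one-dimensional display offset by repeatedly showing two shifted depth objects.

    ask_user(first_offset, second_offset) must return "first" or "second", i.e. which of the
    two displayed objects deviates less from the target (real object) position.
    """
    for _ in range(iterations):
        first, second = lo, hi                 # offsets of the first and second objects
        if ask_user(first, second) == "first":
            hi = (lo + hi) / 2.0               # keep the half around the first object
        else:
            lo = (lo + hi) / 2.0               # keep the half around the second object
    return (lo + hi) / 2.0

# Simulated user whose true misalignment is +7.5 px: they always pick the closer object.
TRUE_OFFSET = 7.5
choose = lambda a, b: "first" if abs(a - TRUE_OFFSET) < abs(b - TRUE_OFFSET) else "second"

print(binary_search_offset(choose))  # converges toward 7.5
```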
  • the communication unit 120 may be configured to include a communication device 166 described later.
  • the communication unit 120 transmits and receives information to and from another device by, for example, wireless communication and / or wired communication.
  • the communication unit 120 can receive various contents from the server 20.
  • (Storage unit 128) The storage unit 128 may be configured to include a storage device 164 described later.
  • the storage unit 128 stores various data such as one or more virtual objects and one or more contents, and various software.
  • FIG. 8 is a flowchart showing the flow of processing according to the first embodiment.
  • As shown in FIG. 8, in the normal mode (that is, a mode other than the correction mode), the sensing result acquisition unit 102 of the eyewear 10 first acquires the result of depth sensing by the depth sensor (included in the sensor unit 122), and predetermined recognition processing is then performed on the result of the depth sensing (S101).
  • Subsequently, the display control unit 106 causes the display unit 124 to display, for example, one or more virtual objects included in the currently activated content based on the result of the recognition processing.
  • At this time, the display control unit 106 applies, to the recognition result, the offset amount related to the correction of the value of the parameter related to the display of the one or more virtual objects (hereinafter also referred to as the correction offset amount).
  • Subsequently, the correction unit 104 determines whether a correction start instruction (hereinafter also referred to as a calibration start trigger) by the user wearing the eyewear 10 has been detected (S103). While the calibration start trigger is not detected (S103: No), the eyewear 10 repeats the processing from S101.
  • If the calibration start trigger is detected (S103: Yes), the correction unit 104 switches the current mode from the normal mode to the correction mode. Subsequently, the display control unit 106 cancels the display of the one or more virtual objects. Furthermore, the display control unit 106 may cause the display unit 124 to display an indication that the mode has been switched to the correction mode. After that, the correction unit 104 sets the current correction offset amount stored in the storage unit 128 as a new offset value (that is, initializes the new offset value) (S105).
  • control unit 100 acquires the latest depth sensing result by the depth sensor, and performs the predetermined recognition processing on the depth sensing result.
  • the display control unit 106 generates the depth object corresponding to the result of the depth sensing by applying the new offset value to the recognition result (that is, the recognition result at the current point in time). Then, the display control unit 106 causes the display unit 124 to display one or more correction objects and the depth object together (S107).
  • Subsequently, the correction unit 104 determines whether a correction end instruction (hereinafter also referred to as a calibration end trigger) by the user has been detected (S109). If the calibration end trigger is detected (S109: Yes), the correction unit 104 changes (updates) the current correction offset amount stored in the storage unit 128 to the new offset value (set in S105 or S115); in other words, the stored content of the storage unit 128 is updated (S111). After that, the eyewear 10 repeats the processing from S101.
  • If the calibration end trigger is not detected (S109: No), the correction unit 104 determines whether an instruction of the user regarding a change of the new offset value (as instruction information of the user) has been detected (S113). When the instruction of the user is not detected (S113: No), the eyewear 10 repeats the processing from S107.
  • When the instruction of the user is detected (S113: Yes), the correction unit 104 changes (updates) the new offset value by a value corresponding to the detection result (S115). After that, the eyewear 10 repeats the processing from S107.
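  • The flow of S101 to S115 can be summarized as the simple state machine sketched below. The event names and data representation are placeholders introduced for illustration, not names used by the patent.

```python
def run_correction_flow(events):
    """Schematic replay of the S101-S115 flow over a scripted list of per-frame user events.

    events: list of items, each one of "start", "end", ("adjust", delta), or None.
    Returns the correction offset amount stored in the storage unit after the script.
    """
    stored_offset = 0.0          # correction offset amount kept in the storage unit 128
    new_offset = 0.0
    mode = "normal"

    for ev in events:
        if mode == "normal":
            # S101: sense, recognize, and display the virtual objects with stored_offset applied.
            if ev == "start":                    # S103: calibration start trigger detected
                mode = "correction"
                new_offset = stored_offset       # S105: initialize the working offset value
        else:
            # S107: display the depth object and correction objects using new_offset.
            if ev == "end":                      # S109: calibration end trigger detected
                stored_offset = new_offset       # S111: update the storage unit 128
                mode = "normal"
            elif isinstance(ev, tuple) and ev[0] == "adjust":
                new_offset += ev[1]              # S113/S115: apply the user's adjustment

    return stored_offset

# The user starts correction, nudges the offset three times, then ends correction.
script = ["start", ("adjust", 2.0), ("adjust", 2.0), ("adjust", -1.0), "end", None]
print(run_correction_flow(script))  # -> 3.0
```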
  • As described above, the eyewear 10 according to the first embodiment acquires the result of depth sensing of at least one real object corresponding to the viewpoint position of the user wearing the eyewear 10, and causes the display unit 124 to display at least one correction object for correcting a parameter related to the display of at least one virtual object based on the result of the depth sensing of the at least one real object. Therefore, the user can easily and appropriately correct the value of the parameter related to the display of the at least one virtual object.
  • the first embodiment is not limited to the example described above.
  • Hereinafter, application examples of the first embodiment will be described as Application Example 1 to Application Example 4. Note that the components included in the eyewear 10 according to each application example are the same as those in the example shown in FIG. 4. In the following, only components having functions different from those described above will be described, and descriptions of the same content will be omitted.
  • In Application Example 1, the result of depth sensing is not updated during the correction mode. Furthermore, in Application Example 1, the values of the parameters related to the display of the one or more virtual objects can be updated according to the difference in the result of the depth sensing of the user's hand before and after switching between the normal mode and the correction mode.
  • For example, the correction unit 104 determines the correction amount of the parameter related to the display position of the one or more virtual objects (that is, the above-mentioned correction offset amount) according to the difference between the first position information of the user's hand when the correction start instruction of the user is acquired and the second position information of the user's hand when the correction end instruction is acquired.
  • Here, the first position information of the user's hand is position information corresponding to the result of the depth sensing of the user's hand by the depth sensor (included in the sensor unit 122) when the correction start instruction of the user is acquired. Likewise, the second position information of the user's hand is position information corresponding to the result of the depth sensing of the user's hand by the depth sensor when the correction end instruction of the user is acquired.
  • Note that, instead of using the relationship of the position information of the user's hand at the time of acquisition of the correction start instruction and at the time of acquisition of the correction end instruction, the correction unit 104 may correct the parameter related to the display position of the one or more virtual objects using the relationship of the position information of a part of the user's hand (for example, the tip of a predetermined finger) at these timings.
  • the above-mentioned function of the correction unit 104 will be described in more detail with reference to FIGS. 9A to 9C.
  • First, the user performs a correction start instruction (for example, an utterance of a predetermined voice command such as "Start calibration."). When the correction unit 104 acquires the correction start instruction, it acquires position information 60a, corresponding to the result of the depth sensing of the user's hand 2 (for example, a fingertip) by the depth sensor, as the first position information of the user's hand.
  • the correction unit 104 switches the current mode from the normal mode to the correction mode. As described above, in the application example 1, the result of depth sensing is not updated during the correction mode.
  • Subsequently, the user moves the hand 2 so that the shielding area of the virtual object (not shown) by the user's hand 2 substantially matches the position of the target, and then performs a correction end instruction (for example, an utterance of a predetermined voice command such as "End calibration.").
  • When the correction end instruction is acquired, the correction unit 104 acquires position information 60b, corresponding to the result of the depth sensing of the user's hand 2 (for example, a fingertip) by the depth sensor at that time, as the second position information of the user's hand.
  • Then, the correction unit 104 calculates the difference ("d" in the figure) between the position information 60a and the position information 60b, and determines the above-mentioned correction offset amount according to the difference. The correction unit 104 then corrects the value of the parameter related to the display position of the one or more virtual objects by the determined offset amount.
  • As a result, the position of the depth object 50 of the hand substantially matches the position of the target, and therefore the shielded area of the virtual object by the user's hand 2 also substantially matches the position of the target.
  • the correction unit 104 switches the current mode from the correction mode to the normal mode.
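  • A minimal numeric sketch of the Application Example 1 computation: the correction offset is the difference d between the fingertip position sensed at the correction start instruction (60a) and at the correction end instruction (60b). The coordinate values and units below are illustrative assumptions.

```python
import numpy as np

def correction_offset_from_hand(pos_at_start, pos_at_end):
    """Offset d = (position 60b at the end instruction) - (position 60a at the start instruction)."""
    return np.asarray(pos_at_end) - np.asarray(pos_at_start)

# Fingertip position (in the depth sensor coordinate system, meters) at "Start calibration."
pos_60a = [0.10, 0.05, 0.40]
# Fingertip position after the user moved the hand so the occluded area matches the target.
pos_60b = [0.13, 0.03, 0.40]

d = correction_offset_from_hand(pos_60a, pos_60b)
print(d)  # the correction unit 104 applies this amount to the display-position parameter
```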
  • the display control unit 106 may cause the display unit 124 to display only the outer peripheral portion (for example, an outline) of the hand at the time of obtaining the correction start instruction, for example.
  • the display control unit 106 may cause the display unit 124 to display an image in which the edge of the captured image of the hand at the time of acquisition of the correction start instruction is emphasized. This may reduce the load on the user as the user has more clues to be aligned (e.g., outlines of the hand, eyebrows on the hand, etc.).
  • the display control unit 106 may control the display so that the shielding area of the virtual object by the user's hand is blurred, for example, at the time of obtaining the correction start instruction. This can inform the user that they do not have to be exactly the same.
  • When the parameter related to the display position of the one or more virtual objects is corrected using the positional relationship of a part of the user's hand (for example, the fingertip of a predetermined finger), the display control unit 106 may cause the display unit 124 to highlight that part.
  • the display control unit 106 may control the display by the display unit 124 so that a space in which the accuracy of the sensing by the depth sensor is high is noticeable. For example, the display control unit 106 may display the display area corresponding to the space in a predetermined display color.
  • In Application Example 2, while one or more depth objects corresponding to the result of the depth sensing acquired by the sensing result acquisition unit 102 are displayed, the correction unit 104 determines the correction amount of the parameter related to the display position of the one or more virtual objects (that is, the correction offset amount) according to the difference between the first position information of the user when the correction start instruction of the user is acquired and the second position information of the user when the correction end instruction of the user is acquired.
  • Here, the first position information of the user is the position information of the eyewear 10 when the correction start instruction of the user is acquired, which may be identified based on the sensing by the depth sensor (included in the sensor unit 122). Likewise, the second position information of the user is the position information of the eyewear 10 when the correction end instruction of the user is acquired. For example, the second position information of the user may be identified based on the first position information and the amount of change in the position information of the eyewear 10 sensed between the time when the correction start instruction is acquired and the time when the correction end instruction is acquired.
  • As shown in FIG. 10A, it is first assumed that the user wearing the eyewear 10 issues a correction start instruction (for example, an utterance of a predetermined voice command such as "Start calibration."). Thereafter, the correction unit 104 switches the current mode from the normal mode to the correction mode.
  • Then, the display control unit 106 causes the display screen 30 to display a depth object 52 indicating the contour of each real object corresponding to the result of the depth sensing by the depth sensor (included in the sensor unit 122) at the time when the correction start instruction was acquired. In other words, the display control unit 106 causes the display unit 124 to display the result of the depth sensing of the entire environment by the depth sensor with its edges emphasized (as the depth object 52).
  • the result of depth sensing is not updated during the correction mode.
  • Subsequently, the user moves around the room or moves his or her head such that the depth object 52 and the position of each real object in the display screen 30 substantially coincide with each other.
  • the user performs a correction end instruction (for example, an utterance of a predetermined voice command (“End calibration.” Or the like)).
  • When the correction end instruction is acquired, the correction unit 104 calculates the difference between the position information of the user at the time of acquisition of the correction start instruction and the position information of the user at the time of acquisition of the correction end instruction, and determines the above-mentioned correction offset amount according to the difference.
  • the correction unit 104 corrects the value of the parameter related to the display position of the one or more virtual objects by the determined correction offset amount.
  • the correction unit 104 switches the current mode from the correction mode to the normal mode.
  • As described above, in Application Example 2, the parameter related to the display position of the one or more virtual objects is corrected based on the detected change in the user's position information or the user's head movement. This makes it possible to correct the displacement of the shielding over the entire field of view of the user, which is particularly effective when occlusion of a virtual object by an object other than the user's hand occurs.
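  • Application Example 2 follows the same pattern as Application Example 1, but the difference is taken over the position of the eyewear 10 itself rather than the hand, and the second position can be derived from the sensed change. A small sketch under those assumptions (the coordinate values are illustrative only):

```python
import numpy as np

def second_position(first_position, sensed_change):
    """Second position information = first position + change sensed between the two instructions."""
    return np.asarray(first_position) + np.asarray(sensed_change)

first = np.array([0.00, 1.60, 0.00])    # eyewear 10 position at the correction start instruction
change = np.array([0.04, 0.00, -0.02])  # change sensed until the correction end instruction
second = second_position(first, change)

offset = second - first                 # correction offset amount determined from the difference
print(second, offset)
```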
  • In Application Example 3, the correction unit 104 may separately correct the parameters related to the display of the one or more virtual objects for a first real object existing within a predetermined distance from the eyewear 10 and for a second real object located farther from the eyewear 10 than the predetermined distance.
  • For example, the correction unit 104 separately corrects, based on the user's instruction information described above, the parameters related to the display of the one or more virtual objects based on the result of the depth sensing of the first real object and the parameters related to the display of the one or more virtual objects based on the result of the depth sensing of the second real object. Thereby, the shift amount of the occlusion area of the virtual object by each real object can be corrected appropriately according to the distance of each real object from the eyewear 10.
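  • The sketch below selects a correction offset by thresholding a real object's distance from the eyewear 10. The threshold value and offsets are illustrative assumptions; the patent only states that near and far real objects can be corrected separately.

```python
NEAR_THRESHOLD_M = 1.0   # assumed "predetermined distance" from the eyewear 10

# Separately corrected offsets (screen-space pixels) for near and far real objects.
OFFSETS = {"near": (4.0, -2.0), "far": (1.0, 0.0)}

def offset_for_real_object(distance_m):
    """Pick the correction offset according to the real object's distance from the eyewear."""
    return OFFSETS["near"] if distance_m <= NEAR_THRESHOLD_M else OFFSETS["far"]

print(offset_for_real_object(0.4))   # e.g. the user's hand -> near offset
print(offset_for_real_object(2.5))   # e.g. a wall          -> far offset
```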
  • In Application Example 4, the control unit 100 generates mesh data 70 indicating the three-dimensional shape of a certain real object (for example, the user's hand) based on the result of sensing of the real object by the depth sensor (included in the sensor unit 122). Then, the display control unit 106 causes the display unit 124 to display the mesh data 70 during the correction mode.
  • the mesh data 70 is an example of a depth object according to the present disclosure.
  • When the user's instruction information for moving the mesh data 70 is acquired during the correction mode, the correction unit 104 moves the mesh data 70 in the camera coordinate system, for example forward, backward, left, or right, based on the instruction information, as shown in FIG. 11. Then, the correction unit 104 corrects the value of the parameter related to the display position of the one or more virtual objects according to the movement amount of the mesh data 70.
  • For example, the correction unit 104 changes the values of one or more parameters in the matrix M in Equation (1) above based on the instruction information of the user, thereby changing the value of the parameter related to the display position of the one or more virtual objects.
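  • In contrast to the screen-coordinate correction via P, this camera-coordinate correction acts on the model matrix M of Equation (1). A minimal sketch, assuming a 4x4 homogeneous M and a camera-coordinate translation derived from the user's instruction:

```python
import numpy as np

def translate_model_matrix(M, move_xyz):
    """Apply the user-instructed camera-coordinate movement of the mesh data 70 to M."""
    T = np.eye(4)
    T[:3, 3] = move_xyz          # forward/backward, left/right, up/down movement
    return T @ M                 # updated model matrix used in v' = P * V * M * v

M = np.eye(4)
M_corrected = translate_model_matrix(M, [0.02, 0.0, -0.05])
print(M_corrected)
```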
  • Second embodiment. The first embodiment has been described above. As described above, the eyewear 10 according to the first embodiment can correct the value of the parameter regarding the display position of one or more virtual objects.
  • the eyewear 10 may measure the depth values of one or more real objects using the result of sensing by the left camera of the stereo camera (hereinafter also referred to as the left image), the result of sensing by the right camera of the stereo camera (hereinafter also referred to as the right image), and a predetermined algorithm.
  • the predetermined algorithm includes estimating which pixel in the right image corresponds to a pixel in the left image.
  • several hundreds of types of depth parameters may be used in the predetermined algorithm.
  • the plurality of types of depth parameters can define, for example, a search range, a threshold, and the like in the predetermined algorithm.
  • the plurality of types of depth parameters are examples of the “parameter related to the display of one or more virtual objects” according to the present disclosure.
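  • in other words, the predetermined algorithm is a stereo-matching step: for each pixel of the left image, the corresponding pixel of the right image is searched within some range, and the resulting disparity gives the depth. The brute-force sketch below uses a hypothetical search_range parameter and an absolute-difference cost only; it is not the eyewear's actual algorithm.

```python
import numpy as np

def disparity_for_pixel(left_row, right_row, x, search_range):
    """Return the disparity whose absolute-difference cost is lowest (sketch only)."""
    best_d, best_cost = 0, float("inf")
    for d in range(search_range + 1):
        if x - d < 0:
            break
        cost = abs(int(left_row[x]) - int(right_row[x - d]))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    return float("inf") if disparity == 0 else focal_length_px * baseline_m / disparity

left = np.array([10, 20, 200, 40, 50], dtype=np.uint8)
right = np.array([10, 200, 30, 40, 50], dtype=np.uint8)  # the bright pixel shifted by one
d = disparity_for_pixel(left, right, x=2, search_range=3)
print(d, depth_from_disparity(d, focal_length_px=600.0, baseline_m=0.06))
```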
  • the plurality of types of depth parameters may include a plurality of types of parameters regarding calculation of a matching score between the result of depth sensing by the left depth camera and the result of depth sensing by the right depth camera.
  • the matching score may be calculated using a plurality of types of calculation methods.
  • the plurality of types of parameters related to calculation of the matching score may include a first parameter related to the application ratio of each of the plurality of types of calculation methods, and a second parameter related to the threshold value of the matching score.
  • the first parameter is a matching score calculation ratio parameter
  • the second parameter is a disparity penalty parameter.
  • the matching score calculation ratio parameter relates to the blending ratio between a score based on the sum of absolute differences and a score based on the Hamming distance when the matching score is calculated.
  • the disparity penalty parameter is a threshold applied to the matching score. A sketch of how these two parameters could enter the score calculation follows.
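  • as a concrete illustration, and only as an assumption about the exact formula (the patent does not give one), the matching score could blend an absolute-difference term and a Hamming-distance term weighted by the ratio parameter, with the disparity penalty parameter acting as an acceptance threshold.

```python
import numpy as np

def matching_score(left_patch, right_patch, left_census, right_census, ratio):
    """Blend an absolute-difference score with a Hamming-distance score.

    ratio: matching score calculation ratio parameter in [0, 1]
           (assumed here to weight the absolute-difference term).
    """
    sad = np.abs(left_patch.astype(int) - right_patch.astype(int)).mean() / 255.0
    hamming = np.count_nonzero(left_census != right_census) / left_census.size
    return ratio * sad + (1.0 - ratio) * hamming

def accept_match(score, disparity_penalty):
    """The disparity penalty parameter is used here as a score threshold (assumption)."""
    return score <= disparity_penalty

rng = np.random.default_rng(0)
left = rng.integers(0, 256, (5, 5))
right = left + rng.integers(-3, 4, (5, 5))  # slightly perturbed patch
left_bits = rng.integers(0, 2, 24)          # e.g. census-transformed bits
right_bits = left_bits.copy()
score = matching_score(left, right, left_bits, right_bits, ratio=0.6)
print(score, accept_match(score, disparity_penalty=0.1))
```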
  • depending on the values set for the plurality of types of depth parameters, for example, the level of accuracy of the sensing result and the kinds of conditions under which sensing performs well or poorly may change.
  • in addition, the combination of depth parameter values that allows the three-dimensional shape of the hand to be recognized accurately may differ depending on various environmental conditions (for example, the real object located behind the hand, the brightness of the environment, the color of the hand, and the like).
  • FIG. 12 is a diagram (Table 80) illustrating classification examples of types of cases in which the accuracy of the depth sensing result is low. As shown in FIG. 12, there may be multiple types of cases where the accuracy of the depth sensing result is low.
  • the user using the eyewear 10 can easily set desired values for each of the plurality of types of depth parameters.
  • FIG. 13 is a block diagram showing an example of the functional configuration of the eyewear 10 according to the second embodiment.
  • the eyewear 10 according to the second embodiment further includes a parameter set DB 130 in the storage unit 128 as compared to the first embodiment shown in FIG. 4.
  • the parameter set DB 130 can store in advance a combination of a plurality of different types of values of the plurality of types of depth parameters (hereinafter also referred to as “a plurality of parameter sets”).
  • the plurality of types of depth parameters include, for example, a matching score calculation ratio parameter and a disparity penalty parameter. Furthermore, for each parameter set, the ratio of each of these two parameters may be different.
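  • a parameter set DB of this kind could be as simple as a keyed collection of depth-parameter combinations; the two parameter names below follow the text, while the keys and values are invented purely for illustration.

```python
# Hypothetical contents of the parameter set DB 130: each entry fixes one combination
# of depth-parameter values (values are illustrative only).
PARAMETER_SET_DB = {
    "set_82a": {"matching_score_calculation_ratio": 0.2, "disparity_penalty": 0.8},
    "set_82b": {"matching_score_calculation_ratio": 0.8, "disparity_penalty": 0.2},
    "balanced": {"matching_score_calculation_ratio": 0.5, "disparity_penalty": 0.5},
}

def load_parameter_set(name):
    """Return a copy so callers can tweak values without mutating the DB."""
    return dict(PARAMETER_SET_DB[name])

print(load_parameter_set("set_82a"))
```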
  • FIG. 14 is a diagram showing the relationship between the ratio of each of the matching score calculation ratio parameter and the disparity penalty parameter, and the type of the case in which the accuracy of the depth sensing result is low as shown in FIG. 12.
  • for example, depending on the ratio between these two parameters, the occurrence frequencies of "protrusion" (the occlusion region extending beyond the real object) and "holes" (missing regions inside the occlusion region) may change.
  • FIG. 14 shows an example of two types of parameter sets 82.
  • in the parameter set 82a, the value of the matching score calculation ratio parameter is small and the value of the disparity penalty parameter is large. The parameter set 82a is therefore desirable for a user who is more bothered by "protrusion" than by "holes".
  • in the parameter set 82b, the value of the matching score calculation ratio parameter is large and the value of the disparity penalty parameter is small. The parameter set 82b is therefore desirable for a user who is more bothered by "holes" than by "protrusion".
  • for each of the plurality of types of parameter sets stored in the parameter set DB 130, the display control unit 106 causes the display unit 124 to display a correction object indicating the result of applying that parameter set to the left image and the right image sensed by the stereo camera (included in the sensor unit 122).
  • each of the plurality of types of parameter sets may be uniquely associated with each of one or more correction objects.
  • for each of the one or more correction objects, the display control unit 106 first combines the result of depth sensing of the hand of the user wearing the eyewear 10 obtained when the associated parameter set is used with a captured image of the user's hand to generate a moving image. Then, the display control unit 106 causes the display unit 124 to display each generated moving image; one plausible way to build such a preview frame is sketched below.
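  • a minimal sketch of composing one preview frame, assuming the depth sensing result has already been reduced to an occlusion mask and using an alpha blend that is purely an illustrative choice.

```python
import numpy as np

def compose_preview_frame(hand_frame, occlusion_mask, tint=(0, 255, 0), alpha=0.5):
    """Overlay an occlusion mask (H x W bool) produced with one parameter set
    onto a captured hand frame (H x W x 3 uint8)."""
    out = hand_frame.astype(float).copy()
    tint = np.asarray(tint, dtype=float)
    out[occlusion_mask] = (1.0 - alpha) * out[occlusion_mask] + alpha * tint
    return out.astype(np.uint8)

frame = np.full((4, 4, 3), 128, dtype=np.uint8)   # stand-in for a captured hand image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                             # region recognized as the hand
print(compose_preview_frame(frame, mask)[1, 1])   # -> [ 64 191  64]
```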
  • FIG. 15A is a diagram showing a display example of a moving image (moving image 90) corresponding to each parameter set.
  • FIG. 15B is an enlarged view of the moving image 90c shown in FIG. 15A.
  • the display control unit 106 causes the display screen 30 to simultaneously display a predetermined number of moving images 90, such as six.
  • the parameter sets associated with each of the predetermined number of animations 90 are different from one another.
  • in each moving image 90, a captured image 900 of the hand and an image of the occlusion of a virtual object by the hand are combined.
  • noise 902 corresponding to the result of depth sensing obtained when the parameter set corresponding to the moving image 90 is used appears in the occlusion image.
  • the display control unit 106 causes the display screen 30 to simultaneously display a predetermined number of moving images 90 out of all the generated moving images 90 (as correction objects).
  • for example, the display control unit 106 first extracts a predetermined number of moving images 90 from all the moving images 90 and causes the display screen 30 to display them simultaneously. Then, whenever one of the displayed moving images 90 is selected by the user, the display control unit 106 causes the display screen 30 to simultaneously display another predetermined number of moving images 90 that have not yet been displayed. The display control unit 106 can repeat this process until no undisplayed moving image 90 remains.
  • the display control unit 106 may cause the display unit 124 to display a predetermined number of moving images 90 different from one another among all the moving images 90 while switching at predetermined time intervals.
  • the number of the one or more correction objects may be equal to the number of all parameter sets stored in the parameter set DB 130, or to the number of only a subset of the parameter sets stored in the parameter set DB 130.
  • the correction unit 104 corrects the parameters related to the display of one or more virtual objects based on the parameter set associated with each of at least one correction object selected by the user from among the one or more correction objects displayed on the display unit 124.
  • for example, when one or more correction objects are displayed on the display unit 124 and at least one of them is selected, the correction unit 104 first acquires, as the user's instruction information, information indicating which correction objects were selected. Then, based on the parameter set associated with each selected correction object, the correction unit 104 corrects the parameters related to the display of the one or more virtual objects. For example, the correction unit 104 sets the values of those parameters to values corresponding to the values of the individual depth parameters included in the parameter sets associated with the selected correction objects, as in the sketch below.
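  • a sketch of folding the user's selections into the active depth parameters; whether multiple selections are averaged or the last one wins is not specified, so the averaging below is only one plausible choice.

```python
def correct_display_parameters(active_params, selected_sets):
    """Set each depth parameter to the mean of the values in the selected parameter sets."""
    corrected = dict(active_params)
    for key in corrected:
        values = [s[key] for s in selected_sets if key in s]
        if values:
            corrected[key] = sum(values) / len(values)
    return corrected

active = {"matching_score_calculation_ratio": 0.5, "disparity_penalty": 0.5}
chosen = [
    {"matching_score_calculation_ratio": 0.2, "disparity_penalty": 0.8},
    {"matching_score_calculation_ratio": 0.4, "disparity_penalty": 0.6},
]
print(correct_display_parameters(active, chosen))
```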
  • FIGS. 16 and 17 are flowcharts showing a part of the flow of processing according to the second embodiment.
  • the control unit 100 of the eyewear 10 performs “correction processing regarding display position of virtual object” (S201).
  • the “correction process regarding the display position of the virtual object” may be substantially the same as the process flow according to the first embodiment (that is, the process of S101 to S115). Thereby, the value of the parameter regarding the display position of the virtual object can be appropriately corrected.
  • the display control unit 106 causes the display unit 124 to display a character string for instructing the user to move the hand.
  • the display control unit 106 causes the display unit 124 to display character strings such as “Adjust the recognition accuracy. Move your hand in front of your eyes.” And “Start” (S203).
  • the stereo camera (included in the sensor unit 122) starts depth sensing of the user's hand according to the control of the control unit 100 (S205).
  • the display control unit 106 causes the display unit 124 to display a character string (for example, "stop” or the like) for instructing to stop moving the hand. Then, the stereo camera ends the depth sensing of the user's hand according to the control of the control unit 100 (S209).
  • the display control unit 106 generates, for each of the plurality of types of parameter sets stored in the parameter set DB 130, a moving image in which the result of applying that parameter set to the depth sensing results acquired between S205 and S209 (one or more left images and one or more right images) and the captured image of the user's hand are combined (S211).
  • the display control unit 106 extracts a plurality of undisplayed moving images from all the moving images generated in S211 (S221).
  • the display control unit 106 causes the display unit 124 to simultaneously display the plurality of moving images extracted in S221 (S223).
  • the correction unit 104 acquires identification information of the selected moving image as instruction information of the user (S225).
  • the correction unit 104 determines whether the display of all the moving images generated in S211 is finished (S227). If there is an undisplayed moving image (S227: No), the processing after S221 is repeated again.
  • when the display of all the moving images is finished (S227: Yes), the correction unit 104 corrects the parameters related to the display of one or more virtual objects using the parameter sets corresponding to all of the user's instruction information acquired each time S225 was performed (S229). The whole S221-S229 loop is sketched below.
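  • a minimal sketch of the S221-S229 loop; display_and_wait_for_selection stands in for the display unit 124 and the user's choice and is purely hypothetical.

```python
def run_selection_loop(all_previews, page_size, display_and_wait_for_selection):
    """Page through all generated previews (S221-S227) and collect the user's choices."""
    selections = []
    remaining = list(all_previews)
    while remaining:                      # S227: repeat while undisplayed previews remain
        page, remaining = remaining[:page_size], remaining[page_size:]   # S221
        selections.append(display_and_wait_for_selection(page))          # S223, S225
    return selections                     # used in S229 to correct the display parameters

# Example with a dummy UI callback that always picks the first preview of each page.
picks = run_selection_loop(list("abcdefgh"), 6, lambda page: page[0])
print(picks)  # -> ['a', 'g']
```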
  • the flow of processing according to the second embodiment is not limited to the example described above.
  • the above-described “correction process regarding display position of virtual object” may be performed last (that is, after S229) instead of being performed first.
  • as described above, the eyewear 10 according to the second embodiment corrects the parameters related to the display of one or more virtual objects based on the parameter set associated with each of at least one correction object that the user wearing the eyewear 10 selects from among the one or more correction objects displayed on the display unit 124.
  • the eyewear 10 uses the value of each depth parameter included in the parameter set associated with each of the at least one correction object selected by the user when displaying the one or more virtual objects. Set as the value of each depth parameter. Therefore, the user can easily set desired values for each of the plurality of types of depth parameters.
  • the second embodiment is not limited to the example described above.
  • there may be a plurality of types of parameters (occlusion filter parameters) in the algorithm that generates the occlusion mesh, for example for meshing or prediction. Therefore, the eyewear 10 may perform substantially the same processing as the correction processing of the depth parameter values described above in order to correct the one or more occlusion filter parameters.
  • the one or more occlusion filter parameters are examples of the “parameter related to the display of one or more virtual objects” according to the present disclosure.
  • the eyewear 10 includes a CPU 150, a read only memory (ROM) 152, a random access memory (RAM) 154, a bus 156, an interface 158, an input device 160, an output device 162, a storage device 164, and a communication device 166.
  • the CPU 150 functions as an arithmetic processing unit and a control unit, and controls the overall operation in the eyewear 10 according to various programs.
  • the CPU 150 also realizes the function of the control unit 100 in the eyewear 10.
  • the CPU 150 is configured of a processor such as a microprocessor.
  • the ROM 152 stores programs used by the CPU 150, control data such as calculation parameters, and the like.
  • the RAM 154 temporarily stores, for example, a program executed by the CPU 150, data in use, and the like.
  • the bus 156 is configured of a CPU bus and the like.
  • the bus 156 connects the CPU 150, the ROM 152, and the RAM 154 to one another.
  • the interface 158 connects the input device 160, the output device 162, the storage device 164, and the communication device 166 to the bus 156.
  • the input device 160 includes, for example, an input unit such as a touch panel, a button, a switch, a lever, or a microphone with which the user inputs information, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 150.
  • the input device 160 can function as the input unit 126.
  • the output device 162 includes a display such as an LCD or an OLED, or a display device such as a projector. The output device 162 may also include an audio output device such as a speaker. The output device 162 can function as the display unit 124.
  • the storage device 164 is a device for storing data.
  • the storage device 164 includes, for example, a storage medium, a recording device that records data in the storage medium, a reading device that reads data from the storage medium, or a deletion device that deletes data recorded in the storage medium.
  • the storage device 164 can function as the storage unit 128.
  • the communication device 166 is a communication interface configured by, for example, a communication device (for example, a network card or the like) for connecting to the communication network 22 or the like. Further, the communication device 166 may be a wireless LAN compatible communication device, an LTE (Long Term Evolution) compatible communication device, or a wire communication device performing communication by wire. The communication device 166 can function as the communication unit 120.
  • although the information processing device according to the present disclosure has been described using the example of the eyewear 10, the present disclosure is not limited to this example.
  • the information processing device may be another type of device.
  • the other type of device may be the server 20.
  • the other type of device may be a general-purpose PC (Personal Computer), a tablet terminal, a game machine, a mobile phone such as a smartphone, a portable music player, a speaker, a projector, a wearable device such as a smartwatch or an earphone, an in-vehicle device (such as a car navigation device), or a robot (such as a humanoid robot or an autonomous vehicle).
  • each step may be processed in an appropriate order.
  • each step may be processed partially in parallel or individually instead of being processed chronologically.
  • some of the described steps may be omitted or additional steps may be added.
  • (1) An information processing apparatus comprising: an acquisition unit that acquires a result of depth sensing of a real object corresponding to a viewpoint position of a user; and a display control unit that causes a display unit corresponding to the user to display a correction object for correcting a parameter related to display of a virtual object based on the result of the depth sensing of the real object.
  • (2) The information processing apparatus according to (1), further including: a correction unit that corrects a parameter related to display of the virtual object based on instruction information of the user on the correction object.
  • (4) The information processing apparatus according to (3), wherein the real object is located in a space corresponding to a field of view of the user.
  • the display control unit causes the display unit to display a depth object indicating a shape of the real object, which is specified based on a result of depth sensing of the real object, together with the correction object.
  • the information processing apparatus according to (6)
  • the parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
  • the real object includes at least the user's hand,
  • the information processing apparatus according to (5), wherein the instruction information of the user includes a recognition result of the movement of the user's hand with respect to the correction object.
  • the correction object includes a plurality of correction objects, and the plurality of correction objects respectively correspond to different correction directions with respect to the display position of the virtual object,
  • the correction object includes a depth object indicating a shape of the real object, which is identified based on a result of depth sensing of the real object.
  • the parameter related to display of the virtual object includes a parameter related to the display position of the virtual object
  • the real object includes at least the user's hand
  • the user's instruction information includes a correction start instruction for instructing start of correction of parameters related to display of the virtual object, and a correction end instruction for instructing end of correction of parameters related to display of the virtual object.
  • the correction unit determines the correction amount of the parameter related to the display position of the virtual object according to the difference between the first position information of the user's hand when the correction start instruction is acquired and the second position information of the user's hand when the correction end instruction is acquired; the information processing apparatus according to (8).
  • the parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
  • the parameter related to display of the virtual object includes a parameter related to the display position of the virtual object
  • the user's instruction information includes a correction start instruction for instructing start of correction of parameters related to display of the virtual object, and a correction end instruction for instructing end of correction of parameters related to display of the virtual object.
  • the correction unit determines, after the display of the correction object, the correction amount of the parameter related to the display position of the virtual object according to the difference between the first position information of the user when the correction start instruction is acquired and the second position information of the user when the correction end instruction is acquired.
  • the information processing apparatus includes a plurality of correction objects
  • the depth sensing result includes a plurality of depth sensing results based on different combinations of a plurality of types of parameters related to depth sensing, Each of the plurality of correction objects indicates a corresponding one of the plurality of depth sensing results,
  • the plurality of types of parameters include a plurality of types of parameters related to calculation of a matching score between a result of first depth sensing of the real object by the left depth camera corresponding to the viewpoint position of the user and a result of second depth sensing of the real object by the right depth camera corresponding to the left depth camera.
  • the matching score is calculated using a plurality of types of calculation methods.
  • the plurality of types of parameters related to calculation of the matching score are a first parameter related to an application ratio of each of the plurality of types of calculation methods at the time of calculation of the matching score, and a second parameter related to a threshold of the matching score
  • the real object includes at least the user's hand
  • the correction object is a plurality of moving images in which a plurality of depth sensing results on the real object when the different combinations are used with respect to the plurality of types of parameters and a captured image of the user's hand are combined
  • the correction object includes at least a first correction object and a second correction object,
  • the display control unit switches, at predetermined time intervals, between control of causing the display unit to display the first correction object and control of causing the display unit to display the second correction object,
  • the information processing apparatus according to any one of (13) to (17).
  • eyewear 20 server 22 communication network 100 control unit 102 sensing result acquisition unit 104 correction unit 106 display control unit 120 communication unit 122 sensor unit 124 display unit 126 input unit 128 storage unit 130 parameter set DB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To propose an information processing device, information processing method, and program capable of correcting, with high convenience, a parameter relating to a display of a virtual object based on a result of a depth sensing. [Solution] Provided is an information processing device comprising: an acquisition part for acquiring a result of a depth sensing of a real object associated with a user's viewing position; and a display control part for causing a display part associated with the user to display an object for correction for correcting a parameter relating to the display of a virtual object based on the result of the depth sensing of the real object.

Description

情報処理装置、情報処理方法、および、プログラムINFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
 本開示は、情報処理装置、情報処理方法、および、プログラムに関する。 The present disclosure relates to an information processing device, an information processing method, and a program.
 従来、AR(Augmented Reality)に関する技術が各種開発されている。ARでは、実空間におけるユーザの位置と関連付けて各種の情報(例えば仮想オブジェクトなど)をユーザに対して提示することができる。 Conventionally, various technologies related to AR (Augmented Reality) have been developed. In the AR, various types of information (for example, a virtual object) can be presented to the user in association with the position of the user in the real space.
 例えば、下記特許文献1には、ユーザが操作可能な複数の操作対象の基準位置に基づいて、少なくとも一つの操作対象に関連付けられるジェスチャ領域が設定され、そして、当該ジェスチャ領域におけるジェスチャインターフェースを介して、個々の操作対象の操作を制御することが記載されている。 For example, in Patent Document 1 below, a gesture area associated with at least one operation target is set based on the reference positions of a plurality of operation targets operable by the user, and then, via the gesture interface in the gesture area It is described to control the operation of each operation target.
特開2014-186361号公報JP 2014-186361 A
 ところで、デプスセンシングの結果に基づいて仮想オブジェクトが表示される場面では、当該仮想オブジェクトの表示に関するパラメータの値が不適切な場合が生じ得る。しかしながら、特許文献1に記載の技術では、当該仮想オブジェクトの表示に関するパラメータの値を補正することは考慮されていない。 By the way, in the scene where a virtual object is displayed based on the result of depth sensing, the case where the value of the parameter regarding the display of the virtual object concerned may arise may arise. However, in the technology described in Patent Document 1, it is not considered to correct the value of the parameter related to the display of the virtual object.
 そこで、本開示では、デプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを利便性高く補正することが可能な、新規かつ改良された情報処理装置、情報処理方法、および、プログラムを提案する。 Therefore, the present disclosure proposes a novel and improved information processing apparatus, an information processing method, and a program that can highly conveniently correct parameters related to display of a virtual object based on the result of depth sensing.
 本開示によれば、ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得する取得部と、前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部に表示させる表示制御部と、を備える、情報処理装置が提供される。 According to the present disclosure, an acquisition unit for acquiring a result of depth sensing of a real object corresponding to a viewpoint position of a user, and a correction for correcting a parameter related to display of a virtual object based on the result of depth sensing of the real object An information processing apparatus is provided, comprising: a display control unit configured to display an object for display on a display unit corresponding to the user.
 また、本開示によれば、ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得することと、前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部にプロセッサが表示させることと、を含む、情報処理方法が提供される。 Further, according to the present disclosure, for acquiring a result of depth sensing of a real object corresponding to a user's viewpoint position, and correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object. There is provided an information processing method including a processor displaying a correction object on a display unit corresponding to the user.
 また、本開示によれば、コンピュータを、ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得する取得部と、前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部に表示させる表示制御部、として機能させるためのプログラムが提供される。 Further, according to the present disclosure, an acquisition unit that acquires a result of depth sensing of a real object corresponding to a viewpoint position of a user, and a parameter related to display of a virtual object based on the result of depth sensing of the real object There is provided a program for causing a correction object for correction to function as a display control unit that causes the display unit corresponding to the user to display the correction object.
 以上説明したように本開示によれば、デプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを利便性高く補正することができる。なお、ここに記載された効果は必ずしも限定されるものではなく、本開示中に記載されたいずれかの効果であってもよい。 As described above, according to the present disclosure, parameters relating to display of a virtual object based on the result of depth sensing can be corrected with high convenience. In addition, the effect described here is not necessarily limited, and may be any effect described in the present disclosure.
本開示の各実施形態に共通する情報処理システムの構成例を示した説明図である。It is an explanatory view showing an example of composition of an information processing system common to each embodiment of this indication. 手による仮想オブジェクトの遮蔽領域が正しく表示されている例を示した図である。It is a figure showing the example where the occlusion field of the virtual object by the hand is correctly displayed. 手による仮想オブジェクトの遮蔽領域がずれて表示されている例を示した図である。It is a figure showing the example where the occlusion field of the virtual object by the hand is shifted and displayed. 第1の実施形態に係るアイウェア10の機能構成例を示したブロック図である。It is a block diagram showing an example of functional composition of eyewear 10 concerning a 1st embodiment. 第1の実施形態に係る、スクリーン座標系におけるデプスオブジェクトの移動に基づいた、仮想オブジェクトの表示位置に関するパラメータの補正例を概略的に示した図である。FIG. 7 is a diagram schematically illustrating an example of correction of parameters related to a display position of a virtual object based on movement of a depth object in the screen coordinate system according to the first embodiment. 補正開始指示および補正終了指示の流れの具体例を示した図である。FIG. 6 is a diagram showing a specific example of the flow of a correction start instruction and a correction end instruction. 補正開始指示および補正終了指示の流れの具体例を示した図である。FIG. 6 is a diagram showing a specific example of the flow of a correction start instruction and a correction end instruction. 補正開始指示および補正終了指示の流れの具体例を示した図である。FIG. 6 is a diagram showing a specific example of the flow of a correction start instruction and a correction end instruction. 補正開始指示および補正終了指示の流れの具体例を示した図である。FIG. 6 is a diagram showing a specific example of the flow of a correction start instruction and a correction end instruction. 第1の実施形態に係るデプスオブジェクトと4個の補正用オブジェクトとの表示例を示した図である。FIG. 6 is a view showing a display example of a depth object and four correction objects according to the first embodiment. 第1の実施形態に係るデプスオブジェクトと4個の補正用オブジェクトとの表示例を示した図である。FIG. 6 is a view showing a display example of a depth object and four correction objects according to the first embodiment. 第1の実施形態に係る処理の流れを示した図である。It is a figure showing the flow of processing concerning a 1st embodiment. 第1の実施形態の応用例1に係る、仮想オブジェクトの表示位置に関するパラメータの補正の具体例を示した図である。It is a figure showing a concrete example of amendment of a parameter about a display position of a virtual object concerning application example 1 of a 1st embodiment. 第1の実施形態の応用例1に係る、仮想オブジェクトの表示位置に関するパラメータの補正の具体例を示した図である。It is a figure showing a concrete example of amendment of a parameter about a display position of a virtual object concerning application example 1 of a 1st embodiment. 第1の実施形態の応用例1に係る、仮想オブジェクトの表示位置に関するパラメータの補正の具体例を示した図である。It is a figure showing a concrete example of amendment of a parameter about a display position of a virtual object concerning application example 1 of a 1st embodiment. 第1の実施形態の応用例2に係る、仮想オブジェクトの表示位置に関するパラメータの補正の具体例を示した図である。It is a figure showing a concrete example of amendment of a parameter about a display position of a virtual object concerning application example 2 of a 1st embodiment. 第1の実施形態の応用例2に係る、仮想オブジェクトの表示位置に関するパラメータの補正の具体例を示した図である。It is a figure showing a concrete example of amendment of a parameter about a display position of a virtual object concerning application example 2 of a 1st embodiment. 第1の実施形態の応用例4に係る、カメラ座標系におけるデプスオブジェクトの移動に基づいた、仮想オブジェクトの表示位置に関するパラメータの補正例を概略的に示した図である。FIG. 17 is a diagram schematically illustrating an example of correction of a parameter related to a display position of a virtual object based on movement of a depth object in a camera coordinate system according to an application example 4 of the first embodiment. 
デプスセンシングの結果の精度が低いケースの種類の分類例を示した図である。It is a figure showing an example of classification of a kind of case where accuracy of a result of depth sensing is low. 第2の実施形態に係るアイウェア10の機能構成例を示したブロック図である。It is a block diagram showing an example of functional composition of eyewear 10 concerning a 2nd embodiment. 第2の実施形態に係るパラメータセットの具体例を示した図である。It is a figure showing a concrete example of a parameter set concerning a 2nd embodiment. 第2の実施形態に係る、各パラメータセットに対応する動画の表示例を示した図である。It is the figure which showed the example of a display of the animation corresponding to each parameter set concerning a 2nd embodiment. 図15Aに示した動画90cを拡大して示した図である。It is the figure which expanded and showed the moving image 90c shown to FIG. 15A. 第2の実施形態に係る処理の流れの一部を示した図である。It is a figure showing a part of flow of processing concerning a 2nd embodiment. 第2の実施形態に係る処理の流れの一部を示した図である。It is a figure showing a part of flow of processing concerning a 2nd embodiment. 各実施形態に共通するアイウェア10のハードウェア構成例を示した図である。It is a figure showing an example of hardware constitutions of eyewear 10 common to each embodiment.
 以下に添付図面を参照しながら、本開示の好適な実施の形態について詳細に説明する。なお、本明細書及び図面において、実質的に同一の機能構成を有する構成要素については、同一の符号を付することにより重複説明を省略する。 Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the present specification and the drawings, components having substantially the same functional configuration will be assigned the same reference numerals and redundant description will be omitted.
 また、本明細書及び図面において、実質的に同一の機能構成を有する複数の構成要素を、同一の符号の後に異なるアルファベットを付して区別する場合もある。例えば、実質的に同一の機能構成を有する複数の構成要素を、必要に応じて表示部124aおよび表示部124bのように区別する。ただし、実質的に同一の機能構成を有する複数の構成要素の各々を特に区別する必要がない場合、同一符号のみを付する。例えば、表示部124aおよび表示部124bを特に区別する必要が無い場合には、単に表示部124と称する。 Further, in the present specification and the drawings, a plurality of components having substantially the same functional configuration may be distinguished by attaching different alphabets to the same reference numerals. For example, a plurality of components having substantially the same functional configuration are distinguished as required by the display unit 124 a and the display unit 124 b. However, when it is not necessary to distinguish each of a plurality of components having substantially the same functional configuration, only the same reference numerals will be given. For example, when there is no need to distinguish between the display unit 124 a and the display unit 124 b, the display unit 124 is simply referred to as the display unit 124.
 また、以下に示す項目順序に従って当該「発明を実施するための形態」を説明する。
 1.情報処理システムの構成
 2.第1の実施形態
 3.第2の実施形態
 4.ハードウェア構成
 5.変形例
In addition, the “mode for carrying out the invention” will be described in the order of items shown below.
1. Configuration of information processing system
2. First embodiment
3. Second embodiment
4. Hardware configuration
5. Modified example
<<1.情報処理システムの構成>>
 本開示は、一例として「2.第1の実施形態」および「3.第2の実施形態」において詳細に説明するように、多様な形態で実施され得る。まず、本開示の各実施形態に共通する情報処理システムの構成例について、図1を参照して説明する。図1に示したように、各実施形態に共通する情報処理システムは、アイウェア10、サーバ20、および、通信網22を有する。
<< 1. Information Processing System Configuration >>
The present disclosure can be implemented in various forms as will be described in detail in, for example, "2. first embodiment" and "3. second embodiment". First, a configuration example of an information processing system common to each embodiment of the present disclosure will be described with reference to FIG. As shown in FIG. 1, an information processing system common to the embodiments includes an eyewear 10, a server 20, and a communication network 22.
 <1-1.アイウェア10>
 アイウェア10は、本開示に係る情報処理装置の一例である。アイウェア10は、一以上の仮想オブジェクトを含むコンテンツの表示を制御し得る。例えば、アイウェア10は、アイウェア10を装着しているユーザの周囲の実オブジェクト(例えばユーザの手など)を当該ユーザが目視可能としつつ、一以上の仮想オブジェクトを、後述する表示部124に表示させる。
<1-1. Eyewear 10>
The eyewear 10 is an example of an information processing apparatus according to the present disclosure. Eyewear 10 may control the display of content that includes one or more virtual objects. For example, the eyewear 10 allows one or more virtual objects to be displayed on the display unit 124 described later while making the real object (for example, the user's hand, etc.) around the user wearing the eyewear 10 visible to the user. Display.
 ここで、当該コンテンツは、例えば、ARコンテンツ、または、VR(Virtual Reality)コンテンツなどである。また、仮想オブジェクトは、2Dのオブジェクトであってもよいし、3Dのオブジェクトであってもよい。なお、アイウェア10は、当該コンテンツを、例えば、後述する通信網22を介して、サーバ20などの外部の装置から受信することも可能であるし、または、(自装置内に)予め記憶していてもよい。 Here, the content is, for example, AR content or VR (Virtual Reality) content. Also, the virtual object may be a 2D object or a 3D object. The eyewear 10 can also receive the content, for example, from an external device such as the server 20 via the communication network 22 described later, or can be stored in advance (in its own device) It may be
 図1に示したように、アイウェア10は、表示部124を含む頭部装着型のデバイスであり得る。例えば、アイウェア10は、ARグラス、ビデオシースルー型のHMD(Head Mounted Display)、または、遮蔽型のHMDであってもよい。 As shown in FIG. 1, eyewear 10 may be a head-mounted device that includes a display 124. For example, the eyewear 10 may be an AR glass, a video see-through HMD (Head Mounted Display), or a shield HMD.
 図1に示した例では、アイウェア10は、表示部124として、後述する右側表示部124aおよび左側表示部124bを含む。この場合、アイウェア10は、右側表示部124aおよび左側表示部124bに当該所定のコンテンツを表示し得る。例えば、アイウェア10は、まず、当該所定のコンテンツに基づいて右眼用画像および左眼用画像を生成し、そして、右眼用画像を右側表示部124aに表示し、かつ、左眼用画像を左側表示部124bに表示する。 In the example illustrated in FIG. 1, the eyewear 10 includes, as the display unit 124, a right side display unit 124a and a left side display unit 124b described later. In this case, the eyewear 10 can display the predetermined content on the right side display unit 124a and the left side display unit 124b. For example, the eyewear 10 first generates an image for the right eye and an image for the left eye based on the predetermined content, and displays the image for the right eye on the right display unit 124 a and the image for the left eye Is displayed on the left side display unit 124b.
 {1-1-1.右側表示部124a、左側表示部124b}
 図1に示したように、右側表示部124aおよび左側表示部124bは、透過型の表示装置として構成され得る。この場合、右側表示部124aは、アイウェア10に含まれる右眼用レンズ(または、ゴーグル型レンズ)の少なくとも一部の領域を投影面として映像を投影し得る。さらに、左側表示部124bは、アイウェア10に含まれる左眼用レンズ(または、ゴーグル型レンズ)の少なくとも一部の領域を投影面として映像を投影し得る。
{1-1-1. Right display section 124a, left display section 124b}
As shown in FIG. 1, the right side display unit 124 a and the left side display unit 124 b may be configured as a transmissive display device. In this case, the right side display unit 124a can project an image by using at least a partial area of the right-eye lens (or the goggle-type lens) included in the eyewear 10 as a projection plane. Furthermore, the left display unit 124b can project an image by using at least a partial area of the left eye lens (or the goggle type lens) included in the eyewear 10 as a projection plane.
 変形例として、表示部124は、非透過型の表示装置として構成されてもよい。例えば、右側表示部124aおよび左側表示部124bはそれぞれ、LCD(Liquid Crystal Display)、または、OLED(Organic Light Emitting Diode)などを含んで構成され得る。この場合、アイウェア10は、カメラを有し、そして、当該カメラにより撮影されたユーザの前方の映像を表示部124に逐次表示し得る。これにより、ユーザは、表示部124に表示される映像を介して、ユーザの前方の風景を見ることができる。 As a modification, the display unit 124 may be configured as a non-transmissive display device. For example, the right side display unit 124 a and the left side display unit 124 b may be configured to include an LCD (Liquid Crystal Display), an OLED (Organic Light Emitting Diode), and the like. In this case, the eyewear 10 has a camera, and can sequentially display on the display unit 124 an image in front of the user captured by the camera. Thus, the user can view the scenery in front of the user through the video displayed on the display unit 124.
 {1-1-2.仮想オブジェクトの表示例}
 ここで、図2を参照して、アイウェア10による仮想オブジェクトの表示例について説明する。図2は、表示部124の表示画面30内に、ある仮想オブジェクト40が表示されている例を示した図である。詳細については後述するが、アイウェア10は、デプスセンサを有し、そして、実空間内の個々の実オブジェクト(図2に示した例では、ユーザの手2など)を当該デプスセンサを用いてセンシングし得る。その後、アイウェア10は、当該デプスセンサによるデプスセンシングの結果と、仮想オブジェクト40の表示位置に対応する実空間内の位置情報との比較に基づいて、当該個々の実オブジェクトと仮想オブジェクト40との重なりの有無(換言すれば、当該個々の実オブジェクトによる仮想オブジェクト40の遮蔽の有無)を判定する。例えば、手2と仮想オブジェクト40とが重なっている(換言すれば、仮想オブジェクト40のうちの一部が手2により遮蔽されている)と判定された場合には、図2に示したように、アイウェア10は、仮想オブジェクト40のうちの当該重なり領域に対応する部分を非表示化する。図2に示した例では、平面である仮想オブジェクト40のうち、ユーザの手2が重なっている領域が非表示化されている。
{1-1-2. Display example of virtual object}
Here, with reference to FIG. 2, a display example of a virtual object by the eyewear 10 will be described. FIG. 2 is a view showing an example in which a virtual object 40 is displayed on the display screen 30 of the display unit 124. As shown in FIG. Although details will be described later, the eyewear 10 has a depth sensor, and senses individual real objects in the real space (in the example shown in FIG. 2, the user's hand 2 etc.) using the depth sensor. obtain. Thereafter, the eyewear 10 overlaps the individual real object with the virtual object 40 based on comparison of the result of the depth sensing by the depth sensor with the position information in the real space corresponding to the display position of the virtual object 40. (In other words, the presence or absence of occlusion of the virtual object 40 by the respective real objects). For example, if it is determined that the hand 2 and the virtual object 40 overlap (in other words, part of the virtual object 40 is occluded by the hand 2), as shown in FIG. The eyewear 10 hides the portion of the virtual object 40 that corresponds to the overlapping area. In the example illustrated in FIG. 2, in the virtual object 40 which is a plane, the area where the user's hand 2 overlaps is hidden.
 <1-2.サーバ20>
 サーバ20は、各種のコンテンツ(ARコンテンツやVRコンテンツなど)を管理する装置である。また、サーバ20は、通信網22を介して他の装置と通信を行うことが可能である。例えば、アイウェア10からコンテンツの取得要求を受信した場合には、サーバ20は、当該取得要求に対応するコンテンツをアイウェア10へ送信する。
<1-2. Server 20>
The server 20 is a device that manages various types of content (such as AR content and VR content). Also, the server 20 can communicate with other devices via the communication network 22. For example, when receiving an acquisition request for content from the eyewear 10, the server 20 transmits the content corresponding to the acquisition request to the eyewear 10.
 <1-3.通信網22>
 通信網22は、通信網22に接続されている装置から送信される情報の有線、または無線の伝送路である。例えば、通信網22は、電話回線網、インターネット、衛星通信網などの公衆回線網や、Ethernet(登録商標)を含む各種のLAN(Local Area Network)、WAN(Wide Area Network)などを含んでもよい。また、通信網22は、IP-VPN(Internet Protocol-Virtual Private Network)などの専用回線網を含んでもよい。
<1-3. Communication network 22>
The communication network 22 is a wired or wireless transmission path of information transmitted from a device connected to the communication network 22. For example, the communication network 22 may include a telephone network, the Internet, a public network such as a satellite communication network, various LANs (Local Area Network) including Ethernet (registered trademark), a WAN (Wide Area Network), etc. . Also, the communication network 22 may include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
 <1-4.課題の整理>
 以上、各実施形態に共通する情報処理システムの構成について説明した。ところで、例えば図3に示したように、一以上の実オブジェクト(図3に示した例では、ユーザの手2)による仮想オブジェクト40の遮蔽領域(換言すれば、仮想オブジェクト40のうちの、一以上の実オブジェクトによる非表示領域)は、様々な原因により、目標の位置(例えば、開発者が意図する位置)からズレて表示され得る。図3は、図2に示した例と同じ状況において、ユーザの手2による仮想オブジェクト40の遮蔽領域が何らかの原因によりズレて表示された例を示した図である。
<1-4. Organize issues>
The configuration of the information processing system common to each embodiment has been described above. By the way, for example, as shown in FIG. 3, a shielded area of the virtual object 40 (in other words, one of the virtual objects 40) by one or more real objects (in the example shown in FIG. The non-display area by the above real object may be displayed shifted from the position of the target (for example, the position intended by the developer) due to various causes. FIG. 3 is a view showing an example in which the shielded area of the virtual object 40 by the user's hand 2 is shifted and displayed for some reason in the same situation as the example shown in FIG.
 ここで、当該様々な原因の例としては、第1に、アイウェア10によるデプスセンシングのキャリブレーションに関する誤差が挙げられる。当該誤差は、例えば、アイウェア10が有する、後述するデプスセンサによるセンシングの精度が低い場合に生じ得る。例えば、当該デプスセンサの内部パラメータの値として不正確な値が設定されていることなどの理由により当該誤差が生じ得る。さらに、当該誤差は、表示部124のキャリブレーションが適切に行われていないケースに生じ得る。通常、アイウェア10内の当該デプスセンサの設置位置と表示部124の設置位置とは異なり得る。このため、当該デプスセンサの設置位置と表示部124の設置位置との差分に応じて適切にキャリブレーション(補正など)されていない場合には当該誤差が生じ得る。 Here, as an example of the various causes, first, an error related to calibration of depth sensing by the eyewear 10 can be mentioned. The said error may arise, for example, when the precision of the sensing by the depth sensor which eyewear 10 has later mentioned later is low. For example, the error may occur because an incorrect value is set as the value of the internal parameter of the depth sensor. Furthermore, the error may occur in the case where the calibration of the display unit 124 is not properly performed. Generally, the installation position of the depth sensor in the eyewear 10 and the installation position of the display unit 124 may be different. Therefore, the error may occur if calibration (correction or the like) is not appropriately performed according to the difference between the installation position of the depth sensor and the installation position of the display unit 124.
 当該様々な原因の別の例として、アイウェア10を装着するユーザの視点位置が変動することが挙げられる。通常、ユーザごとに眼間距離が異なり、そして、当該眼間距離の個人差により当該誤差が生じ得る。また、通常、ユーザごとに内旋角が異なり得る。例えば、物体を見るときに人間の眼は内側を向く傾向があるが、この傾向の大きさは、ユーザごとに異なり得る。そして、当該内旋角の個人差により当該誤差が生じ得る。 Another example of the various causes is that the viewpoint position of the user wearing the eyewear 10 changes. Usually, the interocular distance is different for each user, and the individual difference in the interocular distance may cause the error. Also, usually, the internal rotation angle may be different for each user. For example, while the human eye tends to turn inward when looking at an object, the magnitude of this tendency may differ from user to user. And the said difference | error may arise by the individual difference of the said internal rotation angle.
 さらに、アイウェア10のかけズレ(つまり、ユーザがアイウェア10を適切に装着していないこと)によっても当該誤差は生じ得る。 Furthermore, the error may also occur due to the slippage of the eyewear 10 (that is, the user does not properly wear the eyewear 10).
 従って、例えば図3に示したような、実オブジェクトによる仮想オブジェクトの遮蔽領域のズレを補正するためには、これらの誤差に応じた適切な補正を行うこと必要がある。しかしながら、完全な補正を自動的に行うことは極めて難しい。そこで、アイウェア10を装着するユーザが、実オブジェクト(例えば手など)による仮想オブジェクトの遮蔽領域のズレを容易、かつ、精度高く補正可能であることが望まれる。 Therefore, for example, in order to correct the displacement of the shielding area of the virtual object due to the real object as shown in FIG. 3, it is necessary to perform appropriate correction according to these errors. However, it is extremely difficult to make a complete correction automatically. Therefore, it is desirable that the user wearing the eyewear 10 can easily and accurately correct the displacement of the shielding area of the virtual object due to the real object (for example, a hand or the like).
 そこで、上記事情を一着眼点にして、各実施形態に係るアイウェア10を創作するに至った。各実施形態に係るアイウェア10は、アイウェア10を装着するユーザの視点位置に対応する、少なくとも一つの実オブジェクトのデプスセンシングの結果を取得し、そして、当該少なくとも一つの実オブジェクトのデプスセンシングの結果に基づく少なくとも一つの仮想オブジェクトの表示に関するパラメータを補正するための少なくとも一つの補正用オブジェクトを表示部124に表示させることが可能である。このため、当該少なくとも一つの仮想オブジェクトの表示に関するパラメータの値を、ユーザが容易、かつ、適切に補正することができる。以下、各実施形態の内容について順次詳細に説明する。 Then, the eyewear 10 which concerns on each embodiment came to be created in view of the said situation. The eyewear 10 according to each embodiment acquires the result of depth sensing of at least one real object corresponding to the viewpoint position of the user wearing the eyewear 10, and performs depth sensing of the at least one real object. At least one correction object can be displayed on the display unit 124 for correcting parameters related to the display of at least one virtual object based on the result. Therefore, the user can easily and appropriately correct the value of the parameter related to the display of the at least one virtual object. The contents of each embodiment will be sequentially described in detail below.
<<2.第1の実施形態>>
 <2-1.構成>
 まず、第1の実施形態について説明する。最初に、第1の実施形態に係るアイウェア10の構成について説明する。図4は、アイウェア10の機能構成例を示したブロック図である。図4に示したように、アイウェア10は、制御部100、通信部120、センサ部122、表示部124、入力部126、および、記憶部128を有する。なお、上記の説明と同様の内容については説明を省略する。
<< 2. First embodiment >>
<2-1. Configuration>
First, the first embodiment will be described. First, the configuration of the eyewear 10 according to the first embodiment will be described. FIG. 4 is a block diagram showing an example of the functional configuration of the eyewear 10. As shown in FIG. 4, the eyewear 10 includes a control unit 100, a communication unit 120, a sensor unit 122, a display unit 124, an input unit 126, and a storage unit 128. In addition, description is abbreviate | omitted about the same content as said description.
 {2-1-1.センサ部122}
 センサ部122は、例えば、デプスセンサ(例えばステレオカメラ、または、time of flight方式のセンサなど)、イメージセンサ(カメラ)、および、マイクロフォンなどを含み得る。例えば、当該デプスセンサがステレオカメラである場合には、当該デプスセンサは、アイウェア10を装着するユーザの前方左側に関してデプスセンシングを行う左側カメラと、当該ユーザの前方右側に関してデプスセンシングを行う右側カメラとを含む。ここで、左側カメラは、本開示に係る左側デプスカメラの一例である。また、右側カメラは、本開示に係る右側デプスカメラの一例である。
{2-1-1. Sensor unit 122}
The sensor unit 122 may include, for example, a depth sensor (for example, a stereo camera or a time of flight sensor), an image sensor (camera), a microphone, and the like. For example, when the depth sensor is a stereo camera, the depth sensor may be a left camera that performs depth sensing on the front left side of the user wearing the eyewear 10 and a right camera that performs depth sensing on the front right side of the user. Including. Here, the left camera is an example of the left depth camera according to the present disclosure. In addition, the right camera is an example of the right depth camera according to the present disclosure.
 なお、センサ部122に含まれる個々のセンサは、常時センシングをしてもよいし、定期的にセンシングしてもよいし、または、特定の場合(例えば制御部100からの指示があった場合など)にのみセンシングしてもよい。 The individual sensors included in the sensor unit 122 may constantly sense, may periodically sense, or in a specific case (for example, when an instruction from the control unit 100 is given, etc.) You may sense only in).
 {2-1-2.入力部126}
 入力部126は、アイウェア10を装着するユーザによる各種の入力を受け付ける。この入力部126は、後述する入力装置160を含んで構成され得る。例えば、入力部126は、一以上の物理ボタンを含む。
{2-1-2. Input unit 126}
The input unit 126 receives various inputs from the user wearing the eyewear 10. The input unit 126 can be configured to include an input device 160 described later. For example, the input unit 126 includes one or more physical buttons.
 {2-1-3.制御部100}
 制御部100は、例えば、後述するCPU(Central Processing Unit)150やGPU(Graphics Processing Unit)などの処理回路を含んで構成され得る。制御部100は、アイウェア10の動作を統括的に制御する。また、図4に示したように、制御部100は、センシング結果取得部102、補正部104、および、表示制御部106を有する。
{2-1-3. Control unit 100}
The control unit 100 can be configured to include, for example, processing circuits such as a central processing unit (CPU) 150 and a graphics processing unit (GPU) described later. The control unit 100 centrally controls the operation of the eyewear 10. Further, as illustrated in FIG. 4, the control unit 100 includes a sensing result acquisition unit 102, a correction unit 104, and a display control unit 106.
 {2-1-4.センシング結果取得部102}
 センシング結果取得部102は、本開示に係る取得部の一例である。センシング結果取得部102は、アイウェア10を装着するユーザの視点位置に対応する、一以上の実オブジェクトのデプスセンシングの結果を、例えば受信または読出し処理などにより取得する。例えば、センシング結果取得部102は、センサ部122に含まれるデプスセンサによる、当該一以上の実オブジェクトのデプスセンシングの結果をセンサ部122から読み出すことにより取得する。
{2-1-4. Sensing result acquisition unit 102}
The sensing result acquisition unit 102 is an example of an acquisition unit according to the present disclosure. The sensing result acquisition unit 102 acquires the result of depth sensing of one or more real objects corresponding to the viewpoint position of the user wearing the eyewear 10, for example, by reception or readout processing. For example, the sensing result acquisition unit 102 acquires the result of the depth sensing of the one or more real objects by the depth sensor included in the sensor unit 122 by reading from the sensor unit 122.
 変形例として、当該ユーザが位置する環境内に一以上のデプスセンサ(図示省略)が設置されている場合、センシング結果取得部102は、当該一以上のデプスセンサによるデプスセンシングの結果を当該一以上のデプスセンサから受信することにより取得してもよい。この場合、例えば、センシング結果取得部102は、まず、当該一以上のデプスセンサのうちの少なくとも一つに対して、センシング結果の取得要求を通信部120に送信させる。そして、後述する通信部120が当該一以上のデプスセンサから当該デプスセンシングの結果を受信した場合には、センシング結果取得部102は、当該デプスセンシングの結果を通信部120から取得し得る。 As a modification, in the case where one or more depth sensors (not shown) are installed in the environment where the user is located, the sensing result acquisition unit 102 determines the result of the depth sensing by the one or more depth sensors as the one or more depth sensors It may be acquired by receiving from. In this case, for example, the sensing result acquisition unit 102 first causes the communication unit 120 to transmit a sensing result acquisition request to at least one of the one or more depth sensors. Then, when the communication unit 120 described later receives the result of the depth sensing from the one or more depth sensors, the sensing result acquisition unit 102 may acquire the result of the depth sensing from the communication unit 120.
 ここで、当該一以上の実オブジェクトは、基本的には、当該ユーザの視界に対応する空間内に位置する実オブジェクトであり得る。但し、かかる例に限定されず、当該一以上の実オブジェクトは、当該ユーザの視界外(例えば当該ユーザの後方など)に対応する所定の空間内に位置する一以上の実オブジェクトを含んでもよい。 Here, the one or more real objects may basically be real objects located in a space corresponding to the field of view of the user. However, the present invention is not limited to such an example, and the one or more real objects may include one or more real objects located in a predetermined space corresponding to the outside (e.g., behind the user) of the user.
 {2-1-5.補正部104}
 補正部104は、アイウェア10を装着するユーザの所定の指示情報が取得された場合に、一以上の仮想オブジェクトの表示に関するパラメータの値を当該指示情報に基づいて補正する。例えば、補正部104は、デプスセンシングの結果がセンシング結果取得部102により取得された後に取得される当該ユーザの指示情報に基づいて、当該一以上の仮想オブジェクトの表示に関するパラメータの値を補正する。ここで、当該一以上の仮想オブジェクトの表示に関するパラメータは、当該一以上の仮想オブジェクトの表示位置に関するパラメータを含む。
{2-1-5. Correction unit 104}
When predetermined instruction information of the user wearing the eyewear 10 is acquired, the correction unit 104 corrects the value of the parameter related to the display of one or more virtual objects based on the instruction information. For example, the correction unit 104 corrects the value of the parameter related to the display of the one or more virtual objects based on the instruction information of the user acquired after the depth sensing result is acquired by the sensing result acquisition unit 102. Here, the parameters related to the display of the one or more virtual objects include the parameters related to the display position of the one or more virtual objects.
 例えば、補正部104は、後述する表示制御部106の制御により表示部124に表示される一以上の補正用オブジェクトに対する当該ユーザの指示情報に基づいて、当該一以上の仮想オブジェクトの表示位置に関するパラメータを補正する。ここで、当該ユーザの指示情報は、当該デプスセンシングの結果が取得されており、かつ、当該一以上の補正用オブジェクトが表示部124に表示されている間に取得された情報であり得る。この場合、当該ユーザの指示情報が取得された際に、補正部104は、当該一以上の仮想オブジェクトの表示位置に関するパラメータを当該ユーザの指示情報に基づいて補正し得る。なお、ユーザの指示情報の具体的な内容については後述する。 For example, the correction unit 104 is a parameter related to the display position of the one or more virtual objects based on the instruction information of the user for the one or more correction objects displayed on the display unit 124 under the control of the display control unit 106 described later. Correct the Here, the instruction information of the user may be information acquired while the result of the depth sensing is acquired and the one or more correction objects are displayed on the display unit 124. In this case, when the instruction information of the user is acquired, the correction unit 104 may correct the parameter related to the display position of the one or more virtual objects based on the instruction information of the user. The specific contents of the user's instruction information will be described later.
 (2-1-5-1.パラメータの補正例)
 以下では、補正部104による補正の内容に関してより詳細に説明する。例えば、補正部104は、センシング結果取得部102により取得されたデプスセンシングの結果に基づいて特定された、一以上の実オブジェクトの形状を示すデプスオブジェクトを、スクリーン座標系において当該ユーザの指示情報に基づいて移動することにより、当該一以上の仮想オブジェクトの表示位置に関するパラメータの値を補正してもよい。ここで、当該デプスオブジェクトは、当該一以上の実オブジェクトの各々の全体の形状を示すオブジェクトであってもよいし、または、当該一以上の実オブジェクトの各々の外周部分(輪郭など)の形状を強調して示すオブジェクトであってもよい。または、当該デプスオブジェクトは、当該一以上の実オブジェクトの撮像画像から抽出される一部のエッジを強調して示すオブジェクトであってもよい。
(2-1-5-1. Correction example of parameter)
The contents of the correction by the correction unit 104 will be described in more detail below. For example, the correction unit 104 sets the depth object indicating the shape of one or more real objects specified based on the result of depth sensing acquired by the sensing result acquisition unit 102 to the instruction information of the user in the screen coordinate system. By moving based on the value, the value of the parameter related to the display position of the one or more virtual objects may be corrected. Here, the depth object may be an object that indicates the entire shape of each of the one or more real objects, or the shape of the outer peripheral portion (such as an outline) of each of the one or more real objects. It may be an object to be emphasized. Alternatively, the depth object may be an object that emphasizes and indicates a part of edges extracted from a captured image of the one or more real objects.
 図5は、当該一以上の仮想オブジェクトの表示位置に関するパラメータの値を補正するために、スクリーン座標系においてデプスオブジェクト50が移動される例を概略的に示した図である。図5に示した例では、デプスオブジェクト50は、デプスセンシングの結果に基づいて特定された当該ユーザの手の形状を示すオブジェクトである。例えば、図5に示したように、デプスオブジェクト50aの位置をデプスオブジェクト50bの位置まで移動することを指示する当該ユーザの指示情報が取得された場合には、補正部104は、当該一以上の仮想オブジェクトの表示位置に関するパラメータの値を、当該指示情報が示す、デプスオブジェクト50の移動量だけ補正する。 FIG. 5 is a diagram schematically showing an example in which the depth object 50 is moved in the screen coordinate system in order to correct the value of the parameter regarding the display position of the one or more virtual objects. In the example illustrated in FIG. 5, the depth object 50 is an object indicating the shape of the user's hand specified based on the result of depth sensing. For example, as illustrated in FIG. 5, when the user's instruction information for instructing to move the position of the depth object 50 a to the position of the depth object 50 b is acquired, the correction unit 104 corrects the one or more items. The value of the parameter related to the display position of the virtual object is corrected by the movement amount of the depth object 50 indicated by the instruction information.
 More specifically, in the example shown in FIG. 5, the values of the parameter related to the display position of the one or more virtual objects can be corrected separately for the right display unit 124a and the left display unit 124b. For example, when the user's instruction information corresponding to each of the right display unit 124a and the left display unit 124b is obtained, the correction unit 104 first translates and scales the depth object 50 separately in each of the left and right screen coordinate systems on the basis of the respective instruction information. The correction unit 104 then corrects the values of the parameter related to the display position of the one or more virtual objects separately for the left and right sides on the basis of the result of the translation and scaling of the depth object 50.
 The above processing will now be described in further detail. For example, for each of the right display unit 124a and the left display unit 124b, the correction unit 104 separately corrects the value of the parameter related to the display position of the one or more virtual objects by performing a coordinate transformation of each vertex of the depth object 50 on the basis of the following equation (1).
 v' = P · V · M · v   … (1)
 Here, v is a vertex position in the local coordinate system; that is, v may be the position of a vertex of the depth sensing result expressed in a three-dimensional coordinate system whose origin is the corresponding depth sensor (the right camera or the left camera). M is a model matrix; specifically, M may be a matrix for translating, rotating, and scaling the depth sensing result. V is a viewing transformation; specifically, V may be a matrix for transforming the vertex v into the camera coordinate system of the virtual camera so that it can be displayed on the right display unit 124a or the left display unit 124b. P is a projection matrix; specifically, P may be a matrix for normalizing the three directions (vertical, horizontal, and depth) in order to generate the screen coordinate system. v' is the vertex position (coordinates) after the transformation; specifically, v' may be the position (coordinates) of v in the coordinate system in which the three directions have been normalized.
 In the example shown in FIG. 5, for each of the right display unit 124a and the left display unit 124b, the correction unit 104 corrects the value of the parameter related to the display position of the one or more virtual objects by separately correcting the values of one or more parameters in the matrix P of equation (1) on the basis of the user's instruction information.
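 As a minimal sketch of equation (1) and of the per-eye correction, one could write the transform as follows; the use of 4x4 numpy matrices is an assumption, and because the patent does not specify which entries of P are adjusted, the sketch applies an equivalent screen-space offset after projection instead of editing P directly.

```python
import numpy as np

def transform_vertex(v_local, M, V, P, screen_offset=(0.0, 0.0)):
    """Compute v' = P @ V @ M @ v (equation (1)) and apply a per-eye
    screen-space offset as a stand-in for adjusting parameters inside P."""
    v = np.append(np.asarray(v_local, dtype=float), 1.0)  # homogeneous coordinates
    v_clip = P @ V @ M @ v                                 # equation (1)
    v_ndc = v_clip[:3] / v_clip[3]                         # perspective divide
    v_ndc[0] += screen_offset[0]                           # horizontal correction for this eye
    v_ndc[1] += screen_offset[1]                           # vertical correction for this eye
    return v_ndc

# Hypothetical per-eye usage with separate offsets for the right and left displays:
# right_pts = [transform_vertex(v, M, V_right, P_right, offset_right) for v in depth_vertices]
# left_pts  = [transform_vertex(v, M, V_left,  P_left,  offset_left)  for v in depth_vertices]
```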
 Note that the above correction is performed for one eye at a time. Therefore, while the correction is being performed for either the left or the right side, the user would normally have to close the eye on the other side or cover it with a hand. However, these methods place a high load on the user or may soil the eyewear 10. Therefore, while the correction is being performed for either the left or the right side, the display control unit 106 described later may set the display color of the entire display area of the display unit 124 on the other side to a predetermined color (for example, black), or may display a predetermined image over the entire display area of the display unit 124 on the other side. Alternatively, in a case where each of the two display units 124 is provided with one or more light control elements, while the correction is being performed for one side, the display control unit 106 may control the one or more light control elements of the display unit 124 on the other side so that its entire display area appears black. Alternatively, in a case where each of the two display units 124 is provided with a shutter unit (a physical blindfold) that can be opened and closed in a predetermined direction (for example, vertically), while the correction is being performed for one side, the display control unit 106 may control these shutter units so that the shutter unit of the display unit 124 on that side is open and the shutter unit of the display unit 124 on the other side is closed.
 (2-1-5-2. Instruction information)
 The content of the user's instruction information will be described below in more detail. The user's instruction information may include a correction start instruction for starting the correction of a parameter related to the display of the one or more virtual objects, a correction end instruction for ending the correction of a parameter related to the display of the one or more virtual objects, and/or an instruction of the correction amount of the parameter to be applied when the correction unit 104 performs the correction. The user's instruction information may include one or more of these three types of instructions.
 - Operation on the input unit 126
 Specific examples of the correction start instruction and the correction end instruction will be described below. For example, the correction start instruction and/or the correction end instruction may be the detection of a predetermined operation by the user on the input unit 126 (for example, a predetermined physical button). As one example, when it is detected that a predetermined physical button (included in the input unit 126) has been pressed and held, the correction unit 104 may acquire the detection result as the correction start instruction or the correction end instruction. This method has the advantages that the user is unlikely to make an operation mistake and that the load on the eyewear 10 for determining the operation is small.
 - Utterance
 Alternatively, the correction start instruction and/or the correction end instruction may be a speech recognition result of a predetermined utterance by the user. For example, when it is recognized that the user has uttered a predetermined voice command for starting the correction (for example, "Start calibration."), the correction unit 104 acquires the recognition result as the correction start instruction. Alternatively, when it is recognized that the user has uttered a predetermined voice command for ending the correction (for example, "End calibration."), the correction unit 104 may acquire the recognition result as the correction end instruction. According to this method, the user can instruct the start and end of the correction hands-free.
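 As a minimal sketch of how recognized utterances might be mapped to the start and end of the correction; the command strings follow the examples above, while the handler and method names are assumptions introduced for illustration.

```python
# Hypothetical mapping from recognized utterances to correction-mode transitions.
VOICE_COMMANDS = {
    "Start calibration.": "start",
    "End calibration.": "end",
}

def handle_utterance(recognized_text, correction_unit):
    """Treat a recognized voice command as a correction start or end instruction."""
    action = VOICE_COMMANDS.get(recognized_text.strip())
    if action == "start":
        correction_unit.begin_correction()   # switch to the correction mode
    elif action == "end":
        correction_unit.finish_correction()  # switch back to the normal mode
    # Any other utterance is not treated as an instruction.
```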
 - Hand movement
 Alternatively, the correction start instruction and/or the correction end instruction may be a recognition result of the user's hand movement. For example, when a predetermined hand gesture for instructing the start and/or end of the correction is recognized, the correction unit 104 acquires the recognition result as the correction start instruction or the correction end instruction. The predetermined gesture may be, for example, closing the hand into a fist and then opening it flat, performed twice in total. Alternatively, the predetermined gesture may be putting the palms of both hands together, or touching together the fingertips of predetermined fingers of both hands.
 Such a predetermined gesture differs markedly from hand movements used for other operations, and thus has the advantage of being easy for the eyewear 10 to recognize. When the predetermined gesture is the correction start instruction, extending or retracting the gesturing hand in a predetermined direction immediately after the gesture may further be defined as the instruction of the correction amount of the parameter. This allows the user to specify the correction amount with a high degree of operational continuity.
 - Change in line of sight
 Alternatively, the correction start instruction and/or the correction end instruction may be an instruction acquired on the basis of the user's line-of-sight information. For example, when it is recognized that the user has started closing one eye and that the eye has remained closed for a predetermined time or longer, the correction unit 104 may acquire the recognition result as the correction start instruction or the correction end instruction. The user's line-of-sight information may be, for example, information acquired on the basis of a captured image of the user's eyeball, or information acquired on the basis of a detection result of the attitude (orientation and the like) of the eyewear 10. In the latter case, the change in the user's line of sight can be identified indirectly from the detection result.
 - Hand movement with respect to the stereo camera
 Alternatively, when the depth sensor included in the sensor unit 122 is a stereo camera, the correction start instruction and/or the correction end instruction may be holding a hand in front of at least one of the cameras of the stereo camera (that is, covering that camera with the hand). It is usually easy to recognize whether a hand is held in front of the camera, so this method can prevent misrecognition of the start and end instructions with high accuracy.
 - Combination
 Alternatively, the correction start instruction and/or the correction end instruction may be a combination of any two or more of the plurality of types of operations described above. For example, these instructions may be closing the hand into a fist and then opening it flat while pressing and holding a predetermined physical button (included in the input unit 126). Alternatively, these instructions may be closing the hand into a fist and then opening it flat while uttering a predetermined voice command, putting the palms of both hands together while uttering a predetermined voice command, or closing one eye while uttering a predetermined voice command. Alternatively, these instructions may be extending at least one hand forward while the user closes one eye, uttering a predetermined voice command while holding a hand in front of one of the cameras of the stereo camera, or extending one hand forward while holding the other hand in front of one of the cameras of the stereo camera.
 By using a combination of a plurality of types of operations as the correction start instruction and/or the correction end instruction in this way, the eyewear 10 can distinguish these instructions from other (normal) operations with higher accuracy. In particular, it is very easy to recognize that a camera is covered (that is, that a hand is held in front of the camera), so using a combination of holding a hand in front of the camera and another operation as these instructions can prevent misrecognition with even higher accuracy.
 - Specific example
 A specific example of the flow of the correction start instruction and the correction end instruction will now be described with reference to FIGS. 6A to 6D. In the example shown in FIGS. 6A to 6D, the eyewear 10 has a stereo camera (a right camera 122a and a left camera 122b) as the depth sensor. FIG. 6A shows an example in which the user wearing the eyewear 10 performs a normal operation (that is, an operation other than these instructions) with the left hand 2a. After the timing shown in FIG. 6A, as shown in FIG. 6B, the user holds the right hand 2b in front of both the right camera 122a and the right display unit 124a as the correction start instruction. Thereafter, as shown in FIG. 6C, the user moves the right hand 2b away from the front of the right camera 122a while keeping it held only in front of the right display unit 124a. Thereafter, as shown in FIG. 6D, the user again holds the right hand 2b in front of both the right camera 122a and the right display unit 124a as the correction end instruction.
 In the example shown in FIGS. 6A to 6D, the correction unit 104 first calculates the difference between the detection result of the position of the left hand 2a at the timing shown in FIG. 6B and the detection result of the position of the left hand 2a at the timing shown in FIG. 6D. The correction unit 104 may then determine the correction amount of the parameter related to the display position of the one or more virtual objects according to this difference.
 {2-1-6. Display control unit 106}
 The display control unit 106 controls display on the display unit 124. For example, the display control unit 106 causes the display unit 124 to display one or more correction objects for correcting a parameter related to the display position of one or more virtual objects based on the depth sensing result acquired by the sensing result acquisition unit 102. As one example, as many correction objects may be displayed as there are mutually different correction directions for the display position of the one or more virtual objects. Each of the one or more correction objects may then be an object with which the user specifies the correction amount of the display position of the one or more virtual objects in the correction direction corresponding to that correction object.
 Furthermore, the display control unit 106 can cause the display unit 124 to display a depth object based on the depth sensing result acquired by the sensing result acquisition unit 102 in association with the one or more correction objects. For example, the display control unit 106 causes the display unit 124 to display the depth object and the one or more correction objects simultaneously.
 (2-1-6-1. Specific example)
 The above functions will now be described in more detail with reference to FIGS. 7A and 7B. FIG. 7A shows an example in which a depth object based on the depth sensing result of the user's right hand and four correction objects are displayed simultaneously on the display unit 124. As shown in FIG. 7A, for example, the display control unit 106 causes the display screen 30 to display a depth object 52 indicating the contour of the right hand 2b specified from the depth sensing result of the right hand 2b, and four correction objects 54 with which the user specifies the correction amount of the display position of the depth object 52. More specifically, the display control unit 106 displays the depth object 52 at the display position in the display screen 30 corresponding to the depth sensing result of the contour portion of the right hand 2b. At the same time, for each of the four directions (up, down, left, and right) in the display screen 30, the display control unit 106 displays, near the depth object 52, one correction object 54 with which the user specifies the correction amount of the display position of the depth object 52 in that direction.
 When a predetermined operation (for example, a tap with the left hand 2a) is performed on any of the four correction objects 54, the correction unit 104 can correct the parameter related to the display position of the virtual object according to the correction object 54 that was operated and the content of the operation. For example, when it is first detected that a predetermined physical button (included in the input unit 126) has been pressed and held for a predetermined time or longer, the correction unit 104 acquires the detection result as the correction start instruction and switches the current mode from the normal mode to the correction mode. Subsequently, as shown in FIG. 7A, the display control unit 106 causes the display screen 30 to display the depth object 52 and the four correction objects 54 simultaneously. The user then taps one or more of the four correction objects 54 as many times as necessary so that the display position of the depth object 52 matches the position of the contour of the right hand 2b. For example, each time one of the four correction objects 54 is tapped, a predetermined value is added to the offset amount of the display position of the depth object 52 in the correction direction corresponding to that correction object 54.
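 A minimal sketch of this tap handling is shown below; the fixed step size, the per-axis offset representation, and the function name are assumptions, not part of the original disclosure.

```python
OFFSET_STEP_PX = 2  # assumed predetermined value added per tap

# One entry per correction object: the screen direction of the corresponding correction.
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def on_correction_object_tapped(direction, offsets):
    """Add a fixed amount to the depth object's display-position offset
    in the direction of the tapped correction object."""
    dx, dy = DIRECTIONS[direction]
    offsets["x"] += dx * OFFSET_STEP_PX
    offsets["y"] += dy * OFFSET_STEP_PX
    return offsets
```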
 By repeating the user operation described above, the display position of the depth object 52 and the position of the contour of the right hand 2b can be brought into approximate agreement, for example as shown in FIG. 7B. The user then presses and holds the predetermined physical button. When it is detected that the predetermined physical button has been pressed and held for the predetermined time or longer, the correction unit 104 acquires the detection result as the correction end instruction and can switch the current mode from the correction mode back to the normal mode. That is, the correction mode ends.
 Note that while the display screen 30 shown in FIGS. 7A and 7B is displayed, the depth sensor included in the sensor unit 122 can perform depth sensing in real time. Each time a new depth sensing result is obtained, the display control unit 106 may successively update the display position of the depth object 52 on the basis of the newly obtained depth sensing result and the latest offset amount of the display position of the depth object 52 in each direction. This allows the user, during the correction mode, to check in detail whether the offset amount of the display position of the depth object 52 in each direction is appropriate by freely moving the right hand 2b.
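 The per-frame update during the correction mode might look roughly like the following; the sensor, recognizer, and display interfaces are assumptions introduced purely for illustration, and the offsets dictionary is the one accumulated by the tap handling sketched above.

```python
def update_correction_preview(depth_sensor, recognizer, display, offsets):
    """Redraw the depth object from the latest depth sensing result each frame,
    applying the current display-position offsets before drawing."""
    depth_map = depth_sensor.capture()                    # real-time depth sensing
    contour = recognizer.extract_hand_contour(depth_map)  # screen-space contour points
    shifted = [(x + offsets["x"], y + offsets["y"]) for (x, y) in contour]
    display.draw_depth_object(shifted)                    # depth object 52
    display.draw_correction_objects()                     # the four correction objects 54
```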
 - Effect 1
 Recognizing whether a correction object 54 has been tapped requires no other recognition processing such as speech recognition. Therefore, by using taps on the correction objects 54 as the method of correcting the display position of the depth object 52 as described above, the recognition processing load on the eyewear 10 for this correction method can be reduced. Furthermore, compared with, for example, pressing a predetermined physical button included in the input unit 126, a gesture operation (such as a tap) on the four correction objects 54 arranged above, below, left, and right is more intuitive for the user. A gesture operation also has the advantage that the desired operation can be performed regardless of, for example, where the predetermined physical button is located.
 - Effect 2
 Also, as described above, using the pressing of the predetermined physical button as the trigger for the mode transition has the advantage that the mode transitions as the user intends. Pressing the predetermined physical button normally leaves little room for erroneous operation by the user. Therefore, according to this transition method, the mode transition can be performed as the user intends.
 (2-1-6-2. Modified example 1)
 As a modified example, in the examples shown in FIGS. 7A and 7B, the display position of the depth object 52 may be changeable on the basis of a recognition result of the user's utterance instead of a tap on a correction object 54. For example, when it is recognized that the user has uttered "right" once, the display control unit 106 may control the display so that the depth object 52 moves to the right in the display screen 30 at a first speed. In this case, when it is recognized that the user has uttered "right" once more, the display control unit 106 may change the moving speed of the depth object 52 to a second speed faster than the first speed. In addition, when it is recognized that the user has uttered "stop", the display control unit 106 may stop the movement of the depth object 52.
 (2-1-6-3. Modified example 2)
 As another modified example, the one or more correction objects may be objects for correcting the parameter value in a binary-search-like manner. For example, the one or more correction objects may consist of the result of shifting the display position of the depth object on the display unit 124 toward one end by a predetermined distance or more in a predetermined direction (for example, the horizontal direction of the display unit 124) (hereinafter also referred to as a first object), and the result of shifting the display position of the depth object toward the opposite end in the predetermined direction by the predetermined distance or more (hereinafter also referred to as a second object). In this case, for example, the display control unit 106 first causes the display unit 124 to display the first object and the second object simultaneously. Subsequently, the display control unit 106 has the user select which of the first object and the second object has the smaller deviation from the target position (that is, the smaller deviation in the position of the region of the relevant virtual object occluded by the relevant real object, such as a hand). The display control unit 106 then repeats this process a plurality of times, each time changing, for example, the distance between the first object and the second object and the direction in which the first object and the second object are shifted from their initial display positions.
 By using such a binary-search-like parameter correction method, the value of the parameter related to the display position of the one or more virtual objects can be corrected more efficiently. For example, the number of user operations required until the parameter is corrected to an appropriate value can be expected to decrease, which can reduce the load on the user.
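 A rough sketch of such a binary-search-like refinement is given below; the single horizontal offset, the initial search range, the fixed number of iterations, and the user-selection callback are all assumptions made for illustration.

```python
def binary_search_offset(render_candidates, ask_user, lo=-40.0, hi=40.0, iterations=5):
    """Narrow down a horizontal display-position offset by repeatedly showing two
    candidates shifted toward opposite ends and keeping the half the user prefers."""
    for _ in range(iterations):
        render_candidates(lo, hi)        # show the first and second shifted depth objects
        choice = ask_user()              # "first" or "second": the less misaligned one
        mid = (lo + hi) / 2.0
        if choice == "first":
            hi = mid                     # keep the half around the first candidate
        else:
            lo = mid                     # keep the half around the second candidate
    return (lo + hi) / 2.0               # final offset estimate
```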
 {2-1-7. Communication unit 120}
 The communication unit 120 can be configured to include a communication device 166 described later. The communication unit 120 transmits and receives information to and from other devices by, for example, wireless communication and/or wired communication. For example, the communication unit 120 can receive various contents from the server 20.
 {2-1-8. Storage unit 128}
 The storage unit 128 can be configured to include a storage device 164 described later. The storage unit 128 stores various data, such as one or more virtual objects and one or more contents, and various software.
 <2-2. Flow of processing>
 The configuration of the first embodiment has been described above. Next, an example of the flow of processing according to the first embodiment will be described with reference to FIG. 8. FIG. 8 is a flowchart showing the flow of processing according to the first embodiment. Here, an example will be described in which the normal mode (that is, a mode other than the correction mode) is set as the initial mode.
 As shown in FIG. 8, first, the sensing result acquisition unit 102 of the eyewear 10 acquires the result of depth sensing by the depth sensor (included in the sensor unit 122) and performs predetermined recognition processing on the depth sensing result. The display control unit 106 then causes the display unit 124 to display, for example, one or more virtual objects included in the currently running content on the basis of the result of the recognition processing. For example, the display control unit 106 displays the one or more virtual objects on the display unit 124 by applying, to the recognition result, an offset amount related to the correction of the value of the parameter concerning the display of the one or more virtual objects (hereinafter also referred to as a correction offset amount) (S101).
 Thereafter, the correction unit 104 determines whether a correction start instruction (hereinafter also referred to as a calibration start trigger) by the user wearing the eyewear 10 has been detected. While the calibration start trigger is not detected (S103: No), the eyewear 10 repeats the processing from S101.
 On the other hand, when the calibration start trigger is detected (S103: Yes), the correction unit 104 first switches the current mode from the normal mode to the correction mode. Subsequently, the display control unit 106 stops displaying the one or more virtual objects. The display control unit 106 may further cause the display unit 124 to display an indication that the mode has been switched to the correction mode. Thereafter, the correction unit 104 sets the current correction offset amount stored in the storage unit 128 as a new offset value (that is, initializes the new offset value) (S105).
 Subsequently, the control unit 100 acquires the latest depth sensing result from the depth sensor and performs the predetermined recognition processing described above on the depth sensing result.
 Subsequently, the display control unit 106 generates a depth object corresponding to the depth sensing result by applying the new offset value to the recognition result (that is, the recognition result at the current point in time). The display control unit 106 then causes the display unit 124 to display one or more correction objects together with the depth object (S107).
 Thereafter, the correction unit 104 determines whether a correction end instruction (hereinafter also referred to as a calibration end trigger) by the user has been detected (S109). When the calibration end trigger is detected (S109: Yes), the correction unit 104 changes (updates) the current correction offset amount stored in the storage unit 128 to the new offset value set in S105 or S115 (that is, updates the content stored in the storage unit 128) (S111). Thereafter, the eyewear 10 repeats the processing from S101.
 On the other hand, when the calibration end trigger is not detected (S109: No), the correction unit 104 next determines whether the user's instruction regarding the change of the new offset value has been detected (as the user's instruction information) (S113). When the user's instruction is not detected (S113: No), the eyewear 10 repeats the processing from S107.
 On the other hand, when the user's instruction is detected (S113: Yes), the correction unit 104 changes (updates) the new offset value by the value corresponding to the detection result (S115). Thereafter, the eyewear 10 repeats the processing from S107.
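 The flow from S101 to S115 might be sketched as the following loop; the object, method, and variable names are assumptions introduced for illustration and are not the actual implementation.

```python
def run_display_loop(eyewear):
    correction_offset = eyewear.storage.load_offset()          # current correction offset
    while True:
        sensing = eyewear.depth_sensor.capture()                # S101: sense and recognize
        recognition = eyewear.recognize(sensing)
        eyewear.display.render_virtual_objects(recognition, correction_offset)

        if not eyewear.calibration_start_triggered():           # S103
            continue

        new_offset = correction_offset                          # S105: initialize new offset
        eyewear.display.hide_virtual_objects()
        while True:
            sensing = eyewear.depth_sensor.capture()            # latest sensing result
            recognition = eyewear.recognize(sensing)
            eyewear.display.render_depth_and_correction_objects(recognition, new_offset)  # S107

            if eyewear.calibration_end_triggered():             # S109
                correction_offset = new_offset                  # S111: commit the new offset
                eyewear.storage.save_offset(correction_offset)
                break
            instruction = eyewear.get_offset_instruction()      # S113
            if instruction is not None:
                new_offset += instruction.delta                 # S115: update the new offset
```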
 <2-3. Effect>
 As described above, the eyewear 10 according to the first embodiment acquires the depth sensing result of at least one real object corresponding to the viewpoint position of the user wearing the eyewear 10, and causes the display unit 124 to display at least one correction object for correcting a parameter related to the display of at least one virtual object based on the depth sensing result of the at least one real object. This allows the user to correct the value of the parameter related to the display of the at least one virtual object easily and appropriately.
 <2-4. Application examples>
 The first embodiment is not limited to the examples described above. Application examples of the first embodiment will be described below in "2-4-1. Application example 1" to "2-4-4. Application example 4". Note that all components included in the eyewear 10 according to each application example are the same as in the example shown in FIG. 4. In the following, only components having functions different from those described above will be described, and descriptions of identical content will be omitted.
 {2-4-1. Application example 1}
 First, application example 1 according to the first embodiment will be described. The above description (for example, FIG. 8) covered an example in which the depth sensing result is updated in real time even during the correction mode (that is, an example in which depth sensing continues during the correction mode).
 As described below, in this application example 1, the depth sensing result is not updated during the correction mode. Also, in this application example 1, the values of the parameters related to the display of the one or more virtual objects can be updated according to the difference in the depth sensing result of the user's hand before and after the switch between the normal mode and the correction mode.
 (2-4-1-1. Correction unit 104)
 The correction unit 104 according to application example 1 determines the correction amount of the parameter related to the display position of the one or more virtual objects (that is, the correction offset amount described above) according to the difference between first position information of the user's hand when the user's correction start instruction is acquired and second position information of the user's hand when the correction end instruction is acquired. For example, the first position information of the user's hand is position information corresponding to the result of depth sensing of the user's hand by the depth sensor (included in the sensor unit 122) when the user's correction start instruction is acquired. The second position information of the user's hand is position information corresponding to the result of depth sensing of the user's hand by the depth sensor when the user's correction end instruction is acquired.
 Note that the correction unit 104 may correct the parameter related to the display position of the one or more virtual objects by using the relationship between the position information of a part of the user's hand (for example, the fingertip of a predetermined finger) at these timings, instead of using the relationship between the position information of the user's entire hand at the time the correction start instruction is acquired and at the time the correction end instruction is acquired.
 The above function of the correction unit 104 will now be described in more detail with reference to FIGS. 9A to 9C. In the following, it is assumed that the region of a certain virtual object (not shown) occluded by the user's hand 2 is displayed shifted from the target position, for example as shown in FIG. 3.
 As shown in FIG. 9A, the user first issues a correction start instruction (for example, by uttering a predetermined voice command such as "Start calibration"). The correction unit 104 then acquires, as the first position information of the user's hand, position information 60a corresponding to the result of depth sensing of the user's hand 2 (for example, a fingertip) by the depth sensor at the time the correction start instruction is acquired. The correction unit 104 then switches the current mode from the normal mode to the correction mode. As described above, in application example 1, the depth sensing result is not updated during the correction mode.
 Thereafter, the user moves the hand 2 so that the region of the virtual object (not shown) occluded by the user's hand 2 substantially matches the target position. Then, as shown in FIG. 9B, the user issues a correction end instruction (for example, by uttering a predetermined voice command such as "End calibration"). Thereafter, as shown in FIG. 9B, the correction unit 104 acquires, as the second position information of the user's hand, position information 60b corresponding to the result of depth sensing of the user's hand 2 (for example, a fingertip) by the depth sensor at the time the correction end instruction is acquired. Next, the correction unit 104 calculates the difference between the position information 60a and the position information 60b ("d" shown in FIG. 9B) and determines the correction offset amount described above according to this difference. The correction unit 104 then corrects the value of the parameter related to the display position of the one or more virtual objects by the determined offset amount. As a result, as shown in FIG. 9C, the position of the hand depth object 50 substantially matches the target position, and therefore the region of the virtual object occluded by the user's hand 2 substantially matches the target position. Thereafter, the correction unit 104 switches the current mode from the correction mode to the normal mode.
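 A minimal sketch of this offset computation, assuming the fingertip positions are available as coordinate vectors and that the difference is applied directly as the correction amount (the function and accessor names are hypothetical):

```python
import numpy as np

def offset_from_hand_positions(p_start, p_end):
    """Compute the correction offset d from the fingertip position sensed at the
    correction start instruction (p_start) and at the correction end instruction (p_end)."""
    return np.asarray(p_end, dtype=float) - np.asarray(p_start, dtype=float)

# Hypothetical usage:
# p_start = depth_sensor.fingertip_position()   # at the correction start instruction
# ... the user moves the hand until the occlusion looks correct ...
# p_end = depth_sensor.fingertip_position()     # at the correction end instruction
# correction_offset += offset_from_hand_positions(p_start, p_end)
```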
 (2-4-1-2. Display control unit 106)
 - Display example 1
 Next, the functions of the display control unit 106 according to application example 1 will be described. In general, when the user moves a hand from a state in which the region of the virtual object occluded by the user's hand is shifted, for example as shown in FIG. 3, to the target position (for example, as shown in FIG. 2), the user has few cues to rely on, so the load on the user is high. For example, this may lead to an increase in the number of attempts or in the calibration time.
 Therefore, the display control unit 106 according to application example 1 may, for example, cause the display unit 124 to display only the outer peripheral portion (for example, the contour) of the hand at the time the correction start instruction is acquired, shown glowing. Alternatively, the display control unit 106 may cause the display unit 124 to display an edge-enhanced version of the captured image of the hand at the time the correction start instruction is acquired. This increases the cues the user can align against (for example, the contour of the hand or the creases of the hand), which can reduce the load on the user.
 - Display example 2
 Also, moving the hand from a state in which the region of the virtual object occluded by the user's hand is shifted to the exact target position usually places a heavy load on the user. Therefore, the display control unit 106 may control the display so that the region of the virtual object occluded by the user's hand is blurred, for example at the time the correction start instruction is acquired. This can inform the user that an exact alignment is not necessarily required.
 - Display example 3
 Furthermore, when the parameter related to the display position of the one or more virtual objects is corrected by using the positional relationship of a part of the user's hand (for example, the fingertip of a predetermined finger), as in the example shown in FIGS. 9A to 9C, the display control unit 106 may cause the display unit 124 to display only that part with emphasis, for example by making only that part glow.
 - Modified example
 Note that depending on the positional relationship between the depth sensor and the user's eyes, there may be regions in real space that the user can see but that the depth sensor has difficulty sensing. Therefore, the display control unit 106 may control the display by the display unit 124 so that the space in which the sensing accuracy of the depth sensor is high stands out. For example, the display control unit 106 may display the display area corresponding to that space in a predetermined display color.
 {2-4-2. Application example 2}
 Next, application example 2 according to the first embodiment will be described. As described below, in this application example 2, the parameter related to the display position of one or more virtual objects can be corrected on the basis of a change in the user's position information (for example, the position of the head).
 (2-4-2-1. Correction unit 104)
 The correction unit 104 according to application example 2 determines the correction amount of the parameter related to the display position of the one or more virtual objects (that is, the correction offset amount) according to the difference between first position information of the user when the user's correction start instruction is acquired, after one or more depth objects corresponding to the depth sensing result acquired by the sensing result acquisition unit 102 have been displayed, and second position information of the user when the user's correction end instruction is acquired. For example, the first position information of the user is position information of the eyewear 10, based on the depth sensor (included in the sensor unit 122), at the time the user's correction start instruction is acquired. The second position information of the user is position information of the eyewear 10, based on the depth sensor, at the time the user's correction end instruction is acquired. For example, the second position information of the user may be specified on the basis of the first position information and the amount of change in the position information of the eyewear 10 sensed between the acquisition of the correction start instruction and the acquisition of the correction end instruction.
 - Specific example
 The above function will now be described in more detail with reference to FIGS. 10A and 10B. As shown in FIG. 10A, assume first that the user wearing the eyewear 10 issues a correction start instruction (for example, by uttering a predetermined voice command such as "Start calibration."). The correction unit 104 then switches the current mode from the normal mode to the correction mode. In this case, as shown in FIG. 10A, the display control unit 106 causes the display screen 30 to display a depth object 52 indicating the contour of each real object, corresponding to the result of depth sensing by the depth sensor (included in the sensor unit 122) at the time the correction start instruction is acquired. That is, the display control unit 106 causes the display unit 124 to display the result of depth sensing of the entire environment by the depth sensor with its edges emphasized (as the depth object 52). Note that in application example 2, the depth sensing result is not updated during the correction mode.
 Thereafter, as shown in FIG. 10B, the user moves within the room or moves the head so that the depth object 52 substantially matches the position of each real object in the display screen 30. The user then issues a correction end instruction (for example, by uttering a predetermined voice command such as "End calibration."). Next, the correction unit 104 calculates the difference between the position information of the user at the time the correction start instruction was acquired and the position information of the user at the time the correction end instruction was acquired, and determines the correction offset amount described above according to this difference. The correction unit 104 then corrects the value of the parameter related to the display position of the one or more virtual objects by the determined correction offset amount. Thereafter, the correction unit 104 switches the current mode from the correction mode to the normal mode.
 As described above, according to application example 2, the parameter related to the display position of the one or more virtual objects is corrected on the basis of the detection result of a change in the user's position information or a change in the movement of the user's head. This makes it possible to correct the occlusion misalignment across the user's entire field of view, which is particularly effective in cases where a virtual object is occluded by objects other than the user's hand.
 {2-4-3. Application example 3}
 Next, application example 3 according to the first embodiment will be described. In general, the amount by which the region of a virtual object occluded by a real object is misaligned can differ between the case where the real object is close to the eyewear 10 and the case where it is far from the eyewear 10.
 Therefore, the correction unit 104 according to application example 3 can separately correct the parameter related to the display of one or more virtual objects for a first real object located within a predetermined distance from the eyewear 10 and for a second real object located farther from the eyewear 10 than the predetermined distance. For example, the correction unit 104 separately corrects, on the basis of the user's instruction information described above, a parameter related to the display of one or more virtual objects based on the depth sensing result of the first real object and a parameter related to the display of one or more virtual objects based on the depth sensing result of the second real object. This makes it possible to appropriately correct the amount of misalignment of the region of a virtual object occluded by each real object according to the distance of that real object from the eyewear 10.
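 A minimal sketch of selecting between the two separately corrected parameter sets, assuming a single distance threshold and per-range offsets (the threshold value and names are assumptions):

```python
NEAR_THRESHOLD_M = 1.0  # assumed predetermined distance from the eyewear, in meters

def select_offset(real_object_distance, near_offset, far_offset):
    """Use the offset corrected for nearby real objects when the occluding object
    lies within the predetermined distance, and the far offset otherwise."""
    if real_object_distance <= NEAR_THRESHOLD_M:
        return near_offset   # correction for the first (near) real object
    return far_offset        # correction for the second (far) real object
```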
 {2-4-4. Application example 4}
 In the above description, the example in which the correction unit 104 separately corrects the values of the parameter related to the display position of the one or more virtual objects for the right display unit 124a and the left display unit 124b has mainly been described; that is, an example in which the correction unit 104 corrects the parameter two-dimensionally.
 Next, application example 4 according to the first embodiment will be described. According to this application example 4, the value of the parameter related to the display position of the one or more virtual objects can be corrected three-dimensionally. For example, as shown in FIG. 11, the control unit 100 first generates three-dimensional mesh data 70 of a certain real object (for example, the user's hand) on the basis of the sensing result of that real object by the depth sensor (included in the sensor unit 122). The display control unit 106 then causes the display unit 124 to display the mesh data 70 during the correction mode. Note that the mesh data 70 is an example of the depth object according to the present disclosure.
 当該補正モード中に、メッシュデータ70を移動させるためのユーザの指示情報が得られた場合には、図11に示したように、補正部104は、カメラ座標系においてメッシュデータ70を当該指示情報に基づいて例えば前後左右に移動させる。そして、補正部104は、当該一以上の仮想オブジェクトの表示位置に関するパラメータの値を、当該メッシュデータ70の移動量に応じて補正する。 When the user's instruction information for moving the mesh data 70 is obtained during the correction mode, as shown in FIG. 11, the correction unit 104 performs the mesh information 70 in the camera coordinate system as the instruction information. For example, it moves to the front and rear, right and left based on. Then, the correction unit 104 corrects the value of the parameter related to the display position of the one or more virtual objects according to the movement amount of the mesh data 70.
 例えば、補正部104は、上記の数式(1)における行列M内の一以上のパラメータの値を当該ユーザの指示情報に基づいて変更することにより、当該一以上の仮想オブジェクトの表示位置に関するパラメータの値を変更する。 For example, the correction unit 104 changes the value of one or more parameters in the matrix M in the above equation (1) based on the instruction information of the user to obtain the parameters related to the display position of the one or more virtual objects. Change the value
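 A minimal sketch of how such a three-dimensional correction might be folded into the display transform is shown below. Since Equation (1) is not reproduced in this section, the matrix M is treated here as a generic 4x4 column-vector transform, and the translation-only update derived from the movement of mesh data 70 is an assumption made for illustration.

```python
# Minimal sketch: fold the user's movement of mesh data 70 into a 4x4
# display transform M (placeholder for the matrix M of Equation (1)).
import numpy as np


def translation(dx: float, dy: float, dz: float) -> np.ndarray:
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t


def correct_display_matrix(m: np.ndarray, mesh_delta_xyz) -> np.ndarray:
    """Apply the same offset the user gave the mesh to the display transform."""
    return translation(*mesh_delta_xyz) @ m


if __name__ == "__main__":
    m = np.eye(4)                      # placeholder display transform
    mesh_delta = (0.01, 0.0, -0.02)    # metres the user shifted mesh data 70
    print(correct_display_matrix(m, mesh_delta))
```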
<<3. Second Embodiment>>
 The first embodiment has been described above. As described above, the eyewear 10 according to the first embodiment can correct the values of the parameters related to the display position of one or more virtual objects.
 <3-1. Background>
 Next, the second embodiment will be described, starting with the background that led to its creation. When a stereo camera is used as the depth sensor, the eyewear 10 can measure the depth value of each of one or more real objects using the result of depth sensing by the left camera of the stereo camera (hereinafter also referred to as the left image), the result of depth sensing by the right camera (hereinafter also referred to as the right image), and a predetermined algorithm. The predetermined algorithm includes estimating which pixel in the right image corresponds to a given pixel in the left image. Several hundred kinds of depth parameters may be used in this algorithm, and these depth parameters define, for example, its search ranges and thresholds. Note that the plurality of kinds of depth parameters is an example of the "parameters related to the display of one or more virtual objects" according to the present disclosure.
 More specifically, the plurality of kinds of depth parameters may include a plurality of kinds of parameters related to the calculation of a matching score between the result of depth sensing by the left depth camera and the result of depth sensing by the right depth camera. For example, the matching score may be calculated using a plurality of kinds of calculation methods, and the parameters related to its calculation may include a first parameter related to the ratio at which each of those calculation methods is applied and a second parameter related to a threshold on the matching score. For example, the first parameter is a matching-score calculation ratio parameter and the second parameter is a disparity penalty parameter. The matching-score calculation ratio parameter defines the blend ratio between a score based on absolute differences and a score based on the Hamming distance when the matching score is calculated, and the disparity penalty parameter is the threshold on the matching score.
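 The following sketch illustrates this kind of blended matching score for a single pair of candidate patches: an absolute-difference score and a census/Hamming score are mixed by a ratio parameter, and a threshold plays the role of the disparity penalty. The 3x3 census window, the parameter names, and the direction of the blend are assumptions for illustration; this is not the algorithm actually used by the eyewear 10.

```python
# Minimal sketch: blend an absolute-difference score with a census/Hamming
# score, then reject weak matches with a threshold (the "disparity penalty").
import numpy as np


def census(patch: np.ndarray) -> int:
    """3x3 census code: 1 bit per neighbour brighter than the centre pixel."""
    centre = patch[1, 1]
    bits = (patch.flatten() > centre).astype(int)
    bits = np.delete(bits, 4)  # drop the centre itself
    return int("".join(map(str, bits)), 2)


def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def matching_cost(left_patch: np.ndarray, right_patch: np.ndarray, ratio: float = 0.5) -> float:
    sad = float(np.abs(left_patch.astype(float) - right_patch.astype(float)).mean())
    ham = float(hamming(census(left_patch), census(right_patch)))
    # ratio = 1.0 -> pure absolute differences, ratio = 0.0 -> pure census/Hamming
    # (the direction of the blend is an assumption).
    return ratio * sad + (1.0 - ratio) * ham


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (3, 3))
    right = np.clip(left + rng.integers(-5, 5, (3, 3)), 0, 255)
    cost = matching_cost(left, right, ratio=0.7)
    DISPARITY_PENALTY = 20.0  # costs above this threshold are treated as "no match"
    print(cost, "match" if cost < DISPARITY_PENALTY else "rejected")
```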
 In general, the value of each of these depth parameters can affect, for example, how accurate the sensing result is and under which conditions the algorithm performs well or poorly. For example, in a scene where a virtual object is occluded by the user's hand, which depth parameter values allow the three-dimensional shape of the hand to be recognized accurately can depend on various environmental conditions (for example, the real objects located behind the hand, the brightness of the environment, and the color of the hand). FIG. 12 is a table (Table 80) showing an example classification of the kinds of cases in which the accuracy of the depth sensing result is low; as it shows, there can be several such kinds of cases.

 Furthermore, even when the depth sensing result is identical, how its quality is perceived can differ from user to user. In the example shown in FIG. 12, one user may find "protrusion" more unpleasant than "flicker" (for example, high-frequency temporal noise that is noticeable around the outline of an object), while another user may find "holes" more unpleasant than "protrusion". It is therefore desirable that the value of each of the plurality of kinds of depth parameters can be set appropriately and easily according to the perception (preference) of the user of the eyewear 10.

 As described below, the second embodiment allows the user of the eyewear 10 to easily set a desired value for each of the plurality of kinds of depth parameters.
 <3-2. Configuration>
 Next, the configuration of the eyewear 10 according to the second embodiment will be described. FIG. 13 is a block diagram showing an example of the functional configuration of the eyewear 10 according to the second embodiment. As shown in FIG. 13, compared with the first embodiment shown in FIG. 4, the eyewear 10 according to the second embodiment further includes a parameter set DB 130 in the storage unit 128. Only the points that differ from the first embodiment are described below; descriptions of identical content are omitted.
 {3-2-1. Parameter Set DB 130}
 The parameter set DB 130 can store in advance a plurality of mutually different combinations of values of the plurality of kinds of depth parameters (hereinafter also referred to as "a plurality of kinds of parameter sets"). The depth parameters include, for example, the matching-score calculation ratio parameter and the disparity penalty parameter, and the balance between these two parameters can differ from parameter set to parameter set.
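 As a rough illustration, a parameter set DB of this kind could be represented as a small table of named combinations of the two depth parameters discussed above. The concrete values and set names below are assumptions chosen only to mirror the tendencies described for parameter sets 82a and 82b; they are not values from the disclosure.

```python
# Minimal sketch of a parameter-set DB holding mutually different combinations
# of the two depth parameters named in the text (values are illustrative).
from dataclasses import dataclass


@dataclass(frozen=True)
class DepthParameterSet:
    name: str
    matching_score_ratio: float   # blend ratio between the two matching-score methods
    disparity_penalty: float      # threshold on the matching score


PARAMETER_SET_DB = [
    # Small ratio / large penalty: for users who find "holes" worse than
    # "protrusion" (cf. parameter set 82a in FIG. 14).
    DepthParameterSet("avoid_holes_accept_protrusion", matching_score_ratio=0.2, disparity_penalty=40.0),
    # Large ratio / small penalty: for users who find "holes" worse than
    # "flicker" (cf. parameter set 82b in FIG. 14).
    DepthParameterSet("avoid_holes_accept_flicker", matching_score_ratio=0.8, disparity_penalty=10.0),
    DepthParameterSet("balanced", matching_score_ratio=0.5, disparity_penalty=25.0),
]

if __name__ == "__main__":
    for ps in PARAMETER_SET_DB:
        print(ps)
```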
 FIG. 14 is a diagram showing the relationship between the values of the matching-score calculation ratio parameter and the disparity penalty parameter and the kinds of low-accuracy depth sensing cases shown in FIG. 12. As shown in FIG. 14, changing the value of the matching-score calculation ratio parameter can change how often "flicker" and "holes" (shown in FIG. 12) occur, and changing the value of the disparity penalty parameter can change how often "protrusion" and "holes" occur.

 FIG. 14 also shows two example parameter sets 82. As shown in FIG. 14, parameter set 82a has a small matching-score calculation ratio parameter and a large disparity penalty parameter, so it is preferable for a user who finds "holes" more unpleasant than "protrusion". Parameter set 82b has a large matching-score calculation ratio parameter and a small disparity penalty parameter, so it is preferable for a user who finds "holes" more unpleasant than "flicker".
 {3-2-2. Display Control Unit 106}
 For each of the plurality of kinds of parameter sets stored in the parameter set DB 130, the display control unit 106 according to the second embodiment causes the display unit 124 to display one of one or more correction objects as an object indicating the result of applying that parameter set to the left and right images sensed by the stereo camera (included in the sensor unit 122).

 For example, each of the plurality of kinds of parameter sets may be uniquely associated with one of the one or more correction objects. In this case, the display control unit 106 first generates, for each correction object, a moving image in which the result of depth sensing of the hand of the user wearing the eyewear 10 obtained with the parameter set associated with that correction object is combined with a captured image of the user's hand. The display control unit 106 then causes the display unit 124 to display each generated moving image.
 FIG. 15A is a diagram showing a display example of the moving images (moving images 90) corresponding to the individual parameter sets, and FIG. 15B is an enlarged view of the moving image 90c shown in FIG. 15A. As shown in FIG. 15A, the display control unit 106 causes a predetermined number of moving images 90, for example six, to be displayed simultaneously on the display screen 30. As described above, the parameter sets associated with the individual moving images 90 differ from one another.

 As shown in FIG. 15B, each moving image 90 is a composite of a captured image 900 of the hand and footage of a virtual object being occluded by the hand. The occlusion footage contains noise 902 corresponding to the depth sensing result obtained when the parameter set associated with that moving image 90 is used.

 For example, the display control unit 106 causes the generated moving images 90 (serving as the correction objects) to be displayed on the display screen 30 a predetermined number at a time. As one example, the display control unit 106 first extracts a predetermined number of moving images 90 from all of the moving images 90 and displays them simultaneously on the display screen 30. Each time the user selects one of the displayed moving images 90, the display control unit 106 displays a predetermined number of not-yet-displayed moving images 90 simultaneously on the display screen 30, and it can repeat this process until no undisplayed moving images 90 remain.

 Alternatively, the display control unit 106 may cause the display unit 124 to display mutually different groups of a predetermined number of moving images 90, switching between the groups at predetermined time intervals.

 Note that there may be as many correction objects as there are parameter sets stored in the parameter set DB 130, or only as many as some subset of those parameter sets.
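 The paging behaviour described in this subsection can be sketched as follows: the candidate preview movies are shown a fixed number at a time, and one user pick is collected per group before moving on to movies that have not yet been displayed. The `ask_user_to_pick` callback stands in for the actual in-headset selection UI and is an assumption.

```python
# Minimal sketch of paging the preview movies in fixed-size groups and
# collecting one user selection per group.
from typing import Callable, List, Sequence, TypeVar

T = TypeVar("T")


def page_and_collect(previews: Sequence[T],
                     group_size: int,
                     ask_user_to_pick: Callable[[Sequence[T]], T]) -> List[T]:
    """Display previews group by group and collect one pick per group."""
    picks: List[T] = []
    for start in range(0, len(previews), group_size):
        group = previews[start:start + group_size]
        picks.append(ask_user_to_pick(group))
    return picks


if __name__ == "__main__":
    movies = [f"movie_for_parameter_set_{i}" for i in range(12)]
    # Stand-in UI: always "select" the first movie of each group.
    chosen = page_and_collect(movies, group_size=6, ask_user_to_pick=lambda g: g[0])
    print(chosen)
```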
 {3-2-3. Correction Unit 104}
 The correction unit 104 according to the second embodiment corrects the parameters related to the display of one or more virtual objects based on the parameter set associated with each of at least one correction object selected by the user from among the one or more correction objects displayed on the display unit 124.

 For example, when one or more correction objects are displayed on the display unit 124 and at least one of them has been selected, the correction unit 104 first acquires, as the user's instruction information, information indicating which correction objects the user selected. The correction unit 104 then corrects the parameters related to the display of the one or more virtual objects based on the parameter set associated with each of the selected correction objects indicated by that instruction information. For example, the correction unit 104 corrects the values of those display parameters to values corresponding to the values of the individual depth parameters included in the parameter sets associated with the selected correction objects.
 <3-3. Flow of Processing>
 The configuration of the second embodiment has been described above. Next, an example of the flow of processing according to the second embodiment will be described with reference to FIGS. 16 and 17, each of which shows part of that flow as a flowchart.
 As shown in FIG. 16, the control unit 100 of the eyewear 10 first performs the "correction process regarding the display position of the virtual object" (S201). This process can be substantially the same as the flow of processing according to the first embodiment (that is, S101 to S115), so the values of the parameters related to the display position of the virtual object are appropriately corrected.

 Thereafter, the display control unit 106 causes the display unit 124 to display a character string instructing the user to move their hand, for example "The recognition accuracy will now be adjusted. Please move your hand in front of your eyes." together with "Start" (S203).

 Subsequently, the stereo camera (included in the sensor unit 122) starts depth sensing of the user's hand under the control of the control unit 100 (S205).

 The user then moves their hand according to the instruction displayed in S203 (S207).

 When a predetermined time has elapsed since S205, the display control unit 106 causes the display unit 124 to display a character string (for example, "Stop") instructing the user to stop moving their hand, and the stereo camera ends the depth sensing of the user's hand under the control of the control unit 100 (S209).

 The display control unit 106 then generates, for each of the plurality of kinds of parameter sets stored in the parameter set DB 130, a moving image in which the result of applying that parameter set to the depth sensing results obtained between S205 and S209 (one or more left images and one or more right images) is combined with the captured image of the user's hand (S211).
 The flow of processing after S211 will now be described with reference to FIG. 17. As shown in FIG. 17, after S211 the display control unit 106 extracts a plurality of not-yet-displayed moving images from all of the moving images generated in S211 (S221).

 Subsequently, the display control unit 106 causes the display unit 124 to display the plurality of moving images extracted in S221 simultaneously (S223).

 The user then selects one of the displayed moving images, and the correction unit 104 acquires identification information of the selected moving image as the user's instruction information (S225).

 Subsequently, the correction unit 104 determines whether all of the moving images generated in S211 have been displayed (S227). If undisplayed moving images remain (S227: No), the processing from S221 onward is repeated.

 On the other hand, when all of the moving images have been displayed (S227: Yes), the correction unit 104 corrects the parameters related to the display of the one or more virtual objects using the parameter sets corresponding to all of the pieces of user instruction information acquired each time S225 was performed (S229).
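 A minimal sketch of how the selections gathered in S225 might be turned into the correction of S229 is shown below. The disclosure states that the parameter sets of all selected previews are used but does not specify how several sets are combined, so this example simply averages them; both that choice and the re-declared `DepthParameterSet` class are assumptions for illustration.

```python
# Minimal sketch of S229 under a stated assumption: combine all selected
# parameter sets by averaging and use the result as the depth parameters
# applied when rendering virtual objects.
from dataclasses import dataclass
from typing import Sequence


@dataclass(frozen=True)
class DepthParameterSet:
    matching_score_ratio: float
    disparity_penalty: float


def combine_selected_sets(selected: Sequence[DepthParameterSet]) -> DepthParameterSet:
    n = len(selected)
    return DepthParameterSet(
        matching_score_ratio=sum(s.matching_score_ratio for s in selected) / n,
        disparity_penalty=sum(s.disparity_penalty for s in selected) / n,
    )


if __name__ == "__main__":
    picks = [DepthParameterSet(0.2, 40.0), DepthParameterSet(0.8, 10.0)]
    print(combine_selected_sets(picks))
```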
 (Modification)
 The flow of processing according to the second embodiment is not limited to the example described above. For example, the "correction process regarding the display position of the virtual object" described above may be performed last (that is, after S229) instead of first.
 <3-4. Effects>
 As described above, the eyewear 10 according to the second embodiment corrects the parameters related to the display of one or more virtual objects based on the parameter set associated with each of at least one correction object selected, from among the one or more correction objects displayed on the display unit 124, by the user wearing the eyewear 10. For example, the eyewear 10 sets the value of each depth parameter included in the parameter sets associated with the selected correction objects as the value of the corresponding depth parameter used when displaying the one or more virtual objects. The user can therefore easily set a desired value for each of the plurality of kinds of depth parameters.
 <3-5. Modifications>
 The second embodiment is not limited to the example described above. In general, the algorithms that build an occlusion mesh, such as meshing and prediction, can also involve a plurality of kinds of parameters (occlusion filter parameters). The eyewear 10 may therefore perform substantially the same processing as the depth-parameter correction described above in order to correct one or more occlusion filter parameters. The one or more occlusion filter parameters are another example of the "parameters related to the display of one or more virtual objects" according to the present disclosure.
<<4. Hardware Configuration>>
 Next, an example hardware configuration of the eyewear 10 common to the embodiments will be described with reference to FIG. 18. As shown in FIG. 18, the eyewear 10 includes a CPU 150, a ROM (Read Only Memory) 152, a RAM (Random Access Memory) 154, a bus 156, an interface 158, an input device 160, an output device 162, a storage device 164, and a communication device 166.

 The CPU 150 functions as an arithmetic processing device and a control device, and controls the overall operation of the eyewear 10 according to various programs. The CPU 150 also implements the function of the control unit 100 in the eyewear 10. The CPU 150 is configured as a processor such as a microprocessor.

 The ROM 152 stores programs used by the CPU 150 and control data such as calculation parameters.

 The RAM 154 temporarily stores, for example, programs executed by the CPU 150 and data in use.

 The bus 156 is configured as a CPU bus or the like, and connects the CPU 150, the ROM 152, and the RAM 154 to one another.

 The interface 158 connects the input device 160, the output device 162, the storage device 164, and the communication device 166 to the bus 156.

 The input device 160 includes input means by which the user enters information, such as a touch panel, buttons, switches, levers, and a microphone, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 150. The input device 160 can function as the input unit 126.

 The output device 162 includes a display such as an LCD or OLED, or a display device such as a projector, and may also include an audio output device such as a speaker. The output device 162 can function as the display unit 124.

 The storage device 164 is a device for storing data, and includes, for example, a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, and a deletion device that deletes data recorded on the storage medium. The storage device 164 can function as the storage unit 128.

 The communication device 166 is a communication interface configured with, for example, a communication device (such as a network card) for connecting to the communication network 22. The communication device 166 may be a wireless-LAN-compatible communication device, an LTE (Long Term Evolution)-compatible communication device, or a wired communication device that communicates over a wire. The communication device 166 can function as the communication unit 120.
<<5. Modifications>>
 The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to these examples. It is clear that a person having ordinary knowledge in the technical field to which the present disclosure belongs can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
 <5-1. Modification 1>
 In each of the embodiments described above, an example in which the information processing device according to the present disclosure is the eyewear 10 has been described, but the present disclosure is not limited to this example. If another kind of device has the functions of the sensing result acquisition unit 102, the display control unit 106, the correction unit 104, and so on according to each embodiment, the information processing device may be that other kind of device. As one example, it may be the server 20, or it may be a general-purpose PC (Personal Computer), a tablet terminal, a game console, a mobile phone such as a smartphone, a portable music player, a speaker, a projector, a wearable device such as a smartwatch or earphones, an in-vehicle device (such as a car navigation device), or a robot (such as a humanoid robot or an autonomous vehicle).
 <5-2. Modification 2>
 The steps in the flow of processing of each embodiment described above do not necessarily have to be processed in the order described. For example, the steps may be processed in an appropriately changed order, and they may be processed partially in parallel or individually instead of in time series. Some of the described steps may be omitted, and other steps may be added.

 Further, according to each embodiment described above, it is also possible to provide a computer program for causing hardware such as the CPU 150, the ROM 152, and the RAM 154 to exhibit functions equivalent to those of the components of the eyewear 10 according to that embodiment. A storage medium on which the computer program is recorded can also be provided.

 The effects described in this specification are merely explanatory or illustrative and are not limiting. That is, the technology according to the present disclosure can exhibit, in addition to or instead of the effects described above, other effects that are apparent to those skilled in the art from the description of this specification.
 なお、以下のような構成も本開示の技術的範囲に属する。
(1)
 ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得する取得部と、
 前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部に表示させる表示制御部と、
を備える、情報処理装置。
(2)
 前記補正用オブジェクトに対する前記ユーザの指示情報に基づいて、前記仮想オブジェクトの表示に関するパラメータを補正する補正部をさらに備える、前記(1)に記載の情報処理装置。
(3)
 前記ユーザの指示情報は、前記デプスセンシングの結果が取得されており、かつ、前記補正用オブジェクトが表示されている間に取得される情報である、前記(2)に記載の情報処理装置。
(4)
 前記実オブジェクトは、前記ユーザの視界に対応する空間内に位置する、前記(3)に記載の情報処理装置。
(5)
 前記表示制御部は、前記実オブジェクトのデプスセンシングの結果に基づいて特定される、前記実オブジェクトの形状を示すデプスオブジェクトを、前記補正用オブジェクトと一緒に前記表示部に表示させる、前記(4)に記載の情報処理装置。
(6)
 前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
 前記実オブジェクトは、少なくとも前記ユーザの手を含み、
 前記ユーザの指示情報は、前記補正用オブジェクトに対する前記ユーザの手の動きの認識結果を含む、前記(5)に記載の情報処理装置。
(7)
 前記補正用オブジェクトは、複数の補正用オブジェクトを含み、
 前記複数の補正用オブジェクトは、前記仮想オブジェクトの表示位置に関する互いに異なる補正方向にそれぞれ対応し、
 前記複数の補正用オブジェクトは、前記補正方向に関する、前記仮想オブジェクトの表示位置の補正量を指示するためのオブジェクトである、前記(6)に記載の情報処理装置。
(8)
 前記補正用オブジェクトは、前記実オブジェクトのデプスセンシングの結果に基づいて特定される、前記実オブジェクトの形状を示すデプスオブジェクトを含む、前記(4)に記載の情報処理装置。
(9)
 前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
 前記実オブジェクトは、少なくとも前記ユーザの手を含み、
 前記ユーザの指示情報は、前記仮想オブジェクトの表示に関するパラメータの補正の開始を指示するための補正開始指示、および、前記仮想オブジェクトの表示に関するパラメータの補正の終了を指示するための補正終了指示を含み、
 前記補正部は、前記補正開始指示が取得された際の前記ユーザの手の第1の位置情報と、前記補正終了指示が取得された際の前記ユーザの手の第2の位置情報との差分に応じて、前記仮想オブジェクトの表示位置に関するパラメータの補正量を決定する、前記(8)に記載の情報処理装置。
(10)
 前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
 前記ユーザの指示情報は、前記ユーザの視線情報に基づいた情報である、前記(8)または(9)に記載の情報処理装置。
(11)
 前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
 前記ユーザの指示情報は、前記仮想オブジェクトの表示に関するパラメータの補正の開始を指示するための補正開始指示、および、前記仮想オブジェクトの表示に関するパラメータの補正の終了を指示するための補正終了指示を含み、
 前記補正部は、前記補正用オブジェクトの表示後で、かつ、前記補正開始指示が取得された際の前記ユーザの第1の位置情報と、前記補正終了指示が取得された際の前記ユーザの第2の位置情報との差分に応じて、前記仮想オブジェクトの表示位置に関するパラメータの補正量を決定する、前記(8)に記載の情報処理装置。
(12)
 前記補正用オブジェクトは、複数の補正用オブジェクトを含み、
 前記デプスセンシングの結果は、デプスセンシングに関する複数の種類のパラメータに関して互いに異なる組み合わせに基づく複数のデプスセンシングの結果を含み、
 前記複数の補正用オブジェクトの各々は、前記複数のデプスセンシングの結果のうち対応する1つを示し、
 前記ユーザの指示情報は、前記複数の補正用オブジェクトの中から前記ユーザにより選択された少なくとも一つの補正用オブジェクトに関する情報である、前記(3)~(11)のいずれか一項に記載の情報処理装置。
(13)
 前記補正部は、前記選択された少なくとも一つの補正用オブジェクトに関連付けられている前記複数のパラメータの組み合わせに基づいて、前記仮想オブジェクトの表示に関するパラメータを補正する、前記(12)に記載の情報処理装置。
(14)
 前記複数の種類のパラメータは、前記ユーザの視点位置に対応する左側デプスカメラによる前記実オブジェクトの第1のデプスセンシングの結果と、前記左側デプスカメラに対応する右側デプスカメラによる前記実オブジェクトの第2のデプスセンシングの結果とのマッチングスコアの算出に関する複数の種類のパラメータを含む、前記(13)に記載の情報処理装置。
(15)
 前記マッチングスコアは、複数の種類の算出方法を用いて算出され、
 前記マッチングスコアの算出に関する複数の種類のパラメータは、前記マッチングスコアの算出時における前記複数の種類の算出方法の各々の適用比率に関する第1のパラメータと、前記マッチングスコアの閾値に関する第2のパラメータとを含む、前記(14)に記載の情報処理装置。
(16)
 前記実オブジェクトは、少なくとも前記ユーザの手を含み、
 前記補正用オブジェクトは、前記複数の種類のパラメータに関して前記互いに異なる組み合わせが用いられた場合の前記実オブジェクトに関する複数のデプスセンシングの結果と、前記ユーザの手の撮像画像とが合成された複数の動画を含む、前記(13)~(15)のいずれか一項に記載の情報処理装置。
(17)
 前記表示制御部は、前記複数の補正用オブジェクトを前記表示部に同時に表示させる、前記(13)~(16)のいずれか一項に記載の情報処理装置。
(18)
 前記補正用オブジェクトは、第1の補正用オブジェクトと第2の補正用オブジェクトとを少なくとも含み、
 前記補正用オブジェクトが表示される際に、前記表示制御部は、前記表示部による前記第1の補正用オブジェクトの表示と、前記表示部による前記第2の補正用オブジェクトの表示とを所定の時間間隔で切り替える、前記(13)~(17)のいずれか一項に記載の情報処理装置。
(19)
 ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得することと、
 前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部にプロセッサが表示させることと、
を含む、情報処理方法。
(20)
 コンピュータを、
 ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得する取得部と、
 前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部に表示させる表示制御部、
として機能させるためのプログラム。
The following configurations are also within the technical scope of the present disclosure.
(1)
An acquisition unit for acquiring a result of depth sensing of a real object corresponding to the viewpoint position of the user;
A display control unit that causes a display unit corresponding to the user to display a correction object for correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object;
An information processing apparatus comprising:
(2)
The information processing apparatus according to (1), further including: a correction unit that corrects a parameter related to display of the virtual object based on instruction information of the user on the correction object.
(3)
The information processing apparatus according to (2), wherein the user instruction information is information acquired while the depth sensing result is acquired and the correction object is displayed.
(4)
The information processing apparatus according to (3), wherein the real object is located in a space corresponding to a field of view of the user.
(5)
The display control unit causes the display unit to display a depth object indicating a shape of the real object, which is specified based on a result of depth sensing of the real object, together with the correction object. The information processing apparatus according to
(6)
The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
The real object includes at least the user's hand,
The information processing apparatus according to (5), wherein the instruction information of the user includes a recognition result of the movement of the user's hand with respect to the correction object.
(7)
The correction object includes a plurality of correction objects
The plurality of correction objects respectively correspond to different correction directions with respect to the display position of the virtual object,
The information processing apparatus according to (6), wherein the plurality of correction objects are objects for instructing a correction amount of a display position of the virtual object in the correction direction.
(8)
The information processing apparatus according to (4), wherein the correction object includes a depth object indicating a shape of the real object, which is identified based on a result of depth sensing of the real object.
(9)
The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
The real object includes at least the user's hand,
The user's instruction information includes a correction start instruction for instructing start of correction of parameters related to display of the virtual object, and a correction end instruction for instructing end of correction of parameters related to display of the virtual object. ,
The correction unit is configured to calculate a difference between the first position information of the user's hand when the correction start instruction is acquired and the second position information of the user's hand when the correction end instruction is acquired. The information processing apparatus according to (8), wherein the correction amount of the parameter related to the display position of the virtual object is determined according to.
(10)
The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
The information processing apparatus according to (8) or (9), wherein the instruction information of the user is information based on line-of-sight information of the user.
(11)
The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
The user's instruction information includes a correction start instruction for instructing start of correction of parameters related to display of the virtual object, and a correction end instruction for instructing end of correction of parameters related to display of the virtual object. ,
The correction unit is configured to display, after the display of the correction object, the first position information of the user when the correction start instruction is acquired and the number of the user when the correction end instruction is acquired. The information processing apparatus according to (8), wherein the correction amount of the parameter related to the display position of the virtual object is determined according to the difference between the position information and the second position information.
(12)
The correction object includes a plurality of correction objects
The depth sensing result includes a plurality of depth sensing results based on different combinations of a plurality of types of parameters related to depth sensing,
Each of the plurality of correction objects indicates a corresponding one of the plurality of depth sensing results,
The information according to any one of (3) to (11), wherein the user's instruction information is information on at least one correction object selected by the user from the plurality of correction objects. Processing unit.
(13)
The information processing according to (12), wherein the correction unit corrects a parameter related to display of the virtual object based on a combination of the plurality of parameters associated with the selected at least one correction object. apparatus.
(14)
The plurality of types of parameters are a result of first depth sensing of the real object by the left depth camera corresponding to the viewpoint position of the user, and a second of the real object by the right depth camera corresponding to the left depth camera The information processing apparatus according to (13), including a plurality of types of parameters related to calculation of a matching score with the result of depth sensing.
(15)
The matching score is calculated using a plurality of types of calculation methods.
The plurality of types of parameters related to calculation of the matching score are a first parameter related to an application ratio of each of the plurality of types of calculation methods at the time of calculation of the matching score, and a second parameter related to a threshold of the matching score The information processing apparatus according to (14), including
(16)
The real object includes at least the user's hand,
The correction object is a plurality of moving images in which a plurality of depth sensing results on the real object when the different combinations are used with respect to the plurality of types of parameters and a captured image of the user's hand are combined The information processing apparatus according to any one of (13) to (15), including
(17)
The information processing apparatus according to any one of (13) to (16), wherein the display control unit causes the display unit to simultaneously display the plurality of correction objects.
(18)
The correction object includes at least a first correction object and a second correction object,
When the correction object is displayed, the display control unit controls the display unit to display the first correction object and the display unit to display the second correction object for a predetermined time. The information processing apparatus according to any one of (13) to (17), wherein switching is performed at intervals.
(19)
Obtaining a result of depth sensing of a real object corresponding to the viewpoint position of the user;
Displaying a correction object for correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object on a display unit corresponding to the user;
Information processing methods, including:
(20)
Computer,
An acquisition unit for acquiring a result of depth sensing of a real object corresponding to the viewpoint position of the user;
A display control unit that causes a display unit corresponding to the user to display a correction object for correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object;
Program to function as.
10 eyewear
20 server
22 communication network
100 control unit
102 sensing result acquisition unit
104 correction unit
106 display control unit
120 communication unit
122 sensor unit
124 display unit
126 input unit
128 storage unit
130 parameter set DB

Claims (20)

  1.  ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得する取得部と、
     前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部に表示させる表示制御部と、
    を備える、情報処理装置。
    An acquisition unit for acquiring a result of depth sensing of a real object corresponding to the viewpoint position of the user;
    A display control unit that causes a display unit corresponding to the user to display a correction object for correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object;
    An information processing apparatus comprising:
  2.  前記補正用オブジェクトに対する前記ユーザの指示情報に基づいて、前記仮想オブジェクトの表示に関するパラメータを補正する補正部をさらに備える、請求項1に記載の情報処理装置。 The information processing apparatus according to claim 1, further comprising a correction unit configured to correct a parameter related to display of the virtual object based on instruction information of the user on the correction object.
  3.  前記ユーザの指示情報は、前記デプスセンシングの結果が取得されており、かつ、前記補正用オブジェクトが表示されている間に取得される情報である、請求項2に記載の情報処理装置。 The information processing apparatus according to claim 2, wherein the user instruction information is information acquired while the result of the depth sensing is acquired and the correction object is displayed.
  4.  前記実オブジェクトは、前記ユーザの視界に対応する空間内に位置する、請求項3に記載の情報処理装置。 The information processing apparatus according to claim 3, wherein the real object is located in a space corresponding to a field of view of the user.
  5.  前記表示制御部は、前記実オブジェクトのデプスセンシングの結果に基づいて特定される、前記実オブジェクトの形状を示すデプスオブジェクトを、前記補正用オブジェクトと一緒に前記表示部に表示させる、請求項4に記載の情報処理装置。 The display control unit causes the display unit to display a depth object indicating a shape of the real object, which is specified based on a result of depth sensing of the real object, together with the correction object. Information processor as described.
  6.  前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
     前記実オブジェクトは、少なくとも前記ユーザの手を含み、
     前記ユーザの指示情報は、前記補正用オブジェクトに対する前記ユーザの手の動きの認識結果を含む、請求項5に記載の情報処理装置。
    The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
    The real object includes at least the user's hand,
    The information processing apparatus according to claim 5, wherein the instruction information of the user includes a recognition result of the movement of the user's hand with respect to the correction object.
  7.  前記補正用オブジェクトは、複数の補正用オブジェクトを含み、
     前記複数の補正用オブジェクトは、前記仮想オブジェクトの表示位置に関する互いに異なる補正方向にそれぞれ対応し、
     前記複数の補正用オブジェクトは、前記補正方向に関する、前記仮想オブジェクトの表示位置の補正量を指示するためのオブジェクトである、請求項6に記載の情報処理装置。
    The correction object includes a plurality of correction objects
    The plurality of correction objects respectively correspond to different correction directions with respect to the display position of the virtual object,
    The information processing apparatus according to claim 6, wherein the plurality of correction objects are objects for instructing a correction amount of a display position of the virtual object in the correction direction.
  8.  前記補正用オブジェクトは、前記実オブジェクトのデプスセンシングの結果に基づいて特定される、前記実オブジェクトの形状を示すデプスオブジェクトを含む、請求項4に記載の情報処理装置。 The information processing apparatus according to claim 4, wherein the correction object includes a depth object indicating a shape of the real object, which is identified based on a result of depth sensing of the real object.
  9.  前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
     前記実オブジェクトは、少なくとも前記ユーザの手を含み、
     前記ユーザの指示情報は、前記仮想オブジェクトの表示に関するパラメータの補正の開始を指示するための補正開始指示、および、前記仮想オブジェクトの表示に関するパラメータの補正の終了を指示するための補正終了指示を含み、
     前記補正部は、前記補正開始指示が取得された際の前記ユーザの手の第1の位置情報と、前記補正終了指示が取得された際の前記ユーザの手の第2の位置情報との差分に応じて、前記仮想オブジェクトの表示位置に関するパラメータの補正量を決定する、請求項8に記載の情報処理装置。
    The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
    The real object includes at least the user's hand,
    The user's instruction information includes a correction start instruction for instructing start of correction of parameters related to display of the virtual object, and a correction end instruction for instructing end of correction of parameters related to display of the virtual object. ,
    The correction unit is configured to calculate a difference between the first position information of the user's hand when the correction start instruction is acquired and the second position information of the user's hand when the correction end instruction is acquired. The information processing apparatus according to claim 8, wherein the correction amount of the parameter related to the display position of the virtual object is determined in accordance with.
  10.  前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
     前記ユーザの指示情報は、前記ユーザの視線情報に基づいた情報である、請求項8に記載の情報処理装置。
    The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
    The information processing apparatus according to claim 8, wherein the instruction information of the user is information based on line-of-sight information of the user.
  11.  前記仮想オブジェクトの表示に関するパラメータは、前記仮想オブジェクトの表示位置に関するパラメータを含み、
     前記ユーザの指示情報は、前記仮想オブジェクトの表示に関するパラメータの補正の開始を指示するための補正開始指示、および、前記仮想オブジェクトの表示に関するパラメータの補正の終了を指示するための補正終了指示を含み、
     前記補正部は、前記補正用オブジェクトの表示後で、かつ、前記補正開始指示が取得された際の前記ユーザの第1の位置情報と、前記補正終了指示が取得された際の前記ユーザの第2の位置情報との差分に応じて、前記仮想オブジェクトの表示位置に関するパラメータの補正量を決定する、請求項8に記載の情報処理装置。
    The parameter related to display of the virtual object includes a parameter related to the display position of the virtual object,
    The user's instruction information includes a correction start instruction for instructing start of correction of parameters related to display of the virtual object, and a correction end instruction for instructing end of correction of parameters related to display of the virtual object. ,
    The correction unit is configured to display, after the display of the correction object, the first position information of the user when the correction start instruction is acquired and the number of the user when the correction end instruction is acquired. The information processing apparatus according to claim 8, wherein the correction amount of the parameter related to the display position of the virtual object is determined according to a difference between the position information and the second position information.
  12.  前記補正用オブジェクトは、複数の補正用オブジェクトを含み、
     前記デプスセンシングの結果は、デプスセンシングに関する複数の種類のパラメータに関して互いに異なる組み合わせに基づく複数のデプスセンシングの結果を含み、
     前記複数の補正用オブジェクトの各々は、前記複数のデプスセンシングの結果のうち対応する1つを示し、
     前記ユーザの指示情報は、前記複数の補正用オブジェクトの中から前記ユーザにより選択された少なくとも一つの補正用オブジェクトに関する情報である、請求項3に記載の情報処理装置。
    The correction object includes a plurality of correction objects
    The depth sensing result includes a plurality of depth sensing results based on different combinations of a plurality of types of parameters related to depth sensing,
    Each of the plurality of correction objects indicates a corresponding one of the plurality of depth sensing results,
    The information processing apparatus according to claim 3, wherein the instruction information of the user is information on at least one correction object selected by the user from among the plurality of correction objects.
  13.  前記補正部は、前記選択された少なくとも一つの補正用オブジェクトに関連付けられている前記複数のパラメータの組み合わせに基づいて、前記仮想オブジェクトの表示に関するパラメータを補正する、請求項12に記載の情報処理装置。 The information processing apparatus according to claim 12, wherein the correction unit corrects a parameter related to display of the virtual object based on a combination of the plurality of parameters associated with the selected at least one correction object. .
  14.  前記複数の種類のパラメータは、前記ユーザの視点位置に対応する左側デプスカメラによる前記実オブジェクトの第1のデプスセンシングの結果と、前記左側デプスカメラに対応する右側デプスカメラによる前記実オブジェクトの第2のデプスセンシングの結果とのマッチングスコアの算出に関する複数の種類のパラメータを含む、請求項13に記載の情報処理装置。 The plurality of types of parameters are a result of first depth sensing of the real object by the left depth camera corresponding to the viewpoint position of the user, and a second of the real object by the right depth camera corresponding to the left depth camera The information processing apparatus according to claim 13, comprising a plurality of types of parameters related to calculation of a matching score with a result of depth sensing of.
  15.  前記マッチングスコアは、複数の種類の算出方法を用いて算出され、
     前記マッチングスコアの算出に関する複数の種類のパラメータは、前記マッチングスコアの算出時における前記複数の種類の算出方法の各々の適用比率に関する第1のパラメータと、前記マッチングスコアの閾値に関する第2のパラメータとを含む、請求項14に記載の情報処理装置。
    The matching score is calculated using a plurality of types of calculation methods.
    The plurality of types of parameters related to calculation of the matching score are a first parameter related to an application ratio of each of the plurality of types of calculation methods at the time of calculation of the matching score, and a second parameter related to a threshold of the matching score The information processing apparatus according to claim 14, comprising:
  16.  前記実オブジェクトは、少なくとも前記ユーザの手を含み、
     前記補正用オブジェクトは、前記複数の種類のパラメータに関して前記互いに異なる組み合わせが用いられた場合の前記実オブジェクトに関する複数のデプスセンシングの結果と、前記ユーザの手の撮像画像とが合成された複数の動画を含む、請求項13に記載の情報処理装置。
    The real object includes at least the user's hand,
    The correction object is a plurality of moving images in which a plurality of depth sensing results on the real object when the different combinations are used with respect to the plurality of types of parameters and a captured image of the user's hand are combined The information processing apparatus according to claim 13, comprising:
  17.  前記表示制御部は、前記複数の補正用オブジェクトを前記表示部に同時に表示させる、請求項13に記載の情報処理装置。 The information processing apparatus according to claim 13, wherein the display control unit causes the display unit to simultaneously display the plurality of correction objects.
  18.  前記補正用オブジェクトは、第1の補正用オブジェクトと第2の補正用オブジェクトとを少なくとも含み、
     前記補正用オブジェクトが表示される際に、前記表示制御部は、前記表示部による前記第1の補正用オブジェクトの表示と、前記表示部による前記第2の補正用オブジェクトの表示とを所定の時間間隔で切り替える、請求項13に記載の情報処理装置。
    The correction object includes at least a first correction object and a second correction object,
    When the correction object is displayed, the display control unit controls the display unit to display the first correction object and the display unit to display the second correction object for a predetermined time. The information processing apparatus according to claim 13, wherein switching is performed at intervals.
  19.  ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得することと、
     前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部にプロセッサが表示させることと、
    を含む、情報処理方法。
    Obtaining a result of depth sensing of a real object corresponding to the viewpoint position of the user;
    Displaying a correction object for correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object on a display unit corresponding to the user;
    Information processing methods, including:
  20.  コンピュータを、
     ユーザの視点位置に対応する、実オブジェクトのデプスセンシングの結果を取得する取得部と、
     前記実オブジェクトのデプスセンシングの結果に基づく仮想オブジェクトの表示に関するパラメータを補正するための補正用オブジェクトを、前記ユーザに対応する表示部に表示させる表示制御部、
    として機能させるためのプログラム。
    Computer,
    An acquisition unit for acquiring a result of depth sensing of a real object corresponding to the viewpoint position of the user;
    A display control unit that causes a display unit corresponding to the user to display a correction object for correcting a parameter related to display of a virtual object based on a result of depth sensing of the real object;
    Program to function as.
PCT/JP2018/042527 2017-12-26 2018-11-16 Information processing device, information processing method, and program WO2019130900A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-249745 2017-12-26
JP2017249745 2017-12-26



