WO2024154692A1 - Support System, Support Method, and Program
- Publication number: WO2024154692A1 (PCT/JP2024/000785)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- posture
- joint
- information
- acted
- image
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the present invention relates to a support system, a support method, and a program.
- Non-Patent Document 1 discloses a technique related to augmented reality.
- the present invention aims to provide technology that reduces the burden of caregiving.
- One aspect of the present invention is a support system that includes a control unit that executes a target posture estimation process that estimates a target posture, which is a posture to which the acted object will transition, based on current posture information, which is information obtained based on the results of photography by a first photography device that photographs an acted object having multiple joints and indicates the current posture of the acted object at the time of photography, and teacher information, which is information obtained in the past and indicates a change in posture, and a presentation process that superimposes a target computer graphics image, which is an image of the acted object in the target posture, on the acted object and presents it to an acting subject that moves the acted object.
- a target posture estimation process that estimates a target posture, which is a posture to which the acted object will transition, based on current posture information, which is information obtained based on the results of photography by a first photography device that photographs an acted object having multiple joints and indicates the current posture of the acted object at the time of photography, and teacher information, which is information obtained in the past and indicates a change in posture.
- One aspect of the present invention is a support method having a control step of executing a target posture estimation process that estimates a target posture, which is a posture to which the acted object will transition, based on current posture information, which is information obtained based on the results of photographing an acted object having multiple joints using a first photographing device and indicates the current posture, which is the posture of the acted object at the time of photographing, and a presentation process that superimposes a target computer graphics image, which is an image of the acted object in the target posture, on the acted object and presents it to an acting subject that moves the acted object.
- One aspect of the present invention is a program for causing a computer to function as the above-mentioned support system.
- This invention makes it possible to reduce the burden of caregiving.
- FIG. 1 is a first explanatory diagram for explaining a support system according to an embodiment.
- FIG. 2 is a second explanatory diagram for explaining the support system according to the embodiment.
- FIG. 3 is a flowchart showing an example of the flow of processing executed by the support device in the embodiment.
- FIG. 4 shows an example of the transition process of an image of a care recipient and a target computer graphics image superimposed thereon, which are displayed on an AR device of the support system in the embodiment.
- FIG. 1 is a first explanatory diagram for explaining a support system 100 of an embodiment.
- FIG. 2 is a second explanatory diagram for explaining the support system 100 of an embodiment.
- the support system 100 includes a first image capturing device 1, an AR device 2, and a support device 3.
- the AR device 2 may be an MR (Mixed Reality) device.
- the support system 100 supports a caregiver 901 in caring for a care recipient 902.
- the care recipient 902 is a person who receives care, and has multiple joints.
- the care recipient 902 is an example of an acted object having multiple joints.
- the caregiver 901 is an example of an acting subject that moves an acted object.
- the first image capturing device 1 is an image capturing device such as a camera that captures an image of the care recipient 902.
- the camera may be, for example, a depth camera.
- the AR device 2 is an AR device that is worn by the caregiver 901.
- the AR device 2 superimposes a target computer graphics image, which is an image of the care recipient 902 in a target posture, onto the care recipient 902 at the time of photographing by the first photographing device 1 (hereinafter referred to as the "current time"), and displays it on a display device, such as a translucent lens capable of displaying images.
- the target posture is the posture to which the care recipient 902 will transition from their current posture (i.e., the posture they should be in next).
- Image G1 shown in FIG. 2 is an example of an image seen by the caregiver 901 wearing the AR device 2.
- Image G101 in FIG. 2 is an example of a target computer graphics image displayed superimposed on the care recipient 902.
- the posture of image G101 is the transition destination posture from the current posture.
- the display of the transition destination posture by the AR device 2 may be repeated until the transition to the destination posture is made.
- the AR device 2 displays, to the caregiver 901, an image of the transition destination posture to be taken next.
- the caregiver 901 can provide appropriate care to the care recipient 902 by moving the care recipient 902 step by step in line with the images displayed on the AR device 2.
- the target computer graphics image is, for example, a three-dimensional model that imitates the care recipient 902 (i.e., an avatar of the care recipient 902) and is an image of the three-dimensional model in a target posture.
- the three-dimensional model may be a display of an image of a part of the care recipient 902 that is to be transitioned, or an entire image of the care recipient 902.
- the caregiver 901 wearing the AR device 2 can easily visually recognize the difference between the current posture of the care recipient 902 and the target posture.
- the display device 201 in FIG. 1 is an example of a display device provided in the AR device 2, such as a display or a transmissive lens.
- the target posture is, for example, the posture of the care recipient 902 while the care recipient 902 is being cared for by an experienced caregiver.
- Such posture information is acquired, for example, based on the results of filming the state of the care recipient 902 being cared for by the experienced caregiver.
- when filming the state of the care recipient 902 being cared for by the experienced caregiver, a series of movements during the care is filmed.
- a time series of postures of the care recipient 902 while he or she is being cared for by a skilled caregiver is acquired by a specific information processing method, for example by motion capture or by a mathematical model trained by machine learning to obtain posture information from images (hereinafter referred to as a "posture information acquisition model").
- the time series acquired in this way is an example of the teacher information described below.
- the image of the three-dimensional model of the target posture is the posture shown by each sample of the time series thus obtained.
- the support system 100 presents the caregiver 901 with a target posture that corresponds to the current posture of the care recipient 902.
- the posture of the care recipient 902 is not limited to that captured by the first image capturing device 1. If the AR device 2 is an AR device equipped with, for example, a transparent lens, the posture of the care recipient 902 is, for example, the posture of the care recipient 902 while being viewed by the caregiver 901 through the transparent lens. If the AR device 2 is an AR device equipped with, for example, a display, the AR device 2 may also be equipped with an image capturing device, and the captured results of the image capturing device are displayed on the display.
- the AR device 2 superimposes the target computer graphics image on the care recipient 902 and presents it to the caregiver 901. Note that such presentation processing by the AR device 2 is executed under the control of the control unit 31 provided in the support device 3, as described below. Therefore, it can be said that the control unit 31 executes the presentation processing.
- the presentation process is a process in which the target computer graphics image is displayed on a predetermined display destination such as the display device 201 of the AR device 2, and the target computer graphics image is presented to the caregiver 901 by being superimposed on the care recipient 902. As a result of the presentation process, the caregiver 901 sees the target computer graphics image superimposed on the care recipient 902.
- the image processing technique for superimposing may be any known technique.
- the control unit 31 estimates a superimposition transformation, which is a transformation that matches the image of the care recipient 902 captured by the first image capturing device 1 with the image of the care recipient 902 seen by the caregiver 901, based on relative positioning information that indicates the relative position and orientation relationship between the first image capturing device 1 and the AR device 2.
- the relative positioning information may be obtained, for example, using technology such as GPS (Global Positioning System) or other well-known positioning technology.
- GPS Global Positioning System
- if a specific landmark such as a QR code (registered trademark, the same applies below) is printed on the first photographing device 1 and the AR device 2 is equipped with a photographing device, the relative positioning information may be obtained by reading the position of the specific landmark with that photographing device.
- the control unit 31 can thereby convert the image of the care recipient 902 in the target state as seen by the first image capturing device 1 into an image of the care recipient 902 in the target state as seen by the caregiver 901. In this manner, once the image of the care recipient 902 in the target state as seen by the first image capturing device 1 has been estimated, the target computer graphics image can be superimposed on the care recipient 902 and presented to the caregiver 901.
- the image of the care recipient 902 in the target state as seen by the first image capture device 1 is estimated, for example, using the target posture estimation process described below. Note that if the difference between the image of the care recipient 902 as seen by the first image capture device 1 and the image of the care recipient 902 as seen by the caregiver 901 is small, it is not necessarily necessary to perform the superimposition transformation.
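- As a rough illustration of such a superimposition transformation, the following sketch maps joint positions seen by the first image capturing device 1 into the AR device 2's frame, assuming the relative positioning information has already been converted into a 4x4 homogeneous transform; every name and value in it is a hypothetical placeholder, not part of the disclosure.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def to_ar_view(points_cam: np.ndarray, T_ar_from_cam: np.ndarray) -> np.ndarray:
    """Map 3D points expressed in the first image capturing device's frame into the
    AR device's frame, using the relative positioning information as a rigid transform."""
    homogeneous = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_ar_from_cam @ homogeneous.T).T[:, :3]

# Hypothetical relative pose between the two devices (e.g., read from a known landmark).
R = np.eye(3)                    # relative orientation (identity for simplicity)
t = np.array([0.3, 0.0, -0.1])   # relative position in metres
T_ar_from_cam = make_transform(R, t)

# Joint positions of the care recipient as seen by the first image capturing device.
joints_cam = np.array([[0.0, 0.5, 2.0], [0.1, 0.3, 2.0]])
print(to_ar_view(joints_cam, T_ar_from_cam))
```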
- the posture information acquisition model described above is a mathematical model that estimates posture information such as the position and orientation of each joint in the joint coordinate system described below from an image.
- a posture information acquisition model is obtained by training, using pairs of images and posture information as learning data, a mathematical model that estimates a posture for an input image.
- the mathematical model of the learning object is updated so as to reduce the difference between the posture estimated by the mathematical model of the learning object for an input image and the posture indicated by the posture information included in the learning data.
- the mathematical model of the learning object at the point in time when a specified condition for ending the learning is met becomes the posture information acquisition model.
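- A minimal sketch of this kind of training, using a trivial linear model and synthetic image/posture pairs; the data, the model, and the end-of-learning condition are all made-up stand-ins, not the actual posture information acquisition model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learning data: pairs of images and posture information.
images = rng.random((200, 64))      # 200 flattened "images" of 64 pixels each
postures = rng.random((200, 12))    # 200 posture vectors (e.g., 4 joints x XYZ)

# Mathematical model of the learning object: here a single linear layer.
W = np.zeros((64, 12))
learning_rate = 0.1

for epoch in range(100):            # the loop bound stands in for the end-of-learning condition
    predicted = images @ W          # posture estimated for the input images
    error = predicted - postures    # difference from the posture in the learning data
    # Update the model so as to reduce that difference (gradient step on mean squared error).
    W -= learning_rate * images.T @ error / len(images)

# The model at the point the end condition is met is the posture information acquisition model.
print("final mean squared error:", float(np.mean((images @ W - postures) ** 2)))
```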
- the support device 3 has a control unit 31 including a processor 91, such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an NPU (Neural Network Processing Unit), and a memory 92, which are connected by a bus, and executes a program. By executing the program, the support device 3 functions as a device including the control unit 31, an interface unit 32 and a storage unit 33.
- the processor 91 reads out a program stored in the storage unit 33 and stores the read out program in the memory 92.
- the processor 91 executes the program stored in the memory 92, whereby the support device 3 functions as a device including a control unit 31, an interface unit 32, and a storage unit 33.
- the control unit 31 controls the operation of each functional unit of the support device 3.
- the control unit 31 controls, for example, the operation of the interface unit 32, and acquires the results of the image capture by the first image capture device 1.
- the control unit 31 acquires, for example, information stored in the storage unit 33.
- the process of acquiring the information stored in the storage unit 33 is specifically a read process.
- the control unit 31 executes, for example, a target posture estimation process.
- the target posture estimation process is a process for estimating a target posture based on current posture information, which is information obtained based on the result of imaging by the first imaging device 1 and indicates the current posture, which is the posture of the care recipient 902 at the time of imaging, and teacher information.
- the teacher information is information obtained in the past and indicates a change in posture.
- the teacher information may be information that modifies the change in posture according to the physical condition or state of the care recipient 902. This allows optimal care to be provided to the care recipient 902.
- the level of the care action indicated by the teacher information may also be changed according to the care proficiency or physical strength of the caregiver 901.
- the teacher information is information that indicates the change in the care posture according to the level.
- the control unit 31 acquires and displays a care action performed in one step, a care action performed in multiple steps, or a care action using an auxiliary tool from the information stored in the storage unit 33.
- when the control unit 31 obtains information indicating a level, it displays the teacher information of that level on a predetermined display destination, such as the display device 201 provided in the AR device 2, based on the level indicated by that information. This allows even a caregiver 901 who cannot perform a step due to limited care skills or physical strength to perform it by using multiple steps or tools.
- the control unit 31 controls the display operation of the AR device 2, for example, via the interface unit 32. As a result, the control unit 31 executes, for example, a presentation process.
- the interface unit 32 includes a communication interface for connecting the support device 3 to an external device.
- the interface unit 32 communicates with the external device via a wired or wireless connection.
- the external device is, for example, the first image capture device 1.
- the external device is, for example, the AR device 2.
- the storage unit 33 is configured using a computer-readable storage medium device (non-transitory computer-readable recording medium) such as a magnetic hard disk device or a semiconductor storage device.
- the storage unit 33 stores various information related to the support device 3.
- the storage unit 33 stores, for example, teacher information in advance.
- the storage unit 33 stores, for example, various information generated by the operation of the control unit 31.
- FIG. 3 is a flowchart showing an example of the flow of processing executed by the support device 3 in an embodiment.
- the control unit 31 acquires the results of shooting by the first shooting device 1, etc. (step S101). Next, the control unit 31 executes a target posture estimation process (step S102). Next, the control unit 31 executes a presentation process (step S103).
- the caregiver 901 moves the care recipient 902 so that the care recipient 902 approaches the posture shown by the target computer graphics image presented in step S103.
- the first image capturing device 1 repeatedly captures images at a predetermined timing, and the control unit 31, in synchronization with that timing, determines based on the capture results whether the difference between the posture of the care recipient 902 and the target posture is within a predetermined difference.
- if the control unit 31 determines that the difference is within the predetermined difference, the target posture estimated by the target posture estimation process is updated. If the difference is greater than the predetermined difference, the target posture is not updated even if the target posture estimation process is executed. In this way, by repeating the processes of steps S101 to S103, the caregiver 901 is able to move the care recipient 902 in accordance with the teacher information.
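- One way the repetition of steps S101 to S103 could be organized is sketched below; the threshold, the callbacks, and the dummy data are hypothetical stand-ins for the actual capture, target posture estimation, and presentation processing.

```python
import numpy as np

ANGLE_TOLERANCE = 0.15  # hypothetical value for the "predetermined difference" (radians)

def support_loop(capture, estimate_current_posture, teacher_sequence, present):
    """Repeat steps S101-S103: capture, compare the current posture with the target,
    advance to the next target posture only once the difference is small enough,
    and present the (possibly unchanged) target posture each time."""
    index = 0
    target = teacher_sequence[index]
    while index < len(teacher_sequence):
        frame = capture()                          # step S101: result of shooting
        current = estimate_current_posture(frame)  # current posture information
        if np.max(np.abs(current - target)) <= ANGLE_TOLERANCE:
            index += 1                             # within the difference: update the target
            if index == len(teacher_sequence):
                break
            target = teacher_sequence[index]       # step S102: next target posture
        present(target)                            # step S103: presentation process

# Dummy usage with synthetic data; the lambdas mimic a care recipient who follows along.
sequence = [np.zeros(3), np.array([0.3, 0.0, 0.0]), np.array([0.6, 0.2, 0.0])]
state = {"step": 0}
support_loop(
    capture=lambda: None,
    estimate_current_posture=lambda _frame: sequence[min(state["step"], len(sequence) - 1)],
    teacher_sequence=sequence,
    present=lambda target: state.update(step=state["step"] + 1),
)
```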
- the support device 3 configured in this manner controls the AR device 2 to present the target computer graphics image to the caregiver 901, superimposed on the care recipient 902. Thanks to the control of the AR device 2 by the support device 3 in this manner, the caregiver 901 can visually and more easily grasp how to move the care recipient 902 next. Therefore, by using the support device 3, even a caregiver 901 with low skills can provide appropriate care, without the direct guidance of a skilled caregiver.
- a less skilled caregiver 901 can learn the caregiving techniques of an experienced caregiver.
- the burden of instruction on the experienced caregiver and the burden of caregiving on the caregiver 901 can be reduced.
- the support system 100 configured in this way is equipped with the support device 3, it can reduce various burdens required for caregiving.
- in Non-Patent Document 1, text information and videos are used to show what to do, but the posture of the care recipient 902 does not necessarily match the presented content. Also, with video information, the information is limited to the viewpoint at the time of shooting. However, since the viewpoint of the caregiver 901 during care actions differs from moment to moment, it is desirable to present an appearance that matches the current viewpoint of the caregiver 901. In the case of the support system 100, care actions can be presented to the caregiver 901 according to the target care recipient 902, which reduces the burden on the caregiver 901 and contributes to skill acquisition.
- Figure 4 shows an example of the transition process between the image of the care recipient 902 displayed on the AR device 2 of the support system 100 and the target computer graphics image G101 superimposed thereon.
- Image D101 in Figure 4 shows the initial state where the target computer graphics image G101 in the initial state is superimposed on the current posture of the care recipient 902.
- Image D102 in Figure 4 shows the state where the target computer graphics image G101 superimposed on the care recipient 902 has started to move toward the transition posture.
- Image D103 in Figure 4 shows the state where the target computer graphics image G101 superimposed on the care recipient 902 has approached the transition posture.
- Image D104 in Figure 4 shows the state where the target computer graphics image G101 superimposed on the care recipient 902 has reached the final transition posture.
- the control unit 31 may control the operation of the AR device 2 to repeatedly cause the AR device 2 to play and display the target computer graphics image G101 up to that point until the caregiver 901 places the care recipient 902 in the final transition posture.
- the control unit 31 may also execute a process to change the playback display speed of the target computer graphics image G101.
- the control unit 31 may also execute a process to turn off the playback display of the target computer graphics image G101.
- the control unit 31 may be a CPU provided in an external device such as the support device 3, or may be a CPU mounted on the main body of the AR device 2.
- the teacher information may be, for example, information obtained based on a sample acted action, and may indicate the posture of the sample acted object at each timing while the sample acting subject is performing the sample action.
- the sample acted action is a change in posture (i.e., movement) of the sample acted object that occurs as a result of the execution of the sample action by the sample acting subject, which executes the sample action.
- the sample acted object is an acted object that is the same as or different from the acted object photographed by the first photographing device 1.
- the sample acted object is, for example, the teacher playing the role of the one being cared for when two teachers who teach nursing care techniques to students divide between them the roles of the one being cared for and the one providing care.
- the sample action is a predetermined action that moves the sample acted object.
- the sample acting subject is an entity that executes the sample action.
- the sample acting subject is, for example, an experienced caregiver.
- the sample acting subject is, for example, the teacher playing the role of the caregiver when two teachers who teach caregiving techniques to students play the roles of the care recipient and the caregiver.
- the teacher information may be information obtained based on the results of photographing the sample acted action using a second imaging device, which may be the same as or different from the first imaging device 1.
- the current posture information may indicate the posture of the acted upon object based on the position and orientation of a joint coordinate system, which is a coordinate system defined in advance for each joint.
- the teacher information may indicate the posture of the acted upon object based on the position and orientation of the joint coordinate system.
- the position and orientation of the joint coordinate system refer to the position of the origin of the joint coordinate system and the orientation of each axis.
- the position and orientation of the joint coordinate system of the (n+1)th joint which is one of two joints adjacent to the nth joint (n is an integer equal to or greater than 1) among the multiple joints and is a joint different from the (n-1)th joint, may be represented by the joint coordinate system of the nth joint.
- the target computer graphics image presented in the presentation process may be presented in a state in which the position and orientation of the joint coordinate system of the reference joint, which is a specific joint among the multiple joints, match the position and orientation of the joint coordinate system of the reference joint of the acted object.
- the reference joint is the position that is the starting point of the movement, and for example, in the movement of bending the arm of the care recipient 902, it is the shoulder joint.
- the caregiver 901 sees an image in which the shoulder joint of the care recipient 902 and the shoulder joint in image G101 match.
- the starting point of the movement is defined manually or by an estimation result of a CPU when the teacher data is created.
- the definition of the starting point of the movement may be determined in advance by the user based on which joint should be the root and which joint the end so that the movement is intuitively easy to teach to the user wearing the AR device 2.
- the following three sets of coordinate axes are usually calculated as part of the process for visualizing, using coordinates, the change in the coordinate axes of the joints.
- one of the three sets of coordinate axes indicates the movement of the shoulder joint, and is calculated by multiplying the matrix that represents the acquired coordinate axes of the shoulder joint by a matrix that represents the time-series change in the coordinate axes of the shoulder joint (teacher data).
- another of the three sets of coordinate axes indicates the movement of the elbow joint, and is calculated using the matrix that represents the acquired coordinate axes of the shoulder joint, a matrix that represents the time-series changes in the coordinate axes of the shoulder joint (teacher data), and a matrix that represents the time-series changes in the coordinate axes of the elbow joint (teacher data).
- the remaining one of the three sets of coordinate axes indicates the movement of the wrist joint, and is calculated by multiplying together the matrix that represents the acquired coordinate axes of the shoulder joint, a matrix that represents the time-series changes in the coordinate axes of the shoulder joint (teacher data), a matrix that represents the time-series changes in the coordinate axes of the elbow joint (teacher data), and a matrix that represents the time-series changes in the coordinate axes of the wrist joint (teacher data).
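- A small numerical sketch of this chained matrix multiplication for a shoulder-elbow-wrist chain; the rotation angles and offsets below are invented for illustration and are not taken from any actual teacher data.

```python
import numpy as np

def rot_z(angle: float) -> np.ndarray:
    """4x4 homogeneous transform rotating by `angle` radians about the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous transform translating by (x, y, z)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Acquired coordinate axes of the shoulder joint (pose of the reference joint in real space).
shoulder_now = translate(0.0, 1.4, 0.0)

# Teacher data: time-series changes of each joint's coordinate axes, each expressed
# relative to its parent joint (all values are made up for this example).
shoulder_change = rot_z(0.2)
elbow_change = translate(0.0, -0.3, 0.0) @ rot_z(0.5)    # elbow as seen from the shoulder
wrist_change = translate(0.0, -0.25, 0.0) @ rot_z(0.1)   # wrist as seen from the elbow

# The three sets of coordinate axes described above, obtained by chaining multiplications.
shoulder_axes = shoulder_now @ shoulder_change
elbow_axes = shoulder_axes @ elbow_change
wrist_axes = elbow_axes @ wrist_change

print(wrist_axes[:3, 3])   # position of the wrist joint's axes in real-space coordinates
```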
- the process for visualization described above requires calculation of each of the three coordinate axes, so the amount of calculation increases with each additional joint. Since the AR device 2 often uses a computer with relatively low performance for calculations in order to reduce weight, it is desirable to reduce the amount of calculation.
- An easy-to-edit form means a form in which the teacher information can be easily modified by the user or the CPU to issue commands regarding the direction and degree to which the distal joint is to be rotated for each joint.
- the control unit 31 can determine the joint group position and orientation of a multi-joint object in real space, such as the care recipient 902, and the joint group position and orientation of a multi-joint object in virtual space displayed by the AR device 2, and superimpose and display the target computer graphics image.
- the starting point of the movement is determined by the control unit 31 or manually.
- for example, for the rotation of the joint coordinate system group extending from the shoulder joint on the base side of the multiple joints to the fingertip at the distal end, the shoulder joint on the base side is set as the starting point, i.e., the reference joint.
- when the control unit 31 determines the starting point in this way and positions the target computer graphics image in accordance with the real space, it acquires the coordinate system of the starting point of the operation and displays that coordinate system as the position and posture from which the target computer graphics image is operated. Note that the starting point may be determined manually.
- the teacher information is stored in the storage unit 33 in a format in which the base side of the multi-joint object is the starting point of the movement. Specifically, for example, for the movement of bending the arm of the care recipient 902, time-series information on the "position and posture of the shoulder joint," the "position and posture of the elbow joint as viewed from the shoulder joint," and the "position and posture of the wrist joint as viewed from the elbow joint" is stored as teacher information.
- when appropriately playing back such time-series information, the control unit 31 first performs a coordinate conversion that aligns the "position and posture of the shoulder joint" in the current situation of the care recipient 902 with the "position and posture of the shoulder joint" in the teacher information. Note that "appropriate" here means not causing discomfort to the caregiver 901. As a result, the control unit 31 superimposes the shoulder joint coordinate axes in real space and the shoulder joint coordinate axes in the teacher information.
- the control unit 31 then displays the joint movement between the shoulder joint on the root side and the hand on the distal side on a specified display by displaying the changes in each set of coordinate axes from the root side to the distal side. This allows the target computer graphics image obtained from the teacher information to be appropriately superimposed on the real space and played back.
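- A possible sketch of the coordinate conversion described above, assuming both the currently observed shoulder pose and the teacher information's shoulder pose are available as 4x4 homogeneous transforms; the names and values are illustrative only.

```python
import numpy as np

def align_to_current_shoulder(T_shoulder_now, T_shoulder_teacher, teacher_frames):
    """Compute the correction that makes the shoulder joint axes in the teacher
    information coincide with the currently observed shoulder joint axes, then
    apply the same correction to every frame of the teacher time series."""
    correction = T_shoulder_now @ np.linalg.inv(T_shoulder_teacher)
    return [correction @ frame for frame in teacher_frames]

# Illustrative poses: the teacher data was recorded with the shoulder at the origin,
# while the care recipient's shoulder is currently observed at (0.0, 1.4, 0.5).
T_now = np.eye(4)
T_now[:3, 3] = [0.0, 1.4, 0.5]
T_teacher = np.eye(4)
frames = [np.eye(4), np.eye(4)]

aligned = align_to_current_shoulder(T_now, T_teacher, frames)
print(aligned[0][:3, 3])   # the first frame is now superimposed at the observed shoulder
```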
- the user or the computer can directly write commands into the teacher information specifying in what direction and by how many degrees the distal joint is to be rotated for each joint, so the information can be easily edited.
- the starting point in this invention is defined as the point where the movement of the multi-joint begins in a transition operation.
- the reference joint is, for example, the joint with n = 1.
- when the target posture changes, the position of the reference joint changes, but the relative positions and orientations of the joints with n of 2 or more with respect to the reference joint do not change.
- if the positions and orientations of all joints are expressed in the world coordinate system, the positions and orientations of all joints must be re-estimated through computationally intensive processing, such as executing motion capture or executing a machine-learned model such as the posture information acquisition model.
- if, instead, the position and orientation of the joint coordinate system of the (n+1)th joint is represented in the joint coordinate system of the nth joint, it represents the relative position and orientation from the adjacent joint, so as long as the change in the position of the reference joint is estimated, only matrix multiplication, which requires less computation, needs to be performed for the other joints.
- the effect of reducing the amount of computation is achieved.
- the sample acted object, which is the care recipient when the teacher information is obtained, may be a different person from the care recipient 902 when the caregiver 901 actually provides care.
- taking into consideration that the distance between joints differs from person to person, for example due to differences in arm length, it is preferable for the caregiver 901 that the avatar image of the care recipient 902 be superimposed on the care recipient 902 as the target computer graphics image, rather than the avatar image of the sample acted object being superimposed on the care recipient 902.
- the control unit 31 may therefore estimate the difference in the distance between the joints of the care recipient 902 and the sample acted object based on the image capturing results by the first image capturing device 1, and may present a target computer graphics image, in which the distance between the joints is adjusted to match the care recipient 902, to a specified presentation destination based on the estimation result.
- the process of adjusting the distance between the joints to match the care recipient 902 is, for example, a process of moving the origin of each joint coordinate system to match the distance between the joints.
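- For instance, the adjustment of the inter-joint distances might look like the sketch below, where each relative transform holds a joint's position and posture as seen from its parent joint and only the translation part (the origin of the joint coordinate system) is rescaled; the limb lengths shown are made-up values.

```python
import numpy as np

def rescale_offsets(relative_transforms, sample_lengths, recipient_lengths):
    """Move the origin of each joint coordinate system so that the distance between
    adjacent joints matches the care recipient rather than the sample acted object.
    Only the translation of each relative transform is scaled; rotations are kept."""
    adjusted = []
    for T, sample_len, recipient_len in zip(relative_transforms, sample_lengths, recipient_lengths):
        T_new = T.copy()
        T_new[:3, 3] *= recipient_len / sample_len   # scale the inter-joint offset
        adjusted.append(T_new)
    return adjusted

# Example: the sample person's upper arm is 0.30 m long, the care recipient's is 0.33 m.
elbow_from_shoulder = np.eye(4)
elbow_from_shoulder[:3, 3] = [0.0, -0.30, 0.0]
print(rescale_offsets([elbow_from_shoulder], [0.30], [0.33])[0][:3, 3])
```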
- the first image capturing device 1 may be attached to, for example, the AR device 2.
- the AR device 2 may be equipped with, for example, an accelerometer and may be capable of acquiring information indicating the position of the device itself.
- the AR device 2 may be equipped with an image capturing device, and in such a case, the image capturing device may be, for example, a depth camera.
- the control unit 31 may also control the AR device 2 to present an image of a three-dimensional model of the caregiver 901 (i.e., an avatar of the caregiver 901) that indicates the next action that the caregiver 901 should perform.
- the support device 3 may be implemented using multiple information processing devices connected to each other so that they can communicate with each other via a network. In this case, each process executed by the control unit 31 may be executed in a distributed manner by the multiple information processing devices.
- the display device 201 provided in the AR device 2 is an example of a predetermined display destination that displays a target computer graphics image superimposed on the currently acted upon object.
- All or part of the functions of the support system 100 and the support device 3 may be realized using hardware such as an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array).
- the program may be recorded on a computer-readable recording medium. Examples of computer-readable recording media include portable media such as flexible disks, magneto-optical disks, ROMs, and CD-ROMs, and storage devices such as hard disks built into computer systems.
- the program may be transmitted via a telecommunications line.
- the support system 100 and the support device 3 of the present invention are not limited to nursing care, and may be applied to any use in which an acted object having multiple joints is moved to a predetermined transition posture.