CN117396976A - Patient positioning adaptive guidance system - Google Patents

Patient positioning adaptive guidance system

Info

Publication number
CN117396976A
Authority
CN
China
Prior art keywords
patient
information
determination
reached
body positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280032875.2A
Other languages
Chinese (zh)
Inventor
J·P·查亚迪宁拉特
E·C·V·塔尔戈恩
N·劳特
J·科诺伊斯特
H·李
R·博斯曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Publication of CN117396976A publication Critical patent/CN117396976A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/70 - Means for positioning the patient in relation to the detecting, measuring or recording means
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 - Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 - Details of notification to user or communication with user or patient; user input means
    • A61B5/7405 - Details of notification to user or communication with user or patient; user input means using sound
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 - Details of notification to user or communication with user or patient; user input means
    • A61B5/742 - Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/744 - Displaying an avatar, e.g. an animated cartoon character
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/04 - Positioning of patients; Tiltable beds or the like
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/46 - Arrangements for interfacing with the operator or the patient
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 - Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 - Other medical applications
    • A61B5/4824 - Touch or pain perception evaluation
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 - Other medical applications
    • A61B5/486 - Bio-feedback
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 - Details of notification to user or communication with user or patient; user input means
    • A61B5/7455 - Details of notification to user or communication with user or patient; user input means characterised by tactile indication, e.g. vibration or electrical stimulation
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 - Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 - Computed tomography [CT]
    • A61B6/037 - Emission tomography
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Human Computer Interaction (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)

Abstract

The invention relates to a patient positioning adaptive guidance system (10), comprising: a medical image acquisition unit (20); at least one communication unit (30); at least one camera (40); and a processing unit (50). The medical image acquisition unit is configured to acquire a medical image of a patient. The processing unit is configured to control the at least one communication unit to provide body positioning information to the patient prior to the acquisition of the medical image. One or more of the at least one camera is configured to acquire position and movement image data of the patient associated with the provision of the body positioning information. The processing unit is configured to determine whether the patient has failed to reach the desired position due to physical or cognitive limitations, the determination utilizing the body positioning information and the position and movement image data of the patient: based on a determination that the patient has moved in the correct direction, it is determined that the patient has not reached the desired position due to physical limitations; based on a determination that the patient has moved in an incorrect direction or has not substantially moved, it is determined that the patient has not reached the desired position due to cognitive limitations. The processing unit is configured to adjust the body positioning information in at least one first way upon a determination that the patient has not reached the desired position due to physical limitations, and in at least one second way upon a determination that the patient has not reached the desired position due to cognitive limitations.

Description

Patient positioning adaptive guidance system
Technical Field
The invention relates to a patient positioning adaptive guidance system, a patient positioning adaptive guidance method, a computer program element, and a computer readable medium.
Background
For many medical examinations, the patient must adopt a specific position and posture. For example, for an MR scan, the patient may need to hold both hands above the head. For a knee examination, the patient may need to bend the knee. For a chest X-ray, the patient may need to place both hands behind the back with the shoulder blades rolled forward. Currently, experienced nurses and technicians explain to the patient what to do and can nudge the patient into the correct position and posture.
The challenge in providing guidance is that patients differ in both physical and cognitive respects. Some patients can easily adopt the desired body position, while others may not, for example because they are older, have arthritis, have particular body morphologies (e.g., a large abdomen or chest), or have other mobility impairments. From a cognitive perspective, some people may (initially) have difficulty understanding the movement guidance, while others understand it immediately. This depends on people's familiarity with the guidance medium, their cognitive abilities, and their ability to relate the movement guidance to their own body movements (dancers, for instance, may find this easier).
An experienced nurse will typically analyze the situation correctly. If the patient physically cannot reach the suggested posture, the nurse will not keep pushing for the ideal posture but will instead fall back to the next best achievable position. If the patient does not understand, the nurse can re-explain, or explain in a different manner.
However, due to the increasing cost of healthcare and the trend of bringing diagnostic imaging to decentralized medical centers with fewer specialists, less highly trained staff are increasingly used, who may lack the knowledge, skills, and experience to help patients adopt the correct posture. There is even discussion of performing such medical examinations fully automatically, without medical personnel present.
As a result, problems can occur when only inexperienced staff are present: incorrect image data may be acquired, and there is currently no way to perform such checks in an automated manner.
There is a need to address these issues.
Disclosure of Invention
It would be advantageous to have an improved patient positioning device for medical examinations. The object of the invention is solved by the subject matter of the independent claims, with further embodiments incorporated in the dependent claims. It should be noted that the aspects and examples of the invention described below apply to the patient positioning adaptive guidance system and equally to the patient positioning adaptive guidance method, the computer program element, and the computer readable medium.
In a first aspect, there is provided a patient positioning adaptive guidance system comprising:
-a medical image acquisition unit;
-at least one communication unit;
-at least one camera; and
-a processing unit.
The medical image acquisition unit is configured to acquire a medical image of a patient. The processing unit is configured to control the at least one communication unit to provide body positioning information to the patient prior to the acquisition of the medical image. One or more of the at least one camera is configured to acquire position and movement image data of the patient associated with the provision of the body positioning information. The processing unit is configured to determine whether the patient has failed to reach the desired position due to physical or cognitive limitations, the determination utilizing the body positioning information and the position and movement image data of the patient. Based on a determination that the patient has moved in the correct direction, it is determined that the patient has not reached the desired position due to physical limitations. Based on a determination that the patient has moved in an incorrect direction or has not substantially moved, it is determined that the patient has not reached the desired position due to cognitive limitations. The processing unit is configured to adjust the body positioning information in at least one first way based on a determination that the patient has not reached the desired position due to physical limitations, and in at least one second way based on a determination that the patient has not reached the desired position due to cognitive limitations.
In other words, the system analyzes the patient's response behavior through computer vision: movement and position analysis may be performed using a depth camera with skeleton recognition, and expression analysis may be performed through camera-based facial analysis. The patient's errors are classified as cognitive or physical: if the patient's initial movement direction is correct, a physical limitation is inferred, whereas if the patient does not move, or moves in a completely different direction than instructed, a cognitive limitation is inferred. If the limitation is physical, the guidance is then adjusted to match the patient's physical ability; if the limitation is cognitive, the presentation of the guidance is adjusted, for example by presenting an avatar of the body that is deformed to more closely resemble the patient's body, taking into account the patient's sex, height, weight, and so on.
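The direction-based classification described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the displacement threshold and the cosine-similarity test are assumptions standing in for whatever movement analysis a real system would use.

```python
import math

def classify_limitation(guided_dir, observed_disp, move_eps=0.05, align_thresh=0.5):
    """Classify why a patient failed to reach the target pose.

    guided_dir    -- instructed movement direction (2-D vector)
    observed_disp -- observed displacement of the tracked body part
    Returns 'physical' if the patient moved in roughly the instructed
    direction, 'cognitive' if the patient barely moved or moved in a
    different direction. Thresholds are illustrative assumptions.
    """
    magnitude = math.hypot(*observed_disp)
    if magnitude < move_eps:
        return "cognitive"  # essentially no movement: guidance not understood
    # Cosine of the angle between the instructed and observed directions.
    dot = sum(g * o for g, o in zip(guided_dir, observed_disp))
    cos_sim = dot / (magnitude * math.hypot(*guided_dir))
    return "physical" if cos_sim >= align_thresh else "cognitive"
```

A skeleton-tracking front end (e.g., from a depth camera) would supply `observed_disp` for each tracked joint.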
In this way, based on the patient's behavior and taking his or her physical or cognitive limitations into account, the patient can be automatically guided to the correct or optimal position for the medical image to be acquired.
In addition, the new system saves time, because a nurse no longer has to walk back and forth from the control room to adjust the patient's posture; the guidance system does this instead. Such examinations typically take around 60 seconds (and are performed many times per day), but for elderly patients the adjustment alone can take several minutes. The new system is significantly faster and can save considerable time in those cases.
In an example, one or more of the at least one camera is configured to acquire facial image data of the patient associated with the provision of the body positioning information. The determination of whether the patient has not reached the desired position due to physical or cognitive limitations may then also utilize this facial image data.
In an example, the processing unit is configured to determine whether the facial image data indicates that the patient is in pain, or that the patient is confused. The determination of whether the patient has not reached the desired position due to physical or cognitive limitations may then also utilize the determination of whether the patient is in pain or confused.
Thus, if the patient has moved in the correct direction but has not yet reached the correct position, and the facial analysis indicates pain, the guidance may direct the patient to a different end position via a different direction of movement; alternatively, it may be accepted that the patient cannot reach the initially indicated correct position, the closest achievable end position may be accepted instead, and the movement guidance provided to the patient adjusted accordingly. Likewise, if the patient has not moved at all, or has moved in the wrong direction while also appearing confused, the guidance provided to the patient can take this into account and be adjusted so that the patient better understands the desired movement and the desired correct end position.
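The decision logic of this example can be summarized as a small lookup. The facial-state labels ('pain', 'confused', 'neutral') and the adjustment names below are hypothetical placeholders for whatever expression classifier and guidance actions an implementation provides.

```python
def select_adjustment(limitation, facial_state):
    """Combine the limitation class with the facial-analysis result to
    choose how the guidance should be adjusted (labels are illustrative)."""
    if limitation == "physical":
        if facial_state == "pain":
            # Accept the closest achievable pose, or route the movement
            # to a different end position via a different direction.
            return "relax_target_or_alternative_path"
        return "encourage_toward_target"
    # Cognitive limitation: re-explain so the movement is better understood.
    if facial_state == "confused":
        return "simplify_and_reexplain"
    return "change_presentation"
```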
In an example, the body positioning information includes visual and/or audio information and/or tactile feedback.
In an example, the processing unit is configured to control the at least one communication unit to provide the body positioning information in the form of information related to the movement from the starting position to the desired position.
In an example, the information related to the movement from the starting position to the desired position is in the form of at least one repeatable cycle.
In an example, the information related to movement from the starting position to the desired position is in the form of a plurality of repeatable cycles, each cycle being related to a different segment of movement from the starting position to the desired position.
In this way, the desired movement of the body from the initial position to the end position can be subdivided into video loops relating to different parts of the overall movement. Each loop can be played in turn as the patient performs each part of the overall movement, and if the patient has difficulty completing a segment, that particular video loop can be replayed. When the patient has reached the end of a movement segment, the next video loop can be played, and so on. Here, "video" is used in a general sense and may refer to an animation that can be deformed or adapted, or to a fixed video clip.
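The segmented-loop playback described above might be driven as in this sketch, where `segment_done` stands in for the camera-based check that the patient has reached a segment's end pose (an assumption of this sketch, not a component named by the patent).

```python
def play_guidance(segments, segment_done):
    """Play each segment's guidance loop in order, replaying a loop until
    the patient completes that part of the movement, then advancing.
    Returns the sequence of loop plays (useful for testing/logging)."""
    history = []
    for seg in segments:
        while True:
            history.append(seg)       # play (or replay) this segment's loop
            if segment_done(seg):
                break                 # segment pose reached: next loop
    return history
```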
In an example, the at least one first way includes providing encouragement to the patient to move to the desired position.
In other words, if the patient has almost reached the correct position, audio and/or visual prompts can be provided to the patient, e.g., "please rotate your body a little further to the left". Tactile information, such as a series of pulses, may also be provided to encourage patient movement, where the time interval between pulses becomes smaller as the correct position is approached. The pulses may then stop, or change, to indicate that the correct position has been reached.
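The shrinking pulse interval can be expressed as a simple mapping from remaining distance to inter-pulse time; the capture range and interval bounds below are illustrative assumptions, not values from the patent.

```python
def pulse_interval(distance, max_interval=1.0, min_interval=0.1, capture_range=0.5):
    """Map the remaining distance to the target pose onto a haptic pulse
    interval in seconds: pulses come faster as the patient approaches,
    and stop (None) once the pose is reached. Values are illustrative."""
    if distance <= 0:
        return None                   # correct position reached: stop pulsing
    frac = min(distance / capture_range, 1.0)
    return min_interval + frac * (max_interval - min_interval)
```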
In an example, the at least one first way comprises adjusting the body positioning information for a mobility capability of the patient, and/or wherein the at least one first way comprises adjusting the body positioning information for a physical trait of the patient.
In this way, it may be determined that, for some reason (e.g., because the patient has joint pain), the patient is unable to move as instructed, and the information provided to the patient is adjusted to help them reach the desired position, or the desired position itself is changed.
Furthermore, it can be determined that the desired end position cannot physically be reached by the patient. For example, image processing may determine that a male patient's abdomen, or a female patient's chest, is too large for the desired position to be reached, and the movement information is adjusted to guide the patient to the best achievable position, taking the patient's physical characteristics into account.
In an example, the at least one second way includes changing the avatar to have a shape that more closely matches the shape of the patient or changing the avatar to be more abstract.
In an example, the at least one second manner includes changing the perspective from which the body positioning information in the form of visual instructions is presented to the patient, and/or adding at least one enhanced visual element to the visual information. The enhanced visual element may include one or more of the following: text, arrows, highlighting.
In an example, the at least one second manner includes changing the duration of the repeatable cycle, changing the order of the plurality of cycle segments, or changing the playback speed of the repeatable cycle.
In an example, the at least one second way includes changing body positioning information from visual information to audio information, or changing body positioning information from audio information to visual information, or changing body positioning information from visual information to visual information and audio information, or changing body positioning information from audio information to visual information and audio information.
In a second aspect, there is provided a patient positioning adaptive guidance method comprising:
a) Controlling, by the processing unit, at least one communication unit to provide body positioning information prior to acquisition of a medical image of a patient;
b) Acquiring, by one or more of the at least one camera, position and movement image data of the patient associated with the provision of the body positioning information to the patient;
c) Determining, by the processing unit, whether the patient has not reached the desired location due to physical or cognitive limitations, the determining comprising utilizing the body positioning information and the location and movement image data of the patient, wherein a determination is made that the patient has not reached the desired location due to physical limitations based on a determination that the patient has moved in the correct direction, and wherein a determination is made that the patient has not reached the desired location due to cognitive limitations based on a determination that the patient has moved in an incorrect direction or has not substantially moved;
wherein the method comprises the following steps:
adjusting, by the processing unit, the body positioning information in at least one first manner based on a determination that the patient has not reached the desired location due to the body restriction; or alternatively
Adjusting, by the processing unit, the body positioning information in at least one second way based on a determination that the patient has not reached the desired location due to cognitive limitations; and
d) After the body positioning information is adjusted, a medical image of the patient is acquired by a medical image acquisition unit.
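Steps a) through d) can be sketched as a simple control loop. All callables here are placeholders for the units described above, and the round limit is an assumption added so the sketch terminates.

```python
def guidance_session(give_guidance, observe, classify,
                     adjust_physical, adjust_cognitive,
                     acquire_image, max_rounds=5):
    """Run the guide/observe/classify/adjust loop of steps a)-c), then
    acquire the medical image (step d)). `observe` returns a tuple
    (reached, movement); the other callables model the system's units."""
    for _ in range(max_rounds):
        give_guidance()                       # step a): present positioning info
        reached, movement = observe()         # step b): camera observation
        if reached:
            break
        if classify(movement) == "physical":  # step c): classify the failure
            adjust_physical()                 # adjust in the first way
        else:
            adjust_cognitive()                # adjust in the second way
    return acquire_image()                    # step d): acquire the image
```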
According to a further aspect, there is provided a computer program element for controlling one or more of the systems described above which, when executed by a processing unit, is adapted to carry out the method described above.
According to another aspect, there is provided a computer readable medium storing the computer program element described above.
The computer program element may for example be a software program, but may also be an FPGA, a PLD or any other suitable digital device.
Advantageously, the benefits provided by any of the above aspects apply equally to all other aspects and vice versa.
The aspects and examples above will become apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Exemplary embodiments will be described below with reference to the accompanying drawings:
FIG. 1 shows a schematic setup of an example of a patient positioning adaptive guidance system;
FIG. 2 illustrates a patient positioning adaptive guidance method; and
FIG. 3 shows three snapshots of a loop segment from the positioning information.
Detailed Description
Fig. 1 shows a schematic example of a patient positioning adaptive guidance system 10. The system comprises a medical image acquisition unit 20, at least one communication unit 30, at least one camera 40, and a processing unit 50. The medical image acquisition unit is configured to acquire a medical image of a patient. The processing unit is configured to control the at least one communication unit to provide body positioning information to the patient prior to the acquisition of the medical image. One or more of the at least one camera is configured to acquire position and movement image data of the patient associated with the provision of the body positioning information. The processing unit is configured to determine whether the patient has failed to reach the desired position due to physical or cognitive limitations, the determination utilizing the body positioning information and the position and movement image data of the patient. Based on a determination that the patient has moved in the correct direction, it is determined that the patient has not reached the desired position due to physical limitations. Based on a determination that the patient has moved in an incorrect direction or has not substantially moved, it is determined that the patient has not reached the desired position due to cognitive limitations. The processing unit is configured to adjust the body positioning information in at least one first way based on a determination that the patient has not reached the desired position due to physical limitations, and in at least one second way based on a determination that the patient has not reached the desired position due to cognitive limitations.
According to an example, one or more of the at least one camera is configured to acquire facial image data of the patient associated with the provision of the body positioning information. The determination of whether the patient has not reached the desired position due to physical or cognitive limitations may then also utilize this facial image data.
According to an example, the processing unit is configured to determine whether the facial image data indicates that the patient is in pain, or that the patient is confused. The determination of whether the patient has not reached the desired position due to physical or cognitive limitations may then also utilize the determination of whether the patient is in pain or confused.
According to an example, the body positioning information includes visual and/or audio information and/or tactile feedback.
In an example, the visual information includes utilization of an avatar.
In an example, the avatar makes the required movements that the patient must make to reach the required location.
In an example, the visual information includes a representation of the current position of the patient and a silhouette representing the desired position.
In an example, the haptic feedback is provided via a wearable device. In an example, the wearable device is a smart watch.
According to an example, the processing unit is configured to control the at least one communication unit to provide the body positioning information in the form of information related to the movement from the starting position to the desired position.
According to an example, the information related to the movement from the starting position to the desired position is in the form of at least one repeatable cycle.
According to an example, the information related to the movement from the starting position to the desired position is in the form of a plurality of repeatable cycles, each cycle being related to a different segment of the movement from the starting position to the desired position.
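The segmented, repeatable-cycle structure described in these examples might be modelled as follows; the data model and segment labels are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CycleSegment:
    """One looping animation segment of the full movement."""
    label: str
    duration_s: float
    playback_speed: float = 1.0  # adjustable, e.g. for cognitive limitation

@dataclass
class GuidanceSequence:
    """A movement split into repeatable cycles, played one at a time."""
    segments: List[CycleSegment] = field(default_factory=list)
    current: int = 0

    def advance(self) -> bool:
        """Move to the next segment once the current one is matched;
        returns False when the sequence is complete."""
        if self.current + 1 < len(self.segments):
            self.current += 1
            return True
        return False

# Hypothetical sequence for a frontal chest X-ray posture change
seq = GuidanceSequence([
    CycleSegment("place hands behind back", 3.0),
    CycleSegment("lean left", 2.0),
    CycleSegment("move hands down", 2.0),
])
```

Each segment loops until the patient completes it, which is why properties such as duration, order and playback speed are natural adjustment points.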
According to an example, the at least one first mode includes providing encouragement to the patient to move to a desired location.
According to an example, the at least one first mode includes adjusting body positioning information for a mobility capability of the patient. Alternatively or additionally, the at least one first mode includes adjusting body positioning information for a physical characteristic of the patient.
According to an example, the at least one second way includes changing the avatar to have a shape that more closely matches the shape of the patient, or changing the avatar to be more abstract.
According to an example, the at least one second way comprises changing a viewing angle from which body positioning information in the form of visual instructions is presented to the patient. Alternatively or additionally, the at least one second way comprises adding at least one enhanced visual element to the visual information, and the enhanced visual element may comprise one or more of: text, arrow, highlight.
According to an example, the at least one second way includes changing a length of time of the repeatable cycle, changing an order of the plurality of cycle segments, changing a playback speed of the repeatable cycle.
According to an example, the at least one second way comprises changing the body positioning information from visual information to audio information.
According to an example, the at least one second way comprises changing the body positioning information from audio information to visual information.
According to an example, the at least one second way comprises changing the body positioning information from visual information to visual information and audio information.
According to an example, the at least one second way comprises changing the body positioning information from audio information to visual information and audio information.
In an example, the system includes at least one stress sensor configured to acquire stress level information from the patient, and wherein the determination of whether the patient has not reached the desired location due to physical or cognitive limitations includes utilization of the stress level information.
In an example, the stress level information includes one or more of the following: heart rate, perspiration level, skin conductance, respiration rate.
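A hedged sketch of how the facial and physiological cues from the preceding examples could refine the movement-based determination. The disclosure names the cues but not a fusion rule, so the priority ordering here is an assumption.

```python
def refine_classification(base, confused=False, in_pain=False,
                          stress_high=False):
    """Refine a movement-based limitation label ("physical"/"cognitive")
    with facial and stress cues. Priority order is an assumption:
    confusion overrides everything; pain or effort confirms a bodily
    limit; otherwise the movement-based label stands."""
    if confused:
        return "cognitive"  # confusion signals misunderstanding
    if in_pain or (base == "physical" and stress_high):
        return "physical"   # pain or visible effort signals a bodily limit
    return base
```

Such a rule keeps the camera-based movement analysis primary while letting facial-expression and stress signals tip borderline cases.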
Fig. 2 shows in its basic steps a patient positioning adaptive guidance method 100. The method comprises the following steps:
in a control step 110, also referred to as step a), controlling, by the processing unit, at least one communication unit to provide body positioning information prior to acquisition of a medical image of a patient;
in an acquisition step 120, also referred to as step b), position and moving image data of the patient associated with providing body position information to the patient is acquired by one or more of the at least one camera;
in a determining step 130, also referred to as step c), it is determined by the processing unit whether the patient has not reached the desired position due to physical or cognitive limitations, the determining comprising using the body positioning information and the position and movement image data of the patient, wherein a determination is made that the patient has not reached the desired position due to physical limitations based on the determination that the patient has moved in the correct direction, and wherein a determination is made that the patient has not reached the desired position due to cognitive limitations based on the determination that the patient has moved in an incorrect direction or has not substantially moved;
wherein the method comprises the following steps:
adjusting, by the processing unit, the body positioning information in at least one first manner based on a determination that the patient has not reached the desired location due to physical limitations; or
Adjusting, by the processing unit, the body positioning information in at least one second way based on a determination that the patient has not reached the desired location due to cognitive limitations; and
in an acquisition step 140, also referred to as step d), after the body positioning information is adjusted, a medical image of the patient is acquired by a medical image acquisition unit.
In an example, the method includes acquiring, by one or more of the at least one camera, facial image data of a patient associated with providing body position information to the patient, and wherein determining whether the patient has not reached a desired position due to physical or cognitive limitations includes utilizing the facial image data of the patient associated with providing body position information to the patient.
In an example, the method includes determining, by the processing unit, whether the facial image data indicates that the patient is in pain or whether the facial image data indicates that the patient is in confusion, and wherein determining whether the patient has not reached the desired location due to physical or cognitive limitations includes utilizing the determination of whether the patient is in pain or confusion.
In an example, the body positioning information includes visual information and/or audio information and/or tactile feedback.
In an example, the visual information includes an avatar.
In an example, the avatar makes the required movements that the patient must make to reach the required location.
In an example, the haptic feedback is provided via a wearable device. In an example, the wearable device is a smart watch.
In an example, the visual information includes a representation of the current position of the patient and a silhouette representing the desired position.
In an example, the method comprises controlling, by the processing unit, at least one communication unit to provide body positioning information in the form of information related to movement from a starting position to a desired position.
In an example, the information related to the movement from the starting position to the desired position is in the form of at least one repeatable cycle.
In an example, the information related to movement from the starting position to the desired position is in the form of a plurality of repeatable cycles, each cycle being related to a different segment of movement from the starting position to the desired position.
In an example, the at least one first mode includes providing encouragement to the patient to move to a desired location.
In an example, at least one first mode includes adjusting the body positioning information for a mobility capability of the patient and/or at least one first mode includes adjusting the body positioning information for a body characteristic of the patient.
In an example, the at least one second way includes changing the avatar to have a shape that more closely matches the shape of the patient, or changing the avatar to be more abstract.
In an example, the at least one second manner includes changing a perspective from which the body positioning information in the form of visual instructions is presented to the patient, and/or adding at least one vision-enhancing element to the visual information, wherein the vision-enhancing element includes one or more of: text, arrows, highlighting.
In an example, the at least one second manner includes changing a length of time of the repeatable cycle, changing an order of the plurality of cycle segments, changing a playback speed of the repeatable cycle.
In an example, the at least one second way includes changing body positioning information from visual information to audio information, or changing body positioning information from audio information to visual information, or changing body positioning information from visual information to visual information and audio information, or changing body positioning information from audio information to visual information and audio information.
In an example, the method includes acquiring stress level information from the patient by at least one stress sensor, and wherein determining whether the patient has not reached a desired location due to physical or cognitive limitations includes utilizing the stress level information.
In an example, the stress level information includes one or more of the following: heart rate, perspiration level, skin conductance, respiration rate.
Thus, the system and method describe a number of rules enabling a processing unit (e.g., a processing unit within a computer) to intelligently adjust the guidance of patient movement. In particular, they provide a way to distinguish whether the patient is unable to take the correct posture due to physical limitations or due to cognitive or mental limitations.
The system and method help guide the patient in what posture to take. Such instructions may be visual (e.g., 2D visual, 3D visual, animated visual, video) and/or audio (e.g., verbal or abstract) and/or tactile (e.g., via a wearable device or a device integrated into the patient support). The system/method determines whether the patient is failing to follow the instructions due to physical or cognitive limitations and adjusts the instructions accordingly.
Specific details of the patient positioning adaptive guidance system and the patient positioning adaptive guidance method are explained with reference to fig. 3.
Fig. 3 shows three snapshots of looped segments from the positioning information. The inventors have devised a guidance concept for chest X-ray imaging units, and for other X-ray, MRI or PET imaging systems, based on short looping 3D animations. In such a loop, an animated character (e.g., an avatar or "virtual twin") displays the movement required to get from the patient's current posture to the desired posture, accompanied by visual cues such as arrows and by audio instructions. This is illustrated in Fig. 3, where a complex movement is "cut" into intelligible chunks. Each of the three pictures shown in Fig. 3 is in fact an animated loop, of which only one snapshot image is displayed. The first loop segment, "A", conveys "place your hands behind your back"; the second loop segment, "B", conveys "lean to the left"; and the third loop segment, "C", conveys "move your hands down". The loop may be a real-life video of the movement performed by a real person, or a 3D animated character whose movement can be adjusted in real time. For example, adjusting the character or avatar in real time, by making the avatar more abstract (engaging for children because it looks like a cartoon character) or by making the avatar resemble the patient more closely based on acquired images of the patient, has been shown to help those who have difficulty following the instructions, so that the patient can better understand what they have to do.
Another guidance principle (not shown) is "shadow guidance", in which the user attempts to match their silhouette to a target silhouette. The target silhouette moves as required and is superimposed on a rendition of the patient's own body position shown on a screen; the patient then moves their own body so that their silhouette on the screen follows the target silhouette.
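The silhouette-matching idea behind "shadow guidance" can be sketched as a simple overlap score. The intersection-over-union metric and its threshold are assumptions for illustration; the disclosure does not specify how the match is scored.

```python
import numpy as np

def silhouette_match(patient_mask, target_mask, threshold=0.9):
    """Score how well the patient's on-screen silhouette fills the
    target silhouette via intersection-over-union on boolean masks.
    Returns (iou, matched). Metric and threshold are assumptions."""
    patient_mask = np.asarray(patient_mask, dtype=bool)
    target_mask = np.asarray(target_mask, dtype=bool)
    inter = np.logical_and(patient_mask, target_mask).sum()
    union = np.logical_or(patient_mask, target_mask).sum()
    iou = inter / union if union else 1.0
    return iou, iou >= threshold
```

In a real system the masks would be segmented from the camera feed each frame, and the score could drive the on-screen feedback.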
Thus, an outline of how the system/method operates may be described in detail as follows:
The guidance portion of the system displays instructions to the user for reaching the desired target position. The guidance may be visual (2D visual, 3D visual, animated visual, video) and/or auditory (e.g., verbal or abstract) and/or tactile (e.g., hardware movement or vibration that actively pushes or nudges the patient).
A portion of the system analyzes the response behaviour through computer vision: depth cameras with skeleton recognition for movement and position analysis, and camera-based facial analysis for expression analysis. The system may also collect stress data from wearable sensors, such as heart rate or skin conductance sensors, or via audio/voice analysis.
The user's errors are classified in terms of cognitive or physical limitations:
if the initial direction of the patient's movement is correct (similar to the guidance instructions) but the magnitude or final direction of the movement is incorrect (and thus the target position is not reached), the limitation is physical;
if the patient does not move, moves in a completely different direction than the guidance instructions, or shows signs of confusion (facial expression recognition) or stress, the limitation is cognitive.
Adjust the guidance:
If the limitation is physical, the guidance is adjusted to match the physical ability of the patient (as detected by computer vision):
Adapt the magnitude of the character's movement (and thus the target position) to the detected magnitude of the patient's movement
Adapt the magnitude of the character's movement to restrictive features of the patient's body (e.g., a large belly or chest).
If the limitation is cognitive, adjust how the guidance is presented:
Adapt the physical form of the character to the physical form (sex, height, weight) of the patient
Adapt the viewing angle to the magnitude of the patient's movement. Some movements may be more pronounced from some angles, and when the patient is not complying, the viewing angle may be adjusted to better visualize the movement.
Adapt the speed of the character's movement to the detected speed of the patient, e.g. while walking
Change the guidance cadence, e.g. shorter 3D loops, or loops in a different order
Add visual enhancement elements, e.g. text, arrows, highlighting
Change the visual appearance, e.g. from a realistic avatar to an abstract avatar
Change the guidance modality or combine two modalities (visual, auditory, tactile)
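The two families of adjustments above can be gathered into a small dispatch sketch. The list contents and the one-step-per-failed-attempt escalation are illustrative assumptions; the disclosure lists the adjustments but not an order.

```python
PHYSICAL_ADJUSTMENTS = [
    "reduce target movement amplitude",
    "adapt target posture to restrictive body features",
    "encourage further movement",
]

COGNITIVE_ADJUSTMENTS = [
    "match avatar form to the patient",
    "change viewing angle",
    "shorten or reorder animation loops",
    "add text/arrow/highlight overlays",
    "make avatar more abstract",
    "switch or combine modalities (visual, auditory, tactile)",
]

def next_adjustment(limitation, attempt):
    """Pick the next adjustment for a failed attempt: walk down the
    relevant list one step per attempt, then stay on the last option."""
    options = (PHYSICAL_ADJUSTMENTS if limitation == "physical"
               else COGNITIVE_ADJUSTMENTS)
    return options[min(attempt, len(options) - 1)]
```

A real implementation would also record which adjustments were already tried so the guidance does not oscillate between options.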
Thus, an exemplary system may perform the following operations. It checks whether the desired posture has been reached. If not, it checks whether the user is trying to move towards that position, for example by measuring body movement. To enhance this, the facial expression may be checked to see if the user is confused (where it may be determined that the facial expression expresses, for example, "how should I move to that position?").
Detailed workflow
The following relates to a detailed workflow for the acquisition of a frontal chest X-ray, but the workflow may be applicable to other X-ray examinations and to different types of examinations, such as MR/PET.
Divide the total posture change required into manageable, understandable sub-movements (e.g., for a frontal chest X-ray: (i) stand in the middle of the wall stand, (ii) place your hands behind your back, (iii) roll your shoulders forward against the wall stand)
Each of these sub-movements is illustrated by a looping 3D animation. The loop shows a 3D animated character performing the required movement over and over again. The relevant body parts and their movements may be visually emphasized by various visual means, such as arrows, colours and lighting.
The physical form of the patient is analyzed using computer vision (e.g., a depth camera).
Using the patient's physical form, the system adjusts the target posture (e.g., a patient with a large belly or chest undergoing an X-ray examination may not be able to rest his/her shoulders fully against the detector plate).
Computer vision (e.g., a depth camera) is used to analyze whether, and how quickly, the patient's response conforms to the instructed movement.
In this way, multiple behavior patterns can be distinguished:
stage 0-cognitive overload/mental confusion:
Patient response: if the patient does not respond, moves in the wrong direction, or shows signs of confusion (e.g., erratic movements, facial expressions), this means that he does not understand the instructions.
System adjustment:
The instructions are repeated and/or rephrased.
The virtual camera is animated to display the movement from different angles.
A less preferred, but easier to achieve, target pose is accepted.
Stage 1 - able to perform the movement but not complete it accurately:
Patient response: if the initial direction of the patient's movement is correct but the magnitude is incorrect, this may be due to a misjudgement of the magnitude (i.e., the patient thinks he has moved far enough, but this is not the case).
System adjustment:
Encourage the patient to move further
Magnify the display of the last portion of the animation loop
Stage 2 - physically unable to move:
Patient response: if the initial direction of the patient's movement is correct but the magnitude remains incorrect even after the repeated encouragement of stage 1, this may mean that the patient is physically unable to reach the target position. In addition, the patient's facial expression can be analyzed for signs of pain to see whether the patient is straining.
System adjustment:
Switch to an alternative target position (e.g., for a chest X-ray examination the patient could place their hands behind their back, but if this does not work for them, they can "hug" the wall stand instead)
Accept a correct movement of smaller amplitude.
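The staged triage above can be condensed into a small sketch. The boolean inputs and the returned action strings are simplifying assumptions, not from the disclosure.

```python
def stage_response(moved, direction_ok, magnitude_ok, already_encouraged):
    """Return (stage, action) for one observation of the patient.
    Stage -1 means the target posture was reached; stages 0-2 follow
    the staged workflow described in the text."""
    if not moved or not direction_ok:
        # Stage 0: cognitive overload / confusion
        return 0, "repeat or rephrase instructions; show movement from another angle"
    if magnitude_ok:
        return -1, "target posture reached"
    if not already_encouraged:
        # Stage 1: right direction, insufficient magnitude
        return 1, "encourage further movement; emphasize end of animation loop"
    # Stage 2: physically unable despite encouragement
    return 2, "switch to alternative target position or accept smaller amplitude"
```

Calling this once per guidance cycle, with `already_encouraged` tracking whether stage 1 has been tried, reproduces the escalation from confusion handling through encouragement to accepting an alternative pose.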
In a further exemplary embodiment a computer program or a computer program element is provided, characterized by being configured to perform the method steps of the method according to one of the preceding embodiments on a suitable system.
Thus, a computer program element may be stored on a computing unit, which may also be part of an embodiment. The computing unit may be configured to perform or cause to perform the steps of the above-described method. Furthermore, it may be configured to operate components of the above-described apparatus and/or systems. The computing unit may be configured to operate automatically and/or to execute commands of a user. The computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out a method according to one of the preceding embodiments.
This exemplary embodiment of the invention covers both the use of the computer program of the invention from the beginning and the conversion of an existing program into a computer program using the program of the invention by means of an update.
Further, the computer program element may be capable of providing all the required steps of a process to implement the exemplary embodiments of the method described above.
According to a further exemplary embodiment of the invention, a computer-readable medium, such as a CD-ROM, a USB stick or the like, is proposed, wherein the computer-readable medium has stored thereon a computer program element, which is described in the previous section.
A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
However, the computer program may also be presented on a network, such as the world wide web, and may be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the invention, a medium for making available for downloading a computer program element is provided, which computer program element is arranged to perform a method according to one of the preceding embodiments of the invention.
It has to be noted that embodiments of the invention are described with reference to different subject-matters. In particular, some embodiments are described with reference to method class claims, while other embodiments are described with reference to apparatus class claims. However, those skilled in the art will appreciate from the description of the context that, unless otherwise indicated, in addition to any combination of features belonging to one type of subject matter, any combination between features relating to different subject matter is also considered as being disclosed with the present application. However, all features can be combined, providing more synergistic effects than simple addition of features.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the dependent claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope.

Claims (15)

1. A patient positioning adaptive guidance system (10), comprising:
a medical image acquisition unit (20);
at least one communication unit (30);
at least one camera (40); and
a processing unit (50);
the medical image acquisition unit is an X-ray image acquisition unit, an MRI image acquisition unit or a PET image acquisition unit;
wherein the medical image acquisition unit is configured to acquire a medical image of a patient;
wherein the processing unit is configured to control the at least one communication unit to provide body positioning information to the patient prior to acquisition of the medical image;
wherein one or more of the at least one camera is configured to acquire position and moving image data of the patient associated with providing the body position information to the patient;
wherein the processing unit is configured to determine whether the patient has not reached a desired location due to physical or cognitive limitations, the determination comprising utilization of the body positioning information and the location and movement image data of the patient, wherein a determination is made that the patient has not reached the desired location due to physical limitations based on a determination that the patient has moved in the correct direction, and wherein a determination is made that the patient has not reached the desired location due to cognitive limitations based on a determination that the patient has moved in an incorrect direction or has not substantially moved;
wherein the processing unit is configured to adjust the body positioning information in at least one first way based on a determination that the patient has not reached the desired position due to body limitations; and is also provided with
Wherein the processing unit is configured to adjust the body positioning information in at least one second way based on a determination that the patient has not reached the desired position due to cognitive limitations.
2. The system of claim 1, wherein one or more of the at least one camera is configured to acquire facial image data of the patient associated with providing the body position information to the patient, and wherein the determination of whether the patient has not reached a desired location due to physical or cognitive limitations includes utilization of the facial image data of the patient associated with providing the body position information to the patient.
3. The system of claim 2, wherein the processing unit is configured to determine whether the facial image data indicates that the patient is in pain or whether the facial image data indicates that the patient is in confusion, and wherein the determination of whether the patient has not reached a desired location due to physical or cognitive limitations includes utilization of the determination of whether the patient is in pain or confusion.
4. A system according to any of claims 1-3, wherein the body positioning information comprises visual and/or audio information and/or tactile feedback.
5. The system according to any of claims 1-4, wherein the processing unit is configured to control the at least one communication unit to provide the body positioning information in the form of information related to movement from a starting position to the desired position.
6. The system of claim 5, wherein the information related to movement from the starting position to the desired position is in the form of at least one repeatable cycle.
7. The system of claim 6, wherein the information related to movement from the starting position to the desired position is in the form of a plurality of repeatable cycles, each cycle related to a different segment of movement from the starting position to the desired position.
8. The system of any of claims 1-7, wherein the at least one first mode includes providing encouragement to the patient to move to the desired location.
9. The system of any of claims 1-8, wherein the at least one first mode includes adapting the body positioning information to a mobility capability of the patient, and/or wherein the at least one first mode includes adapting the body positioning information to a body characteristic of the patient.
10. The system of any of claims 1-9, wherein the at least one second manner includes changing an avatar to have a shape that more closely matches a patient's shape or changing the avatar to be more abstract.
11. The system of any of claims 1-10, wherein the at least one second manner includes changing a perspective from which body positioning information in the form of visual instructions is presented to the patient and/or adding at least one enhanced visual element to visual information, wherein the enhanced visual element includes one or more of: text, arrows, highlighting.
12. The system of any of claims 6-7, or any of claims 8-11 when dependent on any of claims 6-7, wherein the at least one second manner comprises changing a length of time of the repeatable cycle, changing an order of the plurality of cycle segments, changing a playback speed of the repeatable cycle.
13. The system of any of claims 1-12, wherein the at least one second manner includes changing the body positioning information from visual information to audio information, or changing the body positioning information from audio information to visual information, or changing the body positioning information from visual information to visual information and audio information, or changing the body positioning information from audio information to visual information and audio information.
14. A patient positioning adaptive guidance method (100), comprising:
a) Controlling (110), by the processing unit, at least one communication unit to provide body positioning information prior to acquisition of a medical image of a patient;
b) Acquiring (120), by one or more of at least one camera, position and movement image data of the patient associated with providing the body position information to the patient;
c) Determining (130), by the processing unit, whether the patient has not reached a desired position due to physical or cognitive limitations, the determining comprising utilizing body positioning information and the position and movement image data of the patient, wherein a determination is made that the patient has not reached the desired position due to physical limitations based on a determination that the patient has moved in the correct direction, and wherein a determination is made that the patient has not reached the desired position due to cognitive limitations based on a determination that the patient has moved in an incorrect direction or has not substantially moved;
wherein the method comprises the following steps:
adjusting, by the processing unit, the body positioning information in at least one first manner based on a determination that the patient has not reached the desired location due to physical limitations; or
Adjusting, by the processing unit, the body positioning information in at least one second manner based on a determination that the patient has not reached the desired location due to cognitive limitations; and
d) After adjusting the body positioning information, the medical image of the patient is acquired (140) by a medical image acquisition unit, and wherein the medical image acquisition unit is an X-ray image acquisition unit, an MRI image acquisition unit or a PET image acquisition unit.
15. A computer program element for controlling a system according to any one of claims 1 to 13, which, when being executed by a processor, is configured to perform the method of claim 14.
CN202280032875.2A 2021-05-03 2022-04-26 Patient positioning adaptive guidance system Pending CN117396976A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP21171851.5 2021-05-03
EP21171851.5A EP4086916A1 (en) 2021-05-03 2021-05-03 Patient positioning adaptive guidance system
PCT/EP2022/060986 WO2022233638A1 (en) 2021-05-03 2022-04-26 Patient positioning adaptive guidance system

Publications (1)

Publication Number Publication Date
CN117396976A true CN117396976A (en) 2024-01-12

Family

ID=76034414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280032875.2A Pending CN117396976A (en) 2021-05-03 2022-04-26 Patient positioning adaptive guidance system

Country Status (3)

Country Link
EP (2) EP4086916A1 (en)
CN (1) CN117396976A (en)
WO (1) WO2022233638A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1732495A4 (en) * 2004-02-05 2012-08-08 Motorika Ltd Methods and apparatus for rehabilitation and training
US10943407B1 (en) * 2019-01-25 2021-03-09 Wellovate, LLC XR health platform, system and method

Also Published As

Publication number Publication date
EP4334956A1 (en) 2024-03-13
WO2022233638A1 (en) 2022-11-10
EP4086916A1 (en) 2022-11-09

Similar Documents

Publication Publication Date Title
AU2017386412B2 (en) Systems and methods for real-time data quantification, acquisition, analysis, and feedback
US11944446B2 (en) Apparatus, method, and system for pre-action therapy
US10973439B2 (en) Systems and methods for real-time data quantification, acquisition, analysis, and feedback
US11633659B2 (en) Systems and methods for assessing balance and form during body movement
US11679300B2 (en) Systems and methods for real-time data quantification, acquisition, analysis, and feedback
US20190282324A1 (en) Augmented Reality Device for Providing Feedback to an Acute Care Provider
KR101660157B1 (en) Rehabilitation system based on gaze tracking
US20140371633A1 (en) Method and system for evaluating a patient during a rehabilitation exercise
CN112384970A (en) Augmented reality system for time critical biomedical applications
US20130324857A1 (en) Automated system for workspace, range of motion and functional analysis
US20150004581A1 (en) Interactive physical therapy
CN108091377B (en) Use of infrared light absorption for vein discovery and patient identification
KR102388337B1 (en) Service provision method of the application for temporomandibular joint disease improvement service
JP2016080752A (en) Medical activity training appropriateness evaluation device
CN113257387B (en) Wearable device for rehabilitation training, rehabilitation training method and system
Lupu et al. Virtual reality system for stroke recovery for upper limbs using ArUco markers
CN117396976A (en) Patient positioning adaptive guidance system
US20240215922A1 (en) Patient positioning adaptive guidance system
JP7353605B2 (en) Inhalation motion estimation device, computer program, and inhalation motion estimation method
EP4181789B1 (en) One-dimensional position indicator
CN116052835A (en) Joint rehabilitation training method and system based on computer vision
CN116709977A (en) System and method for patient profile creation
CN118197545A (en) Pelvic floor muscle rehabilitation training auxiliary system and method based on virtual reality
Vella et al. Towards the Human Ethome: Human Kinematics Study in Daily Life Environments

Legal Events

Date Code Title Description
PB01 Publication