EP3908969A1 - Method and system for detecting the movement sequence of a person (Verfahren und System zur Erfassung des Bewegungsablaufs einer Person) - Google Patents
Method and system for detecting the movement sequence of a person
- Publication number
- EP3908969A1 (application EP20700648.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- patient
- service robot
- person
- movement sequence
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Definitions
- The invention comprises a method and a system for detecting the movement sequence of a person.
- The health system suffers from a considerable shortage of skilled workers. This means that therapy and care can increasingly be provided only to an insufficient extent, with considerable effects on health costs and on the value added of the economy. For patients, this may also mean prolonged suffering or even secondary illnesses, which can arise, for example, from poor posture during rehabilitation measures if patients are not adequately instructed. These effects go hand in hand with the increasing need to document the patient's condition so that, from a clinic's perspective, it is possible to defend itself against claims for damages attributed to inadequate therapies. In some cases, this can lead to a self-reinforcing effect.
- The system described in this document addresses this problem in that the primary monitoring of rehabilitation measures, in particular of posture and gait during a movement, is provided by, for example, a service robot.
- This service robot is also able to document the completed exercises precisely, which means that the healthcare facility in question is able to meet its compliance obligations in this regard without having to second staff separately.
- Another effect is that the use of such a system standardizes the assessment of the success of therapy: at present, the assessment of whether an exercise is performed correctly is subject to the judgment of a therapist, who differs from other therapists through individual experience. Different therapists may therefore assess the same exercise differently, while with the system or the service robot there is a uniform assessment.
- The use of the service robot, for example in the areas of gait and stair training, provides significant relief for the therapeutic staff: in neurological, geriatric and internal medicine facilities, the service robot can take care of patients with poor orientation, and the staff effort can be reduced, for example, for patients suitable for walking and stair training.
- The service robot helps to maintain the result of the operation and to avoid incorrect movement patterns in the movement sequence.
- Therapeutic training after an operation attempts to correct the wrong movement, e.g. an incorrect gait, which the patient may have acquired, for example, due to pain.
- The service robot is a good alternative for recognizing and correcting errors in the movement sequence in good time.
- CN206833244, in which a service robot distributes materials in the hospital, is similar in nature.
- Chinese patent applications CN107518989 and CN101862245, which include a service robot that transports patients, similar to a wheelchair, also operate in the hospital environment.
- CN205950753 describes a service robot that detects patients using sensors and guides them through a hospital.
- CN203338133 describes a service robot to support the nursing staff, which accompanies patients in the hospital in everyday matters.
- CN203527474 refers to a service robot that supports the elderly with its arm.
- CN108073104 refers to a nursing robot that cares for infected patients by providing these patients with medication or administering these medications, massaging the patients, making sure they eat enough, communicating with them, etc.
- The nursing robot thereby reduces the risk of infection for medical staff by reducing the number of patient contacts of the staff.
- A service robot to accompany older people can be found in CN107598943. This service robot has some monitoring functions, but above all a function for floor cleaning.
- CN106671105 is a mobile service robot for the care of the elderly.
- The service robot uses sensors to monitor parameters of the body, such as temperature, but also facial expressions. It also recognizes whether the person has fallen and can summon help accordingly via a network.
- CN104889994 and CN204772554, in which a service robot from the medical field detects the heart rate, supplies patients with oxygen and has voice recognition and a multimedia module for entertainment purposes, are similar in nature.
- The blood oxygen level is also determined in CN105082149.
- CN105078445 refers to a service robot that makes it possible to record an electrocardiogram and measure the oxygen content in the blood, especially in older people.
- CN105078450, with an electroencephalogram measurement, is similar.
- In CN108053889, a system is described in a relatively abstract manner which carries out exercises with a patient based on stored information.
- CN108039193 describes a system for the automatic generation of health reports that is used in a service robot, including the recording of movements/fitness exercises using the service robot.
- CN106709254 describes a service robot for the medical diagnosis of a patient, which at the same time, based on the diagnosis, also creates a plan for treatment. For this purpose, the service robot evaluates voice and image information and compares it with information stored in memories.
- A neural network is used here.
- CN106407715 describes a service robot which takes the patient's medical history by means of speech processing and image recognition. In addition to querying via voice input and output devices and via a touchpad, a photo of the tongue, taken by a camera of the service robot, is also used for the medical history.
- CN105078449 describes a service robot with a tablet computer as a communication unit, via which, among other things, cognitive functional training or a cognitive psychological assessment takes place to detect Alzheimer's in patients.
- The tablet records a telephone call between the patient and a child that follows a certain process, and derives from the course of the conversation whether the patient has Alzheimer's.
- Jaeschke et al. 2018 validate whether a gait evaluation using a Microsoft Kinect on a service robot can provide valid results in comparison to established stationary gait evaluation systems (the gold standard) when it comes to determining the position of the joints and limbs, i.e. whether the parameters relevant for the evaluation of gait training can be recorded in this way.
- Steps per minute, stride speed, stride length and time on one or both feet are mentioned as relevant parameters, as well as extension and flexion of the ankles, knees and hips, the inclination of the pelvis, and the forward or backward lean of the trunk.
- Trinh et al. 2018 set out how a seated person can be recognized by a service robot and how a service robot can interact with this person. It is also described that people with walking aids are identified via a 2D laser scanner.
- Vorndran et al. 2018 illustrate how a user who completes gait training is tracked by a service robot driving in front of the user.
- A camera whose orientation can be controlled is also used for this purpose, which enables better tracking of the user.
- People tracking takes place using LIDAR and an RGB camera.
- The service robot also uses both a laser scanner and an RGB-D camera (3D camera) to forecast the behavior of (future) users. This forecast of user behavior is used both to control the service robot and to track with the RGB camera using a PID controller.
- The invention comprises a method and a system, e.g. a service robot, to support the therapy of a patient, in particular the therapy of the patient's movement sequence.
- The service robot has sensor devices to detect the patient's movements, which are compared with predetermined movement sequences stored in a storage unit in the service robot or in a cloud. Based on any deviations in the movements, the service robot can provide the patient with tips on how to improve his movements. The exercises completed and the data stored here can then be evaluated by a therapist.
- The sequence of movements includes movements of body elements of the person.
- The method comprises detection, by a non-contact sensor, of a plurality of images of the person during a movement sequence (e.g. a gait sequence), the plurality of images representing the movements of the body elements of the person; creation of at least one skeleton model with limb positions for at least some of the plurality of images; and calculation of the movement sequence from the movements of the body elements.
- The method can also include a comparison of the calculated movement sequence with a predetermined movement sequence, which is stored in a memory.
- The calculation of the movement sequence includes the evaluation of movements of the body elements over at least one complete gait cycle, so that the system obtains a complete picture of the movement sequence.
- The method also includes recognition of at least one walking aid in the plurality of images by comparison with walking-aid patterns.
- The method can carry out a coherent evaluation of the at least one walking aid and at least one ankle point obtained from the skeleton model, the evaluation determining the difference between the at least one ankle point and a floor-side end point of the at least one walking aid. This difference is determined, e.g., in the sagittal plane, and the evaluation is carried out, e.g., at a point of contact with the ground.
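As a rough illustration of this coherent evaluation, the following Python sketch computes the sagittal-plane offset between an ankle point and the floor-side end point of a walking aid at frames where the aid touches the ground. It is a minimal sketch under assumed conventions (x axis = walking direction, z axis = height, coordinates in metres); the function names and the height threshold are illustrative, not taken from the patent.

```python
import numpy as np

def crutch_ankle_offset(ankle: np.ndarray, crutch_tip: np.ndarray) -> float:
    """Signed sagittal-plane distance between an ankle point and the
    floor-side end point of a walking aid. Inputs are 3D coordinates
    (x: walking direction, y: lateral, z: height); the sagittal
    component is taken along x here by assumption."""
    return float(crutch_tip[0] - ankle[0])

def offsets_at_ground_contact(ankle_traj, crutch_traj, z_eps=0.03):
    """Evaluate the offset only at frames where the crutch tip is in
    contact with the ground (height below threshold z_eps, metres)."""
    return [
        crutch_ankle_offset(a, c)
        for a, c in zip(ankle_traj, crutch_traj)
        if c[2] < z_eps
    ]
```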
- The method may further include a notification if the movements in the detected movement sequence deviate from the movements in the predetermined movement sequence, in order to inform the person about incorrect movements in the movement sequence.
- The number of messages output depends on the number and type of the detected deviations.
- The system for detecting a movement sequence of a person comprises at least one sensor for contactless detection of a plurality of images of the person during a movement sequence (for example a gait sequence), the plurality of images representing the movements of the person's body elements.
- The evaluation unit can have a memory with predetermined values for the positions of the body elements in the case of an intended movement sequence and, in operation, compares the predetermined values with the movements of the body elements. During operation, the evaluation unit evaluates the positions of the body elements with the aid of walking aids over at least one gait cycle and/or a symmetry of the movement of the body elements.
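The patent leaves open how the symmetry of the movement of the body elements is quantified; one common choice, shown here purely as an assumed illustration, is a symmetry index over a left/right gait parameter such as the stance time per leg:

```python
def symmetry_index(left: float, right: float) -> float:
    """Classical symmetry index in percent: 0 means perfectly
    symmetric gait, larger values mean growing left/right asymmetry."""
    return 100.0 * abs(left - right) / (0.5 * (left + right))

# Example: stance times per leg (seconds), averaged over one gait cycle
si = symmetry_index(left=0.68, right=0.74)   # ~8.5 % asymmetry
```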
- The system also has an output unit for outputting messages when deviations between the movements of body elements and the predetermined movement sequence are determined.
- The system comprises a segmentation unit for recognizing objects, e.g. walking aids or other items, in the plurality of images.
- Fig. 1 Exemplary system architecture
- Fig. 2 Top view of the wheels of the service robot
- Fig. 3 Management system of the service robot
- Fig. 4 exemplary exercise plan
- Fig. 8 3D data acquisition and evaluation
- Fig. 10 Self-learning procedure for adapting the exercise plan
- Fig. 11 Evaluation matrix of the patient's movement data for the therapist
- Fig. 12 Method for improving the motion sequence correction
- Fig. 13 automated improvement of the movement sequence correction
- Fig. 14 Course of trunk lean, hip flexion and knee flexion of a patient with a prosthesis (TEP) in the right hip
- Fig. 15 Use of walking aids over time
- Fig. 16 Course of stance times of a patient with a prosthesis in the right hip
- Fig. 17 Histogram of the error classes: symmetry of the gait sequences
- Fig. 18 Exercise interruption
- A service robot 17 is shown in FIG. 3 and can be designed in different hardware-software configurations that include different components and/or modules.
- This service robot 17 is an example of a system for detecting the movement sequence of a person.
- An exemplary system architecture is shown in FIG. 1. As described by way of example at other points in this document, alternative aspects are also possible in which individual components and/or modules are added and/or omitted.
- A service robot 17 has at least one processor (in PC or ARM architecture) and at least one memory connected to the processor.
- The system architecture comprises four levels, including three software levels (an application level 2010, a behavior level 2020, and a level 2030 of service robot capabilities) and a hardware level 2080.
- Levels 2010, 2020 and 2030 mainly depict modules, which for reasons of clarity are not all shown explicitly in Fig. 1 and are not always referred to explicitly in the text.
- At level 2030 the service robot capabilities are mapped, which in turn form the basis for the behavior level 2020, which depicts the behavior of the service robot 17, while the application level 2010 covers the applications.
- At this application level 2010 there is, for example, a gait training application in a movement training module 2011, in which instructions etc. for a patient are stored.
- The training module can also include other training instructions that are not necessarily movement-related, e.g. instructions for memory training, etc.
- Also stored there are an exercise plan together with instructions such as speech and/or display output to implement the exercise plan 2012, the evaluation of the exercises 2013 in the exercise plan, and finally (optional) patient data 2014 such as age, comorbidities, the patient's room number, etc.
- The four levels build on one another: for example, the movement training application requires certain robot capabilities, which in turn require certain hardware components.
- At the behavior level 2020 there is a module for user guidance 2021 and a module for movement sequence correction, the movement correction module 2022. Furthermore, there is a module which depicts how the service robot 17 approaches the patient 2023, i.e. also how the service robot 17 communicates with the patient. Another behavior mapped on this behavior level 2020 is driving to the goal 2024.
- The person recognition module 2040 includes a person identification module 2041 for person identification, a first person tracking module 2042 for visual person tracking primarily via a 2D camera, and a second person tracking module 2043 for LIDAR-based person tracking. There is also a sub-module, the re-identification module 2044, for person re-identification, which is used when a person (patient) has left a tracking area; a sub-module, the seat recognition module 2045, which is used to recognize people (patients) sitting on a chair; and a sub-module, the skeleton recognition module 2046, for 3D skeleton recognition, which can be done using a 2D or 3D camera.
- Another module at the level of service robot capabilities 2030 is a motion evaluation module 2050, which includes a movement sequence extraction module 2051 for feature extraction of the movement sequence and a movement sequence evaluation module 2052 for recording and evaluating the movement sequence of the patient.
- In the navigation module 2060 there is a submodule for 2D/3D acquisition 2061, a mapping module 2061a for mapping the surroundings, and a map module 2061b with a map of the surroundings in which the service robot 17 moves. Furthermore, the navigation module 2060 has a submodule for self-localization 2062, for example within a mapped environment. In addition, the navigation module 2060 has a submodule 2063 to keep the people to be tracked always in view of the service robot 17.
- A path planning module 2064 for metric path planning ensures that the service robot 17 can efficiently calculate its own route to be covered.
- A motion planning module 2065 for motion planning uses, among other things, the results of the metric path planning from the path planning module 2064 and calculates an optimal path for the service robot 17, taking into account various objective functions.
- A submodule for the user approach 2066 defines how the service robot 17 navigates in order, for example, to address the patient.
- The submodule 2067 ensures that a distance to the user (e.g. a patient, a therapist, a caregiver, or another person) is maintained which reflects both safety requirements and the personal, culturally shaped interpersonal distance that the service robot 17 should keep when interacting with people.
- The service robot 17 has a mechanism which detects a self-blocking situation 2068 and can also resolve it again.
- A module for determining waiting positions 2069 ensures that the service robot 17 takes up waiting positions where it does not bother anyone.
- A module for the energy supply ensures that the service robot 17 automatically searches for a charging station when its energy is low, docks there and charges its battery.
- The level of service robot capabilities 2030 also holds a module that is dedicated to the human-service robot interaction 2070.
- A submodule 2071 covers the graphical user interface (GUI), another submodule 2072 provides eye contact between the patient and the service robot 17 (if the service robot 17 has a head with eyes 2094), and two further submodules provide speech synthesis 2073 and speech recognition 2074.
- At the hardware level 2080 there is an odometry module 2081, i.e. a measuring and control unit for the odometry function, which is connected to the navigation module 2060 via an interface.
- Pressure-sensitive bumpers 2082 are located several centimeters above the ground and allow collision detection.
- A charging port with associated charging electronics 2091 makes it possible to recharge the integrated battery via an external charging device.
- Alternative energy sources, such as a fuel cell, including a direct methanol or a solid oxide fuel cell, are also possible for the power supply of the service robot 17.
- The service robot 17 has a LIDAR 2083 and a panorama camera (2D, RGB) 2084. There is also an RGB-D camera (3D) 2085 on the service robot 17, which has a zoom function and whose orientation can be controlled for tracking 2086.
- Wireless interfaces, a WLAN module 2088 and an interface for an RFID transponder 2089, allow the electronic exchange of data.
- The service robot 17 has a touch display 2087. At least one loudspeaker 2092 enables, for example, the output of speech synthesis 2073, and at least one microphone 2093 enables the recording of speech signals, for example with the aim of speech recognition 2074. Natural language processing is also possible.
- A head with controllable eyes 2094 (with 6 degrees of freedom) ensures improved human-machine communication on an emotional level.
- The components 2087, 2092-2094 serve primarily the human-service robot interaction.
- The display 2087 can be used, among other things, for the following purposes within the user guidance 2021:
- The service robot 17 can also have lighting elements in order to give instructions to the patient, for example to signal that the patient should turn into a certain corridor.
- These lighting elements are located, for example, in the upper area of the service robot 17 and comprise, for example, LEDs.
- The service robot 17 has two drive wheels 6, which are centered and arranged parallel to one another (see FIG. 2). Around them, for example on a circular path, there are two or more support wheels 5.
- This arrangement of the support wheels 5 allows the service robot 17 to be rotated on the spot by driving the drive wheels 6 in opposite directions.
- The horizontal axis of each support wheel 5 is mounted such that it can rotate 360 degrees around the vertical axis.
- In one aspect, the distance between the drive wheels 6 is greater than that shown in FIG. 2, so that the service robot 17 is prevented from tilting too easily.
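For illustration, the kinematics behind this turning-on-the-spot behavior can be sketched with a minimal differential-drive model; the variable names and the wheel-base value are assumptions, not taken from the patent:

```python
def body_twist(v_left: float, v_right: float, wheel_base: float):
    """Forward kinematics of a differential drive: wheel rim speeds
    (m/s) -> linear velocity v (m/s) and yaw rate w (rad/s)."""
    v = 0.5 * (v_right + v_left)
    w = (v_right - v_left) / wheel_base
    return v, w

# Driving the drive wheels 6 in opposite directions yields v = 0:
# the robot rotates on the spot, as described above.
v, w = body_twist(-0.2, 0.2, wheel_base=0.4)   # v = 0.0, w = 1.0 rad/s
```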
- This cloud 18 can be either a public or a private cloud ("on premise"). A therapist has access to a patient administration module 161, which in turn is connected to a memory of the patient administration module 162.
- The patient administration module 161 and the memory of the patient administration module 162 are collectively referred to as the patient administration module 160.
- The therapist can store patient data in the memory of the patient administration module 162 or, in one aspect, import this patient data via an interface from at least one other patient data management system 170, which in turn has a computer of the patient data management system 171 and a memory of the patient data management system 172. These other systems include hospital management systems, hospital information systems (HIS) and/or patient data management systems, as are usually used in clinics or rehabilitation facilities.
- Via the patient administration module 160, the therapist can assign the exercise plan to the patient, modify the exercise plan over time, and view the evaluations of the exercises in the exercise plan that the patient has carried out with the service robot 17 and that were transferred from the service robot 17 into the memory of the patient administration module 162 via an interface.
- The patient administration module 160 documents the treatment progress of the patient in that it receives evaluations carried out by the service robot 17, and it can transfer the treatment progress to external patient data management systems 170 such as hospital management systems.
- Furthermore, a navigation system 180 is located in the cloud 18, which holds navigation information via a computer of the navigation system 181 and a memory of the navigation system 182.
- This navigation information is connected via an interface to the navigation module 2060 of the service robot 17, in particular to a room plan module 2060r.
- The coordinates of the rooms in which the service robot 17 moves, and which have also been mapped by the service robot 17, are assigned, for example, room numbers which are stored in the navigation system 180 and/or in the room plan module 2060r.
- The cloud 18 is connected to a set of rules 150, which has a computer of the rule set 151 and a memory of the rule set 152. Central here are primarily those algorithms that are used at the application level 2010, the behavior level 2020 and the service robot capabilities level 2030 of the service robot 17 in FIG. 1, but also those that are used in the patient administration module 160. Algorithms that are used to evaluate the movement sequence in the movement sequence evaluation module 2052 may be mentioned as examples. This also means that, depending on the aspect, individual modules from FIG. 1 can be kept only in the cloud 18, provided the service robot 17 has an online connection to the cloud 18, in particular also when the connection is used for navigation purposes. Other algorithms in the rule set 150 may generate suggestions to the therapist for exercise plan adjustments.
- There is also a learning module 190 in the cloud 18, with at least one learning module computer 191 and at least one learning module memory 192.
- Historical data are stored here which the therapist has generated, for example, when creating an exercise plan in the patient administration module 160, and/or which come from an external patient data management system 170 and were, for example, previously transferred to the patient administration module 160, and/or which come directly from the service robot 17. If this historical data relates to the patient, it is anonymized beforehand. This historical data is accessible via a terminal 12 (see FIG. 3) and can be labeled, for example. As will be described in more detail below, this historical data is used to improve the algorithms in the rule set 150.
- The rule set 150 is configured such that the algorithms installed locally on the service robot 17 can be updated, for example via a wireless interface such as the WLAN module 2088, with algorithms being transferred from the memory of the rule set 152.
- Likewise, the memory of the patient administration module 162 can be updated via the cloud 18.
- The service robot 17 shown as an example itself has a computer 9 and a memory 10, at least one sensor 3, at least one support wheel 5, at least one drive wheel 6, and an energy source 8.
- The service robot 17 can have alternative and/or additional sensors. These include ultrasonic sensors and/or radar sensors as well as the pressure-sensitive bumpers 2082.
- The service robot 17 can have one or more magnetic sensors which are arranged in such a way that they detect magnets on the floor, with the purpose of limiting the area in which the service robot 17 moves.
- The service robot 17 can also have infrared sensors, ultrasound and/or radar sensors which are directed towards the floor and which, in one aspect, are configured so that they can detect steps, for example.
- The information from these infrared, ultrasound and/or radar sensors can, for example, be entered into the maps created in the mapping module 2061.
- The at least one 3D camera (either designed as an RGB-D or as a pure depth camera) can be used not only for functions of the person recognition module 2040, but also for three-dimensional mapping in the mapping module 2061.
- The at least one 2D RGB camera can, with the help of appropriate frameworks such as OpenPose (Cao et al. 2017), also be used for 3D person tracking 2042; in the case of a Kinect the corresponding manufacturer framework is used, in the case of an Astra Orbbec, for example, NUITrack. In this way, the 3D camera can be replaced.
- Time-of-flight (ToF) technology can be used, such as in the Microsoft Kinect, or a speckle sensor, as in the Astra Orbbec.
- One or more 3D cameras can also be used as a replacement for the LIDAR 2083.
- The person recognition in the person recognition module 2040 takes place on the basis of at least one optical sensor such as the LIDAR 2083, a 2D camera 2084 and/or a 3D camera (either configured as an RGB-D camera 2085 or as a pure depth camera).
- The LIDAR 2083, especially in the case of a 2D LIDAR, is not used alone, but in combination with at least one 2D camera 2084 and/or 3D camera.
- The distance between the patient and the service robot 17 is determined via the LIDAR 2083, and the patient's movements and poses are detected via the 2D camera 2084 or the 3D camera, the latter also being able to provide the data for determining the distance between patient and service robot 17.
- Alternatively, a 2D RGB camera and a separate 3D depth camera can be used, which means additional effort in signal processing, in particular synchronization, compared to a (3D) RGB-D camera.
- The term “poses” is understood to mean the orientation of a person in the room including their limbs/body parts (body elements), as well as the orientation of the service robot 17 in the room.
Establish exercise plans
- The starting point, for example, for therapeutic gait training is the exercise plan, the aim of which is to record the physical abilities of the patient and thus his or her progress over time.
- The service robot 17 is configured in such a way that the described sensor system makes supporting the therapeutic gait training possible. For example, after the implantation of a total hip endoprosthesis (hip TEP), the patient must take care to relieve the operated area with the help of forearm crutches (UAGS) and try to get as close as possible to the normal or physiological movement sequence.
- This sequence of movements is clearly defined and detailed, for example, in Chapter 2 of Götz-Neumann, “Understanding Gait” (Thieme-Verlag, 2016) and in Chapter 6 of Smolenski et al., “Janda, Manual Muscle Function Diagnostics” (Elsevier-Verlag, 2016). Due to pain, the patient has “trained” a gait pattern (i.e. movement sequence) up to the time of the operation, which the patient now continues after the operation.
- This trained gait pattern is also referred to below as the “deviating gait sequence” or “deviating movement sequence”; corresponding features of the movement sequence, such as a step length influenced by pain and therefore different from that in the physiological movement sequence, are called, e.g., a deviating stride length.
- The supports and the contralateral leg are placed forward at the same time or in close succession, followed by the opposite side (right forearm crutch - left leg, left forearm crutch - right leg; this corresponds to a reciprocal sequence of movements). As soon as the patient no longer needs this relief, the transition can take place in consultation with the doctor.
- The transition from three-point gait to two-point gait takes place when the patient shows a fluid movement, characterized by stance time, swing leg phase, stride length and/or increasing symmetry between both legs and an increase in walking speed.
- The criteria for the transition to the two-point gait, or the omission of the UAGS, are met if the service robot 17 makes fewer corrections to the patient than a threshold value and the walking speed determined by the service robot 17 is above a threshold value, which in turn can depend on various influencing parameters such as the initial walking speed determined during the first training session, comorbidities, etc.
- The transition can also be made if previously determined exercise parameters, and the exercise progress derived from these parameters, show a high degree of agreement with cases in which a therapist intervened in the exercise plan and gave the patient approval for the two-point gait. Exercise progress is expressed as the difference in the movement sequence parameters over time.
- The service robot 17 can either make the change from three-point to two-point gait in the exercise plan independently or suggest it to the therapist, as illustrated, for example, in FIG. 10 (explained in more detail later), with a view to various other parameters.
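Purely as an illustration of such a threshold-based release, the following sketch combines the two criteria mentioned above. The parameter names and threshold values are assumptions, since the patent states only that the thresholds can depend on influencing parameters such as the initial walking speed and comorbidities:

```python
def release_two_point_gait(corrections_per_session: int,
                           walking_speed_m_s: float,
                           corrections_threshold: int = 3,
                           speed_threshold_m_s: float = 0.8) -> bool:
    """Hypothetical decision rule for the transition from three-point
    to two-point gait: few corrections AND sufficient walking speed.
    In practice the thresholds would be derived from the initial
    walking speed, comorbidities, etc."""
    return (corrections_per_session < corrections_threshold
            and walking_speed_m_s > speed_threshold_m_s)
```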
- Figure 4 illustrates such an exemplary exercise plan that the patient is completing. While the lines indicate the days, the third column shows which exercises the patient carries out with the service robot 17. In this example, there is no exercise on the first day; on the second day, for example, the three-point gait is practiced for 5-10 minutes, with the patient covering approx. 20-200 m. The second column contains the tasks for the patient, created in the patient administration module 160.
- The therapist can, for example, access a graphical user interface (GUI) (FIG. 5), via which, for example, the exercise plans are configured for each day that the patient spends in the clinic or rehabilitation facility.
- These exercise plans include a patient ID and, for example, the date of the operation, the side of the operation (in the case of knee and / or hip operations), a possible release of certain gaits such as the two and / or three-point gait from a certain date, a possible training release for exercises on a staircase, as well as the dosage of the exercise (frequency, duration and route length per day or exercise).
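For illustration, such an exercise plan could be represented as a simple record; all field names and types are assumptions mirroring the items listed above, not a data format defined by the patent:

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, Optional

@dataclass
class ExercisePlanEntry:
    """Illustrative structure of one exercise plan (assumed fields)."""
    patient_id: str
    operation_date: date
    operation_side: str                      # "left"/"right" for knee/hip operations
    gait_release: Dict[str, date]            # e.g. {"three_point": ..., "two_point": ...}
    stair_training_release: Optional[date]   # None until released
    frequency_per_day: int
    duration_minutes: int
    route_length_m: int

plan = ExercisePlanEntry(
    patient_id="P-0042",
    operation_date=date(2020, 1, 6),
    operation_side="right",
    gait_release={"three_point": date(2020, 1, 7)},
    stair_training_release=None,
    frequency_per_day=1,
    duration_minutes=10,
    route_length_m=100,
)
```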
- For the service robot 17, clinic-/facility-specific exercise plans are stored which correspond to the requirements of the respective surgeons.
- The aim is, for example, after an operation on the hip/knee/ankle, on the one hand to learn the correct movement sequence with the UAGS depending on the respective degree of loading and healing, and on the other hand to relearn the respective movements in the joints.
- These exercise plans follow the defined/agreed scheme of the surgeons. However, they can be actively changed or adapted by the therapist, as explained below.
- The therapist can, for example, “reset” the gait in the exercise plan to the three-point gait and let the patient learn the two-point gait at a later time.
- In one aspect, and as described in more detail elsewhere, the therapist can see on the basis of the evaluation of historical data, for example in the GUI, which adjustments to the exercise plan are to be made, primarily dynamically, i.e. for example on day three after the operation based on the training success of the previous days. As described elsewhere in this document, automatic adjustments are also possible.
- This exercise plan, or the instructions associated with it, are transmitted to the service robot 17.
- FIG. 6 describes the data exchange between the patient administration module 160, the navigation system 180, which contains room information (alternatively the room plan module 2060r, which also contains this room information - optional), and the service robot 17.
- First, patient data for a patient are created in the patient administration module 160.
- The patient data include the name, an ID, the diagnosis (e.g. a hip operation), etc.
- The patient data can also be obtained via an interface, for example from an external system such as the hospital information system.
- The coordinates of the room can be taken from the room plan module 2060r in the navigation module 2060 of the service robot 17, or alternatively can be transmitted in step 610 via an interface from a complementary module in the memory of the cloud-based navigation system 180, which in one aspect of the method can be used to pick up patients in their room. Alternatively and/or additionally, the area in which the service robot 17 meets the patient can be defined.
- In one aspect, the exercise plan (and possibly patient data) is transferred 610 to a transponder or storage medium (such as a USB stick), which is given to the patient.
- The patient transfers the data to the service robot 17 by holding the transponder to an RFID reading (and possibly writing) device 2089 of the service robot 17; the service robot 17 recognizes 612 this transponder and reads it 615 accordingly.
- In the case of a memory (for example a USB memory stick) instead of a contactless RFID transponder, the reading and/or writing device 2089 can use a contact-based interface such as a USB port for the USB memory.
- Alternatively, only a patient ID from the patient administration module 160 is transmitted 620 to the transponder.
- For this, an ID of the transponder is associated with the patient in the patient administration module 160 and thus stands in (at least for data transfer via transponder, or possibly also memory) for the patient ID.
- The transponder (possibly also the memory) is recognized 612 at the reading and/or writing device 2089 of the service robot 17, and the patient ID is thus read 625, which again is done by the patient holding up the transponder/memory.
- The service robot 17 then downloads 660, via an interface from the patient administration module 160 located in the cloud 18, the data necessary for carrying out the exercise, such as the exercise plan, the patient ID obtained being used to identify the relevant data record in the memory 162 of the patient administration module 160.
- As an alternative to using a transponder and/or memory, a barcode can also be generated which contains the patient ID 630 and which is given to the patient. This barcode is recognized 632 by the service robot 17 when it is held in front of at least one of the 2D cameras 2084 or 3D cameras. The barcode or the patient ID is read 635. Based on this, the service robot 17 downloads 660 the data necessary for the execution of the exercise, such as the exercise plan, from the patient administration module 160 located in the cloud 18. The patient ID 630 obtained is used to identify the relevant data record in the database 162 of the patient administration module 160.
- As an alternative to the identification methods mentioned so far, the service robot 17 can also receive a login for the patient, which is associated 640 with a patient ID in the patient administration module 160. If the login is entered on the service robot 17 in step 645, the service robot 17 loads 660 the data associated with this login (and thus the patient ID) and relevant for the execution of the exercise from the patient administration module 160.
- Alternatively, biometric data of the patient can also be used, such as an iris scan, a fingerprint or a scan of the face in step 650, which are associated with the patient in the patient administration module 160.
- The patient can then identify himself appropriately on the service robot 17, a corresponding reading device having to be installed in the case of an iris scan or a fingerprint scan.
- For a scan of the face, the 3D camera of the service robot 17, for example the RGB-D camera 2085, can also be configured accordingly. After these steps 645 and 655, the service robot 17 loads the data associated with the patient ID for carrying out the exercises.
- The service robot 17 then carries out the exercises in step 665, records the results of the exercises in step 670 and analyzes the results with a view to the exercise plan in step 675. These three steps 665, 670 and 675 are discussed in more detail elsewhere.
- The data recorded by the service robot 17, in particular the evaluations of the service robot 17, possibly also video recordings of the exercises and raw data of the skeleton recognition (which will be described in more detail below), are transmitted back to the patient administration module 160. In the first case described (steps 610-612-615), this can be done by transferring this data to the transponder/memory in step 680; the transponder/memory is handed over by the patient to the therapist, who reads the transponder/memory at a terminal into the patient administration module 160 in step 685.
- Alternatively, data can be transferred from the service robot 17 to the patient administration module 160 in the cloud 18 in step 690 via an interface, the data being stored there in accordance with the patient ID.
- Such a transmission can take place at the end of the exercises or during the execution of the exercises, in real time or at intervals.
- One aspect of the data transmission relies on anonymity in the data exchange: no data that allow the patient to be identified (without the association of the person ID with, for example, a name) are transmitted to the service robot 17, and no data that would allow identification of the patient are stored on the service robot 17. If the service robot 17 creates video recordings of the patient, these video recordings are anonymized, as will be described elsewhere.
- In one aspect, the service robot 17 is configured so that it picks up a patient at one location, accompanies the patient to an exercise area, does the exercises there and, if necessary, brings the patient back again. These steps are shown in dashed boxes in FIG. 7, since they are optional: the service robot 17 can also wait for the patient at one location.
- The patient signals the service robot 17 that he wishes to exercise.
- Alternatively, an exercise can be scheduled for the patient (date and time defined, for example, in the exercise plan) and/or a complementary schedule can apply.
- The service robot 17 then visits the patient; the information on the premises (such as the patient's room number) comes from the navigation system 180 or the room plan module 2060r, is stored in the patient administration module 160 and is transmitted together with the exercise plan to the service robot 17, as described for FIG. 6.
- The service robot 17 can use the room plan module 2060r and the map created in the mapping module 2061 to locate the room and navigate there, for which it uses its route guidance module 2024 (in Example 7, the mapping procedure is discussed in more detail). When the service robot 17 arrives at the patient, the patient can identify himself at the service robot 17 in step 415, as already explained in detail for FIG. 6. As an alternative to this, the patient can also go to a place where the service robot 17 is waiting. Based on this identification in step 415, the service robot 17 uses its sensor system, comprising a LIDAR 2083, the 2D camera 2084 and/or the 3D camera (either configured as an RGB-D camera 2085 or as a pure depth camera), in order to recognize the patient in step 420, which can be done by means of the person identification in the person identification module 2041.
- The patient is then tracked in step 425, which can be done by means of the first person tracking module 2042 and/or the second person tracking module 2043.
- If tracking is interrupted, the patient is re-identified with the aid of the re-identification module 2044.
- Example 4 shows a possible embodiment in more detail.
- The patient can transmit information about the training to the service robot 17, for example an individual training request.
- The patient can also select exercises that he would like to complete, in contrast to the exercise plan, which contains predefined exercises.
- The service robot 17 checks, in the case of an exercise request chosen by the patient himself, whether it corresponds to the therapist's specifications. Such specifications for gait training relate, for example, to the distance to be covered, etc. With regard to a patient, this means that patients can choose exercise plan configurations that are covered by an automated release for certain exercises. Alternatively and/or additionally, the number and/or type of corrections made in the movement sequence can also be used for the automated release.
- The service robot 17 can also query information in a dialog in step 430 and/or simulate exercises to be completed.
- The dialog can take place via the graphical user interface 2071 on a screen (e.g. the touch display 2087) and/or via speech synthesis 2073 and speech recognition 2074.
- During this, the service robot 17 can maintain an optimal distance 2067 from the user. Thereafter (optionally), the service robot 17 navigates to the exercise area 435 with the aid of algorithms from the route guidance module 2024 (“drive to the destination”).
- The service robot 17 asks the patient to follow it via the described output units, LEDs, etc., or gives the patient information on where to move.
- The patient can move in front of the service robot 17 or follow the service robot 17, both during navigation to the exercise area and during the exercises themselves.
- The service robot 17 also moves primarily at a constant distance from the patient, which improves the sensor detection of the exercises to be completed.
- To this end, the service robot 17 calculates a route that the patient and the service robot 17 should or must complete.
- For this purpose, the mapping module 2061 in connection with the metric path planning from the path planning module 2064 and the motion planning module 2065 can be used, in one aspect using evolutionary algorithms. As soon as the exercise area is reached, the exercises begin. The service robot 17 gives instructions for exercises/corrections in step 440, which are based on the movement correction module 2022. In the further course, the motion evaluation module 2050 records the execution of the exercises (in step 445).
- the instructions for correcting the movement sequence which are generated, for example, as speech output using speech synthesis (and / or are also output on a display), can provide information on positioning the UAGS, straightening the upper body, etc.
- Steps 440-450 are an iterative process, since several corrections and possibly several sub-elements of an exercise can be carried out over the course of the exercise.
- the service robot 17 can also (again) accompany the patient to a location in step 465 (back), for example his room.
- the patient is generally tracked continuously during navigation. If tracking is interrupted during this time, re-identification must take place.
- The processing of the sensor data is illustrated by way of example in FIG. 8.
- Movements of the patient are detected by the sensor, such as a Kinect 2 or an Astra Orbbec, which are RGB-D cameras 2085.
- The depth image generated in step 710 from the sensor data of the 3D depth camera is then transformed in step 715 into data representing a 3D point cloud, in which each pixel of the 3D camera is assigned a spatial coordinate. This creates a 3D representation of the captured scene.
- In step 720, the feature extraction takes place. Skeleton modeling uses the data from the 3D point cloud and derives signals for the recorded color, spatial depth and skeleton information in step 725.
- these signals contain information about, for example, joint points of the respective skeleton model, which, for example, describe the patient's knee joint or hip joint.
- FIG. 20 shows an example of a skeleton model, with the body 1703, the articulation points 1701 and the connections between articulation points 1702, which, if directed, can be output as direction vectors. If frameworks such as OpenPose are used, a 2D camera such as an ordinary RGB camera can also be used instead of a 3D camera.
- The sensor data of the 3D camera are evaluated in such a way that, seen from the sensor, the distance to each detected object or area of an object is determined, this evaluation being dependent on the resolution of the 3D camera and the distance of the objects.
- spatial coordinates can be assigned to the sensor data from the 3D camera. These spatial coordinates are then in turn assigned to the articulation points.
- Mathematical vector operations can be used to define the direction vectors between articulation points in direction and length (which also corresponds to the distance), as well as to calculate angles between them, etc.
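- A minimal sketch (in Python) of the back-projection described in steps 710/715, assuming a pinhole camera model; the function name and the intrinsic parameters fx, fy, cx, cy are illustrative assumptions, not the calibration of the actual sensor:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud.

    Each pixel (u, v) with depth z is assigned the spatial coordinate
    (x, y, z) via the pinhole camera model, as described for steps 710/715.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack((x, y, depth), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard pixels without a depth reading

# Hypothetical intrinsics for a Kinect-2-like depth sensor:
# cloud = depth_to_point_cloud(depth_image, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```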
- In step 730, a joint selection takes place, i.e. only the articulation points necessary for the calculations to be carried out subsequently are selected.
- In step 735, angle calculations are carried out, for example for the angle between the lower leg and the thigh, or the angle by which the thigh deviates from the vertical; for this purpose, a joint point is defined as the base, and the orientations of the limbs/trunk and/or e.g. the plumb line serve as the direction vectors forming the basis for an angle calculation (see also the explanations of FIG. 19). Furthermore, distance determinations are carried out in step 760, for example for time-distance parameters. This category includes, for example, the stride length, the standing time, the track width and the flexion and extension of the hip and knee joints (depending on the therapy to be carried out) over time. They are usually determined over the course of a double step. An example of this can be found in Example 1.
- The stride length can be determined as the Euclidean distance between the ankle points 1950 within the sagittal plane, e.g. at each time when the feet come into contact with the floor (to be recognized, for example, by the minima of the height of the ankle points 1950 above the floor).
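- A minimal sketch of this stride length determination; the per-frame ankle coordinates (sagittal plane) and ankle heights as well as the extrema window are illustrative assumptions:

```python
import numpy as np
from scipy.signal import argrelextrema

def stride_lengths(left_ankle, right_ankle, left_height, right_height):
    """Stride length as the Euclidean distance between the ankle points 1950,
    sampled whenever a foot contacts the floor.

    Ground contact is approximated by the local minima of the ankle height
    above the floor, as described above. Inputs are per-frame arrays.
    """
    left_ankle, right_ankle = np.asarray(left_ankle), np.asarray(right_ankle)
    strides = []
    for heights in (np.asarray(left_height), np.asarray(right_height)):
        contacts = argrelextrema(heights, np.less, order=5)[0]
        for t in contacts:
            # distance between both ankles at the contact frame
            strides.append(float(np.linalg.norm(left_ankle[t] - right_ankle[t])))
    return strides
```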
- In step 740, a recognition of the forearm crutches is carried out, also referred to as UAGS recognition (which in one aspect can also include armpit crutches and/or other types of canes/walking aids).
- the UAGS recognition thus allows the gait pattern to be assessed at a later time in conjunction with gait aids.
- the 3D point cloud of the built-in depth camera recorded in step 710 also serves as the starting point for this detection.
- The UAGS are found in the point cloud by a segmentation unit.
- Knowledge of the patient's skeleton is included to preselect suitable candidate regions 745, i.e. areas in the room where the UAGS are likely to be located. It can be assumed that these represent an extension of the arms downwards, making this area, for example, the candidate region for the detection of UAGS.
- candidate regions are then checked for agreement with model assumptions about the shape of typical UAGS (elongated, narrow) in step 750 and selected if necessary.
- The position of the UAGS, for example the end points 1970 which touch the ground, is in turn used for time-distance parameters within the feature extraction 720. This is followed by the feature classification (or feature evaluation) in step 765, which can take place according to various criteria.
- For example, a previously determined and possibly patient-specific threshold value comes into question, which is exceeded or undercut; for this, the extracted feature is evaluated primarily over time.
- For example, a stride length of 30 cm can be determined by the service robot 17. The maximum allowed deviation of the recorded stride length according to the classification is 20 cm. Since 65 cm standard stride length minus the 30 cm stride length measured by the service robot 17 is greater than the 20 cm deviation value from the classification, the stride length of 30 cm would be assessed as too short. Step length deviations of 20 cm (in absolute values) serve as a threshold value, or, for example, 20 % based on the physiological step length (i.e. that of the leg not operated on), in contrast to the deviating step length, which relates to the operated leg and deviates from the physiological one as a result of the operation.
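- A minimal sketch of such a threshold-based feature classification, using the 65 cm reference and the 20 cm threshold from the example; the function and parameter names are assumptions:

```python
def classify_stride(measured_cm, physiological_cm=65.0, threshold_cm=20.0):
    """Classify a measured stride length against the physiological one.

    threshold_cm can be an absolute value (20 cm) or be derived relatively,
    e.g. 0.20 * physiological_cm, as described above.
    """
    deviation = measured_cm - physiological_cm
    if deviation < -threshold_cm:
        return "too short"
    if deviation > threshold_cm:
        return "too long"
    return "ok"

# 65 cm physiological minus 30 cm measured = 35 cm > 20 cm:
# classify_stride(30.0) -> "too short"
```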
- The classified features are evaluated in context (referred to as movement sequence classification in step 770), i.e. not with a view to individual features, e.g. stride length or track width, but taking into account combinations of features that are expressed, for example, in body poses, in one aspect taking walking aids into account. This can be used to make statements about the use of the prescribed movement type (two-point gait, three-point gait). This happens, like the feature extraction 720 and the feature classification 765, in the movement sequence evaluation module 2052.
- The goal is the output of the movement sequence correction (step 450), i.e. the issuing of instructions to the patient asking him to adapt the movements in his own movement sequence so that they correspond to, or at least come close to, the physiological movement sequence.
- For this purpose, the service robot 17 provides feedback to the patient (indication of errors, correction instructions). The movement sequence classification is assigned a set of rules that can be represented in one aspect as a decision matrix, in which correction outputs are stored for certain poses (e.g. combinations of classified features).
- This movement sequence correction is shown as step 775, decision classification, in FIG. 9.
- For example, the detection values for the stride length of the healthy leg are classified as "too short" (i.e. below a certain threshold) and at the same time the distance between the UAGS and the healthy leg is assessed as "too short forward" (i.e. above a certain threshold). If these two and only these two feature classifications apply in this combination, the decision matrix yields the correction: "Put the healthy leg past the operated leg, beyond the imaginary connecting line between the supports."
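- A minimal sketch of how such a decision matrix could be represented; the keys and the correction text follow the example above, while the data structure itself is an illustrative assumption:

```python
# A combination of feature classifications maps to exactly one correction
# output (step 775); the correction fires only for this exact combination.
DECISION_MATRIX = {
    frozenset({("stride_healthy_leg", "too short"),
               ("uags_to_healthy_leg", "too short forward")}):
        "Put the healthy leg past the operated leg, beyond the "
        "imaginary connecting line between the supports.",
}

def decide_correction(feature_classifications):
    """Return the stored correction for this exact combination, if any."""
    return DECISION_MATRIX.get(frozenset(feature_classifications.items()))

# decide_correction({"stride_healthy_leg": "too short",
#                    "uags_to_healthy_leg": "too short forward"})
```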
- FIG. 9 shows this schematically. Correction announcement 2, for example, is triggered when errors 1, 2 and 4 occur in the movement sequence. Since several corrections can apply to a movement sequence and not all errors can be corrected at the same time, a prioritization determined by therapists can be stored. Movement sequence classifications 770 to which e.g. a correction has been assigned that is not output can nevertheless be saved and made available to the therapist, which is described in more detail in the following section.
- The matrix in FIG. 9 does not have to be deterministic, i.e. it is not necessary to store for each detected deviation of the movement sequence from the physiological movement sequence a correction output that is output in the case of this deviation.
- the output can also be dynamic.
- The correction outputs can, for example, also be prioritized: different priority scores are assigned to individual deviations. The service robot can then carry out a certain number of outputs per time interval, outputting only the highest-priority corrections, i.e. those with the highest priority scores.
- In one aspect, defined delay periods after a detected deviation can be stored, after which the movement sequence correction 450 is output; within these delay periods, for example, only the correction output with the highest priority is issued.
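- A minimal sketch of such a prioritized, delay-gated correction output; the deviation identifiers, priority scores and delay value are illustrative assumptions:

```python
import time

def select_correction(detected, priority_scores, last_output, delay_s=2.0):
    """Output at most one correction per delay interval.

    detected: deviation identifiers found in the current movement sequence.
    Only the deviation with the highest priority score is output, and only
    if the delay period since the last output has elapsed.
    """
    now = time.monotonic()
    if not detected or now - last_output < delay_s:
        return None, last_output
    best = max(detected, key=lambda d: priority_scores.get(d, 0))
    return best, now
```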
- Therapists can influence the set of rules for the movement sequence correction, for example the decision matrix for the movement sequence classification 770, by making settings in the patient administration module in such a way that certain poses/instructions are prioritized and others possibly ignored, with which the movement sequence correction is adaptive. This information can be transmitted to the service robot 17 together with the exercise plan.
- Such settings can be learned together with the exercise plan settings via the suggestion function in the learning module 190 and proposed to the therapist; in a further aspect, they can also be made automatically by the service robot 17, as will be explained in more detail below.
- therapists can, after viewing the evaluations of the movement training that the service robot 17 has carried out with the patient, modify the exercise plan for the patient in order to improve the success of the treatment.
- The system illustrated in FIG. 3 is capable of proposing exercise plan adjustments to the therapist based on historical data. For example, the rules implemented here can generate suggestions to switch back from two-point to three-point gait if the patient makes too many mistakes in the two-point gait, is too slow, the operated leg is still too heavily loaded, etc. The therapist can accept or reject such suggestions.
- The service robot 17 can even carry out these history-based exercise plan adjustments automatically.
- the basis for this ability of the system is a self-learning system, which is shown in FIG. 10 accordingly.
- This self-learning system iteratively increases the quality of the therapy success, which occurs mainly in two ways: a) recording situations that have not been described before because they may occur rarely, and b) increasing the number of cases. Both lead to more precise weight determinations of the node weights within the framework of machine learning models and/or neural networks, which increases the effectiveness of the service robot 17.
- The patient administration module 160 has the function of submitting treatment suggestions to the therapist 1330. In a first stage, the patient administration module 160 receives patient data, either through input by the therapist and/or via the interface to an external system such as a hospital information system (HIS)/patient data management system 170.
- Factors are recorded that have an influence on the design of the therapy, such as parameters that affect the general mobility or agility of the patient (degree of independence, possible paralysis of the extremities, aids to be used) 1310, comorbidities (heart failure, heart attack, dizziness, diabetes, diseases that are associated with an increased risk of falling, such as Parkinson's) 1315, but above all the reason for completing the exercises (such as a hip TEP on the right, due to osteoarthritis) 1320.
- One aspect also includes the type of surgery (direct anterior access, lateral access to the hip, etc.) 1325.
- the therapist 1330 defines an exercise plan 1335 for each patient.
- In one aspect, there are standardized exercise plans 1335, which are transmitted to the patient administration module 160 via an interface (not shown in FIG. 10 for reasons of simplification) and which are proposed to the therapist 1330 in one embodiment variant and selected automatically in another embodiment variant.
- Agility 1305, location of the operation 1320 (such as knee, hip) and/or type of operation 1325 are taken into account accordingly.
- Embodiment variants can also be clinic-specific and in this case can be configured as such, for example, via the patient administration module 160.
- Transferred to the GUI 5, it may be that the therapist 1330 is already shown a pre-selection of options, such as the three-point gait 1345 from the second postoperative day, the transition from three-point gait to two-point gait 1340 from day 4 postoperatively, the release for climbing stairs three days after the completed operation 1350, and a maximum distance of 300 m with two exercises per day.
- The configuration of the exercise plan 1355 is thus a function of, for example, the starting day, the duration of the exercises, the distance, etc. These settings, which can also be specified automatically by the set of rules 150, represent the exercise plan 1335.
- This exercise plan 1335 is, in one aspect, also transmitted to the service robot 17 via an interface 1360, together with data of the patient, such as, for example, the body size and possibly also comorbidities, the type of operation (OP), etc., and stored there 1405.
- The service robot 17 then evaluates the exercises (gait training) with a view to the variables that are shown, for example, in the GUI representation in FIG. 11, which can be implemented in the module for evaluating the exercises 2013.
- This data is transmitted to the cloud via an interface 1360 and flows into the learning module 190.
- In one aspect, the data recorded by the service robot 17 are not yet processed with respect to the exercise plan evaluation shown in FIG. 11; instead, only the raw data (such as the measured step length) are transferred to the patient administration module 160. There, for example, the measured differing stride lengths would be set in relation to the normal (physiological) stride length. Depending on the aspect, in one or both cases the processed data are then also transmitted back to the service robot 17 via the interface 1360.
- In the context of the evaluation 1406, it is shown (FIG. 11) how long a training session lasted per day, how long the distance was and what the speed of the patient was, whereby the progress from day to day is also determined. It also shows which concrete deviations from the physiological movement sequence occur, in particular with regard to key parameters, including the sequence of the use of the supports determined by the service robot 17, the standing time, swing leg phase, upper body/view, stride length and/or track width. The progress of the therapy is also shown, as well as, in one aspect, which corrective measures have been initiated in the meantime. Furthermore, in one aspect, the therapist 1330 can receive information for the individual therapy to be carried out in order to target individual deviations from the desired movement sequence.
- The data of the evaluation of the gait training over time 1406 are transmitted to the learning module 190 in the cloud. As explained above, this transmitted data also includes the movement sequence corrections 450 output by the service robot 17 or the deviations of the movement sequence associated with them. This data is stored in the learning module memory 192 of the learning module 190 and, if already available, supplements historical data 1505 that originates from previous exercises, wherein this historical data 1505 can and should also originate from more than one patient and more than one therapist.
- On the basis of this data, weight determinations for node weights in the nodes of the neural networks and/or the learning models for an exercise plan adaptation 1510 are carried out.
- The evaluations of the patient performed by the service robot 17, possibly in conjunction with the set of rules 150 and/or the patient administration module 160, are used as input variables, together with patient data such as the age of the patient, his height, weight, comorbidities, type of operation, etc. (1310-1325); the output variables are the settings of the exercise plan for the exercise at the later time t+1 or t.
- Once such node weights have been determined, they can be used, based on the evaluations of the patient's movement training by the service robot 17 (possibly in conjunction with the set of rules 150 and/or the patient administration module 160), to make forecasts about which settings the therapist would or should make; these settings can be both exercise plan adjustments and the decision to leave an already defined exercise plan on standard settings.
- The determined node weights are transmitted to the set of rules 150, where any node weights from previous determinations are updated. Based on the node weights, rules derived from them can then be updated in the next step (for example: if a certain speed is present on a certain day, then, taking other parameters into account, the duration of the training is extended).
- These rules 1530 can also be transmitted directly to the service robot 17 so that it can carry out an autonomous adaptation of the exercise plans.
- the system outlined in FIG. 10 can either recalculate the node weights after the completion of each exercise or can only do this recalculation at certain intervals, in which case the data relevant for the new weighting of the node weights are temporarily stored.
- For this, the rules stored in the set of rules 150 or the rules stored in corresponding modules of the service robot are used.
- The automated adaptation of the exercise plan 2012 can also be carried out in the patient administration module, i.e. the proposals for the plan adaptation 1535 are not implemented as suggestions; instead, an automated plan adaptation takes place that does not require any therapist intervention.
- Improvement of movement sequence evaluation and movement sequence correction
- The movement sequence evaluation in the movement sequence evaluation module 2052 in a first stage and the movement sequence correction in the movement correction module 2022 in a second stage essentially determine how well errors in the movement sequence are recognized and then also corrected. Together, the movement sequence evaluation and the movement sequence correction play a role for the quality of the therapy success. The more data is available, the more precise evaluations are possible; in the case of an evaluation using machine learning and/or neural networks, for example, more precise weight determinations of the node weights can also be carried out.
- Fig. 12 describes the underlying system and the process.
- The service robot 17 carries out the movement training with a patient. In the movement sequence extraction module 2051, a feature extraction of the movement sequence takes place, and in the movement correction module 2022 the output of the movement sequence correction 450 takes place, which is characterized by the processes feature extraction 720, feature classification 765, movement sequence classification 770, decision classification 775 and output of the movement sequence correction 450.
- the service robot 17 here records the data on the movement sequence of the person (i.e. the patient) and / or stores this recorded data in a step 1535a.
- The recorded data include the movement sequence parameters, such as the movements of the patient, including the movement sequence classification 770, the feature classification 765 and/or the decision classification 775 and/or the output of the movement sequence correction 450, and/or video sequences 1425, which document the movements of the patient and originate from an RGB 2D camera 2084 and/or RGB-D 3D camera 2085.
- In one embodiment, the movement sequence parameters also include the movement sequence classification 770 and feature classifications 765. These data are transmitted to the learning module 190 via the interface 1360 and stored there 1535a.
- the video data with the video sequences are previously anonymized to the extent that facial features are pixelated so that the identity of the person cannot be recognized when the videos are viewed.
- Solutions of this type are known in the prior art, are used for the automated blurring (pixelation or blackening) of picture elements such as faces or vehicle license plates and are available, for example, as product solutions from 3DIS GmbH or Guardian Project.
- Therapists can access this data via a terminal and view and assess the recorded video data and various aspects of the movement sequence. The therapist pays attention, e.g., to person-related parameters such as the stride length, the track width and/or the posture of the upper body including the shoulder area of the patient.
- The therapist enters these person-related parameters (stride length, track width, posture, etc.) into the movement sequence evaluation module 2052 together with the associated video sequences and provides this information with a so-called label 1535b, i.e. the therapist marks the recorded movement sequences and classifies each movement sequence as deviating or physiological for the movement sequence evaluation in the movement sequence evaluation module 2052.
- the labeling by the therapist represents a movement sequence and / or feature classification.
- After an initial recording and labeling, the movement sequence and/or feature classification is continuously re-evaluated in steps 1540 and 1542, and there may also be a re-evaluation of the decision classification. This adaptation can also be a manual rule adaptation, by which the rules for the movement sequence classification 1590, feature classification 1592 and/or decision classification 1595 are updated in the set of rules 150. These updates to the set of rules 150 are transmitted (via an interface, not shown) to the movement sequence extraction module 2051 for the feature extraction of the movement sequence and to the movement correction module 2022, whereby the movement sequence classification 770 and decision classification 775 and possibly the output of the movement sequence correction 450 are updated. A reassessment of the movement sequence can also result in the feature extraction of the movement sequence in the movement sequence extraction module 2051 having to be adapted.
- FIG. 13 shows this method in which a model is trained and then used to evaluate the movement sequence and the movement sequence correction.
- The service robot 17 carries out the movement training with a patient. In the movement sequence extraction module 2051, a feature extraction of the movement sequence takes place, and via the movement correction module 2022 the output of the movement sequence correction 450 takes place.
- The service robot 17 captures data and/or stores this captured data in step 1410. This includes the movement sequence parameters, i.e. the patient's movements, and in one aspect the evaluations of the movement training that have taken place (as shown, for example, in FIG. 11) 1406. These evaluations of the movement training are transmitted via the interface 1360 to the learning module 190 and stored there 1535a.
- The data is saved in a database, thereby supplementing the existing historical data recorded by service robots 17 with a view to the movement sequence classification 1550 and feature extraction 1552 and/or the historical data of the decision classification 1555 with the newly stored data. Taking into account the evaluation of the movement training over time 1406, as shown, for example, in FIG. 11, a weight determination of the node weights of the decision classification 1565 can be made in a learning model or a neural network. For this purpose, comparable to the weight determination of the node weights when adapting the exercise plan, machine learning algorithms such as clustering processes, support vector machines as well as regression processes and/or neural networks such as convolutional neural networks are used.
- The evaluations of the movement training, such as the distance covered, standing time, swing leg phase etc., serve as output variables, either as direct values and/or calculated as an improvement over the previous period.
- The ascertained features of the movement sequence, including in one aspect the movement sequence classifications 770 and/or feature classifications 765 and/or decision classification 775 and/or output of the movement sequence corrections 450, and the patient's personal parameters such as age, height, weight, comorbidities, type of operation, etc. (not shown in FIG. 13) serve as input variables.
- In this way, new node weights are generated and transmitted to the set of rules 150, resulting in a weight update of the node weights on the part of the movement sequence classification 1580, feature classification 1582 and/or decision classification 1585. These weight updates of the node weights in turn lead to rule updates of the movement sequence classification 1590, feature classification 1592 and/or the decision classification 1595 in the set of rules 150; these are transmitted (via an interface, not shown directly) to the movement sequence extraction module 2051 for the feature extraction of the movement sequence or to the movement correction module 2022 in the service robot 17.
- The two described methods of determining the node weights by means of machine learning and/or neural networks make it possible, for example, to carry out only those movement sequence corrections which show actual success in terms of the progress of the therapy. This can also lead to fewer corrections being made by the service robot 17.
- A reassessment of the movement sequence can also result in the feature extraction 720 in the movement sequence extraction module 2051 having to be adapted.
- In one aspect, the manual improvement of the movement sequence evaluation and of the movement sequence correction and the approaches based on machine learning and/or neural networks are combined.
- A person's stride length is defined as the Euclidean distance between the ankles. The gait cycle consists of a swing phase and a stance phase for each leg.
- the swing phase begins when the foot is lifted off the ground and continues as long as the foot is in the air and the leg is brought forward. As soon as the foot, ideally the heel, touches the ground (initial contact), the stance phase begins.
- The stance phase of a leg is defined as the period in which the foot is on the floor, as is also evident from the skeleton model, which also identifies the plane that corresponds to the floor. During a gait cycle, there is one stride length each for the right and the left leg; the stride length always refers to the leg that has initial ground contact after completing its swing phase.
- The track width is defined as the distance between the two heel centers and is typically in the range of 5-13 cm; it can also be ascertained via the distance between the identified ankle points 1950 within the frontal plane.
- The joints of the patient are output as articulation points of the skeleton model in space.
- While the Kinect 2 used in this example does not output poses of body parts in the skeleton model, these poses can be modeled via the connection of adjacent articulation points that the Kinect recognizes, which is implemented as part of the feature extraction 720 in step 735.
- Figure 19 illustrates this modeling.
- Some recognized articulation points are also shown here as filled circles.
- Direction vectors between recognized articulation points are calculated, e.g. by creating a vector between the 3D coordinates of adjacent articulation points.
- In FIG. 19 they are drawn as dashed arrows between adjacent joint points.
- The angle α at the knee joint point 1930 can be defined via the two direction vectors 1910 and 1920, which correspond to the course of the thigh and the lower leg. The first direction vector 1910 runs from the knee to the hip skeleton point and the second direction vector 1920 from the knee to the ankle skeleton point; they are calculated by determining connecting lines between the knee joint point 1930 and the respective adjacent joint points. The resulting angle α indicates the knee flexion, for example.
- The determination can, for example, take place during the stance leg phase, i.e. in the phase from the first contact of the heel to the transition toward weight transfer to the other leg, which also initiates the so-called swing leg phase in the leg under consideration.
- For the hip, the corresponding shoulder and knee points are used.
- The angle is determined using two direction vectors, one extending from the hip joint point 1940 to the knee joint point 1930 and the other from the hip joint point 1940 to the shoulder joint point 1960 (e.g. from the right hip joint point 1940r to the right shoulder joint point 1960r). One speaks of flexion, especially hip flexion, when a leg is oriented forward from the vertical, i.e. the flexion angle is generally defined via the direction vector 1910 (shown in reverse orientation), when the leg is located in front of the vertical in the walking direction, and the direction vector between the hip joint point 1940 and the shoulder joint point 1960 (see FIG. 19 b) with the angle α1).
- Extension, on the other hand, is defined as the backward orientation of the leg, i.e. the extension angle is defined via the direction vector toward the shoulder joint point 1960 and the direction vector 1910 (shown in reverse orientation) between the hip joint point 1940 and the knee joint point 1930, when the leg is located behind the vertical in the walking direction (see FIG. 19 c) with the angle β1).
- the angles can be determined on both sides.
- the flexion and extension angle of the hip in turn is influenced, for example, by the forward inclination of the upper body, which has an effect on the course of movement.
- The angle of the forward inclination of the upper body, for example, is also recorded; it is spanned, for example, by the direction vector from the middle hip joint point 1940 to the middle shoulder point and the vertical (inclination angle ε).
- Calculations are also possible which relate to complementary, supplementary and/or adjacent angles and/or which include the plumb line, for example for determining the extension and/or flexion of the knee, hip or other limbs.
- This is shown by way of example in FIGS. 19 f) and g), in which the hip joint extension can be seen with the hip extension angle δ1 in g) and the flexion with the hip flexion angle γ1 in f).
- This angle can be used, for example, together with the angle of the upper body inclination in order to arrive at the hip extension or flexion angle, etc.
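- A minimal sketch of the angle determination from direction vectors described above (e.g. the knee angle α from the vectors 1910 and 1920, or the trunk inclination against the plumb line); the coordinate conventions, in particular the up-axis, are assumptions:

```python
import numpy as np

def joint_angle(base, point_a, point_b):
    """Angle at `base` between the vectors base->point_a and base->point_b,
    in degrees (e.g. knee angle between 1910 knee->hip and 1920 knee->ankle)."""
    v1 = np.asarray(point_a, float) - np.asarray(base, float)
    v2 = np.asarray(point_b, float) - np.asarray(base, float)
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def trunk_inclination(mid_hip, mid_shoulder, up=(0.0, 1.0, 0.0)):
    """Inclination angle between the hip->shoulder vector and the plumb
    line; the up-axis of the camera frame is an assumed convention."""
    v = np.asarray(mid_shoulder, float) - np.asarray(mid_hip, float)
    c = np.dot(v, np.asarray(up)) / np.linalg.norm(v)
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```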
- FIG. 19 h) shows an example of a skeleton model in which the end points 1970 of the UAGS are shown, which are used, for example, elsewhere for determining the distance to the ankle points 1950.
- The UAGS are indicated by dashed lines 1980 because they are not part of a skeleton framework such as OpenPose etc.
- The plotted curves illustrate the trunk inclination (top, lean of trunk), hip flexion (center) and knee flexion (bottom) over a period of 23 s for a patient with a TEP (total endoprosthesis) in the right hip.
- The flexion is significantly more pronounced (i.e. there are stronger amplitude fluctuations), which is due to the asymmetrical postoperative movement and is expressed, among other things, in a larger stride on the non-operated side.
- The gait training, which the service robot 17 accompanies, takes place shortly after the operation and, as described, the patients must first complete the three-point gait, then the two-point gait.
- the three-point gait and the two-point gait include the use of forearm crutches (UAGS) to reduce the stress on the operated joint.
- the patient is instructed by a therapist in how to use the service robot 17.
- The therapist must explain to the patient how to use the UAGS when sitting down and getting up, when turning around and opening the door, as well as the sequence of the three-point gait.
- The therapist then releases the training with the service robot 17. As soon as the therapist receives feedback from the service robot 17 via the patient administration module that the switch to the two-point gait can take place, the therapist also shows the patient the correct sequence of placing the supports for this gait for the first time and, if necessary, checks the suggestion of the service robot 17 before releasing this gait.
- The service robot 17 uses depth data from the Kinect 2 3D sensor for this.
- The depth image is converted into a point cloud, for which the point cloud library described in the prior art is used. In the next step 745, this point cloud is segmented into smaller point clouds based on the patient's skeleton model.
- Here, the assumption is made use of that the UAGS must be close to the forearms and hands and approximately parallel to the legs, i.e. a selection of candidate regions is made.
- Standard segmentation and fitting algorithms can thus be used efficiently to evaluate one of the smaller point clouds near the forearms/hands/legs in step 755. This is helped by the fact that certain model assumptions can also be made in step 750 which take into account the shape of the UAGS, i.e. it is taken into account in the segmentation algorithm or in the generation of the 3D data that the UAGS are quite thin in relation to the limbs of the patient.
- The RANSAC framework can be used, for example, to fit such shape models to the segmented point clouds.
- Alternatively and/or additionally, classification rules can also be used which are created by recording UAGS, for example from different perspectives. The movement sequence classification in step 770 essentially evaluates the position of the two feet in relation to the determined position of the UAGS. If the three-point gait is executed correctly, the patient simultaneously moves the leg of the operated side forward with both UAGS in order to achieve optimal relief of the operated joint. The UAGS relieve the operated leg throughout the stance phase and are only relocated at the transition to the swing phase. Since the movement of the UAGS largely occurs simultaneously with that of the operated leg, a straight line is defined between the two UAGS end points 1970 and the distance between the foot points and this straight line is evaluated.
- the direction of movement of the patient is determined, ie the orientation of his sagittal plane, which can be done, for example, by the position and / or direction of movement of the arms, legs and / or the orientation of the pelvis and / or shoulders.
- the patient is tracked over a defined period of time and the orientation results from the movement of the patient over time.
- A line can be determined, orthogonal to the direction of movement or orientation of the patient, which runs through the UAGS and to which the distance of the foot points is evaluated. This makes it possible to identify typical errors in the three-point gait, such as placing the UAGS too early or too late and relieving the pressure on the wrong leg, by statistically evaluating the distance curve.
- A deviation from the prescribed gait, for example toward an unordered/incorrect sequence, can be recognized by evaluating the UAGS position relative to the patient's body. Based on the definition of the three-point gait, both UAGS are at approximately the same level in the patient's sagittal plane when the movement is correct. Deviations in this position that exceed a corresponding threshold value can then be recognized as errors in the movement sequence.
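- A minimal sketch of the geometric checks described above (distance of a foot point to the straight line through the UAGS end points 1970, and the offset of the two UAGS along the walking direction); names and plane conventions are assumptions:

```python
import numpy as np

def distance_point_to_line(point, line_a, line_b):
    """Perpendicular distance of a foot point to the line through the two
    UAGS end points 1970, evaluated on the floor plane (2D)."""
    p, a, b = (np.asarray(x, float)[:2] for x in (point, line_a, line_b))
    d = b - a
    # |2D cross product| / line length = perpendicular distance
    return abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / np.linalg.norm(d)

def uags_sagittal_offset(uags_left, uags_right, walking_dir):
    """Offset of the two UAGS along the walking direction; values near zero
    mean both supports are placed at approximately the same level."""
    w = np.asarray(walking_dir, float)
    w /= np.linalg.norm(w)
    return float(np.dot(np.asarray(uags_right, float) - np.asarray(uags_left, float), w))
```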
- FIG. 15 shows the use of the supports over time. The upper time-course diagram shows the height of the supports above the ground (left support: higher amplitudes on average, shown as the solid line; right support: smaller amplitudes on average).
- The service robot 17 must distinguish deviating movement sequence features from physiological movement sequence features. This classification must be carried out in real time while the patient is moving behind the service robot 17. The classification of whether stride length, standing time, track width, trunk inclination or joint deflections/movements are in the physiologically typical ranges is prepared by physiotherapists: movements of healthy people and physically impaired people are recorded by means of the Kinect device and a 2D camera 2084, and the movements of the people, broken down to each individual movement, are labeled; both the labeling and the recordings of the Kinect device make use of time stamps. Features are labeled that include movement errors as well as correct movement sequences. The time stamps are used for the synchronization of the labeled data and the 3D recordings of the 3D camera.
- the diagram also shows the different stride lengths.
- the curve shown is determined from the distance between the ankles (sagittal plane).
- the minima result from the moment when the ankles are at the same height.
- the maxima represent the maximum distance between the ankle of the forward leg and the ankle of the supporting leg.
- The diagram shows that the left, non-operated leg takes a significantly shorter step compared to the right leg. For the gradual increase in loading of the operated leg and for the maintenance of a fluid movement sequence, it would be optimal to take equally large steps.
- the relationship between the stride length and the standing time of both legs can be regarded as a suitable instrument for characterizing a patient's gait as physiological or deviating, a classification as “deviating” resulting in the output of a movement sequence correction 450.
- For this, the F1 score is evaluated, which differentiates between the two classes (errors/deviations vs. correct movement execution) and can be calculated for different threshold values. The F1 score is generally defined as the harmonic mean of precision and recall: F1 = 2 · (precision · recall) / (precision + recall).
- As FIG. 17 shows, the best threshold corresponds to the highest F1 score.
- A symmetry of 0.0 means perfect symmetry between the left and right legs, while symmetries of ±1.0 mean that one leg takes more than twice the stride length of the other.
- In this way, a threshold value is found that best separates the two classes. In this case, the best symmetry threshold is -0.17 (precision: 0.85, recall: 0.87), which means that stride lengths with a symmetry value of less than -0.17 can be classified as a deviation from the normal movement sequence and thus trigger a correction by the service robot 17.
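- A minimal sketch of such a threshold search via the F1 score; the label array is assumed to come from the therapist labeling and the symmetry values from the feature extraction:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall), boolean arrays."""
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_symmetry_threshold(symmetry, is_deviation):
    """Sweep candidate thresholds; a gait is predicted as deviating when the
    symmetry value lies below the threshold (cf. the -0.17 found above)."""
    symmetry, is_deviation = np.asarray(symmetry), np.asarray(is_deviation, bool)
    scores = [(f1_score(is_deviation, symmetry < t), t) for t in np.unique(symmetry)]
    return max(scores)  # (best F1 score, best threshold)
```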
- In step 1805, information about the leg to be spared (operated side) is first used (which is obtained, for example, as described in the explanation of FIG. 5). In step 720, the articulation points and the direction vectors between them are determined as part of the feature extraction.
- The track width is determined in step 1825, for example by calculating the distance between the ankle points 1950 in the frontal plane, and is then assessed in step 1865, i.e. the determined distance between the ankle points 1950 is evaluated.
- A determination of the stride lengths, i.e. the measured distance of the ankle points 1950 in the sagittal plane, takes place in step 1830, with, for example, an assessment of the stride length ratio in the gait cycle in step 1870 in the context of the feature classification 765.
- The standing time is recorded by a time measurement and evaluated in step 1875.
- The inclination of the upper body can be determined in step 1840, with a subsequent evaluation in step 1890 within the feature classification 765; likewise a detection of the flexion and/or extension 1845, for example of hip and/or knee joints, with a subsequent evaluation in step 1895 as part of the feature classification 765.
- A measurement of the distance between the UAGS end points 1970 at ground contact in the frontal plane can be carried out in step 1851, with an assessment of the distance between the UAGS end points 1970 at ground contact (which corresponds to the UAGS distance) in step 1884 within the feature classification 765.
- Likewise, the distance between the UAGS end points 1970 in contact with the floor and the ankle points 1950 in the sagittal and/or frontal plane can be determined in step 1863, with a subsequent evaluation of the distance between the UAGS end points 1970 and ankle points 1950 in contact with the floor in step 1885.
- Threshold values can be taken into account in each case. Exceeding the threshold in the frontal plane would indicate UAGS set too wide, falling below a threshold value UAGS set too narrow, while exceeding a threshold value in the sagittal plane would indicate UAGS set too far forward.
- In step 1880, the temporal sequence in which the UAGS and the ankle points 1950 are placed on the floor is evaluated (for example, to assess the correct timing of the operated leg, which should, if possible, only be set down after the UAGS have come into contact with the floor).
- These aspects assess the movement sequence of a person, for example when walking with UAGS.
- The position of the UAGS end points 1970 at contact with the ground is recorded in step 1850, and a determination is made as to whether the positions of the contact points of the UAGS on the ground are approximately parallel to the frontal plane of the person (step 1852).
- A connecting line between the positions of the UAGS on the floor can be determined in step 1855 and then, in step 1857, the distance from the ankle point 1950 of the leg to be protected to this connecting line; this distance can be evaluated in step 1887.
- Alternatively and/or additionally, perpendiculars of the UAGS contact points onto the sagittal plane can be formed (step 1860), and then in step 1862 the distances of these perpendicular feet from one another in the sagittal plane are determined, followed by an evaluation of these distances in step 1889.
- Finally, a movement sequence classification 770 takes place, in which the individual feature classifications are evaluated coherently. If there are deviations from the rules stored in the movement sequence classification, which are based, for example, on defined errors (for example, the placement of the UAGS is assigned to the error "supports too far forward" while the upper body is inclined too much), an output of a movement sequence correction 450 occurs (for example, the instruction not to place the UAGS so far forward) on the basis of a decision matrix (FIG. 9).
- Example 2 Classification based on machine learning / neural networks
- In the sequence described, the feature classification 765 and the movement sequence classification 770 are determined deterministically, i.e. based on the feature extraction, classes are formed that are defined from vector spaces (i.e. certain vectors with a certain similarity would fall into the classes, dissimilar vectors accordingly would not). A feature in a certain expression represents a space that is spanned by the vectors. It is the same with the movement sequence classification; here, the movement sequence is the course of the gait.
- In the service robot 17, rules for anomaly detection are stored in the movement sequence evaluation module 2052 for evaluating the movement sequence and/or in the movement correction module 2022 for movement sequence correction.
- An anomaly is understood to mean a deviation in the movement behavior that deviates from the "normal" (i.e. the physiological) gait behavior. This can mean, for example, that the healthy leg has a stride length of 65 cm, the operated leg a stride length of only 30 cm. A deviation from this "usual deviating movement sequence" can in turn be present, for example, if the service robot 17 measures only 10 cm instead of the 30 cm step length.
- The anomalies can also be detected over time from curve profiles of individual joint points of the skeleton model, in which the amplitude height and the position of minima, maxima and/or turning points over time can characterize an anomaly. This type of detection runs in the movement sequence evaluation module 2052 for evaluating the movement sequence; specifically, the feature classification 765 and movement sequence classification 770 are used for this.
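- A minimal sketch of such an anomaly detection from curve profiles of individual joint points; the signature features and the tolerance are illustrative assumptions:

```python
import numpy as np
from scipy.signal import argrelextrema

def curve_signature(values, order=5):
    """Describe a joint-point curve by amplitude and extrema counts, the
    quantities named above as characterizing an anomaly."""
    values = np.asarray(values, float)
    return {
        "amplitude": float(values.max() - values.min()),
        "n_maxima": len(argrelextrema(values, np.greater, order=order)[0]),
        "n_minima": len(argrelextrema(values, np.less, order=order)[0]),
    }

def is_anomaly(current, reference, amp_tolerance=0.3):
    """Flag an anomaly when the amplitude departs from the reference
    signature by more than the (assumed) relative tolerance."""
    ref = reference["amplitude"]
    return abs(current["amplitude"] - ref) > amp_tolerance * ref
```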
- A detected anomaly can trigger an event. An event can be a notification of a therapist, for example by first transmitting information from the service robot 17 via the wireless interface to the patient administration module 160, which in turn can inform the therapist, e.g. via a notification over a wireless network such as GSM, LTE, WLAN, which can be received on the therapist's mobile device.
- This information can include, for example, a video sequence that the service robot 17 has recorded. This video sequence can also be stored in the patient administration module 160 in such a way that the therapist can access the stored video sequence via the patient administration module 160.
- In one aspect, the anomaly is provided with a time stamp, and corresponding information is stored in the memory of the learning module 190 indicating that the video sequence transmitted to the learning module 190 contains the anomaly.
- Sequences with anomalies are given higher priority, this prioritization taking place, for example, within the labeling process.
- The movements recorded, classified, evaluated and/or corrected constitute the gait sequence.
- Example 4 Person identification, visual person tracking and person re-identification
- The person identification in the person identification module 2041 can take place by means of skeleton model recognition, as is made possible by evaluation frameworks of 2D and 3D cameras and/or frameworks such as OpenPose, OpenCV etc.
- Synchronization of the recordings of the sensors that implement the skeleton model recognition with an RGB recording makes it possible to assign to body regions of the tracked person colors and/or color patterns and/or textures that originate, for example, from the clothes of the person. Based on the parameters color or color pattern per body region and/or size parameters, the patient can be tracked over time and recognized again.
- Alternatively and/or additionally, movement sequence patterns such as the gait pattern can be used.
- Tools such as OpenPTrack can be used here.
- face recognition can also be used.
- markers can be used for person identification, tracking and also re-identification. They can be positioned on the patient or on the walking aids to identify the patient. Such markers can be color-based patterns or light sources with a certain frequency.
- For example, the patient can wear a vest on the surface of which barcodes are visible to the service robot 17.
- The RGB 2D camera 2084 can identify the patient using these barcodes. In a further optional aspect, the barcode information can be assigned to the patient, i.e. stored in the patient administration module 160 or on the storage medium or transponder.
- Example 5 Adaptation of the exercise plan based on a multinomial logistic regression
- A multinomial logistic regression is shown as an example of the machine learning approach.
- Such an estimation model makes it possible to estimate the probability of different, not necessarily directly connected output variables based on a series of input variables. For example, the probabilities of the selection options of the exercise plan that result from the GUI, such as three-point gait, a duration of e.g. 10 minutes and a distance of 100 m, can be estimated at the same time.
- The input variables are, for example, the values previously determined by the service robot 17 for the selection options of the exercise plan in question, but also patient comorbidities such as a general restriction of movement, possibly an impairment of mental ability, the patient's height, the type of surgery performed (which can affect different muscle groups and therefore the movement sequence), etc.
- β_k is a weight vector of regression coefficients with a view to the respective output variable k; the score of an observation i results as the scalar product x_i · β_k of its input vector x_i with β_k. The score can be converted directly into a probability value that the observation i of the input variables leads to the respective output variable k, in the multinomial logistic model via the softmax function: P(y_i = k) = exp(x_i · β_k) / Σ_j exp(x_i · β_j).
- the input variables are also referred to as independent variables, the output variables as dependent variables.
- If, for example, the service robot 17 determined that a patient covered a distance of 250 m within 5 min in three-point gait (these would be the characteristics of the input variables) and a therapist then adjusted the exercise plan in such a way that the patient walks 10 min during the next exercise while the three-point gait is maintained (these would be characteristics of the output variables), a multinomial logistic model can be estimated on the basis of the characteristics of the individual variables. The regression coefficients determined in this model formation phase (weights for the individual input variables such as physical condition, access to the hip, time period after the operation, distance, duration of the exercise, three-point gait) can be used to suggest to the therapist which exercise plan configuration he should undertake when further exercises (primarily of other patients) are recorded. If, for example, the setting described above is made several times by therapists when the specified values of the input variables are present, this indicates that it is sensible to make these settings, which, in other words, is expressed in significant values for the node weights.
- Threshold values are stored in the set of rules 150 so that, for example, from a probability of 80 % for the combination of measures consisting of three-point gait and 10 min gait duration, these measures are also suggested to the therapist.
- Alternative or additional methods for determining the node weights are naive Bayes, decision trees or neural networks such as long short-term memory recurrent neural networks.
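- A minimal sketch of the score-to-probability conversion of the multinomial logistic model described above; the feature encoding and the array shapes are illustrative assumptions:

```python
import numpy as np

def predict_plan_probabilities(x, betas):
    """Softmax over the scores x . beta_k of the multinomial logistic model.

    x: encoded input vector (e.g. gait speed, distance covered, comorbidities,
    type of surgery); betas: (K, D) array with one regression-coefficient
    vector beta_k per exercise plan option k.
    """
    scores = betas @ x                    # one score per output variable k
    scores -= scores.max()                # numerical stability
    e = np.exp(scores)
    return e / e.sum()                    # probabilities summing to 1

# A suggestion is shown to the therapist when the probability of an option
# exceeds the threshold stored in the set of rules 150 (e.g. 0.80):
# probs = predict_plan_probabilities(x, betas); suggest = probs.max() >= 0.80
```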
- This signaling can be transmitted via a mobile terminal device used by the patient to the patient administration module 160 and/or the service robot 17; the patient administration module 160 in turn can wirelessly send corresponding instructions to the service robot 17.
- A house call system can also be used, as is standard in clinics, or a permanently installed terminal that communicates with the patient administration module 160 and/or the service robot 17.
- Such a request for exercise can initially also be entered in a database.
- rules can be stored that, for example, give priority to certain patients.
- The service robot 17 uses the submodule for 2D/3D recognition and the mapping module 2061 to map its environment. All rooms are traversed and the surroundings are recorded using 2D and/or 3D sensors.
- Sensors that are suitable for this are at least one LIDAR 2083, at least one RGB-D 3D camera 2085, at least one RGB 2D camera 2084 and/or ultrasound and/or radar sensors. Combinations of these sensors can also be used.
- In some cases, the (2D) RGB camera 2084 is only used as an aid, for example to evaluate colors.
- Rules can be stored in the mapping module 2061, in which areas the service robot 17 may move and, if necessary, for what purpose. These include, for example, areas in which the service robot 17 is not allowed to move, areas in which it is allowed to pass, and still others in which it is allowed to complete a training, such as movement training.
- The module for 2D/3D recognition and the mapping module 2061 has an interface to the movement training module 2011 for the exchange of localization data. For example, a room number can be stored in the movement training module 2011, while the information assigned to this number can be found in the mapping module 2061. The service robot 17 can thus recognize the location at which it can meet the patient.
- Example 8 Map enrichment using CAD data
- The map of the building in which the service robot 17 is located can be enriched in one aspect by CAD data of the building, which flows into the 2D/3D acquisition and mapping module 2061 via an interface. Alternatively, it can be included in a separate module that is connected to the mapping module 2061 via an interface.
- CAD data is understood to mean, on the one hand, site plans in 2D or 3D, which come from software programs such as those used for building planning. On the other hand, image data (e.g. PNG, JPEG, PDF) can also be used, from which the system derives appropriate information regarding the structural arrangement.
- The consideration of these site plans supports the service robot 17, for example, in recognizing passages and door openings, which can be noted accordingly in the mapping module 2061 by translating the drawing symbols.
- The site plans can also be used to identify temporary or quasi-stationary obstacles in the building, i.e. objects that are not part of the building and may change their position over time or disappear entirely.
- Example 9 Mapping and self-localization using electromagnetic waves.
- For improved navigation within the building, the service robot 17 can also take into account electromagnetic waves such as light and/or radio signals, for example from WLAN access points.
- The light intensity can be detected, e.g. by photodetectors on the service robot 17, both during the mapping process and during general navigation in the building. The comparison of the detected light radiation with the mapped light radiation may take into account the time of day and the season, as well as the geographical latitude and longitude, in order to account for natural fluctuations in the angle of sunlight and its intensity.
- artificial light can also be recorded in the building, both with regard to the intensity and the light spectrum.
- WLAN signals from a plurality of routers can be recognized by the WLAN module 2088; triangulation therefore enables a better position determination in the building. During the detection of the incidence of light, the speed, the distance traveled, the orientation of the service robot 17 in space etc. are recorded, stored and compared with stored values. The methods described in this example can be combined with other methods described in this document.
- Example 10 Measuring the distance covered by the patient using odometry and patient tracking
- The distance traveled by the patient is determined using the odometry module 2081 together with an optical sensor such as the LIDAR 2083 and/or the 3D camera.
- During the movement training, at least one optical sensor detects the position of the patient in relation to the service robot 17, whose own path is in turn determined by the odometry module 2081.
- For this, the service robot can have magnetic sensors that determine the rotation of the wheels 6 and derive the distance traveled via the wheel radius. Inaccuracies resulting e.g. from slip can be corrected by combination with suitable inertial sensors such as acceleration sensors.
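- A minimal sketch of the odometry calculation (wheel rotations times wheel circumference); the wheel radius is an assumed example value:

```python
import math

def odometry_distance(wheel_rotations, wheel_radius_m):
    """Distance covered derived from the rotation count of a wheel 6, as
    measured by the magnetic sensors."""
    return wheel_rotations * 2.0 * math.pi * wheel_radius_m

# e.g. 120.5 rotations of a wheel with 8 cm radius: about 60.6 m
# odometry_distance(120.5, 0.08)
```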
- The basis for the position of the patient, who is identified by the service robot 17 in the person identification module 2041 and tracked with the aid of the person tracking module 2043 or the module for 3D person tracking 2046, can be the center point between the detected legs (in the case of detection by the LIDAR 2083) and/or the midpoint between the detected hip joints, the spine, the ankles 1950, etc. in the case of patient detection with a sensor whose data is evaluated as a skeleton model.
- In some cases, the position of the ankle point 1950 can only be inadequately extracted from the skeleton model, i.e. the position is detected, for example, with an increased blur compared to other joint points.
- As shown in FIG. 21, the ankle point 1950 can alternatively and/or additionally be determined via the position of the knee joint point 1930, a direction vector oriented parallel to the lower leg from the knee joint point, and the height of the knee joint point above the floor. The segmentation methods used for UAGS detection can be applied to determine the direction vector parallel to the lower leg, for example in order to recognize a detected point cloud in connection with the skeleton model as the lower leg.
- Example 11 Measuring the distance covered by the patient
- The distance traveled by the patient can be determined on the basis of the distance between the ankles, which can be recognized by 3D cameras such as a Microsoft Kinect or an Astra Orbbec and the associated evaluation frameworks.
- the position of the feet can also be determined using the LIDAR.
- Example 12 Measuring the distance covered by the patient using the coordinate system of its environment
- By self-localization 2062, the service robot 17 determines its position in space, the position in turn being assigned corresponding coordinates.
- An optical sensor such as the LIDAR 2083 and/or the 2D camera 2084 or the 3D camera detects the patient as part of the exercises and determines his position relative to the service robot 17.
- This position is also assigned a spatial coordinate.
- a chain of spatial coordinates is thus determined by tracking the patient over time.
- The Euclidean distance can be determined between each pair of successive coordinate points; these distances are successively added up.
- The basis for the center of the patient can be the center between the detected legs (in the case of detection by the LIDAR 2083) and/or the center between the detected hip joints, the spine, etc. in the case of patient detection with a sensor whose data are evaluated as a skeleton model.
- the position of the patient in the room can also be determined on the basis of the patient coordinates, without necessarily taking into account the position of the service robot.
- Example 13 Determining and outputting the route to be completed
- the service robot 17 can compare the distance traveled by the patient with the route planned according to the exercise plan and give the patient information via loudspeaker 2092 and speech synthesis 2073 as to how long the remaining distance to be completed according to the exercise plan is. Alternatively and / or in addition, the output can also be shown on the display 2087.
- the service robot 17 records the time of the exercises and, in parallel, the distance traveled. In this way, the service robot 17 can determine the speed of the patient and store it in the motion evaluation module 2050. The patient's speed can be compared with a previously stored speed; a difference is calculated, and the service robot 17 can indicate via the display and / or the loudspeaker 2092 and speech synthesis 2073 the extent to which the patient deviates from the historical value during the exercise.
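- a hedged sketch of this comparison (message format and function name are assumptions):

```python
def speed_feedback(distance_m: float, elapsed_s: float, reference_mps: float) -> str:
    v = distance_m / elapsed_s                          # current walking speed
    dev = (v - reference_mps) / reference_mps * 100.0   # deviation from history
    return f"Current speed {v:.2f} m/s, {dev:+.1f}% versus the stored value."

print(speed_feedback(distance_m=42.0, elapsed_s=60.0, reference_mps=0.8))
# Current speed 0.70 m/s, -12.5% versus the stored value.
```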
- the service robot 17 navigates in front of or follows the patient during the actual exercise.
- the motion planning module 2065 is used, in one aspect using evolutionary algorithms, to estimate the path of the patient.
- the service robot 17 navigates the patient through the exercise area. If the service robot 17 first picks up the patient in his or her room, navigation initially passes through an area in which the patient does not yet complete any exercises before reaching the exercise area, which is primarily characterized by few obstacles (including other people moving there).
- the service robot 17 guides the patient by means of acoustic and / or visual cues, which are given via the loudspeakers 2092 or the display 2087, in one aspect also via signal lights of the service robot 17.
- an acoustic procedure is chosen especially when the service robot 17 is located behind the patient.
- the patient is tracked and the position of the patient within the building is determined.
- the service robot 17 signals a directional correction, for example advising the patient to turn into a certain corridor, or to turn around if the patient has missed a turn.
- the service robot 17 calculates the optimal route in the path planning module 2064. In the event of a change in its environment, in particular on the previously calculated path, the service robot 17 does not completely recalculate the route to the destination, but only the section of the path that has dynamically changed.
- the dynamic movement planning also takes into account the orientation of the service robot 17, for example shortly before the target, when the service robot 17 moves directly towards it.
- for example, the service robot 17 may drive more slowly when directed backwards than when directed forwards.
- this optimization also takes into account the keeping of safety distances from static and dynamic obstacles.
- the service robot 17 also takes into account that it maintains a certain distance from a tracked person, in this case the patient. These target variables are considered together as a cost function, the individual target variables being represented as a weighted sum, in order to carry out a global optimization of the route.
- the approach is based on the dynamic window approach known in the prior art (Fox et al 1997).
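- the following Python sketch illustrates such a weighted-sum cost in the spirit of the dynamic window approach; the terms, weights and the 1.8 m target gap are illustrative assumptions, not values from the disclosure:

```python
def trajectory_cost(heading_err, obstacle_clearance, speed, patient_gap,
                    w_head=1.0, w_clear=2.0, w_speed=0.5, w_gap=1.5,
                    target_gap=1.8):
    cost_heading = abs(heading_err)                   # deviation from goal heading (rad)
    cost_clear = 1.0 / max(obstacle_clearance, 1e-3)  # penalise small safety margins
    cost_speed = -speed                               # reward faster progress
    cost_gap = abs(patient_gap - target_gap)          # keep set distance to patient (m)
    return (w_head * cost_heading + w_clear * cost_clear
            + w_speed * cost_speed + w_gap * cost_gap)

# pick the candidate trajectory with minimal weighted cost:
candidates = [(0.1, 0.8, 0.6, 1.9), (0.4, 0.3, 0.9, 2.5)]
best = min(candidates, key=lambda c: trajectory_cost(*c))
```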
- evolutionary algorithms that optimize the acceleration of individual path sections can be used to select individual path sections.
- the camera can be adjusted, for example, taking into account the path of the service robot 17 and the estimated direction of movement of the patient, such that the patient is captured centrally by the camera.
- a PID controller can be used for this, which uses integrator clamping known in the prior art (i.e. imposes an upper and lower limit on the integral term) and adjusts the horizontal camera angle.
- the angle determined by the PID controller can also be used to correct horizontal angles in the skeleton model which arise, compared to an orthogonal camera perspective, from the rotation of the service robot 17.
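- a minimal sketch of such a clamped PID controller for the horizontal camera angle (gains and limits are assumed values):

```python
class ClampedPID:
    """PID with integrator clamping: the integral term is bounded against wind-up."""
    def __init__(self, kp, ki, kd, i_min=-0.5, i_max=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        # integrator clamping: enforce the upper and lower limit
        self.integral = max(self.i_min, min(self.i_max, self.integral))
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = ClampedPID(kp=1.2, ki=0.3, kd=0.05)
# error = horizontal offset of the patient from the image centre (rad)
pan_correction = pid.update(error=0.12, dt=0.05)
```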
- a certain distance is usually covered in the aisles of a clinic or in another area that was stored as such in the room plan module 2060r.
- those areas should be selected that have few obstacles that the patient and / or service robot 17 must avoid.
- these spaces have a width which corresponds to at least three times the width of the service robot 17.
- Obstacles are also understood to mean people who, apart from the patient and the service robot 17, move in this area, including other patients of the clinic and medical staff, as well as, for example, beds, trolleys, seating, etc., which are found in everyday clinical practice. Fewer such obstacles not only allow more fluid movement training for the patient (and, depending on the state of health, less difficult training), but also better detection of the patient by the service robot 17. There are fewer cornering movements, which shortens the time required.
- the service robot 17 is also able to navigate independently in predefined areas, e.g. of a clinic.
- it records, over time (for example across the daily and weekly schedule), the number and type of obstacles, their dimensions (absolute and / or relative to the aisle width), and the density of these obstacles over the entire area as a function of time.
- These values are stored in the memory of the service robot 17, for example within the room plan module 2060r.
- This data acquisition over time can be recorded both in the context of “empty trips” of the service robot 17 (ie trips without parallel movement training with a patient) as well as during exercises with a patient.
- the recorded data on the obstacles can be processed within the service robot 17 or within a cloud, into which the data are transmitted.
- by accessing the historical data, the service robot can determine and / or forecast which areas had the lowest density of obstacles in the past during the exercise time (for example Friday 13:00-13:15).
- the density determination can include clustering methods.
- the service robot 17 also takes into account the distance to be traveled
- the service robot 17 can select an aisle of 50 m for an envisaged route length of, for example, 100 m, which is walked back and forth, or the service robot 17 recognizes on the basis of the historical data that a section of the aisle with a length of 25 m had a low density of obstacles at that time in the past (either a minimum density, e.g. calculated as the lowest 90%, or a density below a certain relative or absolute threshold value).
- the service robot 17 then, for example, independently selects the 25 m section, which the patient is to walk 4x.
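- a hedged sketch of this selection from a historical obstacle log (column names and the log format are assumptions):

```python
import pandas as pd

# each row: one observation of a corridor segment at a weekday/time slot
log = pd.DataFrame({
    "segment":  ["A", "A", "B", "B", "C", "C"],
    "weekday":  ["Fri"] * 6,
    "slot":     ["13:00"] * 6,
    "obstacles_per_m": [0.08, 0.12, 0.02, 0.03, 0.30, 0.25],
})

def least_dense_segment(log, weekday, slot):
    hist = log[(log.weekday == weekday) & (log.slot == slot)]
    # mean historical density per segment; pick the minimum
    return hist.groupby("segment")["obstacles_per_m"].mean().idxmin()

print(least_dense_segment(log, "Fri", "13:00"))  # -> "B"
```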
- Example 18 Arrangement of the cameras and distances to the patient
- the 2D camera 2084 and / or the 3D camera is mounted in such a way that it can record the patient as centrally as possible. Taking an average patient height into account, the camera is therefore mounted at a height of 80-90 cm.
- the distance to the patient is determined by the aperture angle of the 3D camera and the resulting possibility of capturing the patient's whole body in the best case.
- a Kinect, for example, has a range of up to 4 m, which means that a patient can be detected without any problems, i.e. his body parts are also recognized at this distance. The situation is different when, for example, the UAGS are to be tracked and, above all, a 3D camera with speckle technology is used.
- a Kinect2 has a significantly higher measurement accuracy than a number of speckle cameras. Taking a standard Astra Orbbec 3D camera, this 3D camera has a horizontal aperture angle of 60° and a vertical one of 50° at a resolution of 640x480. At a distance of 2 m from the 3D camera, an image with a horizontal width of approx. 231 cm is captured. If the 3D camera is mounted at a height of 92 cm, the maximum height of the captured image is approx. 184 cm. Taller people cannot be fully captured with it. In one aspect, the patient is therefore tracked at a greater distance, which can be at the expense of UAGS detection, depending on which camera model is used.
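- the quoted coverage figures follow from simple pinhole-camera trigonometry, as this sketch reproduces:

```python
import math

def coverage(distance_m, hfov_deg=60.0, vfov_deg=50.0, mount_height_m=0.92):
    # width = 2 * d * tan(hfov/2); top edge = mount height + d * tan(vfov/2)
    width = 2 * distance_m * math.tan(math.radians(hfov_deg / 2))
    top = mount_height_m + distance_m * math.tan(math.radians(vfov_deg / 2))
    return width, top

w, top = coverage(2.0)
print(f"width {w:.2f} m, top edge {top:.2f} m")  # ~2.31 m wide, top edge ~1.85 m
```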
- UAGS identification and UAGS tracking are carried out via the LIDAR 2083 and / or an ultrasound or radar sensor; for this purpose, the signals of the at least one camera system and, where applicable, of the LIDAR 2083, ultrasound or radar sensor used to record the UAGS are synchronized in time and merged accordingly.
- the camera and the LIDAR 2083, ultrasound or radar can thus be used together to detect and / or track the UAGS.
- the patient can interrupt the exercise plan at any time by taking an optional break in step 510.
- the patient can cancel the exercise in step 515. To do this, the patient must
- the system with the patient administration module 160 and the service robot 17 documents the therapy.
- the documentation itself can be transferred to external systems such as a hospital information system or a patient data management system 170 via an interface.
- the documentation includes personal data such as name, age, gender, etc., and the anamnesis (e.g. osteoarthritis in the hip) maintained in the patient data management system 172.
- Interventions in the exercise plan are also documented, be they initiated by the therapist or by the service robot 17 on the basis of established and / or learned rules.
- a server that is connected via an interface to the patient administration module 160 can transfer data, for example, to a mobile device that is available to the therapist.
- deviations from threshold values in the evaluation of the movement sequence can in turn serve as triggers for these notifications, for example a stride length that deviates from the typical stride length, or a combination of movements of different body parts.
- for example, a step length that is only 10% of the physiological step length could trigger such an event.
- likewise, a step length of, say, 20 cm combined with an inclination of the upper body of more than 30° from the vertical could trigger a notification.
- a trigger can also be a deviation from the exercise plan that falls below or exceeds a certain threshold value, for example a two-point gait instead of a three-point gait, without the therapist having released it and without this being provided for in the exercise plan; the threshold value is derived in one aspect from the evaluation of the historical data.
- the exercise cycle comprises the typical sequence of components of the exercise plan, calculated from the first exercise that the patient completes to the last.
- Example 22 Cloud-based speech recognition
- Speech recognition 2074 may be implemented in one aspect via third-party cloud-based services that the service robot 17 accesses wirelessly via an API.
- a speech-to-text API such as Google Speech or Amazon Transcribe can be considered in the first step.
- the text data generated in this way can then be evaluated using APIs such as Amazon Comprehend, and the results can be converted into answers or commands for the service robot 17 - commands which can alternatively also be implemented in the form of a screen-based menu input. Combinations of these services are also possible via a single API.
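- for illustration, a minimal call against one such speech-to-text API (here the Google Cloud Speech client; the file name, language and sample rate are assumptions for a German-speaking clinic, and Amazon Transcribe would be called analogously):

```python
from google.cloud import speech  # pip install google-cloud-speech

client = speech.SpeechClient()

with open("utterance.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="de-DE",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)  # text handed on for command matching
```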
- Example 23 Anonymization of data transfer between the cloud and the service robot
- the therapist can assign the patient a mobile storage unit such as a transponder, i.e. the therapist hands the patient a transponder and assigns this transponder to the patient in the patient administration module 160.
- the transponder contains the patient ID and / or a further token ID which is assigned to the patient or his patient ID.
- the patient can identify himself on the service robot 17 with this transponder or the serial number and / or patient ID.
- the service robot 17 now downloads the exercise plan stored by the therapist from the cloud 18 via an interface - but without the patient's personal data; the assignment is made via the patient ID.
- the service robot 17 loads the data recorded by the service robot 17 during the exercises into the patient administration module 160 in encrypted form - the assignment is made via the patient ID.
- the data are only decrypted in the patient administration module 160. In this way, no data are transmitted that allow conclusions about the patient's name or address.
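- a minimal sketch of this pseudonymized, encrypted transfer using symmetric encryption (the Fernet scheme and the record fields are illustrative choices, not specified in the text):

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # provisioned once; held by the administration module
fernet = Fernet(key)

# robot side: exercise data, addressed only by the pseudonymous patient ID
record = {"patient_id": "P-4711", "distance_m": 120.5, "corrections": 3}
payload = fernet.encrypt(json.dumps(record).encode())

# ... transmission via the cloud, keyed only by patient_id ...

# patient administration module side: decryption happens only here
restored = json.loads(fernet.decrypt(payload))
```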
- alternatively, the therapist transfers the exercise plan to a storage medium, e.g. a transponder in the form of an RFID tag or a USB stick.
- the data are transferred from the storage medium to the service robot 17 including the patient ID, which was specified by the patient administration module 160.
- the service robot 17 transfers the recorded exercise data back to the storage medium, so that the therapist can transfer the data into the patient administration module 160 when reading out the storage medium.
- Data exchange via storage medium e.g. transponder
- the patient can identify himself via a login and / or password. These serve as a patient ID or are associated with a patient ID, so that the service robot 17 can use this information to download further data such as exercise plans from the cloud.
- biometric features such as a fingerprint scan, face scan or iris scan can be used for patient identification.
- Leg load: in some cases, patients may only be allowed to load a leg to a limited extent. Although exact load detection is difficult to implement optically, indications of the leg load can be derived from some skeletal parameters. The following parameters are used for this: a) the angle between the forearm and upper arm when walking, and / or b) the angle of the lower leg to the thigh and / or the extension and flexion angle, and / or c) the posture of the upper body. Relief occurs above all if i) the angle between the forearm and upper arm is less than 170° or less than 160°, ii) the angle of the leg that can only be loaded to a limited extent is less than 172°, iii) the upper body is inclined more than 5° forward, and / or iv) the upper body is inclined more than 5° away from the affected side. The more pronounced the characteristics i)-iv), the greater the relief of the leg. Here an outstretched arm is defined as having an angle of 180°. The person is in the three-point gait, which is explained in more detail in FIG. 22. In addition to the evaluation mentioned there, the arms are recorded and evaluated, while the other aspects can be evaluated as part of the feature classification 765 and movement sequence classification 770.
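- a rule-of-thumb sketch of indicators i)-iv); the angle thresholds follow the text, the simple counting score is an assumption:

```python
def relief_indicators(elbow_deg, leg_angle_deg, trunk_forward_deg,
                      trunk_lateral_deg):
    return [
        elbow_deg < 170,          # i)  bent arm on the crutch (180° = outstretched)
        leg_angle_deg < 172,      # ii) limited-load leg not fully extended
        trunk_forward_deg > 5,    # iii) trunk inclined forwards
        trunk_lateral_deg > 5,    # iv) trunk inclined away from the affected side
    ]

score = sum(relief_indicators(158, 165, 7, 2))
print(f"{score}/4 relief indicators present")  # the more, the stronger the relief
```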
- the poses determined in this way are classified accordingly and stored both in the rules 150 and locally on the service robot 17 within the framework of the evaluation of the movement sequence in the movement sequence evaluation module 2052.
- the load on the patient's leg is continuously monitored by the processor of the service robot 17 - in parallel with the evaluation of the other poses - and the patient receives optical and / or acoustic instructions if the leg load exceeds or falls below a certain threshold.
- the intensity of a person's leg load can also be determined by means of external sensors, which are, for example, wirelessly coupled to the service robot 17.
- insoles are known that are placed in the patient's shoes and measure the pressure on the shoe sole in different spatial resolutions.
- the actual pressure sensor system can be capacitive, resistive (e.g. based on
- the sensor signals are amplified via a bridge circuit and processed by means of an analog-digital converter in such a way that transmission via commercially available radio standards such as Bluetooth, WLAN etc. is possible.
- the service robot 17 also has a software unit that evaluates the transmitted sensor data.
- the service robot 17 has at least one radio interface, such as Bluetooth or WLAN.
- the UAGS have at least one sensor that is wirelessly connected to the service robot 17 via this interface.
- in the simplest case, the sensor function can consist merely in the sensor's position being recognized. This can be done through active or passive locating and the associated triangulation of the position.
- Passive location means a backscattering of radio signals primarily emitted by the service robot 17, the sensor not having its own power source. Such methods are sufficiently described in the prior art in RFID technology.
- Active location means a transmitter with its own power supply.
- the service robot 17 detects the emitted signals and uses triangulation to determine the position of the transmitter.
- Such an implementation is also superior to visual identification of people when the patient and service robot 17 move in an environment in which there are many people, since the direct line of sight between service robot and patient is then interrupted more often, so that re-identification frequently has to take place.
- the UAGS have a button that transmits a signal wirelessly to the service robot 17 when pressed.
- the button is attached in such a way that it can be reached by the patient without any problems while doing the exercises, in particular without having to change the load on the legs.
- the walking aid is, for example, a UAGS
- the button can be located on the distal end of the T-shaped angled handle, which is enclosed by the patient's hands when walking.
- the sensor device on the walking aids is configured so that a different number of button presses, or a different pressing frequency, can trigger different commands on the service robot 17. Pressing once can signal to the service robot 17 that the patient wants to sit down; the exercise is then interrupted. Pressing twice can signal that the patient has recognized that the service robot 17 is following another person instead of the patient (for example because a person has moved between the patient and the service robot 17) and / or that a re-identification (in the re-identification module 2044) was not successful. After such a signal, the service robot 17 interrupts the training and continues it only after the patient has logged on again in person. This prevents the personalized training from being taken over by other people and unauthorized access to the data of the already registered person.
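- a hedged sketch of mapping press counts to the commands described above (the grouping window is an assumed value):

```python
PRESS_WINDOW_S = 1.0
COMMANDS = {1: "pause_exercise", 2: "re_identify_patient"}

def classify_presses(timestamps):
    """timestamps: button press times in seconds, ascending."""
    groups, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] <= PRESS_WINDOW_S:
            current.append(t)        # press belongs to the same burst
        else:
            groups.append(current)   # burst finished; start a new one
            current = [t]
    groups.append(current)
    return [COMMANDS.get(len(g), "unknown") for g in groups]

print(classify_presses([0.0, 0.4, 5.2]))
# ['re_identify_patient', 'pause_exercise']
```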
- Example 28 External sensors on the patient: furthermore, sensors (in particular acceleration sensors) that are connected to the service robot 17 via at least one interface can be attached to the patient - in particular to the limbs, the trunk and / or the head - but also to the UAGS. These sensors allow the movement of the patient to be recorded and evaluated.
- it is advantageous to attach sensors to at least as many elements of the limbs as are also used in 3D skeleton detection and its further processing, in particular as part of the feature extraction 720. This would mean, for example, attaching one accelerometer each to each thigh, to the trunk, etc.
- the data recorded in this way can be used alternatively and / or in addition to 3D detection by means of optical sensors (LIDAR 2083, 2D camera 2084 and / or 3D camera). The evaluation of these data can in turn provide angle information in the same way as the 3D point clouds, so that the algorithms of the 3D sensor technology can be built upon.
- the service robot 17 has a fall detection.
- this can be done via the integrated optical sensors, i.e. based on a skeleton model.
- the system recognizes from the angle of the upper body and the position or angle of the head, the shoulders, but also the legs relative to the plumb line that the patient is in a falling position, is lying on the floor or has gone down on his knees.
- the distance of articulation points identified in the skeleton model from the floor can serve as an alternative and / or supplementary evaluation: as soon as these distances fall below a threshold value (e.g. 25 cm for the knee), a fall is recognized.
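- a minimal sketch of this height-threshold rule; the 25 cm knee value follows the text, the other joints and limits are assumptions:

```python
FLOOR_THRESHOLDS_M = {"knee": 0.25, "hip": 0.40, "head": 0.80}

def fall_detected(joint_heights_m: dict) -> bool:
    """joint_heights_m: height of skeleton joints above the floor plane, in metres."""
    return any(
        joint_heights_m.get(joint, float("inf")) < limit
        for joint, limit in FLOOR_THRESHOLDS_M.items()
    )

print(fall_detected({"knee": 0.12, "hip": 0.55, "head": 1.10}))  # True
```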
- Example 30 Fall detection using inertial sensors
- alternatively, an acceleration sensor / inertial sensor, possibly with a magnetometer, is used, which the patient wears or which is integrated into the patient's walking aids. Inertial-sensor-based detection of falls or, more generally, of "falling to the floor" / falling objects is described, for example, in US7450024 and US8279060.
- the sensors used here can either transmit the determined acceleration to the service robot 17 via a wireless transmission technology such as RFID or Bluetooth, whereupon the service robot 17 determines whether the determined acceleration has exceeded a threshold value.
- alternatively, the threshold value determination takes place directly in the sensor system, which then merely transmits the information as to whether a threshold value has been exceeded.
- the threshold value of the acceleration can relate to the maximum of the acceleration, to the angle of the acceleration and / or to a combination of these values.
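- a hedged sketch of such a magnitude threshold check; the 2.5 g limit is illustrative, not from the disclosure:

```python
import math

G = 9.81
THRESHOLD = 2.5 * G   # assumed fall threshold in m/s^2

def exceeds_threshold(ax, ay, az):
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)  # maximum-of-acceleration variant
    return magnitude > THRESHOLD

# sample transmitted via Bluetooth from the crutch-mounted sensor
print(exceeds_threshold(3.0, -1.2, 28.0))  # True: |a| ~ 28.2 m/s^2 > 24.5 m/s^2
```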
- a movement sequence classification 770 can be carried out on the basis of the data of an exercise recorded and evaluated by the service robot. These data are, on the one hand, time-variant data such as distance traveled, speed, standing time, swing leg phase, as well as the exercises to be completed according to the exercise plan, and on the other hand time-invariant data, i.e. patient data such as age, height, possibly person weight, type of operation, day of the operation performed, etc. (shown in FIG. 11). These data are all collected and fed into the learning module memory 192 of the learning module 190, where they accumulate over time, form a historical database and are evaluated from time to time.
- the evaluation assumes that over the period of a patient's stay in the clinic (from the time of the operation to discharge), the movement sequence, which initially shows many disorders, should increasingly approach the physiological movement sequence; i.e., ideally the exercise should bring about an improvement, which is reflected in normalizing evaluation parameters.
- towards discharge, the values for distance traveled, speed and duration of the exercise should increase significantly, and the frequency of corrections should decrease.
- the values for standing time and swing leg phase must converge and normalize over the course. This in turn depends on the exercises that the patient completes according to the exercise plan, but also on patient data such as age, comorbidities, etc.
- This sequence of movements can be standardized well and is defined by the interaction of various movement parameters or features over time, e.g. the placement of the UAGS or walking aids relative to the feet, the respective sequence of steps, the step length, etc.
- the movement sequence characteristics are those that are classified and that act as dependent variables, while the results of the exercise (e.g. FIG. 11) that follow the recorded movement sequence characteristics, the patient data and the exercise plans act as influencing variables (independent variables) that determine the classification.
- since the movement features are metrically available, this approach is also called multi-target regression. Two procedures are possible: a) transform the problem by estimating the dependent variables independently of each other, or b) adapt the algorithms in such a way that a simultaneous estimation takes place.
- the latter approach takes into account the interdependencies of the dependent variables, which better reflects the reality of the movement.
- for approach a), decision trees or GLM estimates (general linear models) can be used, for example.
- for approach b), MRTs (multivariate regression trees) can be used, for example.
- the CLUS software package can also be used, which uses decision trees in the context of predictive clustering.
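- a minimal sketch of approach b) with a multi-output decision tree on synthetic data (feature and target choices are assumptions; scikit-learn trees estimate several dependent variables simultaneously):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # e.g. age, days since surgery, exercise dose, speed
Y = np.column_stack([
    X[:, 1] * 0.5 + rng.normal(scale=0.1, size=200),                 # stride length
    X[:, 1] * 0.3 + X[:, 3] * 0.2 + rng.normal(scale=0.1, size=200), # standing time
])

model = DecisionTreeRegressor(max_depth=5).fit(X, Y)  # multi-output natively
print(model.predict(X[:1]))  # both dependent variables in one simultaneous estimate
```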
- the results flow into the movement sequence correction, such as the decision classification 775, and the output of the movement sequence correction 450.
- the two-point gait is used either immediately after the operation (depending on the instructions of the surgeon) or after the three-point gait, once the patient has recovered further. Ideally, in the two-point gait one leg and the corresponding contralateral support are placed forward simultaneously and at approximately the same height.
- evaluations take place analogously to those for the three-point gait.
- the main difference is that information about the leg 1805 to be protected is not necessarily required.
- in step 1856 it is first detected whether a UAGS and the contralateral leg are in front in the sagittal plane.
- a perpendicular to the sagittal plane is formed 1858 from the UAGS end point 1970 touching the ground and from the contralateral ankle point 1950, and the distance of these perpendiculars from one another in the sagittal plane is determined 1859.
- the minimum of the distance between the UAGS end point and the contralateral base point can be determined and evaluated. In step 1886, this distance is used to assess the extent of the deviation, or whether the two-point gait is being carried out correctly (step 1888).
- the determined distances can be evaluated against threshold values. These data then flow into the movement sequence classification 770, and instructions are assigned to detected deviations in the movements, for example in the decision classification 775.
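- a hedged geometric sketch of steps 1856-1859: project both ground-contact points onto the walking direction and compare their offset (coordinates and the 15 cm threshold are illustrative):

```python
import numpy as np

def sagittal_offset(uags_end, ankle, walk_dir):
    """All arguments are 3-D points/vectors in map coordinates (metres)."""
    d = np.asarray(walk_dir, dtype=float)
    d /= np.linalg.norm(d)   # unit vector along the walking (sagittal) axis
    # signed distance between the two points along the sagittal axis
    return float(np.dot(np.asarray(uags_end) - np.asarray(ankle), d))

offset = sagittal_offset([1.90, 0.30, 0.0], [1.85, -0.05, 0.1], [1.0, 0.0, 0.0])
print(abs(offset) < 0.15)  # True: placed "at approximately the same height"
```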
- the service robot 17 is equipped with a projection unit by means of which the service robot 17 can project instructions to the patient, for example on the floor.
- the instructions may include information on speed, posture and route (turning around, turning off, etc.) and may be textual in nature and / or based on pictograms, including traffic signs (such as stop).
- the service robot has a projection unit, for example a commercially available projector.
- the projector has a defined projection surface, i.e. the projected area lies, for example, within defined areas on the floor, for example at a distance of 2 to 3 m from the service robot and with a width of 1.2 m. If the distance between the service robot and an obstacle, such as a wall, is smaller than the distance at which the service robot projects onto the floor, the projection would fall at least partly onto the obstacle, which can impair the legibility of the projected content.
- the service robot therefore carries out a comparison with its sensor system for obstacle detection.
- this can be carried out on the basis of at least one map which the robot generates dynamically from its surroundings and which contains fixed and possibly movable obstacles.
- the sensor data can also be evaluated directly, checking that no obstacles are detected within the projection area.
- the sensors used here can be LIDAR, camera, ultrasound, infrared and / or radar sensors.
- the service robot is also configured so that it can adjust the type of output based on the obstacles found. If an output is necessary but the projection surface is at least partially covered by an obstacle, the service robot can instead select, for example, the display and / or the voice output. Such a projection can replace or complement many of the display and / or voice outputs mentioned elsewhere in this document.
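- a minimal sketch of such a check against an occupancy grid (cell size, rectangle and the fixed orientation are assumptions):

```python
import numpy as np

def projection_area_free(grid, robot_rc, cell_m=0.05,
                         near_m=2.0, far_m=3.0, width_m=1.2):
    """grid: 2-D bool array, True = obstacle; robot_rc: (row, col) of the robot,
    assumed to face along increasing columns."""
    r, c = robot_rc
    half = int(width_m / 2 / cell_m)
    c0, c1 = c + int(near_m / cell_m), c + int(far_m / cell_m)
    patch = grid[r - half:r + half + 1, c0:c1 + 1]  # the projection rectangle
    return not patch.any()

grid = np.zeros((100, 100), dtype=bool)
grid[50, 70] = True                           # obstacle 2.5 m ahead of the robot
print(projection_area_free(grid, (50, 20)))   # False -> fall back to display/voice
```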
- This example comprises a computer-implemented method for detecting a movement sequence of a person, comprising the contactless detection of the person over time, the creation of a skeleton model of the person, and the evaluation of the articulation points and / or direction vectors of the limbs extracted from the skeleton model for the classification of the person's movement sequence.
- the movement sequence is classified, for example, into two-point or three-point gait.
- instructions are given to the person as the output of a movement sequence correction 450, which takes place, for example, via a voice output, a projection and / or a display.
- Deviations in the course of movement are defined, for example, by rules stored in a memory.
- these deviations are determined, for example, as deviations in the symmetry of the movement sequence. There is a minimum period of time between individual outputs of the movement sequence correction instruction, which is stored in a memory and in one aspect depends on the number of corrections already output.
- forearm crutches are identified in one aspect and evaluated in terms of time and space, in particular the end points of UAGS 1970 that touch the ground.
- at least one end point of the UAGS 1970 is evaluated in relation to at least one ankle point 1950 in the sagittal plane.
- the distance between the ankle point 1950 and the end point of the UAGS 1970 is evaluated and compared with the threshold value stored in a memory.
- a line can be determined between the UAGS and the distance of this line from an ankle point 1950, the ankle point 1950 being, for example, the leg joint point of a leg which is to be protected.
- alternatively and / or additionally, the distance from an ankle point 1950 to the end point 1970 of a contralateral UAGS in the sagittal plane is evaluated. This is done, for example, for the leg and the UAGS that are in front in the person's direction of walking. This evaluation between the ankle point 1950 and the UAGS takes place, for example, at the moment when the foot in question and the UAGS touch the ground.
- the method determines person-related parameters, for example the stride length, standing time, track width, the distance between the UAGS in the frontal plane, the trunk inclination, head inclination and / or joint flexion and / or joint extension and classifies these in one aspect.
- the method captures these person-related parameters in one aspect within at least one gait cycle of the detected person.
- deviations from the symmetry of the movement sequence are determined and evaluated within the gait cycle.
- the method differentiates between physiological and deviating movement sequences. In one aspect, this can be done using the F1 score defined above.
- the method may further determine the duration of an exercise.
- Example 35 Method for detection of a forearm crutch
- This example includes a computer-implemented method for evaluating the position of a forearm crutch (UAGS) relative to the position of at least one ankle point 1950 of a person.
- the person is recorded over time, for example without contact.
- a skeleton model of the person is created 725.
- Sensor data are evaluated in the spatial environment of at least one wrist point for the presence of UAGS.
- a candidate region is selected 745. This can, for example, extend essentially downward from the wrist points. Then, for example, point clouds in the vicinity of the wrist points are recorded and compared with data stored in a memory describing, for example, a shape approximately parallel to the orientation of the UAGS.
- model assumptions can also be taken into account which relate, for example, to the shape 750.
- a fault-tolerant segmentation algorithm 755 can be used to detect the UAGS.
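- a hedged sketch of such a candidate-region test: collect the point cloud below a wrist joint and accept a near-vertical elongated cluster as a crutch candidate (the radius, point count and verticality thresholds are assumptions):

```python
import numpy as np

def crutch_axis(points, wrist, radius=0.25):
    """points: (n, 3) point cloud; wrist: 3-D wrist joint position (metres)."""
    pts = points[np.linalg.norm(points[:, :2] - wrist[:2], axis=1) < radius]
    pts = pts[pts[:, 2] < wrist[2]]        # candidate region: below the wrist
    if len(pts) < 30:
        return None                        # too sparse to be a crutch
    centred = pts - pts.mean(axis=0)
    # principal axis of the cluster = direction of largest variance
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0]
    return axis if abs(axis[2]) > 0.8 else None  # near-vertical -> crutch-like

cloud = np.random.rand(500, 3)             # placeholder point cloud
print(crutch_axis(cloud, wrist=np.array([0.5, 0.5, 1.0])))
```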
- the data for the point clouds are recorded by at least one sensor that detects the person without contact, for example a 2D camera 2084, a 3D camera, a LIDAR 2083, a radar and / or an ultrasound sensor. If more than one sensor is used, the data are synchronized in time and merged. In one aspect, the position of the UAGS at the point where the UAGS touches the ground (in the sense of the end point) is evaluated.
- this location is then evaluated, for example, relative to the position of an ankle point 1950, e.g. at the time when the foot touches the floor.
- the evaluation takes place, for example, in the sagittal plane; in one aspect, the distance between the ankle point 1950 and the end point of the UAGS 1970 is determined.
- the two-point gait of the person or the three-point gait of the person is then evaluated.
- This example comprises a service robot with at least one computer, at least one memory 10, at least one camera, at least one device for detecting obstacles in the close range, an odometry unit 2081, a display, e.g. a touch display 2087, a module for 2D and / or 3D detection of the environment 2061, a map module 2061b, a self-localization module 2062, and a module for controlling and / or initiating the automatic charging of a battery.
- This service robot can alternatively and / or additionally comprise a mapping module 2061a, a module for metric path planning 2064, a module for detecting a self-blockade 2068, a module for addressing users 2066, a module for determining an optimal distance to a user 2067, a spatial planning module 2060r, a module for determining a waiting position 2069, a module for laser-based person tracking 2043, a module for person re-identification 2044, a module for motion evaluation 2050, a movement sequence extraction module 2051, a movement sequence evaluation module 2052, a module for movement correction 2022, a module for storing patient parameters 2014, a module for storing an exercise plan 2012, and / or a module for motion planning 2065.
- the service robot can have an RFID interface for data exchange 2089.
- the motion planning module 2065 uses information from the metric path planning 2064, for example to determine an optimal path of the robot, taking into account various target or cost functions.
- in one aspect, the module for motion planning 2065 can estimate the expected path of the patient.
- a person can be re-identified (module 2044), for example, by means of color patterns, size parameters of the person and / or a classification of the person's gait pattern.
- the movements recorded, evaluated and corrected can be gait sequences.
- Example 37 Anomaly detection in gait behavior
- This example includes a computer-implemented method for anomaly detection in a person's motion sequence, and has a feature extraction 720, a feature classification 765, and a motion sequence classification 770, wherein the feature extraction 720 is based on a person's skeleton model.
- the feature classification 765 is based on the evaluation of space-time parameters for articulation points of the person and / or direction vectors between articulation points. Anomalies are characterized, for example, by the amplitude height, the positions of minima and / or maxima and / or turning points, which are determined by observing the articulation points over time. In one aspect, anomalies are detected via the evaluation of a gait cycle. For example, the stride length, standing time, track width, trunk inclination, head inclination and / or joint flexion are recorded and evaluated. In one aspect, the position of the ankle points 1950 is additionally evaluated relative to detected walking aids, for example forearm crutches.
- the evaluation can be carried out, for example, by determining the distance in the sagittal plane.
- the movement sequence classification 770 comprises the evaluation of at least two parameters such as stride length, standing time, track width, trunk or head inclination, joint flexion and / or the ratio of the position of ankle points 1950 and walking aids.
- an anomaly is identified by the deviation of determined parameters from classification parameters, such as defined threshold values; these can be, for example, values that were generated by evaluating previously recorded movement sequences.
- classification can be done through machine learning techniques. It takes place in an optional, additional aspect in the event of a deviation from the classified
- the data from which the skeleton model is created is recorded, for example as a video sequence.
- the sequences are provided with a time stamp, for example when an anomaly is detected. The video sequence is then, for example, transmitted to a memory. This memory can be located in a learning module 190. Video sequences can be prioritized in the memory; in one aspect, this prioritization depends on the type and / or number of anomalies detected. In one aspect, an anomaly can also be a deviation of the recorded, completed exercises from an exercise plan, such as the duration of an exercise, the speed traveled, the type of use of the supports / walking aids, etc.
- Example 38 Automated exercise plan adjustment to determine the transition from three-point to two-point gait
- This example comprises a system for the automated determination of the transition from the three-point to the two-point gait, with a processor and a memory, the processor and the memory communicating via an interface.
- the exercise plan adaptation represents the transition from the three-point to the two-point gait.
- an automated intervention takes place, for example an automated adaptation of a exercise plan and / or the transmission of a message via an interface.
- the system or computer-implemented method compares, for example, recorded values with values stored in the memory.
- the values can be person-related parameters such as the speed, standing time, swing leg phase, stride length, etc. of a person, which were detected, for example, by means of a contactless sensor.
- in one aspect, the person-related parameters are a combination of speed, standing time, swing leg phase and / or stride length.
- the system or computer-implemented method can, for example, evaluate the symmetry of the movement sequence.
- the detected deviations from the physiological movement sequence are recorded and evaluated in terms of their type and frequency.
- correction notices stored in the memory are also taken into account, which were output during the recording of the person's exercise (for example by the system itself or by a system connected via an interface that has at least one sensor for the contactless detection of a person), for example the type and frequency of these correction notices.
- the values were recorded in one aspect over time, for example over several exercises.
- the system or the computer-implemented method can, for example, carry out a difference analysis over several exercises in order to determine the person's exercise progress. In one aspect, the determined exercise progress is compared with previously determined exercise progress of other people and / or with reference data stored in a memory.
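- a hedged sketch of such a transition rule; the combined thresholds are illustrative assumptions, and the result triggers a notification rather than an automatic release:

```python
def suggest_two_point(speed_mps, stance_symmetry, corrections_per_100m,
                      ref_speed=0.9):
    """stance_symmetry: 1.0 = perfectly symmetric standing times of both legs."""
    return (speed_mps >= 0.8 * ref_speed
            and stance_symmetry >= 0.9
            and corrections_per_100m <= 1.0)

if suggest_two_point(0.85, 0.93, 0.5):
    print("notify therapist / propose exercise plan adaptation")
```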
- Example 39 Method for independently choosing a training plan configuration taking historical data into account
- This example includes a computer-implemented method for a person to independently choose exercise plan configurations.
- the exercise plan configuration desired by the person is compared with releases within an exercise plan and can only be selected if a release exists.
- the release can take place on the basis of an automated training plan adjustment.
- data from exercises previously completed by the person, recorded and evaluated by means of a contactless sensor, are compared with historical data.
- a training plan configuration can be released if there is a high degree of agreement between the recorded data and the historical data.
- the exercise plan can, for example
- the computer-implemented method is implemented in a service robot, for example one for the acquisition and evaluation of gait training.
- Example 40 System and / or computer-implemented method for the automated adaptation of a training plan
- This example comprises a system for the automated adaptation of a training plan by means of a wireless interface, at least one processor and at least one memory, which contains at least one database in which a training plan is stored.
- the system can receive data via the wireless interface from a second system, which is equipped with a sensor for the contactless detection of a person, e.g. a camera, records and evaluates exercises stored in the exercise plan, and makes these recorded exercises available to the first system.
- the system can receive information from a third system in which exercise plan adjustments are made.
- a computer-implemented method can likewise be used for the adaptation of an exercise plan.
- the exercise plan may include information about the person, including their agility, type of surgery, location of surgery, height, and / or person weight.
- the adaptation of the exercise plan can include the distance to be covered, the type of use of the supports / walking aids etc.
- the recorded data are evaluated, for example, as person-related parameters.
- the person-related parameters can include the standing time, track width, swing leg phase, upper-body inclination, gaze direction and / or the stride length of a person, for example symmetry values of the stride length.
- the recorded data can include the temporal and / or local sequence of the use of walking aids such as UAGS.
- This recorded data with the personal parameters can supplement previously obtained historical data within a database.
- This recorded data and, for example, the data transmitted by the second system (with the camera for recording a person) and from the third system (for adapting the exercise plan) can be evaluated by means of machine learning and / or neural networks.
- Personal data such as the age of the patient, his height, person weight, comorbidities, type of operation, etc., as well as the data recorded during the recording of the exercises, can be used as the input variable for these evaluations.
- the settings of the exercise plan for the exercise at time t or at a later time t+1 can be determined as the output variable. The node weights determined by these calculations can be transmitted to the second system (with the camera for recording a person) and / or the third system (for adapting the exercise plan). These node weights can, for example, be used to automatically adapt the exercise plan.
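- an illustrative sketch of such a network on synthetic data (features, targets and layer sizes are assumptions; the trained coefficients play the role of the node weights the text describes transmitting onward):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))  # age, height, weight, surgery type, speed, symmetry
y = np.column_stack([
    X[:, 4] * 50 + 100,              # target distance for the next exercise (m)
    (X[:, 5] > 0).astype(float),     # 1 = propose releasing the two-point gait
])

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000).fit(X, y)
plan_t1 = net.predict(X[:1])   # settings proposal for time t+1
weights = net.coefs_           # node weights, transferable to the other systems
```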
- the second system is a service robot that performs gait training, for example.
- Example 41 System for automated training plan adjustment
- This example comprises a first system for the automated adaptation of an exercise plan, which has a patient administration module 160, a wireless interface to a second system with at least one camera, a processor and a memory, and an interface to a learning module 190.
- a second system can be a service robot 17, for example.
- the first system can be connected to a set of rules 150 via an interface.
- an exercise plan is stored in the patient administration module 160 and can be configured via a terminal 12.
- the exercise plan can take into account, for example, a person's agility, the type of operation performed and / or the location of the operation (such as region and side, e.g. left knee).
- Agility includes a person's general condition and / or their comorbidities.
- the exercise plan can include exercises based on the two-point walk, the three-point walk and / or climbing stairs.
- the second system records the duration of the exercises, the distance covered, the frequency of the exercises and / or their intensity, etc.
- the exercise plan can be transmitted to the second system via an interface and stored there.
- the second system can, for example, carry out an exercise with a person, such as gait training in one aspect, and / or output a movement sequence correction 450, for example triggering a gait correction.
- the recorded data are saved and evaluated over time.
- the data can, for example, be transmitted to a learning module 190 via an interface.
- the learning module 190 holds historical data in a learning module memory, which are supplemented by the evaluated data transmitted via the interface.
- the historical data are supplemented by settings made in the exercise plan. The settings made can concern, for example, the exercise plan and / or exercise plan adjustments that were made on the basis of the recorded data.
- machine learning methods and / or neural networks can be used, for example, to determine the node weights for the exercise plan.
- the evaluation of the exercise and optionally also the components of the exercise plan are used as the input variable for determining the node weights, and configurations of the exercise parameters are used as the output parameter.
- the node weights determined here are transferred to a set of rules. There they can, for example, replace node weights already stored for the training plan adjustment. Furthermore, regular updates of the exercise plan can be made.
- suggestions can be made for a training plan adjustment, for example based on the evaluation of historical data.
- these suggestions can be made available via a terminal.
- the suggestions are made on the basis of data recorded by the system with at least one camera, a processor and a memory and / or the configuration of a training plan.
- the exercise plan in turn, can include exercises for gait training.
- Example 42 Computer-implemented procedure for motion sequence evaluation
- the example comprises a computer-implemented method for evaluating the movements of a person, comprising: a feature extraction 720 of the recorded movements, for example of the movement sequence; a classification of the recorded movements, for example a movement sequence classification 770; a feature classification 765 of the recorded movements; an assignment of instructions to detected deviations of the movements, for example a decision classification 775; and an output of the movement sequence correction 450 of the movements, for example the output of a gait correction.
- the feature classification 765 includes, for example, the evaluation of person-related parameters such as the stride length, swing leg phase, standing time, etc. of the person being recorded.
- the movement sequence classification 770 comprises, for example, a combined consideration of different limbs and / or the use of forearm crutches.
- the output of the movement sequence correction (for example an output of a gait correction) comprises the output of instructions to the person, for example via a loudspeaker, a projection and / or a display.
- the recorded movements can, for example, be exercises as part of gait training. The output of the movement sequence correction 450 can also take place in a prioritized manner: in one aspect, different deviations which have different priority scores are compared within a time interval, and only the deviation which has the highest priority score is output in each case.
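- a minimal sketch of this prioritized output with a minimum pause between corrections (the pause length and message texts are assumptions):

```python
MIN_PAUSE_S = 10.0

def next_correction(deviations, last_output_t, now):
    """deviations: list of (priority_score, message) detected in this interval."""
    if not deviations or now - last_output_t < MIN_PAUSE_S:
        return None                 # respect the minimum pause between outputs
    return max(deviations)[1]       # highest priority score wins

msg = next_correction([(2, "lengthen stride"), (5, "straighten upper body")],
                      last_output_t=0.0, now=12.0)
print(msg)  # straighten upper body
```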
- the data of the movement sequence classification 770, the feature classification 765 and / or the movement sequence correction (for example a decision classification 775), as well as the evaluations of the recorded movements, such as gait training, can be stored together with corresponding historical data.
- the recorded and evaluated data, i.e. the results of the movement sequence classification 770, the movement sequence correction such as the decision classification 775 and / or the raw data of the recordings, are transmitted to a learning module 190.
- the method also includes access to the data in the learning module, for example via a terminal, and a re-evaluation of the movement sequence classification (for example a gait sequence classification), a re-evaluation of the feature classification and / or a re-evaluation of the movement sequence correction, e.g. the decision classification, for example with regard to the output of a movement sequence correction 450.
- This reassessment can be done manually, for example.
- the at least one manual rule adaptation is transferred to a rule set 150.
- in the rule set 150, for example, an update of the movement sequence classification (for example the gait sequence classification), of the feature classification and / or of the movement sequence correction such as the decision classification then takes place.
- the updated rules of the movement sequence classification (for example the gait sequence classification), of the feature classification and / or of the movement sequence correction (for example the decision classification) are transmitted to the system which records the movements of the person.
- these motion recordings can be anonymized in an optional step, for example by pixelating the faces.
- the video sequences can be tagged, in particular with regard to a deviating and / or physiological sequence of the movement sequence.
- Time stamps can be assigned, for example.
- an automated reassessment of the movement sequence and / or an automated reassessment of the movement sequence correction, such as the decision classification can be carried out.
- This automated reassessment takes place, for example, by means of machine learning and / or neural networks.
- the evaluations of the recorded movements, for example of an exercise such as gait training, serve as input variables, and the movement sequence classification (for example a gait sequence classification), the feature classification and / or the movement correction (for example the decision classification) serve as output variables.
- the node weights determined in the automated reassessment for the feature classification, the movement sequence classification, for example a gait sequence classification and / or the movement sequence correction, for example a decision classification, can be transmitted to the rule set 150, for example.
- there, the node weights of the movement sequence correction, for example the decision classification, are updated and, for example, also the corresponding rules of the movement sequence classification (for example the gait sequence classification), of the feature classification and / or of the movement correction (for example the decision classification).
- the updated node weights and / or rules are transmitted to the system that detects the person.
- Example 43 Service robot with the ability to independently select an exercise area
- This example shows a service robot 17 with at least one optical sensor unit, a radar sensor and / or an ultrasonic sensor, and at least one memory that contains a map of the area surrounding the service robot, wherein the service robot 17 detects and evaluates the number and / or type of obstacles in its area in order to identify subareas within the map that have a low density of obstacles.
- for example, the areas in which the service robot 17 mainly moves are stored in the map.
- the obstacles are recorded over time, for example in the daily routine, weekly routine and / or monthly routine.
- the obstacles can be dynamic and / or static obstacles, for example.
- the dimensions of the obstacles can also be recorded, for example. The dimension can be determined, for example, absolutely and / or relative to the width of an aisle in which the service robot moves.
- the obstacles are recorded during an exercise during which the service robot 17 moves in the subareas.
- the service robot or, in an alternative aspect, an external system into which the data have been transferred determines the obstacle density over time.
- the data are then transferred from the external system back to the service robot 17.
- the evaluation of the data includes, for example, clustering.
- time intervals can be defined in which certain subareas have a low obstacle density.
- forecasts can be made about a future density of obstacles.
- the service robot 17 can, for example, perform tasks to be completed especially in those subareas which have a low density of obstacles.
- the tasks to be performed can depend on density threshold values.
- the service robot 17 can, for example, make route selection decisions based on the density determined.
- Example 44 Computer-implemented method for recognizing the load on a leg
- This example comprises a computer-implemented method for recognizing the load on a leg of a person and has the contactless detection of the movements of the person.
- the contactless detection can be carried out using a camera.
- a skeleton model is created and joint points and / or direction vectors between the joint points are evaluated in terms of time and space.
- a feature extraction 720, a feature classification 765 and / or a movement sequence classification 770 can be carried out.
- walking aids are also recorded and the course of the walking aids is evaluated relative to the person's ankle points 1950.
- a leg to be relieved is placed in the sagittal plane near at least one forearm crutch (UAGS).
- the leg to be relieved is close to the line that connects the UAGS.
- the gait sequence of the person who is supposed to relieve the leg is the three-point gait.
- the recorded movements are compared with stored rules. A decision classification 775 evaluates, for example, which of the detected deviations should result in a correction output, and a movement sequence correction 450 is output, for example by way of a voice output of instructions to the person.
- the angle between the forearm and upper arm, extension and / or flexion angle of the hip and knee and / or the inclination of the upper body (in the frontal and sagittal planes) are evaluated.
- relief of a leg is detected, for example, when the angle between the forearm and upper arm is less than 170°, for example less than 160°, and / or the knee angle of the leg that can only be loaded to a limited extent is, for example, less than 172°.
- Example 45 Walking aid with communication device
- the following describes a walking aid that is equipped with a power source, a control unit, a wireless interface and a button which, when pressed, transmits a signal via the wireless interface.
- the recipient of the transmitted signal can be, for example, a service robot.
- the walking aid is a forearm crutch.
- the button is located at the distal end of the T-shaped handle that is gripped by the patient's hands.
- the control unit is configured in such a way that different signals can be transmitted by pressing the button.
- the pressing frequency and / or the number of press events can represent different signals, for example.
- triggering the button triggers a re-identification of the patient by the service robot.
- Example 46 Method for evaluating a three-point gait
- the example comprises a computer-implemented method for evaluating a three-point gait with the following steps:
- it is determined, for example, whether the positions of the contact points of the UAGS on the floor lie approximately parallel to the frontal plane of the person.
- the frontal plane can be determined, for example, from the detected direction of movement of the person. It is then determined, for example, whether the foot of the leg to be protected is positioned between the UAGS.
- a connecting line is determined between the UAGS end points 1970.
- the positions of the UAGS end points 1970 and of the ankle point 1950 of the leg to be spared are evaluated with respect to one another in the sagittal plane, and the distances between these points are evaluated.
- further evaluated features include: the standing time; the time of placement of the UAGS and / or of the feet or ankle points 1950; the distance between UAGS end points 1970 in contact with the ground in the frontal plane; and the distance between UAGS end points 1970 in contact with the ground and ankle points 1950 in the sagittal and / or frontal plane. The last two aspects are used to evaluate the UAGS use relative to the body position, e.g. whether the supports are set too close to or too far from the body (sideways or forwards).
- a movement sequence classification 770 can take place, in which at least two features are evaluated in combination and compared with stored rules. If deviations in the movement sequence are recognized, the decision matrix for the detected deviation is used to assess whether a movement sequence correction 450 is output.
- the example further discloses a computer-implemented method for evaluating a two-point gait, comprising the detection of articulation points, direction vectors between articulation points and / or forearm crutches (UAGS) as space-time parameters.
- the distance of the perpendiculars from one another in the sagittal plane, or the distance between the two points in the sagittal plane, can be evaluated, and / or it can be assessed whether there is contralateral use of UAGS and leg. This evaluation is done, for example, by comparing threshold values. In addition, the further aspects from the previous example (last two paragraphs) can be used as further steps to evaluate the two-point gait.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019100228.1A DE102019100228A1 (de) | 2019-01-07 | 2019-01-07 | Serviceroboter |
DE102019116848 | 2019-06-21 | ||
PCT/EP2020/050200 WO2020144175A1 (de) | 2019-01-07 | 2020-01-07 | Verfahren und system zur erfassung des bewegungsablaufs einer person |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3908969A1 (de) | 2021-11-17 |
Family
ID=69165347
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20700648.7A Pending EP3908969A1 (de) | 2019-01-07 | 2020-01-07 | Verfahren und system zur erfassung des bewegungsablaufs einer person |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220108561A1 (de) |
EP (1) | EP3908969A1 (de) |
CN (1) | CN113490945A (de) |
DE (1) | DE112020000351A5 (de) |
WO (1) | WO2020144175A1 (de) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11794073B2 (en) | 2021-02-03 | 2023-10-24 | Altis Movement Technologies, Inc. | System and method for generating movement based instruction |
CN112926514A (zh) * | 2021-03-26 | 2021-06-08 | 哈尔滨工业大学(威海) | 一种多目标检测及跟踪方法、系统、存储介质及应用 |
DE102021120737A1 (de) * | 2021-08-10 | 2023-02-16 | Fabian Höger | Diagnoseverfahren und -vorrichtung |
US20230306616A1 (en) * | 2022-03-25 | 2023-09-28 | Logistics and Supply Chain MultiTech R&D Centre Limited | Device and method for capturing and analyzing a motion of a user |
WO2023245157A1 (en) * | 2022-06-16 | 2023-12-21 | Poze Ai Inc. | Pose training and evaluation system |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7242306B2 (en) | 2001-05-08 | 2007-07-10 | Hill-Rom Services, Inc. | Article locating and tracking apparatus and method |
CA2605239A1 (en) * | 2005-05-02 | 2006-11-09 | University Of Virginia Patent Foundation | Systems, devices, and methods for interpreting movement |
US8279060B2 (en) | 2009-08-11 | 2012-10-02 | Empire Technology Development Llc | Wireless monitoring system and method |
US8460220B2 (en) * | 2009-12-18 | 2013-06-11 | General Electric Company | System and method for monitoring the gait characteristics of a group of individuals |
CN101862245A (zh) | 2010-05-28 | 2010-10-20 | 上海市古美高级中学 | 医院服务机器人 |
CN203338133U (zh) | 2012-11-12 | 2013-12-11 | 常熟理工学院 | 一种智能医疗服务机器人 |
JP2015061577A (ja) * | 2013-01-18 | 2015-04-02 | 株式会社東芝 | 動作情報処理装置 |
CN203527474U (zh) | 2013-07-09 | 2014-04-09 | 常熟理工学院 | 一种助老服务机器人 |
US10448867B2 (en) * | 2014-09-05 | 2019-10-22 | Vision Service Plan | Wearable gait monitoring apparatus, systems, and related methods |
GB2532194A (en) * | 2014-11-04 | 2016-05-18 | Nokia Technologies Oy | A method and an apparatus for automatic segmentation of an object |
CN204772554U (zh) | 2015-06-30 | 2015-11-18 | 广州绿松生物科技有限公司 | 智能健康服务机器人 |
CN104889994A (zh) | 2015-06-30 | 2015-09-09 | 广州绿松生物科技有限公司 | 智能健康服务机器人 |
CN105078450B (zh) | 2015-08-24 | 2018-02-27 | 华南理工大学 | 一种可实现脑电检测的健康服务机器人 |
CN105078445B (zh) | 2015-08-24 | 2018-11-02 | 华南理工大学 | 基于健康服务机器人的老年人健康服务系统 |
CN105082149B (zh) | 2015-08-24 | 2017-10-20 | 华南理工大学 | 一种可实现血氧饱和度检测的健康服务机器人 |
CN105078449B (zh) | 2015-08-24 | 2018-07-20 | 华南理工大学 | 基于健康服务机器人的老年痴呆症监护系统 |
CN205950753U (zh) | 2016-05-14 | 2017-02-15 | 深圳市华科安测信息技术有限公司 | 医院导诊服务机器人 |
CN107544266A (zh) | 2016-06-28 | 2018-01-05 | 广州零号软件科技有限公司 | 家庭健康服务机器人 |
CN106407715A (zh) | 2016-10-19 | 2017-02-15 | 上海派毅智能科技有限公司 | 一种智能服务机器人健康辨识系统及方法 |
CN108073104A (zh) | 2016-11-10 | 2018-05-25 | 贺州学院 | 基于stm32嵌入式多用途护理服务机器人 |
CN106709254B (zh) | 2016-12-29 | 2019-06-21 | 天津中科智能识别产业技术研究院有限公司 | 一种医疗诊断机器人系统 |
CN106671105A (zh) | 2017-01-17 | 2017-05-17 | 五邑大学 | 面向老人的智能陪护机器人 |
CN206833244U (zh) | 2017-04-21 | 2018-01-02 | 山东大学 | 一种基于云平台的医院服务机器人 |
CN107518989A (zh) | 2017-10-21 | 2017-12-29 | 长沙展朔轩兴信息科技有限公司 | 医院服务机器人 |
CN107598943A (zh) | 2017-10-30 | 2018-01-19 | 文杨 | 一种陪伴老人的机器人 |
CN108039193A (zh) | 2017-11-17 | 2018-05-15 | 哈尔滨工大服务机器人有限公司 | 一种自动生成体检报告的方法及装置 |
CN108053889A (zh) | 2017-12-20 | 2018-05-18 | 中国科学院合肥物质科学研究院 | 一种基于Agent技术的健康促进服务机器人 |
CN108422427A (zh) | 2018-03-21 | 2018-08-21 | 南通市巨久新材料科技有限公司 | 一种康复服务机器人 |
CN108968973A (zh) * | 2018-08-07 | 2018-12-11 | 南通大学 | 一种人体步态采集与分析系统及方法 |
- 2020
- 2020-01-07 WO PCT/EP2020/050200 patent/WO2020144175A1/de unknown
- 2020-01-07 CN CN202080013352.4A patent/CN113490945A/zh active Pending
- 2020-01-07 EP EP20700648.7A patent/EP3908969A1/de active Pending
- 2020-01-07 DE DE112020000351.5T patent/DE112020000351A5/de active Pending
- 2020-01-07 US US17/421,098 patent/US20220108561A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2020144175A1 (de) | 2020-07-16 |
DE112020000351A5 (de) | 2021-10-28 |
CN113490945A (zh) | 2021-10-08 |
US20220108561A1 (en) | 2022-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3908969A1 (de) | Verfahren und system zur erfassung des bewegungsablaufs einer person | |
EP4003164A1 (de) | System zur erfassung von bewegungsabläufen und/oder vitalparametern einer person | |
Downey et al. | Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping | |
Tang et al. | Towards BCI-actuated smart wheelchair system | |
Carlson et al. | Brain-controlled wheelchairs: a robotic architecture | |
US11304023B1 (en) | Enhanced hearing system | |
US20150320343A1 (en) | Motion information processing apparatus and method | |
Bai et al. | Development of a novel home based multi-scene upper limb rehabilitation training and evaluation system for post-stroke patients | |
US8900165B2 (en) | Balance training system | |
Fettrow et al. | Interdependence of balance mechanisms during bipedal locomotion | |
Scheidig et al. | May I keep an eye on your training? Gait assessment assisted by a mobile robot | |
Martinez-Martin et al. | Rehabilitation technology: assistance from hospital to home | |
Katevas | Mobile robotics in healthcare | |
EP3477656A1 (de) | Systeme und verfahren zur quantifizierung des posturalen gleichgewichts von benutzern in einer umgebung für erweiterte realität | |
Fiorini et al. | User profiling to enhance clinical assessment and human–robot interaction: A feasibility study | |
Gregori et al. | On the visuomotor behavior of amputees and able-bodied people during grasping | |
Scheidig et al. | Robot-assisted gait self-training: Assessing the level achieved | |
Nastac et al. | An AAL scenario involving automatic data collection and robotic manipulation | |
Tripathy et al. | Constrained particle filter for improving Kinect based measurements | |
DE102019100228A1 (de) | Serviceroboter | |
Rohmer et al. | Laser based driving assistance for smart robotic wheelchairs | |
Fiorini et al. | A robot-mediated assessment of tinetti balance scale for sarcopenia evaluation in frail elderly | |
Foresi et al. | Human-robot cooperation via brain computer interface in assistive scenario | |
Miro et al. | Development of a novel evidence-based automated powered mobility device competency assessment | |
Sayed et al. | Cognitive assessment in children through motion capture and computer vision: the cross-your-body task |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20210809 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: TEDIRO GMBH |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20230719 |
| 19U | Interruption of proceedings before grant | Effective date: 20231228 |