WO2021260388A1 - Method of monitoring mobility

Info

Publication number: WO2021260388A1
Application number: PCT/GB2021/051617
Authority: WIPO (PCT)
Prior art keywords: image, patient, proms, data, rom
Other languages: French (fr)
Inventor: Peter Bishop
Original Assignee: Agile Kinetic Limited
Application filed by Agile Kinetic Limited

Classifications

    • G16H 30/40 (Healthcare informatics): ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G06T 7/254 (Image analysis): Analysis of motion involving subtraction of images
    • A61B 5/11 (Diagnosis): Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • G06T 7/246 (Image analysis): Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 10/751 (Image or video recognition): Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 40/20 (Image or video recognition): Movements or behaviour, e.g. gesture recognition
    • G06V 40/25 (Image or video recognition): Recognition of walking or running movements, e.g. gait recognition
    • G06T 2207/20081 (Indexing scheme for image analysis): Training; Learning
    • G06T 2207/20084 (Indexing scheme for image analysis): Artificial neural networks [ANN]
    • G06T 2207/30004 (Indexing scheme for image analysis): Biomedical image processing
    • G06T 2207/30196 (Indexing scheme for image analysis): Human being; Person
    • G16H 10/60 (Healthcare informatics): ICT for patient-specific data, e.g. for electronic patient records


Abstract

The present invention relates to a contact-free method of monitoring a patient comprising the steps of: collecting, from the patient, patient reported outcome measures (PROMs); taking a first image of the patient in a first position; taking a second image of the patient in a second, different, position; performing an image analysis of the first image and second image such that a range of motion (ROM) value from the first position to the second position is obtained; and associating the ROM value with the PROMs data. The present invention further relates to a system for monitoring the movement of a patient, the system comprising a camera configured to take a first image of the patient in a first position, and a second image of the patient in a second position, an input device arranged to collect PROMs data from a patient, and a processing unit, wherein the processing unit is arranged to receive the first image, the second image, and the PROMs data, and analyse the first image and second image in order to obtain a ROM value and associate the ROM value with the PROMs data.

Description

Method of monitoring mobility
Field of the Invention
The present invention concerns a method of monitoring the mobility of a patient or user. More particularly, but not exclusively, this invention concerns the steps of acquiring images of a patient and performing an image analysis to determine a Range of Motion (ROM) value to provide an improved patient rehabilitation plan.
Background of the Invention
Following injury and/or orthopaedic surgery, patients may be required to attend multiple follow-up appointments with surgeons, physiotherapists, occupational therapists and/or other clinical professionals. During follow-up appointments, a patient’s range of motion (ROM) is measured using a physical device known as a goniometer. The measurements taken using a goniometer may be susceptible to human error. Information related to the patient’s levels of comfort whilst going about daily tasks and/or specific exercises may also be gathered during these appointments. These two types of information may be used to monitor and manage patient recovery, informing the rehabilitation strategy.
Attending clinical appointments can be time consuming and may be problematic if the patient is in discomfort or unable to physically attend. Also, in the time between appointments, small issues that arise may turn into larger, more complex, problems.
Current methods and devices for detecting bio-mechanical geometry are found in US2019/0287261. The document discloses a method including the steps of receiving a plurality of images with one or more target objects, and processing the images using a neural network system (NNS). Further, US2019/0287261 discloses a system wherein individuals can measure, record and assess their progress on an independent basis while recovering from an orthopaedic injury or procedure. However, the method, system and apparatus of US2019/0287261 require multiple images to be input and processed, which can be time consuming for the patient and computationally demanding. Furthermore, the system is designed to be used by individuals to measure, record and assess their own progress without any clinical or professional input. This may lead to the individual missing important triggers or signs in their recovery that clinical specialists may recognise.
Another device and method of mapping the trajectory of a part of the anatomy of the human or animal body is disclosed in GB2530754 and GB2551238. Both documents disclose a method to measure a range of motion of the anatomy of a human or animal patient. The method comprises the use of a sensor attached to part of the anatomy and receiving signals from the sensor about the angles of rotation of the anatomy. The sensor provides signals that may be mapped to an image space for display of the trajectory of the part of the anatomy. The disadvantage of using a physical sensor is that the anatomy of interest may not be easily accessible, or may pose difficulties for sensor attachment. The method also relies on having a working connection between the sensor and the data receiving means.
The present invention seeks to mitigate the above-mentioned problems. Alternatively or additionally, the present invention seeks to provide an improved method of monitoring mobility.
Summary of the Invention
The present invention provides a contact-free method of monitoring a patient comprising the steps of: collecting, from the patient, patient reported outcome measures (PROMs); taking a first image of the patient in a first position; taking a second image of the patient in a second, different, position; performing an image analysis of the first image and second image such that a range of motion (ROM) value from the first position to the second position is obtained; and associating the ROM value with the PROMs data.
By contact-free, it is intended that no physical contact is made with the patient, for example, in the form of the application of tracking devices or sensors to the body of the patient.
The method may allow a patient to be monitored with only two images being taken. Reducing the number of images required in order to monitor a patient may reduce pain and discomfort experienced by the patient during movement, in particular if the area being monitored is fragile or difficult to move.
The method of the present invention may rely solely on image analysis. This has the advantage that no direct contact with the patient is required. This avoids the disadvantages of conventional methods, which often require additional devices or sensors to be applied to the patient to track the patient’s motion, or for hardware such as depth sensing motion capture systems to be used.
The images being analysed in the method of the present invention are preferably two-dimensional images. Such images may be taken using a conventional camera, smart phone or the like. Further, such two-dimensional images may be extracted as still images from a video recording taken using a conventional video camera, smart phone or the like.
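As an illustration of extracting such still images, the short sketch below uses OpenCV to grab two frames from a recorded video; the file name and timestamps are assumptions for illustration only, not part of the invention.

```python
# A minimal sketch of extracting two still frames from a patient video
# using OpenCV; "mobility.mp4" and the timestamps are illustrative.
import cv2

def extract_frame(video_path: str, time_s: float):
    """Grab the frame nearest to time_s (seconds) as a BGR image."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, time_s * 1000.0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not read frame at {time_s}s from {video_path}")
    return frame

first_image = extract_frame("mobility.mp4", 0.5)   # patient in first position
second_image = extract_frame("mobility.mp4", 3.0)  # patient in second position
cv2.imwrite("first_position.jpg", first_image)
cv2.imwrite("second_position.jpg", second_image)
```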
PROMs data may include, but is not limited to, a pain score. The pain score may be linked to the movement of the patient between the first and second positions, or relate to a general level of pain felt over an extended period. PROMs data may be collected at the same time as the first image and second image are taken. PROMs data may also be collected before and/or after the first image and second image are taken.
For example, the PROMs data may comprise a pain score at the moments of taking the first image and second image. The PROMs data may additionally or alternatively be a reflective pain score, for example a pain score representing how the patient felt the day before the first image and second image were taken. PROMs data may be collected solely at the time the first image and second image are taken, or at intervals prior to taking the first and second images. For example, the patient may be prompted to record PROMs data at regular intervals during the day. Combining the ROM value from the image analysis with PROMs data provides an improved method of monitoring the patient without requiring in-person consultation meetings with a doctor or specialised clinician. The association of the ROM value and PROMs data may be analysed to provide advice on the most appropriate recovery plan for a patient. The recovery plan may be adjusted throughout the recovery period based on additional measurements being taken and updated ROM values and PROMs data being obtained.
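As a concrete illustration of how a ROM value might be associated with PROMs data in software, the sketch below defines a minimal data model using Python dataclasses; all field names are hypothetical and not taken from the patent.

```python
# One possible data model for associating a ROM measurement with PROMs
# data; the field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class PROMs:
    pain_score: int                    # e.g. 0 (no pain) to 10 (worst pain)
    reflective: bool = False           # True if the score describes a prior day
    recorded_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Measurement:
    rom_degrees: float                 # ROM value from the image analysis
    proms: PROMs                       # PROMs data associated with the ROM value
    first_image_path: Optional[str] = None
    second_image_path: Optional[str] = None

session = Measurement(rom_degrees=87.5, proms=PROMs(pain_score=3))
```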
The patient may be a human or an animal. When the patient is an animal, the PROMs data will be observed and determined by a third party based on the behaviour of the animal. The use of the term “patient” does not imply that a surgical intervention must have taken place. The patient may be any user looking to measure the range of motion of a movement, potentially with a view to increasing the range of motion of that movement. The method may be used independently by the patient or with the remote support of a coach, physiotherapist or other health or fitness professional.
The image analysis step may utilise a neural network. The image analysis step may utilise any suitable computer analysis software that identifies patterns, as will be understood by the skilled person.
The image analysis step may utilise a trained convolutional neural network. The method may comprise the step of undertaking a training period, in which the neural network is trained to analyse patient images to determine ROM values.
The trained convolutional neural network may use a learning mode for remembering and learning patterns found in input data. The convolutional neural network may rely upon training data, where the accuracy of the trained system is dependent upon the quality of the dataset used to train it. For example, by inputting multiple images of an object, the neural network places “weights” on recurring or familiar targets within the object. Upon the input of subsequent images of that object, the trained convolutional neural network modifies the weight of the targets each time. The trained convolutional neural network provides an advantage over other conventional computer analysis techniques as it is capable of capturing associations or discovering regular patterns within a set of images quickly and efficiently. In the present invention, the trained convolutional neural network is configured to operate at an optimal learning level.
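The patent does not name a specific network architecture; as one illustration, the sketch below runs an off-the-shelf pretrained CNN-based key-point detector (torchvision's Keypoint R-CNN) on a patient image, standing in for the trained convolutional neural network described above.

```python
# A sketch of detecting human key points with a pretrained CNN-based
# model; this stands in for the trained network the patent describes.
import torch
import torchvision

model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torchvision.io.read_image("first_position.jpg").float() / 255.0
with torch.no_grad():
    output = model([image])[0]

# (num_people, 17, 3) tensor: x, y, visibility for the 17 COCO key points
# (shoulders are indices 5/6, elbows 7/8, wrists 9/10).
keypoints = output["keypoints"]
print(keypoints[0])  # key points of the most confident person detection
```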
The image analysis step may comprise identifying one or more key points in the first image and second image, detecting those key points in each image, and comparing the positions of those key points between the first image and second image, in order to determine the ROM value of the patient moving from the first position to the second position. The relative positions of the key points in the first image may be analysed to indicate a joint angle in the first image. The relative positions of the key points in the second image may be analysed to indicate a joint angle in the second image. The joint angle obtained from the first image may be compared to the joint angle obtained from the second image in order to calculate the ROM value of the patient moving from the first position to the second position. The joint angles may be stored to allow comparison to later detected joint angles, thereby allowing the progress of the patient to be monitored. The joint angles may be stored and analysed to inform future treatment planning. The identification of one or more key points in the first image and second image may comprise the use of a computer vision application, trained to identify such key points on a patient. The computer vision application may be installed on a smart device such as a phone or tablet. The first and second images may be taken using a camera. The camera may form part of a smart device. The smart device may also comprise the computer vision application. The smart device may be arranged to perform the image analysis. Alternatively, the smart device may send the images to a computer vision application stored on a remote server, and the remote server may be arranged to perform the image analysis. The remote server may send the image analysis results back to the smart device.
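A minimal sketch of the joint-angle comparison just described: the angle at a joint is computed from three two-dimensional key points (here shoulder, elbow and wrist), and the ROM value is taken as the difference between the angles found in the two images. The coordinates are invented for illustration.

```python
# Joint angle from three 2-D key points, and ROM as the difference
# between the angles seen in the two images.
import math

def joint_angle(a, b, c) -> float:
    """Angle at point b (degrees) formed by points a-b-c, each (x, y)."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

# shoulder, elbow, wrist key points detected in each image (pixel coords)
first = {"shoulder": (120, 80), "elbow": (150, 200), "wrist": (180, 320)}   # arm extended
second = {"shoulder": (118, 82), "elbow": (149, 198), "wrist": (90, 110)}   # arm flexed

angle_first = joint_angle(first["shoulder"], first["elbow"], first["wrist"])
angle_second = joint_angle(second["shoulder"], second["elbow"], second["wrist"])
rom_value = abs(angle_first - angle_second)
print(f"ROM: {rom_value:.1f} degrees")
```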
Example key points include the wrist, elbow, and shoulder, all of which may be identified and monitored by the image analysis step. For example, in an arm or shoulder injury, the ROM value of the injured arm may be determined by the relative movement of the wrist, elbow, and shoulder, when in the first position and the second position.
The first position may comprise the patient with a joint in a position of full extension, and the second position may comprise the patient with the same joint in a position of full flexion. The method may comprise the step of making a video recording of the patient moving from the first position to the second position. The first image of the first position and the second image of the second position may be taken from the video recording. The first and second images may be determined automatically by an application configured to identify the first position and second position, or may be determined by a patient or other user watching the video and marking it to indicate when the patient is in the first position and second position. The method may comprise the step of, prior to taking the first image of the patient in the first position, indicating the first position to the patient. The method may comprise the step of, prior to taking the second image of the patient in the second position, indicating the second position to the patient. Such indications may be visual, for example on the screen of a smart device, or audible, for example via voice guidance emitted from a smart device. Such an arrangement may improve the ease with which the patient assumes the first position and second position. Such an arrangement may also improve the accuracy of the measurements taken by ensuring a consistent first position and second position are assumed. The ROM value and the associated PROMs data may be sent to a third party. The third party may be a doctor or physiotherapist. The third party may assess the ROM value and associated PROMs data in order to obtain an indication of the physical condition of the patient.
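Where the first and second images are selected automatically from a video, one plausible approach is to scan the frames for the extremes of the joint angle, as sketched below; detect_keypoints and joint_angle are assumed helpers (for example, wrappers around the detector and angle function sketched earlier).

```python
# A sketch of automatically picking the first and second images from a
# video: the frames with the largest and smallest joint angles stand in
# for the positions of full extension and full flexion respectively.
import cv2

def select_extremal_frames(video_path: str, detect_keypoints, joint_angle):
    cap = cv2.VideoCapture(video_path)
    best_max = (None, -1.0)    # (frame, angle) with the largest angle
    best_min = (None, 361.0)   # (frame, angle) with the smallest angle
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        kp = detect_keypoints(frame)          # e.g. {"shoulder": (x, y), ...}
        if kp is None:
            continue                          # skip frames with no person found
        angle = joint_angle(kp["shoulder"], kp["elbow"], kp["wrist"])
        if angle > best_max[1]:
            best_max = (frame.copy(), angle)
        if angle < best_min[1]:
            best_min = (frame.copy(), angle)
    cap.release()
    return best_max[0], best_min[0]           # first image, second image
```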
The steps of: collecting, from the patient, patient reported outcome measures (PROMs) data; taking a first image; taking a second image; performing an image analysis of the first image and second image such that a range of motion (ROM) value from the first position to the second position is obtained; and associating the ROM value with the PROMs data, may be repeated a number of times over an extended time period in order to collect a series of ROM values with associated PROMs data. The steps identified may be repeated on a daily, weekly, or monthly basis.
The series of ROM values with associated PROMs data may be analysed to assess any changes in the series over time. Such changes may include an increased range of motion indicated by the ROM values, and/or an improvement in patient comfort indicated by the PROMs data. The series of ROM values and associated PROMs data may be analysed to indicate whether a patient who has an injury, or has undergone surgery, is progressing in their recovery at an acceptable rate. The acceptable rate of recovery may be determined by a healthcare professional, such as a doctor or physiotherapist. The acceptable rate of recovery may alternatively be determined by comparison to a database containing reference rates of recovery, for example collected from other patients with similar conditions.
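One way such a comparison against reference rates of recovery might look in code is sketched below; the reference ROM values and tolerance are invented for illustration and would in practice come from the database of comparable patients.

```python
# A sketch of checking a patient's ROM series against reference rates
# of recovery; the reference values and tolerance are illustrative.
REFERENCE_ROM_BY_WEEK = {1: 40.0, 2: 55.0, 4: 75.0, 8: 100.0, 12: 120.0}
TOLERANCE_DEGREES = 10.0

def recovery_on_track(week: int, rom_value: float) -> bool:
    """True if the ROM value is within tolerance of the reference for that week."""
    expected = REFERENCE_ROM_BY_WEEK.get(week)
    if expected is None:
        return True  # no reference point for this week; do not flag
    return rom_value >= expected - TOLERANCE_DEGREES

print(recovery_on_track(week=4, rom_value=68.0))   # True: within 10 degrees
print(recovery_on_track(week=8, rom_value=70.0))   # False: well behind reference
```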
The method may further comprise the step of recommending one or more exercises to the patient based on the analysis of the series of ROM values and associated PROMs data over time.
The step of recommending exercises to the patient may be performed by a third party, for example a doctor or physiotherapist, based on the indications of recovery provided by the analysis of the ROM values and associated PROMs data.
For example, if the analysis of the ROM values and associated PROMs data indicates a good recovery, more challenging exercises may be recommended. Conversely, if the analysis indicates a poor, or slow, recovery, different exercises, or repetitions of existing exercises, may be recommended.
The recommendations may be sent directly to the patient or user through a smart device. Recommendations may be provided to the user in real time, for example with visual targets or markers on the first and/or second image. The recommendations may also be provided by other methods, such as audible voice instruction.
Where the analysis of the series of ROM values and associated PROMs data over time indicates a poor recovery, the method may further comprise the step of alerting a third party. The step of alerting a third party may comprise notifying a doctor or physiotherapist that a patient requires an in-person appointment or a video or telephone consultation. The method may comprise the step of automatically scheduling an in-person appointment or a video or telephone consultation. Alerting a professional third party at this stage may reduce the need for the patient to have future consultations as a result of poor recovery. The method may accelerate the recovery process safely, and save time and money spent on consultations that can be avoided if the correct recovery process is followed.
Additionally, all ROM values and associated PROMs data may be stored in a memory. The stored data may be used by the convolutional neural network when in learning mode. The convolutional neural network may utilise the information to better identify key points in a first image and second image. Storing the data may improve the image analysis step and provide a more accurate and reliable recommendation based on the current and stored ROM values and PROMs data.
The stored ROM values and associated PROMs data may be analysed for informing future analysis of different patients. The stored ROM values and associated PROMs data may be anonymised prior to analysis for informing future analysis of different patients. The method may include the step of a healthcare professional, for example a doctor or physiotherapist, associating additional data with the stored ROM values and associated PROMs data. The additional data may be patient data, such as injuries experienced, treatment given, patient characteristics, and overall patient outcome. Collecting and storing such information may improve the analysis of future patient data, recommendations of treatment/exercises advised, and overall patient outcome.
The invention provides, according to a second aspect, a system for monitoring the movement of a patient, the system comprising a camera configured to take a first image of the patient in a first position, and a second image of the patient in a second position, an input device arranged to collect PROMs data from a patient, and a processing unit, wherein the processing unit is arranged to receive the first image, the second image, and the PROMs data, and analyse the first image and second image in order to obtain a ROM value and associate the ROM value with the PROMs data.
The camera may be a camera of a smart device, for example a phone or tablet device. The camera may be a stand-alone camera, or a web-cam connected to a computer. The input device may comprise a smart device, for example the smart device which also comprises the camera. The input device may comprise a computer. The processing unit may comprise a neural network, for example a trained convolutional neural network. The processing unit may form part of the smart device which comprises the camera and/or the input device. Alternatively, the processing unit may form part of a computer device separate from the smart device. The smart device may be arranged to transmit the first image and second image to the processing unit, for example via a wireless data transmission. Various transmission protocols will be known and well understood by the skilled person.
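As an illustration of the smart device transmitting the images and PROMs data to a remote processing unit, the sketch below posts them over HTTPS; the endpoint URL, field names and response format are assumptions, as the patent does not define a transmission protocol.

```python
# A sketch of a smart device sending the two images and PROMs data to a
# remote processing unit; the endpoint and field names are hypothetical.
import requests

def send_for_analysis(first_path: str, second_path: str, pain_score: int) -> float:
    with open(first_path, "rb") as f1, open(second_path, "rb") as f2:
        response = requests.post(
            "https://example.com/api/rom-analysis",   # hypothetical endpoint
            files={"first_image": f1, "second_image": f2},
            data={"pain_score": pain_score},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["rom_value"]               # hypothetical response field
```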
The system may also comprise an output device. The output device may comprise the screen of the smart device, for example a phone screen or tablet screen. The output device may be configured to output instructions and/or feedback to a patient. For example, the smart device may be configured to show images illustrating the desired first position and second position, and/or exercises recommended to the patient based on an analysis of the ROM value and associated PROMs data. It will of course be appreciated that features described in relation to one aspect of the present invention may be incorporated into other aspects of the present invention.
Description of the Drawings
The present invention will now be described by way of example only with reference to the accompanying schematic drawings.
Figure 1 shows a method of monitoring a patient according to a first embodiment of the invention;
Figure 2 shows a system for monitoring a patient according to a second embodiment of the invention.
Detailed Description
Figure 1 shows a method 100 of monitoring a patient according to a first embodiment of the invention. The method 100 comprises the step of taking a first image of the patient in a first position 102, and taking a second image of the patient in a second position 104. The method also comprises the step of obtaining PROMs data from the patient 106. The first image and second image are analysed by a trained convolutional neural network 108 in order to determine a range of motion (ROM) value of the patient moving between the first position and second position. The ROM value is associated with the PROMs data 110, and the ROM value and associated PROMs data is analysed 112 in order to provide an indication of the condition of the patient. Based on the analysis of the ROM value and PROMs data, advice is provided 114, for example a series of exercises intended to improve or maintain the condition of the patient. The ROM value and associated PROMs data is stored for future reference 116. The method is then repeated 118 at a later time, in order to monitor the patient condition over time. The advice provided 114 may change as the patient condition changes over time, for example with more challenging exercises being recommended as patient condition improves.
The method applies image analysis using two-dimensional images. Such images may be obtained directly using a camera, smart phone or the like, or may be extracted as frames from a video recording made using a video recorder, smart phone or the like.
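Where the images are extracted from a video recording, the extraction step may be sketched as follows; the use of OpenCV and the timestamps identifying the two positions are assumptions made for illustration only.

```python
# Sketch: extract the first and second images as frames from a video of the
# movement. OpenCV is assumed; any frame-extraction tool would serve.
import cv2

def extract_frames(video_path: str, first_s: float, second_s: float):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []
    for t in (first_s, second_s):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(t * fps))
        ok, frame = cap.read()
        frames.append(frame if ok else None)
    cap.release()
    return frames[0], frames[1]  # first image, second image
```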
In one variation of the embodiment, a number of the method steps are performed automatically, for example the analysis of the ROM value and PROMs data 112, and/or the advice provided 114. The analysis of the ROM value and associated PROMs data 112 may comprise comparison to a database of ROM values and associated PROMs data. The database of ROM values and associated PROMs data may further include what advice to provide based on the ROM value and associated PROMs data. The ROM value and associated PROMs data may be stored in the database of ROM values and associated PROMs data in order to further refine the advice provided when analysing future ROM values and associated PROMs data. The database may include additional information provided by a medical practitioner regarding patient outcomes.
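A minimal sketch of such a database comparison is given below; the flat schema, the single pain score and the distance measure are all assumptions made for illustration, not features of the invention.

```python
# Sketch: match the new reading to the closest stored reading and return
# that record's advice. Schema and distance measure are illustrative only.
def advise_from_database(rom: float, pain: int, database: list) -> str:
    def distance(entry):
        return abs(entry["rom"] - rom) + abs(entry["pain"] - pain)
    return min(database, key=distance)["advice"]

db = [
    {"rom": 40.0, "pain": 7, "advice": "gentle assisted flexion"},
    {"rom": 90.0, "pain": 3, "advice": "progress to resisted exercises"},
]
print(advise_from_database(75.0, 4, db))  # -> progress to resisted exercises
```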
In an alternative variation of the embodiment, the analysis of the ROM value and PROMs data 112, and/or the advice provided 114 is undertaken by a healthcare professional, for example a consultant or physiotherapist. The ROM value and PROMs data, along with the advice provided by a healthcare professional may be stored in a database, which may then be used to automatically provide advice as described above, once the content of the database is at a sufficiently high level to provide the basis for robust and accurate advice. In one embodiment of the invention, the analysis of the ROM value and PROMs data 112 and the resulting advice provided 114 is determined by the ROM value and PROMs data indicating that a predetermined trigger point has been reached by the patient. A plurality of trigger points are determined according to known physiotherapy protocols, and in addition to the ROM value and PROMs data, may be determined by reference to factors such as time elapsed since an injury or surgery.
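By way of example only, trigger points of the kind described may be sketched as simple predicates over the ROM value, a pain score and the days elapsed since injury or surgery; the threshold values below are invented for illustration and do not reflect any particular physiotherapy protocol.

```python
# Sketch: trigger points as predicates over ROM, pain and elapsed days.
# All threshold values are illustrative, not clinical guidance.
from dataclasses import dataclass

@dataclass
class TriggerPoint:
    name: str
    min_rom: float  # degrees
    max_pain: int   # 0-10 patient-reported scale
    min_days: int   # days since injury or surgery

TRIGGERS = [
    TriggerPoint("begin strengthening", min_rom=90.0, max_pain=4, min_days=14),
    TriggerPoint("return to activity", min_rom=120.0, max_pain=2, min_days=42),
]

def triggers_reached(rom: float, pain: int, days: int) -> list:
    return [t.name for t in TRIGGERS
            if rom >= t.min_rom and pain <= t.max_pain and days >= t.min_days]
```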
In a further alternative embodiment of the invention, the analysis of the first image and second image may comprise a user manually identifying key points in the first image and second image, and a computer program analysing those key points to calculate the ROM value.
Figure 2 shows a visual representation of a system for monitoring a patient.
The system comprises a smart device 10 in the form of a smart phone. The smart phone includes an integrated camera 12. The camera 12 is used by a patient, or potentially someone assisting the patient, to take a first image 14 of the patient in a first position 24, and a second image 16 of the patient in a second position 26. In this embodiment, the area of interest is the arm of a patient, and the first position 24 comprises the arm held at full extension, and the second position 26 comprises the arm held at full flexion. The first image 14 and second image 16 are sent to a remote processor 50 where a trained convolutional neural network analyses the images, first to detect key points 15 common to both images, and then to calculate a ROM value based on the relative movement of the key points between the first image 14 and the second image 16. The analysis of the images comprises analysing the key points 15 in the first image 14 to detect a joint angle for the first image, analysing the key points 15 in the second image to detect a joint angle for the second image, and determining the ROM value by examining the difference between the joint angle of the first image 14 and joint angle of the second image 16. In an alternative embodiment, the joint angles may be calculated after a manual analysis of the images, with a medical practitioner or other user identifying the key points in the first image 14 and second image 16. A memory 52 may be linked to the remote processor 50, the memory 52 arranged to store the first image 14, the second image 16, the joint angles and ROM value, the PROMs data, and a database of reference ROM values and associated PROMs data. The ROM value is sent back to the smart device 10, and is also sent to the computer or smart device 20 of a healthcare professional, for example a doctor or physiotherapist. The first image 14 and second image 16 may also be sent to the healthcare professional along with the ROM value and associated PROMs data. The skilled person will appreciate that a cloud based system may be used, rather than requiring the actual data to be sent to the smart device or computer of a doctor or physiotherapist. In such an arrangement, a notification may be sent to the smart device or computer in order to prompt the doctor or physiotherapist to access the cloud stored data.
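The joint-angle calculation described above may be sketched as follows, assuming the key points 15 are two-dimensional pixel coordinates of, for example, the shoulder, elbow and wrist; the angle at the middle key point is the angle between the two limb vectors, and the ROM value is the difference between the joint angles of the two images.

```python
# Sketch of the joint-angle step; key points are assumed 2D pixel coordinates.
import math

def joint_angle(a, b, c):
    """Angle at key point b (degrees) formed by key points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    cosang = max(-1.0, min(1.0, dot / norm))  # clamp against float error
    return math.degrees(math.acos(cosang))

def rom_value(points_first, points_second):
    """ROM as the difference between the joint angles of the two images."""
    return abs(joint_angle(*points_first) - joint_angle(*points_second))

# e.g. shoulder, elbow, wrist pixel coordinates in each image:
extension = [(100, 100), (200, 100), (300, 100)]  # arm straight, ~180 degrees
flexion = [(100, 100), (200, 100), (200, 10)]     # elbow bent, ~90 degrees
print(rom_value(extension, flexion))              # ~90.0 degrees
```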
The series of key points is detected automatically by the neural network system based on previous learning undertaken by the neural network. The underlying principles governing the construction and training of a neural network will be well understood by the skilled person; as such, no further description is required.
The smart device 10 is arranged to prompt the patient to input PROMs data at the same time as the first image 14 and second image 16 are taken. Alternative embodiments include the smart device 10 prompting the patient to input PROMs data at different times, for example on a daily or weekly basis, and may request immediate or reflective pain scores. The PROMs data will include an indication of the pain level of the patient, amongst other things. The PROMs data is associated with the ROM value obtained by the analysis of the first image 14 and second image 16, and sent to the computer or smart device 20 of the healthcare professional. The healthcare professional analyses the ROM value and associated PROMs data, and outputs a recommended series of exercises. The series of exercises are sent to the smart device 10 of the patient. The smart device 10 is configured to instruct the patient to perform those exercises, and includes instructional videos and images in order to facilitate the correct performance of those exercises. The smart device 10 is also configured to prompt the patient to record completion of the recommended exercises, including number of sets and/or repetitions of exercises undertaken. That record may be sent to the healthcare professional along with the ROM value and associated PROMs data, to fully inform the healthcare professional and ensure that the patient is completing their recommended exercises. The record may also be stored to inform future learning and improvements of the system. Improved compliance may improve patient recovery times when recovering from injury, or ensure there is no deterioration in a patient where a chronic condition is being monitored. In an alternative embodiment, the ROM value and associated PROMs data are compared to a predetermined physiotherapy protocol, and exercises recommended based on the level of recovery indicated by the ROM value and associated PROMs data. The ROM value and associated PROMs data may still be sent to a healthcare professional, but for monitoring purposes rather than for the healthcare professional to actively recommend individual exercise plans.
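Purely to illustrate the association of the ROM value with the PROMs data and exercise record, the record sent to the healthcare professional might resemble the following; every field name and value here is hypothetical.

```python
# Sketch of an associated record as transmitted to the clinician's device;
# field names and values are illustrative only.
import json
from datetime import date

record = {
    "patient_id": "anon-001",  # hypothetical identifier
    "date": date.today().isoformat(),
    "rom_degrees": 92.5,
    "proms": {"pain": 3, "stiffness": 2, "confidence": 4},
    "exercises_completed": [{"name": "wall slide", "sets": 3, "reps": 10}],
}
payload = json.dumps(record)   # e.g. sent over a wireless data transmission
```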
The smart device 10 is arranged to prompt the patient to repeat the steps of taking a first image and second image at regular intervals, for example daily or weekly, to further improve patient monitoring and potential outcomes. The smart device 10 may be arranged to provide guidance to a patient when taking the first image 14 and second image 16. For example, the smart device 10 may show a series of images or a video correctly demonstrating the first position 24 and second position 26.
The training of the trained convolutional neural network will provide the computer program or app with a threshold for the alignment and quality of the images being analysed. Problems may arise when the images are misaligned, blurred, or too small or too large within the frame. In the event of a threshold error value being exceeded, the smart device 10 is configured to prompt the patient to retake the first image and second image.
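One possible sketch of such a quality gate follows; the variance-of-Laplacian blur heuristic and the threshold values are assumptions made for illustration, whereas in the embodiment described the threshold derives from the training of the network.

```python
# Sketch: reject images that are too small or too blurred and prompt a retake.
# The blur heuristic (variance of the Laplacian) and thresholds are assumed.
import cv2

BLUR_THRESHOLD = 100.0  # below this, treat the image as blurred
MIN_SIDE_PX = 480       # reject images smaller than this on either side

def needs_retake(image) -> bool:
    h, w = image.shape[:2]
    if min(h, w) < MIN_SIDE_PX:
        return True
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(grey, cv2.CV_64F).var()
    return sharpness < BLUR_THRESHOLD
```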
The non-contact nature of the method of the present invention provides the advantage that the method may be carried out remotely, without the need to apply devices or sensors or the like to the body of the patient to track their movement, or for hardware such as depth sensing motion capture systems to be used.
Whilst the present invention has been described and illustrated with reference to particular embodiments, it will be appreciated by those of ordinary skill in the art that the invention lends itself to many different variations not specifically illustrated herein. By way of example only, certain possible variations will now be described. The patient may be recovering from an operation which has resulted in the reduction of a range of motion for the body part in question, for example an elbow or knee. The method and system may be used to promote the recovery of the patient and also spot any patients that are recovering more slowly than normal and require extra attention from a medical professional. Alternatively, the patient may simply have injured a body part and the monitoring process may be to ensure that recovery is optimised. In a further alternative, the patient may be an athlete or sportsperson who is not injured, but looking to improve the range of motion of a body part. For example, the method may allow the monitoring of an athlete following a stretching program intended to increase the range of motion of their shoulder joints. The method may be applied where the patient is an animal, for example a horse. Where the method is applied to an animal, the PROMs data will be estimated and input by a person observing the behaviour of the animal. A smart device with a camera is described above. In some embodiments, the camera may detect depth information, for example using depth sensors or LIDAR. The embodiments described above reference taking a first image and second image, each image being taken as discrete individual images. In an alternative embodiment, the images may be taken from a video of the patient moving from the first position to the second position.
Where in the foregoing description, integers or elements are mentioned which have known, obvious or foreseeable equivalents, then such equivalents are herein incorporated as if individually set forth. Reference should be made to the claims for determining the true scope of the present invention, which should be construed so as to encompass any such equivalents. It will also be appreciated by the reader that integers or features of the invention that are described as preferable, advantageous, convenient or the like are optional and do not limit the scope of the independent claims. Moreover, it is to be understood that such optional integers or features, whilst of possible benefit in some embodiments of the invention, may not be desirable, and may therefore be absent, in other embodiments.

Claims

1. A contact-free method of monitoring a patient comprising the steps of: collecting, from the patient, patient reported outcome measures (PROMs); taking a first image of the patient in a first position; taking a second image of the patient in a second, different, position; performing an image analysis of the first image and second image such that a range of motion (ROM) value from the first position to the second position is obtained; and associating the ROM value with the PROMs data.
2. A method as claimed in claim 1, wherein the image analysis step utilises a neural network.
3. A method as claimed in claim 1 or claim 2, wherein the image analysis step utilises a trained convolutional neural network.
4. A method as claimed in any preceding claim, wherein the image analysis step comprises identifying one or more key points in the first image and second image, detecting those key points in the first image and second image, and contrasting the position of those key points in the first image and second image, in order to determine the range of motion value of the patient moving from the first position to second position.
5. A method as claimed in any preceding claim, wherein the first position comprises the patient with a joint in a position of full extension, and the second position comprises the patient with the same joint in a position of full flexion.
6. A method as claimed in any preceding claim, wherein the ROM value and the associated PROMs data is sent to a third party.
7. A method as claimed in any preceding claim, wherein the steps of: collecting, from the patient, patient reported outcome measures (PROMs); taking a first image, taking a second image, performing an image analysis of the first image and second image such that a range of motion (ROM) value from the first position to the second position is obtained; and associating the ROM value with the PROMs data; are repeated a number of times over an extended time period in order to collect a series of ROM values with associated PROMs data.
8. A method as claimed in claim 7, wherein the series of ROM values with associated PROMs data is analysed to assess any changes in the series over time.
9. A method as claimed in claim 8, further comprising the step of, recommending one or more exercises to the patient based on the analysis of the series of ROM values and associated PROMs data over time.
10. A method as claimed in claim 8, wherein, in response to the analysis of the series of ROM values and associated PROMs data over time indicating a poor recovery, the method comprises the step of alerting a third party.
11. A method as claimed in any preceding claim, wherein all ROM values and associated PROMs data are stored in a memory.
12. A method as claimed in claim 11, wherein the stored ROM values and associated PROMs data may be analysed to inform future analysis of different patients.
13. A system for monitoring the movement of a patient, the system comprising a camera configured to take a first image of the patient in a first position, and a second image of the patient in a second position, an input device arranged to collect PROMs data from a patient, and a processing unit, wherein the processing unit is arranged to receive the first image, the second image, and the PROMs data, and analyse the first image and second image in order to obtain a ROM value and associate the ROM value with the PROMs data.
14. A system as claimed in claim 13, wherein the camera and input device comprise parts of the same smart device.
15. A method according to any one of claims 1 to 12 or a system according to claim 13 or claim 14, wherein the images are two-dimensional images.