CN112420143A - Systems, methods, and apparatus for providing personalized healthcare - Google Patents


Info

Publication number
CN112420143A
CN112420143A (application No. CN202011346991.0A)
Authority
CN
China
Prior art keywords
patient
medical
image
images
personalized
Prior art date
Legal status
Granted
Application number
CN202011346991.0A
Other languages
Chinese (zh)
Other versions
CN112420143B (en)
Inventor
Srikrishna Karanam
Ziyan Wu
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Priority claimed from U.S. patent application No. 16/814,373 (granted as US11430564B2)
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Publication of CN112420143A
Application granted
Publication of CN112420143B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A patient's healthcare experience may be enhanced with a system that automatically identifies the patient based on one or more images of the patient and generates personalized healthcare assistance information for the patient based on electronic medical records stored for the patient. Such electronic medical records may include image data and/or non-image data associated with a medical procedure performed or to be performed on the patient. The image and/or non-image data may be incorporated into the personalized medical assistance information to provide positioning guidance and/or other types of diagnostic or therapeutic guidance to the patient or a service provider.

Description

Systems, methods, and apparatus for providing personalized healthcare
Cross Reference to Related Applications
This application claims the benefit of provisional U.S. patent application No. 62/941,203, filed on November 27, 2019, and U.S. patent application No. 16/814,373, filed on March 10, 2020, the disclosures of which are incorporated herein by reference in their entirety.
Technical Field
The present application relates to the field of healthcare services.
Background
Medical diagnosis and treatment are personal in nature and often require customized instructions or guidance for each patient. For example, in radiotherapy and medical imaging (e.g., X-ray imaging, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET)), success depends largely on the ability to hold a patient in a desired pose according to the patient's physical characteristics so that a scan or treatment can be performed in an accurate and precise manner. Conventional positioning techniques typically require manual adjustment of the patient's position, placement of markers on or near the patient's body, or simulation to determine the patient's optimal operating parameters and/or conditions. These techniques are not only cumbersome, but also lack accuracy, consistency, and real-time monitoring capabilities.
Meanwhile, medical facilities such as hospitals often hold large numbers of medical records relating to patients' diagnoses, treatment plans, scanned images, and the like. These medical records can provide valuable insight into a patient's medical history as well as ways to enhance the patient's healthcare experience. It is therefore highly desirable to utilize these medical records to personalize the way in which healthcare services are provided. Further, in view of the unique environment associated with medical facilities, it is also important to provide these personalized services in an accurate, secure, and automated manner to minimize the risk of human error, cross-contamination, privacy breaches, and the like.
Disclosure of Invention
Systems, methods, and instrumentalities are described herein for providing personalized healthcare services to patients. In an example, such a system can include one or more repositories configured to store electronic medical records of a patient. The electronic medical record may include image data and/or non-image data associated with a medical procedure performed or to be performed on a patient. The image and/or non-image data may be retrieved by a processing unit of the system and used for generating personalized medical assistance information relating to the patient. For example, the processing unit may be configured to receive one or more images of a patient and extract one or more features from the images that are representative of physiological characteristics of the patient. Based on at least one of these extracted features, the processing unit may determine the identity of the patient and retrieve image and/or non-image data from one or more repositories. The personalized medical assistance information thus created may include parameters associated with the medical procedure (e.g., medical imaging parameters or operating parameters of the medical device), positioning information related to the medical procedure, and/or overlapping scanned images and pictures of the patient that show a diagnosis or treatment history of the patient. The personalized medical assistance information may be presented to the patient or service provider via a display device to assist the patient or service provider during the healthcare service.
The images described herein may be photographs of the patient taken by a camera, thermal images of the patient generated by a thermal sensor, and the like. Features extracted from these images may be matched against a set of known features of the patient stored in a feature database. The features may also be processed by a neural network trained for visual recognition. Further, the image data stored in the repository may include a depiction of an incorrect position or pose for the medical procedure, and the personalized medical assistance information may include instructions on how to avoid the incorrect position or pose. Overlapping scan images and pictures of the patient may be generated by determining the respective scan position or pose associated with each scan image and aligning the image with a picture of the patient in a substantially similar position or pose. The resulting representation may be suitable for display in an Augmented Reality (AR) environment to enhance the experience of the patient or service provider.
The present application provides a system for providing personalized healthcare services, comprising: one or more repositories configured to store electronic medical records of a patient, the electronic medical records including image data and/or non-image data associated with a medical procedure performed or to be performed for the patient; and a processing unit configured to: receive one or more images of the patient; extract one or more features from the one or more images that are representative of a physiological characteristic of the patient; determine an identity of the patient based on at least one of the extracted features; in response to determining the identity of the patient, retrieve the image and/or non-image data from the one or more repositories; and generate personalized medical assistance information relating to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information comprises at least one parameter associated with the medical procedure, or a position or pose of the patient for the medical procedure.
Wherein the one or more images comprise a photograph of the patient taken by a camera or a thermal image of the patient generated by a thermal sensor.
Wherein the characteristic of the patient comprises a walking pattern of the patient.
Wherein the processing unit configured to determine the identity of the patient based on at least one of the extracted features comprises: the processing unit is configured to match the at least one of the extracted features with known features of the patient stored in a feature database.
Wherein the processing unit is configured to determine the identity of the patient based on at least one of the extracted features using a neural network, the neural network being trained for visual recognition.
Wherein the personalized medical assistance information comprises instructions on how to arrive at the position or pose for the medical procedure.
Wherein the image data comprises a depiction of an incorrect position or pose of the medical procedure and the personalized medical assistance information comprises instructions on how to avoid the incorrect position or pose.
Wherein the image data comprises one or more scan images of the patient related to the medical procedure, each of the one or more scan images being associated with a scan position or pose of the patient, and the processing unit is further configured to: align each of the one or more scan images of the patient with a picture of the patient depicting the patient in a position or pose substantially similar to the scan position or pose associated with the scan image; generate visual representations of pairs of aligned scan images and pictures of the patient by at least overlapping the picture of the patient with the scan image; and include the visual representations in the personalized medical assistance information.
Wherein the processing unit is further configured to determine which one or more of the extracted features is to be used to determine the identity of the patient based on user input or configuration.
Wherein the parameters associated with the medical procedure include medical imaging parameters associated with the medical procedure or operating parameters of a medical device.
The present application further provides a method for providing personalized healthcare services, the method comprising: receiving one or more images of a patient; extracting one or more features from the one or more images that are representative of a physiological characteristic of the patient; determining an identity of the patient based on at least one of the extracted features; in response to determining the identity of the patient, retrieving image and/or non-image data from one or more repositories, the image and/or non-image data associated with a medical procedure performed or to be performed for the patient; generating personalized medical assistance information relating to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure; and presenting the personalized medical assistance information on a display device.
Wherein the one or more images comprise a photograph of the patient taken by a camera or a thermal image of the patient generated by a thermal sensor.
Wherein determining the identity of the patient based on the at least one of the extracted features comprises: matching the at least one of the extracted features with known features of the patient stored in a feature database.
Wherein the identity of the patient is determined based on the at least one of the extracted features using a neural network trained for visual recognition.
Wherein the personalized medical assistance information comprises instructions on how to adjust to reach the position or pose for the medical procedure, and presenting the personalized medical assistance information on the display device comprises: presenting a visual depiction of the instruction on the display device.
Wherein the image data comprises one or more scan images of the patient relating to the medical procedure, each of the one or more scan images being associated with a scan position or pose of the patient, and the method further comprises: aligning each of the one or more scan images of the patient with a picture of the patient depicting the patient in a position or pose substantially similar to the scan position or pose associated with the scan image; generating visual representations of pairs of aligned scan images and pictures of the patient by at least overlapping the picture of the patient with the scan image; and including the visual representation in the personalized medical assistance information.
The present application also provides an apparatus for providing personalized healthcare services, comprising: a processing unit configured to: receiving one or more images of the patient; extracting one or more features from the one or more images that are representative of a physiological characteristic of the patient; determining an identity of the patient based on at least one of the extracted features; in response to determining the identity of the patient, retrieving image and/or non-image data from one or more repositories, the image and/or non-image data relating to a medical procedure performed or to be performed for the patient; generating personalized medical assistance information relating to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure; and presenting the personalized medical assistance information.
Drawings
Examples disclosed herein may be understood in more detail from the following description, given by way of example in conjunction with the accompanying drawings.
FIG. 1 is a simplified diagram illustrating an example system for providing personalized healthcare services described herein.
FIG. 2 is a simplified block diagram illustrating an example processing unit described herein.
Fig. 3a and 3b are diagrams of example Graphical User Interfaces (GUIs) for providing personalized medical assistance information to a patient or service provider.
FIG. 4 is a flow diagram illustrating a method that may be implemented by the personalized healthcare system depicted in FIG. 1.
Detailed Description
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
Fig. 1 is a diagram of an example system 100 for providing personalized healthcare services at a medical facility, such as a hospital. These healthcare services may include, for example, imaging procedures via the scanner 102 (e.g., CT scanner, MRI machine, PET scanner, etc.) or radiation treatments delivered by a medical linear accelerator (LINAC) (not shown). Such services may require precise knowledge of the patient's anatomical characteristics and/or require the patient to stay in a particular position or posture to enhance the accuracy and/or efficiency of the scanning or treatment. For example, proper positioning of the patient may ensure that the target scan area of the patient is adequately and clearly captured, and/or that the patient is not exposed to unnecessary radiation during treatment. At the same time, the medical professional planning or performing the medical procedure may also desire to access the patient's personal medical information in order to obtain an accurate assessment of the patient's condition or to design an appropriate plan or protocol for the procedure.
The system 100 can facilitate the provision of the personalized services described above by automatically identifying the recipient of the service (e.g., the patient 104) and constructing a medical profile of the patient based on the patient's historical medical records. For example, the system 100 may include a sensing device 106 (e.g., an image capture device) configured to capture images of a patient in or around a medical facility. The sensing device 106 may include one or more sensors, such as a camera, red, green, and blue (RGB) sensors, depth sensors, thermal sensors, and/or far infrared (FIR) or near infrared (NIR) sensors, configured to detect the presence of a patient and generate an image of the patient in response. Depending on the type of sensing device used, the image may be, for example, a picture of the patient taken by a camera or a thermal image of the patient generated by a thermal sensor. The sensing device 106 may be mounted at various locations in a medical facility, such as inside a treatment room, above a doorway, on an imaging device, and so forth. Alternatively or additionally, the sensing device 106 may include a scanner configured to obtain an image of the patient based on an existing photograph of the patient (e.g., a driver's license presented by the patient during enrollment).
The patient image produced by the sensing device 106 may represent one or more characteristics of the patient. Such characteristics may include, for example, facial features of the patient, body contours of the patient, walking patterns of the patient, and so forth. As explained in more detail below, the processing device may identify these characteristics of the patient based on features extracted from the images.
The system 100 may include an interface unit 108 configured to receive patient images generated by the sensing device 106. The interface unit 108 may be communicatively coupled to the sensing device 106, such as by a wired or wireless communication link. The interface unit 108 may be configured to retrieve or receive images from the sensing device 106 periodically (e.g., once per minute, according to a schedule, etc.), or the interface unit 108 may be configured to receive a notification from the sensing device 106 when an image has been generated and retrieve an image from the sensing device in response to receiving the notification. The sensing device 106 may also be configured to send the image to the interface unit 108 without first sending a notification.
The interface unit 108 may operate as a preprocessor for images received from the sensing device 106. For example, the interface unit 108 may be configured to reject poor-quality images or convert received images into an appropriate format so that they may be further processed by downstream components of the system 100. The interface unit 108 may also be configured to prepare the images in a manner that reduces the complexity of downstream processing. Such preparation may include, for example, converting a color image to grayscale, resizing the image to a uniform size, and so forth. Further, although the interface unit 108 is shown in fig. 1 as being separate from the other components of the system 100, the interface unit 108 may also be part of another component. For example, the interface unit 108 may be included in the sensing device 106 or in the processing unit 110 without affecting the functionality of the interface unit 108 described herein.
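The preprocessing steps just described (grayscale conversion and resizing to a uniform size) can be sketched in plain Python. This is a minimal illustration, not the patent's implementation: the function names, the ITU-R BT.601 luma weights, and the nearest-neighbor resampling strategy are all assumptions made here for clarity.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to grayscale
    using the common ITU-R BT.601 luma weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def resize_nearest(image, out_h, out_w):
    """Resize a 2-D grayscale image to (out_h, out_w) with
    nearest-neighbor sampling, so every input reaches a uniform size
    before downstream processing."""
    in_h, in_w = len(image), len(image[0])
    return [[image[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]
```

In practice an interface unit would use an optimized image library rather than nested lists, but the two operations reduce downstream complexity in exactly this way: fewer channels and a fixed input shape.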
The patient images produced by the sensing device 106 and/or the interface unit 108 may be used to establish a medical profile of the patient, for example, automatically upon detection of the patient at a medical facility or in a treatment room. The manual operations involved in the process can thus be minimized, reducing the risk of human error, unnecessary exposure to contamination or radiation, and the like. The speed of service can also be improved.
The system 100 may include a processing unit 110 that can provide the improvements described above. The processing unit 110 may be communicatively coupled to the sensing device 106 and/or the interface unit 108 to receive images of the patient. The processing unit 110 may be configured to extract features from an image representing physiological characteristics of a patient and compare the extracted features to a set of known features of the patient to determine the identity of the patient. Alternatively or additionally, the processing unit 110 may utilize an artificial neural network trained to take as input an image of the patient and produce an output indicative of the patient's identity. Such a neural network may be a Convolutional Neural Network (CNN) comprising a cascade of layers, each layer being trained to make pattern-matching decisions based on a respective level of abstraction of the visual characteristics contained in the image. The CNN may be trained using various data sets and loss functions, such that it becomes able to extract features from the input image (e.g., in the form of feature vectors), determine whether the features match features of known people, and indicate the match results at the output of the network. Example embodiments of neural networks and patient identification are described in more detail below.
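The matching step described above, comparing an extracted feature vector against a gallery of known patients, can be illustrated with a cosine-similarity search. This is a sketch under stated assumptions: the embedding would come from a trained CNN (elided here), and the function names, gallery layout, and the 0.8 acceptance threshold are hypothetical choices, not values from the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_patient(embedding, gallery, threshold=0.8):
    """Match an extracted feature vector against known patient
    embeddings; return the best-matching patient ID, or None if no
    candidate clears the (illustrative) threshold."""
    best_id, best_score = None, threshold
    for patient_id, known in gallery.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_id, best_score = patient_id, score
    return best_id
```

Thresholding matters here: returning None for a weak match lets the system fall back to manual identification rather than retrieving the wrong patient's records.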
The system 100 can also include at least one repository 112 (e.g., one or more repositories or databases) configured to store patient medical information (e.g., medical records). These medical records may include general patient information (e.g., patient ID, name, electronic and physical address, insurance, etc.), non-image medical data associated with a medical procedure performed or to be performed on the patient (e.g., the patient's diagnostic history, treatments the patient has received, scanning protocols, medical metadata, etc.), and/or image data associated with the medical procedure. In an example, the image data may include scan images of the patient (e.g., MRI, CT scan, X-ray, ultrasound, etc.), visual representations of positions or poses adopted by the patient during these scans (e.g., correct or incorrect positions or poses), visual representations of adjustments made by the patient to enter a corrected position or pose, and so forth.
Medical records may be stored in a structured manner (e.g., arranged in a format or pattern) in the repository 112. Medical records can be collected from a number of sources including, for example, hospitals, doctor's offices, insurance companies, and the like. The medical records may be collected and/or organized by the system 100, another system at a medical facility, or a different organization (e.g., the medical records may exist independently of the system 100). The collection and/or organization of medical records can be performed in an offline manner, or can occur while repository 112 is being actively accessed (e.g., online) by other systems or applications.
Repository 112 may be hosted on one or more database servers coupled to processing unit 110 via a wired or wireless communication link (e.g., a private computer network, a public computer network, a cellular network, a service cloud, etc.). The wired or wireless communication link may be protected via encryption, Virtual Private Network (VPN), Secure Sockets Layer (SSL), etc., to secure the medical information stored therein. The repository 112 may also utilize a distributed architecture, such as one established using a blockchain technique.
The medical records stored in the repository 112 can be used to personalize the healthcare services provided to the patient. For instance, in response to identifying a patient based on one or more images of the patient, the processing unit 110 can retrieve all or a subset of the patient's medical records, including the image and/or non-image data described above, from the repository 112 and use that information to generate personalized medical assistance information (e.g., a medical profile) for the patient. The personalized medical assistance information may include, for example, a procedure to be performed by the patient and historical data associated with the procedure, such as scanned images of the patient from similar procedures performed in the past, positions or poses assumed by the patient during the procedure, adjustments or corrections to bring the patient into a desired or correct position or pose, and so forth. The personalized medical assistance information may also include one or more parameters associated with the procedure, such as imaging parameters (e.g., image size, voxel size, repetition time, etc.) and/or operating parameters of the medical device used in the procedure (e.g., height, orientation, power, dose, etc.). Such information may provide guidance and insight to the patient as to what may be needed for the upcoming procedure (e.g., in terms of positioning).
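One way to picture the assembly step described above is a function that filters the retrieved records down to a given procedure and collects prior scans, positioning notes, and procedure parameters into a profile. The record field names (`procedure`, `scan_id`, `pose_note`, `params`) are illustrative assumptions for this sketch only; a real system would work against the repository's actual schema.

```python
def build_profile(records, procedure):
    """Assemble a minimal personalized-assistance profile for one
    procedure from a patient's retrieved medical records.
    All field names here are hypothetical."""
    history = [r for r in records if r.get("procedure") == procedure]
    return {
        "procedure": procedure,
        # Scan images from similar procedures performed in the past.
        "prior_scans": [r["scan_id"] for r in history if "scan_id" in r],
        # Positioning guidance recorded during those procedures.
        "positioning": [r["pose_note"] for r in history if "pose_note" in r],
        # Imaging/operating parameters (first match wins in this sketch).
        "parameters": next((r["params"] for r in history if "params" in r), {}),
    }
```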
The medical profiles described herein may also be used to assist medical professionals in providing personalized services to patients. For example, the personalized medical assistance information described herein may include a diagnosis or treatment history of a patient that a medical professional may use to assess the patient's condition. The diagnosis or treatment history may include previous scan images of the patient taken at different times. Each scan image may be characterized by at least one positioning parameter that is indicative of the position or pose of the patient during the scan. For example, the positioning parameters may be extracted from metadata associated with each scan image. When generating the personalized medical assistance information described herein, the processing unit 110 may align these scan images of the patient with pictures or models (e.g., 3D mesh models) of the patient depicting the patient in a position or pose substantially similar to the position or pose the patient assumed when the scan images were created. In an example, a picture may be captured by an image capture device, such as sensing device 106, and a model may be built based on the picture (e.g., a human model derived from a 2D image using neural network and/or parametric modeling techniques). The processing unit 110 may then generate a visual representation of each pair of aligned picture (or model) and scan image, where the picture (or model) of the patient overlaps the scan image of the patient. The visual representation thus generated can demonstrate the change (or lack thereof) of the affected area of the patient over time, and does so with a high level of accuracy, as the individual scan images are shown against a background that contains a depiction of the patient in a position or pose similar to the scan position or pose.
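The pairing-and-overlap step above can be sketched as follows: for each scan, pick the patient picture whose positioning parameter is closest, then alpha-blend the two. This simplifies heavily under stated assumptions: the positioning parameter is reduced to a single scalar, images are same-sized grayscale pixel grids already registered to each other, and the dictionary keys and 50/50 blend are illustrative choices.

```python
def align_and_overlay(scans, pictures, alpha=0.5):
    """For each scan image (tagged with a scalar 'pose' parameter),
    select the patient picture with the nearest pose, then produce an
    alpha-blended overlay of the pair. A real system would align full
    position/pose descriptions and register the images spatially."""
    pairs = []
    for scan in scans:
        # Nearest-pose picture: the depiction most similar to the
        # position or pose the patient held during this scan.
        pic = min(pictures, key=lambda p: abs(p["pose"] - scan["pose"]))
        blended = [[round(alpha * s + (1 - alpha) * p)
                    for s, p in zip(srow, prow)]
                   for srow, prow in zip(scan["pixels"], pic["pixels"])]
        pairs.append({"pose": scan["pose"], "pixels": blended})
    return pairs
```

Blending against a pose-matched background is what lets a reviewer judge change over time: differences in the overlay reflect the affected area rather than a difference in how the patient was positioned.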
Some or all of the personalized medical assistance information (e.g., medical profile) described above may be visually presented to the patient or medical professional via display device 114. Display device 114 may include one or more monitors (e.g., a computer monitor, a TV monitor, a tablet, a mobile device such as a smartphone, etc.), one or more speakers, one or more Augmented Reality (AR) devices (e.g., AR goggles), and/or other accessories configured to facilitate visual representations. The display device 114 may be communicatively coupled to the processing unit 110 (e.g., via a wired or wireless communication link) and configured to display personalized medical assistance information generated by the processing unit 110. As described herein, such personalized medical assistance information may include basic patient information, a desired configuration of an upcoming medical procedure (e.g., according to a corresponding scan protocol designed for the patient), scan images previously taken for the patient, the position or pose of the patient during these scans, adjustments or corrections to bring the patient into a desired scan position or pose, overlapping scan images and pictures (or models) of the patient, and so forth. The personalized medical assistance information may be displayed in various formats including, for example, video, animation, and/or AR presentation. For example, an overlapping representation of a scanned image and picture of a patient may be displayed in an AR environment where a physician equipped with AR glasses and/or AR input devices may slide over the representation in a stereoscopic manner.
Fig. 2 is a simplified block diagram illustrating an example processing unit 200 (e.g., processing unit 110) described herein. Processing unit 200 may operate as a standalone device or may be connected (e.g., networked or clustered) with other computing devices to perform the functions described herein. In an example networked deployment, the processing unit 200 may operate in the capacity of a server or a client device in a server-client network environment, or it may act as a peer device in a peer-to-peer (or distributed) network environment. Further, while only a single unit is illustrated in fig. 2, the term "processing unit" should be taken to potentially include multiple units or machines that individually or jointly execute a set of instructions to perform any one or more of the functions discussed herein. Multiple units or machines may belong to a single location or to multiple locations, for example, under a distributed computing architecture.
Processing unit 200 may include at least one processor (e.g., one or more processors) 202, which in turn may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a microcontroller, a Reduced Instruction Set Computer (RISC) processor, an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction Set Processor (ASIP), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or any other circuit or processor capable of performing the functions described herein. The processing unit 200 may also include communication circuitry 204, memory 206, mass storage 208, and/or input devices 210. The communication circuitry 204 may be configured to send and receive information using one or more communication protocols (e.g., TCP/IP) and one or more communication networks, including a Local Area Network (LAN), a Wide Area Network (WAN), the internet, or a wireless data network (e.g., a Wi-Fi, 3G, 4G/LTE, or 5G network). The memory 206 may include a machine-readable medium configured to store instructions that, when executed, cause the processor 202 to perform one or more of the functions described herein. Examples of a machine-readable medium may include volatile or non-volatile memory, including but not limited to semiconductor memory (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)), flash memory, and so forth. Mass storage device 208 may include one or more magnetic disks, such as an internal hard disk, a removable disk, a magneto-optical disk, a CD-ROM or DVD-ROM disk, etc., on which instructions and/or data may be stored to facilitate performing the functions described herein. Input device 210 may include a keyboard, mouse, voice-controlled input device, touch-sensitive input device (e.g., touch screen), etc., for providing user input to the processing unit 200.
In an example embodiment, the processing unit 200 may include or may be coupled to a feature database 212 configured to store images of a patient and/or visual representations of one or more characteristics of the patient (e.g., known features of the patient). The images and/or visual representations may be prepared (e.g., pre-computed) and stored in the feature database 212 based on image data of the patient collected from various sources, including, for example, pictures taken during the patient's past visits to a medical facility, a repository storing medical records of the patient (e.g., repository 112 shown in fig. 1), a public photo ID database (e.g., a driver's license database), and so forth. The database 212 may be communicatively coupled to the processor 202 and used by the processor to identify a patient based on patient images obtained by a sensing device (e.g., the sensing device 106 of fig. 1). For example, the processor 202 may be configured to receive an image of a patient from a sensing device and extract a set of features from the image that represent physiological characteristics of the patient. The processor 202 may also be configured to match at least one of the extracted features with image data (e.g., known features of the patient) stored in the feature database 212 to determine the identity of the patient.
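The matching step described above may be illustrated by the following sketch, which compares a query feature vector against precomputed vectors in a feature database using cosine similarity. This is a minimal illustration only, not part of the disclosed embodiments; the database contents, function name, and similarity threshold are all hypothetical.

```python
import numpy as np

# Hypothetical feature database: patient ID -> precomputed feature vector
FEATURE_DB = {
    "patient-001": np.array([0.9, 0.1, 0.3]),
    "patient-002": np.array([0.2, 0.8, 0.5]),
}

def identify(query, db=FEATURE_DB, threshold=0.9):
    """Return (patient_id, similarity) for the best match above the
    threshold, or (None, best_similarity) if no stored feature matches."""
    best_id, best_sim = None, -1.0
    q = query / np.linalg.norm(query)  # normalize the query vector
    for pid, feat in db.items():
        # cosine similarity between query and stored feature
        sim = float(q @ (feat / np.linalg.norm(feat)))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```

A query vector close to a stored feature identifies the corresponding patient; a dissimilar query yields no match, which a system such as processing unit 200 could treat as an unknown patient.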
The features and/or characteristics described herein may be associated with various attributes of the patient, such as body contour, height, facial features, walking patterns, posture, and the like. In the context of digital images, these features or characteristics may correspond to structures in the image, such as points, edges, objects, and the like. Various techniques may be employed to extract these features from the image. For example, one or more keypoints associated with a feature may be identified, including points at which the direction of an object boundary changes abruptly, intersections between two or more edge segments, and so on. These keypoints may be characterized by well-defined locations in image space and/or stability under illumination/brightness perturbations. Accordingly, keypoints may be identified based on image derivatives, edge detection, curvature analysis, etc.
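Keypoint identification based on image derivatives, as mentioned above, may be sketched with a Harris-style corner response: locations where the local gradient structure varies in two directions (corners) score higher than edges or flat regions. This is an illustrative sketch only; the function name, window size, and parameter `k` are assumptions, not part of the disclosure.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Compute a Harris-style corner response from image derivatives.
    High positive values indicate corner-like keypoints."""
    # first-order image derivatives via finite differences
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # sum over a 3x3 neighborhood (simple box filter)
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy      # determinant of the structure tensor
    trace = Sxx + Syy
    return det - k * trace ** 2      # corner response map
```

For a synthetic image containing a bright square, the response at a corner of the square exceeds the response in the flat interior, so thresholding the map yields candidate keypoints.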
Once identified, the keypoints and/or features associated with the keypoints may be described by feature descriptors or feature vectors. In an example implementation of such a feature descriptor or vector, information related to a feature (e.g., the appearance of a local neighborhood of each keypoint) may be represented by (e.g., encoded into) a series of values stored in the feature descriptor or vector. The descriptor or vector may then be used as a "fingerprint" to distinguish or match one feature to another.
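The idea of encoding the appearance of a keypoint's local neighborhood into a series of values may be sketched as a simplified, SIFT-like gradient-orientation histogram. The function name, patch size, and bin count below are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def patch_descriptor(patch, n_bins=8):
    """Encode a keypoint's local neighborhood as a normalized histogram of
    gradient orientations -- a simple feature-vector 'fingerprint'."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)              # gradient magnitude per pixel
    ang = np.arctan2(gy, gx)            # gradient orientation in [-pi, pi]
    # quantize orientations into n_bins bins
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    # accumulate magnitude-weighted votes per orientation bin
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist  # L2-normalize the descriptor
```

Two patches with differently oriented structure (e.g., a vertical edge versus a horizontal edge) produce distinct descriptors, which is what allows one feature to be distinguished from, or matched to, another.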
Returning to the example shown in fig. 2, the processor 202 may be configured to determine which particular features to extract from a received image of the patient, extract those features from the image (e.g., generate one or more feature vectors corresponding to the features), and determine the identity of the patient by matching at least one of the extracted features with image data stored in the feature database 212. The processor may determine which one or more of the extracted features to use for identifying the patient based on, for example, user input or configuration (e.g., system configuration parameters). Further, the processor 202 may be configured to determine that certain areas of the patient's body are occluded or covered and to avoid using features associated with those areas for patient matching (e.g., the processor 202 may decide to use different features, such as the patient's walking pattern, to identify the patient). Occluded or covered areas of the patient may be determined, for example, by operating occlusion detectors on one or more portions of the patient's body (e.g., in a bottom-up manner), and/or by identifying the overall posture of the patient and inferring the occluded regions from that posture. Depth information associated with one or more images of the patient may also be used to determine occluded or covered areas. Alternatively, the processor 202 may be configured to use features associated with the occluded regions for patient matching, while providing an indication that such a match may not be robust (e.g., assigning the match a low confidence score).
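The occlusion-aware matching logic above can be sketched as follows: per-feature similarities are combined while occluded features are excluded, and a confidence score reflects how many features were actually usable. The function name, feature names, and fallback confidence value are hypothetical illustrations, not the disclosed design.

```python
def match_with_occlusion(feature_sims, occluded):
    """Combine per-feature match similarities, skipping occluded regions.

    feature_sims: dict mapping feature name -> similarity in [0, 1]
    occluded:     set of feature names judged occluded or covered
    Returns (score, confidence); confidence drops when fewer features
    could be used, and is lowest when all features were occluded.
    """
    usable = {k: v for k, v in feature_sims.items() if k not in occluded}
    if not usable:
        # fall back to all features, but flag the match as low-confidence
        return sum(feature_sims.values()) / len(feature_sims), 0.1
    score = sum(usable.values()) / len(usable)
    confidence = len(usable) / len(feature_sims)
    return score, confidence
```

For example, with the body-contour feature occluded, identification may rely on facial features and walking pattern, and the returned confidence reflects that only two of three features were used.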
In an example embodiment, the processing unit 200 may include a neural network in addition to or in place of the feature database 212 for identifying the patient based on images obtained by the sensing devices (e.g., sensing device 106). The neural network may be a Convolutional Neural Network (CNN) or a Deep Neural Network (DNN) that includes multiple layers (e.g., an input layer, one or more convolutional layers, one or more pooling layers, one or more fully-connected layers, and/or an output layer). Each layer may correspond to a plurality of filters (or kernels), and each filter may be designed to detect a particular type of visual feature. The filters may be associated with respective weights that, when applied to the input, produce an output indicating whether certain visual features have been detected. The weights associated with the filters may be learned by the neural network through a training process that includes: inputting patient images from a training dataset to the neural network (e.g., in a forward pass), calculating the loss resulting from the weights currently assigned to the filters based on a loss function (e.g., a margin-based loss function), and updating (e.g., in a backward pass) the weights assigned to the filters to minimize the loss (e.g., based on stochastic gradient descent). Once trained, the neural network can take an image of the patient at the input layer, extract and/or classify visual features of the patient from the image, and provide an indication at the output layer as to whether the input image matches an image of a known patient.
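A single iteration of the margin-based training loop described above may be sketched as follows, using a toy linear embedding in place of a CNN and a numeric gradient in place of backpropagation. All names, the margin value, and the learning rate are illustrative assumptions; a real system would use a deep network and analytic gradients.

```python
import numpy as np

def margin_loss(w, x_anchor, x_pos, x_neg, margin=1.0):
    """Margin-based loss: the anchor should score higher against a
    same-patient sample (x_pos) than against a different patient (x_neg)
    by at least `margin`. `w` is a toy linear 'network'."""
    za, zp, zn = w @ x_anchor, w @ x_pos, w @ x_neg
    return max(0.0, margin - (float(za @ zp) - float(za @ zn)))

def num_grad(f, w, eps=1e-5):
    """Numeric gradient of scalar function f at weights w (stand-in for
    the backward pass)."""
    g = np.zeros_like(w)
    for i in np.ndindex(w.shape):
        w1, w2 = w.copy(), w.copy()
        w1[i] += eps
        w2[i] -= eps
        g[i] = (f(w1) - f(w2)) / (2 * eps)
    return g

def sgd_step(w, grad_fn, lr=0.01):
    """One stochastic-gradient-descent update of the weights."""
    return w - lr * grad_fn(w)
```

Applying one update to weights that currently violate the margin reduces the loss, which is the mechanism by which the filter weights are driven toward correct patient matching over many iterations.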
In any of the above examples, once a matching patient is found, processor 202 may continue to query a repository (e.g., repository 112 in fig. 1) based on the identity of the patient to retrieve a medical record (e.g., image and/or non-image data associated with the medical procedure) of the patient. The medical record may include, for example, positioning information associated with a medical procedure to be performed on the patient, previously scanned pictures or other types of images of the patient, a diagnosis and treatment history of the patient, and so forth. The processor 202 can generate personalized medical assistance information for the patient (e.g., establish a medical profile) based on the retrieved medical records. Processor 202 may also display some or all of the personalized medical assistance information to the patient or medical professional via a graphical user interface, e.g., as described in conjunction with fig. 1.
Fig. 3a and 3b are diagrams illustrating an example Graphical User Interface (GUI) for presenting personalized medical assistance information to a patient or medical professional in response to identifying the patient based on one or more images. Fig. 3a illustrates a first example GUI 300a for presenting personalized medical assistance information. GUI 300a may include regions 302a, 304a, and/or 306 a. The area 302a may be configured to display basic patient information such as patient name, date of birth, time of last visit by the patient, diagnosis, prescription, etc. Region 304a may be configured to display a desired position or pose of the patient for an upcoming medical procedure (e.g., an MRI scan), and region 306a may be configured to show a position or pose that the patient previously assumed during a similar procedure (e.g., a correct and/or incorrect position or pose) and/or adjustments made by the patient to enter the desired position or pose. The position or pose and/or adjustment information may be presented in various formats including, for example, video or animation.
Fig. 3b illustrates a second example GUI 300b for presenting personalized medical assistance information as described herein. GUI 300b may include regions 302b, 304b, and/or 306 b. Region 302b in fig. 3b may be configured to display basic patient information, similar to region 302a in fig. 3a, and region 304b may be configured to display a diagnosis or treatment history of the patient, such as a scanned image of the patient overlaid with a picture or model of the patient at a corresponding scan position or pose, as described herein. A service provider (e.g., a doctor) may view the presented information, for example, using a scroll bar 306 b.
Fig. 4 is a flow diagram illustrating a method 400 that may be implemented by the personalized healthcare system (e.g., system 100 of fig. 1) described herein. For simplicity of illustration, the operations in method 400 are depicted and described herein in a particular order. However, it is to be understood that these operations may occur in various orders and/or concurrently, and with other operations not presented and described herein. Moreover, not all illustrated acts may be required to implement a methodology as disclosed herein.
The method 400 may begin at 402 with a processing unit (e.g., the processing unit 110 of fig. 1 or the processing unit 200 of fig. 2) of the personalized healthcare system. At 404, the processing unit may receive one or more images of the patient from a sensing device (e.g., sensing device 106 of fig. 1). Such images may include camera photographs, thermal images, and/or other types of images depicting characteristics of the patient. At 406, the processing unit may analyze the received image and determine an identity of the patient based on at least one of the features extracted from the image. As described herein, the analysis and/or identification may be performed, for example, by matching at least one of the extracted features with known features of the patient stored in a feature database and/or by utilizing a neural network trained for visual recognition. Once the identity of the patient is determined, the processing unit may retrieve medical records (e.g., image and/or non-image data) of the patient from one or more repositories (e.g., repository 112 in fig. 1) and use the medical records to generate personalized medical assistance information (e.g., a medical profile) for the patient at 408. The personalized medical assistance information may then be used to provide personalized healthcare services to the patient at 410, including, for example, positioning assistance, scanned image review, medical history analysis, and the like.
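The steps of method 400 can be sketched end-to-end as a small pipeline: image in, features extracted, identity resolved, record retrieved, assistance information out. The stub functions, record fields, and return structure below are hypothetical placeholders for the components described in the method, not the disclosed implementation.

```python
def personalized_care_pipeline(image, extract, identify, records):
    """Minimal sketch of method 400.

    extract:  callable, image -> feature set        (steps 404/406)
    identify: callable, features -> patient id/None (step 406)
    records:  dict, patient id -> medical record    (step 408)
    Returns personalized assistance info, or None if the patient
    could not be identified.
    """
    features = extract(image)            # analyze the received image
    patient_id = identify(features)      # determine patient identity
    if patient_id is None:
        return None                      # unknown patient: no assistance
    record = records.get(patient_id, {}) # retrieve the medical record
    return {                             # generate assistance info (408/410)
        "patient_id": patient_id,
        "positioning": record.get("positioning", "default protocol"),
        "history": record.get("history", []),
    }
```

With stubbed extraction and identification, a known patient yields a populated assistance record (e.g., positioning guidance and history for review), while an unidentified patient yields none.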
While the present disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not limit the disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure. In addition, unless specifically stated otherwise, discussions utilizing terms such as "dividing," "analyzing," "determining," "enabling," "identifying," "modifying," or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data represented as physical quantities within the computer system memories or other such information storage, transmission or display devices.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (10)

1. A system for providing personalized healthcare services, comprising:
one or more repositories configured to store electronic medical records of a patient, the electronic medical records including image data and/or non-image data associated with a medical procedure performed or to be performed for the patient; and
a processing unit configured to:
receiving one or more images of the patient;
extracting one or more features from the one or more images that are representative of a physiological characteristic of the patient;
determining an identity of the patient based on at least one of the extracted features;
in response to determining the identity of the patient, retrieving the image and/or non-image data from the one or more repositories; and is
Generating personalized medical assistance information relating to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or the patient physiology or position or pose for the medical procedure.
2. The system of claim 1, further comprising a display device configured to present the personalized medical assistance information, wherein the presentation comprises a visual depiction of the position or pose of the patient for the medical procedure.
3. The system of claim 1, wherein the processing unit being configured to determine the identity of the patient based on at least one of the extracted features comprises: the processing unit is configured to match the at least one of the extracted features with known features of the patient stored in a feature database.
4. The system of claim 1, wherein the processing unit is configured to determine the identity of the patient based on at least one of the extracted features using a neural network, the neural network being trained for visual recognition.
5. The system of claim 1, wherein the personalized medical assistance information includes instructions on how to adjust to achieve a desired position or pose for the medical procedure.
6. The system of claim 1, wherein the image data includes a depiction of an incorrect position or pose of the medical procedure, and the personalized medical assistance information includes instructions on how to avoid the incorrect position or pose.
7. The system of claim 1, wherein the image data comprises one or more scan images of the patient related to the medical procedure, each of the one or more scan images associated with a scan position or pose of the patient, and the processing unit is further configured to:
aligning each of the one or more scan images of the patient with a picture of the patient depicting the patient in a position or pose substantially similar to the scan position or pose associated with the scan image;
generating visual representations of pairs of aligned scan images and pictures of the patient by at least overlapping the picture of the patient with the scan image; and is
Including the visual representation in the personalized medical assistance information.
8. The system of claim 1, wherein the parameter associated with the medical procedure comprises a medical imaging parameter associated with the medical procedure or an operating parameter of a medical device.
9. A method for providing personalized healthcare services, the method comprising:
receiving one or more images of a patient;
extracting one or more features from the one or more images that are representative of a physiological characteristic of the patient;
determining an identity of the patient based on at least one of the extracted features;
in response to determining the identity of the patient, retrieving image and/or non-image data from one or more repositories, the image and/or non-image data associated with a medical procedure performed or to be performed for the patient;
generating personalized medical assistance information relating to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a physiology or location or pose of the patient for the medical procedure; and
presenting the personalized medical assistance information on a display device.
10. An apparatus for providing personalized healthcare services, comprising:
a processing unit configured to:
receiving one or more images of a patient;
extracting one or more features from the one or more images that are representative of a physiological characteristic of the patient;
determining an identity of the patient based on at least one of the extracted features;
in response to determining the identity of the patient, retrieving image and/or non-image data from one or more repositories, the image and/or non-image data relating to a medical procedure performed or to be performed for the patient;
generating personalized medical assistance information relating to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information comprises at least one parameter associated in a physiology or position or pose of the medical procedure or the patient for the medical procedure; and is
Presenting the personalized medical assistance information.
CN202011346991.0A 2019-11-27 2020-11-26 Systems, methods and apparatus for providing personalized health care Active CN112420143B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962941203P 2019-11-27 2019-11-27
US62/941,203 2019-11-27
US16/814,373 US11430564B2 (en) 2019-11-27 2020-03-10 Personalized patient positioning, verification and treatment
US16/814,373 2020-03-10

Publications (2)

Publication Number Publication Date
CN112420143A true CN112420143A (en) 2021-02-26
CN112420143B CN112420143B (en) 2024-08-02

Family

ID=74843556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011346991.0A Active CN112420143B (en) 2019-11-27 2020-11-26 Systems, methods and apparatus for providing personalized health care

Country Status (1)

Country Link
CN (1) CN112420143B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101889870A (en) * 2010-07-20 2010-11-24 江苏同庚电子科技有限公司 Radiotherapy locating device
CN102016859A (en) * 2008-05-09 2011-04-13 皇家飞利浦电子股份有限公司 Method and system for personalized guideline-based therapy augmented by imaging information
CN102123761A (en) * 2008-07-11 2011-07-13 麦德托尼克公司 Posture state display on medical device user interface
CN102132280A (en) * 2008-08-15 2011-07-20 皇家飞利浦电子股份有限公司 Model enhanced imaging
US20120056800A1 (en) * 2010-09-07 2012-03-08 Microsoft Corporation System for fast, probabilistic skeletal tracking
CN102422291A (en) * 2009-05-13 2012-04-18 皇家飞利浦电子股份有限公司 Method and system for imaging patients with a personal medical device
US20130250050A1 (en) * 2012-03-23 2013-09-26 Objectvideo, Inc. Video surveillance systems, devices and methods with improved 3d human pose and shape modeling
US20150196780A1 (en) * 2012-08-09 2015-07-16 Koninklijke Philips N.V. System and method for radiotherapeutic treatment
CN105208960A (en) * 2013-05-16 2015-12-30 直观外科手术操作公司 Systems and methods for robotic medical system integration with external imaging
US20160157938A1 (en) * 2013-08-23 2016-06-09 Stryker Leibinger Gmbh & Co. Kg Computer-Implemented Technique For Determining A Coordinate Transformation For Surgical Navigation
CN106659453A (en) * 2014-07-02 2017-05-10 柯惠有限合伙公司 System and method for segmentation of lung
CN107334487A (en) * 2017-08-11 2017-11-10 上海联影医疗科技有限公司 A kind of medical image system and its scan method
US20170351911A1 (en) * 2014-02-04 2017-12-07 Pointgrab Ltd. System and method for control of a device based on user identification
CN108175503A (en) * 2013-03-13 2018-06-19 史赛克公司 System for arranging objects in an operating room in preparation for a surgical procedure
CN109247940A (en) * 2018-11-22 2019-01-22 上海联影医疗科技有限公司 The scan method and magnetic resonance system of magnetic resonance system
US20190096520A1 (en) * 2017-09-28 2019-03-28 Siemens Healthcare Gmbh Personalized patient model
CN109935316A (en) * 2017-12-19 2019-06-25 奥林巴斯株式会社 Medical auxiliary system, information terminal device, patient image data adquisitiones


Also Published As

Publication number Publication date
CN112420143B (en) 2024-08-02

Similar Documents

Publication Publication Date Title
US11430564B2 (en) Personalized patient positioning, verification and treatment
US10049457B2 (en) Automated cephalometric analysis using machine learning
JP6947759B2 (en) Systems and methods for automatically detecting, locating, and semantic segmenting anatomical objects
KR101874348B1 (en) Method for facilitating dignosis of subject based on chest posteroanterior view thereof, and apparatus using the same
US9858667B2 (en) Scan region determining apparatus
CN111919260A (en) Surgical video retrieval based on preoperative images
CN113614790A (en) Method for generating a 3D-printable model of a patient-specific anatomy
US20140341449A1 (en) Computer system and method for atlas-based consensual and consistent contouring of medical images
US9002083B2 (en) System, method, and software for optical device recognition association
US20230032103A1 (en) Systems and methods for automated healthcare services
US11941738B2 (en) Systems and methods for personalized patient body modeling
US20210059758A1 (en) System and Method for Identification, Labeling, and Tracking of a Medical Instrument
US20190012805A1 (en) Automatic detection of an artifact in patient image data
JP6967983B2 (en) Image processing equipment, image processing methods, and programs
US9454814B2 (en) PACS viewer and a method for identifying patient orientation
KR102204309B1 (en) X-ray Image Display Method Based On Augmented Reality
CN112420143B (en) Systems, methods and apparatus for providing personalized health care
US11354803B1 (en) Identifying spurious tracts in neuro-networks
US20220122261A1 (en) Probabilistic Segmentation of Volumetric Images
WO2022194855A1 (en) Detecting abnormalities in an x-ray image
US20210192717A1 (en) Systems and methods for identifying atheromatous plaques in medical images
CN114463246A (en) Circle selection system and circle selection method
JP2021515326A (en) Systems and methods for accelerated clinical workflows
KR20190143657A (en) Apparatus and method for alignment of bone suppressed chest x-ray image
KR102596666B1 (en) System and method for providing integrated medical service based on medical image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant