CN112420143B - Systems, methods and apparatus for providing personalized health care - Google Patents


Info

Publication number
CN112420143B
Authority
CN
China
Prior art keywords
patient
medical
personalized
assistance information
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011346991.0A
Other languages
Chinese (zh)
Other versions
CN112420143A (en)
Inventor
Srikrishna Karanam
Ziyan Wu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 16/814,373 (now US11430564B2)
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Publication of CN112420143A
Application granted
Publication of CN112420143B

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The patient's healthcare experience can be enhanced with a system that automatically identifies the patient based on one or more images of the patient and generates personalized medical assistance information for the patient based on an electronic medical record stored for the patient. Such an electronic medical record may include image data and/or non-image data associated with medical procedures performed or to be performed for the patient. The image and/or non-image data can thus be incorporated into the personalized medical assistance information to provide positioning and/or other types of diagnostic or therapeutic guidance to the patient or a service provider.

Description

Systems, methods and apparatus for providing personalized health care
Cross Reference to Related Applications
The present application claims the benefit of U.S. provisional patent application Ser. No. 62/941,203, filed on November 27, 2019, and U.S. patent application Ser. No. 16/814,373, filed on March 10, 2020, the disclosures of which are incorporated herein by reference in their entireties.
Technical Field
The present application relates to the field of health care services.
Background
Medical diagnosis and treatment are personal in nature and often require customized instructions or guidelines for each patient. For example, in radiation therapy and medical imaging (e.g., X-ray radiography, magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET)), success depends largely on the ability to hold a patient in a desired pose, according to the physical characteristics of the patient, so that scanning or treatment can be performed in an accurate and precise manner. Conventional positioning techniques typically require manually adjusting the patient's position, placing markers on or near the patient's body, or running simulations to determine optimal operating parameters and/or conditions for the patient. These techniques are not only cumbersome but also lack accuracy, consistency, and real-time monitoring capabilities.
Meanwhile, medical facilities such as hospitals often maintain a large number of medical records relating to patients' diagnostic histories, treatment plans, scanned images, and the like. These medical records can provide valuable insight into a patient's medical history and into ways to enhance the patient's healthcare experience. Accordingly, ways to personalize healthcare services using these medical records are highly desirable. Further, given the unique circumstances associated with medical facilities, it is also important to provide these personalized services in an accurate, secure, and automatic manner so as to minimize the risks of human error, cross-contamination, privacy violations, and the like.
Disclosure of Invention
Systems, methods, and instrumentalities are described herein for providing personalized health care services to a patient. In an example, such a system can include one or more repositories configured to store electronic medical records of a patient. The electronic medical records can include image data and/or non-image data associated with a medical procedure performed or to be performed for the patient. The image and/or non-image data may be retrieved by a processing unit of the system and used to generate personalized medical assistance information related to the patient. For example, the processing unit may be configured to receive one or more images of the patient and extract one or more features from the images that represent physiological characteristics of the patient. Based on at least one of these extracted features, the processing unit may determine the identity of the patient and retrieve the image and/or non-image data from the one or more repositories. The personalized medical assistance information thus created may include parameters associated with the medical procedure (e.g., medical imaging parameters or operating parameters of a medical device), positioning information related to the medical procedure, and/or overlapped scanned images and pictures of the patient that show a diagnosis or treatment history of the patient. The personalized medical assistance information may be presented to the patient or a service provider via a display device to assist the patient or service provider during the healthcare service.
The images described herein may be photographs of the patient taken by a camera, thermal images of the patient generated by a thermal sensor, and so on. Features extracted from these images may be matched against a set of known features of the patient stored in a feature database. These features may also be processed by a neural network trained for visual recognition. Further, the image data stored in the repository may include a depiction of an incorrect position or pose for the medical procedure, and the personalized medical assistance information may include instructions on how to avoid the incorrect position or pose. The overlapped scanned images and pictures of the patient may be generated by determining the respective scan position or pose associated with each scanned image and aligning the scanned image with a picture of the patient in a substantially similar position or pose. The resulting representation may be adapted for display in an Augmented Reality (AR) environment to enhance the experience of the patient or service provider.
The present application provides a system for providing personalized health care services comprising: one or more repositories configured to store an electronic medical record of a patient, the electronic medical record including image data and/or non-image data associated with a medical procedure performed or to be performed for the patient; and a processing unit configured to: receiving one or more images of the patient; extracting one or more features representing physiological characteristics of the patient from the one or more images; determining an identity of the patient based on at least one of the extracted features; retrieving the image and/or non-image data from the one or more repositories in response to determining the identity of the patient; and generating personalized medical assistance information related to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure.
Wherein the one or more images include a photograph of the patient taken by a camera or a thermal image of the patient generated by a thermal sensor.
Wherein the characteristic of the patient comprises a walking pattern of the patient.
Wherein the processing unit being configured to determine the identity of the patient based on at least one of the extracted features comprises: the processing unit is configured to match the at least one of the extracted features with known features of the patient stored in a feature database.
Wherein the processing unit is configured to determine the identity of the patient based on at least one of the extracted features using a neural network trained for visual recognition.
Wherein the personalized medical assistance information comprises instructions on how to reach the position or pose for the medical procedure.
Wherein the image data comprises a depiction of an incorrect position or pose of the medical procedure and the personalized medical assistance information comprises instructions on how to avoid the incorrect position or pose.
Wherein the image data comprises one or more scanned images of the patient related to the medical procedure, each of the one or more scanned images being associated with a scanned location of the patient, and the processing unit is further configured to: aligning each of the one or more scanned images of the patient with a picture of the patient depicting the patient in a position or pose substantially similar to the scanning position or pose associated with the scanned image; generating visual representations of respective pairs of aligned scanned images and pictures of the patient by overlapping at least the pictures of the patient with the scanned images; and including the visual representation in the personalized medical assistance information.
Wherein the processing unit is further configured to determine which one or more of the extracted features is to be used for determining the identity of the patient based on user input or configuration.
Wherein the parameter associated with the medical procedure comprises a medical imaging parameter associated with the medical procedure or an operating parameter of a medical device.
The present application also provides a method for providing personalized health care services, the method comprising: receiving one or more images of a patient; extracting one or more features representing physiological characteristics of the patient from the one or more images; determining an identity of the patient based on at least one of the extracted features; in response to determining the identity of the patient, retrieving image and/or non-image data from one or more repositories, the image data and/or non-image data being associated with a medical procedure performed or to be performed for the patient; generating personalized medical assistance information related to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information comprises at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure; and presenting the personalized medical assistance information on a display device.
Wherein the one or more images include a photograph of the patient taken by a camera or a thermal image of the patient generated by a thermal sensor.
Wherein determining the identity of the patient based on the at least one of the extracted features comprises: the at least one of the extracted features is matched with known features of the patient stored in a feature database.
Wherein the identity of the patient is determined based on the at least one of the extracted features using a neural network trained for visual recognition.
Wherein the personalized medical assistance information comprises instructions on how to adjust to reach the position or pose for the medical procedure, and presenting the personalized medical assistance information on the display device comprises: a visual depiction of the instructions is presented on the display device.
Wherein the image data comprises one or more scanned images of the patient related to the medical procedure, each of the one or more scanned images being associated with a scanned position or pose of the patient, and the method further comprises: aligning each of the one or more scanned images of the patient with a picture of the patient depicting the patient in a position or pose substantially similar to the scanning position or pose associated with the scanned image; generating visual representations of respective pairs of aligned scanned images and pictures of the patient by overlapping at least the pictures of the patient with the scanned images; and including the visual representation in the personalized medical assistance information.
The present application also provides an apparatus for providing personalized health care services, comprising: a processing unit configured to: receiving one or more images of a patient; extracting one or more features representing physiological characteristics of the patient from the one or more images; determining an identity of the patient based on at least one of the extracted features; in response to determining the identity of the patient, retrieving image and/or non-image data from one or more repositories, the image and/or non-image data relating to a medical procedure performed or to be performed for the patient; generating personalized medical assistance information related to the patient based on the image and/or non-image data retrieved from the one or more repositories, wherein the personalized medical assistance information comprises at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure; and presenting the personalized medical assistance information.
Drawings
Examples disclosed herein may be understood in more detail from the following description, given by way of example in conjunction with the accompanying drawings.
FIG. 1 is a simplified diagram illustrating an example system for providing personalized health care services described herein.
FIG. 2 is a simplified block diagram illustrating an example processing unit described herein.
FIGS. 3a and 3b are diagrams of example Graphical User Interfaces (GUIs) for providing personalized medical assistance information to a patient or service provider.
FIG. 4 is a flow chart illustrating a method that may be implemented by the personalized health care system depicted in FIG. 1.
Detailed Description
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
FIG. 1 is a diagram of an example system 100 for providing personalized health care services at a medical facility such as a hospital. These healthcare services may include, for example, imaging procedures performed via a scanner 102 (e.g., a CT scanner, an MRI machine, a PET scanner, etc.) or radiation therapy delivered by a medical linear accelerator (LINAC) (not shown). Such services may require precise knowledge of the patient's anatomical characteristics and/or require the patient to stay in a particular position or posture to enhance the accuracy and/or efficiency of the scan or treatment. For example, proper positioning of the patient may ensure that the target scan area of the patient is adequately and clearly captured and/or that the patient is not exposed to unnecessary radiation during treatment. At the same time, a medical expert planning or performing a medical procedure may also need access to the patient's personal medical information in order to obtain an accurate assessment of the patient's condition or to design an appropriate plan or protocol for the procedure.
The system 100 can facilitate the provision of the personalized services described above by automatically identifying a recipient of the services (e.g., the patient 104) and constructing a medical profile of the patient based on the patient's historical medical records. For example, the system 100 may include a sensing device 106 (e.g., an image capture device) configured to capture images of a patient in or around the medical facility. The sensing device 106 may include one or more sensors, such as cameras, red, green, and blue (RGB) sensors, depth sensors, thermal sensors, and/or far-infrared (FIR) or near-infrared (NIR) sensors, configured to detect the presence of a patient and, in response, generate an image of the patient. Depending on the type of sensing device used, the image may be, for example, a photograph of the patient taken by a camera or a thermal image of the patient generated by a thermal sensor. The sensing device 106 may be mounted at various locations in the medical facility, such as inside a treatment room, above a doorway, on an imaging device, and so on. Alternatively or additionally, the sensing device 106 may include a scanner configured to obtain an image of the patient based on an existing photograph of the patient (e.g., a driver's license presented by the patient during enrollment).
The patient image generated by the sensing device 106 may be representative of one or more characteristics of the patient. Such characteristics may include, for example, facial features of the patient, body contours of the patient, walking patterns of the patient, and the like. As described in more detail below, the processing device may identify these characteristics of the patient based on features extracted from the images.
The system 100 may include an interface unit 108 configured to receive the patient image generated by the sensing device 106. The interface unit 108 may be communicatively coupled to the sensing device 106, for example, by a wired or wireless communication link. The interface unit 108 may be configured to retrieve or receive images from the sensing device 106 periodically (e.g., once per minute, according to a schedule, etc.), or the interface unit 108 may be configured to receive notifications from the sensing device 106 when images have been generated and retrieve images from the sensing device in response to receiving the notifications. The sensing device 106 may also be configured to send the image to the interface unit 108 without first sending a notification.
The interface unit 108 may operate as a pre-processor for the images received from the sensing device 106. For example, the interface unit 108 may be configured to reject poor-quality images or to convert received images into an appropriate format so that they may be further processed by downstream components of the system 100. The interface unit 108 may also be configured to prepare the images in a manner that reduces the complexity of downstream processing. Such preparation may include, for example, converting a color image to grayscale, resizing the images to a uniform size, and the like. Further, while the interface unit 108 is shown in fig. 1 as being separate from the other components of the system 100, the interface unit 108 may also be part of those components. For example, the interface unit 108 may be included in the sensing device 106 or in the processing unit 110 without affecting the functionality of the interface unit 108 described herein.
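By way of illustration, the pre-processing described above may be sketched as follows in Python with OpenCV. The sketch assumes a BGR input image; the blur-based quality check, the threshold value, and the target size are illustrative assumptions rather than values specified herein.

```python
from typing import Optional

import cv2
import numpy as np

TARGET_SIZE = (256, 256)  # illustrative uniform size; not specified herein
BLUR_THRESHOLD = 100.0    # illustrative sharpness cutoff for quality rejection

def preprocess(image_bgr: np.ndarray) -> Optional[np.ndarray]:
    """Reject poor-quality images, convert to grayscale, and resize uniformly."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a common heuristic for detecting blur.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD:
        return None  # rejected: too blurry for downstream feature extraction
    return cv2.resize(gray, TARGET_SIZE, interpolation=cv2.INTER_AREA)
```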
The patient images generated by the sensing device 106 and/or the interface unit 108 may be used to establish a medical profile of the patient, e.g., automatically upon detection of the patient at the medical facility or in a treatment room. The manual operations involved in the process can thus be minimized or reduced, lowering the risk of human error, unnecessary exposure to contamination or radiation, and the like, and improving the speed of service.
The system 100 may include a processing unit 110 that can provide the improvements described above. The processing unit 110 may be communicatively coupled to the sensing device 106 and/or the interface unit 108 to receive an image of the patient. The processing unit 110 may be configured to extract features representing physiological characteristics of the patient from the image and compare the extracted features with a set of known features of the patient to determine the identity of the patient. Alternatively or additionally, the processing unit 110 may utilize an artificial neural network trained to take an image of the patient as input and generate an output indicative of the identity of the patient. Such a neural network may be a convolutional neural network (CNN) comprising cascaded layers, each layer trained to make pattern-matching decisions based on a respective level of abstraction of the visual characteristics contained in the image. The CNN may be trained using various datasets and loss functions such that it becomes able to extract features (e.g., in the form of feature vectors) from an input image, determine whether the features match those of a known person, and indicate the matching result at the output of the network. Example embodiments of the neural network and of patient identification are described in more detail below.
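As an illustrative sketch of the comparison step, matching an extracted feature vector against a set of known patient features may be expressed as a nearest-neighbor search under cosine similarity. The dictionary layout and the 0.7 threshold below are assumptions for illustration, not requirements of the system.

```python
from typing import Dict, Optional, Tuple

import numpy as np

def identify(query_vec: np.ndarray,
             enrolled: Dict[str, np.ndarray],
             threshold: float = 0.7) -> Tuple[Optional[str], float]:
    """Match a feature vector against known (enrolled) patient features.

    enrolled maps patient_id -> unit-normalized feature vector. Returns
    (patient_id, score), or (None, score) when no match clears the threshold.
    """
    q = query_vec / np.linalg.norm(query_vec)
    best_id, best_score = None, -1.0
    for patient_id, ref in enrolled.items():
        score = float(np.dot(q, ref))  # cosine similarity for unit vectors
        if score > best_score:
            best_id, best_score = patient_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```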
The system 100 can also include at least one repository 112 (e.g., one or more repositories or databases) configured to store patient medical information (e.g., medical records). These medical records can include general patient information (e.g., patient ID, name, electronic and physical addresses, insurance, etc.), non-image medical data associated with medical procedures performed or to be performed on a patient (e.g., the patient's diagnostic history, treatments received by the patient, scan protocols, medical metadata, etc.), and/or image data associated with those medical procedures. In an example, the image data may include scanned images of the patient (e.g., MRI, CT, X-ray, ultrasound, etc.), visual representations of positions or poses assumed by the patient during these scans (e.g., correct or incorrect positions or poses), visual representations of adjustments made by the patient to enter the correct positions or poses, and so on.
Medical records may be stored in the repository 112 in a structured manner (e.g., arranged according to a format or schema). The medical records can be collected from a number of sources including, for example, hospitals, doctors' offices, insurance companies, and the like. The medical records can be collected and/or organized by the system 100, by another system at the medical facility, or by a different organization (e.g., the medical records can exist independently of the system 100). The collection and/or organization of the medical records can be performed in an offline manner, or can be performed while the repository 112 is being actively accessed (e.g., online) by other systems or applications.
The repository 112 may be hosted on one or more database servers coupled to the processing unit 110 via a wired or wireless communication link (e.g., a private computer network, a public computer network, a cellular network, a service cloud, etc.). The communication link may be protected via encryption, a virtual private network (VPN), Secure Sockets Layer (SSL), etc., to secure the medical information stored in the repository. The repository 112 may also utilize a distributed architecture, such as one established using blockchain technology.
The medical records stored in the repository 112 can be used to personalize the healthcare services provided to a patient. For example, in response to identifying the patient based on one or more images of the patient, the processing unit 110 can retrieve all or a subset of the patient's medical records, including the above-described image and/or non-image data, from the repository 112 and use that information to generate personalized medical assistance information (e.g., a medical profile) for the patient. The personalized medical assistance information may include, for example, a procedure to be performed for the patient and historical data associated with the procedure, such as scanned images of the patient from similar procedures performed in the past, the positions or poses assumed by the patient during those procedures, adjustments or corrections made to bring the patient into a desired or correct position or pose, and the like. The personalized medical assistance information may also include one or more parameters associated with the procedure, such as imaging parameters (e.g., image size, voxel size, repetition time, etc.) and/or operating parameters of the medical device used in the procedure (e.g., height, orientation, power, dose, etc.). Such information may provide the patient with guidance and insight into what may be needed for the upcoming procedure (e.g., in terms of positioning).
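A minimal sketch of how retrieved records might be folded into such a profile is shown below; the record layout and field names are assumptions for illustration and do not reflect any particular repository schema.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class AssistanceInfo:
    """Personalized medical assistance profile (illustrative field names)."""
    patient_id: str
    procedure: str
    imaging_params: Dict[str, Any] = field(default_factory=dict)  # e.g., voxel size
    device_params: Dict[str, Any] = field(default_factory=dict)   # e.g., table height
    prior_scans: List[str] = field(default_factory=list)          # scan image references
    positioning_notes: List[str] = field(default_factory=list)    # prior poses/corrections

def build_assistance_info(patient_id: str, records: List[dict],
                          procedure: str = "MRI") -> AssistanceInfo:
    """Fold retrieved medical records into one assistance profile."""
    info = AssistanceInfo(patient_id=patient_id, procedure=procedure)
    for rec in records:
        if rec.get("type") == "scan":
            info.prior_scans.append(rec["uri"])
            info.positioning_notes.append(rec.get("pose", "pose unknown"))
        elif rec.get("type") == "protocol":
            info.imaging_params.update(rec.get("imaging", {}))
            info.device_params.update(rec.get("device", {}))
    return info
```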
The medical profiles described herein may also be used to assist medical professionals in providing personalized services to patients. For example, the personalized medical assistance information described herein may include a diagnosis or treatment history of a patient, which a medical professional may use to assess the patient's condition. The diagnosis or treatment history may include previously scanned images of the patient taken at different times. Each scanned image may be characterized by at least one positioning parameter that indicates the position or pose of the patient during the scan. For example, the positioning parameter may be extracted from metadata associated with the respective scanned image. When generating the personalized medical assistance information described herein, the processing unit 110 may align these scanned images of the patient with pictures or models (e.g., 3D mesh models) of the patient that depict the patient in a position or pose substantially similar to the position or pose the patient assumed when the scanned image was created. In an example, the pictures may be captured by an image capture device such as the sensing device 106, and the models may be constructed based on the pictures (e.g., using neural networks and/or parametric model-fitting techniques to derive a human model from a 2D image). The processing unit 110 may then generate a visual representation of each aligned pair of picture (or model) and scanned image, in which the picture (or model) of the patient is overlapped with the scanned image of the patient. The visual representations thus generated may demonstrate changes (or the lack of changes) over time in a diseased area of the patient, and may do so with a higher level of accuracy, since each scanned image is shown against a background containing a depiction of the patient in a position or pose similar to the scan position or pose.
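The align-and-overlay step may be sketched as below, assuming corresponding landmarks are available in both images (how they are obtained, e.g., from anatomical keypoints, is outside the sketch); the affine model and the blending weight are illustrative choices.

```python
import cv2
import numpy as np

def overlay_scan_on_picture(scan: np.ndarray, picture: np.ndarray,
                            scan_pts, picture_pts,
                            alpha: float = 0.4) -> np.ndarray:
    """Warp a scanned image onto a patient picture taken in a similar pose.

    scan_pts / picture_pts: (N, 2) corresponding landmarks, N >= 3. The two
    images are assumed to have the same type and channel count.
    """
    matrix, _ = cv2.estimateAffinePartial2D(
        np.asarray(scan_pts, dtype=np.float32),
        np.asarray(picture_pts, dtype=np.float32))
    if matrix is None:
        raise ValueError("could not estimate alignment from the landmarks")
    h, w = picture.shape[:2]
    warped = cv2.warpAffine(scan, matrix, (w, h))
    # Alpha-blend the aligned scan over the picture for display.
    return cv2.addWeighted(picture, 1.0 - alpha, warped, alpha, 0.0)
```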
Some or all of the personalized medical assistance information (e.g., the medical profile) described above may be visually presented to the patient or a medical professional via a display device 114. The display device 114 may include one or more monitors (e.g., computer monitors, TV monitors, tablet computers, mobile devices such as smartphones, etc.), one or more speakers, one or more Augmented Reality (AR) devices (e.g., AR goggles), and/or other accessories configured to facilitate the presentation. The display device 114 may be communicatively coupled to the processing unit 110 (e.g., via a wired or wireless communication link) and configured to display the personalized medical assistance information generated by the processing unit 110. As described herein, such personalized medical assistance information may include basic patient information, desired configurations for an upcoming medical procedure (e.g., according to a corresponding scan protocol designed for the patient), previously captured scan images of the patient, positions or poses assumed by the patient during those scans, adjustments or corrections made to bring the patient into the desired scan positions or poses, overlapped scan images and pictures (or models) of the patient, and so on. The personalized medical assistance information may be displayed in a variety of formats including, for example, video, animation, and/or AR presentation. For example, an overlaid representation of a scanned image and a picture of the patient may be displayed in an AR environment, where a physician equipped with AR glasses and/or AR input devices may browse the representation in a stereoscopic manner.
Fig. 2 is a simplified block diagram illustrating an example processing unit 200 (e.g., the processing unit 110) described herein. The processing unit 200 may operate as a stand-alone device or may be connected (e.g., networked or clustered) with other computing devices to perform the functions described herein. In an example networking deployment, the processing unit 200 may operate in the capacity of a server or a client device in a server-client network environment, or it may act as a peer device in a peer-to-peer (or distributed) network environment. Further, while only a single unit is shown in fig. 2, the term "processing unit" should be understood to potentially include multiple units or machines that individually or jointly execute a set of instructions to perform any one or more of the functions discussed herein. The multiple units or machines may reside at a single location or at multiple locations, e.g., under a distributed computing architecture.
Processing unit 200 may include at least one processor (e.g., one or more processors) 202, which in turn may include a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC) processor, an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or any other circuit or processor capable of performing the functions described herein. The processing unit 200 may also include communication circuitry 204, memory 206, mass storage 208, and/or an input device 210. The communication circuitry 204 may be configured to transmit and receive information using one or more communication protocols (e.g., TCP/IP) and one or more communication networks, including a local area network (LAN), a wide area network (WAN), the internet, or a wireless data network (e.g., a Wi-Fi, 3G, 4G/LTE, or 5G network). The memory 206 may include a machine-readable medium configured to store instructions that, when executed, cause the processor 202 to perform one or more of the functions described herein. Examples of machine-readable media include volatile or nonvolatile memory, including but not limited to semiconductor memory (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)), flash memory, and the like. The mass storage device 208 may include one or more magnetic disks, such as an internal hard disk, a removable disk, a magneto-optical disk, a CD-ROM or DVD-ROM disk, etc., on which instructions and/or data may be stored to facilitate the functions described herein. The input device 210 may include a keyboard, a mouse, a voice-controlled input device, a touch-sensitive input device (e.g., a touch screen), etc., for receiving user input for the processing unit 200.
In an example embodiment, the processing unit 200 may include or may be coupled to a feature database 212 configured to store images of the patient and/or visual representations of one or more characteristics of the patient (e.g., known characteristics of the patient). The images and/or visual representations may be prepared (e.g., pre-computed) and stored in the feature database 212 based on image data of the patient collected from various sources, including, for example, pictures taken during the patient's past visits to the medical facility, a repository storing medical records of the patient (e.g., the repository 112 shown in fig. 1), a public photo ID database (e.g., a driver's license database), and so on. The feature database 212 may be communicatively coupled to the processor 202 and used by the processor to identify a patient based on a patient image obtained by a sensing device (e.g., the sensing device 106 of fig. 1). For example, the processor 202 may be configured to receive an image of the patient from the sensing device and extract a set of features from the image that represent physiological characteristics of the patient. The processor 202 may also be configured to match at least one of the extracted features with the image data (e.g., known features of the patient) stored in the feature database 212 to determine the identity of the patient.
The features and/or characteristics described herein may be associated with various attributes of the patient, such as body contour, height, facial features, walking pattern, gestures, and the like. In the context of digital images, these features or characteristics may correspond to structures in the image, such as points, edges, objects, and the like. Various techniques may be employed to extract these features from an image. For example, one or more keypoints associated with a feature may be identified, including points where the direction of an object's boundary changes abruptly, intersections between two or more edge segments, and so on. These keypoints may be characterized by well-defined locations in image space and/or stability under illumination/brightness disturbances. Accordingly, keypoints may be identified based on image derivatives, edge detection, curvature analysis, and the like.
Once identified, the keypoints and/or features associated with the keypoints may be described with feature descriptors or feature vectors. In example embodiments of such feature descriptors or vectors, information about the feature (e.g., the appearance of a local neighborhood of each keypoint) may be represented by (e.g., encoded into) a series of values stored in the feature descriptor or vector. The descriptor or vector may then be used as a "fingerprint" to distinguish one feature from another, or to match one feature to another.
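A sketch of keypoint detection and descriptor matching is given below using ORB, one classical detector/descriptor choice among many; nothing herein mandates a particular detector, and the parameter values are illustrative.

```python
import cv2

def match_features(img_a, img_b, max_matches: int = 50):
    """Detect keypoints, compute binary descriptors, and match them.

    Grayscale uint8 images are assumed. Returns the strongest matches,
    sorted by descriptor distance (smaller is more similar).
    """
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, desc_a = orb.detectAndCompute(img_a, None)
    kp_b, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return []  # no keypoints found in at least one image
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only mutual best matches, a simple robustness filter.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    return matches[:max_matches]
```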
Returning to the example shown in fig. 2, the processor 202 may be configured to determine what particular features are to be extracted from the received image of the patient, extract those features from the image (e.g., generate one or more feature vectors corresponding to the features), and determine the identity of the patient by matching at least one of the extracted features with the image data stored in the feature database 212. The processor may determine which one or more of the extracted features are to be used to determine the identity of the patient based on, for example, user input or configuration (e.g., system configuration parameters). Further, the processor 202 may be configured to determine that certain areas of the patient's body are blocked or covered and then avoid using features associated with the blocked areas for patient matching (e.g., the processor 202 may decide to use different features, such as the patient's walking pattern, to identify the patient). The blocked or covered area of the patient may be determined, for example, by: running occlusion detectors for one or more portions of the patient's body (e.g., in a bottom-up fashion), and/or recognizing the overall posture of the patient, and then inferring occlusion regions based on the overall posture of the patient. Depth information associated with one or more images of a patient may be used to determine a blocked or covered region. The processor 202 may also be configured to use features associated with the occlusion region for patient matching, but provide an indication that such matching may not be robust (e.g., give a low confidence score to the match).
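The feature-selection logic described above may be sketched as follows; the feature-to-region mapping and the fallback confidence value are assumptions for illustration.

```python
from typing import List, Tuple

# Illustrative mapping from feature name to the body region it depends on.
FEATURE_REGION = {"face": "head", "gait": "legs", "body_contour": "torso"}

def select_features(available: List[str],
                    occluded_regions: List[str]) -> Tuple[List[str], float]:
    """Prefer features from visible regions; fall back with low confidence.

    Returns (features_to_use, confidence). The 0.3 fallback confidence is
    an illustrative stand-in for "match may not be robust".
    """
    visible = [f for f in available
               if FEATURE_REGION.get(f) not in occluded_regions]
    if visible:
        return visible, 1.0
    # Every candidate feature depends on an occluded region: still attempt
    # a match, but flag it as not robust, as described above.
    return available, 0.3
```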
In an example embodiment, the processing unit 200 may also include a neural network for identifying the patient based on the images obtained by the sensing device (e.g., the sensing device 106), in addition to or in lieu of the feature database 212. The neural network may be a convolutional neural network (CNN) or a deep neural network (DNN) that includes multiple layers (e.g., an input layer, one or more convolutional layers, one or more pooling layers, one or more fully-connected layers, and/or an output layer). Each layer may correspond to a plurality of filters (or kernels), and each filter may be designed to detect a particular type of visual feature. The filters may be associated with respective weights that, when applied to an input, produce an output indicating whether certain visual features have been detected. The weights associated with the filters may be learned by the neural network through a training process that includes: inputting a patient image from a training dataset to the neural network (e.g., in a forward pass), calculating the loss resulting from the weights currently assigned to the filters based on a loss function (e.g., a margin-based loss function), and updating the weights assigned to the filters (e.g., in a backward pass) to minimize the loss (e.g., based on stochastic gradient descent). Once trained, the neural network is able to take an image of the patient at the input layer, extract and/or classify visual features of the patient from the image, and provide an indication at the output layer as to whether the input image matches an image of a known patient.
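One training iteration of such a network may be sketched as below in PyTorch, using a margin-based (triplet) loss and stochastic gradient descent; the backbone, embedding size, margin, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Illustrative backbone: ResNet-18 with its classifier head replaced by a
# 128-dimensional embedding layer (the feature vector described above).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 128)

criterion = nn.TripletMarginLoss(margin=0.5)  # margin value is an assumption
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)

def train_step(anchor: torch.Tensor, positive: torch.Tensor,
               negative: torch.Tensor) -> float:
    """One forward/backward pass: pull same-patient embeddings together and
    push different-patient embeddings apart by at least the margin."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(anchor), backbone(positive), backbone(negative))
    loss.backward()   # backward pass: gradients w.r.t. the filter weights
    optimizer.step()  # stochastic gradient descent update to minimize the loss
    return loss.item()
```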
In any of the above examples, once a matching patient is found, the processor 202 can continue to query a repository (e.g., repository 112 in fig. 1) based on the identity of the patient to retrieve medical records (e.g., image and/or non-image data associated with the medical procedure) of the patient. The medical records can include, for example, positioning information associated with a medical procedure to be performed on a patient, previously scanned pictures or other types of images of the patient, diagnostic and treatment history of the patient, and so forth. The processor 202 can generate personalized medical assistance information for the patient (e.g., establish a medical profile) based on the retrieved medical records. The processor 202 may also display some or all of the personalized medical assistance information to the patient or medical professional via a graphical user interface, for example, as described in connection with fig. 1.
Fig. 3a and 3b are diagrams illustrating an example Graphical User Interface (GUI) for presenting personalized medical assistance information to a patient or medical professional in response to identifying the patient based on one or more images. Fig. 3a shows a first example GUI 300a for presenting personalized medical assistance information. GUI 300a may include areas 302a, 304a, and/or 306a. The area 302a may be configured to display basic patient information such as patient name, date of birth, time of last visit by the patient, diagnosis, prescription, etc. The region 304a may be configured to display a desired position or pose of the patient for an upcoming medical procedure (e.g., MRI scan), and the region 306a may be configured to show a position or pose (e.g., correct and/or incorrect position or pose) the patient previously assumed during a similar procedure and/or adjustments made by the patient in order to enter the desired position or pose. The position or pose and/or adjustment information may be presented in various formats including, for example, video or animation.
Fig. 3b shows a second example GUI 300b for presenting personalized medical assistance information as described herein. GUI 300b may include areas 302b, 304b, and/or 306b. The region 302b in fig. 3b may be configured to display basic patient information, similar to the region 302a in fig. 3a, and the region 304b may be configured to display a diagnostic or treatment history of the patient, such as a scanned image of the patient overlaid with a picture or model of the patient in a corresponding scan position or pose, as described herein. The service provider (e.g., doctor) may browse the presented information, for example, using the scroll bar 306b.
Fig. 4 is a flow chart illustrating a method 400 that may be implemented by the personalized health care system described herein (e.g., system 100 of fig. 1). For simplicity of explanation, the operations in method 400 are depicted and described herein in a particular order. However, it should be understood that these operations may occur in various orders and/or concurrently, and with other operations not presented and described herein. Moreover, not all illustrated acts may be required to implement a methodology as disclosed herein.
The method 400 may be initiated at 402 by a processing unit of the personalized health care system (e.g., the processing unit 110 of fig. 1 or the processing unit 200 of fig. 2). At 404, the processing unit may receive one or more images of the patient from a sensing device (e.g., the sensing device 106 of fig. 1). Such images may include camera photographs, thermal images, and/or other types of images depicting characteristics of the patient. At 406, the processing unit may analyze the received images and determine an identity of the patient based on at least one of the features extracted from the images. As described herein, the analysis and/or identification may be performed, for example, by matching at least one of the extracted features with known features of the patient stored in a feature database and/or by utilizing a neural network trained for visual recognition. Once the identity of the patient is determined, the processing unit can, at 408, retrieve medical records (e.g., image and/or non-image data) of the patient from one or more repositories (e.g., the repository 112 of fig. 1) and use these medical records to generate personalized medical assistance information (e.g., a medical profile) for the patient. At 410, personalized healthcare services may then be provided to the patient using the personalized medical assistance information, including, for example, positioning assistance, scanned-image review, medical history analysis, and the like.
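For illustration, the flow of the method 400 may be sketched as a single routine; the helper names (capture_images, extract_features, identify_patient, build_assistance_info) stand in for the components described above and are assumptions, not part of any actual API.

```python
def personalized_care_pipeline(sensing_device, repository, display):
    """Steps 404-410 of method 400 expressed as straight-line calls."""
    images = sensing_device.capture_images()                  # step 404
    features = extract_features(images)                       # step 406
    patient_id = identify_patient(features)                   # step 406
    if patient_id is None:
        return None  # no enrolled match; fall back to a manual workflow
    records = repository.query(patient_id)                    # step 408
    assistance = build_assistance_info(patient_id, records)   # step 408
    display.present(assistance)                               # step 410
    return assistance
```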
Although the present disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. Thus, the above description of example embodiments does not limit the present disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure. In addition, unless specifically stated otherwise, discussions utilizing terms such as "dividing," "analyzing," "determining," "enabling," "identifying," "modifying," or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulate and transform data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data represented as physical quantities within the computer system memories or other such information storage, transmission or display devices.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (10)

1. A system for providing personalized health care services, comprising:
one or more repositories configured to store an electronic medical record of a patient, the electronic medical record including image data associated with a medical procedure performed or to be performed for the patient; and
a processing unit configured to:
receiving one or more images of the patient;
extracting one or more features representing physiological characteristics of the patient from the one or more images;
determining an identity of the patient based on at least one of the extracted features;
retrieving the image data from the one or more repositories in response to determining the identity of the patient; and
generating, based on the image data retrieved from the one or more repositories, personalized medical assistance information related to the patient, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure.
2. The system of claim 1, further comprising a display device configured to present the personalized medical assistance information, wherein the presentation comprises a visual depiction of the position or pose of the patient for the medical procedure.
3. The system of claim 1, wherein the processing unit configured to determine the identity of the patient based on at least one of the extracted features comprises: the processing unit is configured to match the at least one of the extracted features with known features of the patient stored in a feature database.
4. The system of claim 1, wherein the processing unit is configured to determine the identity of the patient based on at least one of the extracted features using a neural network trained for visual recognition.
5. The system of claim 1, wherein the personalized medical assistance information includes instructions on how to adjust to achieve a desired position or pose for the medical procedure.
6. The system of claim 1, wherein the image data includes a depiction of an incorrect position or pose of the medical procedure and the personalized medical assistance information includes instructions on how to avoid the incorrect position or pose.
7. The system of claim 1, wherein the image data comprises one or more scanned images of the patient related to the medical procedure, each of the one or more scanned images being associated with a scanned position or pose of the patient, and the processing unit is further configured to:
aligning each of the one or more scanned images of the patient with a picture of the patient depicting the patient in a position or pose substantially similar to the scanning position or pose associated with the scanned image;
generating visual representations of respective pairs of aligned scanned images and pictures of the patient by overlapping at least the pictures of the patient with the scanned images; and
including the visual representation in the personalized medical assistance information.
8. The system of claim 1, wherein the parameter associated with the medical procedure comprises a medical imaging parameter associated with the medical procedure or an operating parameter of a medical device.
9. A method for providing personalized health care services, the method comprising:
receiving one or more images of a patient;
extracting one or more features representing physiological characteristics of the patient from the one or more images;
determining an identity of the patient based on at least one of the extracted features;
in response to determining the identity of the patient, retrieving image data from one or more repositories, the image data being associated with a medical procedure performed or to be performed for the patient;
generating personalized medical assistance information related to the patient based on the image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure; and
presenting the personalized medical assistance information on a display device.
10. An apparatus for providing personalized health care services, comprising:
a processing unit configured to:
receiving one or more images of a patient;
extracting one or more features representing physiological characteristics of the patient from the one or more images;
determining an identity of the patient based on at least one of the extracted features;
in response to determining the identity of the patient, retrieving image data from one or more repositories, the image data relating to a medical procedure performed or to be performed for the patient;
generating personalized medical assistance information related to the patient based on the image data retrieved from the one or more repositories, wherein the personalized medical assistance information includes at least one parameter associated with the medical procedure or a position or pose of the patient for the medical procedure; and
presenting the personalized medical assistance information.
CN202011346991.0A 2019-11-27 2020-11-26 Systems, methods and apparatus for providing personalized health care Active CN112420143B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962941203P 2019-11-27 2019-11-27
US62/941,203 2019-11-27
US16/814,373 2020-03-10
US16/814,373 US11430564B2 (en) 2019-11-27 2020-03-10 Personalized patient positioning, verification and treatment

Publications (2)

Publication Number Publication Date
CN112420143A (en) 2021-02-26
CN112420143B (en) 2024-08-02

Family

Family ID: 74843556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011346991.0A Active CN112420143B (en) 2019-11-27 2020-11-26 Systems, methods and apparatus for providing personalized health care

Country Status (1)

Country Link
CN (1) CN112420143B (en)

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110046979A1 (en) * 2008-05-09 2011-02-24 Koninklijke Philips Electronics N.V. Method and system for personalized guideline-based therapy augmented by imaging information
US9050471B2 (en) * 2008-07-11 2015-06-09 Medtronic, Inc. Posture state display on medical device user interface
EP2318970A2 (en) * 2008-08-15 2011-05-11 Koninklijke Philips Electronics N.V. Model enhanced imaging
EP2430577A1 (en) * 2009-05-13 2012-03-21 Koninklijke Philips Electronics N.V. Method and system for imaging patients with a personal medical device
CN101889870B (en) * 2010-07-20 2013-09-04 江苏同庚电子科技有限公司 Radiotherapy positioning device
US8437506B2 (en) * 2010-09-07 2013-05-07 Microsoft Corporation System for fast, probabilistic skeletal tracking
US10033979B2 (en) * 2012-03-23 2018-07-24 Avigilon Fortress Corporation Video surveillance systems, devices and methods with improved 3D human pose and shape modeling
BR112015002570A2 (en) * 2012-08-09 2018-05-22 Koninklijke Philips Nv radiotherapeutic treatment system of a patient; method of controlling the patient's movements; and computer program product for use in a method of controlling patient movements.
CN108175503B (en) * 2013-03-13 2022-03-18 史赛克公司 System for arranging objects in an operating room in preparation for a surgical procedure
US9592095B2 (en) * 2013-05-16 2017-03-14 Intuitive Surgical Operations, Inc. Systems and methods for robotic medical system integration with external imaging
CN105658167B (en) * 2013-08-23 2018-05-04 斯瑞克欧洲控股I公司 Computer for being determined to the coordinate conversion for surgical navigational realizes technology
US20170351911A1 (en) * 2014-02-04 2017-12-07 Pointgrab Ltd. System and method for control of a device based on user identification
CN106659453B (en) * 2014-07-02 2020-05-26 柯惠有限合伙公司 System and method for segmenting lungs
CN107334487A (en) * 2017-08-11 2017-11-10 上海联影医疗科技有限公司 A kind of medical image system and its scan method
EP3462461A1 (en) * 2017-09-28 2019-04-03 Siemens Healthcare GmbH Personalized patient model
JP6560331B2 (en) * 2017-12-19 2019-08-14 オリンパス株式会社 Medical support system, file server, patient image data acquisition method, patient image data acquisition program
CN109247940A (en) * 2018-11-22 2019-01-22 上海联影医疗科技有限公司 The scan method and magnetic resonance system of magnetic resonance system

Also Published As

Publication number Publication date
CN112420143A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
US11430564B2 (en) Personalized patient positioning, verification and treatment
US11553874B2 (en) Dental image feature detection
US11676701B2 (en) Systems and methods for automated medical image analysis
CN110889005B (en) Searching medical reference images
US10049457B2 (en) Automated cephalometric analysis using machine learning
US20210073977A1 (en) Systems and methods for automated medical image annotation
KR101874348B1 (en) Method for facilitating dignosis of subject based on chest posteroanterior view thereof, and apparatus using the same
WO2021046241A1 (en) Automated medical image annotation and analysis
US20140341449A1 (en) Computer system and method for atlas-based consensual and consistent contouring of medical images
US9002083B2 (en) System, method, and software for optical device recognition association
JP2017504370A (en) Method and system for wound assessment and management
US11941738B2 (en) Systems and methods for personalized patient body modeling
US20230032103A1 (en) Systems and methods for automated healthcare services
US9454814B2 (en) PACS viewer and a method for identifying patient orientation
US11734849B2 (en) Estimating patient biographic data parameters
CN112420143B (en) Systems, methods and apparatus for providing personalized health care
JP2015221141A (en) Medical image diagnosis support apparatus, operation method for medical image diagnosis support apparatus, and medical image diagnosis support program
WO2014087409A1 (en) Computerized iridodiagnosis
WO2022194855A1 (en) Detecting abnormalities in an x-ray image
US20210192717A1 (en) Systems and methods for identifying atheromatous plaques in medical images
KR20190143657A (en) Apparatus and method for alignment of bone suppressed chest x-ray image
US20240212153A1 (en) Method for automatedly displaying and enhancing AI detected dental conditions
KR102531626B1 (en) Method and device for taking patient photos by artificial intelligence
KR102208358B1 (en) Lung Cancer Diagnosis System Using Artificial Intelligence, Server and Method for Diagnosing Lung Cancer
Bin Mushfiq et al. A comparison of deep learning U‐Net architectures for semantic segmentation on panoramic X-ray images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant