WO2023174780A1 - Anonymization of subject video stream - Google Patents

Anonymization of subject video stream

Info

Publication number
WO2023174780A1
Authority
WO
WIPO (PCT)
Prior art keywords
animated
subject
view
room
module
Application number
PCT/EP2023/055953
Other languages
English (en)
Inventor
Siva Chaitanya Chaduvula
Thomas Erik AMTHOR
Olga Starobinets
Christian Findeklee
Ekin KOKER
Robert Christiaan VAN OMMERING
Ranjith Naveen TELLIS
Sandeep Madhukar Dalal
Yuechen Qian
Original Assignee
Koninklijke Philips N.V.
Priority claimed from EP22166805.6A (published as EP4246445A1)
Application filed by Koninklijke Philips N.V.
Publication of WO2023174780A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/41 Medical

Definitions

  • the invention relates to medical imaging, in particular to the monitoring of medical imaging procedures.
  • imaging modalities such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Single Photon Emission Tomography enable detailed visualization of the anatomical structure of a subject.
  • a common feature of all of these imaging modalities is that these machines are complicated and require expertise and training to be able to use and/or repair them.
  • United States patent application publication US 2021/0353235 Al discloses an avatar engine having a controller to retrieve a user profile of a user, present the user an avatar having characteristics that correlate to the user profile, detect one or more responses of the user during a communication exchange between the user and the avatar, identify from the one or more responses a need to determine a medical status of the user, establish communications with a medical diagnostic system, receive physiological information associated with the user, submit the physiological information to the medical diagnostic system, receive from the medical diagnostic system a diagnostic analysis of the physiological information, and present the diagnostic analysis to at least one of the user and a medical agent of the user, wherein the user is presented the diagnostic analysis by way of the avatar.
  • the invention provides for a medical system, a method, and a computer program in the independent claims. Embodiments are given in the dependent claims.
  • for example, remotely located experts may assist in operating a magnetic resonance imaging (MRI) system or a computed tomography (CT) system.
  • a difficulty with such a strategy is that the privacy of subjects being imaged should be protected.
  • a medical system that images the subject and displays an animation of the subject relative to the subject support of the medical imaging device. This may be used for providing a detailed view of the use of the medical imaging device in a confidential manner.
  • the invention provides for a medical system that comprises a memory that stores machine-executable instructions, a body pose determination module, a character mapping module, and a room view generator module.
  • the medical system further comprises a camera system.
  • the camera system is configured for acquiring a video stream of a subject support of a medical imaging device within an examination room.
  • a video stream as used herein encompasses a sequence of images acquired by a camera system that are used to provide a video sequence or sequence of images.
  • the video stream comprises individual image frames.
  • the body pose determination module is configured to output a set of pose landmarks for at least one subject in response to receiving an individual frame of the individual image frames as input. In other words, when an individual frame is input into the body pose determination module, a set of pose landmarks for at least one subject is output in response.
  • the use of body pose determination modules is well known.
  • the pose landmarks could be joint locations.
  • the pose landmarks may also include facial landmarks such as the ears, nose, etc. These facial landmarks could help in quantifying or displaying patient characteristics or moods such as anxiety or stress.
  • one well-known example is the Kinect for the Xbox home entertainment system.
  • the MediaPipe library is able to provide high-fidelity body pose tracking, inferring 33 three-dimensional landmarks and background segmentation masks for the whole body from an RGB video.
  • the MediaPipe system, for example, is able to run on current mobile phones and on desktops within the Python programming language, as well as being incorporated into websites.
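  • as an illustrative, non-patent sketch, the snippet below shows how such a body pose determination module could be realized with the MediaPipe library named above; the function name extract_pose_landmarks and the tuple layout are assumptions for illustration only.

```python
# Minimal sketch, assuming MediaPipe's Python pose solution is installed.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_pose_landmarks(frame_bgr):
    """Return the 33 pose landmarks of one frame as (x, y, z, visibility) tuples."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        # MediaPipe expects RGB input; OpenCV frames arrive in BGR order.
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return []  # no subject detected in this frame
    return [(lm.x, lm.y, lm.z, lm.visibility)
            for lm in results.pose_landmarks.landmark]
```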
  • the character mapping module is configured for providing an animated subject view by mapping an animated subject model onto the set of pose landmarks.
  • the character mapping module may be animation software that has an animated figure, in this case the animated subject model, which has its position specified by the same set of pose landmarks.
  • the room view generator module is configured for generating an animated room view of the examination room and an animated view of the medical imaging device that is registered to the individual frame. This may for example be provided in a variety of ways.
  • the room view generator module may take the video stream in some cases and use it for generating the animated room view. In other cases, the animated room view may exist in advance and the individual frame is simply registered to this provided animated room view.
  • the room view generator module is further configured such that the animated view of the subject support is aligned with views of the subject support in the individual image frames.
  • This for example may be achieved in a variety of ways.
  • an image segmentation algorithm may be used to segment the individual image and determine the position of the subject support and adjust the animated room view accordingly.
  • the medical system further comprises a computational system.
  • Execution of the machine-executable instructions causes the computational system to repeatedly control the camera system to acquire the video stream of the examination room.
  • Execution of the machine-executable instructions further causes the computational system to repeatedly sequentially select the individual frame from the video stream.
  • execution of the machine-executable instructions causes the computational system to receive the set of pose landmarks for the at least one subject by inputting the individual frame into the body pose determination module.
  • execution of the machine-executable instructions causes the computational system to receive the animated room view of the medical imaging device from the room view generator module.
  • the animated room view is registered to the selected individual frame.
  • execution of the machine-executable instructions further causes the computational system to repeatedly generate at least one animated subject view on the animated room view by inputting the at least one set of pose landmarks into the character mapping module.
  • the character mapping module receives the set of pose landmarks and then uses this to properly generate the at least one animated subject view.
  • execution of the machine-executable instructions further causes the computational system to create an animated image frame by overlaying the at least one animated subject view on the animated room view.
  • execution of the machine-executable instructions further causes the computational system to repeatedly assemble the animated image frame into an anonymized video feed.
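  • as a schematic illustration (not from the patent text), the repeated steps above might be arranged as in the following sketch; the callables frames, body_pose_module, room_view_generator, character_mapper, and compose are stand-ins for the camera system and the modules described above, not a real API.

```python
def anonymize_stream(frames, body_pose_module, room_view_generator,
                     character_mapper, compose):
    """Schematic per-frame loop: landmarks -> subject views -> animated frames."""
    anonymized_feed = []
    for frame in frames:                                # selected individual frames
        landmark_sets = body_pose_module(frame)         # one landmark set per subject
        animated_frame = room_view_generator(frame)     # registered animated room view
        for landmarks in landmark_sets:
            subject_view = character_mapper(landmarks)  # animated subject view
            animated_frame = compose(animated_frame, subject_view)
        anonymized_feed.append(animated_frame)          # assemble the anonymized feed
    return anonymized_feed
```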
  • This embodiment may be beneficial because it provides a means of accurately depicting what is occurring within the examination room without compromising any personal details of the subject. This may, for example, improve the physical security of the examination room as well as provide a means of monitoring what is happening there and ensuring that the use of the medical imaging device is proceeding properly.
  • the anonymized video feed, for example, could be stored and maintained with any medical images obtained using the medical imaging device. This may be useful, for example, in providing more information on any obtained medical images, such as when the quality of the medical images is below a standard or a procedure needs to be repeated.
  • the memory further stores an activity classification module.
  • the activity classification module is configured to output an activity classification in response to receiving the set of pose landmarks for the at least one subject as input.
  • Execution of the machine-executable instructions further causes the computational system to repeatedly receive the activity classification in response to inputting the set of pose landmarks into the activity classification module and to append the activity classification to the anonymized video feed.
  • the activity classification module may be used to identify what sort of activity or stage the subject is in. In the case of a medical imaging device, it may be useful for monitoring the pose or the stage of preparation the subject is in before execution of any medical imaging scan. Appending the activity classification to the anonymized video feed may also have the advantage that the subject or people operating the camera system do not need to actively monitor the video feed in order to have a summary or report of what is occurring.
  • An activity detection module could be implemented in a variety of different ways.
  • one way would be a rule-based comparison of landmark positions; another would be to look at a time series and use an LSTM neural network that considers the poses as a function of time. For example, the neural network could take the various pose landmarks as input and treat them as a time series, which may provide very accurate information as to what is occurring at a particular time in the examination room.
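  • a hedged sketch of such an LSTM-based classifier is given below; the layer sizes, sequence length, and activity classes are illustrative assumptions, not values from the patent.

```python
# Sketch: classify an activity from a time series of pose landmarks (PyTorch).
import torch
import torch.nn as nn

class PoseActivityLSTM(nn.Module):
    def __init__(self, n_landmarks=33, n_classes=4, hidden=64):
        super().__init__()
        # each frame is flattened to n_landmarks * 3 coordinates (x, y, z)
        self.lstm = nn.LSTM(input_size=n_landmarks * 3,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, pose_sequence):        # shape (batch, time, n_landmarks * 3)
        _, (h_n, _) = self.lstm(pose_sequence)
        return self.head(h_n[-1])            # logits over activity classes

model = PoseActivityLSTM()
logits = model(torch.randn(1, 30, 33 * 3))   # e.g. a window of 30 frames
```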
  • the memory further stores an object detection convolutional neural network that is configured to output an object identifier and object location for one or more objects selected from an object library in the individual frame.
  • the object detection convolutional neural network may be any one of a number of standard neural network architectures that are used to identify and classify objects in an image. Examples of neural network architectures that may be useful may be an R-CNN neural network architecture or any one of the YOLO architectures.
  • the object detection convolutional neural network may be trained by providing images from a video feed that have been labeled with various objects from the object library. These labeled images may then be used for training the object detection convolutional neural network, for example, using a deep learning method.
  • Execution of the machine-executable instructions further causes the computational system to receive the object identifier and the object location if the one or more objects selected from the object library are detected in the individual frame. Execution of the machine-executable instructions further causes the computational system to overlay the one or more objects in the animated image frame by positioning them using the object location and the activity classification. This embodiment may be beneficial because it enables various objects to also be added to the anonymized video feed.
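  • as one possible instantiation, the sketch below uses a pretrained torchvision Faster R-CNN; in the setting described above the network would instead be trained on the labeled object-library frames, so the pretrained weights and the score threshold here are placeholders.

```python
# Sketch: detect objects and their locations in an individual frame (torchvision).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(frame, score_threshold=0.5):
    """frame: float tensor (3, H, W) scaled to [0, 1]; returns (label, box) pairs."""
    with torch.no_grad():
        output = model([frame])[0]           # dict with boxes, labels, scores
    keep = output["scores"] > score_threshold
    return list(zip(output["labels"][keep].tolist(),
                    output["boxes"][keep].tolist()))
```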
  • execution of the machine-executable instructions further causes the computational system to receive an activity sequence defining a sequence of allowed activity classifications. Execution of the machine-executable instructions further causes the computational system to iteratively step through the activity sequence to determine if the activity classification deviates from the sequence of allowed activity classifications. Execution of the machine-executable instructions further causes the computational system to append a warning signal to the anonymized video feed if the activity classification deviates from the sequence of allowed activity classifications.
  • This embodiment may be beneficial because it may provide for an automated means of informing the operator when the examination of the subject in the examination room is not proceeding as expected. This may also be useful in identifying when there are faults in using the medical imaging device.
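  • the deviation check described above could look like the following sketch; the activity names in the allowed sequence are invented for illustration.

```python
ALLOWED_SEQUENCE = ["enter_room", "sit_on_table", "lie_on_table", "scan", "exit"]

def check_activities(observed_activities):
    """Yield (activity, warning) pairs while stepping through the allowed sequence."""
    step = 0
    for activity in observed_activities:
        if step < len(ALLOWED_SEQUENCE) and activity == ALLOWED_SEQUENCE[step]:
            step += 1                        # expected next stage reached
            yield activity, False
        elif activity in ALLOWED_SEQUENCE[:step + 1]:
            yield activity, False            # still within a permitted stage
        else:
            yield activity, True             # deviation: append a warning signal
```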
  • the medical system further comprises a remote command center.
  • the remote command center is configured for receiving the anonymized video feed via a network connection.
  • the remote command center comprises a display configured for rendering the received anonymized video feed.
  • the display is configured for receiving and displaying multiple anonymized video feeds. This may be beneficial because the remote command center may be used to monitor the operation of many different medical imaging devices at possibly very many different locations.
  • the display is further configured for modifying the display in the anonymized video feed if a warning signal is received. This may be useful because this may be used to automatically draw the attention of the operator within the remote command center to the anonymized video feed if there is a problem.
  • the remote command center is configured for sending commands or instructions to the computational system in response to receiving the anonymized video feed.
  • these may be commands or instructions which are provided to an operator of the medical imaging device.
  • this may be commands which are sent to a processor or computational system which is controlling the medical imaging device.
  • This may enable an operator in the remote command center to correct or assist in the procedure of imaging the subject with the medical imaging device. This may for example be useful in providing expertise which cannot be provided at every location economically.
  • the computational system is configured for receiving a subject support location signal from the medical imaging device.
  • This could be a position or coordinate of the subject support.
  • this may be used for identifying where or how far the subject is into the medical imaging device.
  • the room view generator is configured to receive the subject support location signal as input and adjust the animated room view in response. For example, the position of the subject support can be moved such that it reflects the subject support location signal. Execution of the machine-executable instructions further causes the computational system to receive the subject support location signal from the medical imaging device. Execution of the machine-executable instructions further causes the computational system to receive an updated animated room view in response to inputting the subject support location signal into the room view generator.
  • the creation of the animated image frame is performed by overlaying the at least one animated subject view on the updated animated room view. This may, for example, be very useful in a situation where the subject has been placed on the subject support and is in the process of being loaded, or has been loaded, into the medical imaging device. This provides a means of making the animated room view more realistic, so that it better reflects the actual configuration and use of the medical imaging device with the subject.
  • the anonymized video feed is assembled in real time.
  • anonymizing the video feed in real time may encompass providing the anonymized video feed within a predetermined delay, for example within several seconds, one second, or several milliseconds. This may be beneficial because it may provide for an effective means of conveying the actual situation in the examination room.
  • the medical system comprises the medical imaging device.
  • the medical imaging device is a magnetic resonance imaging system.
  • the medical imaging device is a magnetic resonance guided high- intensity focused ultrasound system.
  • the medical imaging device is a computed tomography system. In another embodiment the medical imaging device is a digital X-ray system.
  • the medical imaging device is a digital fluoroscope.
  • the medical imaging device is a positron emission tomography system.
  • the medical imaging device is a single photon emission computed tomography system.
  • the invention provides for a method of medical imaging.
  • the method comprises controlling the camera system to acquire a video stream of an examination room.
  • the camera system is configured for acquiring the video stream of a subject support of a medical imaging device within the examination room.
  • the video stream comprises individual image frames.
  • the method further comprises sequentially selecting the individual frame from the video stream.
  • the method further comprises for each selected individual frame, receiving a set of pose landmarks for the at least one subject by inputting the individual frame into a body pose determination module.
  • the body pose determination module is configured to output a set of pose landmarks for at least one subject in response to receiving an individual frame of the individual image frames as input.
  • the method further comprises for each selected individual frame receiving an animated room view of the medical imaging device from a room view generator module.
  • the room view generator module is further configured such that an animated view of the subject support is aligned with views of the subject support in the individual image frames.
  • the method further comprises for each selected individual frame generating at least one animated subject view on the animated room view by inputting the at least one set of pose landmarks into a character mapping module.
  • the character mapping module is configured for providing an animated subject view by mapping an animated subject model onto the set of pose landmarks.
  • the room view generator module is configured for generating an animated room view of the examination room and an animated view of the medical imaging device that is registered to the individual frame.
  • the method further comprises for each selected individual frame, creating an animated image frame by overlaying the at least one animated subject view on the animated room view.
  • the method further comprises for each selected individual frame, assembling the animated image frame into an anonymized video feed.
  • the method further comprises storing the anonymized video feed.
  • the method further comprises training a machine learning module with the anonymized video feed.
  • instead of the original video stream, the anonymized video feed may be used. This may be beneficial because, for example, it can be provided without the need of compromising the privacy of the subject.
  • feedback can be collected from users by replaying the anonymized video feed and the corresponding predictions from various artificial intelligence models used during this process.
  • the users can correct the predictions.
  • Such corrections may be used for retraining the artificial intelligence models.
  • retrospective use of anonymized video feeds can be used for continuous learning of artificial intelligence models.
  • the method further comprises showing the anonymized video feed to an operator during training to operate the medical device.
  • realistic scenarios can be stored and then shown to operators for training purposes.
  • the method further comprises displaying the anonymized video feed in real time at a remote location.
  • for example, there may be a control center or centralized location where anonymized video feeds from many different devices may be displayed together.
  • the method further comprises showing the anonymized video feed to a subject prior to the use of the medical imaging device.
  • This embodiment may be beneficial because realistic situations can be shown to a subject to explain to them what may happen during their own procedure.
  • the use of the anonymized video feed may be beneficial because then the identity of people who have previously had the same procedure are not compromised.
  • the invention provides for a computer program that comprises machine-executable instructions.
  • the computer program may for example be stored on a non-transitory storage medium.
  • the computer program comprises the machine-executable instructions as well as a body pose determination module, a character mapping module, and a room view generator module all for execution by a computational system.
  • Execution of the machine-executable instructions causes the computational system to control a camera system to acquire a video stream of an examination room.
  • the camera system is configured for acquiring the video stream of a subject support of a medical imaging device.
  • the medical imaging device may be within an examination room.
  • the video stream comprises individual image frames.
  • Execution of the machine-executable instructions further causes the computational system to sequentially select the individual frame from the video stream. Execution of the machine-executable instructions further causes the computational system, for each selected individual frame, to receive a set of pose landmarks for the at least one subject by inputting the individual frame into the body pose determination module.
  • the body pose determination module is configured to output a set of pose landmarks for at least one subject in response to receiving the individual frame of the individual image frames as input.
  • Execution of the machine-executable instructions further causes the computational system, for each selected individual frame, to receive the animated room view of the medical imaging device from the room view generator module.
  • the room view generator module is further configured such that an animated view of the subject support is aligned with views of the subject support in the individual image frames. Execution of the machine-executable instructions further causes the computational system, for each selected individual frame, to generate at least one animated subject view by inputting the at least one set of pose landmarks into the character mapping module.
  • the character mapping module is configured for providing an animated subject view by mapping an animated subject model onto the set of pose landmarks.
  • the room view generator module is configured for generating an animated room view of the examination room and an animated view of the medical imaging device that is registered to the individual frame.
  • Execution of the machine-executable instructions further causes the computational system, for each selected individual frame, to create an animated image frame by overlaying the at least one animated subject view on the animated room view. Execution of the machine-executable instructions further causes the computational system, for each selected individual frame, to assemble the animated image frame into an anonymized video feed.
  • aspects of the present invention may be embodied as an apparatus, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer executable code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a ‘computer-readable storage medium’ as used herein encompasses any tangible storage medium which may store instructions which are executable by a processor or computational system of a computing device.
  • the computer-readable storage medium may be referred to as a computer-readable non-transitory storage medium.
  • the computer-readable storage medium may also be referred to as a tangible computer readable medium.
  • a computer-readable storage medium may also be able to store data which is able to be accessed by the computational system of the computing device.
  • Examples of computer-readable storage media include, but are not limited to: a floppy disk, a magnetic hard disk drive, a solid state hard disk, flash memory, a USB thumb drive, Random Access Memory (RAM), Read Only Memory (ROM), an optical disk, a magneto-optical disk, and the register file of the computational system.
  • Examples of optical disks include Compact Disks (CD) and Digital Versatile Disks (DVD), for example CD-ROM, CD-RW, CD-R, DVD-ROM, DVD-RW, or DVD-R disks.
  • the term computer readable-storage medium also refers to various types of recording media capable of being accessed by the computer device via a network or communication link.
  • data may be retrieved over a modem, over the internet, or over a local area network.
  • Computer executable code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire line, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • a computer readable signal medium may include a propagated data signal with computer executable code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer memory or ‘memory’ is an example of a computer-readable storage medium.
  • Computer memory is any memory which is directly accessible to a computational system.
  • ‘Computer storage’ or ‘storage’ is a further example of a computer-readable storage medium.
  • Computer storage is any non-volatile computer-readable storage medium. In some embodiments computer storage may also be computer memory or vice versa.
  • computational system encompasses an electronic component which is able to execute a program or machine executable instruction or computer executable code.
  • References to "a computational system" should be interpreted as possibly encompassing more than one computational system or processing core.
  • the computational system may for instance be a multi-core processor.
  • a computational system may also refer to a collection of computational systems within a single computer system or distributed amongst multiple computer systems.
  • the term computational system should also be interpreted to possibly refer to a collection or network of computing devices each comprising a processor or computational systems.
  • the machine executable code or instructions may be executed by multiple computational systems or processors that may be within the same computing device or which may even be distributed across multiple computing devices.
  • Machine executable instructions or computer executable code may comprise instructions or a program which causes a processor or other computational system to perform an aspect of the present invention.
  • Computer executable code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages and compiled into machine executable instructions.
  • the computer executable code may be in the form of a high-level language or in a pre-compiled form and be used in conjunction with an interpreter which generates the machine executable instructions on the fly.
  • the machine executable instructions or computer executable code may be in the form of programming for programmable logic gate arrays.
  • the computer executable code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may be provided to a computational system of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the computational system of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • machine executable instructions or computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the machine executable instructions or computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • a ‘user interface’ as used herein is an interface which allows a user or operator to interact with a computer or computer system.
  • a ‘user interface’ may also be referred to as a ‘human interface device.’
  • a user interface may provide information or data to the operator and/or receive information or data from the operator.
  • a user interface may enable input from an operator to be received by the computer and may provide output to the user from the computer.
  • the user interface may allow an operator to control or manipulate a computer and the interface may allow the computer to indicate the effects of the operator's control or manipulation.
  • the display of data or information on a display or a graphical user interface is an example of providing information to an operator.
  • the receiving of data through a keyboard, mouse, trackball, touchpad, pointing stick, graphics tablet, joystick, gamepad, webcam, headset, pedals, wired glove, remote control, and accelerometer are all examples of user interface components which enable the receiving of information or data from an operator.
  • a ‘hardware interface’ as used herein encompasses an interface which enables the computational system of a computer system to interact with and/or control an external computing device and/or apparatus.
  • a hardware interface may allow a computational system to send control signals or instructions to an external computing device and/or apparatus.
  • a hardware interface may also enable a computational system to exchange data with an external computing device and/or apparatus. Examples of a hardware interface include, but are not limited to: a universal serial bus, IEEE 1394 port, parallel port, IEEE 1284 port, serial port, RS-232 port, IEEE-488 port, Bluetooth connection, Wireless local area network connection, TCP/IP connection, Ethernet connection, control voltage interface, MIDI interface, analog input interface, and digital input interface.
  • a ‘display’ or ‘display device’ as used herein encompasses an output device or a user interface adapted for displaying images or data.
  • a display may output visual, audio, and/or tactile data. Examples of a display include, but are not limited to: a computer monitor, a television screen, a touch screen, a tactile electronic display, a Braille screen, a cathode ray tube (CRT), a storage tube, a bi-stable display, electronic paper, a vector display, a flat panel display, a vacuum fluorescent display (VF), light-emitting diode (LED) displays, an electroluminescent display (ELD), plasma display panels (PDP), a liquid crystal display (LCD), organic light-emitting diode (OLED) displays, a projector, and a head-mounted display.
  • Medical imaging data is defined herein as being recorded measurements made by a tomographic medical imaging system descriptive of a subject.
  • the medical imaging data may be reconstructed into a medical image.
  • a medical image is defined herein as being the reconstructed two- or three-dimensional visualization of anatomic data contained within the medical imaging data. This visualization can be performed using a computer.
  • K-space data is defined herein as being the recorded measurements of radio frequency signals emitted by atomic spins using the antenna of a Magnetic resonance apparatus during a magnetic resonance imaging scan.
  • Magnetic resonance data is an example of tomographic medical image data.
  • a Magnetic Resonance Imaging (MRI) image or MR image is defined herein as being the reconstructed two- or three-dimensional visualization of anatomic data contained within the magnetic resonance imaging data. This visualization can be performed using a computer.
  • Fig. 1 illustrates an example of a medical system;
  • Fig. 2 shows a flow chart which illustrates a method of using the medical system of Fig. 1;
  • Fig. 3 illustrates an example of multiple anonymized video feeds;
  • Fig. 4 illustrates a further example of a medical system;
  • Fig. 5 illustrates how activity detection can be performed;
  • Fig. 6 illustrates the construction of an animated subject view; and
  • Fig. 7 illustrates the construction of an animated image frame.
  • Fig. 1 illustrates an example of a medical system 100.
  • the medical system 100 is located within an examination room 101.
  • a magnetic resonance imaging system 102 is used as an example of a medical imaging device.
  • other types of medical imaging devices such as computed tomography systems, ultrasound systems or other medical scanners could be substituted in place of the magnetic resonance imaging system 102.
  • the magnetic resonance imaging system 102 comprises a magnet 104.
  • the magnet 104 is a superconducting cylindrical type magnet with a bore 106 through it.
  • the use of different types of magnets is also possible; for instance, it is also possible to use a split cylindrical magnet or a so-called open magnet.
  • a split cylindrical magnet is similar to a standard cylindrical magnet, except that the cryostat has been split into two sections to allow access to the iso-plane of the magnet.
  • An open magnet has two magnet sections, one above the other with a space in-between that is large enough to receive a subject; the arrangement of the two sections is similar to that of a Helmholtz coil. Open magnets are popular, because the subject is less confined. Inside the cryostat of the cylindrical magnet there is a collection of superconducting coils.
  • within the bore 106 of the magnet there is an imaging zone 108 where the magnetic field is strong and uniform enough to perform magnetic resonance imaging.
  • a field of view 109 is shown within the imaging zone 108.
  • a subject support 120 supports a portion of a subject 118 in the imaging zone 108.
  • the magnetic resonance data is typically acquired for the field of view 109.
  • the magnetic field gradient coils 110 are intended to be representative. Typically magnetic field gradient coils 110 contain three separate sets of coils for spatially encoding in three orthogonal spatial directions.
  • a magnetic field gradient power supply supplies current to the magnetic field gradient coils. The current supplied to the magnetic field gradient coils 110 is controlled as a function of time and may be ramped or pulsed.
  • adjacent to the imaging zone 108 is a radio-frequency coil 114 for manipulating the orientations of magnetic spins within the imaging zone 108 and for receiving radio transmissions from spins also within the imaging zone 108.
  • the radio frequency antenna may contain multiple coil elements.
  • the radio frequency antenna may also be referred to as a channel or antenna.
  • the radio-frequency coil 114 is connected to a radio frequency transceiver 116.
  • the radio-frequency coil 114 and radio frequency transceiver 116 may be replaced by separate transmit and receive coils and a separate transmitter and receiver. It is understood that the radio-frequency coil 114 and the radio frequency transceiver 116 are representative.
  • the radio-frequency coil 114 is intended to also represent a dedicated transmit antenna and a dedicated receive antenna.
  • the transceiver 116 may also represent a separate transmitter and receivers.
  • the radio-frequency coil 114 may also have multiple receive/transmit elements and the radio frequency transceiver 116 may have multiple receive/transmit channels.
  • the transceiver 116 and the gradient controller 112 are shown as being connected to the hardware interface 134 of the computer 130. Both of these components, as well as others such as the subject support supplying positional data, may supply the sensor data 126.
  • the medical system 100 additionally comprises a camera system 122 that takes images within the examination room 101 such that at least the subject support 120 is imaged.
  • the magnetic resonance imaging system 102 may or may not be part of the medical system 100.
  • the medical system 100 further comprises a computer 130 that has a computational system 132.
  • the computer 130 is intended to represent one or more computer systems that may be located at the same location or networked together.
  • the computer 130 is shown as comprising a computational system 132 that may represent one or more computational cores.
  • the computational system 132 is shown as being in connection with a hardware interface 134 that enables the computational system 132 to control and operate the magnetic resonance imaging system 102 or possibly another type of medical imaging device.
  • the computational system 132 is further shown as being in connection with a network interface 136 and a memory 138.
  • the network interface 136 enables the computational system 132 to communicate with other computer and computational systems.
  • the memory 138 is intended to represent various types of memory or storage devices that may be in communication with the computational system 132.
  • the memory 138 may be a non-transitory storage medium.
  • the memory 138 is shown as containing machine-executable instructions 140.
  • the machine-executable instructions 140 enable the computational system 132 to perform various control and computational tasks. This may include such things as data and image processing.
  • the memory 138 is further shown as containing a video stream 142 that has been acquired by the computational system 132 controlling the camera system 122.
  • the memory 138 is further shown as containing an individual frame 144 that has been extracted from the video stream 142. Individual frames 144 may be extracted in sequence from the video stream 142.
  • the memory 138 is further shown as containing a body pose determination module 146.
  • the memory 138 is further shown as containing a set of pose landmarks 148 that have been received from the body pose determination module 146 after inputting the individual frame 144 into it.
  • the memory 138 is further shown as containing a room view generator module 150.
  • the room view generator module 150 is configured to output an animated room view 152.
  • the animated room view 152 may be provided in a variety of ways. For example, there could be a sensor or detector which detects the position of the subject support 120 and, depending upon the location of the subject support, a particular animated room view 152 is retrieved.
  • the animated room view 152 may be generated from images of the video stream 142, where there are no subjects 118 present.
  • an image may be taken of the subject support 120 when there are no subjects present and the image may be put through an automated algorithm to turn the image into an animated image.
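  • one such automated algorithm, offered here only as a plausible example (the patent does not name one), is OpenCV's non-photorealistic stylization filter, which renders a photo of the empty room in a cartoon-like style; the file names and parameter values below are illustrative.

```python
import cv2

# Hypothetical reference photo of the examination room without any subjects.
empty_room = cv2.imread("examination_room_empty.png")
# Edge-preserving stylization turns the photo into an animated-looking image.
animated_room_view = cv2.stylization(empty_room, sigma_s=60, sigma_r=0.45)
cv2.imwrite("animated_room_view.png", animated_room_view)
```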
  • the memory 138 is further shown as containing a character mapping module 154.
  • the character mapping module 154 may for example be animation software that has been configured to receive the set of pose landmarks 148. Upon receiving the set of pose landmarks 148 the character mapping module 154 generates an animated subject view 156.
  • the animated subject view 156 is shown as being stored in the memory 138.
  • the memory 138 is further shown as containing an animated image frame 158 that is generated by combining the animated subject view 156 with the animated room view 152.
  • the memory 138 is then shown as containing an anonymized video feed 160 that has been generated by combining the animated image frames 158 as they are individually generated.
  • the memory 138 is further shown as optionally containing an activity classification module 162.
  • the activity classification module 162 can, for example, take the set of pose landmarks 148 and use this to generate an activity classification. This may, for example, be done using a neural network, or the relative positions of the set of pose landmarks 148 may be used to classify the activity within the examination room 101. The latter could be a simple rule-based way of generating the activity classification.
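  • a rule-based classifier of this kind might compare relative landmark positions as sketched below; the landmark indices follow MediaPipe's pose topology (11/12 shoulders, 23/24 hips) and the threshold is an illustrative assumption.

```python
def classify_activity(landmarks):
    """landmarks: list of (x, y, z, visibility) in image-normalized coordinates."""
    shoulder_y = (landmarks[11][1] + landmarks[12][1]) / 2
    hip_y = (landmarks[23][1] + landmarks[24][1]) / 2
    if abs(shoulder_y - hip_y) < 0.1:
        return "lying_on_table"      # torso roughly horizontal in the image
    return "standing_or_sitting"     # torso roughly vertical in the image
```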
  • the memory 138 is further shown as containing an object detection convolutional neural network 164.
  • the object detection convolutional neural network 164 is configured to output an object identifier 166 and an object location 168.
  • This may be used for identifying various objects, for example in a predetermined library of objects, such as common items that might be found in an examination room like a wheelchair, a contrast agent injector or other equipment.
  • the activity classification or objects detected by the object detection convolutional neural network 164 may also be appended or added to the animated image frame 158.
  • the memory 138 is further shown as optionally containing an activity sequence 170.
  • the activity sequence 170, for example, may be a list of the various activity classifications expected within the examination room 101 during a particular procedure or imaging technique.
  • the activity classification generated by the activity classification module 162 can, for example, be compared against this activity sequence 170, and it can be detected if the activity or sequence of activities is not what is expected.
  • if a deviation is detected, an activity sequence warning signal 172 can be generated. This, for example, could be an optical or audio warning provided to an operator of the magnetic resonance imaging system 102, as well as additional information appended to the anonymized video feed 160.
  • the memory 138 is shown as containing pulse sequence commands 174.
  • the pulse sequence commands are commands or instructions which the computational system 132 can use to control the magnetic resonance imaging system 102 to acquire k-space data 176 that is descriptive of the field of view 109.
  • the memory 138 is shown as containing k-space data 176 that has been acquired by controlling the magnetic resonance imaging system 102 with the pulse sequence commands 174.
  • the memory 138 is further shown as containing a magnetic resonance image 178 that has been reconstructed from the k-space data 176.
  • Fig. 1 also shows a region 180 that represents an optional remote command center.
  • the remote command center 180 comprises a remote computer 182 that has a remote computational system 184, a remote network interface 186, a remote memory 188, and a remote user interface 190.
  • the remote user interface 190 comprises a remote display.
  • the network interface 136 and the remote network interface 186 form a network connection 196 that enables the computer 130 and the remote computer 182 to exchange data and information.
  • Within the remote memory 188 there are remote machine-executable instructions 194 that enable the remote computational system 184 to perform various data processing and computational tasks.
  • the remote memory 188 is shown as further containing a copy of the anonymized video feed 160. This may be displayed on the remote display 192. This, for example, could enable technical experts or medical experts at a remote location to monitor the function and operation of the medical system 100. Because the anonymized video feed 160 has had the personal information of the subject 118 removed, there are no longer any privacy concerns.
  • Fig. 2 shows a flowchart which illustrates one method of operating the medical system 100 of Fig. 1.
  • in step 200, the camera system 122 is controlled to acquire the video stream 142 of the examination room 101.
  • in step 202, the individual frame 144 is sequentially selected from the video stream 142.
  • in step 204, for each selected individual frame 144, the set of pose landmarks 148 is received by inputting the individual frame 144 into the body pose determination module 146.
  • in step 206, for each selected individual frame 144, the animated room view 152 is received from the room view generator module 150.
  • in step 208, for each selected individual frame 144, at least one animated subject view 156 is received by inputting the at least one set of pose landmarks 148 into the character mapping module 154.
  • in step 210, for each selected individual frame 144, an animated image frame 158 is created by overlaying the at least one animated subject view 156 on the animated room view 152.
  • in step 212, the anonymized video feed 160 is assembled from the animated image frames 158.
  • Fig. 3 illustrates an example of multiple anonymized video feeds 300. These may, for example, be displayed on the remote display 192.
  • One of the multiple anonymized video feeds 300 is the anonymized video feed 160.
  • the animated subject view 156 superimposed on the animated room view 152 is visible.
  • the anonymized video feed 160 additionally comprises a number of communication controls 302 that may enable the operator of the remote command center 180 to communicate with the operator of the medical system 100 via the network connection 196.
  • the communication controls 302 may also enable communication via other communication systems such as the telephone system.
  • the anonymized video feed 160 may in its user interface also contain other information such as location and status information 304.
  • ROCC (Radiology Operations Command Center) has a feature of relaying the video feed from a camera mounted in the local tech's scanner/control room (with a view of the examination room 101).
  • This video feed enables the remotely located expert user to get a better understanding of the situation that the local tech is in and thereby helps in troubleshooting.
  • this video feed may contain the faces and bodies of different individuals, including the patient and staff. There is a need to protect the privacy of the individuals in this video. This privacy preservation is especially important for patients, as some imaging exams may require patients to expose certain body parts while being scanned.
  • In ROCC, an expert user monitors multiple imaging devices or scanners (MR, CT etc.) at the same time. In order to help the expert user identify the scanner that they need to pay attention to, ROCC provides alerts (warning signals 408) to the exam card. In turn, these alerts are based on the clinical and operational situation of each imaging scanner.
  • the operational situation is primarily derived from information captured on the console screen.
  • the video feed from the camera contains rich operational information, such as whether the patient is on or off the table or in a wheelchair.
  • the local tech may close the blinds on the scanner room window facing the control room or stop the video feed entirely. These practices inhibit ROCC from capturing this rich operational information from the video feed.
  • in this invention disclosure, we provide a system that enables ROCC to derive the required operational information from the video feed while preserving the privacy of all the individuals in the video feed.
  • ROCC may create a level of abstraction between restricted, privacy-protected information/views and experts that may not have complete organizational privileges.
  • Examples may include one or more of the following features:
  • Module 1: a module to extract the images from the video feed from the camera
  • Module 2: a module to detect the human pose and its associated activity (optionally: detect other movable objects and their position in space)
  • Module 3: a module to derive animated/movable objects based on the pose detected in Module 2
  • Module 4: a module to create animated video based on the animated/movable objects from Module 3
  • Module 5: a module to present operational and clinical insights from the pose and activity detected in Module 2
  • Fig. 4 illustrates a further example of a medical system 400. Each module in this figure is described in detail in this section.
  • the medical system 400 in Fig. 4 is illustrated in a functional manner.
  • a first module represents the capturing of the live video feed 400.
  • a second module 402 represents pose estimation and, optionally, activity detection.
  • the animated room view 152 is a view of the examination room 101 without any subjects in the room.
  • superimposed on the image 152 are activity classifications 404 for the two subjects. One is lying on the table and the other is standing.
  • a database 406 which may be used for a variety of functions or may represent multiple databases.
  • the database 406 is used in conjunction with the character mapping module 154 or animation engine.
  • the database 406 can be used to select virtual characters which may be used for providing characters for the animated subject view 156.
  • the set of pose landmarks 148 is input to this animation engine 154, as well as the detected locations of any objects in the video feed. This then results in animated characters which can be superimposed on the animated room view 152 to provide an anonymized video feed 160.
  • the anonymized video feed 160 shows two subjects 156 within the animated room view 152.
  • the database 406 may also be used to provide the animated room view 152.
  • the position of the subject support 120 may be used to recall a prerecorded image which may be used as the animated room view 152.
  • the database 406 may also contain various rules or artificial intelligence modules which can be used to provide alerts 408 which may be displayed on the anonymized video feed 160.
  • Module 1: a module for parsing the live video feed from the camera into images. This module converts the live video feed from the camera into individual images.
  • Module 2: a module to detect the human pose and its associated activity, and other movable objects. This module has a stack of algorithms, such as object detection and object tracking, to identify the objects in a given image. Each detected object is further classified as a person or a non-living object.
  • a pose estimation algorithm is run on the detected persons in the image and keypoints such as joints for each detected person are identified. These keypoints are further fed into a classification algorithm to classify the pose into certain human actions such as sitting, lying on table, standing etc.
  • the module may optionally also detect other movable objects of specific interest, such as the position of a movable patient table, the position of MRI coils, the position of a contrast injector, or other devices.
  • Fig. 5 illustrates how activity detection can be performed using keypoints or pose landmarks 148.
• The first image 144 is an individual frame and represents a raw image. This may then be used in some form of pose estimation. In examples, this may be done using the body pose determination module 146.
• The body pose determination module 146 then outputs a set of pose landmarks 148, which are shown represented on the individual frame 144.
• The set of pose landmarks 148 is equivalent to keypoint identification.
• Once the sets of pose landmarks 148 have been determined, they may be input into a pose classification or activity classification module 162.
• Either an artificial intelligence module may examine the evolution in time of the various coordinates for subjects or, for example, a rule-based system may be used.
• The activity classification module 162 then outputs a number of activity classifications 404.
• This module can use off-the-shelf AI models such as MediaPipe, OpenPose, or AlphaPose for estimating the pose.
• The computational resource available to run these AI models is a Surface Pro tablet. The selection of these algorithms is constrained for two reasons: 1) the results from this stack of algorithms need to be produced in real time, and 2) ROCC tablets offer limited computational power. A minimal sketch using one of these models follows.
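• For illustration only, a minimal sketch of this keypoint extraction and activity classification using the MediaPipe Pose model named above, with the lightest model setting in view of the limited tablet hardware. The classify_activity() heuristic and its thresholds are assumptions for illustration, not the system's actual classifier.

```python
# Sketch of Module 2: keypoint extraction plus a simple rule-based
# activity classifier. The thresholds in classify_activity() are
# illustrative assumptions, not the system's actual rules.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
L = mp_pose.PoseLandmark  # enum of the 33 MediaPipe pose landmarks

def classify_activity(lm) -> str:
    """Classify a pose as lying/standing/sitting from landmark geometry."""
    shoulder_x = (lm[L.LEFT_SHOULDER].x + lm[L.RIGHT_SHOULDER].x) / 2
    shoulder_y = (lm[L.LEFT_SHOULDER].y + lm[L.RIGHT_SHOULDER].y) / 2
    hip_x = (lm[L.LEFT_HIP].x + lm[L.RIGHT_HIP].x) / 2
    hip_y = (lm[L.LEFT_HIP].y + lm[L.RIGHT_HIP].y) / 2
    knee_y = (lm[L.LEFT_KNEE].y + lm[L.RIGHT_KNEE].y) / 2
    # Torso closer to horizontal than vertical -> lying on the table.
    if abs(shoulder_y - hip_y) < abs(shoulder_x - hip_x):
        return "lying on table"
    # Knees clearly below the hips (image y grows downward) -> standing.
    if knee_y - hip_y > 0.15:
        return "standing"
    return "sitting"

# model_complexity=0 selects the lightest model for limited hardware.
with mp_pose.Pose(static_image_mode=False, model_complexity=0) as pose:
    capture = cv2.VideoCapture(0)  # stand-in for the live video feed
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:  # a person was detected in this frame
            print(classify_activity(results.pose_landmarks.landmark))
    capture.release()
```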
• A module to derive animated objects based on the pose/activity detected in Module 2. It is assumed that a set of animated characters is available in the ROCC database. Before the beginning of an exam, animated characters are selected to represent the different persons, including staff and patient. It is possible to hide patient characteristics, including the gender and BMI of the patient, by choosing the virtual character appropriately. This choice can be made by the patient, by the local tech, or as a random selection by the software. These animated characters provide the surface representation of the animated object, while the keypoints, pose, and activity provide its skeletal representation. All of this information is rigged into an animated object for a given image. There are multiple standard software packages, such as Mixamo, that provide this functionality of superimposing the skeletal pose onto a virtual character. A data-structure sketch of such an animated object follows the next item.
• Movable objects, such as a patient table, MRI coils, or other equipment, can also be represented by 3D models from model libraries.
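• As a hedged sketch of Module 3's data flow only: selecting a privacy-preserving virtual character and bundling it with the detected skeleton. The AnimatedObject record, the character names, and select_character() are illustrative assumptions; the actual rigging of the skeleton onto the character surface would be delegated to a tool such as Mixamo.

```python
# Sketch of Module 3: pick a virtual character that hides patient
# characteristics and bundle it with the skeletal pose. All names and
# fields here are illustrative assumptions, not the system's schema.
import random
from dataclasses import dataclass

# Characters assumed to exist in the ROCC database, chosen so that
# the patient's gender and BMI are not recoverable from the animation.
CHARACTER_LIBRARY = ["neutral_figure_a", "neutral_figure_b", "robot_figure"]

@dataclass
class AnimatedObject:
    character: str   # surface representation
    landmarks: list  # skeletal representation: (x, y) keypoints
    activity: str    # e.g. "lying on table", "standing"

def select_character(choice=None) -> str:
    """Patient or local tech may choose; otherwise pick at random."""
    if choice in CHARACTER_LIBRARY:
        return choice
    return random.choice(CHARACTER_LIBRARY)

def build_animated_object(landmarks, activity, choice=None) -> AnimatedObject:
    # Rigging the skeleton onto the character surface would be done by
    # external software (e.g. Mixamo); here we only bundle the inputs.
    return AnimatedObject(select_character(choice), landmarks, activity)
```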
• Fig. 6 illustrates the construction of an animated subject view 156.
• The character mapping module 154, or animation module, takes one of these virtual characters 600 and a skeletal pose (a set of pose landmarks) and uses these to generate the animated subject view 156. In this case, it represents an animated person.
• A module to create an animated video based on the animated objects from Module 3. This module receives a continuous feed of animated objects from Module 3. These objects are overlaid onto a background image to generate a video feed. Note that this video feed does not contain the real person's face or body, and the animated video feed can thereby be relayed to the expert user. In addition to the overlay of the animated person, other detected movable objects can also be overlaid on the image at their respective locations in space. In this way, the observer would immediately see that, e.g., a flexible MRI coil has been placed on the patient's body. A compositing sketch is given after the next item.
• The position of the MRI or CT table/couch can lead to unrealistic representations of the animated patient figure. If a patient is moved into the bore, the animated figure would be moving while the patient table in the background image remains static. To solve this problem, the patient table could also be represented in a different way: instead of using just one reference image for the background, a series of images with different patient table positions is recorded. The location of the patient table detected by the object detection algorithm in Module 2 is then used to select the background image that best matches the real patient table position. In this way, a patient being moved in and out of the MRI bore would be displayed as an animated figure on a (stepwise) “moving” patient table.
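• For illustration, a minimal sketch of both steps: choosing the prerecorded background whose table position best matches the detected one, and alpha-blending a rendered character onto it. The file names, the millimetre scale, and the overlay placement are assumptions; a real implementation would come from the animation engine.

```python
# Sketch of Module 4: pick the prerecorded empty-room image closest to
# the detected table position, then composite an RGBA character onto it.
# File names and the position scale are illustrative assumptions.
import cv2
import numpy as np

# Prerecorded empty-room images, keyed by patient table position (mm).
BACKGROUNDS = {0: "room_table_0.png", 250: "room_table_250.png",
               500: "room_table_500.png", 750: "room_table_750.png"}

def select_background(table_position_mm):
    """Pick the reference image best matching the real table position."""
    best = min(BACKGROUNDS, key=lambda p: abs(p - table_position_mm))
    return cv2.imread(BACKGROUNDS[best])

def overlay_character(background, character_rgba, x, y):
    """Alpha-blend a rendered RGBA character onto the background at (x, y)."""
    h, w = character_rgba.shape[:2]
    roi = background[y:y + h, x:x + w].astype(np.float32)
    rgb = character_rgba[:, :, :3].astype(np.float32)
    alpha = character_rgba[:, :, 3:4].astype(np.float32) / 255.0
    background[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
    return background

# Usage: character.png is assumed to be a rendered character with alpha.
frame = overlay_character(select_background(260),
                          cv2.imread("character.png", cv2.IMREAD_UNCHANGED),
                          x=320, y=180)
```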
• A video corresponding to the patient's exam can help them learn key instructions, such as how to lie on the patient table and breathing instructions, and thereby reduce anxiety about what to expect during the exam.
• These animated videos can also be used for staff education, especially for novice technologists.
• Fig. 7 illustrates the construction of an animated image frame 158.
• The database 406 is used to select an animated room view 152 based on a particular subject support 120 position.
• The animated room view 152 is a real-life image that was acquired when no subjects or people were in the examination room.
• Various views of the animated subject view 156 are provided, such as was illustrated in Fig. 6.
• The animated subject view 156 is then superimposed on the animated room view 152 to provide an animated image frame 158.
• These frames 158, as they are generated, can be combined into an anonymized video feed 160. A sketch of this frame-to-video assembly follows.
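• A minimal sketch of this assembly step, writing the frames 158 to a file with OpenCV; streaming the feed to the remote expert's UI instead of a file is equally possible. The frame source and file name are assumptions.

```python
# Sketch: assemble animated image frames into an anonymized video feed.
import cv2

def assemble_video(frames, path="anonymized_feed.mp4", fps=15.0):
    """frames: iterable of equally sized BGR images (the frames 158)."""
    writer = None
    for frame in frames:
        if writer is None:  # initialize once the frame size is known
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (w, h))
        writer.write(frame)
    if writer is not None:
        writer.release()
```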
• A module to capture operational and clinical information using the pose and activity detected in Module 2. The pose and activity from Module 2 reveal novel operational and clinical insights about the patient and the status of the imaging exam. This information can be used to create alerts on the ROCC UI in various contexts (see Figure 6 for more details). A few examples are discussed below, with an illustrative alert sketch after this item: a. Pose can be used in estimating the patient's mobility. For instance, whether the patient arrived for the imaging exam in a wheelchair, on a gurney, or walked in is a good indicator of patient mobility. This detection of a wheelchair or gurney can be performed by using object detection algorithms such as Detectron2, YOLO, etc. An alert can be triggered to the expert tech so that he/she can work with the transport department at the hospital while the local tech is busy scanning the patient. b.
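• As an illustration of such an alert rule only: the label names and notify_expert() below are assumptions; the detected labels themselves would come from an object detector such as Detectron2 or YOLO, as named above.

```python
# Sketch of a Module 5 alert rule: if a wheelchair or gurney is detected,
# flag reduced patient mobility so the expert tech can involve hospital
# transport. Label names and notify_expert() are illustrative assumptions.
MOBILITY_AIDS = {"wheelchair", "gurney"}

def notify_expert(message: str) -> None:
    print(f"ROCC alert: {message}")  # stand-in for the real ROCC UI alert

def check_mobility(detected_labels) -> None:
    aids = MOBILITY_AIDS & set(detected_labels)
    if aids:
        notify_expert(f"Patient arrived with {', '.join(sorted(aids))}; "
                      "consider coordinating with the transport department.")

# Example: labels as they might come from a YOLO/Detectron2 pass.
check_mobility({"person", "wheelchair", "patient table"})
```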
• A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
• For each selected individual frame, create an animated image frame by overlaying the at least one animated subject view on the animated room view.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Disclosed is a medical system (100, 400) comprising a memory (138) storing machine-executable instructions (140), a body pose determination module (146), a character mapping module (154), and a room view generator module (150). The medical system further comprises a camera system (122) configured to acquire a video feed (142) of a subject support (120) of a medical imaging device (102) within an examination room. Execution of the machine-executable instructions causes the computational system to: repeatedly control (200) the camera system to acquire the video feed; sequentially select (202) the individual frame from the video feed; receive (204) the set of pose landmarks for the one or more subjects by inputting the individual frame into the body pose determination module; receive (206) the animated room view of the medical imaging device from the room view generator module; generate (208) at least one animated subject view (156) on the animated room view by inputting the one or more sets of pose landmarks into the character mapping module; create (210) an animated image frame (158) by overlaying the at least one animated subject view on the animated room view; and assemble (212) the animated image frame into an anonymized video feed (160).
PCT/EP2023/055953 2022-03-18 2023-03-09 Anonymization of subject video feeds WO2023174780A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263321364P 2022-03-18 2022-03-18
US63/321,364 2022-03-18
EP22166805.6 2022-04-05
EP22166805.6A EP4246445A1 (fr) Anonymization of subject video feeds

Publications (1)

Publication Number Publication Date
WO2023174780A1 (fr) 2023-09-21

Family

ID=85477797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/055953 2022-03-18 2023-03-09 Anonymization of subject video feeds WO2023174780A1 (fr)

Country Status (1)

Country Link
WO (1) WO2023174780A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210353235A1 (en) 2008-11-14 2021-11-18 At&T Intellectual Property I, L.P. System and Method for Performing a Diagnostic Analysis of Physiological Information
WO2020204645A1 * 2019-04-05 2020-10-08 고려대학교산학협력단 Ultrasound imaging device equipped with an ultrasound examination position guidance function

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JAMES COTTON R: "PosePipe: Open-Source Human Pose Estimation Pipeline for Clinical Research", arXiv.org, Cornell University Library, 16 March 2022 (2022-03-16), XP091192063 *
MAXIM MAXIMOV ET AL: "CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks", arXiv.org, Cornell University Library, 19 May 2020 (2020-05-19), XP081672771 *
YUVAL NIRKIN ET AL: "On Face Segmentation, Face Swapping, and Face Perception", arXiv.org, Cornell University Library, 22 April 2017 (2017-04-22), XP080764687, DOI: 10.1109/FG.2018.00024 *

Similar Documents

Publication Publication Date Title
JP7418358B2 (ja) Position feedback indicator for medical imaging
CN111989710A (zh) Automatic slice selection in medical imaging
US20210156940A1 (en) Automatic artifact detection and pulse sequence modification in magnetic resonance imaging
US20230067146A1 (en) Automated scout scan examination
US11925418B2 (en) Methods for multi-modal bioimaging data integration and visualization
EP3776466B1 (fr) Automated detection of abnormal subject configuration for medical imaging
EP3861531B1 (fr) Generation of pseudo-radiographic images from optical images
EP4246445A1 (fr) Anonymization of subject video feeds
WO2021094123A1 (fr) Subject pose classification using joint location coordinates
WO2023174780A1 (fr) Anonymization of subject video feeds
US20230368386A1 (en) Anonymous fingerprinting of medical images
EP3785227B1 (fr) Automated subject monitoring for medical imaging
JP7449302B2 (ja) Camera-assisted subject support configuration
EP3824814A1 (fr) Evaluation of measured tomographic data
EP3893245A1 (fr) Automated label data correction for tomographic medical imaging systems
EP4365907A1 (fr) Maintaining a teleconference connection with a minimum resolution and/or frame rate
CN117981004A (zh) Method and system for data acquisition parameter recommendation and technologist training
Oyama System integration of VR-simulated surgical support system

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23709232

Country of ref document: EP

Kind code of ref document: A1