CN114093451A - Method and system for managing user data by PACS (Picture archiving and communication System) - Google Patents

Method and system for managing user data by PACS (Picture archiving and communication System)

Info

Publication number
CN114093451A
CN114093451A (application number CN202111452315.6A)
Authority
CN
China
Prior art keywords
image
data
medical
medical image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111452315.6A
Other languages
Chinese (zh)
Inventor
袁本祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaping Xiangsheng Shanghai Medical Technology Co ltd
Original Assignee
Huaping Xiangsheng Shanghai Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaping Xiangsheng Shanghai Medical Technology Co ltd filed Critical Huaping Xiangsheng Shanghai Medical Technology Co ltd
Priority to CN202111452315.6A
Publication of CN114093451A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention provides a method and a system for managing user data by a PACS system, which comprises the following steps: receiving at least one of image, audio, text information and bookmark information; storing the information input by an input device, managing the image and audio data generated for each surgical procedure, storing bookmark information in the corresponding edit data, and including the edit data of the bookmark information in the corresponding matching data; after receiving the history data created and stored in the image processing apparatus, updating the history data according to the diagnosis procedure, the surgical procedure, the post-operative treatment procedure, and the treatment procedure after discharge to generate integrated history data, which may be stored in a database; the PACS system receives medical images related to patients input by the medical diagnostic imaging equipment and stores the medical images in an electronic folder; and displaying the received image and video data on an output device.

Description

Method and system for managing user data by PACS (Picture archiving and communication System)
Technical Field
The invention relates to the technical field of data management systems, in particular to a method and a system for managing user data by a PACS (picture archiving and communication system).
Background
Generally, when a patient goes to a hospital, the patient's personal information, the names of previously suffered diseases, medical history, and the like are filled in and submitted in written form according to the requirements of the hospital. After the document is filled in and received, the hospital prepares a chart based on its contents, and if the patient has previously received treatment at the hospital, the hospital prepares for the treatment, for example by searching for past charts.
However, with written medical records as described above, searching for charts or creating new charts by referring to patient information takes a great deal of time, a problem that has not been solved effectively. In particular, these documents must be retained for a certain period, which raises the further problems of storage space and storage cost.
To solve this problem, techniques for sharing various types of patient information over networks such as the Internet, a LAN, or an intranet have been developed. Medical record processing performed in this manner is called the Electronic Medical Record (EMR). However, the information currently shared within a hospital consists only of simple contents such as the user's personal information, treatment items, treatment reservation information, treatment history information, and hospitalization information. Since the course of a patient's treatment cannot be understood in detail from such simple information alone, it cannot serve as a countermeasure against medical accidents. When a medical accident occurs in this environment, both the hospital and the patient often spend more time and money than necessary to resolve the problem, and in some cases the hospital's business may be paralyzed.
The information most important for treating or caring for patients in a hospital is not limited to the simple items described above. It also includes the nursing diary recorded by nurses who observe the patient's symptoms, information on the patient's diseases and personal condition, and information generated for each patient's symptoms, such as counseling contents, the doctor's treatment method and results, and test results. In particular, for operations with a high incidence of accidents, hospitals explain preventive measures and risks to patients and obtain their consent, yet after a medical accident has occurred it is difficult to prove that this information was actually conveyed, which makes the resulting disputes hard to resolve.
Therefore, in order to resolve disputes caused by medical accidents more easily, detailed data are required on the basic course of treatment of the patient's disease, the content of conversations with the patient, and the treatment process.
In addition, a medical environment such as a hospital or clinic includes clinical information systems such as a Hospital Information System (HIS) and a Radiology Information System (RIS), and a storage system such as a Picture Archiving and Communication System (PACS). For example, the stored information may include patient history, imaging data, test results, diagnostic information, administrative information, and/or planning information. This information may be centrally stored or partitioned among multiple locations. A healthcare practitioner may wish to access patient information or other information at various points in the healthcare workflow. For example, during surgery, medical personnel may access patient information stored in a medical information system. A typical application of a PACS system is to provide one or more medical images for review by a medical professional. The configuration and operation of a PACS are complex, and the training and preparation involved in using a PACS may vary from user to user. Therefore, systems and methods that facilitate the operation of PACS are highly desirable.
Disclosure of Invention
The invention aims to provide a method for managing user data by a PACS system, which specifically comprises the following steps:
step 1: receiving at least one of image, audio, text information, and bookmark information;
step 2: storing the information input by the input device 100, managing the image and audio data generated for each surgical procedure, storing bookmark information in the corresponding edit data, and including the edit data of the bookmark information in the corresponding matching data;
step 3: after receiving the history data created and stored in the image processing apparatus 200, updating the history data according to the diagnosis procedure, the surgical procedure, the post-operative treatment procedure, and the treatment procedure after discharge to generate integrated history data, which may be stored in a database;
step 4: the PACS system 600 receives a patient-related medical image input by a medical diagnostic imaging device and stores it in an electronic folder, wherein the PACS system 600 includes a medical image receiving and preprocessing module 610, an NLP and feature extraction unit 612, and an operation unit 613;
step 5: displaying the received image and video data on the output device 400.
Preferably, step 3 further comprises:
creating an electronic folder in a database related to a patient, and storing the integrated historical data in the electronic folder;
medical images of a patient are received and the medical images are stored in an electronic folder.
Preferably, step 4 further comprises:
the medical image receiving and preprocessing module 610 converts the raw medical image data into the DICOM standard format or attaches a DICOM header, enhances features within the image, adjusts the image of the patient's anatomy, and changes the visual appearance or representation of the image data, including any one or more of: flipping an image, magnifying an image, translating across an image, changing the window and/or level in a grayscale representation of the image data, and changing the contrast and/or brightness of an image.
Preferably, step 4 further comprises: the NLP and feature extraction unit 612 processes text contained in the medical image report to identify specific keywords and identifies features in the medical image, using image segmentation to identify the image features.
Preferably, step 4 further comprises: the NLP and feature extraction unit 612 extracts the measurements that have been performed and adds them to the image as part of the pixel data or as a presentation state reference.
Preferably, step 4 further comprises:
generating a new connection image so that the medical image and the medical image report are connected and can be viewed simultaneously without a separate search, the connection image being inserted as part of the medical image.
Preferably, step 4 further comprises:
acquiring a medical image report, and identifying a specific keyword, namely a first feature, included in a medical image report text;
identifying a feature, i.e. a second feature, of a medical image associated with the medical image report;
the first feature and the second feature are compared for a match, and in response to the first feature and the second feature comprising a match, a new connection image is created between the medical image report and the at least one medical image and inserted into the medical image report.
The invention also provides a system for managing user data by a PACS system, which specifically comprises:
an input device 100, which receives at least one of image, audio, text information, and bookmark information;
an image processing apparatus 200, which stores the information input by the input device 100, manages the image and audio data generated for each surgical procedure, stores bookmark information in the corresponding edit data, and includes the edit data of the bookmark information in the corresponding matching data;
a server 300, which, after receiving the history data created and stored in the image processing apparatus 200, updates the history data according to the diagnosis procedure, the surgical procedure, the post-operative treatment procedure, and the treatment procedure after discharge, generates integrated history data, and may store the integrated history data in a database;
an output device 400, which displays the received image and video data;
a portable terminal 500 carried by a doctor or a nurse, which is used to input and check information on the treatment and care of inpatients;
a PACS system 600 that receives patient-related medical images input by the medical diagnostic imaging apparatus 700 and stores them in an electronic folder;
the medical diagnostic imaging apparatus 700 acquires one or more images of a patient.
Preferably, the database further comprises an electronic folder, and the integrated historical data is stored in the electronic folder after the electronic folder related to the patient is created;
medical images of a patient are received and the medical images are stored in an electronic folder.
Preferably, the PACS system 600 includes a medical image receiving and preprocessing module 610, an NLP and feature extraction unit 612, and an operation unit 613.
Preferably, the medical image receiving and preprocessing module 610 converts the raw medical image data into the DICOM standard format or attaches a DICOM header, enhances features within the image, adjusts the image of the patient's anatomy, and changes the visual appearance or representation of the image data, including any one or more of: flipping an image, magnifying an image, translating across an image, changing the window and/or level in a grayscale representation of the image data, and changing the contrast and/or brightness of an image.
Preferably, the NLP and feature extraction unit 612 processes text contained in the medical image report to identify specific keywords and identifies features in the medical image, using image segmentation to identify the image features.
Preferably, the NLP and feature extraction unit 612 extracts the measurements that have been performed and adds them to the image as part of the pixel data or as a presentation state reference.
Preferably, the operation unit 613 generates a new connection image so that the medical image and the medical image report are connected and can be viewed simultaneously without a separate search, the connection image being inserted as part of the medical image.
Preferably, the operation unit 613 performs the following functions:
acquiring a medical image report, and identifying a specific keyword, namely a first feature, included in a medical image report text;
identifying a feature, i.e. a second feature, of a medical image associated with the medical image report;
comparing the first feature and the second feature for a match, and in response to the first feature and the second feature comprising a match, creating a new connection image between the medical image report and the at least one medical image and inserting the connection image into the medical image report.
Compared with the prior art, the invention has the following advantages: through the connection image, medical images and analysis information can be viewed at a glance without changing the existing medical image storage information system.
The invention adds a bookmark function to the parts where important contents are recorded during consultation with the corresponding patient (for example, parts describing an operation or the corresponding disease), so that the information can be retrieved quickly.
Drawings
Fig. 1 is a schematic structural diagram of a system for managing user data by a PACS system in an embodiment of the present application.
FIG. 2 is a method for linking a medical image report to a medical image to form a new linked image according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be further described below.
Fig. 1 is a block diagram showing an example of a system in which a PACS according to the present invention manages user data. As shown in Fig. 1, the system includes:
an input device 100, which receives at least one of image, audio, text information, and bookmark information;
the image processing apparatus 200 stores the input images and audio and manages history data generated for each surgical procedure;
the server 300 performs integrated management on the history data;
an output device 400;
a portable terminal 500 carried by a doctor or nurse;
a PACS (picture archiving and communication system) 600 receives patient-related medical images input by the medical diagnostic imaging apparatus 700 and stores them in an electronic folder.
The medical diagnostic imaging apparatus 700 acquires one or more images of a patient.
The input device 100 is configured to include: at least three cameras 110, namely a boom camera 111, an indoor camera 112 and a video endoscope 113; a vital signs monitor 114; a conversion unit 120; a control signal input unit 130; a transmission unit 140; and a support unit 150.
One controller is assigned to each of the three video signal sources, namely the boom camera 111, the indoor camera 112, and the video endoscope 113. The controller continuously evaluates the video signal of its source with respect to predetermined characteristics, such as brightness and sharpness, in order to adjust the exposure time, aperture setting and focus setting of that source. In the case of laparoscopic surgery, when the video endoscope 113 is inserted into the patient, the patient's organs can be photographed by an imaging device located at the rear of the endoscope.
The conversion unit 120 encodes the surgical images captured by the cameras into a digital file. The encoding may use the WMV (Windows Media Video) method to convert the surgical images into high-definition, low-volume files. So that the transmission unit 140 can transmit the captured surgical images to the image processing apparatus 200 in real time, the conversion unit 120 may encode the surgical images input from the cameras 110 in real time without a separate start command signal.
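As an illustration of this real-time encoding step, the sketch below launches an always-on encoder with ffmpeg. It is only a minimal example under stated assumptions: the patent does not name a tool, and the capture device path, codec choice and bit rate here are illustrative, not part of the disclosure.

```python
# Minimal sketch (not from the patent): continuously encode a camera feed into a
# high-definition, low-volume WMV file with ffmpeg, started without a separate
# start command, mirroring how the conversion unit 120 is described. The device
# path and the ffmpeg options are illustrative assumptions.
import subprocess

def start_realtime_encoding(device: str = "/dev/video0",
                            output_path: str = "surgery_cam1.wmv") -> subprocess.Popen:
    """Launch an ffmpeg process that encodes the live camera signal as it arrives."""
    cmd = [
        "ffmpeg",
        "-f", "v4l2",            # Linux video capture input (assumption)
        "-i", device,            # video source, e.g. one of the cameras 110
        "-c:v", "wmv2",          # WMV-family encoder, as suggested in the text
        "-b:v", "4M",            # target bit rate: HD quality at modest file size
        "-y", output_path,
    ]
    # The process runs until terminated, mirroring "encode in real time
    # without a separate start command signal".
    return subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# encoder = start_realtime_encoding()
# ... later: encoder.terminate() when the transmission unit 140 stops streaming
```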
The control signal input unit 130 receives signals related to the recording of the surgical image. It may be a voice recognizer (not shown) that recognizes the surgeon's voice and derives from it a signal related to the recording of the surgical image, or it may be a touch screen (not shown) through which the surgeon inputs such a signal by touch. The signals related to the recording of the surgical image include a signal to start recording and a signal to stop recording. When a voice recognizer is used as the control signal input unit 130, the surgeon can control the recording of the surgical image by voice alone, so the surgical image can be recorded more efficiently.
The transmission unit 140 transmits the surgical image captured by the camera 110 to the service-providing image processing apparatus 200. By transmitting the operation image in real time using a wired/wireless network, a person who does not participate in the operation can directly check the operation image using the output device 400, and users such as doctors, nurses, and patients can view and confirm the operation image.
The support unit 150 supports the camera 110 and controls its position or angle according to a control signal transmitted from the controller. It may take the form of a crane, it may be mounted on the camera 110 and automatically adjust the position or angle of the camera 110 according to a control signal, or the position or angle of the camera 110 may be adjusted manually by the surgeon.
The image processing apparatus 200 receives the video and audio input through the camera 110 and other components of the input device 100, converts the received video and audio into data, synchronously generates video from them, and generates and stores matching data including the generated video.
The image processing apparatus 200 includes:
a data processing unit 210;
a bookmark information processing unit 220;
a transmission unit 230;
the data processing unit 210 receives and processes video, image, and voice data input by the output unit 140.
The transmission unit 230 transmits video, image, and voice data to the server 300, and is configured to enable short-range wired/wireless communication. Here, the personal information of the patient is matched with the corresponding video and transmitted to the server 300.
When it receives input edit data including text information (the patient's personal information, medical information, etc.), the data processing unit 210 stores it as matching data corresponding to the video, image, and voice data. The matching data and the edit data are combined to generate and store history data.
Another way to reduce video storage requirements is to eliminate portions of the video that have little clinical significance. For example, in certain surgical procedures, only a few minutes out of an hour of video are considered sufficient for archiving. Storage requirements can be greatly reduced by using a simple, automated method to identify the important portions of a video. One way to achieve this is to make video clips using bookmarks placed by the surgeon as reference points. The bookmark information processing unit 220 analyzes and processes bookmark information and transmits it to the data processing unit 210; the data processing unit 210 then stores the bookmark information transmitted from the bookmark information processing unit 220 in the corresponding edit data, and includes the edit data of the bookmark information in the corresponding matching data.
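A minimal sketch of this bookmark-driven clipping is given below. The Bookmark structure, the before/after padding values and the use of ffmpeg are assumptions for illustration only; the patent only describes cutting clips around surgeon-placed reference points.

```python
# Sketch only: cut a short clip around each surgeon-placed bookmark so that just
# the clinically important portions of the video need to be archived. The
# Bookmark structure and the before/after padding values are assumptions, not
# taken from the patent text.
import subprocess
from dataclasses import dataclass

@dataclass
class Bookmark:
    time_index_s: float   # position in the source video where the surgeon pressed "bookmark"
    label: str            # free-text note, e.g. "gallbladder exposed"

def extract_clip(source: str, bm: Bookmark, out_path: str,
                 seconds_before: float = 10.0, seconds_after: float = 30.0) -> None:
    """Extract [time_index - before, time_index + after] from the source video."""
    start = max(bm.time_index_s - seconds_before, 0.0)
    duration = seconds_before + seconds_after
    subprocess.run([
        "ffmpeg", "-ss", f"{start:.2f}", "-i", source,
        "-t", f"{duration:.2f}", "-c", "copy",   # stream copy: no re-encoding
        "-y", out_path,
    ], check=True)

# for i, bm in enumerate(bookmarks):
#     extract_clip("surgery_cam1.wmv", bm, f"clip_{i:03d}.wmv")
```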
Also, if the doctor needs to review the video included in the stored history data, for example for a consultation with the patient, a data request message is input (such as a command issued by pressing the Enter key on the keyboard or clicking a mouse button), and in response the transmission unit 230 requests the corresponding video data from the history data accumulated in the server 300. When video data is requested in this way, the transmission unit 230 receives the video data from the server 300 and displays it on the output device 400.
In addition, the generated video is output through a screen output device 400 such as an LCD. Text information (e.g., treatment details, the current state of the patient, etc.) and bookmark information are entered through a keyboard or a mouse, and when edit data including at least one of the text information and bookmark information is received, history data including the edit data is generated and stored together with the matching data. The text information may also be output via the output device 400.
After receiving the history data created and stored in the image processing apparatus 200, the server 300 updates the history data according to the diagnosis procedure, the surgical procedure, the post-operative treatment procedure, and the treatment procedure after discharge. Integrated history data is then generated and may be stored in a database.
The server 300 includes:
a request input unit 310;
a search unit 320;
a database 330, comprising an electronic folder 331 for storing user data. A patient-related electronic folder is created, and location information of the electronic folder is saved, the location information indicating the locations of the electronic folders created for the patient's past medical records. Once the electronic folder is created, the patient's medical images may be stored in it. In addition to creating new electronic files for new medical purposes, as described above, in some embodiments existing or previously created electronic files may be updated for an existing medical purpose of the patient. For example, this may involve accessing an existing electronic file as well as creating new electronic folders identified by one or more names and/or medical purposes.
When video information or patient information is input from the terminal 500 or the request input unit 310, the image processing apparatus analyzes and processes the information, such as the video information or the patient's personal information included in the patient information, and transmits it to the database 330. Based on the processing result, the database 330 checks whether history data for the patient has already been stored; if it has, the data is retrieved and updated, and the updated history data is saved.
The search unit 320 searches the history data of the patient stored in the database 330 in response to a request of the image processing apparatus 200, and makes a transmission request for streaming the searched history data so that the contained video can be transmitted to the image processing apparatus 200.
The server 300 may store, receive, transmit and/or generate imaging datasets and related data, such as, but not limited to, DICOM data. For example, when a new follow-up imaging dataset is acquired for a particular medical purpose of a patient, the results may be analyzed using the immediately preceding imaging dataset or all previous imaging datasets created for that medical purpose. Thus, each history for a patient-specific medical purpose is saved together in the same electronic file within the electronic folder 331 created for the patient. The electronic folder 331 includes the various saved data analyzed above, the results of each analysis, and the name of each imaging dataset created for the particular medical purpose of the patient associated with the electronic file 331. In some embodiments, the electronic file 331 may also include a time, date, and/or identification, such as, but not limited to, a unique identification number, for each imaging dataset created for a particular medical purpose of the patient. The electronic file 331 further includes location information linked to each imaging dataset associated with the electronic file 331 created for the particular medical purpose of the patient. The location information indicates the location of the respective imaging dataset so that it can be retrieved for analysis, and may be any suitable information in any suitable format that enables it to function as described herein, such as, but not limited to, metadata. The electronic file 331 may also include other information related to the particular medical purpose of the patient with which it is associated.
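The sketch below illustrates the kind of bookkeeping just described for the electronic folder 331: one record per imaging dataset with its name, acquisition time, a unique identifier and location information. The class and field names are assumptions chosen for illustration, not names used by the patent.

```python
# Illustrative data structure for the contents the electronic folder 331 is
# described as holding. All identifiers here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List
import uuid

@dataclass
class ImagingDatasetRef:
    name: str
    acquired_at: datetime
    dataset_uid: str = field(default_factory=lambda: str(uuid.uuid4()))
    location: str = ""          # e.g. a PACS path or URL used to retrieve the dataset

@dataclass
class ElectronicFile:
    patient_id: str
    medical_purpose: str        # the particular medical purpose this file was created for
    datasets: List[ImagingDatasetRef] = field(default_factory=list)
    analysis_results: List[str] = field(default_factory=list)

    def add_dataset(self, ref: ImagingDatasetRef) -> None:
        """Each follow-up acquisition for the same purpose is kept in the same file."""
        self.datasets.append(ref)

# folder_331 = {"chest-ct-follow-up": ElectronicFile("P0001", "chest CT follow-up")}
```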
The terminal 500 is a personal terminal such as a personal computer (PC) or a portable terminal such as a personal digital assistant (PDA), and is mainly used to input and check information on the treatment and care of inpatients. Login information is received from the user's electronic device. The user's electronic device may be a computer, a mobile phone or another terminal capable of accessing the service providing server through the Internet. The user may be a doctor, a trainee, a medical student, a nurse or a patient who has undergone surgery. Member grades may be classified according to the position or membership of the user. If the login information matches the member information, the surgical image playlist is transmitted to the user's electronic device. By performing this login process, a third party can be prevented from viewing or editing the surgical images stored in the service providing server without authorization.
Each component of Fig. 1 described above is configured for data communication through a wired/wireless communication network; where required, transmitting data through a closed communication network inside the hospital can fundamentally prevent external hacking.
The data processing unit 210 receives and processes the video, image, and voice data input via the transmission unit 140. Because video data is large and cannot be archived in a PACS system for long periods, a set of video processing rules is defined to automatically process the video data and store it in the DICOM format. Multiple video processing rules may be applied to any given video, image or patient record. Video processing rules may also be applied at different times during information processing. For example, some rules may apply to video files, while other rules convert video into DICOM-formatted patient records or transmit them to other systems (e.g., a PACS system). The rules may also optionally manage how the captured video data is compressed in size and route predetermined video data to various long-term storage destinations.
The specific processing rules are as follows: (1) Transcoding settings of the video: specific video transcoding settings are defined based on the characteristics of the received video or of video contained in patient records received from other sources. A single source video may be transcoded into multiple reduced-size videos by decoding the source video into individual frames and re-encoding them into video data with lower resolution, lower bit rate and/or lower frame rate. Different videos require different resolutions depending on their application scenario: for example, a doctor preparing a medical conference presentation of a new surgical technique may want a high-resolution video to show exceptional detail, while a surgeon performing a surgical review at home may require a lower resolution and a lower bit rate. The transcoding settings allow the resolution, frame rate and bit rate to be changed as required in order to change the size of the video.
(2) Encoding settings of the video: that is, the use of different video compression techniques. MPEG2 is a popular video encoding method found in consumer and commercial applications and is used in many input devices 100. Newer encoding techniques such as H.264 can reduce storage requirements by up to 25% of the original MPEG2 data size without visually significant changes in video quality. The data processing unit contains a plurality of video encoders that can be combined with the resolution, frame rate and bit rate settings in different rules.
(3) Bookmark settings: this rule extracts video clips from the corresponding video around bookmarks, using configurable settings for the number of seconds before the bookmarked time index and the number of seconds after it to be used at extraction time. The configurable "before" and "after" time settings are both part of the rule. Start-time and end-time bookmarks for the low-resolution and high-resolution video may be set simultaneously; in this way, the user is not burdened with the problems associated with viewing high-resolution video, but can still extract high-resolution clips using the low-resolution version of the video, which is easier to view and navigate remotely. The rule may also allow video clip information for significant events to be generated from multiple reference points in multiple input devices 100, all of which capture patient information from different angles simultaneously during the same procedure.
(4) Transmission settings: these rules define which resolution of a video is transmitted to the PACS. For example, a rule may specify that only the 480p version of a video is transmitted to the PACS. Patient, physician, and/or procedure information may also be included in the decision-making process when determining which medical records to select for transfer or other operations. For example, a particular surgeon may wish to save all of their automatically generated video clips and images from gallbladder surgery to a PACS system.
(5) Retention settings: these determine how long a video of a particular resolution, frame rate or bit rate will be stored on the server. Different rules may be created for different time ranges. For example, a rule may be set on the server to delete all 1080p high-resolution video immediately after transcoding to the lower resolutions 480p and 240p. A second rule may be created on the same server to delete all 480p videos 60 days after they were last viewed. Patient, physician, and/or procedure information may also be included in the decision-making process when determining which medical records to select for deletion or other operations.
Multiple rules may be created that may be used in combination to improve their usefulness and functionality.
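A minimal sketch of how rules (1) to (5) above could be represented and combined is given below. All names and the rule contents are assumptions for illustration; the patent does not prescribe a data model.

```python
# Hypothetical rule representation: each rule matches on video/record attributes
# and contributes an action such as a transcode target, a transmission
# resolution, or a retention decision. Several rules may apply to one video.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class VideoRecord:
    resolution: str          # e.g. "1080p"
    procedure: str           # e.g. "cholecystectomy"
    surgeon: str
    age_days: int

@dataclass
class Rule:
    name: str
    matches: Callable[[VideoRecord], bool]
    action: Dict[str, Any]   # e.g. {"transcode_to": ["480p", "240p"]} or {"delete": True}

RULES: List[Rule] = [
    Rule("transcode HD to review sizes",
         lambda v: v.resolution == "1080p",
         {"transcode_to": ["480p", "240p"]}),
    Rule("send only 480p to PACS",
         lambda v: v.resolution == "480p",
         {"transmit_to_pacs": True}),
    Rule("retention: drop 480p after 60 days",
         lambda v: v.resolution == "480p" and v.age_days > 60,
         {"delete": True}),
]

def actions_for(video: VideoRecord) -> List[Dict[str, Any]]:
    """Collect the actions of every rule whose condition matches this video."""
    return [r.action for r in RULES if r.matches(video)]
```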
To further reduce the size of video and other data delivered to a PACS system at a given time, a predetermined set of rules may be applied to the data received in a designated temporary storage folder in which local video data is first stored on the server.
The medical diagnostic imaging device 700 obtains one or more medical images of a patient and may be any device capable of capturing images of a patient's anatomy, for example an X-ray imager, an ultrasound scanner, or a magnetic resonance imager. The image data may be communicated electronically, for example via a wired or wireless connection. At least one medical image of a patient may be bundled and sent to the PACS system as a series.
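One conventional way to send such a series over the network is a DICOM C-STORE, sketched below. The patent does not name a library or protocol details; the use of pydicom/pynetdicom and the host, port and AE titles are assumptions made for illustration.

```python
# Hedged sketch: send a bundled series of DICOM files to a PACS via C-STORE.
# Library choice, addresses and AE titles are assumptions, not from the patent.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

def send_series_to_pacs(paths, host="pacs.hospital.local", port=11112, aet="PACS600"):
    ae = AE(ae_title="MODALITY700")
    ae.add_requested_context(CTImageStorage)      # presentation context for the image type
    assoc = ae.associate(host, port, ae_title=aet)
    if not assoc.is_established:
        raise ConnectionError("could not associate with the PACS")
    try:
        for p in paths:                            # send the bundled series image by image
            status = assoc.send_c_store(dcmread(p))
            if status and status.Status != 0x0000:
                print(f"{p}: store failed with status 0x{status.Status:04x}")
    finally:
        assoc.release()
```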
The PACS system includes a medical image receiving and preprocessing module 610, an NLP and feature extraction unit 612, and an operation unit 613.
The medical image receiving and preprocessing module 610 converts the raw medical image data into the DICOM standard format or attaches a DICOM header to it. A pre-processing function may be, for example, a medical image enhancement applied at the beginning of the imaging and display workflow (e.g., a contrast or frequency compensation function of a particular X-ray imaging device). A user may wish to apply pre-processing functions to enhance features within the image represented by the image data, and the image of the patient's anatomy may be adjusted to facilitate diagnosis by the user. The visual appearance or representation of the image data may be changed, including any one or more of: flipping an image, magnifying an image, translating across an image, changing the window and/or level in a grayscale representation of the image data, and changing the contrast and/or brightness of an image.
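The sketch below illustrates a few of the adjustments just listed, applied to a grayscale pixel array (for example one obtained from a DICOM dataset's pixel data). The window/level and contrast values are illustrative assumptions; in practice they would come from the user or the image header.

```python
# Illustrative grayscale adjustments: window/level, horizontal flip, and a
# simple brightness/contrast change. Values shown are assumptions.
import numpy as np

def apply_window_level(pixels: np.ndarray, window: float, level: float) -> np.ndarray:
    """Map raw intensities into an 8-bit grayscale display range."""
    low, high = level - window / 2.0, level + window / 2.0
    clipped = np.clip(pixels.astype(np.float32), low, high)
    return ((clipped - low) / max(high - low, 1e-6) * 255.0).astype(np.uint8)

def flip_horizontal(pixels: np.ndarray) -> np.ndarray:
    return np.fliplr(pixels)

def adjust_brightness_contrast(pixels: np.ndarray, alpha: float = 1.2, beta: float = 10.0) -> np.ndarray:
    """alpha scales contrast, beta shifts brightness; result is clamped to 8 bits."""
    return np.clip(alpha * pixels.astype(np.float32) + beta, 0, 255).astype(np.uint8)

# display = adjust_brightness_contrast(flip_horizontal(apply_window_level(raw, 400, 40)))
```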
The NLP and feature extraction unit 612 comprises a software program for processing the natural language (text) content included in the medical image report; it processes the text contained in the medical image report to identify specific keywords. At the same time, features in the medical image may also be identified, such as anatomical structures or views, for example the parasternal long axis view, which contains a plurality of different anatomical structures. In some embodiments, it uses image segmentation to identify image features. Image segmentation is the process of dividing a digital image into multiple segments (sets of pixels) in order to locate objects and boundaries in the image. For example, the unit may be configured to process the image and determine anatomical location information using one or more anatomical atlases or other localization or identification methods (e.g., neural-network-based identification). In thoracic computed tomography ("CT") images, for instance, anatomical regions such as the ascending aorta, left ventricle, T3 vertebra and other regions may be identified and labeled, for example by using multi-atlas segmentation. Each atlas includes a medical image obtained with a known imaging modality and a particular imaging procedure and technique, in which each pixel is labeled as a particular anatomical structure. The atlas may therefore be used to label anatomical structures in the image.
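As a minimal sketch of the keyword side of this unit, the snippet below matches a fixed vocabulary of anatomical terms against the report text. The vocabulary and the simple matching are assumptions; a real NLP unit would be considerably richer.

```python
# Hypothetical keyword identification over report text.
import re
from typing import Set

ANATOMY_TERMS: Set[str] = {"liver", "ascending aorta", "left ventricle", "t3 vertebra"}

def extract_keywords(report_text: str) -> Set[str]:
    """Return the anatomical terms mentioned in the report (the 'first features')."""
    text = report_text.lower()
    return {term for term in ANATOMY_TERMS
            if re.search(r"\b" + re.escape(term) + r"\b", text)}

# extract_keywords("Marked tumor adjacent to the liver; left ventricle unremarkable.")
# -> {"liver", "left ventricle"}
```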
After the medical image is pre-processed by the medical image receiving and preprocessing module 610, the NLP and feature extraction unit 612 may output the extracted image features. Additionally, in some embodiments, the NLP and feature extraction unit 612 is configured to extract the measurements that have already been performed and add them to the image as part of the pixel data or as a presentation state reference. For example, with ultrasound images, sonographers typically make a number of measurements during the exam, which are usually saved as image screenshots. These measurements can be further enhanced by matching the screenshot containing the measurements to the frames of a multi-frame acquisition or series (ultrasound cine), using the registration process used in the anatomical localization described above.
Since the medical image report and the medical image are separate, viewing the medical image report while viewing the medical image requires a further search to find the report, which is inefficient. The operation unit 613 therefore generates a new connection image so that the medical image and the medical image report are connected and can be viewed easily without a separate search. The connection image is formed as an image file, like the medical image, and may be inserted as part of the medical image. Through medical image analysis, the connection image may output a medical image report and generate a new analysis image based on the medical image report; this may be conveyed by placing the connection image in the analysis image. The analysis image may be a table or a chart.
Fig. 2 illustrates a method 200 of connecting a medical image report to a medical image to form a new connection image. According to one embodiment, the method 200 comprises the following steps:
Step S201: a medical image report is acquired, and specific keywords included in the medical image report text, i.e. the first features, are identified. The unit may be configured to store data about the identified first features in a data structure, such as a table. For example, when a physician prepares a medical image report, the physician uses terminology to describe image report features, including medical structures and user artifacts.
The NLP and feature extraction unit 612 may be configured to process the text included in the body of the report. It may additionally be configured to identify the first features using the format or structure of the report. For example, a medical image report may contain report subsections or headings for various body parts, anatomical systems, image types, and the like. Using the section names, the NLP and feature extraction unit 612 can quickly identify the relevant terms used in each section.
In some embodiments, to achieve finer granularity, the NLP and feature extraction unit 612 may add further subdivision layers to the atlas segmentation, or subsequently apply other rules to achieve a finer subdivision (e.g., the proximal, middle and distal segments of bones or vessels, or the medial/lateral and superior/inferior sides of structures).
Step S202: a feature of a medical image associated with the medical image report, i.e. the second feature, is identified. The NLP and feature extraction unit 612 may be configured to identify anatomical structures in the image using image segmentation. It may also be configured to process metadata of the image (such as header information) to identify image features such as user artifacts.
Step S203: the first feature and the second feature are compared for a match. For example, the first feature may include a medical structure identified as "liver" and the second feature may include a medical structure identified as "liver". It should be understood that identified features may also "match" when they are related rather than identical. For example, when the first feature comprises a "marked tumor" and the second feature comprises a graphical annotation marking a portion of the image, the features may be considered a match.
Step S204: in response to the first and second features not including any matching features, other images included in the image study are considered, or, if all images have already been processed, the method returns to waiting for a new medical image report and a new medical image in which to identify features.
Step S205: in response to the first and second features including a match, a new connection image is created between the medical image report and the at least one medical image and inserted into the medical image report. The operation unit 613 connects the medical image and the medical image report to generate a new connection image, and the connection image 230 is generated as a separate layer. The generated layer of the connection image 230 may be inserted into the medical image 210. Even when the connection image 230 is inserted into the medical image 210, it is formed so as not to overlap the medical image 210.
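The sketch below walks through steps S201 to S205 as a single matching routine: report keywords (first features) are compared with the features identified in each image of the study (second features), and a connection record is created for the first image that matches. The ConnectionImage structure is an illustrative stand-in for the connection image layer described above, not a structure defined by the patent.

```python
# Hypothetical matching workflow for steps S201-S205.
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class MedicalImage:
    image_id: str
    features: Set[str]          # second features, e.g. from segmentation

@dataclass
class ConnectionImage:
    report_id: str
    image_id: str
    matched_features: Set[str]

def connect_report_to_study(report_id: str, first_features: Set[str],
                            study_images: List[MedicalImage]) -> Optional[ConnectionImage]:
    for image in study_images:                       # S204: try each image in the study
        matched = first_features & image.features    # S203: compare for a match
        if matched:                                  # S205: create and insert the connection
            return ConnectionImage(report_id, image.image_id, matched)
    return None                                      # no match: wait for a new report/image
```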
According to the system of the present disclosure, it is possible to view medical images and analysis information at a glance by connecting images without changing the existing medical image storage information system. According to another medical image storage and transmission system of the present disclosure, medical images and analysis information can be more efficiently loaded into an existing medical image storage information system through a work unit and a loading unit.
The above description is only a preferred embodiment of the present invention, and does not limit the present invention in any way. It will be understood by those skilled in the art that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. A method for managing user data by a PACS system, characterized by specifically comprising the following steps:
step 1: receiving at least one of image, audio, text information, and bookmark information;
step 2: storing the information input by the input device 100, managing the image and audio data generated for each surgical procedure, storing bookmark information in the corresponding edit data, and including the edit data of the bookmark information in the corresponding matching data;
step 3: after receiving the history data created and stored in the image processing apparatus 200, updating the history data according to the diagnosis procedure, the surgical procedure, the post-operative treatment procedure, and the treatment procedure after discharge to generate integrated history data, which may be stored in a database;
step 4: the PACS system 600 receives a patient-related medical image input by a medical diagnostic imaging device and stores it in an electronic folder, wherein the PACS system 600 includes a medical image receiving and preprocessing module 610, an NLP and feature extraction unit 612, and an operation unit 613;
step 5: displaying the received image and video data on the output device 400.
2. The method for managing user data by a PACS system of claim 1, wherein step 3 further comprises:
creating an electronic folder in a database related to a patient, and storing the integrated historical data in the electronic folder;
medical images of a patient are received and the medical images are stored in an electronic folder.
3. The method for managing user data by a PACS system of claim 2, wherein step 4 further comprises:
the medical image receiving and preprocessing module 610 converts the raw medical image data into the DICOM standard format or attaches a DICOM header, enhances features within the image, adjusts the image of the patient's anatomy, and changes the visual appearance or representation of the image data, including any one or more of: flipping an image, magnifying an image, translating across an image, changing the window and/or level in a grayscale representation of the image data, and changing the contrast and/or brightness of an image.
4. The method for managing user data by a PACS system of claim 2, wherein step 4 further comprises: the NLP and feature extraction unit 612 processes text contained in the medical image report to identify specific keywords and identifies features in the medical image, using image segmentation to identify the image features.
5. The method for managing user data by a PACS system of claim 2, wherein step 4 further comprises: the NLP and feature extraction unit 612 extracts the measurements that have been performed and adds them to the image as part of the pixel data or as a presentation state reference.
6. The method for managing user data by a PACS system of claim 2, wherein step 4 further comprises:
generating a new connection image so that the medical image and the medical image report are connected and can be viewed simultaneously without a separate search, the connection image being inserted as part of the medical image.
7. The method for managing user data by a PACS system of claim 2, wherein step 4 further comprises:
acquiring a medical image report, and identifying a specific keyword, namely a first feature, included in a medical image report text;
identifying a feature, i.e. a second feature, of a medical image associated with the medical image report;
the first feature and the second feature are compared for a match, and in response to the first feature and the second feature comprising a match, a new connection image is created between the medical image report and the at least one medical image and inserted into the medical image report.
8. A system for managing user data by a PACS system is characterized by specifically comprising:
an input device 100, which receives at least one of image, audio, text information, and bookmark information;
an image processing apparatus 200, which stores the information input by the input device 100, manages the image and audio data generated for each surgical procedure, stores bookmark information in the corresponding edit data, and includes the edit data of the bookmark information in the corresponding matching data;
a server 300, which, after receiving the history data created and stored in the image processing apparatus 200, updates the history data according to the diagnosis procedure, the surgical procedure, the post-operative treatment procedure, and the treatment procedure after discharge, generates integrated history data, and may store the integrated history data in a database;
an output device 400, which displays the received image and video data;
a portable terminal 500 carried by a doctor or a nurse, which is used to input and check information on the treatment and care of inpatients;
a PACS system 600 that receives patient-related medical images input by the medical diagnostic imaging apparatus 700 and stores them in an electronic folder;
the medical diagnostic imaging apparatus 700 acquires one or more images of a patient.
9. The system for managing user data by a PACS system of claim 8, wherein: the database further comprises an electronic folder, and the integrated history data is stored in the electronic folder after the patient-related electronic folder is created;
medical images of a patient are received and the medical images are stored in an electronic folder.
10. The system for managing user data by a PACS system of claim 9, wherein: the PACS system 600 includes a medical image receiving and preprocessing module 610, an NLP and feature extraction unit 612, and an operation unit 613.
11. The system for managing user data by a PACS system of claim 10, wherein: the medical image receiving and preprocessing module 610 converts the raw medical image data into the DICOM standard format or attaches a DICOM header, enhances features within the image, adjusts the image of the patient's anatomy, and changes the visual appearance or representation of the image data, including any one or more of: flipping an image, magnifying an image, translating across an image, changing the window and/or level in a grayscale representation of the image data, and changing the contrast and/or brightness of an image.
12. The system for managing user data by a PACS system of claim 10, wherein: the NLP and feature extraction unit 612 processes text contained in the medical image report to identify specific keywords and identifies features in the medical image, using image segmentation to identify the image features.
13. The system for managing user data by a PACS system of claim 10, wherein: the NLP and feature extraction unit 612 extracts the measurements that have been performed and adds them to the image as part of the pixel data or as a presentation state reference.
14. The system for managing user data by a PACS system of claim 10, wherein: the operation unit 613 generates a new connection image so that the medical image and the medical image report are connected and can be viewed simultaneously without a separate search, the connection image being inserted as part of the medical image.
15. The system for managing user data by a PACS system of claim 10, wherein: the operation unit 613 performs the following functions:
acquiring a medical image report, and identifying a specific keyword, namely a first feature, included in a medical image report text;
identifying a feature, i.e. a second feature, of a medical image associated with the medical image report;
the first feature and the second feature are compared for a match, and in response to the first feature and the second feature comprising a match, a new connection image is created between the medical image report and the at least one medical image and inserted into the medical image report.
CN202111452315.6A 2021-12-01 2021-12-01 Method and system for managing user data by PACS (Picture archiving and communication System) Pending CN114093451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452315.6A CN114093451A (en) 2021-12-01 2021-12-01 Method and system for managing user data by PACS (Picture archiving and communication System)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111452315.6A CN114093451A (en) 2021-12-01 2021-12-01 Method and system for managing user data by PACS (Picture archiving and communication System)

Publications (1)

Publication Number Publication Date
CN114093451A (en) 2022-02-25

Family

ID=80306037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111452315.6A Pending CN114093451A (en) 2021-12-01 2021-12-01 Method and system for managing user data by PACS (Picture archiving and communication System)

Country Status (1)

Country Link
CN (1) CN114093451A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114756542A (en) * 2022-06-15 2022-07-15 深圳市三维医疗设备有限公司 Color Doppler ultrasound image processing control system based on data feedback
CN116013487A (en) * 2023-03-27 2023-04-25 深圳市浩然盈科通讯科技有限公司 Data adaptation method and system applied to medical system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260974A (en) * 2015-09-10 2016-01-20 济南市儿童医院 Method and system for generating electronic case history with informing and signing functions
CN107330238A (en) * 2016-08-12 2017-11-07 中国科学院上海技术物理研究所 Medical information collection, processing, storage and display methods and device
CN109166606A (en) * 2018-08-31 2019-01-08 上海轩昂医疗科技有限公司 A kind of electronic health record online editing platform and its implementation
CN111292821A (en) * 2020-01-21 2020-06-16 上海联影智能医疗科技有限公司 Medical diagnosis and treatment system
CN112382360A (en) * 2020-12-03 2021-02-19 卫宁健康科技集团股份有限公司 Automatic generation system of diagnosis report, storage medium and electronic equipment
CN113035326A (en) * 2021-02-26 2021-06-25 上海联影智能医疗科技有限公司 Information processing method of PACS (Picture archiving and communication System), medical image processing method and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination