CN113409928A - Medical information sharing system - Google Patents


Info

Publication number
CN113409928A
Authority
CN
China
Prior art keywords
image
medical
pixel
images
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110723134.6A
Other languages
Chinese (zh)
Other versions
CN113409928B (en)
Inventor
冯聪
陈力
杨博
黄赛
陈骅
王莉荔
崔翔
张乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Medical Center of PLA General Hospital
Original Assignee
First Medical Center of PLA General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by First Medical Center of PLA General Hospital
Priority to CN202110723134.6A
Publication of CN113409928A
Application granted
Publication of CN113409928B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 Image enhancement or restoration
            • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T7/00 Image analysis
            • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
              • G06T7/38 Registration of image sequences
          • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10072 Tomographic images
                • G06T2207/10081 Computed x-ray tomography [CT]
                • G06T2207/10088 Magnetic resonance imaging [MRI]
                • G06T2207/10104 Positron emission tomography [PET]
            • G06T2207/20 Special algorithmic details
              • G06T2207/20004 Adaptive image processing
              • G06T2207/20048 Transform domain processing
                • G06T2207/20064 Wavelet transform [DWT]
              • G06T2207/20212 Image combination
                • G06T2207/20221 Image fusion; Image merging
      • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
            • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
          • G16H30/00 ICT specially adapted for the handling or processing of medical images
            • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
            • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
          • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
            • G16H40/20 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a medical information sharing system, which comprises a security center and a plurality of user terminals. The security center comprises a security verification module; the ID information of a selected user is used to generate a unique, deterministic ID number through an agreed algorithm, and the ID number together with a subject keyword is used to index the user's medical record information. The user's private data are protected by security verification, information transfer between hospitals is completed, the security of patient information is improved, and diagnosis is accelerated while the efficiency of information sharing is improved.

Description

Medical information sharing system
Technical Field
The invention relates to systems in the field of information sharing, and in particular to a medical information sharing system.
Background
Mutual trust and mutual recognition of medical test results have become a consensus, especially among medical institutions in large cities. However, existing hospital informatization is currently limited to the integration and mutual recognition of data within a single hospital, and hospitals remain cautious about letting external systems access their information systems. Meanwhile, because the data are scattered and disorganized, the application systems are isolated from one another; there are also many software vendors whose systems keep their data independently, so they are isolated from each other in terms of data, workflow and so on. Even the data of software systems provided by the same company are not fully shared with each other. Because of operation and maintenance factors, systems from the same company may even index user information differently depending on each hospital's historical base data, and the indexes actually applied are inconsistent.
In addition, considering the security of user and hospital information, hospitals face both the risk of leakage of patient data and medical records and the difficulty of controlling such leakage. They are therefore apprehensive about sharing information, which forces patients visiting different hospitals and departments to carry the same reports around or to repeat a large number of examinations.
Moreover, because the user carries various reports for examination at different hospitals, the receiving hospital does not know the annotations and other information added by the previous hospital, and the original image data cannot provide a real-time reference for subsequent examinations. This increases the patient's burden unnecessarily and does not reduce the doctor's workload.
Therefore, it is necessary to provide a system that improves the sharing of hospital medical information, completes information transfer between hospitals, and improves the security of patient information. This is particularly important when the patient undergoes an imaging examination.
Disclosure of Invention
Therefore, the application provides a medical information sharing system, which comprises a security center and a plurality of user terminals. The security center comprises a security verification module, which is used for obtaining a public-private key pair, initializing the system, and generating the public parameters and the master key of the system; abstract document information is extracted from each patient's medical record documents to obtain topic keywords, the ID information of a user is selected, and a unique, deterministic ID number is generated by an agreed algorithm; the ID number indexes the user's medical record information;
the user terminal is used for requesting the medical data of the patient with the corresponding ID number from the server; after the user terminal and the security verification module of the security center perform a handshake confirmation, the user terminal sends the public parameters, the master key and the user ID information, and after the security center verifies the master key and the user ID information, it downloads the patient's encrypted original medical data from the server; the public parameters are used to verify and identify the system and the user terminal, and to identify whether the user terminal has access rights;
the security verification unit of the security center judges whether the attribute set of the data requester at the user terminal satisfies the attribute threshold in the encrypted medical data access policy set by the patient; if it does, the patient's medical data is decrypted successfully and the data plaintext is obtained; otherwise, decryption fails;
the medical data is an image data document, a history browsing archive of the image document is obtained, and optionally the document is a predetermined region of the patient's body that can be imaged using an MRI, PET scan or PET/CT scan. Scanning the patient may provide one or more images of a predetermined region of the patient's body.
The user terminal acquires the labeling information of the plurality of images, selects the image with the highest significance level from the plurality of images, and fuses it with the locally acquired image data of the user terminal.
Optionally, the significant image on the server is determined using the diagnosis mode, clinical relevance, segmented region and reading time as parameters, specifically: significance level = a·(clinical relevance) + b·(segmented region) + c·(reading time) + d, where a, b and c are coefficients and d is an offset. The significance level is stored in association with the image, and the fused image to be recommended is selected according to the significance level.
The user terminal acquires the key image and displays it fused with the image captured at the hospital. A multi-mode image fusion module in the user terminal is connected with a medical image post-processing unit and is used to fuse the patient's multi-modal medical images and to perform three-dimensional reconstruction and visualization;
Optionally, the multi-mode image fusion unit and the medical image post-processing unit in the user terminal are connected to each other and, through a network, to the security center or a third-party server. The medical image post-processing unit transmits post-processed medical images of multiple modalities to the multi-mode image fusion unit, and the multi-mode image fusion unit fuses the medical images of the multiple modalities. The multi-mode image fusion unit fuses the patient's multi-modal medical images in a semi-automatic registration mode: for medical images of different modalities, it selects clearly visible anatomical landmark points in one-to-one correspondence and performs registration with a point-set-to-point-set registration algorithm;
optionally: selecting a reference image from the multiple fusion images, taking the rest as images to be registered, and extracting corner features related to the anatomical annotation points from the reference image and the images to be registered; secondly, calculating the similarity of the corner features in the two images one by one, and performing corner feature matching; then, estimating parameters of the geometric transformation model according to the successfully matched feature point pairs; and finally, performing image resampling and transformation on the image to be registered according to the parameters of the geometric transformation model.
Optionally, before the reference image is selected, the method further includes determining whether the pixel resolutions are consistent. If they are inconsistent, the high-pixel and low-pixel image features corresponding to the images to be segmented and fused are down-sampled interactively and then fused to obtain high-pixel and low-pixel down-sampling interaction features; the high-pixel and low-pixel down-sampling interaction features are convolved interactively and then fused to obtain high-pixel and low-pixel convolution interaction features; the high-pixel and low-pixel convolution interaction features are up-sampled interactively and then fused to obtain high-pixel and low-pixel up-sampling interaction features; and the target object is segmented from the image to be segmented according to the high-pixel and low-pixel up-sampling interaction features.
The agreed algorithm is a hash algorithm or an RSA algorithm.
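As a concrete illustration of the hash option, the minimal sketch below derives a deterministic record ID from the user's ID information and the topic keywords extracted from the medical record; the field values and the choice of SHA-256 are assumptions for illustration only, not the patent's prescribed parameters.

```python
# A minimal sketch, assuming the agreed algorithm is a hash algorithm (SHA-256):
# the user's ID information and the topic keywords are combined and hashed into
# one deterministic ID number that indexes the medical record.
import hashlib

def make_record_id(user_id_info: str, topic_keywords: list[str]) -> str:
    payload = user_id_info + "|" + "|".join(sorted(topic_keywords))  # order-independent
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical inputs; the same inputs always yield the same unique ID number.
record_id = make_record_id("patient-110105-0042", ["hemothorax", "chest CT"])
print(record_id)
```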
Optionally, performing the medical image fusion may comprise the following steps: 1) geometrically registering the images, where image registration is the process of spatially normalizing different images, correcting the geometric differences between them through a mathematical model and bringing the two images into the same coordinate system so that the same scene corresponds across the different local images, which facilitates the subsequent fusion; 2) adjusting the gray scale of the high-resolution image so that its mean and variance equal those of the low-resolution image; 3) decomposing the detail-information image with a wavelet transform to obtain approximation information and detail information; 4) performing image enhancement or image compression on the obtained information to improve its visual quality; 5) replacing the approximation information obtained by the wavelet decomposition with the original clear and detailed local high-definition image of the same region; 6) performing wavelet reconstruction with the substituted high-definition image and the approximation information obtained by the wavelet decomposition to obtain the fused image.
Optionally, the wavelet-based decomposition and fusion specifically includes: 1) performing a two-dimensional wavelet decomposition on each of the registered source images I_1, I_2, ..., I_n, with the number of decomposition levels set to J; 2) applying an averaging fusion rule to the low-frequency decomposition coefficients: letting A_{1,J}, A_{2,J}, ..., A_{n,J} be the low-frequency components of the images to be fused at wavelet decomposition scale J, the fused low-frequency component is A_J = (A_{1,J} + A_{2,J} + ... + A_{n,J}) / n; 3) for the high-frequency decomposition coefficients, taking the wavelet coefficient with the largest absolute value at each position as the wavelet coefficient of the fused image at that position, i.e. D_j = max(D_{1,j}, D_{2,j}, ..., D_{n,j}), where 1 ≤ j ≤ J and D_{i,j} (i = 1, 2, ..., n) is the high-frequency decomposition coefficient of source image I_i at level j; 4) applying the inverse wavelet transform to the fused wavelet coefficients to obtain the fused image.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way.
FIG. 1 is a schematic diagram of the system of the present invention.
Detailed Description
These and other features and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood upon consideration of the following description and the accompanying drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. It will be understood that the figures are not drawn to scale. Various block diagrams are used in the present invention to illustrate various variations of embodiments according to the present invention.
Example 1
As shown in FIG. 1, in an embodiment of the present invention the system includes a security center, a plurality of user terminals and a remote server. Optionally, a user terminal may be a computer system, which may be, for example, a standard personal computer with a standard CPU, memory and storage, an enhanced picture archiving and communication system, or an add-on subsystem of an existing PACS and/or RIS. In an embodiment of the invention, the computer system may be arranged to analyze and prioritize images and patient cases.
The computer system may automatically retrieve medical images from an imaging module (e.g., a CT/CAT scanner, an MRI scanner or a PET/CT scanner), from a database, or from a PACS in which medical images are stored, automatically analyze the medical images, and provide the medical images and analysis results for review by a reviewing physician, a referring physician, or a specialist such as a radiologist.
The system comprises a security center and a plurality of user terminals. The security center comprises a security verification module, which is used for obtaining a public-private key pair, initializing the system, and generating the public parameters and the master key of the system; abstract document information is extracted from each patient's medical record documents to obtain topic keywords, the ID information of a user is selected, and a unique, deterministic ID number is generated by an agreed algorithm; the ID number indexes the user's medical record information;
the user terminal is used for requesting the medical data of the patient with the corresponding ID number from the remote server; after the user terminal and the security verification module of the security center perform a handshake confirmation, the user terminal sends the public parameters, the master key and the user ID information, and after verifying the master key and the user ID information, the security center downloads the patient's encrypted original medical data from the server; the public parameters are used to verify and identify the system and the user terminal, and to identify whether the user terminal has access rights.
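A highly simplified sketch of that exchange is given below: after the handshake, the terminal presents the public parameters, master key and user ID, and the security center verifies them before fetching the still-encrypted record from the server. All class and field names, and the dictionary standing in for the server, are illustrative assumptions rather than the patent's actual protocol.

```python
# A toy sketch of the verification flow: the center only fetches the encrypted
# record once the terminal's public parameters and master key have been checked.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    public_params: str
    master_key: str
    user_id: str

class SecurityCenter:
    def __init__(self, public_params: str, master_key: str, server: dict):
        self.public_params = public_params
        self.master_key = master_key
        self.server = server                      # record_id -> encrypted blob

    def handle(self, req: AccessRequest, record_id: str):
        # Verify the public parameters and master key before touching the server.
        if req.public_params != self.public_params or req.master_key != self.master_key:
            raise PermissionError("terminal failed verification")
        return self.server.get(record_id)         # still encrypted at this point

center = SecurityCenter("pp-v1", "mk-demo", {"rec-001": b"\x8f\x12..."})
ciphertext = center.handle(AccessRequest("pp-v1", "mk-demo", "user-42"), "rec-001")
```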
The security verification unit of the security center judges whether the attribute set of the data requester at the user terminal satisfies the attribute threshold in the encrypted medical data access policy set by the patient; if it does, the patient's medical data is decrypted successfully and the data plaintext is obtained; otherwise, decryption fails.
Optionally, the attribute threshold is the authorization weight set differently by the user; different attribute thresholds in the access policy correspond to different data, and a requester who satisfies a high threshold may also be granted access to information protected by a low threshold. The requester's attribute set corresponds to the requester's job title, medical experience, specialty, department, and so on.
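The sketch below illustrates this threshold check under assumed attribute weights: the requester's attributes (job title, department, experience and so on) are scored, and decryption proceeds only when the score reaches the threshold set in the patient's policy; the weight table and threshold values are invented for illustration.

```python
# A small sketch of the attribute-threshold check; weights and thresholds are
# illustrative assumptions, not values defined in the patent.
ATTRIBUTE_WEIGHTS = {           # e.g. job title, experience, specialty, department
    "chief_physician": 4, "attending_physician": 3, "resident": 2,
    "radiology_dept": 2, "10y_experience": 2, "external_consultant": 1,
}

def satisfies_policy(requester_attributes: set[str], threshold: int) -> bool:
    score = sum(ATTRIBUTE_WEIGHTS.get(a, 0) for a in requester_attributes)
    return score >= threshold

# A high-threshold item (e.g. the full imaging record) vs. a low-threshold item
# (e.g. annotations only); satisfying the high threshold also grants the low one.
requester = {"attending_physician", "radiology_dept"}
print(satisfies_policy(requester, threshold=5))   # True -> decrypt full record
print(satisfies_policy(requester, threshold=3))   # True -> annotations also accessible
```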
The medical data is an image data document, and the history browsing archive of the image document is obtained. When the document is browsed, the annotations and notes made by the doctor and others at the user terminal while browsing are weighted into the corresponding image information as saliency annotations. For example, the records and annotations of an expert consultation offset the saliency weighting, i.e. they affect the assignment of the d value.
Optionally, an image from an expert consultation, or an image marked as significant by a doctor, has a higher priority, so that during subsequent data access either the corresponding number of images or only the annotations and notes can be selected for timely transmission according to the transmission state of the access link. This improves the security of the accessed information, and when only the annotations and notes are transmitted, the security center signs the information to guarantee its reliability.
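As a toy illustration of that signing step, the sketch below attaches a keyed digest to an annotation payload so the receiving terminal can verify integrity; an HMAC from the Python standard library stands in for whatever signature scheme the security center actually uses, and the key and payload are assumptions.

```python
# A toy sketch: an HMAC stands in for the security center's real signature
# scheme; key, payload and field names are illustrative assumptions.
import hashlib
import hmac
import json

CENTER_KEY = b"security-center-demo-key"

def sign_annotations(annotations: dict) -> dict:
    payload = json.dumps(annotations, sort_keys=True).encode("utf-8")
    tag = hmac.new(CENTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"annotations": annotations, "signature": tag}

def verify_annotations(message: dict) -> bool:
    payload = json.dumps(message["annotations"], sort_keys=True).encode("utf-8")
    expected = hmac.new(CENTER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

signed = sign_annotations({"image": "CT-0042", "note": "suspected hemothorax, slice 37"})
print(verify_annotations(signed))   # True unless the annotations were altered
```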
Optionally, the image document within the archive is an image of a predetermined region of the patient's body acquired using MRI, a PET scan or a PET/CT scan. Scanning the patient provides one or more images of the predetermined region of the patient's body; the local user terminal obtains the labeling information of the plurality of images, selects the image with the highest significance level among them, and fuses it with the locally acquired image data of the user terminal.
Illustratively, when blood is detected in a pleural effusion (hemothorax), the area where the detected blood enters the pleural effusion can be highlighted, and the patient can be automatically guided to that position for verification according to the standard information on the security center's server, so that the radiologist can diagnose the hemothorax without measuring the fluid intensity in further slices. Parameters of local features can be obtained through image fusion and, on the basis of the fused detection images, provided to subsequent diagnosticians, thereby improving review efficiency.
The determination of image saliency is as follows: one or more parameters may be weighted to influence the determination of the significance level. In practice, a higher-priority event such as an expert consultation can be reflected in the parameter d, adding an offset to the determination of the image's importance level. The optional parameters include, for example, clinical relevance, diagnostic mode, segmented region, number of pixels, and/or image reading time. A significance level is determined for each of a plurality of mutually related images. In certain embodiments, the significance level is stored in association with the image. Preferably, the significance level is determined using the equation: significance level = a·(clinical relevance) + b·(segmented region) + c·(reading time) + d, where a, b and c are weighting coefficients and d is an offset.
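The sketch below evaluates that equation for a few hypothetical images and picks the highest-ranked one for fusion; the coefficient values, the offset raised by an expert consultation, and the per-image scores are all illustrative assumptions.

```python
# A small sketch of the significance-level formula and of selecting the
# highest-ranked image; weights, offset and scores are invented for illustration.
from dataclasses import dataclass

A, B, C = 0.5, 0.3, 0.2   # weights for clinical relevance, segmented region, reading time

@dataclass
class ImageRecord:
    name: str
    clinical_relevance: float   # e.g. 0..10 score from the diagnosis mode
    segmented_region: float     # e.g. normalized area of the segmented lesion
    reading_time: float         # e.g. seconds the radiologist spent on the image
    expert_consultation: bool = False

def significance(img: ImageRecord) -> float:
    d = 2.0 if img.expert_consultation else 0.0   # offset raised by expert annotations
    return A * img.clinical_relevance + B * img.segmented_region + C * img.reading_time + d

images = [
    ImageRecord("CT-0041", 6.0, 0.4, 35.0),
    ImageRecord("CT-0042", 7.5, 0.6, 80.0, expert_consultation=True),
]
best = max(images, key=significance)   # image recommended for fusion with the local scan
print(best.name, round(significance(best), 2))
```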
Notably, the locally captured images, the original images the doctor is browsing, and so on are also processed according to the above parameters and synchronously transmitted to the server of the security center.
Optionally, the importance level is determined automatically when the doctor marks an image as important. Preferably, the plurality of images may be viewed in an order determined by the importance level. The image ordering based on the importance level may be stored in the server, access to the corresponding image information is controlled by the patient's access policy attributes, and the subsequent image fusion is then performed.
The user terminal acquires the images to be fused and displays the key image fused with the image captured at the hospital. A multi-mode image fusion module in the user terminal is connected with a medical image post-processing unit and is used to fuse the patient's multi-modal images and to perform three-dimensional reconstruction and visualization. Medical images of multiple modalities of the patient are fused in a semi-automatic registration mode: for medical images of different modalities, the multi-mode image fusion unit selects clearly visible anatomical landmark points in one-to-one correspondence and performs registration in a point-set-to-point-set manner;
selecting a reference medical image from the multiple fused medical images, taking the rest of the fused medical images as medical images to be registered, and extracting corner features related to anatomical annotation points from the reference image and the medical images to be registered; secondly, similarity of corner features in the two images is calculated one by one, and corner feature matching is carried out.
Optionally, the parameters of the geometric transformation model are estimated from the successfully matched feature point pairs; finally, the image to be registered is resampled and transformed according to the parameters of the geometric transformation model. The geometric transformation model may be a nonlinear transformation, an affine transformation, or the like.
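A minimal sketch of this step, assuming an affine model, is shown below: a 2x3 affine transformation is fitted to the matched landmark pairs by least squares and the moving image is resampled with it. The point pairs and the nearest-neighbour resampling are illustrative simplifications, not the patent's exact algorithm.

```python
# Estimate an affine transform from matched point pairs and resample the image.
import numpy as np

def estimate_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix mapping src_pts -> dst_pts (both N x 2 arrays)."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])          # N x 3 design matrix
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)    # least-squares solution, 3 x 2
    return M.T                                         # 2 x 3

def warp_affine(image: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Resample `image` through the inverse of M (nearest-neighbour)."""
    h, w = image.shape
    inv = np.linalg.inv(np.vstack([M, [0, 0, 1]]))     # invert 3x3 homogeneous form
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, _ = inv @ coords                           # reference -> moving coordinates
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return image[sy, sx].reshape(h, w)

# Matched landmark pairs (reference image vs. image to be registered), hypothetical values.
ref_pts = np.array([[30.0, 40.0], [120.0, 35.0], [60.0, 110.0], [100.0, 90.0]])
mov_pts = np.array([[33.0, 44.0], [124.0, 41.0], [61.0, 115.0], [103.0, 96.0]])
M = estimate_affine(mov_pts, ref_pts)                  # maps moving -> reference frame
registered = warp_affine(np.random.rand(160, 160), M)  # stand-in for the moving image
```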
Optionally, before the reference image is selected, the method further includes determining whether the pixel resolutions are consistent. If they are inconsistent, the high-pixel and low-pixel image features corresponding to the image to be segmented are down-sampled interactively and then fused to obtain high-pixel and low-pixel down-sampling interaction features; the high-pixel and low-pixel down-sampling interaction features are convolved interactively and then fused to obtain high-pixel and low-pixel convolution interaction features; the high-pixel and low-pixel convolution interaction features are up-sampled interactively and then fused to obtain high-pixel and low-pixel up-sampling interaction features; and the target object is segmented from the image to be segmented according to the high-pixel and low-pixel up-sampling interaction features. A sketch of one possible reading of this pipeline follows.
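The sketch below expresses that reading in PyTorch (an assumption; the patent does not name a framework): high-resolution and low-resolution feature maps exchange information at the down-sampling, convolution and up-sampling stages before a small segmentation head. Channel counts, kernel sizes and the sigmoid head are all invented for illustration.

```python
# A rough sketch of the resolution-interaction idea; not the patent's exact network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelInteractionFusion(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv_hi = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_lo = nn.Conv2d(channels, channels, 3, padding=1)
        self.head = nn.Conv2d(channels, 1, 1)   # 1-channel mask of the target object

    def forward(self, feat_hi: torch.Tensor, feat_lo: torch.Tensor) -> torch.Tensor:
        # 1) down-sampling interaction: bring the high-res stream to the low-res grid and fuse.
        hi_down = F.interpolate(feat_hi, size=feat_lo.shape[-2:], mode="bilinear",
                                align_corners=False)
        hi_ds, lo_ds = hi_down + feat_lo, feat_lo + hi_down
        # 2) convolution interaction: convolve each stream and fuse again.
        hi_cv, lo_cv = self.conv_hi(hi_ds) + lo_ds, self.conv_lo(lo_ds) + hi_ds
        # 3) up-sampling interaction: return both streams to the high-res grid and fuse.
        hi_up = F.interpolate(hi_cv, size=feat_hi.shape[-2:], mode="bilinear",
                              align_corners=False)
        lo_up = F.interpolate(lo_cv, size=feat_hi.shape[-2:], mode="bilinear",
                              align_corners=False)
        fused = hi_up + lo_up
        # 4) segment the target object from the fused up-sampling interaction features.
        return torch.sigmoid(self.head(fused))

mask = PixelInteractionFusion()(torch.rand(1, 32, 128, 128), torch.rand(1, 32, 64, 64))
```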
Optionally, the image fusion step comprises: 1) geometrically registering the images, where image registration is the process of spatially normalizing different images, correcting the geometric differences between them through a mathematical model and bringing the two images into the same coordinate system so that the same scene corresponds across the different local images, which facilitates the subsequent fusion; 2) adjusting the gray scale of the high-resolution image so that its mean and variance equal those of the low-resolution image; 3) decomposing the detail-information image with a wavelet transform to obtain approximation information and detail information; 4) performing image enhancement or image compression on the obtained information to improve its visual quality; 5) replacing the approximation information obtained by the wavelet decomposition with the original clear and detailed local high-definition image of the same region; 6) performing wavelet reconstruction with the substituted high-definition image and the approximation information obtained by the wavelet decomposition to obtain the fused image.
Optionally, the wavelet-based decomposition and fusion specifically includes: 1) performing a two-dimensional wavelet decomposition on each of the registered source images I_1, I_2, ..., I_n, with the number of decomposition levels set to J; 2) applying an averaging fusion rule to the low-frequency decomposition coefficients: letting A_{1,J}, A_{2,J}, ..., A_{n,J} be the low-frequency components of the images to be fused at wavelet decomposition scale J, the fused low-frequency component is A_J = (A_{1,J} + A_{2,J} + ... + A_{n,J}) / n; 3) for the high-frequency decomposition coefficients, taking the wavelet coefficient with the largest absolute value at each position as the wavelet coefficient of the fused image at that position, i.e. D_j = max(D_{1,j}, D_{2,j}, ..., D_{n,j}), where 1 ≤ j ≤ J and D_{i,j} (i = 1, 2, ..., n) is the high-frequency decomposition coefficient of source image I_i at level j; 4) applying the inverse wavelet transform to the fused wavelet coefficients to obtain the fused image.
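A compact sketch of these four steps is given below using PyWavelets (assumed to be available as pywt): the approximation coefficients are averaged and, for each detail band, the coefficient with the largest absolute value is kept at each position. The wavelet name 'db2', the three decomposition levels and the random stand-in images are illustrative choices.

```python
# A minimal sketch of the wavelet fusion rule: average low-frequency coefficients,
# keep the max-absolute-value high-frequency coefficient, then reconstruct.
import numpy as np
import pywt

def fuse_wavelet(images, wavelet: str = "db2", level: int = 3) -> np.ndarray:
    decomps = [pywt.wavedec2(img, wavelet, level=level) for img in images]

    # Low-frequency rule: average the approximation coefficients A_{i,J}.
    fused = [np.mean([d[0] for d in decomps], axis=0)]

    # High-frequency rule: at each position keep the detail coefficient
    # with the largest absolute value across the source images.
    for j in range(1, level + 1):
        fused_band = []
        for b in range(3):                      # horizontal, vertical, diagonal details
            stack = np.stack([d[j][b] for d in decomps])
            idx = np.argmax(np.abs(stack), axis=0)
            fused_band.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(fused_band))

    return pywt.waverec2(fused, wavelet)        # inverse wavelet transform

# Registered source images I_1 ... I_n (random stand-ins here).
fused_image = fuse_wavelet([np.random.rand(128, 128) for _ in range(3)])
```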
Preferably, taking a lung image as an example, synthesizing and displaying the image of the key region may involve overlap identification of the key region: it is determined whether each pixel of the MASK region is greater than or equal to a predetermined threshold; if so, the pixel is judged to belong to the lung region, and if not, it is judged to belong to a non-key region. For example, assuming the predetermined threshold is 0.5, each pixel of the MASK region is evaluated through a sigmoid function and compared with 0.5: if the value is greater than or equal to 0.5, the pixel belongs to the lung region; if it is less than 0.5, the pixel belongs to a non-key region.
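A short sketch of that thresholding is shown below: raw mask values are passed through a sigmoid and compared with the 0.5 threshold to separate lung (key-region) pixels from non-key pixels; the random logits are a stand-in for a real MASK output.

```python
# Sigmoid + 0.5 threshold on a MASK array; the input logits are a stand-in.
import numpy as np

def lung_mask(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    return probs >= threshold               # True = lung (key region) pixel

mask = lung_mask(np.random.randn(512, 512))
print("lung pixels:", int(mask.sum()))
```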
Optionally, a fused lung feature map of the lung image may be obtained from the fusion result; the fused lung feature map is input into a classification network, which determines the pneumoconiosis grade of each lung region of the lung image.
Preferably, in practice, because a medical image often contains a large amount of background with a single gray level, the spacing between adjacent gray levels increases and the output image may show degradation such as false contours. The fusion module at the user's local end is therefore further configured to define a rectangular sub-region and a moving step, move the sub-region by that step so as to traverse the entire image, equalize all pixels in the current sub-region at each position, thereby equalizing each pixel of the original image several times, and finally use the average of the equalized values as the gray value of the corresponding pixel of the output image, so as to enhance the image.
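A simplified sketch of this sliding-window equalization is given below: a rectangular sub-region is moved over the image with a fixed step, histogram equalization is applied inside each window, and each output pixel takes the average of all equalized values it received. The window size, step and fallback behaviour at uncovered borders are illustrative assumptions.

```python
# A rough sketch of the local-equalization scheme; window size and step are
# assumed parameters, and 8-bit grayscale input is assumed.
import numpy as np

def sliding_window_equalize(img: np.ndarray, win: int = 64, step: int = 32) -> np.ndarray:
    h, w = img.shape
    acc = np.zeros((h, w), dtype=np.float64)   # sum of equalized values per pixel
    cnt = np.zeros((h, w), dtype=np.float64)   # how many windows covered each pixel

    for y in range(0, max(h - win, 0) + 1, step):
        for x in range(0, max(w - win, 0) + 1, step):
            block = img[y:y + win, x:x + win]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            cdf = hist.cumsum().astype(np.float64)
            cdf = 255.0 * cdf / cdf[-1]        # per-window equalization mapping
            acc[y:y + win, x:x + win] += cdf[block]
            cnt[y:y + win, x:x + win] += 1

    covered = cnt > 0
    out = np.where(covered, acc / np.maximum(cnt, 1), img)  # average of all equalizations
    return out.astype(np.uint8)

enhanced = sliding_window_equalize(np.random.randint(0, 256, (256, 256), dtype=np.uint8))
```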
Example 2
It will be understood by those skilled in the art that all or part of the processes of the method steps in Embodiment 1 above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memory.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (10)

1. A medical information sharing system, characterized in that: the system comprises a security center and a plurality of user terminals, wherein the security center comprises a security verification module used for obtaining a public-private key pair and generating the public parameters and the master key of the system; abstract document information is extracted from each patient's medical record documents to obtain topic keywords; the ID information of a user is selected and a unique, deterministic ID number is generated by an agreed algorithm, and the ID number and the topic keywords are used to index the user's medical record information;
the user terminal is used for requesting the medical data of the patient with the corresponding ID number from the server; the user terminal interacts with the security verification module of the security center, and after a handshake confirmation is performed the user terminal sends the public parameters, the master key and the user ID information; after the master key and the user ID information are verified, the security center downloads the patient's encrypted original medical data from the server; the public parameters are used to verify and identify the system and the user terminal, and to identify whether the user terminal has access rights;
the security verification module of the security center judges whether the attribute set of the data requester at the user terminal satisfies the attribute threshold in the encrypted medical data access policy set by the patient; if the attribute set of the data requester satisfies the attribute threshold in the access policy set by the patient, the patient's medical data is decrypted successfully to obtain the data plaintext; otherwise, decryption fails;
the medical data comprises an image data document; the user terminal acquires a plurality of medical images and their labeling information from the image data document, and selects the medical image with the highest significance level from the plurality of medical images to fuse with a locally acquired medical image of the user terminal.
2. The system of claim 1, wherein: the labeling information of the plurality of pieces of medical image data comprises historical browsing archives of doctors.
3. The system of claim 2, wherein: the plurality of medical images are images of a predetermined region of the patient's body using an MRI, PET scan or PET/CT scan.
4. The system of claim 3, wherein: the user terminal obtains a plurality of medical images in the image data document and their labeling information, and selects the medical image with the highest significance level from the plurality of medical images, wherein clinical relevance, segmented region and reading time are used as parameters to judge the significance level: significance level = a·(clinical relevance) + b·(segmented region) + c·(reading time) + d, where a, b and c are weighting coefficients and d is an offset.
5. The system of claim 4, wherein: the user terminal comprises a multi-mode image fusion unit and a medical image post-processing unit; the multi-mode image fusion unit and the medical image post-processing unit in the user terminal are connected to each other and are connected to a server through a network; the medical image post-processing unit transmits post-processed medical images of multiple modalities to the multi-mode image fusion unit, and the multi-mode image fusion unit fuses the medical images of the multiple modalities.
6. The system of claim 5, wherein: the multi-mode image fusion unit fuses medical images of multiple modalities of a patient in a semi-automatic registration mode; and for the images of different modalities, the multi-mode image fusion unit respectively selects clear anatomical landmark points in one-to-one correspondence, and performs registration by adopting a point set-to-point set registration mode.
7. The system of claim 6, wherein: the multi-mode image fusion unit respectively selects clear anatomical landmark points in one-to-one correspondence, and the registration is carried out by adopting a point set-to-point set registration mode, specifically: selecting a reference image from the multiple fused medical images, taking the rest of the fused medical images as images to be registered, and extracting corner features related to anatomical annotation points from the reference image and the images to be registered; secondly, similarity of corner features in the two images is calculated one by one, and corner feature matching is carried out.
8. The system of claim 7, wherein: the user terminal, before selecting the reference image, judges whether the pixel resolutions of the images to be fused are consistent; if they are inconsistent, the high-pixel and low-pixel image features corresponding to the images to be segmented and fused are down-sampled interactively and then fused to obtain high-pixel and low-pixel down-sampling interaction features; the high-pixel and low-pixel down-sampling interaction features are convolved interactively and then fused to obtain high-pixel and low-pixel convolution interaction features; the high-pixel and low-pixel convolution interaction features are up-sampled interactively and then fused to obtain high-pixel and low-pixel up-sampling interaction features; and the target object is segmented from the image to be segmented according to the high-pixel and low-pixel up-sampling interaction features.
9. The system of any of claims 1-8, wherein: the server is a far-end cloud server or a near-end edge server provided by a third party.
10. The system of any of claims 1-8, wherein: the agreed algorithm is a hash algorithm or an RSA algorithm.
CN202110723134.6A 2021-06-29 2021-06-29 Medical information sharing system Active CN113409928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110723134.6A 2021-06-29 2021-06-29 Medical information sharing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110723134.6A 2021-06-29 2021-06-29 Medical information sharing system

Publications (2)

Publication Number Publication Date
CN113409928A (en) 2021-09-17
CN113409928B CN113409928B (en) 2022-10-04

Family

ID=77679954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110723134.6A Active CN113409928B (en) 2021-06-29 2021-06-29 Medical information sharing system

Country Status (1)

Country Link
CN (1) CN113409928B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883461A (en) * 2023-05-18 2023-10-13 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof
CN117197593A (en) * 2023-11-06 2023-12-08 天河超级计算淮海分中心 Medical image pseudo tag generation system
CN117319084A (en) * 2023-11-28 2023-12-29 遂宁市中心医院 Medical examination data sharing method and system based on cloud authentication
CN117436132A (en) * 2023-12-21 2024-01-23 福建中科星泰数据科技有限公司 Data privacy protection method integrating blockchain technology and artificial intelligence
CN117455935A (en) * 2023-12-22 2024-01-26 中国人民解放军总医院第一医学中心 Abdominal CT (computed tomography) -based medical image fusion and organ segmentation method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102760206A (en) * 2012-06-11 2012-10-31 杭州电子科技大学 System and method for sharing cross-regional medical image information
CN106570310A (en) * 2016-10-08 2017-04-19 黄永刚 Medical clinic information management system, and management method thereof
CN112910840A (en) * 2021-01-14 2021-06-04 重庆邮电大学 Medical data storage and sharing method and system based on alliance blockchain

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102760206A (en) * 2012-06-11 2012-10-31 杭州电子科技大学 System and method for sharing cross-regional medical image information
CN106570310A (en) * 2016-10-08 2017-04-19 黄永刚 Medical clinic information management system, and management method thereof
CN112910840A (en) * 2021-01-14 2021-06-04 重庆邮电大学 Medical data storage and sharing method and system based on alliance blockchain

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883461A (en) * 2023-05-18 2023-10-13 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof
CN116883461B (en) * 2023-05-18 2024-03-01 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof
CN117197593A (en) * 2023-11-06 2023-12-08 天河超级计算淮海分中心 Medical image pseudo tag generation system
CN117319084A (en) * 2023-11-28 2023-12-29 遂宁市中心医院 Medical examination data sharing method and system based on cloud authentication
CN117319084B (en) * 2023-11-28 2024-01-30 遂宁市中心医院 Medical examination data sharing method and system based on cloud authentication
CN117436132A (en) * 2023-12-21 2024-01-23 福建中科星泰数据科技有限公司 Data privacy protection method integrating blockchain technology and artificial intelligence
CN117436132B (en) * 2023-12-21 2024-03-05 福建中科星泰数据科技有限公司 Data privacy protection method integrating blockchain technology and artificial intelligence
CN117455935A (en) * 2023-12-22 2024-01-26 中国人民解放军总医院第一医学中心 Abdominal CT (computed tomography) -based medical image fusion and organ segmentation method and system
CN117455935B (en) * 2023-12-22 2024-03-19 中国人民解放军总医院第一医学中心 Abdominal CT (computed tomography) -based medical image fusion and organ segmentation method and system

Also Published As

Publication number Publication date
CN113409928B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
CN113409928B (en) Medical information sharing system
CN108021819B (en) Anonymous and secure classification using deep learning networks
JP2022517098A (en) Methods for generating 3D printable models of patient-specific anatomy
JP6636506B2 (en) Image fingerprint generation
US10706534B2 (en) Method and apparatus for classifying a data point in imaging data
US20230083261A1 (en) Labeling, visualization, and volumetric quantification of high-grade brain glioma from mri images
CN113168912B (en) Determining growth rate of objects in 3D dataset using deep learning
US11462315B2 (en) Medical scan co-registration and methods for use therewith
WO2010070585A2 (en) Generating views of medical images
US11669960B2 (en) Learning system, method, and program
EP4156096A1 (en) Method, device and system for automated processing of medical images to output alerts for detected dissimilarities
Kharat et al. A peek into the future of radiology using big data applications
CN114331992A (en) Image sequence processing method and device, computing equipment and storage medium
WO2019102917A1 (en) Radiologist determination device, method, and program
US11923069B2 (en) Medical document creation support apparatus, method and program, learned model, and learning apparatus, method and program
JP2016224793A (en) Medical diagnosis support system, medical information display device, medical information management device and medical image processing program
US11308622B2 (en) Information processing apparatus and method for controlling the same to generate a difference image from first and second inspection images
US20240020842A1 (en) Systems and methods for image alignment and registration
US20230386032A1 (en) Lesion Detection and Segmentation
Sikkandar et al. Unsupervised local center of mass based scoliosis spinal segmentation and Cobb angle measurement
CN114334093A (en) Image sequence processing method and device, computing equipment and storage medium
JP2019017993A (en) Medical image processing apparatus, method, and program
Hebbar et al. Content Based Medical Image Retrieval–Performance Comparison of Various Methods

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant