CN111627508B - Medical image reading method and device and computer readable medium - Google Patents

Medical image reading method and device and computer readable medium

Info

Publication number
CN111627508B
Authority
CN
China
Prior art keywords
focus
image
information
specific
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010355195.7A
Other languages
Chinese (zh)
Other versions
CN111627508A (en)
Inventor
崔宇
Current Assignee
Shukun Shanghai Medical Technology Co ltd
Original Assignee
Shukun Beijing Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shukun Beijing Network Technology Co Ltd
Priority to CN202010355195.7A
Publication of CN111627508A
Application granted
Publication of CN111627508B

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS

Abstract

The invention discloses a medical image reading method and device and a computer readable medium. One embodiment of the method comprises: acquiring medical image data and a corresponding diagnosis report; extracting lesion-related information from the diagnosis report; extracting a lesion-related image from the medical image data; and matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report. According to this embodiment, the acquired medical image data and diagnosis report are each processed by extraction to obtain the lesion-related information and lesion-related image, which are then matched to generate the lesion-based image-text interpretation report. Thus, without changing or adding to the content of the original diagnosis report provided by the hospital, the medical image diagnosis report and the medical image data are matched and fused in a combined image-and-text form, improving the patient's understanding of the medical image diagnosis report.

Description

Medical image reading method and device and computer readable medium
Technical Field
The invention belongs to the field of medical imaging, and particularly relates to a medical image reading method and device and a computer readable medium.
Background
Coronary heart disease has high incidence and mortality, and its early imaging diagnosis and accurate evaluation are of great significance for clinical decision-making and prognosis evaluation. Coronary CT angiography (CTA) is currently the first-choice non-invasive imaging examination for patients with suspected coronary heart disease in China. In clinical practice, after the imaging examination is completed, a radiologist issues a text imaging report and the examination images to the patient, who can obtain the electronic report and images for subsequent diagnosis and treatment.
In the conventional procedure, the patient receives an electronic medical image diagnosis report and an examination image file that are independent of each other: the findings and diagnostic impressions in the report are organized by lesion, while the images generally cover all of the scan data. The two are organized differently and correspond only at a global level; no content association is made at the level of the specific lesions described in the report, which impairs the patient's understanding of the content of the medical image diagnosis report.
Disclosure of Invention
In view of this, embodiments of the present invention provide a medical image interpretation method, device and computer readable medium, which can effectively improve the understandability of a medical image diagnosis report for a patient.
To achieve the above object, according to a first aspect of the embodiments of the present invention, there is provided a medical image interpretation method, including: acquiring medical image data and a corresponding diagnosis report; extracting lesion-related information from the diagnosis report; extracting a lesion-related image from the medical image data; and matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
Optionally, the extracting lesion-related information from the diagnosis report includes: performing interpretation processing on the diagnosis report to obtain a plurality of pieces of information, wherein the information comprises lesion information and non-lesion information, and the lesion information at least comprises vascular lesion information and non-vascular lesion information; extracting specific-lesion information and the site information corresponding to the specific lesion from the plurality of pieces of information; and determining the specific-lesion information and the site information corresponding to the specific lesion as the lesion-related information.
Optionally, extracting a lesion-related image from the medical image data includes: selecting a specific-site lesion image from the medical image data; extracting a specific-angle lesion image from the specific-site lesion image; and taking the specific-site lesion image and the specific-angle lesion image as the lesion-related images.
Optionally, the matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report includes: matching the specific-lesion information with the specific lesion image to obtain a first matching result; matching the site information corresponding to the specific lesion with the specific-site lesion image to obtain a second matching result; and combining the first matching result and the second matching result to generate the lesion-based image-text interpretation report.
Optionally, the selecting a specific-site lesion image from the medical image data includes: selecting lesion image data from the medical image data; performing anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different sites; labeling the lesion images of different sites by site to obtain site lesion image data with a first label; and selecting, from the site lesion image data with the first label, an image whose first label indicates a specific site to obtain the specific-site lesion image.
Optionally, the extracting a specific-angle lesion image from the specific-site lesion image includes: performing lesion recognition on the specific-site lesion image to obtain lesion images of different angles; labeling the lesion images of different angles by angle to obtain lesion image data with a second label; and extracting, from the lesion image data with the second label, a lesion image whose second label indicates a specific angle to obtain the specific-angle lesion image.
Optionally, the selecting lesion image data from the medical image data includes: classifying and recognizing the medical image data to obtain images of different kinds, wherein the different kinds of images comprise lesion images and non-lesion images, and the lesion images at least comprise vascular lesion images and non-vascular lesion images; labeling the different kinds of images by kind to obtain image data with a third label; and selecting, from the image data with the third label, the images whose third label indicates a lesion image to obtain the lesion image data.
In order to achieve the above object, according to a second aspect of the embodiments of the present invention, there is also provided a medical image interpretation apparatus, including: an acquisition module for acquiring medical image data and a corresponding diagnosis report; a first extraction module for extracting lesion-related information from the diagnosis report; a second extraction module for extracting a lesion-related image from the medical image data; and a matching module for matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
Optionally, the first extraction module includes: an algorithm unit for performing interpretation processing on the diagnosis report to obtain a plurality of pieces of information, wherein the information comprises lesion information and non-lesion information, and the lesion information at least comprises vascular lesion information and non-vascular lesion information; a first extraction unit for extracting specific-lesion information and the site information corresponding to the specific lesion from the plurality of pieces of information; and a determination unit for determining the specific-lesion information and the site information corresponding to the specific lesion as the lesion-related information.
To achieve the above object, according to a third aspect of the embodiments of the present invention, there is also provided a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the interpretation method according to the first aspect.
Based on the above technical solution, in the medical image interpretation method, device and computer readable medium of the embodiments of the present invention, medical image data and a corresponding diagnosis report are first acquired; a lesion-related image is then extracted from the medical image data and lesion-related information is extracted from the diagnosis report; finally, the lesion-related information is matched with the lesion-related image to generate a lesion-based image-text interpretation report. In this way, the acquired medical image data and diagnosis report are each processed by extraction to obtain the lesion-related information and lesion-related image, which are matched to generate the lesion-based image-text interpretation report. Thus, without changing or adding to the content of the original diagnosis report issued by the hospital, the medical image diagnosis report and the medical image data are matched and fused in a combined image-and-text form, which solves the poor readability of medical image diagnosis reports in the prior art and improves the patient's understanding of them.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein like or corresponding reference numerals designate like or corresponding parts throughout the several views.
FIG. 1 is a flowchart illustrating a medical image interpretation method according to an embodiment of the present invention;
FIG. 2 is a schematic view of an apparatus for interpreting medical images according to an embodiment of the present invention;
FIG. 3 is a diagram of an exemplary system architecture in which embodiments of the present invention may be employed;
fig. 4 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Fig. 5 is a schematic view of a lesion-based image interpretation report according to an embodiment of the present invention.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given only to enable those skilled in the art to better understand and to implement the present invention, and do not limit the scope of the present invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiments of the present invention extract a lesion-related image from medical image data, extract lesion-related information from the diagnosis report corresponding to the medical image data, and then match the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report. Matching and fusion between the medical image data and the diagnosis report are thereby realized, solving the poor readability of the original medical image diagnosis report and improving the patient's understanding of it.
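Taken end to end, the flow just described can be sketched in a few lines of Python. Everything below is an illustrative assumption: the function names, the comma-split "interpretation", and the dictionary image records are toy stand-ins for the trained models and DICOM data the embodiments actually use.

```python
def extract_lesion_info(report_text):
    # Toy stand-in for report interpretation: one finding per clause
    # that mentions a stenosis.
    return [c.strip() for c in report_text.split(",") if "stenosis" in c]

def extract_lesion_images(image_data):
    # Toy stand-in for image classification: keep only records tagged as lesions.
    return [img for img in image_data if img.get("lesion")]

def match_and_generate_report(lesion_info, lesion_images):
    # Pair each finding with the first image whose site name appears in the text.
    return [{"text": text,
             "image": next((img for img in lesion_images
                            if img["site"] in text), None)}
            for text in lesion_info]

def interpret_medical_images(image_data, report_text):
    """End-to-end sketch of steps S101-S104 described below."""
    return match_and_generate_report(extract_lesion_info(report_text),
                                     extract_lesion_images(image_data))
```

Each stub is replaced in the detailed embodiments by a dedicated step (S102 for the text, S103 for the images, S104 for the matching).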
Fig. 1 is a flowchart of a medical image interpretation method according to an embodiment of the present invention. The method includes at least the following steps.
S101, acquiring medical image data and a corresponding diagnosis report.
Specifically, electronic medical image data and electronic medical image diagnosis reports are generally produced by a hospital imaging department and stored on optical discs, USB flash drives, network cloud drives, internet cloud films, and the like.
S102, extracting lesion-related information from the diagnosis report.
Illustratively, interpretation processing is performed on the diagnosis report to obtain a plurality of pieces of information; the information comprises lesion information and non-lesion information, and the lesion information at least comprises vascular lesion information and non-vascular lesion information. Specific-lesion information and the site information corresponding to the specific lesion are extracted from the plurality of pieces of information and determined as the lesion-related information.
Specifically, interpretation processing is performed on the diagnosis report to generate a plurality of pieces of information related to vascular lesions; specific-lesion information and the vessel information corresponding to the specific lesion are extracted from the plurality of pieces of information, and the two are determined as the lesion-related information. Alternatively, interpretation processing is performed on the diagnosis report to generate a plurality of pieces of information related to non-vascular lesions; specific-lesion information and the non-vessel information corresponding to the specific lesion are extracted and determined as the lesion-related information. In addition, non-lesion information, such as coronary vessel origin information and coronary vessel course information, may also be extracted from the plurality of pieces of information.
For example, a natural language understanding algorithm is applied to the text content of the medical image diagnosis report to obtain a plurality of pieces of information, including coronary vessel origin information, dominance-type information, coronary vessel anatomical naming information, coronary vessel stenosis information, and coronary vessel plaque information.
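The patent leaves the natural language understanding algorithm unspecified. As a minimal sketch, a keyword matcher over clauses can stand in for it; the vessel vocabulary and lesion terms below are illustrative assumptions, not part of the patent:

```python
import re

# Illustrative vocabulary; a real system would use a trained NLU model.
VESSELS = ["left main", "left anterior descending", "left circumflex",
           "right coronary artery", "first diagonal", "second diagonal"]
LESION_TERMS = ["steno", "plaque", "myocardial bridge"]  # "steno" matches stenosis/stenotic

def interpret_report(text):
    """Split a report into per-vessel findings, flagging lesion vs non-lesion info."""
    findings = []
    for clause in re.split(r"[,;.]", text.lower()):
        vessel = next((v for v in VESSELS if v in clause), None)
        if vessel is None:
            continue  # clause does not name a coronary vessel
        findings.append({"site": vessel,
                         "text": clause.strip(),
                         "lesion": any(t in clause for t in LESION_TERMS)})
    return findings
```

Each finding pairs a site with its description, which is the shape of "specific-lesion information plus corresponding site information" used by the matching step.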
Fig. 5 is a schematic view of a lesion-based image-text interpretation report according to an embodiment of the present invention.
For example, the original content of a hospital-issued electronic report reads: "Coronary atherosclerosis: slight stenosis of the left main trunk; slight stenosis of the left anterior descending branch, with a myocardial bridge visible in its middle segment; slight stenosis of the first and second diagonal branches; severe stenosis of the left circumflex branch; severe stenosis of the right coronary artery." Interpreting this text yields reading-report content such as: "Slight stenosis of the left main trunk: calcified plaques are visible in the left main vessel wall, with slight narrowing of the lumen of about 7%."
S103, extracting a lesion-related image from the medical image data.
Illustratively: classify and recognize the medical image data to obtain images of different kinds, the different kinds comprising lesion images and non-lesion images, and the lesion images at least comprising vascular lesion images and non-vascular lesion images; label the different kinds of images by kind to obtain image data with a third label; and select, from the image data with the third label, the images whose third label indicates a lesion image to obtain lesion image data. Then perform anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different sites; label them by site to obtain site lesion image data with a first label; and select the lesion images whose first label indicates a specific site to obtain the specific-site lesion image. Next, perform lesion recognition on the specific-site lesion image to obtain lesion images of different angles; label them by angle to obtain lesion image data with a second label; and extract the lesion image whose second label indicates a specific angle to obtain the specific-angle lesion image. Finally, take the specific-site lesion image and the specific-angle lesion image as the lesion-related images.
Specifically, the medical image data are classified and recognized to obtain images of different kinds, comprising lesion images and non-lesion images, the lesion images at least comprising vascular lesion images and non-vascular lesion images; the different kinds of images are labeled by kind to obtain image data with a third label, and the images whose third label indicates a vascular lesion image are selected to obtain vascular lesion image data. Anatomical recognition is performed on each vascular lesion image to obtain vascular lesion images of different sites, which are labeled by site to obtain vascular lesion image data with a first label; the vascular lesion images whose first label indicates a specific site are selected to obtain the specific-site vascular lesion image. Lesion recognition is then performed on the specific-site vascular lesion image to obtain vascular lesion images of different angles, which are labeled by angle to obtain vascular lesion image data with a second label; the vascular lesion image whose second label indicates a specific angle is extracted to obtain the specific-angle vascular lesion image. The specific-site vascular lesion image and the specific-angle lesion image are taken as the lesion-related images. The process of extracting a specific-site non-vascular lesion image and a specific-angle lesion image from the non-vascular lesion images is the same as above and is not repeated here.
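The three labeling passes above (kind, then site, then angle) form a filter pipeline. The sketch below makes that control flow concrete; the `MedImage` record, the label names, and the rule-based classifiers are all hypothetical stand-ins for the trained recognizers the embodiments describe:

```python
from dataclasses import dataclass, field

@dataclass
class MedImage:
    meta: dict                                  # stands in for pixel data plus scan metadata
    labels: dict = field(default_factory=dict)  # first/second/third labels accumulate here

def tag(images, label_name, classifier):
    """Attach one label per image; `classifier` stands in for a trained model."""
    for img in images:
        img.labels[label_name] = classifier(img)
    return images

def select(images, label_name, wanted):
    """Keep only the images whose label matches the wanted value."""
    return [img for img in images if img.labels.get(label_name) == wanted]

def extract_lesion_images(images, site, angle):
    """Third label (kind), then first label (site), then second label (angle)."""
    lesions = select(tag(images, "kind", lambda i: i.meta["kind"]),
                     "kind", "vessel_lesion")
    at_site = select(tag(lesions, "site", lambda i: i.meta["site"]),
                     "site", site)
    return select(tag(at_site, "angle", lambda i: i.meta["angle"]),
                  "angle", angle)
```

Each stage narrows the candidate set before the next, more specific recognizer runs, which is the reason the patent orders the labels this way.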
In addition, coronary vessel course images and coronary vessel origin images can also be extracted from the medical image data.
The lesion-related image extracted from the medical image data is the image in the hospital-provided electronic images that is associated with the relevant anatomy and lesion, as shown in fig. 5.
S104, matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
Illustratively, the specific-lesion information is matched with the specific lesion image to obtain a first matching result; the site information corresponding to the specific lesion is matched with the specific-site lesion image to obtain a second matching result; and the first and second matching results are combined to generate the lesion-based image-text interpretation report.
Specifically, the specific-lesion information is matched with the specific lesion image to obtain a first matching result; the vascular lesion information corresponding to the specific lesion is matched with the specific-site vascular lesion image to obtain a second matching result; and the two results are combined to generate the lesion-based image-text interpretation report.
Specifically, the lesion-related information "Slight stenosis of the left main trunk: calcified plaques are visible in the left main vessel wall, with slight narrowing of the lumen of about 7%." is matched with the lesion-related image; the matching result is shown in fig. 5.
In addition, the coronary vessel origin information can be matched with the coronary vessel origin image to obtain a third matching result, and the coronary vessel course information can be matched with the coronary vessel course image to obtain a fourth matching result. The first, second, third and fourth matching results can be combined, or used independently, to generate the image-text interpretation report.
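The matching and combination steps can be sketched as two small functions. The dictionary shapes and the site-equality matching rule are illustrative assumptions; the patent only requires that text findings and images be paired and that the resulting matching results be merged or used alone:

```python
def match_results(findings, images):
    """Pair each finding (e.g. {"site": ..., "text": ...}) with the image
    whose site label matches it; unmatched findings keep image=None."""
    return [{"text": f["text"],
             "image": next((img for img in images
                            if img["site"] == f["site"]), None)}
            for f in findings]

def combine(*match_result_lists):
    """The first/second/third/fourth matching results may be merged into one
    image-text report, or any single result list used on its own."""
    merged = []
    for result in match_result_lists:
        merged.extend(result)
    return merged
```

A front-end can then render each `{"text", "image"}` pair side by side, which is the image-and-text presentation the embodiments describe.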
The interpretation report is displayed in image-and-text form through a front-end interface or by other means.
A vascular lesion image is an image of a blood vessel with a lesion; a non-vascular lesion image is an image of non-vascular tissue with a lesion. A specific lesion image is the lesion image of a particular lesion that clearly displays the lesion's structure. A specific-site vascular lesion image is the lesion image of a particular vessel, such as a lesion image of vessel a or vessel b. A specific-angle vascular lesion image is the viewing angle that most clearly displays the specific-site vascular lesion.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
According to the embodiments of the present invention, a lesion-related image is extracted from the medical image data, lesion-related information is extracted from the corresponding diagnosis report, and the two are matched to generate a lesion-based image-text interpretation report. Without changing or adding to the original report content provided by the hospital, matching and fusion between the medical image data and the diagnosis report are realized in a combined image-and-text form, solving the poor readability of the original medical image diagnosis report and improving the patient's understanding of it.
Fig. 2 is a schematic diagram of a medical image interpretation apparatus according to an embodiment of the present invention. The apparatus 200 comprises: an acquisition module 201 for acquiring medical image data and a corresponding diagnosis report; a first extraction module 202 for extracting lesion-related information from the diagnosis report; a second extraction module 203 for extracting a lesion-related image from the medical image data; and a matching module 204 for matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
In an alternative embodiment, the first extraction module 202 includes: an algorithm unit for performing interpretation processing on the diagnosis report to obtain a plurality of pieces of information, the information comprising lesion information and non-lesion information, and the lesion information at least comprising vascular lesion information and non-vascular lesion information; a first extraction unit for extracting specific-lesion information and the site information corresponding to the specific lesion from the plurality of pieces of information; and a determination unit for determining the specific-lesion information and the corresponding site information as the lesion-related information.
In an alternative embodiment, the second extraction module 203 comprises: a first extraction unit for selecting a specific-site lesion image from the medical image data; a second extraction unit for extracting a specific-angle lesion image from the specific-site lesion image; and a determination unit for taking the specific-site lesion image and the specific-angle lesion image as the lesion-related images.
In an alternative embodiment, the matching module 204 includes: a first matching unit for matching the specific-lesion information with the specific lesion image to obtain a first matching result; a second matching unit for matching the site information corresponding to the specific lesion with the specific-site lesion image to obtain a second matching result; and a generation unit for combining the first and second matching results to generate a lesion-based image-text interpretation report.
In an alternative embodiment, the first extraction unit comprises: a first selection subunit for selecting lesion image data from the medical image data; an anatomy subunit for performing anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different sites; a labeling subunit for labeling the lesion images of different sites by site to obtain site lesion image data with a first label; and a second selection subunit for selecting the lesion image whose first label indicates a specific site to obtain the specific-site lesion image.
In an alternative embodiment, the second extraction unit comprises: a recognition subunit for performing lesion recognition on the specific-site lesion image to obtain lesion images of different angles; a labeling subunit for labeling the lesion images of different angles by angle to obtain lesion image data with a second label; and an extraction subunit for extracting the lesion image whose second label indicates a specific angle to obtain the specific-angle lesion image.
In an alternative embodiment, the first selection subunit includes: a recognition unit for classifying and recognizing the medical image data to obtain images of different kinds, the different kinds comprising lesion images and non-lesion images, and the lesion images at least comprising vascular lesion images and non-vascular lesion images; a labeling unit for labeling the different kinds of images by kind to obtain image data with a third label; and a selection unit for selecting the images whose third label indicates a lesion image to obtain the lesion image data.
The apparatus can execute the medical image interpretation method provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to that method. For details not covered here, reference may be made to the method embodiments described above.
Fig. 3 is an exemplary system architecture diagram to which embodiments of the present invention may be applied. As shown in fig. 3, the system architecture 300 may include terminal devices 301, 302 and 303, a network 304 and a server 305. The network 304 serves as a medium providing communication links between the terminal devices 301, 302, 303 and the server 305, and may include various connection types, such as wired links, wireless communication links or fiber optic cables.
The user may use the terminal device 301, 302, 303 to interact with the server 305 via the network 304 to receive or send messages or the like. The terminal devices 301, 302, 303 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 301, 302, 303 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 305 may be a server providing various services, such as a background management server (for example only) providing support for click events generated by users of the terminal devices 301, 302, 303. The background management server may analyze and otherwise process received data such as click data and text content, and feed back a processing result (for example, target push information or product information, just an example) to the terminal device.
It should be noted that the method provided in the embodiments of the present application is generally executed by the server 305; accordingly, the interpretation apparatus is generally disposed in the server 305.
It should be understood that the number of terminal devices, networks, and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 4, shown is a block diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the present invention. The terminal device shown in fig. 4 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the system 400. The CPU 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404, to which an input/output (I/O) interface 405 is also connected. The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a display device such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read therefrom can be installed into the storage section 408 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. The computer program performs the above-described functions defined in the system of the present invention when executed by a Central Processing Unit (CPU) 401.
It should be noted that the computer-readable medium described in the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising a sending module, an obtaining module, a determining module, and a first processing module. The names of these modules do not, in some cases, constitute a limitation on the module itself; for example, the sending module may also be described as a "module that sends a picture acquisition request to a connected server".
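The module decomposition above can be illustrated with a minimal, hypothetical Python sketch; the class name, method bodies, and data fields here are illustrative assumptions, not the patented implementation:

```python
class InterpretationProcessor:
    """Illustrative grouping of the four modules named in the description;
    all bodies are placeholder logic, not the disclosed implementation."""

    def sending_module(self, server, request):
        # e.g. a "module that sends a picture acquisition request to a connected server"
        return server.send(request)

    def obtaining_module(self, source):
        # obtains medical image data and the corresponding diagnosis report
        return source.fetch()

    def determining_module(self, records):
        # determines which records are lesion-related (hypothetical flag name)
        return [r for r in records if r.get("is_lesion")]

    def first_processing_module(self, info, images):
        # pairs lesion information with lesion images into report entries
        return list(zip(info, images))
```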
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may be separate and not incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to perform: S101, acquiring medical image data and a corresponding diagnosis report; S102, extracting lesion-related information from the diagnosis report; S103, extracting lesion-related images from the medical image data; and S104, matching the lesion-related information with the lesion-related images to generate a lesion-based image-text interpretation report.
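The four steps S101–S104 can be sketched as a simple pipeline. The sketch below is a minimal illustration under invented assumptions (the site vocabulary, record fields, and keyword matching are all hypothetical stand-ins for the real extraction and matching logic), not the disclosed implementation:

```python
from dataclasses import dataclass, field

# Hypothetical anatomical-site vocabulary; the patent does not enumerate one.
SITES = ("coronary artery", "lung", "liver")

@dataclass
class ReportEntry:
    lesion_text: str                                   # from S102
    lesion_images: list = field(default_factory=list)  # from S103/S104

def extract_lesion_info(diagnosis_report):
    """S102: naive keyword pass standing in for real report interpretation."""
    return [{"site": s, "text": line.strip()}
            for line in diagnosis_report.splitlines()
            for s in SITES if s in line]

def extract_lesion_images(image_data):
    """S103: keep only images whose (hypothetical) metadata flags a lesion."""
    return [img for img in image_data if img.get("has_lesion")]

def generate_report(image_data, diagnosis_report):
    """S104: match lesion information to lesion images by anatomical site."""
    lesion_images = extract_lesion_images(image_data)
    return [ReportEntry(info["text"],
                        [i for i in lesion_images if i["site"] == info["site"]])
            for info in extract_lesion_info(diagnosis_report)]
```

S101 is assumed done by the caller, which passes the acquired image records and report text into `generate_report`; each returned entry pairs one piece of lesion text with its matched images, which is the image-text fusion the method describes.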
The embodiments of the present invention are directed to a medical image interpretation method, a medical image interpretation apparatus, and an electronic device. Medical image data and a corresponding diagnosis report are first acquired; lesion-related images are then extracted from the medical image data, and lesion-related information from the diagnosis report; finally, the lesion-related information is matched with the lesion-related images to generate a lesion-based image-text interpretation report. By separately extracting and processing the acquired medical image data and diagnosis report, and matching the resulting lesion-related information with the lesion-related images, the invention realizes content matching and fusion between the medical image diagnosis report and the medical image data in a combined image-text form, without changing or adding to the content of the original diagnosis report issued by the hospital. This alleviates the poor readability of medical image diagnosis reports in the prior art and improves a patient's comprehension of the report.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. The particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples described in this specification, and the features of different embodiments or examples, can be combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A medical image interpretation method is characterized by comprising the following steps:
acquiring medical image data and a corresponding diagnosis report;
extracting lesion-related information from the diagnostic report;
extracting a lesion-related image from the medical image data, wherein the lesion-related image comprises a specific lesion image;
matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report;
wherein extracting a lesion-related image from the medical image data comprises: selecting a lesion image of a specific part from the medical image data; extracting a lesion image of a specific angle from the lesion image of the specific part; and taking the lesion image of the specific part and the lesion image of the specific angle as the lesion-related images;
and extracting the lesion image of the specific angle from the lesion image of the specific part comprises: performing lesion identification on the lesion image of the specific part to obtain lesion images of different angles; marking the lesion images of different angles according to angle to obtain lesion image data with a second label; and extracting, from the lesion image data with the second label, a lesion image whose second label indicates a specific angle, to obtain the lesion image of the specific angle.
2. The interpretation method according to claim 1, wherein extracting lesion-related information from the diagnosis report comprises:
performing interpretation processing on the diagnosis report to obtain a plurality of pieces of information, wherein the information comprises lesion information and non-lesion information, and the lesion information comprises at least vascular lesion information and non-vascular lesion information;
extracting, from the plurality of pieces of information, specific lesion information and part information corresponding to the specific lesion;
and determining the specific lesion information and the part information corresponding to the specific lesion as the lesion-related information.
3. The interpretation method according to claim 2, wherein matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report comprises:
matching the specific lesion information with the specific lesion image to obtain a first matching result;
matching the part information corresponding to the specific lesion with the lesion image of the specific part to obtain a second matching result;
and combining the first matching result and the second matching result to generate the lesion-based image-text interpretation report.
4. The interpretation method according to claim 1, wherein selecting a lesion image of a specific part from the medical image data comprises:
selecting lesion image data from the medical image data;
performing anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different parts;
marking the lesion images of different parts according to part to obtain part lesion image data with a first label;
and selecting, from the part lesion image data with the first label, a lesion image whose first label indicates a specific part, to obtain the lesion image of the specific part.
5. The interpretation method according to claim 4, wherein selecting lesion image data from the medical image data comprises:
classifying and identifying the medical image data to obtain images of different types, wherein the different types of images comprise lesion images and non-lesion images, and the lesion images comprise at least blood vessel lesion images and non-blood-vessel lesion images;
marking the images of different types according to type to obtain image data with a third label;
and selecting, from the image data with the third label, images whose third label indicates a lesion image, to obtain the lesion image data.
6. A medical image interpretation apparatus, characterized by comprising:
an acquisition module for acquiring medical image data and a corresponding diagnosis report;
a first extraction module for extracting lesion-related information from the diagnosis report;
a second extraction module for extracting a lesion-related image from the medical image data, wherein the lesion-related image comprises a specific lesion image;
a matching module for matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report;
wherein the second extraction module comprises: a first extraction unit for selecting a lesion image of a specific part from the medical image data; a second extraction unit for extracting a lesion image of a specific angle from the lesion image of the specific part; and a determining unit for taking the lesion image of the specific part and the lesion image of the specific angle as the lesion-related images;
and the second extraction unit comprises: an identification subunit for performing lesion identification on the blood vessel image of the specific part to obtain lesion images of different angles; a marking subunit for marking the lesion images of different angles according to angle to obtain lesion image data with a second label; and an extraction subunit for extracting, from the lesion image data with the second label, a lesion image whose second label indicates a specific angle, to obtain the lesion image of the specific angle.
7. The interpretation apparatus according to claim 6, wherein the first extraction module comprises:
an arithmetic unit for performing interpretation processing on the diagnosis report to obtain a plurality of pieces of information, wherein the information comprises lesion information and non-lesion information, and the lesion information comprises at least vascular lesion information and non-vascular lesion information;
a first extraction subunit for extracting, from the plurality of pieces of information, specific lesion information and part information corresponding to the specific lesion;
and a determination unit for determining the specific lesion information and the part information corresponding to the specific lesion as the lesion-related information.
8. The interpretation apparatus according to claim 7, wherein the matching module comprises: a first matching unit for matching the specific lesion information with the specific lesion image to obtain a first matching result; a second matching unit for matching the part information corresponding to the specific lesion with the lesion image of the specific part to obtain a second matching result; and a generating unit for combining the first matching result and the second matching result to generate a lesion-based image-text interpretation report.
9. The interpretation apparatus according to claim 6, wherein the first extraction unit comprises: a first selection subunit for selecting lesion image data from the medical image data; an anatomy subunit for performing anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different parts; a marking subunit for marking the lesion images of different parts according to part to obtain part lesion image data with a first label; and a second selection subunit for selecting, from the part lesion image data with the first label, a lesion image whose first label indicates a specific part, to obtain the lesion image of the specific part.
10. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
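The three-level labelling scheme in claims 1, 4, and 5 (a third label for image type, a first label for anatomical part, a second label for viewing angle) amounts to successive filters over tagged image records. The following sketch is illustrative only, with invented field names and values; it is not the claimed implementation:

```python
def select_lesion_images(records, part, angle):
    """Successively filter tagged image records, mirroring claims 5 -> 4 -> 1.

    Each record is assumed (hypothetically) to carry three tags:
      type_label  (third label):  'lesion' or 'non-lesion'
      part_label  (first label):  anatomical part, e.g. 'coronary'
      angle_label (second label): viewing angle, e.g. 'axial'
    """
    # Claim 5: keep images whose third label indicates a lesion.
    lesions = [r for r in records if r["type_label"] == "lesion"]
    # Claim 4: keep images whose first label indicates the specific part.
    at_part = [r for r in lesions if r["part_label"] == part]
    # Claim 1: keep images whose second label indicates the specific angle.
    at_angle = [r for r in at_part if r["angle_label"] == angle]
    # Claim 1 takes both the part-level and angle-level selections
    # together as the lesion-related images.
    return at_part, at_angle
```

Reading the filters in this order makes the dependency between the claims concrete: claim 5 produces the lesion image data, claim 4 narrows it to a specific part, and claim 1 narrows that to a specific angle.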
CN202010355195.7A 2020-04-29 2020-04-29 Medical image reading method and device and computer readable medium Active CN111627508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010355195.7A CN111627508B (en) 2020-04-29 2020-04-29 Medical image reading method and device and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010355195.7A CN111627508B (en) 2020-04-29 2020-04-29 Medical image reading method and device and computer readable medium

Publications (2)

Publication Number Publication Date
CN111627508A CN111627508A (en) 2020-09-04
CN111627508B true CN111627508B (en) 2021-05-11

Family

ID=72271756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010355195.7A Active CN111627508B (en) 2020-04-29 2020-04-29 Medical image reading method and device and computer readable medium

Country Status (1)

Country Link
CN (1) CN111627508B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488493B (en) * 2020-11-27 2023-06-23 西安电子科技大学 Medical imaging physician focus recognition capability assessment method, system and computer readable medium for fusing position information
CN112686891B (en) * 2021-03-09 2022-05-13 数坤(北京)网络科技股份有限公司 Image processing method and device and computer readable medium
CN114066969B (en) * 2021-04-23 2022-04-26 数坤(北京)网络科技股份有限公司 Medical image analysis method and related product
CN113380380A (en) * 2021-06-23 2021-09-10 上海电子信息职业技术学院 Intelligent reading device for medical reports
CN113838560A (en) * 2021-09-09 2021-12-24 王其景 Remote diagnosis system and method based on medical image
CN113889236B (en) * 2021-10-08 2022-05-17 数坤(北京)网络科技股份有限公司 Medical image processing method and device and computer readable storage medium
CN114242197B (en) * 2021-12-21 2022-09-09 数坤(北京)网络科技股份有限公司 Structured report processing method and device and computer readable storage medium
CN116798596A (en) * 2022-03-14 2023-09-22 数坤(北京)网络科技股份有限公司 Information association method, device, electronic equipment and storage medium
CN115018795B (en) * 2022-06-09 2023-04-07 北京医准智能科技有限公司 Method, device and equipment for matching focus in medical image and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139443A (en) * 2015-07-30 2015-12-09 芜湖卫健康物联网医疗科技有限公司 Three-dimensional imaging system and method of diagnosis result
CN106529115A (en) * 2015-09-09 2017-03-22 佳能株式会社 Information processing device, information processing method, and information processing system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106569673B (en) * 2016-11-11 2021-05-28 科亚医疗科技股份有限公司 Display method and display equipment for multimedia medical record report
CN106898044B (en) * 2017-02-28 2020-08-04 成都金盘电子科大多媒体技术有限公司 Organ splitting and operating method and system based on medical images and by utilizing VR technology
CN107273657A (en) * 2017-05-15 2017-10-20 慧影医疗科技(北京)有限公司 The generation method and storage device of diagnostic imaging picture and text report
US10140421B1 (en) * 2017-05-25 2018-11-27 Enlitic, Inc. Medical scan annotator system
JP7127370B2 (en) * 2018-06-08 2022-08-30 コニカミノルタ株式会社 Interpretation report creation device
CN109460756B (en) * 2018-11-09 2021-08-13 天津新开心生活科技有限公司 Medical image processing method and device, electronic equipment and computer readable medium
CN110148127B (en) * 2019-05-23 2021-05-11 数坤(北京)网络科技有限公司 Intelligent film selection method, device and storage equipment for blood vessel CTA post-processing image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139443A (en) * 2015-07-30 2015-12-09 芜湖卫健康物联网医疗科技有限公司 Three-dimensional imaging system and method of diagnosis result
CN106529115A (en) * 2015-09-09 2017-03-22 佳能株式会社 Information processing device, information processing method, and information processing system

Also Published As

Publication number Publication date
CN111627508A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111627508B (en) Medical image reading method and device and computer readable medium
US10657220B2 (en) System and methods for medical reporting
CN107665736B (en) Method and apparatus for generating information
EP2380140B1 (en) Generating views of medical images
JP2014012208A (en) Efficient imaging system and method
KR20140024788A (en) Advanced multimedia structured reporting
US11900266B2 (en) Database systems and interactive user interfaces for dynamic conversational interactions
CN109887077B (en) Method and apparatus for generating three-dimensional model
WO2021259391A2 (en) Image processing method and apparatus, and electronic device and storage medium
US10977796B2 (en) Platform for evaluating medical information and method for using the same
CN113658175B (en) Method and device for determining sign data
JP2019149005A (en) Medical document creation support apparatus, method, and program
US20230368893A1 (en) Image context aware medical recommendation engine
JPWO2019193982A1 (en) Medical document creation support device, medical document creation support method, and medical document creation support program
CN114359295A (en) Focus segmentation method and device, electronic device and storage medium
US11132793B2 (en) Case-adaptive medical image quality assessment
Chandrashekhar et al. CAD-RADS: a giant first step toward a common lexicon?
US10176569B2 (en) Multiple algorithm lesion segmentation
CN111627023B (en) Method and device for generating coronary artery projection image and computer readable medium
CN113870178A (en) Plaque artifact correction and component analysis method and device based on artificial intelligence
Kampaktsis et al. Artificial intelligence in atherosclerotic disease: Applications and trends
CN111738986B (en) Fat attenuation index generation method and device and computer readable medium
CN115330696A (en) Detection method, device and equipment of bracket and storage medium
US20170322684A1 (en) Automation Of Clinical Scoring For Decision Support
US11704793B2 (en) Diagnostic support server device, terminal device, diagnostic support system, diagnostic support process,diagnostic support device, and diagnostic support program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Patentee after: Shukun (Beijing) Network Technology Co.,Ltd.

Address before: Rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Patentee before: SHUKUN (BEIJING) NETWORK TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230110

Address after: Room 307, Zone A, Floor 2, No. 420, Fenglin Road, Xuhui District, Shanghai, 200000

Patentee after: Shukun (Shanghai) Medical Technology Co.,Ltd.

Address before: Rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Patentee before: Shukun (Beijing) Network Technology Co.,Ltd.