Disclosure of Invention
In view of this, embodiments of the present invention provide a medical image interpretation method, apparatus and computer-readable medium, which can effectively improve the intelligibility of a medical image diagnosis report for a patient.
To achieve the above object, according to a first aspect of the embodiments of the present invention, there is provided a medical image interpretation method, including: acquiring medical image data and a corresponding diagnosis report; extracting lesion-related information from the diagnosis report; extracting a lesion-related image from the medical image data; and matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
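The four steps of the first aspect can be sketched as a small pipeline. The helper functions and the keyword-based matching below are illustrative assumptions only, standing in for the classifiers and natural-language models an actual embodiment would use:

```python
from dataclasses import dataclass, field

@dataclass
class InterpretationReport:
    """A lesion-based image-text interpretation report: paired (text, image) entries."""
    entries: list = field(default_factory=list)

def extract_lesion_info(diagnosis_report: str) -> list:
    # Placeholder for S102: split the report into per-lesion findings.
    return [s.strip() for s in diagnosis_report.split(",") if "stenosis" in s]

def extract_lesion_images(image_data: dict) -> dict:
    # Placeholder for S103: keep only images tagged as showing a lesion.
    return {site: img for site, img in image_data.items() if img.get("lesion")}

def interpret(image_data: dict, diagnosis_report: str) -> InterpretationReport:
    """S101-S104: take acquired data, extract info and images, match by site keyword."""
    info = extract_lesion_info(diagnosis_report)
    images = extract_lesion_images(image_data)
    report = InterpretationReport()
    for finding in info:
        for site, img in images.items():
            if site in finding:  # S104: naive substring match of site name
                report.entries.append((finding, img["id"]))
    return report
```

A usage example: given two images, only the one showing a lesion at a site named in the report is paired with the finding text.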
Optionally, the extracting lesion-related information from the diagnosis report includes: performing interpretation processing on the diagnosis report to obtain a plurality of pieces of information, wherein the information includes lesion information and non-lesion information, and the lesion information includes at least vascular lesion information and non-vascular lesion information; extracting specific-lesion information and site information corresponding to the specific lesion from the plurality of pieces of information; and determining the specific-lesion information and the site information corresponding to the specific lesion as the lesion-related information.
Optionally, the extracting a lesion-related image from the medical image data includes: selecting a lesion image of a specific site from the medical image data; extracting a lesion image of a specific angle from the lesion image of the specific site; and taking the lesion image of the specific site and the lesion image of the specific angle as the lesion-related image.
Optionally, the matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report includes: matching the specific-lesion information with the specific-lesion image to obtain a first matching result; matching the site information corresponding to the specific lesion with the lesion image of the specific site to obtain a second matching result; and combining the first matching result and the second matching result to generate the lesion-based image-text interpretation report.
Optionally, the selecting a lesion image of a specific site from the medical image data includes: selecting lesion image data from the medical image data; performing anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different sites; marking the lesion images of different sites by site to obtain site lesion image data with a first label; and selecting, from the site lesion image data with the first label, an image whose first label indicates the specific site, to obtain the lesion image of the specific site.
Optionally, the extracting a lesion image of a specific angle from the lesion image of the specific site includes: performing lesion recognition on the lesion image of the specific site to obtain lesion images of different angles; marking the lesion images of different angles by angle to obtain lesion image data with a second label; and extracting, from the lesion image data with the second label, a lesion image whose second label indicates the specific angle, to obtain the lesion image of the specific angle.
Optionally, the selecting lesion image data from the medical image data includes: classifying and identifying the medical image data to obtain images of different types, wherein the images of different types include lesion images and non-lesion images, and the lesion images include at least vascular lesion images and non-vascular lesion images; marking the images of different types by type to obtain image data with a third label; and selecting, from the image data with the third label, images whose third label indicates a lesion image, to obtain the lesion image data.
To achieve the above object, according to a second aspect of the embodiments of the present invention, there is also provided a medical image interpretation apparatus, including: an acquisition module configured to acquire medical image data and a corresponding diagnosis report; a first extraction module configured to extract lesion-related information from the diagnosis report; a second extraction module configured to extract a lesion-related image from the medical image data; and a matching module configured to match the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
Optionally, the first extraction module includes: an algorithm unit configured to perform interpretation processing on the diagnosis report to obtain a plurality of pieces of information, wherein the information includes lesion information and non-lesion information, and the lesion information includes at least vascular lesion information and non-vascular lesion information; a first extraction unit configured to extract specific-lesion information and site information corresponding to the specific lesion from the plurality of pieces of information; and a determination unit configured to determine the specific-lesion information and the site information corresponding to the specific lesion as the lesion-related information.
To achieve the above object, according to a third aspect of the embodiments of the present invention, there is also provided a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the medical image interpretation method according to the first aspect.
Based on the above technical solutions, in the embodiments of the present invention, medical image data and a corresponding diagnosis report are first acquired; a lesion-related image is then extracted from the medical image data, and lesion-related information is extracted from the diagnosis report; finally, the lesion-related information is matched with the lesion-related image to generate a lesion-based image-text interpretation report. In other words, the acquired medical image data and diagnosis report are each processed by extraction to obtain the lesion-related information and the lesion-related image, and the two are matched to generate the lesion-based image-text interpretation report. Therefore, without changing or adding to the content of the original diagnosis report issued by the hospital, content matching and fusion between the medical image diagnosis report and the medical image data are achieved in a combined image-and-text manner, which solves problems such as the poor readability of medical image diagnosis reports in the prior art and improves a patient's understanding of the medical image diagnosis report.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It should be understood that these embodiments are given only to enable those skilled in the art to better understand and implement the present invention, and do not limit the scope of the present invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiments of the present invention aim to generate a lesion-related image by extraction from medical image data, generate lesion-related information by extraction from the diagnosis report corresponding to the medical image data, and then generate a lesion-based image-text interpretation report by matching the lesion-related information with the lesion-related image. Matching and fusion between the medical image data and the diagnosis report are thereby achieved, problems such as the poor readability of the original medical image diagnosis report are solved, and the patient's understanding of the medical image diagnosis report is improved.
Fig. 1 is a flowchart of a medical image interpretation method according to an embodiment of the present invention. The method includes at least the following steps. S101: acquire medical image data and a corresponding diagnosis report.
Specifically, electronic medical image data and electronic medical image diagnosis reports are generally produced by hospital imaging departments and stored on media such as optical disks, USB flash drives, network cloud disks and Internet cloud films.
S102: extract lesion-related information from the diagnosis report.
Illustratively, interpretation processing is performed on the diagnosis report to obtain a plurality of pieces of information, where the information includes lesion information and non-lesion information, and the lesion information includes at least vascular lesion information and non-vascular lesion information; specific-lesion information and site information corresponding to the specific lesion are extracted from the plurality of pieces of information; and the specific-lesion information and the site information corresponding to the specific lesion are determined as the lesion-related information.
Specifically, interpretation processing is performed on the diagnosis report to generate a plurality of pieces of information related to vascular lesions; specific-lesion information and vessel information corresponding to the specific lesion are extracted from the plurality of pieces of information; and the specific-lesion information and the vessel information corresponding to the specific lesion are determined as the lesion-related information. Alternatively, interpretation processing is performed on the diagnosis report to generate a plurality of pieces of information related to non-vascular lesions; specific-lesion information and non-vessel information corresponding to the specific lesion are extracted from the plurality of pieces of information; and the specific-lesion information and the non-vessel information corresponding to the specific lesion are determined as the lesion-related information. In addition, non-lesion information, such as coronary vessel origin information and coronary vessel course information, may also be extracted from the plurality of pieces of information.
For example, natural language understanding is applied to the text content of the medical image diagnosis report to obtain a plurality of pieces of information, including coronary vessel origin information, dominance type information, coronary anatomy naming information, coronary stenosis information and coronary plaque information.
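As a rough illustration of this step, a rule-based extractor can pull stenosis findings out of report text. The vessel vocabulary and the regular expression below are illustrative stand-ins for a real natural-language-understanding model, not part of the described embodiment:

```python
import re

# Hypothetical rule-based stand-in for the natural-language-understanding step:
# extract (vessel, severity) pairs for stenosis findings from report text.
STENOSIS = re.compile(
    r"(left main|left anterior descending|left circumflex|right coronary artery)"
    r"[^,;.]*?(slight|moderate|severe)\s+stenosis",
    re.IGNORECASE,
)

def extract_stenosis_findings(report_text: str) -> list:
    """Return a list of (vessel name, severity) tuples found in the text."""
    return [(m.group(1), m.group(2)) for m in STENOSIS.finditer(report_text)]
```

A real system would rely on a trained model and a full anatomical lexicon; the regex merely shows the shape of the extracted information.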
Fig. 5 is a schematic view of a focus-based image-text interpretation report according to an embodiment of the present invention.
For example, the original content of a hospital-issued electronic report reads: "Coronary atherosclerosis: slight stenosis of the left main trunk; slight stenosis of the left anterior descending branch, with a myocardial bridge visible in its middle segment; slight stenosis of the first and second diagonal branches; severe stenosis of the left circumflex branch; severe stenosis of the right coronary artery." Interpreting the textual description in this electronic report yields interpretation report content such as: "Slight stenosis of the left main trunk: calcified plaque is visible in the wall of the left main vessel, with slight narrowing of the lumen of about 7%."
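The expansion of a terse finding into patient-friendly wording can be sketched with a hypothetical severity glossary. The phrasing and percentage ranges below are illustrative assumptions only and are not taken from the report above:

```python
# Hypothetical severity glossary; wording and percentage ranges are illustrative.
SEVERITY_TEXT = {
    "slight": "the lumen is slightly narrowed (typically less than 25%)",
    "moderate": "the lumen is moderately narrowed (typically 25% to 50%)",
    "severe": "the lumen is severely narrowed (typically more than 50%)",
}

def interpret_finding(vessel: str, severity: str) -> str:
    """Expand a (vessel, severity) stenosis finding into a reader-friendly sentence."""
    detail = SEVERITY_TEXT[severity]
    return (f"{severity.capitalize()} stenosis of the {vessel}: "
            f"plaque is visible in the wall of the {vessel}, and {detail}.")
```

In an actual embodiment the interpreted text would also carry the measured stenosis percentage (e.g. "about 7%") extracted from or computed for the specific lesion.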
S103: extract a lesion-related image from the medical image data.
Illustratively, the medical image data is classified and identified to obtain images of different types, where the images of different types include lesion images and non-lesion images, and the lesion images include at least vascular lesion images and non-vascular lesion images; the images of different types are marked by type to obtain image data with a third label; and images whose third label indicates a lesion image are selected from the image data with the third label to obtain lesion image data. Anatomical recognition is performed on each lesion image in the lesion image data to obtain lesion images of different sites; the lesion images of different sites are marked by site to obtain site lesion image data with a first label; and a lesion image whose first label indicates a specific site is selected from the site lesion image data with the first label to obtain a lesion image of the specific site. Lesion recognition is performed on the lesion image of the specific site to obtain lesion images of different angles; the lesion images of different angles are marked by angle to obtain lesion image data with a second label; and a lesion image whose second label indicates a specific angle is extracted from the lesion image data with the second label to obtain a lesion image of the specific angle. The lesion image of the specific site and the lesion image of the specific angle are taken as the lesion-related image.
Specifically, the medical image data is classified and identified to obtain images of different types, where the images of different types include lesion images and non-lesion images, and the lesion images include at least vascular lesion images and non-vascular lesion images; the images of different types are marked by type to obtain image data with a third label; and images whose third label indicates a vascular lesion image are selected from the image data with the third label to obtain vascular lesion image data. Anatomical recognition is performed on each vascular lesion image in the vascular lesion image data to obtain vascular lesion images of different sites; the vascular lesion images of different sites are marked by site to obtain vascular lesion image data with a first label; and a vascular lesion image whose first label indicates a specific site is selected from the vascular lesion image data with the first label to obtain a vascular lesion image of the specific site. Lesion recognition is performed on the vascular lesion image of the specific site to obtain vascular lesion images of different angles; the vascular lesion images of different angles are marked by angle to obtain vascular lesion image data with a second label; and a vascular lesion image whose second label indicates a specific angle is extracted from the vascular lesion image data with the second label to obtain a vascular lesion image of the specific angle. The vascular lesion image of the specific site and the lesion image of the specific angle are taken as the lesion-related image. The process of extracting a non-vascular lesion image of a specific site and a lesion image of a specific angle from non-vascular lesion images is the same as described above and is not repeated here.
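The three labelling-and-selection stages above (type via the third label, site via the first label, angle via the second label) reduce to the same tag-then-filter pattern. The dictionary-backed labellers below are hypothetical stand-ins for the classification, anatomical-recognition and angle-recognition models:

```python
def tag(images, key, labeller):
    """Attach a label computed by `labeller` to each image record under `key`."""
    for img in images:
        img[key] = labeller(img)
    return images

def select(images, key, wanted):
    """Keep only images whose label under `key` equals `wanted`."""
    return [img for img in images if img[key] == wanted]

# Hypothetical labellers: real embodiments would run recognition models here.
def classify_type(img):   return img["meta"]["type"]    # third label
def recognise_site(img):  return img["meta"]["site"]    # first label
def recognise_angle(img): return img["meta"]["angle"]   # second label

def select_specific_angle_lesion_image(medical_images, site, angle):
    """Cascade: lesion images -> lesion images of a site -> one viewing angle."""
    lesions = select(tag(medical_images, "type", classify_type), "type", "lesion")
    at_site = select(tag(lesions, "site", recognise_site), "site", site)
    return select(tag(at_site, "angle", recognise_angle), "angle", angle)
```

The cascade discards non-lesion images first, then narrows by anatomical site, then by angle, mirroring the third-, first- and second-label selections described above.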
In addition, a coronary vessel course image and a coronary vessel origin image may also be extracted from the medical image data.
A lesion-related image is extracted from the medical image data; the lesion-related image is an image, among the electronic images provided by the hospital, that is associated with the current anatomy and lesion, as shown in fig. 5.
S104: match the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
Illustratively, the specific-lesion information is matched with the specific-lesion image to obtain a first matching result; the site information corresponding to the specific lesion is matched with the lesion image of the specific site to obtain a second matching result; and the first matching result and the second matching result are combined to generate the lesion-based image-text interpretation report.
Specifically, the specific-lesion information is matched with the specific-lesion image to obtain a first matching result; the vascular lesion information corresponding to the specific lesion is matched with the vascular lesion image of the specific site to obtain a second matching result; and the first matching result and the second matching result are combined to generate the lesion-based image-text interpretation report.
Specifically, the lesion-related information "Slight stenosis of the left main trunk: calcified plaque is visible in the wall of the left main vessel, with slight narrowing of the lumen of about 7%." is matched with the lesion-related image; the matching result is shown in fig. 5.
In addition, the coronary vessel origin information may be matched with the coronary vessel origin image to obtain a third matching result, and the coronary vessel course information may be matched with the coronary vessel course image to obtain a fourth matching result. The first, second, third and fourth matching results may be combined, or used independently, to generate the image-text interpretation report.
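The combination of matching results into one report can be sketched as follows. Matching here is a simple substring lookup of a site name in the information text, an illustrative assumption standing in for whatever matching criterion an embodiment actually uses:

```python
def match(info_items, images, key):
    """Pair each piece of information with every image whose `key` value
    (e.g. a site name) appears in the information text."""
    return [(info, img) for info in info_items for img in images if img[key] in info]

def build_report(*match_results):
    """Combine any number of matching results (first, second, third, fourth, ...)
    into one ordered list of (text, image id) entries; each matching result may
    also be used on its own."""
    report = []
    for result in match_results:
        report.extend((info, img["id"]) for info, img in result)
    return report
```

The variadic `build_report` reflects the statement above that the matching results may be combined or used independently: passing a single result produces a report from that result alone.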
The interpretation report is displayed in image-text form through a front-end interface or in other ways.
A vascular lesion image is an image of a blood vessel bearing a lesion. A non-vascular lesion image is an image of non-vascular tissue bearing a lesion. A specific-lesion image is a lesion image of a particular lesion that clearly shows the structure of that lesion. A vascular lesion image of a specific site is the vascular lesion image of a particular blood vessel, such as a lesion image of vessel a or vessel b. A vascular lesion image of a specific angle is a vascular lesion image taken at an angle that clearly shows the vascular lesion at the specific site.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
According to the embodiments of the present invention, a lesion-related image is generated by extraction from medical image data, lesion-related information is generated by extraction from the diagnosis report corresponding to the medical image data, and the lesion-related information and the lesion-related image are then matched to generate a lesion-based image-text interpretation report. Without changing or adding to the original report content provided by the hospital, matching and fusion between the medical image data and the diagnosis report are achieved in a combined image-and-text manner; the problem of the poor readability of the original medical image diagnosis report is solved, and the patient's understanding of the medical image diagnosis report is improved.
Fig. 2 is a schematic diagram of a medical image interpretation apparatus according to an embodiment of the present invention. The apparatus 200 includes: an acquisition module 201 configured to acquire medical image data and a corresponding diagnosis report; a first extraction module 202 configured to extract lesion-related information from the diagnosis report; a second extraction module 203 configured to extract a lesion-related image from the medical image data; and a matching module 204 configured to match the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
In an alternative embodiment, the first extraction module 202 includes: an algorithm unit configured to perform interpretation processing on the diagnosis report to obtain a plurality of pieces of information, where the information includes lesion information and non-lesion information, and the lesion information includes at least vascular lesion information and non-vascular lesion information; a first extraction unit configured to extract specific-lesion information and site information corresponding to the specific lesion from the plurality of pieces of information; and a determination unit configured to determine the specific-lesion information and the site information corresponding to the specific lesion as the lesion-related information.
In an alternative embodiment, the second extraction module 203 includes: a first extraction unit configured to select a lesion image of a specific site from the medical image data; a second extraction unit configured to extract a lesion image of a specific angle from the lesion image of the specific site; and a determination unit configured to take the lesion image of the specific site and the lesion image of the specific angle as the lesion-related image.
In an alternative embodiment, the matching module 204 includes: a first matching unit configured to match the specific-lesion information with the specific-lesion image to obtain a first matching result; a second matching unit configured to match the site information corresponding to the specific lesion with the lesion image of the specific site to obtain a second matching result; and a generation unit configured to combine the first matching result and the second matching result to generate a lesion-based image-text interpretation report.
In an alternative embodiment, the first extraction unit includes: a first selection subunit configured to select lesion image data from the medical image data; an anatomy subunit configured to perform anatomical recognition on each lesion image in the lesion image data to obtain lesion images of different sites; a marking subunit configured to mark the lesion images of different sites by site to obtain site lesion image data with a first label; and a second selection subunit configured to select, from the site lesion image data with the first label, a lesion image whose first label indicates a specific site, to obtain the lesion image of the specific site.
In an alternative embodiment, the second extraction unit includes: a recognition subunit configured to perform lesion recognition on the lesion image of the specific site to obtain lesion images of different angles; a marking subunit configured to mark the lesion images of different angles by angle to obtain lesion image data with a second label; and an extraction subunit configured to extract, from the lesion image data with the second label, a lesion image whose second label indicates a specific angle, to obtain the lesion image of the specific angle.
In an alternative embodiment, the first selection subunit includes: an identification unit configured to classify and identify the medical image data to obtain images of different types, where the images of different types include lesion images and non-lesion images, and the lesion images include at least vascular lesion images and non-vascular lesion images; a marking unit configured to mark the images of different types by type to obtain image data with a third label; and a selection unit configured to select, from the image data with the third label, images whose third label indicates a lesion image, to obtain the lesion image data.
The apparatus can execute the medical image interpretation method provided by the embodiments of the present invention, and has functional modules and beneficial effects corresponding to that method. For details not described here, reference may be made to the foregoing description of the method.
Fig. 3 is an exemplary system architecture diagram to which embodiments of the present invention may be applied. As shown in fig. 3, the system architecture 300 may include terminal devices 301, 302 and 303, a network 304 and a server 305. The network 304 serves as a medium for providing communication links between the terminal devices 301, 302, 303 and the server 305. The network 304 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 301, 302, 303 to interact with the server 305 via the network 304, for example to receive or send messages. Various communication client applications may be installed on the terminal devices 301, 302, 303, such as shopping applications, web browsers, search applications, instant messaging tools, mailbox clients and social platform software (by way of example only).
The terminal devices 301, 302, 303 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 305 may be a server providing various services, for example a background management server (by way of example only) that supports click events generated by users of the terminal devices 301, 302, 303. The background management server may analyze and otherwise process received data such as click data and text content, and feed a processing result (for example, target push information or product information; by way of example only) back to the terminal device.
It should be noted that the interpretation method provided in the embodiments of the present invention is generally executed by the server 305, and accordingly the interpretation apparatus is generally disposed in the server 305.
It should be understood that the number of terminal devices, networks, and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 4, shown is a block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment. The terminal device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 4, the computer system 400 includes a central processing unit (CPU) 401 that can perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the system 400. The CPU 401, the ROM 402 and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404. The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse and the like; an output section 407 including a display device such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. The computer program performs the above-described functions defined in the system of the present invention when executed by a Central Processing Unit (CPU) 401.
It should be noted that the computer-readable medium shown in the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device. In the present invention, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, fiber-optic cable or RF, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor including a sending module, an obtaining module, a determining module, and a first processing module. The names of these modules do not in some cases constitute a limitation on the module itself; for example, the sending module may also be described as a "module that sends a picture acquisition request to a connected server".
As another aspect, the present invention also provides a computer readable medium, which may be contained in the device described in the above embodiments, or may exist separately without being incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform: S101, acquiring medical image data and a corresponding diagnosis report; S102, extracting lesion-related information from the diagnosis report; S103, extracting a lesion-related image from the medical image data; and S104, matching the lesion-related information with the lesion-related image to generate a lesion-based image-text interpretation report.
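The four steps S101 to S104 can be illustrated with a minimal pipeline sketch. All names below (`LesionInfo`, `extract_lesion_info`, the toy report and image inputs) are illustrative assumptions, not part of the disclosed implementation; real report parsing and image selection would be far more involved.

```python
# Hypothetical sketch of the S101-S104 pipeline; all names are
# illustrative assumptions, not part of the original disclosure.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class LesionInfo:
    lesion: str  # specific lesion, e.g. "soft plaque"
    site: str    # body part the lesion corresponds to


def extract_lesion_info(report: str) -> List[LesionInfo]:
    """S102: parse the diagnosis report into lesion/site pairs (toy rule)."""
    infos = []
    for line in report.splitlines():
        if ":" in line:
            site, lesion = line.split(":", 1)
            infos.append(LesionInfo(lesion.strip(), site.strip()))
    return infos


def extract_lesion_images(image_data: Dict[str, str]) -> Dict[str, str]:
    """S103: select images keyed by site (stand-in for real image selection)."""
    return dict(image_data)


def build_report(infos: List[LesionInfo],
                 images: Dict[str, str]) -> List[dict]:
    """S104: match lesion info to images and assemble image-text entries."""
    entries = []
    for info in infos:
        entries.append({
            "text": f"{info.site}: {info.lesion}",
            "image": images.get(info.site, "<no matching image>"),
        })
    return entries


# S101: acquire data (toy stand-ins for an image series and report text)
report_text = "carotid artery: soft plaque\nliver: cyst"
image_data = {"carotid artery": "carotid_axial.png",
              "liver": "liver_axial.png"}

entries = build_report(extract_lesion_info(report_text),
                       extract_lesion_images(image_data))
for e in entries:
    print(e["text"], "->", e["image"])
```

The point of the sketch is only the data flow: the report and the image data are processed independently (S102, S103) and only joined in the final matching step (S104), which mirrors the claimed method.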
The embodiments of the invention are directed to a medical image interpretation method, a medical image interpretation device and electronic equipment. Medical image data and a corresponding diagnosis report are first acquired; lesion-related images are then extracted from the medical image data and lesion-related information is extracted from the diagnosis report; finally, the lesion-related information is matched with the lesion-related images to generate a lesion-based image-text interpretation report. In this way, the invention obtains the lesion-related information and the lesion-related images by separately processing the acquired medical image data and the diagnosis report, and generates the lesion-based image-text interpretation report by matching the two. Therefore, without changing or adding to the content of the original diagnosis report issued by the hospital, the medical image diagnosis report and the medical image data are matched and fused in a combined image-and-text form, which overcomes the poor readability of medical image diagnosis reports in the prior art and improves their intelligibility for patients.
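The two-stage matching described in the optional embodiments (matching specific-lesion information against specific-lesion images, matching site information against site-specific lesion images, then combining both results) can be sketched as follows. The function names and dictionary-based matching are hypothetical simplifications for illustration only.

```python
# Hypothetical sketch of the two-stage matching that yields the
# lesion-based image-text report; names are illustrative assumptions.
from typing import Dict, List


def match_lesion(lesion_infos: List[str],
                 lesion_images: Dict[str, str]) -> Dict[str, str]:
    """First matching result: specific-lesion info <-> specific-lesion image."""
    return {lesion: lesion_images.get(lesion, "<unmatched>")
            for lesion in lesion_infos}


def match_site(site_infos: List[str],
               site_images: Dict[str, str]) -> Dict[str, str]:
    """Second matching result: site info <-> lesion image of that site."""
    return {site: site_images.get(site, "<unmatched>")
            for site in site_infos}


def merge(first: Dict[str, str], second: Dict[str, str]) -> dict:
    """Combine both matching results into one report structure."""
    return {"lesion_matches": first, "site_matches": second}


report = merge(
    match_lesion(["soft plaque"], {"soft plaque": "plaque_roi.png"}),
    match_site(["carotid artery"], {"carotid artery": "carotid_view.png"}),
)
print(report)
```

The design choice worth noting is that the two matches are computed independently and merged at the end, so each matching result remains available on its own before the combined report is assembled.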
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples described in this specification, as well as features of different embodiments or examples, can be combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only of specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.