CN116823829B - Medical image analysis method, medical image analysis device, computer equipment and storage medium - Google Patents

Medical image analysis method, medical image analysis device, computer equipment and storage medium

Info

Publication number
CN116823829B
CN116823829B (application CN202311094780.6A)
Authority
CN
China
Prior art keywords
medical
medical image
information
target
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311094780.6A
Other languages
Chinese (zh)
Other versions
CN116823829A (en)
Inventor
蒋逸韬
徐明杰
吴玺
石思远
崔晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Minimally Invasive Heart Operator Medical Technology Co ltd
Original Assignee
Shenzhen Minimally Invasive Heart Operator Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minimally Invasive Heart Operator Medical Technology Co., Ltd.
Priority to CN202311094780.6A
Publication of CN116823829A
Application granted
Publication of CN116823829B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Abstract

The application relates to a medical image analysis method, a medical image analysis device, computer equipment and a storage medium. The method comprises the following steps: acquiring a video stream of a medical object while the medical object is being examined; determining, based on a frame medical image in the video stream, target key auxiliary information required for examining the medical object; and analyzing the target key auxiliary information, and determining and outputting medical information of a region of interest in the frame medical image. With this method, the medical examination of the medical object can be assisted in real time, and the accuracy of the medical examination is improved.

Description

Medical image analysis method, medical image analysis device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for analyzing medical images, a computer device, and a storage medium.
Background
When examining a medical object, an operator manipulates the image acquisition device to scan different sections of the medical object from multiple angles according to clinical guidelines, observes the object's condition at each angle to make an immediate diagnosis, and saves a video clip of a certain length under each section for subsequent measurement.
With the development of artificial intelligence, auxiliary systems incorporating artificial intelligence techniques can help operators acquire section sequences more consistently and diagnose more accurately.
However, related-art artificial intelligence (AI) systems are black boxes that lack interpretability and focus mainly on post-hoc analysis, which makes them prone to misdiagnosis.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a medical image analysis method, apparatus, computer device, and storage medium that can process a video stream in real time and present medical information to the operator in real time, assisting the operator in the medical examination of a medical object and making the examination result more accurate.
In a first aspect, the present application provides a method for analyzing medical images, the method comprising:
acquiring a video stream of a medical object while the medical object is being examined;
determining, based on a frame medical image in the video stream, target key auxiliary information required for examining the medical object;
and analyzing the target key auxiliary information, and determining and outputting medical information of a region of interest in the frame medical image.
In one embodiment, the determining, based on the frame medical image in the video stream, target key auxiliary information required for examining the medical object includes: determining, based on the current frame medical image in the video stream, current key auxiliary information required for examining the medical object; determining, based on historical medical images in the video stream, historical key auxiliary information required for examining the medical object, wherein the historical medical images precede the current frame medical image in time sequence and are temporally continuous with it; and determining the current key auxiliary information and the historical key auxiliary information as the target key auxiliary information.
In one embodiment, the determining current key auxiliary information required for examining the medical object based on the current frame medical image in the video stream includes: inputting the current frame medical image into an analysis network to obtain the current key auxiliary information.
In one embodiment, the analysis network comprises a quality control network, a section navigation network, a region-of-interest detection network and a structure segmentation network, and the current key auxiliary information comprises quality control information, high-dimensional features, the confidence of the region of interest, the position information of the region of interest and a segmentation result. Inputting the current frame medical image into the analysis network to obtain the current key auxiliary information includes: performing quality control analysis on the current frame medical image with the quality control network to obtain quality control information characterizing the quality of the section to which the current frame medical image belongs; extracting position features of the current frame medical image with the section navigation network to obtain navigation information characterizing the current frame medical image; performing region-of-interest detection on the current frame medical image with the region-of-interest detection network to obtain the confidence that a region of interest exists in the current frame medical image and the position information of the region of interest; and performing segmentation of the medical object on the current frame medical image with the structure segmentation network to obtain the segmentation result.
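The patent itself contains no code; purely as an illustrative sketch (not part of the claimed method), the four parallel branches described in this embodiment could be wired up as follows, with stand-in callables in place of the trained quality control, section navigation, region-of-interest detection and structure segmentation networks (all names are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class KeyAuxInfo:
    quality_control: float          # section-quality score for the frame
    navigation_feature: np.ndarray  # high-dimensional feature for section navigation
    roi_confidence: float           # confidence that a region of interest is present
    roi_box: tuple                  # (x, y, w, h) location of the region of interest
    segmentation: np.ndarray        # binary mask outlining the medical object

def analyze_frame(frame: np.ndarray,
                  qc_net: Callable, nav_net: Callable,
                  roi_net: Callable, seg_net: Callable) -> KeyAuxInfo:
    """Run the four parallel analysis branches on one decoded frame."""
    conf, box = roi_net(frame)
    return KeyAuxInfo(
        quality_control=qc_net(frame),
        navigation_feature=nav_net(frame),
        roi_confidence=conf,
        roi_box=box,
        segmentation=seg_net(frame),
    )

# Stub branches standing in for the trained networks.
frame = np.zeros((224, 224), dtype=np.float32)
info = analyze_frame(
    frame,
    qc_net=lambda f: 0.9,
    nav_net=lambda f: np.ones(128, dtype=np.float32),
    roi_net=lambda f: (0.75, (10, 10, 50, 50)),
    seg_net=lambda f: (f > 0.5).astype(np.uint8),
)
```

Each branch consumes the same decoded frame, so the four inferences are independent and can in principle run concurrently.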
In one embodiment, the performing quality control analysis on the current frame medical image with the quality control network to obtain quality control information characterizing the quality of the section to which the current frame medical image belongs includes: performing anatomical structure detection on the current frame medical image with the quality control network to obtain target detection information of the anatomical structures in the current frame medical image; classifying the section category of the current frame medical image to obtain the target section category of the current frame medical image; and evaluating the quality of the section to which the current frame medical image belongs based on the target detection information and the target section category to obtain the quality control information.
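The patent does not specify the scoring rule; as a hedged illustration only, detection results and a section category might be combined into a quality score via a per-category checklist of required anatomical structures, with coverage weighted by the classifier's confidence (the checklist contents and the formula are assumptions):

```python
# Hypothetical required-structure checklists per section category.
REQUIRED = {
    "four_chamber": {"left_ventricle", "right_ventricle", "left_atrium", "right_atrium"},
    "short_axis": {"left_ventricle", "right_ventricle"},
}

def section_quality(detected: set, section_class: str, class_prob: float) -> float:
    """Score = (fraction of required structures detected) x (class confidence)."""
    required = REQUIRED.get(section_class, set())
    if not required:
        return 0.0
    coverage = len(detected & required) / len(required)
    return coverage * class_prob

# Three of the four required structures found, classifier 80% sure of the category.
q = section_quality({"left_ventricle", "right_ventricle", "left_atrium"},
                    "four_chamber", class_prob=0.8)
```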
In one embodiment, after performing segmentation of the medical object on the current frame medical image with the structure segmentation network to obtain the segmentation result, the method further includes: filtering the segmentation result in the current key auxiliary information based on the segmentation results in the historical key auxiliary information to obtain an optimized segmentation result.
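One plausible reading of this filtering step is a temporal majority vote over recent masks; the sketch below is an assumption about the rule, not the patent's actual method:

```python
import numpy as np
from collections import deque

def filter_segmentation(current: np.ndarray,
                        history: deque,
                        vote_ratio: float = 0.5) -> np.ndarray:
    """Keep a pixel only if it is foreground in the current mask AND in at
    least `vote_ratio` of the recent historical masks (temporal majority vote)."""
    if not history:
        return current.copy()
    votes = np.mean(np.stack(list(history), axis=0), axis=0)
    return (current.astype(bool) & (votes >= vote_ratio)).astype(np.uint8)

history = deque(maxlen=5)
a = np.array([[1, 1], [0, 0]], dtype=np.uint8)
b = np.array([[1, 0], [0, 0]], dtype=np.uint8)
history.extend([a, a, b])
cur = np.array([[1, 1], [1, 0]], dtype=np.uint8)
optimized = filter_segmentation(cur, history)
# pixel (0,0): 3/3 votes, kept; (0,1): 2/3 votes, kept; (1,0): 0 votes, dropped
```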
In one embodiment, the method further comprises: caching the quality control information, the high-dimensional features, the confidence of the region of interest, the position information of the region of interest and the optimized segmentation result in the current key auxiliary information in corresponding cache pools, wherein the historical key auxiliary information is also cached in the cache pools.
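A minimal sketch of such per-information-type cache pools, assuming bounded FIFO semantics (the capacity and eviction policy are not specified in the patent):

```python
from collections import deque

class CachePools:
    """One bounded FIFO pool per kind of key auxiliary information; old
    entries fall out automatically once `capacity` frames are cached."""
    KINDS = ("quality_control", "navigation_feature",
             "roi_confidence", "roi_box", "segmentation")

    def __init__(self, capacity: int = 64):
        self.pools = {k: deque(maxlen=capacity) for k in self.KINDS}

    def push(self, kind: str, value):
        self.pools[kind].append(value)

    def recent(self, kind: str, n: int):
        return list(self.pools[kind])[-n:]

pools = CachePools(capacity=3)
for score in (0.2, 0.5, 0.8, 0.9):
    pools.push("quality_control", score)
# capacity is 3, so the oldest score (0.2) has been evicted
```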
In one embodiment, the analyzing the target key auxiliary information, determining and outputting the medical information of the region of interest in the frame medical image includes: determining and outputting overall quality control information characterizing the section category to which the current frame medical image belongs based on the current key auxiliary information and the historical key auxiliary information; and, when the overall quality control information meets a preset quality threshold, analyzing the current key auxiliary information and the historical key auxiliary information in the cache pools to obtain and output the medical information of the region of interest in the frame medical image in the video stream.
In one embodiment, the medical information includes a medical evaluation value of the medical object and a classification result of the region of interest. In this case, when the overall quality control information meets the preset quality threshold, analyzing the current key auxiliary information and the historical key auxiliary information in the cache pools to obtain and output the medical information of the region of interest in the frame medical image in the video stream includes: when the overall quality control information reaches the preset quality threshold, retrieving a target segmentation result from the cache pool that caches segmentation results, and retrieving a target confidence from the cache pool that caches confidences of the region of interest, wherein the target segmentation result comprises the segmentation result in the current key auxiliary information and the segmentation results in the historical key auxiliary information, and the target confidence comprises the confidence of the region of interest in the current key auxiliary information and the confidences of the region of interest in the historical key auxiliary information; determining the medical evaluation value of the medical object based on the target segmentation result; and performing region-of-interest classification on the current frame medical image and the historical medical images based on the target confidence to obtain a classification result indicating whether the region of interest exists.
In one embodiment, the determining the medical evaluation value of the medical object based on the target segmentation result includes: determining area change information of the medical object in the current frame medical image and the historical medical image based on the target segmentation result; extracting contour information of the medical object from the target segmentation result; and obtaining the medical evaluation value of the medical object based on the area change information and the contour information.
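The patent leaves the evaluation formula unspecified; the sketch below assumes a fractional area change across the mask sequence (akin to an ejection-fraction-style measure) together with a simple 4-neighbour boundary count as the contour information, purely for illustration:

```python
import numpy as np

def mask_area(mask: np.ndarray) -> int:
    return int(mask.sum())

def contour_length(mask: np.ndarray) -> int:
    """Count foreground pixels that have at least one 4-neighbour background pixel."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return int((m & ~interior).sum())

def evaluation_value(masks: list) -> dict:
    """Fractional area change over the sequence plus the contour length of
    the frame with the largest segmented area."""
    areas = [mask_area(m) for m in masks]
    a_max, a_min = max(areas), min(areas)
    change = (a_max - a_min) / a_max if a_max else 0.0
    return {"area_change": change,
            "contour_length": contour_length(masks[int(np.argmax(areas))])}

m1 = np.ones((4, 4), dtype=np.uint8)          # area 16, contour 12
m2 = np.zeros((4, 4), dtype=np.uint8)
m2[:2, :] = 1                                  # area 8
result = evaluation_value([m1, m2])
```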
In one embodiment, the classifying the region of interest of the current frame medical image and the historical medical image based on the target confidence level to obtain a classification result indicating whether the region of interest exists, includes: determining a target medical image with highest corresponding confidence in the current frame medical image and the historical medical image based on the target confidence; determining a video frame sequence comprising a plurality of frames of continuous medical images in the current frame medical image and the historical medical image by taking the target medical image as a center; and classifying the region of interest in the video frame sequence to obtain the classification result representing whether the region of interest exists in the video frame sequence.
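The centered-window selection described in this embodiment can be sketched directly; clamping the window at the sequence boundaries is an assumption about edge handling that the patent does not state:

```python
def centered_window(confidences: list, half_width: int) -> tuple:
    """Pick the frame with the highest region-of-interest confidence and
    return the indices of a window of consecutive frames centered on it,
    clamped to the sequence bounds."""
    center = max(range(len(confidences)), key=confidences.__getitem__)
    start = max(0, center - half_width)
    end = min(len(confidences), center + half_width + 1)
    return center, list(range(start, end))

center, window = centered_window([0.1, 0.4, 0.9, 0.3, 0.2], half_width=2)
edge_center, edge_window = centered_window([0.9, 0.1, 0.1], half_width=2)
```

The returned window is then what a classifier would consume to decide whether a region of interest is present in the sequence.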
In one embodiment, the method further comprises: and outputting the target medical image and the position information of the region of interest to a user interaction interface under the condition that the region of interest exists in the video frame sequence.
In one embodiment, the determining and outputting overall quality control information characterizing the section category to which the current frame medical image belongs includes: determining, in the historical key auxiliary information, candidate key auxiliary information corresponding to historical medical images of the same section category as the current frame medical image; arranging the quality control information in the candidate key auxiliary information and in the current key auxiliary information in time sequence to determine a quality control information sequence; and generating the overall quality control information based on the quality control information sequence.
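A hedged sketch of this per-section-category fusion: records matching the current section category are kept, time-ordered, and fused into one overall score. The recency-weighted mean is an assumed fusion rule, not stated in the patent:

```python
def overall_quality(records: list, current_class: str) -> float:
    """records: (timestamp, section_class, qc_score) tuples for the current
    frame and history. Keep those matching the current section category,
    order them by time, and fuse by a recency-weighted mean."""
    seq = sorted((t, q) for t, c, q in records if c == current_class)
    if not seq:
        return 0.0
    weights = [i + 1 for i in range(len(seq))]   # newer frames weigh more
    return sum(w * q for w, (_, q) in zip(weights, seq)) / sum(weights)

score = overall_quality(
    [(0, "four_chamber", 0.4), (1, "short_axis", 0.9), (2, "four_chamber", 0.8)],
    "four_chamber",
)
# weighted mean of [0.4, 0.8] with weights [1, 2] -> 2.0 / 3
```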
In one embodiment, the method further comprises: determining target navigation information of the current frame medical image based on the historical key auxiliary information and the current key auxiliary information; and outputting the target navigation information and the overall quality control information to a user interaction interface.
In one embodiment, the determining the target navigation information of the current frame medical image based on the historical key auxiliary information and the current key auxiliary information includes: adjusting the high-dimensional features in the current key auxiliary information based on the high-dimensional features in the historical key auxiliary information to obtain adjusted high-dimensional features; and generating the target navigation information based on the adjusted high-dimensional features.
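The adjustment rule is not given in the patent; one common choice consistent with the description is blending the current feature with the mean of the historical features (exponential-moving-average-style smoothing), sketched here as an assumption:

```python
import numpy as np

def adjust_feature(current: np.ndarray, history: list, alpha: float = 0.7) -> np.ndarray:
    """Blend the current high-dimensional feature with the mean of the
    historical features to stabilize frame-to-frame navigation output."""
    if not history:
        return current
    return alpha * current + (1.0 - alpha) * np.mean(history, axis=0)

cur = np.array([1.0, 1.0])
hist = [np.array([0.0, 2.0]), np.array([0.0, 0.0])]
adjusted = adjust_feature(cur, hist, alpha=0.5)
# historical mean is [0, 1]; blend -> [0.5, 1.0]
```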
In one embodiment, the outputting the target navigation information and the overall quality control information to the user interaction interface includes: presenting the target navigation information on the user interaction interface in the form of a prompt window; and displaying the overall quality control information on the user interaction interface in real time.
In one embodiment, the method further comprises: acquiring a medical image to be analyzed; determining target high-dimensional features in the medical image to be analyzed based on the target navigation information and the overall quality control information; and inputting the target high-dimensional features into a medical treatment network to obtain a medical treatment result for the medical image to be analyzed.
In one embodiment, the method further comprises: outputting the current frame medical image in the video stream to a user interaction interface, so that the current frame medical image used for the medical examination of the medical object is presented on the user interaction interface in real time.
In a second aspect, the present application further provides an analysis device for medical images, the device comprising:
the acquisition module is used for acquiring the video stream of the medical object during the examination of the medical object;
the first determining module is used for determining the target key auxiliary information required for examining the medical object based on the frame medical image in the video stream;
and the second determining module is used for analyzing the target key auxiliary information, and determining and outputting the medical information of the region of interest in the frame medical image.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, performs the following steps:
acquiring a video stream of a medical object while the medical object is being examined;
determining, based on a frame medical image in the video stream, target key auxiliary information required for examining the medical object;
and analyzing the target key auxiliary information, and determining and outputting medical information of a region of interest in the frame medical image.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the following steps:
acquiring a video stream of a medical object while the medical object is being examined;
determining, based on a frame medical image in the video stream, target key auxiliary information required for examining the medical object;
and analyzing the target key auxiliary information, and determining and outputting medical information of a region of interest in the frame medical image.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
acquiring a video stream of a medical object while the medical object is being examined;
determining, based on a frame medical image in the video stream, target key auxiliary information required for examining the medical object;
and analyzing the target key auxiliary information, and determining and outputting medical information of a region of interest in the frame medical image.
According to the medical image analysis method, apparatus, computer device and storage medium, a video stream of the medical object is acquired during the medical examination of the medical object; thus, the video stream of the medical examination can be acquired in real time, and each frame medical image can be presented to the operator in real time. Then, target key auxiliary information required for examining the medical object is determined based on the frame medical image in the video stream; since the frame medical images in the video stream are acquired in real time, the obtained target key auxiliary information is also real-time, which facilitates providing real-time medical assistance to the operator. Finally, the target key auxiliary information is analyzed, and the medical information of the region of interest in the frame medical image is determined and output; by analyzing the region of interest in the target key auxiliary information, medical information that can assist the operator in the medical examination is presented, so that the operator can view it in real time during the examination and be effectively assisted. Even an operator with limited experience can perform the medical examination according to the medical information displayed in real time, which improves the accuracy of the medical examination and reduces the probability of misdiagnosis.
Drawings
FIG. 1 is a diagram of an application environment of a method of analyzing medical images in one embodiment;
FIG. 2 is a flow chart of a method of analyzing medical images in an embodiment;
FIG. 3A is a flow chart of a method of analyzing medical images in one embodiment;
FIG. 3B is a flow chart illustrating a method of analyzing medical images according to one embodiment;
FIG. 4 is a schematic diagram of an implementation framework of a method of analyzing medical images in one embodiment;
FIG. 5 is a schematic diagram showing the structure of a buffer pool in a method for analyzing a medical image according to an embodiment;
FIG. 6 is a schematic diagram of another implementation of a method of analyzing a medical image in one embodiment;
FIG. 7 is a section navigation presentation of a medical object in a medical image according to one embodiment;
FIG. 8 is a block diagram of an analysis device for medical images in one embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The medical image analysis method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Fig. 1 provides a medical image analysis system for implementing a medical image analysis method. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The server 104 acquires a video stream of the medical object during the medical examination of the medical object; determining target key auxiliary information required by the medical object during examination based on the frame medical image in the video stream; and then analyzing the target key auxiliary information, determining the medical information of the region of interest in the frame medical image, and outputting the medical information of the region of interest in the frame medical image to the terminal 102. Throughout the process, the terminal 102 may be a device with image capturing function, so that the terminal 102 may transmit the captured video stream of the medical object to the server in real time. The terminal 102 may be, but is not limited to, various notebook computers, medical display devices, etc. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a method for analyzing a medical image is provided. The method is described below, by way of example, as applied to the server and terminal in fig. 1, and includes the following steps:
step 201, obtaining a video stream of a medical object during an examination of the medical object.
Wherein the medical object may be any body part for which a medical examination is required, the medical object may comprise at least one body part. The video stream of the medical object is a succession of multiple frames of images during a medical examination of the medical object. The medical examination may be any type of examination of the medical object, such as a B-ultrasound examination, a color ultrasound examination, etc.
For example, the video stream of the medical object may be video data acquired in real time during a medical examination of the medical object; but also a real-time video stream of the medical object received from other acquisition devices.
Step 202, determining target key auxiliary information required for examining the medical object based on the frame medical image in the video stream.
Wherein the frame medical images in the video stream include the current frame medical image and the historical medical images in the video stream. In some possible implementations, the video stream of the medical object is decoded to obtain the current frame medical image in an image format supported by the neural network, and the target key auxiliary information is then extracted from the frame medical image in the video stream.
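As an illustrative sketch of this decoding/formatting step (the exact target format is not specified in the patent; channels-first float32 RGB is an assumption typical of convolutional networks):

```python
import numpy as np

def prepare_frame(raw_bgr: np.ndarray) -> np.ndarray:
    """Convert a decoded uint8 BGR frame into a layout a typical network
    expects: RGB channel order, float32 in [0, 1], channels-first (C, H, W)."""
    rgb = raw_bgr[..., ::-1]                    # BGR -> RGB
    scaled = rgb.astype(np.float32) / 255.0     # [0, 255] -> [0, 1]
    return np.ascontiguousarray(scaled.transpose(2, 0, 1))

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # one decoded video frame
tensor = prepare_frame(frame)
```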
The target key auxiliary information includes a plurality of different items of key data for assisting the medical examination; different examination items therefore call for different target key auxiliary information. For example, when the medical examination is an ultrasonic examination, the target key auxiliary information may include: quality control information characterizing the quality of the section, the confidence and position information of a region of interest when one exists in the medical image, a segmentation result of the contour of the medical object, high-dimensional features characterizing navigation information of the frame medical image within the medical object, and the like. In this way, neural networks with different functions can be used to perform inference on the frame medical images in the video stream for different medical examinations, so as to obtain target key auxiliary information that can assist the medical examination.
Illustratively, the frame medical images in the video stream are input into a neural network comprising a plurality of parallel branches, through which the frame medical images in the video stream are inferred to output target critical auxiliary information required in the examination of the medical subject. In this way, the frame medical image in the video stream is inferred through the neural network, and after the target key auxiliary information is obtained, the medical information for assisting medical examination can be conveniently provided for an operator after the target key auxiliary information is analyzed.
And 203, analyzing the target key auxiliary information, and determining and outputting medical information of the region of interest in the frame medical image.
The region of interest in the frame medical image is a region that needs to be focused in the medical examination process of the medical object, for example, a region with abnormal conditions.
The medical information indicates whether a region of interest exists in the frame medical image, together with related information about the region of interest; for example, it may include the classification result of the region of interest, and the position, area, etc. of the region of interest.
After the target key auxiliary information is obtained through the neural network, it is analyzed for the region of interest, so that information related to the region of interest in the frame medical image is obtained; this information is output to a user interaction interface so that the operator can view it in real time during the medical examination of the medical object.
In some possible implementations, first, time-sequence fusion processing is performed on the target key auxiliary information of multiple frame medical images in the video stream to obtain and output overall quality control information characterizing the quality of the section; then, it is judged whether the overall quality control information meets a preset quality threshold, and when the judgment result shows that it does, the target key auxiliary information is analyzed to determine and output the medical information of the region of interest in the frame medical image. Overall quality control information meeting the preset quality threshold indicates that the section quality is good, so in this case the accuracy of the determined medical information can be improved.
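The threshold-gated flow of this paragraph can be sketched as follows (an illustration only; the fusion is assumed to have happened upstream, and `analyze` stands in for the region-of-interest analysis):

```python
def analyze_if_quality_ok(overall_qc: float, threshold: float, analyze, *args):
    """Run the region-of-interest analysis only when the fused section
    quality meets the preset threshold; otherwise report nothing."""
    if overall_qc >= threshold:
        return analyze(*args)
    return None

# Good section quality: analysis runs. Poor quality: analysis is skipped.
result = analyze_if_quality_ok(0.82, 0.7, lambda x: {"roi": x}, "present")
skipped = analyze_if_quality_ok(0.55, 0.7, lambda x: {"roi": x}, "present")
```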
In the embodiment of the application, the video stream of the medical object is acquired during the medical examination of the medical object; thus, the video stream of the medical examination can be acquired in real time, and each frame medical image can be presented to the operator in real time. Then, target key auxiliary information required for examining the medical object is determined based on the frame medical image in the video stream; since the frame medical images are acquired in real time, the obtained target key auxiliary information is also real-time, which facilitates providing real-time medical assistance to the operator. Finally, the target key auxiliary information is analyzed, and the medical information of the region of interest in the frame medical image is determined and output; by analyzing the region of interest in the target key auxiliary information, medical information that can assist the operator in the medical examination is presented, so that the operator can view it in real time during the examination and be effectively assisted. Even an operator with limited experience can perform the medical examination according to the medical information displayed in real time, which improves the accuracy of the medical examination and reduces the probability of misdiagnosis.
In order to obtain the key auxiliary information of each frame medical image in real time so as to obtain the target key auxiliary information, step 202 may be implemented by the steps shown in fig. 3A:
step 301, determining current key auxiliary information required when the medical object is checked based on the current frame of medical image in the video stream.
The current key auxiliary information is obtained by inputting the current frame of medical image into an analysis network for reasoning and outputting.
Illustratively, to obtain the current key auxiliary information quickly, the current frame medical image is input into the analysis network. Thus, by inputting each frame medical image of the video stream into the neural network in real time, the key auxiliary information of each frame medical image can be obtained.
Step 302, determining historical key assistance information required when inspecting the medical object based on the historical medical images in the video stream.
The historical medical images temporally precede the current frame medical image and are temporally continuous with it; the historical medical images in the video stream therefore comprise multiple frames of medical images.
Illustratively, each historical medical image is input into the neural network in real time at the moment it is acquired, to obtain the historical key auxiliary information of that historical medical image. The current key auxiliary information and the historical key auxiliary information include the same types of information; for example, the current key auxiliary information includes: quality control information under the section class to which the current frame medical image belongs, the confidence that a region of interest exists in the current frame medical image, the position information of the region of interest, the segmentation result of the contour of the medical object, and a high-dimensional feature representing the navigation information of the current frame medical image. Correspondingly, the historical key auxiliary information includes: quality control information under the section class to which the historical frame medical image belongs, the confidence that a region of interest exists in the historical frame medical image, the position information of the region of interest, the segmentation result of the contour of the medical object, and a high-dimensional feature representing the navigation information of the historical frame medical image.
And step 303, determining the current key auxiliary information and the historical key auxiliary information as the target key auxiliary information.
In the embodiment of the application, each frame medical image in the video stream is input into the neural network in real time to obtain the current key auxiliary information and the historical key auxiliary information, which together serve as the target key auxiliary information; the obtained target key auxiliary information is therefore real-time, and the operator can be assisted in real time in performing the medical examination on the medical object during the examination process.
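The per-frame real-time inference and history accumulation described above can be sketched as follows; `analysis_network` and the frame iterator are hypothetical placeholders, since the patent does not fix a concrete interface:

```python
from collections import deque

def stream_inference(frames, analysis_network, history_len=16):
    """Run the analysis network on each frame as it arrives.

    `frames` yields decoded medical images; `analysis_network` is a
    hypothetical callable returning the key auxiliary information of
    one frame. Yields (current, historical) key auxiliary information,
    which together form the target key auxiliary information.
    """
    history = deque(maxlen=history_len)  # historical key auxiliary info
    for frame in frames:
        current = analysis_network(frame)   # current key auxiliary info
        yield current, list(history)        # target key auxiliary info
        history.append(current)             # becomes history next frame
```

The bounded `deque` mirrors the fixed-capacity caching of historical results described later in this section.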
In one embodiment, after the video stream of the medical object is obtained during the examination of the medical object, the current frame medical image is obtained by decoding the video stream; and while the current key auxiliary information of the current frame medical image is determined, the current frame medical image is output to a user interaction interface, so that the current frame medical image of the medical examination is presented on the user interaction interface in real time. In this manner, each time a frame medical image is acquired, it is presented in real time on the user interface, enabling the operator to view the current video stream in real time.
If the analysis network that outputs the current key auxiliary information includes a quality control network, a section navigation network, a region of interest detection network, and a structure segmentation network, then the current key auxiliary information of the current frame medical image includes: quality control information, a high-dimensional feature, the confidence of the region of interest, the position information of the region of interest, and a segmentation result. Based on this, the current key auxiliary information can be obtained by the following procedure:
and step 1, performing quality control analysis on the current frame of medical image by adopting the quality control network to obtain quality control information representing the quality of the section to which the current frame of medical image belongs.
The quality control network is trained on multiple section images of a sample medical object. The quality control information may be a score representing the quality of the section to which the current frame medical image belongs, with the level of the score representing the quality of the section; for example, the higher the score, the higher the quality of the section, and the lower the score, the lower the quality of the section.
Illustratively, the quality control network employs a target detection network (YOLOX) as the network backbone to detect all anatomical structures of the medical object; meanwhile, the backbone is used to extract section features, and, combined with a fully connected layer, to classify the section class of the current frame medical image. The classification result is used to post-process the structure detection results to obtain the clarity and structural integrity of the corresponding anatomical structures. The quality of the section to which the current frame medical image belongs is then scored according to a preset scoring rule, so as to obtain the quality control information of that section.
In some possible implementations, to improve accuracy of evaluating the quality of the section to which the current frame of medical image belongs, the step 1 may be implemented by:
firstly, adopting the quality control network to detect the anatomical structure of the current frame medical image to obtain target detection information of the anatomical structure of the current frame medical image.
Wherein the target detection information includes: the location, height, width, confidence level, etc. of the anatomical structure to which the current frame of medical image belongs.
Illustratively, all anatomical structures to which the current frame medical image belongs are detected by the backbone of the quality control network, to determine the position, height, width, confidence, and so on of each anatomical structure. Thus, the likelihood that the current frame medical image contains each anatomical structure can be represented by the confidence in the target detection information, the specific size of the anatomical structure can be represented by the height and width in the target detection information, and the position of the anatomical structure in the three-dimensional model of the medical object can be represented by the position in the target detection information.
And secondly, classifying the section categories of the current frame of medical image to obtain the target section categories of the current frame of medical image.
The quality control network extracts the section features of the section to which the current frame medical image belongs and, combined with the fully connected layer in the quality control network, classifies the section class of the current frame medical image, so that the target section class to which the current frame medical image belongs can be obtained.
And thirdly, evaluating the quality of the section to which the current frame medical image belongs based on the target detection information and the target section category to obtain the quality control information.
The clarity and structural integrity of the anatomical structures can be obtained from the position, height, width, confidence, and so on of the anatomical structures in the target detection information, so that the quality of the section to which the current frame medical image belongs is scored in combination with the target section class to obtain the quality control information. For example, the higher the clarity and structural integrity of the anatomical structures, the higher the score for the target section class, indicating that the section of that class is of better quality; the lower the clarity and structural integrity, the lower the score, indicating that the section of that class is of poorer quality.
In the first to third steps, within the quality control network, the quality of the section to which the current frame medical image belongs is evaluated from both the target detection information of the anatomical structures and the section class of the current frame medical image, so the resulting quality control information reflects the quality of the section more accurately.
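The scoring of the first to third steps can be sketched as follows; the per-structure clarity and integrity inputs, the expected-structure table `required`, and the rule itself (mean of clarity × integrity over the structures expected for the section class, scaled to 100) are illustrative assumptions, since the patent leaves the preset scoring rule unspecified:

```python
def score_section_quality(detections, section_class, required):
    """Score section quality from structure detections (a sketch).

    `detections` maps structure name -> (confidence, clarity, integrity),
    each in [0, 1]; `required[section_class]` lists the anatomical
    structures expected in that section class. Missing or low-confidence
    structures contribute zero, penalizing incomplete sections.
    """
    expected = required[section_class]
    if not expected:
        return 0.0
    total = 0.0
    for name in expected:
        conf, clarity, integrity = detections.get(name, (0.0, 0.0, 0.0))
        if conf > 0.5:                      # structure reliably detected
            total += clarity * integrity
    return 100.0 * total / len(expected)    # score in [0, 100]
```

A blurry or partially visible structure lowers the score through its clarity and integrity terms, matching the behavior described above.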
And 2, carrying out position feature extraction on the current frame medical image by adopting the section navigation network to obtain navigation information representing the current frame medical image.
The section navigation network is trained on data pairs formed by a single-frame sample image of a sample medical object and the navigation information label of the section to which that sample image belongs. In this way, the navigation network can extract the navigation features of the current frame medical image to obtain a high-dimensional feature representing the navigation information. The navigation information of the current frame medical image can thus be accurately represented by the high-dimensional feature, which facilitates subsequently outputting the navigation information to the operator through this feature, so that the operator can adjust the current section in time by observing the navigation information and acquire a standard section.
And 3, detecting the region of interest of the current frame of medical image by adopting the region of interest detection network to obtain the confidence that the region of interest exists in the current frame of medical image and the position information of the region of interest.
The region of interest detection network is trained with truth labels of whether an object of interest exists in single-frame sample images of the sample medical object. The confidence that a region of interest exists in the current frame medical image represents the likelihood of its existence, and the position information of the region of interest is its two-dimensional coordinates in the current frame medical image.
And 4, adopting the structure segmentation network to segment the medical object of the current frame medical image, and obtaining the segmentation result.
The structure segmentation network is trained with truth segmentation results of the medical object contour in single-frame sample images of the sample medical object. The segmentation result represents the region where the medical object is located. Thus, the area of the medical object and the contour of the medical object in the current frame medical image can be rapidly obtained from the segmentation result; likewise, the area and contour of the medical object can be obtained from the segmentation result of each frame medical image.
In the embodiment of the present application, the steps 1 to 4 are independent of each other and may be performed in parallel; that is, the quality control network, the section navigation network, the region of interest detection network, and the structure segmentation network may be multiple parallel networks. The input current frame medical image is inferred through these multiple parallel branch networks of the analysis network, so the obtained current key auxiliary information is richer and can accurately assist the operator in the medical examination of the medical object.
In some possible implementations, the segmentation result of the current frame medical image is filtered using the segmentation results corresponding to the multiple frames of historical medical images preceding the current frame in the video stream, so as to filter out abnormal segmentation results. That is, after the segmentation result corresponding to the current frame medical image is obtained in step 4, the segmentation result in the current key auxiliary information is filtered based on the segmentation results in the historical key auxiliary information to obtain an optimized segmentation result.
Illustratively, if the current frame medical image is the Nth frame medical image, the segmentation result of the Nth frame is filtered using the segmentation results of the preceding N-1 frames of historical medical images, so as to filter out abnormal regions in the segmentation result of the Nth frame, such as regions with poor picture quality or regions with abrupt changes. In this way, the optimized segmentation result is free of abnormal features, and the smoothness of the segmentation results across all frame medical images in the video stream can be improved.
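One simple way to realize this temporal filtering is an area-consistency check against the historical masks; the relative-deviation rule and the fallback-to-history behavior below are illustrative assumptions, as the patent does not specify the filtering criterion:

```python
import numpy as np

def filter_segmentation(current_mask, history_masks, max_dev=0.3):
    """Reject an abnormal segmentation result (a sketch).

    `current_mask` and `history_masks` are binary numpy masks. If the
    current mask's area deviates from the median area of the historical
    masks by more than `max_dev` (relative), it is treated as abnormal
    (e.g. a sudden change or poor-quality frame) and the most recent
    historical mask is reused instead.
    """
    area = current_mask.sum()
    if history_masks:
        ref = float(np.median([m.sum() for m in history_masks]))
        if ref > 0 and abs(area - ref) / ref > max_dev:
            return history_masks[-1]  # abnormal: fall back to history
    return current_mask               # normal: keep current result
```

This keeps the per-frame segmentation smooth across the video stream without altering results that are consistent with recent history.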
After the current frame medical image is inferred by each parallel branch network in the analysis network, the quality control information, the high-dimensional feature, the confidence of the region of interest, the position information of the region of interest, and the optimized segmentation result are obtained as the key auxiliary information. To improve the reusability of the key auxiliary information, these items of the current key auxiliary information are cached in corresponding buffer pools, respectively.
The historical key auxiliary information is likewise cached in the buffer pools. Each buffer pool caches a fixed amount of data, namely one kind of key auxiliary information corresponding to N frames of medical images. For example, the buffer pool for caching high-dimensional features can cache the high-dimensional features corresponding to N frames of medical images. Thus, when the amount of cached data reaches the fixed capacity, adding new data evicts the oldest data, so that the buffer pool continuously caches the key auxiliary information corresponding to the latest N frames of medical images. This facilitates subsequent multiplexing of the key auxiliary information, improves the processing speed, and saves the time spent inferring each frame medical image.
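The fixed-capacity, evict-oldest behavior of each buffer pool is exactly a bounded FIFO cache; a minimal sketch, with the class name and interface chosen here for illustration:

```python
from collections import deque

class BufferPool:
    """Fixed-capacity cache for one kind of key auxiliary information.

    Holds results for at most `capacity` frames; when full, pushing a
    new entry evicts the oldest, so the pool always contains the key
    auxiliary information of the latest N frames.
    """
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def push(self, item):
        self._buf.append(item)   # oldest entry is evicted when full

    def latest(self):
        return list(self._buf)   # chronological order, oldest first
```

One such pool would be kept per information type (quality control scores, high-dimensional features, confidences, positions, segmentation results), so later analysis steps can reuse cached results instead of re-running inference.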
In one embodiment, to make the medical information presented to the operator more accurate, the step 203 may be implemented by the steps shown in fig. 3B:
step 321, determining and outputting overall quality control information representing the current frame medical image under the section class of the current frame medical image based on the current key auxiliary information and the historical key auxiliary information.
The overall quality control information is used to represent the overall level of the quality control information corresponding to all frame medical images of that section class up to the current time.
Illustratively, if the current frame medical image is the Nth frame medical image, then after the current key auxiliary information of the Nth frame and the historical key auxiliary information of the preceding N-1 frames of historical medical images in the video stream are obtained, they are cached in the corresponding buffer pools; the quality control information of the frame medical images belonging to the section class at the current time is then analyzed through the quality control information in the current and historical key auxiliary information, thereby obtaining the overall quality control information of the section class to which the current frame medical image belongs.
And step 322, in the case that the overall quality control information meets a preset quality threshold, analyzing the current key auxiliary information and the historical key auxiliary information in the buffer pool to obtain and output the medical information of the region of interest in the frame medical images of the video stream.
For example, if the overall quality control information meets the preset quality threshold, it indicates that, under the section class to which the current frame medical image belongs, the overall level of the quality control information corresponding to the multiple frame medical images is high, that is, the quality of the sections corresponding to the multiple frame medical images of that section class is high. Because the cache capacity of the buffer pool is fixed, if the overall quality control information reaches the preset quality threshold, the sections of the frame medical images corresponding to the target key auxiliary information currently cached in the buffer pool are of high quality, and the accuracy of the cached target key auxiliary information is accordingly high. On this basis, the current key auxiliary information and the historical key auxiliary information are retrieved from the buffer pool to analyze and obtain the medical information of the region of interest in the frame medical images of the video stream.
In the embodiment of the application, after the current frame medical image and the historical medical images in the video stream are inferred by the multiple networks to obtain the current key auxiliary information and the historical key auxiliary information, if the analysis shows that the overall quality control information under the section class to which the current frame medical image belongs reaches the standard, the accuracy of the current and historical key auxiliary information cached in the buffer pool is high. Therefore, analyzing the medical information of the region of interest by retrieving the current key auxiliary information and the historical key auxiliary information from the buffer pool can improve the accuracy of the medical information, so that the displayed medical information can assist the operator in performing the medical examination on the medical object in real time and effectively, improving the accuracy of the medical examination.
In some possible implementations, to obtain content-rich and accurate medical information, the above step 322 may be implemented by the following steps 3221 to 3223 (not shown in the drawings):
in step 3221, when the overall quality control information reaches the preset quality threshold, the target segmentation result is fetched from the cache pool for caching the segmentation result, and the target confidence is fetched from the cache pool for caching the confidence of the region of interest.
Wherein the target segmentation result comprises: the segmentation result in the current key auxiliary information and the segmentation result in the historical key auxiliary information; the target confidence level includes: the confidence of the region of interest in the current key auxiliary information and the confidence of the region of interest in the historical key auxiliary information.
Illustratively, if the fixed cache capacity of the buffer pool corresponds to N image frames, then the segmentation results corresponding to all N frame medical images, together with the confidences of the region of interest, are retrieved from the buffer pools.
Step 3222, determining a medical evaluation value of the medical object based on the target segmentation result.
Wherein the medical evaluation value of the medical subject may be a score for evaluating the health condition of the medical subject.
For example, the segmentation result in the current key auxiliary information and the segmentation results in the historical key auxiliary information each represent the region where the medical object is located in the corresponding frame medical image. Therefore, information such as the contour of the medical object and the area of the occupied region in each frame medical image can be analyzed from the target segmentation result, thereby obtaining the medical evaluation value.
Illustratively, step 3222 above may be accomplished by:
first, based on the target segmentation result, area change information of the medical object in the current frame medical image and the history medical image is determined.
The area change information comprises the change between the area of the medical object in the current frame medical image and its areas in the multiple frames of historical medical images.
For example, since the region where the medical object is located is represented in the target segmentation result, the area of the medical object can be quickly calculated from the target segmentation result. After the area of the medical object in each frame medical image is obtained, an area change curve is drawn; this area change curve is the area change information. Thus, the maximum value and the minimum value of the area can be obtained from the area change information.
Next, in the target segmentation result, contour information of the medical object is extracted.
Since the region of the medical object is represented in the target segmentation result, the contour information of the medical object can be obtained by extracting the edge of that region.
Finally, the medical evaluation value of the medical object is obtained based on the area change information and the contour information.
By way of example, the area change information is adjusted by the contour information; for instance, the area of the medical object in each frame medical image can be obtained more accurately through the contour information, so that the area change information is corrected by these areas and becomes more accurate. The minimum value in the area change information is then divided by the maximum value, and the result of this division is subtracted from one; the resulting difference is used as the medical evaluation value. In this way, the medical evaluation value of the medical object is obtained by comprehensively considering the area change information and the contour information, which improves the accuracy of the medical evaluation value and better assists the operator in performing the medical examination.
And 3223, classifying the region of interest on the basis of the target confidence, and obtaining a classification result representing whether the region of interest exists or not.
Wherein the classification result includes a presence region of interest and a non-presence region of interest.
Illustratively, the frame medical images most likely to contain the region of interest, i.e. the frame medical images with higher confidence, are selected from the video stream according to the target confidence. Region of interest classification is then performed on these frame medical images to detect whether the region of interest exists in them. By multiplexing the target segmentation result and the target confidence from the buffer pool, the medical information, including the medical evaluation value of the medical object and the classification result of the region of interest, can be obtained accurately, and the time consumed by the network in processing the frame medical images in the analysis stage can be reduced.
In some possible implementations, to improve the accuracy of determining whether the region of interest exists in the frame medical image, the step 3223 may be implemented through the following procedures:
first, based on the target confidence, a target medical image with the highest corresponding confidence is determined from the current frame medical image and the history medical image.
The target confidence includes the confidence of the region of interest in the current key auxiliary information and the confidences of the region of interest in the historical key auxiliary information, so the target medical image with the highest confidence can be selected from the current frame medical image and the historical medical images, i.e. the N frame medical images whose key auxiliary information is cached. The target medical image is thus the image, among the N frame medical images, in which the region of interest is most likely to exist.
Next, a video frame sequence including a plurality of frames of consecutive medical images is determined from the current frame medical image and the history medical image centering on the target medical image.
Wherein, in the video stream composed of the N frame medical images, the video frame sequence is formed with the target medical image as the center, together with consecutive multiple frame medical images arranged temporally before and after the target medical image.
And finally, classifying the region of interest in the video frame sequence to obtain the classification result representing whether the region of interest exists in the video frame sequence.
In the video frame sequence, classification detection is performed on the region of interest to judge whether the region of interest exists in the sequence, thereby obtaining the classification result. By selecting from the video stream the video frame sequence centered on the target medical image with the highest confidence, and performing classification detection on the region of interest within that sequence, whether the region of interest exists can be detected more accurately, improving the accuracy of the classification result and providing the operator with a more accurate classification result for reference in real time.
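The windowing step, selecting consecutive frames centered on the most confident frame, can be sketched as follows; the function name and half-width parameter are illustrative, and the classifier that would then consume the window is omitted:

```python
def select_window(confidences, half_width):
    """Pick a video-frame window centered on the most confident frame.

    `confidences` holds the region-of-interest confidence for each of
    the N buffered frames. Returns the index range [start, end) of a
    sequence of consecutive frames centered on the highest-confidence
    frame, clipped to the buffer bounds.
    """
    center = max(range(len(confidences)), key=confidences.__getitem__)
    start = max(0, center - half_width)
    end = min(len(confidences), center + half_width + 1)
    return start, end
```

The frames in `[start, end)` form the video frame sequence on which region of interest classification is performed.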
If the classification result is that the region of interest does not exist, it indicates that no region of interest exists in the video frame sequence. If the classification result is that the region of interest exists, i.e. a region of interest exists in the video frame sequence, the target medical image and the position information of the region of interest are output to the user interaction interface. Outputting the target medical image and the position of the region of interest within it together to the user interaction interface enables the operator to view, in real time during the medical examination, the specific position of the region of interest and the target frame medical image in which it is located, assisting the operator in determining the examination result of the medical object and improving the accuracy of the examination result.
In one embodiment, to be able to effectively evaluate the overall section quality of the section class at the current time, step 321 may be implemented by the following steps 3211 to 3213 (not shown in the drawing):
step 3211, determining candidate key auxiliary information corresponding to the historical medical image with the same section category as the current frame medical image from the historical key auxiliary information.
If the fixed capacity of the buffer pool is the key auxiliary information corresponding to N frames of medical images, the historical medical images whose section class is the same as that of the current frame medical image are searched for among the N frame medical images, and their key auxiliary information is retrieved from the buffer pool to obtain the candidate key auxiliary information.
Step 3212, arranging the quality control information in the candidate key auxiliary information and in the current key auxiliary information in time order, and determining a quality control information sequence.
The quality control information in the candidate key auxiliary information and in the current key auxiliary information is arranged in time order according to the acquisition time of each frame medical image, so as to obtain the quality control information sequence.
Step 3213, generating the overall quality control information based on the quality control information sequence.
A change curve of the quality control information is drawn according to the quality control information sequence, so as to display the quality control information of the currently cached frame medical images in real time. The overall quality control information of the past N frame medical images under the current section class is obtained by taking a weighted average of the quality control information in the sequence. Thus, the quality control information in the historical key auxiliary information and in the current key auxiliary information cached in the buffer pool is used to evaluate the overall quality control information of the section class to which the current frame medical image belongs, so that the overall section quality of the frame medical images in that section class can be obtained more accurately.
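The weighted average of steps 3211 to 3213 can be sketched as follows; the patent only states that a weighted average is taken, so the uniform default and the optional recency weighting are assumptions:

```python
def overall_quality(scores, weights=None):
    """Weighted average of buffered quality-control scores (a sketch).

    `scores` is the time-ordered quality control information for the
    cached frames of the same section class; `weights` (optional) can
    favor recent frames. Uniform weights are used when none are given.
    """
    if weights is None:
        weights = [1.0] * len(scores)
    total_w = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total_w
```

The returned value is the overall quality control information compared against the preset quality threshold in step 322.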
In parallel with the step 321, that is, after the current key auxiliary information and the historical key auxiliary information are obtained through the multiple networks and cached in the buffer pool and the quality control information is determined, the method further includes the following steps:
first, determining target navigation information of the current frame medical image based on the historical key auxiliary information and the current key auxiliary information.
The target navigation information is movement direction information or rotation information indicating that the operator moves the acquisition device, such as upward movement, downward movement, clockwise rotation, counterclockwise rotation, rightward movement, and the like.
The navigation information of each frame medical image can be obtained from the high-dimensional features cached in the buffer pool; thus, the navigation information of the current frame medical image (i.e. the Nth frame medical image) can be adjusted by the navigation information of the N frame medical images cached in the buffer pool, to obtain the target navigation information of the current frame medical image.
In some possible implementation manners, the navigation information of the current frame of medical image is optimized through the navigation information of the multiple frames of historical medical images, so as to obtain more accurate target navigation information of the current frame of medical image, namely the first step can be realized through the following processes:
firstly, based on the high latitude characteristics in the history key assistance, the high latitude characteristics in the current key assistance information are adjusted to obtain the adjusted high latitude characteristics.
The navigation information of the historical medical images can be obtained through the high latitude characteristics of the multi-frame historical medical images cached in the cache pool. The navigation information of the current frame medical image can be obtained through the high latitude characteristics in the current key information. The navigation information of the current frame of medical image is corrected through the navigation information of the history medical images, so that the moving direction and the rotating information represented by the feature of the height-adjusted latitude can be more accurate.
Then, the target navigation information is generated based on the feature of the adjusted altitude.
And performing feature decoding on the feature of the height-adjusted latitude in the section navigation network to obtain target navigation information. Therefore, the high latitude characteristics of the N-1 frame of historical medical images in the buffer pool are corrected, so that the accuracy of determining the target navigation information in the current frame of medical images can be improved. Therefore, accurate navigation of the current frame of medical image can be presented for an operator, so that the operator can adjust the angle and the azimuth of the section being acquired in real time.
Secondly, the target navigation information and the overall quality control information are output to a user interaction interface.
Illustratively, the target navigation information and the quality control information of each frame medical image under the same section category are presented on the user interaction interface in real time. In this way, when the operator sees the overall quality control information in real time, in order to obtain a higher quality control score, the operator can refer to the target navigation information of the current frame medical image and adjust the currently acquired section to coincide with the standard section as much as possible.
In some possible implementations, the second step described above may be implemented by:
Firstly, the target navigation information is presented on a user interaction interface in the form of a prompt window.
Illustratively, the target navigation information is presented in the user interaction interface in real time in the form of a prompt window, so that the operator can refer to it in real time during the examination.
Then, the overall quality control information is presented on the user interaction interface in real time. In this way, the target navigation information of the current frame medical image and the overall quality control information reflecting the section quality are displayed on the user interaction interface in real time, which can more intuitively and effectively assist the operator in the medical examination.
Through the first step and the second step, the target navigation information and the overall quality control information of the current frame medical image are presented to the operator in real time, so that the operator can adjust the angle and orientation of the section being acquired in real time with reference to the target navigation information, thereby acquiring the standard section and obtaining higher-valued overall quality control information.
After the overall quality control information and the target navigation information are obtained through the quality control network and the section navigation network, a more accurate medical treatment result of the medical image to be analyzed can be obtained through the following process:
First, a medical image to be analyzed is acquired.
Here, the medical image to be analyzed is a medical image for which a judgment of a medical result is required. The medical image to be analyzed may be a frame of image in a video stream or a newly input frame of image.
Secondly, target high-dimensional features in the medical image to be analyzed are determined based on the target navigation information and the overall quality control information.
Here, the currently acquired section is adjusted to coincide with the standard section as much as possible with reference to the target navigation information and the overall quality control information, so that the determined target high-dimensional features better conform to the expected data distribution of the subsequent medical treatment algorithm (e.g., the algorithm for determining ejection fraction or the algorithm for detecting pericardial effusion).
Finally, the target high-dimensional features are input into a medical treatment network to obtain a medical treatment result of the medical image to be analyzed.
Here, the medical treatment network is an artificial intelligence network for medical analysis of the medical image to be analyzed, such as a network for determining ejection fraction or a network for detecting pericardial effusion. Because the accuracy of the output of an artificial intelligence network is strongly related to its input data, and the target high-dimensional features conform to the expected data distribution of the medical treatment algorithm in the medical treatment network, inputting the target high-dimensional features into the medical treatment network can significantly improve the final medical treatment result.
In one embodiment, taking the medical object as a heart as an example: in the process of cardiac examination, dynamic, real-time ultrasonic scanning places very high demands on operators, and the high difficulty of echocardiographic scanning makes the acquisition and diagnosis process highly subjective and strongly operator-dependent. Thus, echocardiography carries a very high learning cost for its users. In clinically practical scenarios there is, on the one hand, a very strong need for echocardiographic diagnosis while, on the other hand, it is difficult for operators to have enough time and opportunity to develop ultrasound skills, resulting in unmet medical needs. In the case where the medical object is a heart, the region of interest may be the region where pericardial effusion is located, and the diagnosis of pericardial effusion is a common requirement. When an operator tries to examine a patient for the presence or absence of an object of interest, multiple sections need to be examined in detail. Because many sections are involved and the pericardial effusion may be hidden, high demands are placed on ultrasound manipulation and image-reading experience; when the operator's experience is insufficient, failing to hit the standard section or omitting the frames showing the pericardial effusion may lead to misdiagnosis.
In the related art, no full-workflow technical scheme covering acquisition, quality control and analysis has been realized, nor has an efficient model scheduling method been proposed for the case where multiple models are invoked to realize multiple functions.
The embodiment of the application provides a medical image analysis method, described below by taking a whole-workflow ultrasound examination auxiliary system for a medical object, from acquisition guidance to automatic diagnosis, as an example. The embodiment of the application adopts an efficient AI deployment scheme and therefore has excellent real-time performance. In the acquisition stage, a quality control score is provided so that the user can evaluate the quality of the currently acquired video sequence in real time; meanwhile, section navigation information of the medical object is provided, so that the user can hit a standard section with a higher quality control score as much as possible under the guidance of the section navigation information. When the quality control score reaches a threshold set by the user, automatic recording by the software can be triggered, so that the user obtains a video sequence with the specified quality control score. After the video sequence is obtained, the AI reasoning results cached in advance during the acquisition stage are reused to find the video sequence segments in which the target of interest may exist, and a video sequence analysis model is invoked to generate the diagnosis result of the target of interest and the region of the suspected target of interest. Meanwhile, a medical evaluation value of the medical object is calculated based on the cached structure segmentation results. In this embodiment, the feature backbone of the YOLOX detection network is reused, which compresses the space occupied by model deployment and improves the running speed. By establishing an efficient cache mechanism, the time consumed by post-processing in the analysis stage is greatly reduced, and the real-time performance of diagnosis is greatly improved.
The method for analyzing a medical image provided in the embodiment of the present application may be implemented by using a structure shown in fig. 4, and the following description is made with reference to fig. 4:
in a first step, in data acquisition phase 1, real-time frame data is acquired from a data source 401.
Illustratively, each frame of data in a video stream as a data source is processed to obtain single frame of data.
In the second step, the single frame data is decoded by the frame decoder 402 so that the format of the decoded frame data 403 satisfies an AI-supported format (e.g., red Green Blue (RGB), or GRAY (GRAY)).
In a third step, the frame data 403 may be split into two branches: one branch directly transmits the original frame data to the user interface 404 via a transmission protocol, and the other branch is input into a pipeline composed of a plurality of real-time AI models, including AI model 1, AI model 2, and so on. Each AI model performs inference on the frame data and generates corresponding inference data.
Fourth, the output results of the AI models are cached in the cache pool 405.
Illustratively, the cache pool of AI data is an important component of the architecture: a fixed amount of AI reasoning data is cached by the cache pool according to the software settings, to be transmitted to the user and invoked in the post-processing stage. The cache pool is implemented by constructing a buffer queue in a first-in first-out manner; as shown in fig. 5, the number of frames in the buffer pool 501 is fixed, for example a fixed X frames: a new frame 502 joins and an old frame 503 exits.
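The first-in first-out buffer queue described above can be sketched with a bounded deque. This is a minimal illustration; the class and field names are illustrative, not from the patent.

```python
from collections import deque

class CachePool:
    """First-in first-out cache holding the most recent X frames of AI data.

    A minimal sketch of the buffer queue described above: when the queue
    is full, adding a new frame automatically evicts the oldest one.
    """
    def __init__(self, max_frames):
        self.queue = deque(maxlen=max_frames)  # old frames exit automatically

    def push(self, frame_result):
        self.queue.append(frame_result)        # new frame joins at the tail

    def snapshot(self):
        return list(self.queue)                # oldest-to-newest order

pool = CachePool(max_frames=3)
for i in range(5):
    pool.push({"frame": i})
print(pool.snapshot())  # only the 3 most recent frames remain
```

Using a fixed-capacity queue keeps memory bounded regardless of video length, which is what allows the analysis stage to reuse a known, fixed number of cached inference results.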
And fifthly, after the buffered data in the buffer pool 405 is generated, the buffered data may be input into the timing processing module 406, so that the features of multiple frames of data are fused over the time sequence to generate a more reliable real-time AI result. The generated real-time AI results 407 are sent to the user interaction interface 404.
Sixth, when the data acquisition is completed, i.e. the data acquisition stage 1 is completed, and the data analysis stage 2 is entered, the software inputs the buffered fixed frame data into the post-processing module 408, so as to perform calculation and analysis on the buffered target data to generate a diagnosis result required by the user and present the diagnosis result to the user interaction interface 404.
In this embodiment of the present application, taking a medical object as a heart, and taking a region of interest as a region where pericardial effusion is located as an example, detection of the region of interest (for example, detection of pericardial effusion) and outputting a medical evaluation value (for example, left ventricular ejection fraction) can be achieved by the architecture shown in fig. 6, and the following description is made in conjunction with fig. 6:
in a first step, ultrasound device data 601 is acquired from an ultrasound device during data acquisition phase 1.
Illustratively, the ultrasound device data 601 is video data acquired by the ultrasound device.
And secondly, acquiring current screen information from the ultrasonic equipment in real time through the acquisition equipment, and forming a video stream to be transmitted to the software. The frame decoder 602 in the software decodes the acquired video stream to obtain the current frame data 603, and splits the current frame data 603 into two branches: one is sent to the user interface 604 through the video transmission protocol, so that the user can view the current video stream in real time; the other branch enters the real-time AI model 605.
Illustratively, the real-time AI model 605 processes the input frame data. The real-time AI model 605 is composed of four networks: a quality control network, a section navigation network, a region of interest detection network, and a structure segmentation network. The specific composition and function of each network are as follows:
Quality control network 61: the quality control network 61 takes single-frame image data as input and outputs the section category to which the single frame image belongs and a section quality control score (corresponding to the quality control information in the above embodiment). The network uses the YOLOX target detection network as a backbone to detect all anatomical structures of the plurality of sections. Meanwhile, the section category is classified using the section features extracted by the backbone network together with a fully connected layer. Information on the anatomical structures of interest is acquired by combining the target detection information of the anatomical structures (structure position, width, height and confidence), and post-processing is performed to obtain the definition and structural integrity of the corresponding anatomical structures. The section quality is then scored according to a preset scoring rule.
Section navigation network 62: the input of the section navigation network is single-frame image data, and the output is the high-dimensional features extracted from the single frame image by the network. The section navigation network is trained on a data set composed of single-frame-image / cardiac-section-navigation-prompt-label data pairs; after training is completed, the feature extractor part is retained in the architecture for extracting single-frame features, and the decoder is used for data processing after caching.
Region of interest detection network 63: the input of the region of interest detection network is single-frame image data, and the output is the confidence and position of the region of interest in the current frame image. The network structure is a YOLOX target detection network. In a specific example, taking the medical object as a heart and the region of interest as the region where pericardial effusion is located, the region of interest detection network 63 may be used to perform pericardial effusion detection: it takes single-frame image data as input and outputs the confidence and position of pericardial effusion in the current frame medical image.
Structure segmentation network 64: the input of the structure segmentation network is single-frame image data, and the output is the segmentation result of the contour of the medical object in the current frame. In a specific example, taking the medical object as a heart and the region of interest as the region where pericardial effusion is located, the structure segmentation network 64 takes single-frame image data as input and outputs the segmentation result of the left ventricular endocardial contour in the current frame medical image. In terms of network structure, the decoding layers of a target segmentation network (UNet) are used to restore the multi-scale features extracted by the YOLOX network backbone back to the original size, thereby realizing segmentation of the anatomical structure.
All four networks adopt a lightweight network architecture and are suitable for deployment on edge devices.
Third, the buffer pool 606 buffers AI reasoning data.
Illustratively, after the real-time AI model has completed reasoning about the input data, the data is cached using the cache pool. The cache pool is configured to cache, for a fixed number N of frames, the quality control network scoring data 661, the detection results 663 of the region of interest (corresponding to the confidence of the region of interest and the position information of the region of interest in the above embodiment), and the segmentation results 664 of the medical object, and simultaneously cache the section navigation features 662 of the N frames (corresponding to the high-dimensional features in the above embodiment). In a specific example, taking the medical object as a heart and the region of interest as the region where pericardial effusion is located, the N frames of quality control network scoring data 661 may be N frames of section quality control data, the detection result 663 of the region of interest is the target detection result of pericardial effusion, and the segmentation result 664 of the medical object is the left ventricle segmentation result.
And fourthly, performing time sequence processing on the AI reasoning data.
Illustratively, when new data is added to the cache pool, the timing processing module 665 performs time-sequence processing using the already-cached data together with the newly added data, including:
Step a, carrying out a weighted average on the quality control scoring data: according to the cached section quality control data, first judge which section category the current frame belongs to, then obtain the quality control scores of each moment in the history cache under that section category to form a quality control score time sequence. A weighted average is carried out over the quality control score time sequence to generate the overall quality control score of the past N frames of data under the current section category.
Step b, performing attention-mechanism optimization and decoding on the section navigation features: one round of attention computation is carried out over the cached high-dimensional features of the N frames of images to optimize the distribution of the navigation features. The optimized navigation features use the information of temporally continuous multiple frames to correct the navigation information of the current frame. The optimized navigation features are then input into the feature decoder of the section navigation network to obtain the section guidance prompt of the current frame, which indicates the position of the current frame medical image relative to the cardiac sections.
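Step b can be sketched as follows, with the current frame acting as the query over the cached features. Note the hedge: the patent states only that one round of attention computation is performed; the scaled dot-product form and all names below are assumptions.

```python
import numpy as np

def refine_navigation_features(cached, current):
    """Optimize the current frame's navigation feature with one attention
    pass over the cached high-dimensional features of the N frames.

    A hedged sketch: scaled dot-product attention with the current frame
    as the query, mixing information from temporally adjacent frames.
    """
    feats = np.vstack([cached, current[None, :]])   # (N, d) stack of frames
    d = feats.shape[1]
    scores = current @ feats.T / np.sqrt(d)         # similarity to each frame
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over frames
    return weights @ feats                          # attention-weighted mix

rng = np.random.default_rng(0)
cached = rng.normal(size=(4, 8))    # 4 cached historical frames, feature dim 8
current = rng.normal(size=8)
refined = refine_navigation_features(cached, current)
print(refined.shape)
```

When all cached features agree with the current one, the refined feature is unchanged; when the current frame is an outlier, it is pulled toward the temporally consistent frames, which is the correction behavior described above.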
Step c, performing feature filtering on the segmentation results: each time a frame's structure segmentation result is added, feature filtering is performed on the current segmentation features using the preceding segmentation features, so as to filter out abnormal feature regions in the current segmentation result. The segmentation features are decoded to generate an optimized segmentation result, which is cached in the cache pool.
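Step c can be sketched as follows, under the assumption that filtering with the preceding segmentation features is realized as a per-pixel agreement vote; the patent does not fix the exact filtering rule, so the voting scheme and names below are illustrative.

```python
import numpy as np

def filter_segmentation(prev_masks, current_mask, vote_ratio=0.5):
    """Filter abnormal regions in the current segmentation using the
    preceding cached segmentations.

    Assumed rule: a foreground pixel survives only if it is also
    foreground in at least `vote_ratio` of the preceding frames.
    """
    support = np.mean(np.stack(prev_masks), axis=0)   # per-pixel agreement
    return current_mask & (support >= vote_ratio)

prev = [np.array([[1, 1, 0], [1, 0, 0]], dtype=bool)] * 3
cur = np.array([[1, 1, 1], [1, 0, 0]], dtype=bool)    # spurious pixel at (0, 2)
print(filter_segmentation(prev, cur).astype(int))
```

The spurious pixel, absent from all preceding frames, is removed, while regions consistently segmented across frames are preserved — the behavior described as filtering abnormal feature areas.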
Fifth, after the data acquisition phase 1 is completed, the data analysis phase 2 is entered and the real-time results are transmitted to the user interface 604.
Illustratively, the quality control score result and the section navigation prompt are sent to the user interface in real time for guiding the user to perform standard section data acquisition in the following implementation manner:
first, a section navigation prompt 671.
By way of example, taking the medical object as a heart and the region of interest as the region where pericardial effusion is located: a real-time cardiac section navigation prompt window is displayed on the software user interface, prompting the displacement and angular deflection direction of the probe according to the current ultrasound image and the user's target standard section. The user adjusts the probe according to the cardiac section navigation prompt to hit a standard section with a higher quality control score. As shown in fig. 7, the section navigation prompt 671 includes: upward movement, downward movement, clockwise rotation, counterclockwise rotation, leftward movement, and rightward movement. The user can therefore adjust the position of the probe in real time according to the indication information in the section navigation prompt, so that the target section coincides with the standard section as much as possible.
And secondly, visually displaying the quality control score.
Illustratively, a quality control score bar 671 is drawn on the software user interface to display the quality control score of the currently cached video in real time. The user adjusts the orientation and angle of the probe according to the current quality control score to obtain as high a quality control score as possible. Meanwhile, the user can preset a quality control score threshold; when the quality control score reaches the preset threshold, automatic recording can be triggered, the cached video is stored locally, and the acquisition of the current section is completed. After the acquisition of all sections is completed, the analysis flow is entered.
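The threshold-triggered automatic recording described above can be sketched as follows. The class, its one-shot triggering behavior, and all names are hypothetical illustrations of the rule, not taken from the patent.

```python
class AutoRecorder:
    """Persist the cached video once the overall quality control score
    reaches the user-preset threshold (a minimal sketch of the trigger)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.saved = None

    def on_score(self, overall_score, cached_frames):
        # Trigger once: save a copy of the currently cached video locally.
        if self.saved is None and overall_score >= self.threshold:
            self.saved = list(cached_frames)
        return self.saved is not None

rec = AutoRecorder(threshold=0.8)
assert not rec.on_score(0.7, ["f1", "f2"])        # below threshold: keep acquiring
assert rec.on_score(0.85, ["f1", "f2", "f3"])     # threshold reached: recording triggered
print(rec.saved)
```

Snapshotting the cache at trigger time guarantees the stored video is exactly the sequence whose score met the threshold, matching the requirement that the user obtains a video with the specified quality control score.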
And step six, after the real-time acquisition is completed, entering a data analysis stage 2 to analyze the obtained data.
Illustratively, the data analysis stage 2 provides two functions: medical evaluation value calculation (module 607), and object-of-interest diagnosis with region-of-interest identification (the video frame sequence selection module 608 and the region of interest classification network 609), as shown in fig. 6. Taking the medical object as a heart and the region of interest as the region where pericardial effusion is located as an example, the following description is made:
Medical evaluation value calculation module 607: the area of the left ventricular endocardium in each frame of data is acquired according to the cached left ventricle segmentation results, and an area change curve is drawn. From the change curve, the maximum and minimum of the curve are determined and identified as the end-diastole and end-systole frames. The end-diastole and end-systole frame images and the corresponding left ventricle segmentation results are acquired. The contour of the left ventricular endocardium is extracted from the segmentation results, and the medical evaluation value 681 (for example, a biplane ejection fraction value) is obtained by using the contour of the left ventricular endocardium as the input data of the Simpson biplane algorithm.
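The extrema search on the cached area curve can be sketched as follows. Note the hedge: the patent computes the ejection fraction with the Simpson biplane algorithm on the endocardial contours; the area-based volume surrogate below (volume ≈ area^1.5, an area-length-style approximation) is purely illustrative.

```python
import numpy as np

def ejection_fraction_from_areas(areas):
    """Locate end-diastole (max area) and end-systole (min area) frames on
    the cached left-ventricle area curve, then estimate an ejection fraction.

    Illustrative only: real computation would feed the endocardial
    contours of the two frames into the Simpson biplane algorithm.
    """
    areas = np.asarray(areas, dtype=float)
    ed_frame = int(np.argmax(areas))     # end-diastole: largest cavity area
    es_frame = int(np.argmin(areas))     # end-systole: smallest cavity area
    edv = areas[ed_frame] ** 1.5         # assumed area-to-volume surrogate
    esv = areas[es_frame] ** 1.5
    ef = (edv - esv) / edv               # ejection fraction definition
    return ed_frame, es_frame, ef

ed, es, ef = ejection_fraction_from_areas([30, 34, 38, 33, 25, 20, 24, 29])
print(ed, es, round(ef, 3))
```

The curve extrema directly give the two key frames, after which only those two frames' segmentation results need to be decoded, which is why caching the per-frame areas makes the analysis stage fast.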
A video frame sequence selection module 608 and a region of interest classification network 609: and according to the cached target detection result of the pericardial effusion, finding a frame with highest probability of the suspected pericardial effusion, taking 8 frames forward and backward by taking the frame as a center frame, and forming a video frame sequence of the suspected pericardial effusion. The video frame sequence of the suspected pericardial effusion is input into the region of interest classification network, yielding a classification result 682 of the presence or absence of pericardial effusion. When the network output is that the pericardial effusion exists, diagnosing that the pericardial effusion exists in the video sequence, and returning the target frame medical image 683 with the highest pericardial effusion probability and the region of the intra-frame suspected pericardial effusion. When the network output is that there is no pericardial effusion, diagnosing that pericardial effusion is not visible in the video.
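The clip-selection rule above (the highest-probability frame as the center, with 8 frames taken forward and backward) can be sketched as follows; the boundary clamping for center frames near the start or end of the cache is an assumed detail.

```python
def select_suspect_clip(confidences, half_window=8):
    """Pick the frame with the highest suspected-effusion confidence and
    take `half_window` frames on each side to form the candidate clip.

    Sketch of the selection rule in module 608; indices are clamped to
    the cached range (an assumption for clips near the sequence edges).
    """
    center = max(range(len(confidences)), key=confidences.__getitem__)
    start = max(0, center - half_window)
    end = min(len(confidences), center + half_window + 1)
    return center, list(range(start, end))

conf = [0.1, 0.2, 0.9, 0.3] + [0.1] * 20
center, clip = select_suspect_clip(conf)
print(center, len(clip))
```

An interior center frame yields the full 17-frame clip (8 + 1 + 8), which is then fed to the region of interest classification network for the presence/absence decision.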
In the embodiment of the application, an artificial intelligence system assisting the operator in diagnosing the region of interest through the whole workflow is provided, integrating the functions of section quality control, section navigation and efficient intelligent diagnosis. The real-time AI processing architecture in the system can realize real-time reasoning on the video stream while caching the AI reasoning data for later use. By means of the data caching mechanism, the architecture realizes an effective time-sequence information utilization mechanism while performing real-time single-frame reasoning, and can reuse the cached AI reasoning data after entering the analysis stage, thereby greatly reducing the AI processing time in the analysis stage. This artificial intelligence ultrasound cine auxiliary diagnosis system for the medical object can effectively reduce the technical threshold of ultrasound cine examination, so that more non-professional ultrasound operators can realize accurate diagnosis. In a specific example, in the acquisition stage, the embodiment of the application provides a quality control score so that the operator can evaluate the quality of the currently acquired video sequence in real time, and simultaneously provides a cardiac section navigation function, so that the operator can hit a standard section with a higher quality control score as much as possible under the guidance of cardiac section navigation. When the quality control score reaches a threshold set by the operator, automatic recording by the software can be triggered, so that the operator obtains a video sequence with the specified quality control score.
After the video sequence is obtained, the AI reasoning results cached in advance during the acquisition stage are reused to find possible pericardial-effusion video sequence segments, and a video sequence analysis model is invoked to generate the diagnosis result of pericardial effusion and the suspected pericardial effusion region. Meanwhile, the left ventricular ejection fraction is calculated based on the cached structure segmentation results. The feature backbone of the YOLOX detection network is reused, which compresses the space occupied by model deployment and improves the running speed. By establishing an efficient cache mechanism, the time consumed by post-processing in the analysis stage is greatly reduced, and the real-time performance of diagnosis is greatly improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the application also provides a medical image analysis apparatus for realizing the medical image analysis method described above. The implementation of the solution provided by the apparatus is similar to that described in the above method, so for the specific limitations in the embodiments of the one or more medical image analysis apparatuses provided below, reference may be made to the limitations of the medical image analysis method above, which are not repeated herein.
In one embodiment, as shown in fig. 8, there is provided an analysis apparatus of medical image, the analysis apparatus 800 of medical image including: an obtaining module 801, configured to obtain a video stream of a medical object during an examination of the medical object; a first determining module 802, configured to determine target key auxiliary information required when examining the medical object based on the frame medical image in the video stream; and a second determining module 803, configured to analyze the target key auxiliary information, determine and output medical information of the region of interest in the frame medical image.
In one embodiment, the first determining module 802 is further configured to: determining current key auxiliary information required when the medical object is checked based on the current frame of medical image in the video stream; determining historical key auxiliary information required for checking the medical object based on the historical medical images in the video stream; wherein the time sequence of the historical medical image is before the time sequence of the current frame medical image and is continuous with the current frame medical image; and determining the current key auxiliary information and the historical key auxiliary information as the target key auxiliary information.
In one embodiment, the first determining module 802 is further configured to: and inputting the current frame medical image into an analysis network to obtain the current key auxiliary information.
In one embodiment, the analysis network comprises: a quality control network, a section navigation network, a region of interest detection network and a structure segmentation network, and the current key auxiliary information comprises: quality control information, high-dimensional features, the confidence of the region of interest, the position information of the region of interest and a segmentation result;
the first determining module 802 is further configured to: performing quality control analysis on the current frame medical image by adopting the quality control network to obtain quality control information representing the quality of the section to which the current frame medical image belongs; extracting the position features of the current frame medical image by adopting the section navigation network to obtain navigation information representing the current frame medical image; detecting the region of interest of the current frame medical image by adopting the region of interest detection network to obtain the confidence that the region of interest exists in the current frame medical image and the position information of the region of interest; and adopting the structure segmentation network to segment the medical object in the current frame medical image to obtain the segmentation result.
In one embodiment, the first determining module 802 is further configured to: adopting the quality control network to detect the anatomical structure of the current frame medical image to obtain target detection information of the anatomical structure of the current frame medical image; classifying the section categories of the current frame medical image to obtain the target section categories of the current frame medical image; and evaluating the quality of the section to which the current frame medical image belongs based on the target detection information and the target section category to obtain the quality control information.
In one embodiment, the first determining module 802 is further configured to: and filtering the segmentation result in the current key auxiliary information based on the segmentation result in the historical key auxiliary information to obtain an optimized segmentation result.
In one embodiment, the first determining module 802 is further configured to: respectively caching the quality control information, the high-dimensional features, the confidence of the region of interest, the position information of the region of interest and the optimized segmentation result in the current key auxiliary information in corresponding cache pools; wherein the historical key auxiliary information is cached in the cache pool.
In one embodiment, the second determining module 803 is further configured to: determine and output overall quality control information representing the section category to which the current frame of the medical image belongs, based on the current key auxiliary information and the historical key auxiliary information; and, when the overall quality control information meets a preset quality threshold, analyze the current and historical key auxiliary information in the cache pools to obtain and output medical information for the region of interest in the frame medical images in the video stream.
In one embodiment, the medical information includes the medical evaluation value of the medical object and the classification result of the region of interest, and the second determining module 803 is further configured to: when the overall quality control information reaches the preset quality threshold, retrieve a target segmentation result from the cache pool holding segmentation results and a target confidence from the cache pool holding region-of-interest confidences, wherein the target segmentation result comprises the segmentation results in the current and historical key auxiliary information, and the target confidence comprises the region-of-interest confidences in the current and historical key auxiliary information; determine a medical evaluation value of the medical object based on the target segmentation result; and classify the region of interest in the current frame and the historical medical images based on the target confidence to obtain a classification result indicating whether a region of interest exists.
In one embodiment, the second determining module 803 is further configured to: determine area change information of the medical object across the current frame and the historical medical images based on the target segmentation result; extract contour information of the medical object from the target segmentation result; and obtain the medical evaluation value of the medical object based on the area change information and the contour information.
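The evaluation step can be sketched by deriving a per-frame area from each binary mask, measuring the swing in area across frames, and taking a crude contour measure (boundary-pixel count) from the latest mask. How area change and contour are combined into the final evaluation value is an assumption; the patent leaves the combination rule open.

```python
def mask_area(mask):
    """Foreground pixel count of a binary mask."""
    return sum(sum(row) for row in mask)

def boundary_length(mask):
    """Count foreground pixels with at least one background or
    out-of-bounds 4-neighbour (a simple contour proxy)."""
    h, w = len(mask), len(mask[0])
    count = 0
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(not (0 <= a < h and 0 <= b < w) or not mask[a][b]
                   for a, b in nbrs):
                count += 1
    return count

def evaluate(masks):
    """Combine area-change and contour information from the target
    segmentation results (ordered oldest to newest)."""
    areas = [mask_area(m) for m in masks]
    return {
        "area_change": max(areas) - min(areas),
        "contour_length": boundary_length(masks[-1]),
    }
```

In a cardiac setting, for example, the max-min area swing over a cycle is the kind of quantity an ejection-style index would be built from, but that specific use is not stated in the text above.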
In one embodiment, the second determining module 803 is further configured to: determine, based on the target confidence, the target medical image with the highest confidence among the current frame and the historical medical images; determine, centered on the target medical image, a video frame sequence comprising a plurality of consecutive frames from the current frame and the historical medical images; and classify the region of interest in the video frame sequence to obtain the classification result indicating whether a region of interest exists in the sequence.
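The window-selection part of this step can be sketched as follows: pick the index with the highest region-of-interest confidence and take the consecutive frames centred on it. The window half-width and the clipping behaviour at the ends of the stream are assumptions not fixed by the text above.

```python
def centred_window(confidences, half_width=2):
    """Return the index of the highest-confidence frame and the list of
    frame indices in a window centred on it, clipped to valid range."""
    centre = max(range(len(confidences)), key=lambda i: confidences[i])
    start = max(0, centre - half_width)
    end = min(len(confidences), centre + half_width + 1)
    return centre, list(range(start, end))
```

The returned index list would then select the frames handed to the region-of-interest classifier.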
In one embodiment, the second determining module 803 is further configured to: output the target medical image and the position information of the region of interest to a user interaction interface when a region of interest exists in the video frame sequence.
In one embodiment, the second determining module 803 is further configured to: determine, within the historical key auxiliary information, candidate key auxiliary information corresponding to historical medical images whose section category matches that of the current frame; arrange the candidate key auxiliary information and the quality control information in the current key auxiliary information in time sequence to determine a quality control information sequence; and generate the overall quality control information based on that sequence.
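A minimal sketch of building the overall quality control information: gather the quality entries whose frames share the current frame's section category, order them by time, and aggregate. The entry format and the mean aggregation are assumptions; the text above does not fix the aggregation rule.

```python
def overall_quality(entries, current_category):
    """entries: list of (timestamp, section_category, quality_score)
    tuples drawn from the current and historical key auxiliary
    information. Returns the aggregated overall quality score."""
    same = [(t, q) for t, c, q in entries if c == current_category]
    same.sort(key=lambda item: item[0])  # time-sequence arrangement
    scores = [q for _, q in same]
    return sum(scores) / len(scores) if scores else 0.0
```

The resulting score is what would be compared against the preset quality threshold before the downstream analysis runs.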
In one embodiment, the second determining module 803 is further configured to: determine target navigation information for the current frame of the medical image based on the historical key auxiliary information and the current key auxiliary information; and output the target navigation information and the overall quality control information to a user interaction interface.
In one embodiment, the second determining module 803 is further configured to: adjust the high-dimensional features in the current key auxiliary information based on the high-dimensional features in the historical key auxiliary information to obtain adjusted high-dimensional features; and generate the target navigation information based on the adjusted high-dimensional features.
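One plausible form of this feature-adjustment step is to smooth the current high-dimensional feature vector toward the mean of the historical ones. The exponential-moving-average form and the smoothing factor are assumptions for illustration only.

```python
def adjust_features(current, history, alpha=0.7):
    """Blend the current feature vector with the mean of the cached
    historical feature vectors; falls back to the current vector when
    no history exists yet."""
    if not history:
        return list(current)
    mean_hist = [sum(col) / len(history) for col in zip(*history)]
    return [alpha * c + (1 - alpha) * h for c, h in zip(current, mean_hist)]
```

Smoothing like this damps frame-to-frame jitter in the features that drive the navigation hints, at the cost of reacting more slowly to a genuine probe movement.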
In one embodiment, the second determining module 803 is further configured to: present the target navigation information in a prompt window on a user interaction interface; and display the overall quality control information on the user interaction interface in real time.
In one embodiment, the apparatus further comprises: an acquisition module for acquiring a medical image to be analyzed; a third determining module for determining target high-dimensional features in the medical image to be analyzed based on the target navigation information and the overall quality control information; and a processing module for inputting the target high-dimensional features into a medical treatment network to obtain a medical treatment result for the medical image to be analyzed.
In one embodiment, the apparatus further comprises:
an output module for outputting the current frame of the medical image in the video stream to a user interaction interface, so that the current frame captured during the medical examination of the medical object is presented on the user interaction interface in real time.
The modules in the medical image analysis apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. Each module may be embedded in hardware, may be independent of the processor in the computer device, or may be stored as software in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to it.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure is shown in FIG. 9. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, memory, and input/output interface are connected through a system bus, while the communication interface, display unit, and input means are connected to the system bus through the input/output interface. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium, which stores an operating system and a computer program, and an internal memory, which provides the runtime environment for that operating system and computer program. The input/output interface exchanges information between the processor and external devices. The communication interface performs wired or wireless communication with external terminals; the wireless mode may be realized through Wi-Fi, a mobile cellular network, NFC (near-field communication), or other technologies. When executed by the processor, the computer program implements a method of analyzing medical images. The display unit forms the visual picture and may be a display screen, a projection device, or a virtual-reality imaging device; a display screen may be a liquid-crystal or electronic-ink screen. The input means may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 9 is merely a block diagram of part of the structure relevant to the present application and does not limit the computer devices to which the application applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including, but not limited to, user equipment information and user personal information) and data (including, but not limited to, data for analysis, stored data, and presented data) referred to in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the steps of the method embodiments described above. Any reference to memory, databases, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase-Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; however, any combination of these technical features that contains no contradiction should be considered within the scope of this description.
The above examples represent only a few embodiments of the present application; they are described in detail but are not to be construed as limiting the scope of the patent. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, and these fall within its scope of protection. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (19)

1. A method of analyzing medical images, the method comprising:
acquiring a video stream of the medical object during examination of the medical object;
inputting the frame medical images in the video stream into an analysis network comprising a plurality of parallel branches to determine target key auxiliary information required for examining the medical object; the target key auxiliary information comprises current key auxiliary information and historical key auxiliary information; the analysis network comprises a quality control network, a section navigation network, a region-of-interest detection network, and a structure segmentation network, and the current key auxiliary information comprises quality control information, high-dimensional features, a region-of-interest confidence, region-of-interest position information, and a segmentation result; performing quality control analysis on the current frame of the medical image using the quality control network to obtain quality control information representing the quality of the section to which the current frame belongs; extracting position features from the current frame using the section navigation network to obtain navigation information representing the current frame; performing region-of-interest detection on the current frame using the region-of-interest detection network to obtain the confidence that a region of interest exists in the current frame and the position information of that region; and segmenting the medical object in the current frame using the structure segmentation network to obtain the segmentation result;
analyzing the target key auxiliary information, and determining and outputting medical information for a region of interest in the frame medical images;
acquiring a medical image to be analyzed; determining target high-dimensional features in the medical image to be analyzed based on the target navigation information and the overall quality control information; and inputting the target high-dimensional features into a medical treatment network to obtain a medical treatment result for the medical image to be analyzed, wherein the target high-dimensional features correspond to the expected data distribution of a medical treatment algorithm in the medical treatment network.
2. The method of claim 1, wherein inputting the frame medical images in the video stream into an analysis network comprising a plurality of parallel branches to determine the target key auxiliary information required for examining the medical object comprises:
determining current key auxiliary information required when examining the medical object based on the current frame of the medical image in the video stream;
determining historical key auxiliary information required for examining the medical object based on the historical medical images in the video stream, wherein the historical medical images precede the current frame in time sequence and are temporally continuous with it;
and determining the current key auxiliary information and the historical key auxiliary information as the target key auxiliary information.
3. The method of claim 2, wherein the determining current critical auxiliary information required in the examination of the medical object based on the current frame of medical image in the video stream comprises:
and inputting the current frame medical image into an analysis network to obtain the current key auxiliary information.
4. The method according to claim 1, wherein performing quality control analysis on the current frame of the medical image using the quality control network to obtain quality control information characterizing the quality of the section to which the current frame belongs comprises:
detecting the anatomical structure of the current frame of the medical image using the quality control network to obtain target detection information for that anatomical structure;
classifying the section category of the current frame of the medical image to obtain the target section category of the current frame;
and evaluating the quality of the section to which the current frame of the medical image belongs based on the target detection information and the target section category to obtain the quality control information.
5. The method of claim 1, wherein, after segmenting the medical object in the current frame of the medical image using the structure segmentation network to obtain the segmentation result, the method further comprises:
filtering the segmentation result in the current key auxiliary information based on the segmentation result in the historical key auxiliary information to obtain an optimized segmentation result.
6. The method according to any one of claims 1 to 5, further comprising:
caching the quality control information, high-dimensional features, region-of-interest confidence, region-of-interest position information, and optimized segmentation result in the current key auxiliary information in corresponding cache pools; wherein the historical key auxiliary information is also cached in the cache pools.
7. The method of claim 6, wherein analyzing the target key assistance information, determining and outputting medical information for a region of interest in the frame of medical images, comprises:
determining and outputting overall quality control information representing the section category to which the current frame of the medical image belongs, based on the current key auxiliary information and the historical key auxiliary information;
and, when the overall quality control information meets a preset quality threshold, analyzing the current key auxiliary information and the historical key auxiliary information in the cache pools to obtain and output medical information for the region of interest in the frame medical images in the video stream.
8. The method of claim 7, wherein the medical information comprises: the medical evaluation value of the medical object and the classification result of the region of interest, and wherein analyzing the current key auxiliary information and the historical key auxiliary information in the cache pools when the overall quality control information meets the preset quality threshold, to obtain and output the medical information for the region of interest in the frame medical images in the video stream, comprises:
when the overall quality control information reaches the preset quality threshold, retrieving a target segmentation result from the cache pool that holds segmentation results, and a target confidence from the cache pool that holds region-of-interest confidences; the target segmentation result comprises the segmentation results in the current key auxiliary information and in the historical key auxiliary information; the target confidence comprises the region-of-interest confidences in the current key auxiliary information and in the historical key auxiliary information;
determining a medical evaluation value of the medical object based on the target segmentation result;
and classifying the region of interest based on the target confidence to obtain a classification result indicating whether a region of interest exists.
9. The method of claim 8, wherein the determining the medical evaluation value of the medical subject based on the target segmentation result comprises:
determining area change information of the medical object in the current frame medical image and the historical medical image based on the target segmentation result;
extracting contour information of the medical object from the target segmentation result;
and obtaining the medical evaluation value of the medical object based on the area change information and the contour information.
10. The method of claim 8, wherein classifying the current frame medical image and the historical medical image for a region of interest based on the target confidence, resulting in a classification result that characterizes whether the region of interest is present, comprises:
determining, based on the target confidence, the target medical image with the highest confidence among the current frame medical image and the historical medical images;
determining, centered on the target medical image, a video frame sequence comprising a plurality of consecutive frames from the current frame medical image and the historical medical images;
and classifying the region of interest in the video frame sequence to obtain the classification result indicating whether a region of interest exists in the video frame sequence.
11. The method according to claim 10, wherein the method further comprises:
and outputting the target medical image and the position information of the region of interest to a user interaction interface under the condition that the region of interest exists in the video frame sequence.
12. The method of claim 7, wherein determining and outputting the overall quality control information characterizing the section category to which the current frame of the medical image belongs, based on the current key auxiliary information and the historical key auxiliary information, comprises:
determining, within the historical key auxiliary information, candidate key auxiliary information corresponding to historical medical images whose section category matches that of the current frame of the medical image;
arranging the candidate key auxiliary information and the quality control information in the current key auxiliary information in time sequence to determine a quality control information sequence;
and generating the overall quality control information based on the quality control information sequence.
13. The method of claim 7, wherein the method further comprises:
determining target navigation information of the current frame medical image based on the historical key auxiliary information and the current key auxiliary information;
and outputting the target navigation information and the overall quality control information to a user interaction interface.
14. The method of claim 13, wherein the determining target navigation information for the current frame of medical image based on the historical key assistance information and the current key assistance information comprises:
adjusting the high-dimensional features in the current key auxiliary information based on the high-dimensional features in the historical key auxiliary information to obtain adjusted high-dimensional features;
and generating the target navigation information based on the adjusted high-dimensional features.
15. The method of claim 13, wherein outputting the target navigation information and the overall quality control information to a user interaction interface comprises:
presenting the target navigation information on a user interaction interface in the form of a prompt window;
and displaying the overall quality control information on the user interaction interface in real time.
16. The method according to claim 1, wherein the method further comprises:
outputting the current frame of the medical image in the video stream to a user interaction interface, so that the current frame captured during the medical examination of the medical object is presented on the user interaction interface in real time.
17. An apparatus for analyzing medical images, the apparatus comprising:
the acquisition module is used for acquiring the video stream of the medical object during examination of the medical object;
the first determining module is used for inputting the frame medical images in the video stream into an analysis network comprising a plurality of parallel branches to determine target key auxiliary information required for examining the medical object; the target key auxiliary information comprises current key auxiliary information and historical key auxiliary information; the analysis network comprises a quality control network, a section navigation network, a region-of-interest detection network, and a structure segmentation network, and the current key auxiliary information comprises quality control information, high-dimensional features, a region-of-interest confidence, region-of-interest position information, and a segmentation result; the quality control network is used to perform quality control analysis on the current frame of the medical image to obtain quality control information representing the quality of the section to which the current frame belongs; the section navigation network is used to extract position features from the current frame to obtain navigation information representing the current frame; the region-of-interest detection network is used to detect the region of interest in the current frame to obtain the confidence that a region of interest exists in the current frame and the position information of that region; and the structure segmentation network is used to segment the medical object in the current frame to obtain the segmentation result;
the second determining module is used for analyzing the target key auxiliary information, and determining and outputting medical information for a region of interest in the frame medical images;
the acquisition module is used for acquiring the medical image to be analyzed;
the third determining module is used for determining target high-dimensional features in the medical image to be analyzed based on the target navigation information and the overall quality control information; wherein the target high-dimensional features conform to the expected data distribution of a medical treatment algorithm in a medical treatment network;
and the processing module is used for inputting the target high-dimensional features into the medical treatment network to obtain a medical treatment result for the medical image to be analyzed.
18. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 16 when the computer program is executed.
19. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 16.
CN202311094780.6A 2023-08-29 2023-08-29 Medical image analysis method, medical image analysis device, computer equipment and storage medium Active CN116823829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311094780.6A CN116823829B (en) 2023-08-29 2023-08-29 Medical image analysis method, medical image analysis device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116823829A CN116823829A (en) 2023-09-29
CN116823829B true CN116823829B (en) 2024-01-09

Family

ID=88126114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311094780.6A Active CN116823829B (en) 2023-08-29 2023-08-29 Medical image analysis method, medical image analysis device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116823829B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017193251A1 (en) * 2016-05-09 2017-11-16 深圳迈瑞生物医疗电子股份有限公司 Method and system for recognizing region of interest profile in ultrasound image
CN109920518A (en) * 2019-03-08 2019-06-21 腾讯科技(深圳)有限公司 Medical image analysis method, apparatus, computer equipment and storage medium
CN112712507A (en) * 2020-12-31 2021-04-27 杭州依图医疗技术有限公司 Method and device for determining calcified area of coronary artery
CN113936775A (en) * 2021-10-09 2022-01-14 西北工业大学 Fetal heart ultrasonic standard tangent plane extraction method based on human-in-loop intelligent auxiliary navigation
WO2022069208A1 (en) * 2020-09-29 2022-04-07 Koninklijke Philips N.V. Ultrasound image-based patient-specific region of interest identification, and associated devices, systems, and methods
CN116407154A (en) * 2021-12-31 2023-07-11 深圳开立生物医疗科技股份有限公司 Ultrasonic diagnosis data processing method and device, ultrasonic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5587332B2 (en) * 2009-10-27 2014-09-10 株式会社日立メディコ Ultrasonic imaging apparatus and program for ultrasonic imaging
US11277626B2 (en) * 2020-02-21 2022-03-15 Alibaba Group Holding Limited Region of interest quality controllable video coding techniques
CN117119990A (en) * 2020-10-22 2023-11-24 史赛克公司 Systems and methods for capturing, displaying and manipulating medical images and videos

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Jian; Xiong Hong; Wang Ting; Yu Xiaoli; Chen Gong. Interface methods for medical imaging equipment and medical information systems. Chinese Journal of Health Information Management, 2017, (3), 100-104. *

Also Published As

Publication number Publication date
CN116823829A (en) 2023-09-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant