CN111507981A - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN111507981A
Authority
CN
China
Prior art keywords
image
endpoint
feature representation
image processing
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910098985.9A
Other languages
Chinese (zh)
Other versions
CN111507981B (en)
Inventor
肖月庭
阳光
郑超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shukun Technology Co ltd
Original Assignee
Shukun Beijing Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shukun Beijing Network Technology Co Ltd filed Critical Shukun Beijing Network Technology Co Ltd
Priority to CN201910098985.9A priority Critical patent/CN111507981B/en
Publication of CN111507981A publication Critical patent/CN111507981A/en
Application granted granted Critical
Publication of CN111507981B publication Critical patent/CN111507981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium based on a plurality of neural network models. The image processing method based on the plurality of neural network models comprises the following steps: generating a plurality of corresponding image preprocessing results for an input image by utilizing the plurality of neural network models; determining corresponding feature representation images based on the plurality of image preprocessing results; selecting one of the neural network models as a base model, with its corresponding feature representation as a base feature representation image, and taking the other neural network models as reference models, with their corresponding feature representations as reference feature representation images; and adjusting the base feature representation image based on the reference feature representation images to generate an output image.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium based on a plurality of neural network models.
Background
Neural networks are a tool for large-scale, multi-parameter optimization. Given large amounts of training data, a neural network can learn hidden features in the data that are difficult to summarize by hand, and can thereby complete complex tasks such as face detection, image semantic segmentation, object detection, motion tracking, and natural language translation. Currently, artificial intelligence techniques based on neural networks have been applied to the processing and analysis of medical images such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). For example, in a non-invasive intelligent auxiliary diagnosis system for coronary heart disease, automatic reconstruction and post-processing calculation of contrast-enhanced coronary CT images can be completed using computer vision and deep learning techniques based on neural networks.
Image processing tasks such as automated coronary reconstruction require segmentation of the coronary arteries. In the coronary artery segmentation process, the most common problems are vessel fracture (breaks in the segmented artery) and vein adhesion (veins mistakenly merged with the arteries). At present, coronary artery segmentation is generally performed with a traditional algorithm or with a single neural network model, and it is difficult for either approach to solve the fracture and vein adhesion problems simultaneously.
Disclosure of Invention
The present disclosure has been made in view of the above problems. The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium based on a plurality of neural network models.
According to an aspect of the present disclosure, there is provided an image processing method based on a plurality of neural network models, including: generating a plurality of corresponding image preprocessing results for an input image by utilizing the plurality of neural network models; determining corresponding feature representation images based on the plurality of image preprocessing results; selecting one of the neural network models as a base model, with its corresponding feature representation as a base feature representation image, and taking the other neural network models as reference models, with their corresponding feature representations as reference feature representation images; and adjusting the base feature representation image based on the reference feature representation images to generate an output image.
Further, according to an aspect of the present disclosure, the input image is an image of a tubular object, and determining the corresponding feature representation images based on the plurality of image preprocessing results includes: determining, based on the plurality of image preprocessing results, the centerline of the tubular object therein as the feature representation image.
Further, according to an aspect of the present disclosure, the base feature representation image is a base centerline, the reference feature representation images are reference centerlines, and adjusting the base feature representation image based on the reference feature representation images to generate the output image includes: determining a plurality of base endpoints of the base centerline; comparing the base centerline with the reference centerlines, and determining endpoints to be extended and fracture point pairs among the plurality of base endpoints; and extending the endpoints to be extended and connecting the fracture point pairs based on the reference centerlines to generate the output image.
Further, in the image processing method according to an aspect of the present disclosure, determining the plurality of base endpoints of the base centerline includes: generating a minimum spanning tree of the base centerline, and determining the plurality of base endpoints of the base centerline based on the node connectivity attributes of the minimum spanning tree.
Further, in the image processing method according to an aspect of the present disclosure, connecting a fracture point pair based on the reference centerline includes: for a first base endpoint, determining its corresponding first base endpoint center point sequence, and determining the corresponding first reference endpoint in the reference centerline together with the first reference endpoint center point sequence having the lowest similarity to it; for a second base endpoint, determining its corresponding second base endpoint center point sequence, and determining the corresponding second reference endpoint in the reference centerline together with the second reference endpoint center point sequence having the lowest similarity to it; if the first reference endpoint center point sequence and the second reference endpoint center point sequence have a coincident center point sequence, taking the first base endpoint and the second base endpoint as the fracture point pair; and supplementing the coincident center point sequence to the base centerline to connect the fracture point pair comprising the first base endpoint and the second base endpoint.
Further, in the image processing method according to an aspect of the present disclosure, extending an endpoint to be extended based on the reference centerline includes: for a third base endpoint, determining its corresponding third base endpoint center point sequence, and determining the corresponding third reference endpoint in the reference centerline together with the third reference endpoint center point sequence having the lowest similarity to it; and supplementing the third reference endpoint center point sequence to the base centerline to extend the third base endpoint, which serves as the endpoint to be extended.
Further, in the image processing method according to an aspect of the present disclosure, the tubular object is a coronary artery blood vessel.
According to another aspect of the present disclosure, there is provided an image processing apparatus based on a plurality of neural network models, including: a preprocessing module for generating a plurality of corresponding image preprocessing results for an input image by utilizing the plurality of neural network models; a feature representation image determination module for determining corresponding feature representation images based on the plurality of image preprocessing results; and an output image generation module for selecting one of the neural network models as a base model, with its corresponding feature representation as a base feature representation image, taking the other neural network models as reference models, with their corresponding feature representations as reference feature representation images, and adjusting the base feature representation image based on the reference feature representation images to generate an output image.
Further, in the image processing apparatus according to another aspect of the present disclosure, the input image is an image of a tubular object, and the feature representation image determination module determines the centerline of the tubular object therein as the feature representation image based on the plurality of image preprocessing results.
Further, in the image processing apparatus according to another aspect of the present disclosure, the base feature representation image is a base centerline, and the output image generation module determines a plurality of base endpoints of the base centerline; compares the base centerline with the reference centerlines, and determines endpoints to be extended and fracture point pairs among the plurality of base endpoints; and extends the endpoints to be extended and connects the fracture point pairs based on the reference centerlines to generate the output image.
Further, in the image processing apparatus according to another aspect of the present disclosure, the output image generation module generates a minimum spanning tree of the base centerline, and determines the plurality of base endpoints of the base centerline based on the node connectivity attributes of the minimum spanning tree.
Further, in the image processing apparatus according to another aspect of the present disclosure, the output image generation module determines, for a first base endpoint, its corresponding first base endpoint center point sequence, and determines the corresponding first reference endpoint in the reference centerline together with the first reference endpoint center point sequence having the lowest similarity to it; determines, for a second base endpoint, its corresponding second base endpoint center point sequence, and determines the corresponding second reference endpoint in the reference centerline together with the second reference endpoint center point sequence having the lowest similarity to it; if the first reference endpoint center point sequence and the second reference endpoint center point sequence have a coincident center point sequence, takes the first base endpoint and the second base endpoint as the fracture point pair; and supplements the coincident center point sequence to the base centerline to connect the fracture point pair comprising the first base endpoint and the second base endpoint.
Further, in the image processing apparatus according to another aspect of the present disclosure, the output image generation module determines, for a third base endpoint, its corresponding third base endpoint center point sequence, and determines the corresponding third reference endpoint in the reference centerline together with the third reference endpoint center point sequence having the lowest similarity to it; and supplements the third reference endpoint center point sequence to the base centerline to extend the third base endpoint, which serves as the endpoint to be extended.
Further, in the image processing apparatus according to another aspect of the present disclosure, the tubular object is a coronary artery blood vessel.
According to yet another aspect of the present disclosure, there is provided an electronic device including: a memory for storing computer readable instructions; and a processor for executing the computer readable instructions to perform the image processing method as described above.
According to still another aspect of the present disclosure, there is provided a computer-readable storage medium storing computer-readable instructions which, when executed by a computer, cause the computer to perform the image processing method as described above.
As will be described in detail below, the image processing method and the image processing apparatus according to the embodiments of the present disclosure use a combination of multiple neural network models, so that different neural network models complement one another: the strengths of different networks on different image processing problems (such as the fracture and vein adhesion problems in coronary artery segmentation) are fully exploited, and the processing accuracy of the neural network system as a whole is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the claimed technology.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram further illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram further illustrating an image processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a fusion process in an image processing method according to an embodiment of the present disclosure;
fig. 5 is a diagram illustrating a connection determination process in an image processing method according to an embodiment of the present disclosure;
fig. 6 is a block diagram illustrating an image processing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure; and
fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present disclosure more apparent, example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. The described embodiments are merely a subset of the embodiments of the present disclosure, not all of them, and the present disclosure is not limited to the example embodiments described herein.
First, an image processing method according to an embodiment of the present disclosure is described with reference to fig. 1 to 5.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure. As shown in fig. 1, an image processing method according to an embodiment of the present disclosure includes the following steps.
In step S101, for an input image, a plurality of corresponding image preprocessing results are generated using the plurality of neural network models.
The plurality of neural network models can be generated by different methods, including but not limited to: employing different neural network structures (e.g., a different number of network layers, or a different number of neurons in each layer), using different training data sets, using different input sizes (e.g., some models take volume data of size 64 × 256 × 256 as input, others 32 × 320 × 320), using different data preprocessing methods, and using different loss functions.
Further, in the embodiments of the present disclosure, the multiple neural network models may each be configured and optimized for a different problem, since it is often difficult for a single neural network model to solve all the problems arising in image processing. In addition, different image preprocessing results can be obtained from the plurality of neural network models by setting different confidence thresholds. Thereafter, the process proceeds to step S102.
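As an illustrative sketch of step S101 (not part of the patent itself), the following Python fragment runs several segmentation models over the same input volume and binarizes each probability map with that model's own confidence threshold. The `model.predict` interface, the model list, and the threshold values are assumptions made for illustration:

```python
import numpy as np

def preprocess_with_models(volume, models, thresholds):
    """Generate one binary image preprocessing result per (model, threshold) pair."""
    results = []
    for model, thr in zip(models, thresholds):
        prob = model.predict(volume)   # hypothetical API: per-voxel probabilities in [0, 1]
        results.append(prob >= thr)    # binarize with this model's own confidence threshold
    return results

# E.g., a fracture-oriented model might use a low threshold to retain thin,
# faint vessel segments, while an adhesion-oriented model uses a higher one:
# results = preprocess_with_models(ct_volume, [m1, m2, m3], [0.3, 0.5, 0.7])
```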
In step S102, corresponding feature representation images are determined based on the plurality of image preprocessing results. In particular, different feature representation images may be selected for different objects to be processed in the input image. For example, as will be described in detail below, when the object to be processed in the input image is a coronary artery blood vessel, the centerline of the blood vessel may be selected as the feature representation image. Thereafter, the process proceeds to step S103.
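The patent does not specify how a centerline is computed from a preprocessing result; morphological skeletonization is one common choice, sketched below under that assumption:

```python
import numpy as np
from skimage.morphology import skeletonize

def extract_centerline(binary_mask):
    """Reduce a binary tubular mask to a one-voxel-wide centerline.

    Recent scikit-image versions handle 3-D masks in skeletonize (older
    versions used skeletonize_3d); the result is an (N, ndim) array of
    centerline voxel coordinates.
    """
    skeleton = skeletonize(binary_mask)
    return np.argwhere(skeleton)
```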
In step S103, one of the neural network models is selected as the base model, with its corresponding feature representation as the base feature representation image, and the other neural network models serve as reference models, with their corresponding feature representations as reference feature representation images. The base model may be the model having the best evaluation index among the plurality of neural network models. Thereafter, the process proceeds to step S104.
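The text says only that the base model may be the one with the best evaluation index; the Dice score on a held-out validation set is one plausible such index, assumed here for illustration:

```python
def pick_base_model(models, dice_scores):
    """Split models into (base, references) by the best validation Dice score."""
    best = max(range(len(models)), key=lambda i: dice_scores[i])
    base = models[best]
    references = [m for i, m in enumerate(models) if i != best]
    return base, references
```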
In step S104, the base feature representation image is adjusted based on the reference feature representation images to generate an output image. In the embodiments of the present disclosure, parts that may be missing from the base feature representation image are supplemented from the reference feature representation images and fused in, so that the adjusted base feature representation image embodies the strengths of the different neural network models.
Fig. 2 is a flow chart further illustrating an image processing method according to an embodiment of the present disclosure. Fig. 3 is a schematic diagram further illustrating an image processing method according to an embodiment of the present disclosure. In the example shown in figs. 2 and 3, the image processing method according to the embodiment of the present disclosure is used in the image processing of automated coronary reconstruction, performing coronary artery segmentation while solving both the fracture and vein adhesion problems.
In step S201, for an input image, a plurality of corresponding image preprocessing results are generated using the plurality of neural network models.
As already described above with reference to fig. 1, the multiple neural network models may each be optimized for a different problem. For example, some neural network models are trained against the fracture problem, while others are trained against the vein adhesion problem. It is often difficult for a single neural network model to solve both the fracture and vein adhesion problems in coronary artery segmentation simultaneously. In addition, different image preprocessing results are obtained from the plurality of neural network models by setting different confidence thresholds.
Referring to fig. 3, the input image 301 is a coronary CT angiography image. The input image 301 is processed by the plurality of neural network models 30₁-30₃ in the neural network system to generate a plurality of corresponding image preprocessing results.
In step S202, the centerline of the tubular object, serving as the feature representation image, is determined based on the plurality of image preprocessing results. In one embodiment of the present disclosure, where the input image 301 is an image of a tubular object, such as a coronary CT angiography image, the centerline of the tubular object is determined as the feature representation image.
In step S203, one of the neural network models is selected as the base model, with its corresponding feature representation as the base centerline, and the other neural network models serve as reference models, with their corresponding feature representations as reference centerlines. In one embodiment of the present disclosure, the model with the smallest segmentation result, the model with the best overall effect, or the model with the fewest fractures among the plurality of neural network models may be selected as the base model. The feature representation generated by the base model is the base centerline; the other neural network models serve as reference models, and the feature representations they generate are the reference centerlines.
In step S204, a plurality of base endpoints of the base centerline are determined. In one embodiment of the present disclosure, a minimum spanning tree may be generated from the base centerline using Prim's algorithm or Kruskal's algorithm. After the minimum spanning tree is generated, the plurality of base endpoints of the base centerline are determined based on the node connectivity attributes of the minimum spanning tree.
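A minimal sketch of this endpoint-detection step, assuming the base centerline is given as an array of voxel coordinates: nearby voxels are linked into a weighted graph, a minimum spanning tree is computed (networkx defaults to Kruskal's algorithm), and nodes of degree 1 are reported as endpoints. The edge-construction radius is an assumption:

```python
import numpy as np
import networkx as nx

def centerline_endpoints(points, max_edge_len=2.0):
    """Return indices of endpoint voxels (degree-1 MST nodes) of a centerline."""
    g = nx.Graph()
    n = len(points)
    for i in range(n):                 # O(n^2) pairing; acceptable for a sketch
        for j in range(i + 1, n):
            d = float(np.linalg.norm(points[i] - points[j]))
            if d <= max_edge_len:      # link only voxels that are near neighbors
                g.add_edge(i, j, weight=d)
    mst = nx.minimum_spanning_tree(g)  # Kruskal's algorithm by default
    return [v for v in mst.nodes if mst.degree(v) == 1]
```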
In step S205, the base centerline is compared with the reference centerlines, and the endpoints to be extended and the fracture point pairs among the plurality of base endpoints are determined. In one embodiment of the present disclosure, the base centerline may be compared with one or more of the reference centerlines to determine which of the base endpoints of the base centerline are endpoints to be extended and which form fracture point pairs.
In step S206, the endpoints to be extended are extended and the fracture point pairs are connected based on the reference centerlines to generate the output image.
As shown in fig. 3, one of the plurality of neural network models 30₁-30₃ is selected as the base model (e.g., neural network model 30₁), with the other models (e.g., neural network models 30₂ and 30₃) serving as reference models. A new centerline is generated by fusing the base centerline of the base model with the reference centerlines of the reference models, and the output image 302 containing the fused new centerline is output.
Fig. 4 is a schematic diagram illustrating the fusion process in an image processing method according to an embodiment of the present disclosure. As shown in fig. 4, the base centerline 401 and the reference centerline 402 are fused to generate a new centerline 403.
Specifically, for a first base endpoint A, its corresponding first base endpoint center point sequence AO is determined, and the corresponding first reference endpoint A′ in the reference centerline is determined together with the first reference endpoint center point sequence A′D′ having the lowest similarity to AO. For example, the center point sequences of point A′ are A′E′ and A′D′; comparing AO with A′E′ and A′D′ shows that A′D′ is the center point sequence missing at point A (because AO is more similar to A′E′).
For a second base endpoint B, its corresponding second base endpoint center point sequence BD is determined, and the corresponding second reference endpoint B′ in the reference centerline is determined together with the second reference endpoint center point sequence B′E′ having the lowest similarity to BD. For example, the center point sequences of point B′ are B′E′ and B′D′; comparing BD with B′E′ and B′D′ shows that B′E′ is the center point sequence missing at point B (because BD is more similar to B′D′).
Because the first reference endpoint center point sequence and the second reference endpoint center point sequence have a coincident center point sequence A′B′ (i.e., A′D′ and B′E′ overlap), the first base endpoint A and the second base endpoint B form a fracture point pair.
The coincident center point sequence A′B′ is supplemented to the base centerline 401 to connect the fracture point pair comprising the first base endpoint A and the second base endpoint B.
Likewise, for a third base endpoint C, its corresponding third base endpoint center point sequence CO is determined, and the corresponding third reference endpoint C′ in the reference centerline is determined together with the third reference endpoint center point sequence C′H′ having the lowest similarity to CO. The third reference endpoint center point sequence C′H′ is supplemented to the base centerline 401 to extend the third base endpoint C, which serves as the endpoint to be extended.
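The following sketch condenses the selection rule just described. The similarity measure between center point sequences is not fixed by the patent; the negative mean point-to-point distance over the overlapping length is assumed here:

```python
import numpy as np

def sequence_similarity(seq_a, seq_b):
    """Higher is more similar; assumed measure: negative mean point distance."""
    n = min(len(seq_a), len(seq_b))
    return -float(np.mean(np.linalg.norm(seq_a[:n] - seq_b[:n], axis=1)))

def missing_sequence(base_seq, reference_seqs):
    """Pick the reference endpoint's sequence LEAST similar to the base one,
    e.g. choose A'D' rather than A'E' when the base sequence is AO."""
    return min(reference_seqs, key=lambda s: sequence_similarity(base_seq, s))
```

If the sequences selected for two base endpoints share a segment (as A′D′ and B′E′ share A′B′ above), the two endpoints form a fracture point pair and the shared segment is appended to the base centerline; otherwise the selected sequence simply extends a single endpoint, as with C and C′H′.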
Fig. 5 is a schematic diagram illustrating the connection determination process in an image processing method according to an embodiment of the present disclosure. Because blood vessels are distributed three-dimensionally in space, whether two endpoints should be connected must be judged from factors such as the distance between the vessel fracture points and the three-dimensional course of the vessels.
In case 1, shown at 501, the vessel course of the branch containing A does not match the vessel course of the branch containing A′, so A and A′ are not connected. (The judgment conditions are whether the vessel courses differ and whether the vessels extend in the same direction; the extension angles need not be identical, only within an angle threshold of each other.)
In case 2, shown at 502, the vessel course of the branch containing A closely matches that of the branch containing A′, so A and A′ are connected.
In case 3, shown at 503, the vessel course of the branch containing A matches that of the branch containing A′, but the two segments are not coplanar. This is most likely because a sudden irregular heartbeat or uncontrolled breathing of the patient during the CT scan misaligned the image slices; the two segments in essence belong to the same vessel, so A and A′ are connected.
In case 4, shown at 504, the vessel course of the branch containing A matches that of the branch containing A′ in the initial segment but then diverges; the two are most likely different branches, so A and A′ are not connected.
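A sketch of the angle test underlying cases 1 to 4: each branch's local course near its endpoint is estimated from its last few centerline points, and the connection is accepted only if the two courses agree within an angle threshold. The 5-point window and 30-degree threshold are illustrative assumptions, and the distance and non-coplanarity considerations of cases 3 and 4 would be layered on top of this test:

```python
import numpy as np

def should_connect(branch_a, branch_b, angle_thresh_deg=30.0, window=5):
    """Accept the A-A' connection if the local vessel courses roughly agree.

    Both branches are assumed to be ordered in the same global direction
    along the vessel, each ending at its endpoint.
    """
    da = branch_a[-1] - branch_a[-window]   # local course near endpoint A
    db = branch_b[-1] - branch_b[-window]   # local course near endpoint A'
    cos = np.dot(da, db) / (np.linalg.norm(da) * np.linalg.norm(db))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle <= angle_thresh_deg
```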
Fig. 6 is a block diagram illustrating an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 60 shown in fig. 6 may be used to perform the image processing method according to the embodiments of the present disclosure shown in figs. 1 to 3. As shown in fig. 6, the image processing apparatus 60 includes a preprocessing module 601, a feature representation image determination module 602, and an output image generation module 603. Those skilled in the art will understand that these modules may be implemented in various ways (by hardware alone, by software alone, or by a combination of the two), and the present disclosure is not limited to any one of them.
The preprocessing module 601 is configured to generate a plurality of image preprocessing results for the input image by using the plurality of neural network models.
The feature representation image determination module 602 is configured to determine corresponding feature representation images based on the plurality of image preprocessing results. When the input image is an image of a tubular object, the feature representation image determination module 602 determines the centerline of the tubular object therein as the feature representation image based on the plurality of image preprocessing results.
The output image generation module 603 is configured to select one of the neural network models as the base model, with its corresponding feature representation as the base feature representation image, take the other neural network models as reference models, with their corresponding feature representations as reference feature representation images, and adjust the base feature representation image based on the reference feature representation images to generate an output image.
When the base feature representation image is a base centerline, the output image generation module 603 determines a plurality of base endpoints of the base centerline; compares the base centerline with the reference centerlines to determine the endpoints to be extended and the fracture point pairs among the plurality of base endpoints; and extends the endpoints to be extended and connects the fracture point pairs based on the reference centerlines to generate the output image. The output image generation module 603 also generates a minimum spanning tree of the base centerline, and determines the plurality of base endpoints of the base centerline based on the node connectivity attributes of the minimum spanning tree.
Fig. 7 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, an electronic device 70 according to an embodiment of the present disclosure includes a memory 701 and a processor 702. The various components in the electronic device 70 are interconnected by a bus system and/or other form of connection mechanism (not shown).
The memory 701 is used to store computer readable instructions. In particular, memory 701 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 702 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 70 to perform desired functions. In an embodiment of the present disclosure, the processor 702 is configured to execute the computer readable instructions stored in the memory 701, so that the electronic device 70 executes the image processing method described with reference to fig. 1 and 2.
Further, it is to be understood that the components and configuration of the electronic device 70 shown in fig. 7 are exemplary only, and not limiting; the electronic device 70 may have other components and configurations as desired, for example, an image acquisition device and an output device (not shown). The image acquisition device may be used to acquire images to be processed and store them in the memory 701 for use by other components. Alternatively, other image acquisition equipment may acquire the images to be processed and send them to the electronic device 70, which then stores the received images in the memory 701. The output device may output various information, such as image information and image processing results, to the outside (e.g., to a user), and may include one or more of a display, speakers, a projector, a network card, and the like.
Fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 8, a computer-readable storage medium 800 according to embodiments of the present disclosure has computer-readable instructions 801 stored thereon. The computer readable instructions 801, when executed by a processor, perform the image processing method described with reference to fig. 1 and 2.
As described above, the image processing method and the image processing apparatus according to the embodiments of the present disclosure use a combination of multiple neural network models, so that different neural network models complement one another: the strengths of different networks on different image processing problems (such as the fracture and vein adhesion problems in coronary artery segmentation) are fully exploited, and the processing accuracy of the neural network system as a whole is improved.
The terms "first," "second," and "third," etc. in the description and claims of the present disclosure and in the drawings are used for distinguishing between different objects and not for describing a particular order.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
Also, as used herein, "or" used in a list of items beginning with "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (16)

1. An image processing method based on a plurality of neural network models, comprising:
generating a plurality of corresponding image preprocessing results for an input image by utilizing the plurality of neural network models;
determining corresponding feature representation images based on the plurality of image preprocessing results;
selecting one of the neural network models as a base model, with its corresponding feature representation as a base feature representation image, and taking the other neural network models as reference models, with their corresponding feature representations as reference feature representation images; and
adjusting the base feature representation image based on the reference feature representation images to generate an output image.
2. The image processing method of claim 1, wherein the input image is an image of a tubular object, and determining the corresponding feature representation images based on the plurality of image preprocessing results comprises:
determining, based on the plurality of image preprocessing results, the centerline of the tubular object therein as the feature representation image.
3. The image processing method according to claim 2, wherein the base feature representation image is a base centerline and the reference feature representation images are reference centerlines, and adjusting the base feature representation image based on the reference feature representation images to generate the output image comprises:
determining a plurality of base endpoints of the base centerline;
comparing the base centerline with the reference centerlines, and determining endpoints to be extended and fracture point pairs among the plurality of base endpoints; and
extending the endpoints to be extended and connecting the fracture point pairs based on the reference centerlines to generate the output image.
4. The image processing method of claim 3, wherein determining the plurality of base endpoints of the base centerline comprises:
generating a minimum spanning tree of the base centerline, and determining the plurality of base endpoints of the base centerline based on the node connectivity attributes of the minimum spanning tree.
5. The image processing method of claim 3, wherein connecting a fracture point pair based on the reference centerline comprises:
for a first base endpoint, determining its corresponding first base endpoint center point sequence, and determining the corresponding first reference endpoint in the reference centerline together with the first reference endpoint center point sequence having the lowest similarity to it;
for a second base endpoint, determining its corresponding second base endpoint center point sequence, and determining the corresponding second reference endpoint in the reference centerline together with the second reference endpoint center point sequence having the lowest similarity to it;
if the first reference endpoint center point sequence and the second reference endpoint center point sequence have a coincident center point sequence, taking the first base endpoint and the second base endpoint as the fracture point pair; and
supplementing the coincident center point sequence to the base centerline to connect the fracture point pair comprising the first base endpoint and the second base endpoint.
6. The image processing method of claim 3, wherein extending an endpoint to be extended based on the reference centerline comprises:
for a third base endpoint, determining its corresponding third base endpoint center point sequence, and determining the corresponding third reference endpoint in the reference centerline together with the third reference endpoint center point sequence having the lowest similarity to it; and
supplementing the third reference endpoint center point sequence to the base centerline to extend the third base endpoint, which serves as the endpoint to be extended.
7. The image processing method of any of claims 1 to 6, wherein the tubular object is a coronary artery blood vessel.
8. An image processing apparatus based on a plurality of neural network models, comprising:
a preprocessing module for generating a plurality of corresponding image preprocessing results for an input image by utilizing the plurality of neural network models;
a feature representation image determination module for determining corresponding feature representation images based on the plurality of image preprocessing results; and
an output image generation module for selecting one of the neural network models as a base model, with its corresponding feature representation as a base feature representation image, taking the other neural network models as reference models, with their corresponding feature representations as reference feature representation images, and adjusting the base feature representation image based on the reference feature representation images to generate an output image.
9. The image processing apparatus according to claim 8, wherein the input image is an image of a tubular object, and
the feature representation image determination module determines the centerline of the tubular object therein as the feature representation image based on the plurality of image preprocessing results.
10. The image processing apparatus according to claim 9, wherein the base feature representation image is a base centerline and the reference feature representation images are reference centerlines, and
the output image generation module determines a plurality of base endpoints of the base centerline;
compares the base centerline with the reference centerlines, and determines endpoints to be extended and fracture point pairs among the plurality of base endpoints; and
extends the endpoints to be extended and connects the fracture point pairs based on the reference centerlines to generate the output image.
11. The image processing apparatus according to claim 10, wherein the output image generation module generates a minimum spanning tree of the base centerline, and determines the plurality of base endpoints of the base centerline based on the node connectivity attributes of the minimum spanning tree.
12. The image processing apparatus according to claim 11, wherein the output image generation module determines, for a first base endpoint, its corresponding first base endpoint center point sequence, and determines the corresponding first reference endpoint in the reference centerline together with the first reference endpoint center point sequence having the lowest similarity to it;
determines, for a second base endpoint, its corresponding second base endpoint center point sequence, and determines the corresponding second reference endpoint in the reference centerline together with the second reference endpoint center point sequence having the lowest similarity to it;
if the first reference endpoint center point sequence and the second reference endpoint center point sequence have a coincident center point sequence, takes the first base endpoint and the second base endpoint as the fracture point pair; and
supplements the coincident center point sequence to the base centerline to connect the fracture point pair comprising the first base endpoint and the second base endpoint.
13. The image processing apparatus according to claim 11, wherein the output image generation module determines, for a third base endpoint, its corresponding third base endpoint center point sequence, and determines the corresponding third reference endpoint in the reference centerline together with the third reference endpoint center point sequence having the lowest similarity to it; and
supplements the third reference endpoint center point sequence to the base centerline to extend the third base endpoint, which serves as the endpoint to be extended.
14. The image processing apparatus of any of claims 8 to 13, wherein the tubular object is a coronary artery blood vessel.
15. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions to perform the image processing method of any of claims 1 to 7.
16. A computer-readable storage medium storing computer-readable instructions which, when executed by a computer, cause the computer to perform the image processing method according to any one of claims 1 to 7.
CN201910098985.9A 2019-01-31 2019-01-31 Image processing method and device, electronic equipment and computer readable storage medium Active CN111507981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910098985.9A CN111507981B (en) 2019-01-31 2019-01-31 Image processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910098985.9A CN111507981B (en) 2019-01-31 2019-01-31 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111507981A true CN111507981A (en) 2020-08-07
CN111507981B CN111507981B (en) 2021-07-13

Family

ID=71877460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910098985.9A Active CN111507981B (en) 2019-01-31 2019-01-31 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111507981B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308857A (en) * 2020-12-25 2021-02-02 数坤(北京)网络科技有限公司 Method and device for determining center line of blood vessel and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441763A (en) * 2008-11-11 2009-05-27 浙江大学 Multiple-colour tone image unity regulating method based on color transfer
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107392891A (en) * 2017-06-28 2017-11-24 深圳先进技术研究院 Vessel tree extraction method, apparatus, equipment and storage medium
CN107423571A (en) * 2017-05-04 2017-12-01 深圳硅基仿生科技有限公司 Diabetic retinopathy identifying system based on eye fundus image
CN107451983A (en) * 2017-07-18 2017-12-08 中山大学附属第六医院 The three-dimensional fusion method and system of CT images
CN108197623A (en) * 2018-01-19 2018-06-22 百度在线网络技术(北京)有限公司 For detecting the method and apparatus of target
CN108399406A (en) * 2018-01-15 2018-08-14 中山大学 The method and system of Weakly supervised conspicuousness object detection based on deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441763A (en) * 2008-11-11 2009-05-27 浙江大学 Multiple-colour tone image unity regulating method based on color transfer
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107423571A (en) * 2017-05-04 2017-12-01 深圳硅基仿生科技有限公司 Diabetic retinopathy identifying system based on eye fundus image
CN107392891A (en) * 2017-06-28 2017-11-24 深圳先进技术研究院 Vessel tree extraction method, apparatus, equipment and storage medium
CN107451983A (en) * 2017-07-18 2017-12-08 中山大学附属第六医院 The three-dimensional fusion method and system of CT images
CN108399406A (en) * 2018-01-15 2018-08-14 中山大学 The method and system of Weakly supervised conspicuousness object detection based on deep learning
CN108197623A (en) * 2018-01-19 2018-06-22 百度在线网络技术(北京)有限公司 For detecting the method and apparatus of target

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308857A (en) * 2020-12-25 2021-02-02 数坤(北京)网络科技有限公司 Method and device for determining center line of blood vessel and readable storage medium
CN112308857B (en) * 2020-12-25 2021-05-11 数坤(北京)网络科技有限公司 Method and device for determining center line of blood vessel and readable storage medium

Also Published As

Publication number Publication date
CN111507981B (en) 2021-07-13

Similar Documents

Publication Publication Date Title
KR102016959B1 (en) Method and apparatus for generating blood vessel model
JP7058373B2 (en) Lesion detection and positioning methods, devices, devices, and storage media for medical images
CN111145206B (en) Liver image segmentation quality assessment method and device and computer equipment
CN109886933B (en) Medical image recognition method and device and storage medium
CN111696089B (en) Arteriovenous determination method, device, equipment and storage medium
US9471989B2 (en) Vascular anatomy modeling derived from 3-dimensional medical image processing
US8548213B2 (en) Method and system for guiding catheter detection in fluoroscopic images
EP2554120B1 (en) Projection image generation device, projection image generation programme, and projection image generation method
JP4914517B2 (en) Structure detection apparatus and method, and program
JP5700964B2 (en) Medical image processing apparatus, method and program
CN112861961B (en) Pulmonary blood vessel classification method and device, storage medium and electronic equipment
JP2009504297A (en) Method and apparatus for automatic 4D coronary modeling and motion vector field estimation
CN111145173A (en) Plaque identification method, device, equipment and medium for coronary angiography image
US9198603B2 (en) Device, method and program for searching for the shortest path in a tubular structure
JP2010500093A (en) Data set selection from 3D rendering for viewing
CN111178420B (en) Coronary artery segment marking method and system on two-dimensional contrast image
US8050470B2 (en) Branch extension method for airway segmentation
JP5072625B2 (en) Image processing apparatus and method
JP2023521773A (en) Method and apparatus for automatically processing blood vessel images
JP2024059614A (en) Method and device for processing blood vessel video based on user input
CN111507981B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112446911A (en) Centerline extraction, interface interaction and model training method, system and equipment
JP5558793B2 (en) Image processing method, image processing apparatus, and program
CN111507455A (en) Neural network system generation method and device, image processing method and electronic equipment
CN109410170A (en) Image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 310, Jinhui building, Qiyang Road, Chaoyang District, Beijing

Applicant after: Shukun (Beijing) Network Technology Co.,Ltd.

Address before: Room 310, Jinhui building, Qiyang Road, Chaoyang District, Beijing

Applicant before: SHUKUN (BEIJING) NETWORK TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant after: Shukun (Beijing) Network Technology Co.,Ltd.

Address before: Room 310, Jinhui building, Qiyang Road, Chaoyang District, Beijing

Applicant before: Shukun (Beijing) Network Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Patentee after: Shukun Technology Co.,Ltd.

Country or region after: China

Address before: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Patentee before: Shukun (Beijing) Network Technology Co.,Ltd.

Country or region before: China