CN108597589B - Model generation method, target detection method and medical imaging system - Google Patents

Model generation method, target detection method and medical imaging system

Info

Publication number
CN108597589B
CN108597589B (application CN201810395323.3A)
Authority
CN
China
Prior art keywords
distance field
sample
medical image
target
frame
Prior art date: 2018-04-27
Legal status: Active
Application number
CN201810395323.3A
Other languages
Chinese (zh)
Other versions
CN108597589A (en)
Inventor
周鑫
李强
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date: 2018-04-27
Filing date: 2018-04-27
Publication date: 2022-07-05
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN201810395323.3A
Publication of CN108597589A
Application granted
Publication of CN108597589B

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Abstract

Embodiments of the invention provide a model generation method, a target detection method and a medical imaging system, relating to the technical field of medical image processing, and aim to reduce the complexity of target detection and improve the stability of the solution by determining the target frame through a distance field. The method comprises a model generation process and a target detection process. The model generation process comprises: acquiring a sample medical image and a sample target frame corresponding to the sample medical image; generating a distance field according to the sample target frame; learning the sample medical image and the distance field through an artificial intelligence network to obtain a mapping relation between the sample medical image and the distance field; and generating an artificial intelligence network model according to the mapping relation. The target detection process comprises: acquiring a medical image of a target region of a subject; processing the medical image through the artificial intelligence network model to obtain a distance field; and determining a target frame in the medical image according to the distance field. The technical scheme provided by the embodiments of the invention is suitable for determining a target region in a medical image.

Description

Model generation method, target detection method and medical imaging system
[ technical field ]
The invention relates to the technical field of medical image processing, in particular to a model generation method, a target detection method and a medical imaging system.
[ background of the invention ]
Object detection is a specific task of computer vision and image processing that aims to locate an object in a medical image and mark it with a bounding box (also called a target bounding box). In the field of medical image processing, object detection has many important applications, such as automatically locating organs or detecting specific lesions.
At present there are many target detection methods based on convolutional neural networks. The current methods, such as RCNN (Regions with CNN features) and YOLO (You Only Look Once), are all based on a technique called bounding-box regression: the target frame is defined by four parameters (x, y, w, h), where (x, y) are the coordinates of the center point of the target frame and w and h are its width and height, and the mathematical modeling consists of translating the center point and scaling the width and height. Each bounding-box regression requires sufficient sampling, and most methods regress the target frame starting from nine anchor boxes of different sizes and aspect ratios. Each parameter of the box model is an independent dimension, and in a convolutional neural network an independent channel is needed to regress each of them. More problematically, when building a model a balance must be struck among the multiple parameters; otherwise the learning of the parameters is unbalanced during training and an erroneous model is easily produced. The sketch below illustrates this conventional parameterization.
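The following is a minimal illustrative Python sketch of the four-parameter encoding that RCNN-family detectors regress (our own code, not from the patent; the helper name and the anchor/ground-truth values are hypothetical):

```python
import numpy as np

def regression_targets(anchor, gt):
    """anchor, gt: (x_center, y_center, w, h). Returns (tx, ty, tw, th)."""
    ax, ay, aw, ah = anchor
    gx, gy, gw, gh = gt
    tx = (gx - ax) / aw   # center translation, normalized by anchor width
    ty = (gy - ay) / ah   # center translation, normalized by anchor height
    tw = np.log(gw / aw)  # width scaling on a log scale
    th = np.log(gh / ah)  # height scaling on a log scale
    return tx, ty, tw, th

# Nine anchors, each with its own four regression channels, is the
# parameter-balance burden described above.
print(regression_targets((50, 50, 32, 32), (54, 48, 40, 36)))
```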
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
in the prior-art target detection methods, a plurality of parameters (nine box models, with four or six parameters per target frame) are required to determine the target frame, which increases the complexity of the problem and reduces the stability of the solution.
[ summary of the invention ]
In view of this, embodiments of the present invention provide a model generation method, a target detection method, and a medical imaging system in which, during target detection, the target frame is determined through a distance field, so that the complexity of target detection is reduced and the stability of the solution is improved.
In a first aspect, an embodiment of the present invention provides a model generation method, where the method includes:
acquiring a sample medical image and a sample target frame corresponding to the sample medical image;
generating a distance field according to the sample target bounding box;
learning the sample medical image and the distance field through an artificial intelligence network to obtain a mapping relation between the sample medical image and the distance field;
and generating an artificial intelligence network model according to the mapping relation.
The above aspect and any possible implementation further provide an implementation in which:
the distance field comprises an intra-bounding distance field; or,
the distance field comprises an intra-bounding distance field and an extra-bounding distance field; or,
the distance field comprises a weighted combination of the intra-bounding distance field and the extra-bounding distance field.
The above aspect and any possible implementation further provides an implementation, where generating a distance field from the sample target bounding box includes:
respectively carrying out binarization processing on the inside and the outside of the sample target frame to obtain a binarization result;
and performing distance transformation on the binarization result to obtain a distance field.
The above-described aspects and any possible implementations further provide an implementation, when the sample target border is a three-dimensional border,
generating a distance field according to the sample target bounding box, comprising: respectively carrying out binarization processing on the inside and the outside of the sample target frame to obtain binarization results; and carrying out three-dimensional distance transformation on the binarization result to obtain a three-dimensional distance field;
the learning of the sample medical image and the distance field through an artificial intelligence network to obtain a mapping relationship of the sample medical image and the distance field includes: learning each layer of the sample medical image and the three-dimensional distance field through an artificial intelligence network to obtain a mapping relationship between the sample medical image and the three-dimensional distance field.
The above-described aspects and any possible implementations further provide an implementation in which the artificial intelligence network includes at least one of a convolutional neural network, a back-propagation neural network, a radial basis neural network, a perceptron neural network, a linear neural network, a self-organizing neural network, a feedback neural network, a clustering network, a deep learning network, a feed-forward neural network.
In a second aspect, embodiments of the present invention provide a medical imaging system, which includes a processor and a memory; the memory is configured to store instructions that, when executed by the processor, cause the medical imaging system to implement the method of any of the above aspects or any possible implementation.
In a third aspect, an embodiment of the present invention provides a target detection method, where the method includes:
acquiring a medical image of a target region of a subject;
processing the medical image through an artificial intelligence network model to obtain a distance field corresponding to a target frame, wherein the artificial intelligence network model comprises a mapping relation between the medical image and the distance field corresponding to the target frame;
the target bounding box is determined in the medical image based on the distance field.
The aspect described above and any possible implementation further provide an implementation in which the distance field comprises a distance field within a bounding box; alternatively, the distance field includes an intra-bounding distance field and an extra-bounding distance field.
The above aspect and any possible implementation further provides an implementation, wherein determining the target bounding box in the medical image according to the distance field includes:
and performing inverse distance transform on the distance field to obtain the target frame.
The above-described aspects and any possible implementations further provide an implementation in which at least one of a bone discontinuity region, a lung nodule region, and a tumor region is included within the target frame.
In a fourth aspect, an embodiment of the present invention provides a medical imaging system, which is characterized in that the medical imaging system includes a processor and a memory; the memory is configured to store instructions that, when executed by the processor, cause the medical imaging system to implement the method of any of the above aspects or any possible implementation.
Compared with the prior art in which the target frame is determined by a plurality of parameters, the method provided by the embodiments of the invention needs only one quantity, the distance field, to determine the target frame, reducing the number of parameters to be calculated to one, thereby reducing the complexity of target detection and improving the stability of the solution.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a flow chart of a method for model generation according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a medical image, a target frame, and a distance field according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the distance field inside the frame and the distance field outside the frame according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for model generation provided by embodiments of the present invention;
FIG. 5 is a flow chart of another method for model generation provided by embodiments of the present invention;
FIG. 6 is a flowchart of a method for target detection according to an embodiment of the present invention;
FIG. 7 is a flow chart of another method for target detection provided by embodiments of the present invention;
FIG. 8 is a block diagram of a model generation apparatus according to an embodiment of the present invention;
FIG. 9 is a block diagram of an object detection apparatus according to an embodiment of the present invention;
FIG. 10 is a block diagram of a medical imaging system according to an embodiment of the present invention;
FIG. 11 is a block diagram of another medical imaging system provided by an embodiment of the present invention;
fig. 12 is a schematic diagram of a specific medical imaging system provided by an embodiment of the invention.
[ detailed description ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that although the terms first and second may be used to describe the processing modules in the embodiments of the present invention, the processing modules should not be limited to these terms. These terms are only used to distinguish one processing module from another. For example, a first processing module may also be referred to as a second processing module, and similarly, a second processing module may also be referred to as a first processing module without departing from the scope of embodiments of the present invention.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The embodiment of the invention provides a model generation method, which is suitable for a network model generation process in target detection, and as shown in fig. 1, the method comprises the following steps:
101. and acquiring a sample medical image and a sample target frame corresponding to the sample medical image.
The sample medical image refers to a medical image used as a sample for the artificial intelligence network; it contains the target region and is known in advance. The sample medical image may be a two-dimensional or a three-dimensional medical image. It may be an image generated by any medical device such as MR (Magnetic Resonance), PET (Positron Emission Tomography), SPECT (Single-Photon Emission Computed Tomography), CT (Computed Tomography), DR (Digital Radiography), or Ultrasound, or a fused image from any combination of the foregoing devices. The sample target frame refers to a frame that marks the target region in the sample medical image; it can be determined from the sample medical image and is known in advance. Accordingly, the sample target frame may be a two-dimensional or a three-dimensional frame.
In an alternative implementation, the sample medical image may be subjected to binarization processing before determining the sample target frame, for example: after binarization processing, the gray value of a point inside the sample target frame is 0, and the gray value of a point outside the sample target frame is 1; or after binarization processing, the gray value of a point inside the sample target frame is 1, and the gray value of a point outside the sample target frame is 0.
As shown in fig. 2, (1) is a sample medical image in which the target area is marked by a small rectangular frame; the target area contains a bone fracture, which is located in the middle of the target frame. (2) is the binarized image of the target area, in which pixel points inside the sample target frame have gray value 1 and pixel points outside it have gray value 0; the rectangular area identified by the binarization is the area bounded by the sample target frame. A minimal sketch of this binarization follows.
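The following Python sketch shows one way such a binary mask could be built (our own code under our own assumptions; the (row0, col0, row1, col1) box convention and all values are hypothetical):

```python
import numpy as np

def box_to_mask(shape, box):
    """Return a binary image: 1 inside the sample target frame, 0 outside."""
    mask = np.zeros(shape, dtype=np.uint8)
    r0, c0, r1, c1 = box
    mask[r0:r1, c0:c1] = 1
    return mask

# A 256x256 image with a 40x80-pixel target frame.
mask = box_to_mask((256, 256), (100, 80, 140, 160))
```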
102. A distance field is generated based on the sample target bounding box.
Specifically, the sample target frame is represented by a binarized image, and the distance field is represented by a grayscale image. The distance field refers to the set of minimum distances from each pixel point of the selected region (i.e., each selected point) to the boundary pixel points outside the target region, so the distance field can be generated from the sample target frame by a distance transformation. Optionally, the distance transformation can be one or a combination of the Euclidean distance transform, the Chamfer distance transform, the Minkowski distance transform, and the city-block distance transform.
In a first possible implementation, the distance field is a distance field within the bounding box. Specifically, the distance field in the frame refers to a distance field obtained by performing distance conversion with each pixel point in the target frame as a selected point and each pixel point outside the target frame as a background point.
In this embodiment, assume that the image inside the target frame is A, containing pixel points p, and that the image outside the target frame is Ā, containing pixel points q. The minimum distance from each point inside the target frame to a boundary point is the DT value of that point:
DT1(p) = min{ d(p, q) } (Equation 1)
where min denotes the minimum-value operation, d denotes the distance between two pixel points, p is the selected point, and q ranges over the boundary points.
Fig. 2(3) shows the distance field image corresponding to the sample target frame of fig. 2(2), obtained with the above formula. Because the bone fracture sits at the center of the target frame, the point theoretically closest to the center of the target has the largest DT1 value: the brighter a pixel of the distance field image, the larger its DT1 value, and that region corresponds to the center of the target frame, i.e., the bone fracture position; the darker a pixel, the smaller its DT1 value and the farther that region lies from the fracture position. The completely black region corresponds to the outside of the target frame. Determining the frame through this distance field localizes the central region of the target frame particularly well. A minimal sketch with an off-the-shelf transform follows.
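Equation 1 can be computed with SciPy's Euclidean distance transform, which returns, for every nonzero pixel, the distance to the nearest zero pixel; this is exactly DT1 when the mask is 1 inside the frame (a sketch under the mask convention used above; the values are hypothetical):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:140, 80:160] = 1  # sample target frame: 1 inside, 0 outside

# DT1: zero outside the frame, largest at the frame center.
dt_inside = distance_transform_edt(mask)
print(float(dt_inside.max()))  # the brightest point, near the frame center
```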
In a second possible implementation, the distance field includes an intra-bounding distance field and an extra-bounding distance field. Illustratively, the distance field outside the frame refers to a distance field obtained by performing distance transformation with each pixel outside the target frame as a selected point and each pixel inside the target frame as a background point.
In this embodiment, again assume that the image inside the target frame is A, containing pixel points p, and that the image outside the target frame is Ā, containing pixel points q. The minimum distance from each point outside the target frame to a boundary point is the DT2 value of that point:
DT2(q) = min{ d(q, p) } (Equation 2)
where min denotes the minimum-value operation, d denotes the distance between two pixel points, q is the selected point, and p ranges over the boundary points.
As shown in fig. 3, the distance field inside the frame and the distance field outside the frame are obtained by distance transformation of the target frame: (1) shows the target frame, on which the two distance fields are defined differently; (2) is the gray-value image of the distance field inside the frame derived from (1); and (3) is the gray-value image of the distance field outside the frame derived from (1).
In a third possible implementation, the distance field includes a weighted combination of the distance field inside the frame and the distance field outside the frame. In this embodiment, the distance field can be expressed as:
DT3(p, q) = αDT1(p, q) + βDT2(q, p) (Equation 3)
where DT3 denotes the DT value of pixel point p in the weighted distance field, DT1 denotes the DT value of pixel point p in the distance field inside the frame, and DT2 denotes the DT value of pixel point q in the distance field outside the frame; α is the weight of DT1 and β is the weight of DT2, with 0 < α < 1, 0 < β < 1, and α + β = 1. Setting different weights or obtaining different distance fields enriches the training samples on the one hand, and improves the accuracy of network training on the other. A sketch of this weighting follows.
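A minimal sketch of Equation 3 (our own code; the mask convention follows the earlier examples, and the weights alpha = 0.7, beta = 0.3 are hypothetical values satisfying alpha + beta = 1):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_distance_field(mask, alpha=0.7, beta=0.3):
    """Weighted blend of the distance fields inside and outside the frame."""
    dt1 = distance_transform_edt(mask)      # inside the frame (Equation 1)
    dt2 = distance_transform_edt(1 - mask)  # outside the frame (Equation 2)
    return alpha * dt1 + beta * dt2

mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:140, 80:160] = 1
dt3 = weighted_distance_field(mask)
```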
103. And learning the sample medical image and the distance field through an artificial intelligence network to obtain a mapping relation between the sample medical image and the distance field.
The Artificial Intelligence (AI) network may be a Convolutional Neural Network (CNN), a Back Propagation (BP) neural network, a radial basis neural network, a perceptron neural network, a linear neural network, a self-organizing neural network, a feedback neural network, a clustering network, a deep learning network, or a feedforward neural network. Specifically, the CNN may be RCNN (Regions with CNN features), Fast R-CNN, Faster R-CNN, YOLOv2, YOLO9000, or SSD (Single Shot MultiBox Detector).
In one possible implementation, step 103 learns the sample medical images and distance fields through the regression network of the CNN by supervised learning to obtain the mapping relation between them; according to this mapping relation, all parameters of the CNN can be set automatically, i.e., the artificial intelligence network model can be generated.
104. And generating an artificial intelligence network model according to the mapping relation.
In the above model generation method, machine learning is performed on multiple pairs of sample medical images and distance fields through the AI network, and the network parameters are continuously adjusted to obtain the mapping relation between sample medical image and distance field; the AI network model is then generated according to the mapping relation, which completes the network model training process. A minimal training sketch follows.
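The following PyTorch sketch illustrates such a regression (our own minimal architecture, not the patent's network; all layer sizes and the synthetic tensors are hypothetical). The key point is that the output is a single channel, the distance field, rather than four or six box-parameter channels:

```python
import torch
import torch.nn as nn

class DistanceFieldNet(nn.Module):
    """Fully convolutional net mapping an image to a distance field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # one channel: the distance field
        )

    def forward(self, x):
        return self.net(x)

model = DistanceFieldNet()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image = torch.randn(4, 1, 256, 256)     # stand-in sample medical images
target_dt = torch.rand(4, 1, 256, 256)  # their precomputed distance fields

optimizer.zero_grad()
loss = loss_fn(model(image), target_dt)  # supervised regression to the field
loss.backward()
optimizer.step()
```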
It should be noted that when the distance field includes a distance field inside the frame and a distance field outside the frame, the AI network may learn the mapping relation between the sample medical image and the two distance fields; when the resulting AI network model is used, the two distance fields can then be generated from the medical image and the target frame determined from both, which is more accurate.
Further, in conjunction with the foregoing method flow, for a specific implementation of the step 102 of generating the distance field from the bounding box of the sample target, another possible implementation of the embodiment of the invention also provides the following method flow, as shown in fig. 4, where the step 102 includes:
1021. and respectively carrying out binarization processing on the inside and the outside of the sample target frame, and obtaining a binarization result.
1022. And performing distance transformation on the binarization result to obtain a distance field.
Specifically, the sample target frame may be a two-dimensional frame or a three-dimensional frame, and when the sample target frame is a two-dimensional frame, two-dimensional distance transformation may be performed to obtain a two-dimensional distance field; when the sample target frame is a three-dimensional frame, three-dimensional distance transformation can be performed to obtain a three-dimensional distance field.
The distance transform in step 1022 may be a Euclidean distance transform, a chessboard distance transform, or a city-block distance transform.
Further, with reference to the foregoing method flow, when the sample target frame is a three-dimensional frame, the technical solution provided by the embodiment of the present invention may obtain a three-dimensional distance field based on the three-dimensional frame, and then simplify the learning process of the neural network by a layered learning method. Therefore, another possible implementation manner of the embodiment of the present invention further provides the following method flows for the implementation of step 102 and step 103, as shown in fig. 5,
step 102 comprises:
1023. and respectively carrying out binarization processing on the inside and the outside of the sample target frame to obtain a binarization result, and carrying out three-dimensional distance transformation on the binarization result to obtain a three-dimensional distance field.
Step 103 comprises:
1031. and learning each layer of the sample medical image and the three-dimensional distance field through an artificial intelligence network to obtain a mapping relation between the sample medical image and the three-dimensional distance field.
In step 1031, the three-dimensional space is layered to obtain at least one layer of the three-dimensional distance field, and each layer is input into the artificial intelligence network to learn the mapping relation between the three-dimensional distance field and the sample medical image.
Through this layered training of the three-dimensional distance field, the embodiment solves the three-dimensional problem with two-dimensional computation and saves graphics card memory. A sketch of the layered scheme follows.
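The following sketch shows one way the layering could work (our own reading under our own assumptions; the volume shape, the stand-in arrays, and the train_step placeholder are hypothetical): compute the true 3D distance field once, then feed it to a 2D network slice by slice, so 3D supervision costs only 2D GPU memory:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

volume_mask = np.zeros((64, 256, 256), dtype=np.uint8)
volume_mask[20:40, 100:140, 80:160] = 1      # three-dimensional sample frame

dt_3d = distance_transform_edt(volume_mask)  # genuine 3D distance transform

for z in range(dt_3d.shape[0]):
    image_slice = volume_mask[z]  # stand-in for the corresponding image layer
    dt_slice = dt_3d[z]           # the matching layer of the 3D distance field
    # train_step(image_slice, dt_slice)  # one 2D training step per layer
```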
An embodiment of the present invention provides a target detection method, which is applicable to a process of determining a target region in a medical image, and as shown in fig. 6, the method includes:
201. a medical image of a target region of a subject is acquired.
The medical image of the target region of the subject refers to a to-be-processed medical image containing the target region of the subject, and it may be a two-dimensional or a three-dimensional medical image. For further explanation of medical images, see step 101; it is not repeated here.
202. And processing the medical image through an artificial intelligence network model to obtain a distance field corresponding to a target frame.
The artificial intelligence network model is generated by the method implemented by the model generation embodiment or any possible implementation manner, and includes a mapping relationship between the medical image and the distance field corresponding to the target frame.
Wherein, the target frame refers to a frame which can mark the target area of the subject in the medical image to be processed. Within the target border may be a region of bone discontinuity, a region of a lung nodule or a region of a tumor, or the like.
The distance field corresponding to the target frame may be a distance field inside the frame; alternatively, it may include a distance field inside the frame and a distance field outside the frame. The distance field is explained in detail in step 102 and is not described further here.
203. The target bounding box is determined in the medical image based on the distance field.
In particular, the target bounding box can be generated from the distance field by a method of inverse distance transformation.
This target detection method embodiment describes the use of the artificial intelligence network model: in target detection, the medical image is processed through the artificial intelligence network model to obtain the distance field corresponding to the target frame, and the target frame is then determined through the distance field.
Further, in combination with the foregoing method flow, another possible implementation manner of the embodiment of the present invention provides the following method flow for a specific implementation process of generating a target bounding box of a distance field in step 203, as shown in fig. 7, where step 203 includes:
2031. and performing inverse distance transform on the distance field to obtain the target frame.
The inverse distance transform in step 2031 may be an inverse transform based on the Euclidean distance transform, the chessboard distance transform, or the city-block distance transform. One possible realization is sketched below.
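A plausible sketch of such an inverse transform (our own reading, not a library call; the threshold value and box convention are hypothetical): threshold the predicted field to recover the frame interior, then take the extent of the nonzero region as the target frame:

```python
import numpy as np

def field_to_box(dt_pred, threshold=0.5):
    """Recover a (row0, col0, row1, col1) target frame from a distance field."""
    inside = dt_pred > threshold  # pixels the network places inside the frame
    rows = np.any(inside, axis=1)
    cols = np.any(inside, axis=0)
    if not rows.any():
        return None               # no target detected
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return r0, c0, r1 + 1, c1 + 1

box = field_to_box(np.random.rand(256, 256))
```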
An embodiment of the present invention provides a model generation apparatus, which is suitable for a model generation related method process, and as shown in fig. 8, the apparatus includes:
the acquiring unit 31 is configured to acquire a sample medical image and a sample target frame corresponding to the sample medical image.
A first generating unit 32, configured to generate a distance field according to the sample target bounding box.
A learning unit 33, configured to learn the sample medical image and the distance field through an artificial intelligence network, so as to obtain a mapping relationship between the sample medical image and the distance field.
And the second generating unit 34 is configured to generate an artificial intelligence network model according to the mapping relationship.
Optionally, as shown in fig. 8, the first generating unit 32 includes:
the first processing module 321 is configured to perform binarization processing on the inside and outside of the sample target frame, respectively, and obtain a binarization result.
And a second processing module 322, configured to perform distance transformation on the binarization result to obtain a distance field.
Optionally, as shown in fig. 8, when the sample target frame is a three-dimensional frame, the first generating unit 32 is specifically configured to perform binarization processing on the inside and the outside of the sample target frame respectively to obtain a binarization result; and performing three-dimensional distance transformation on the binarization result to obtain a three-dimensional distance field.
The learning unit 33 is specifically configured to learn each layer of the sample medical image and the three-dimensional distance field through an artificial intelligence network to obtain a mapping relationship between the sample medical image and the three-dimensional distance field.
The model generation device performs machine learning on a plurality of pairs of sample medical images and distance fields through the AI network, continuously adjusts network parameters to obtain the mapping relation between the sample medical images and the distance fields, further generates an AI network model according to the mapping relation, and completes the whole network model training process.
An embodiment of the present invention provides a target detection apparatus, which is suitable for a target detection related method flow, and as shown in fig. 9, the apparatus includes:
an acquisition unit 41 for acquiring a medical image of a target region of a subject.
And a processing unit 42, configured to process the medical image through an artificial intelligence network model to obtain a distance field corresponding to a target frame, where the artificial intelligence network model includes a mapping relationship between the medical image and the distance field corresponding to the target frame.
A determination unit 43 for determining the target bounding box in the medical image based on the distance field.
Optionally, as shown in fig. 9, the determining unit 43 includes:
a processing module 431 configured to perform inverse distance transform on the distance field to obtain the target bounding box.
Optionally, at least one of a bone discontinuity region, a lung nodule region, and a tumor region is included in the target border.
Compared with the prior art in which the target frame is determined by a plurality of parameters, the apparatus provided by the embodiment of the invention needs only one quantity, the distance field, to determine the target frame, reducing the number of parameters to be calculated to one, thereby reducing the complexity of target detection and improving the stability of the solution.
An embodiment of the present invention provides a medical imaging system, as shown in fig. 10, the medical imaging system includes a processor 51 and a memory 52; the memory 52 is for storing instructions that, when executed by the processor 51, cause the medical imaging system to implement a method flow related to model generation.
An embodiment of the present invention provides a medical imaging system, as shown in fig. 11, which includes a processor 61 and a memory 62; the memory 62 is configured to store instructions that, when executed by the processor 61, cause the medical imaging system to implement a method flow related to object detection.
It should be noted that, in the practical application process, the medical imaging system for implementing the model generation related method flow and the medical imaging system for implementing the target detection related method flow may be an integrated same medical imaging system. In a specific embodiment, as shown in FIG. 12, the medical imaging system provided by the embodiment of the invention may be a computer 71, and the computer 71 is used for implementing the specific method and apparatus disclosed in the embodiment of the invention.
Alternatively, the computer 71 may be a general purpose computer, or a computer having a specific purpose.
The computer 71 may implement the embodiments of the present invention by its hardware devices, software programs, firmware, and combinations thereof.
As shown in FIG. 12, computer 71 may include an internal communication bus 711, a processor 712 (which may consist of one or more processors), a Read Only Memory (ROM) 713, a Random Access Memory (RAM) 714, a communication port 715, input/output components 716, a hard disk 717, and a user interface 718. The internal communication bus 711 enables data communication among the components of the computer 71, and the processor 712 makes decisions and issues prompts. The communication port 715 enables the computer 71 to exchange data with other components (not shown), such as external devices, image capture devices, databases, external storage, and image processing workstations. Input/output components 716 support the flow of input/output data between computer 71 and other components. User interface 718 enables interaction and information exchange between computer 71 and a user.
Optionally, the computer 71 can also send and receive data information from the cloud via the communication port 715.
It should be noted that the computer 71 may include various forms of program storage units and data storage units, such as the hard disk 717, the Read Only Memory (ROM) 713, and the Random Access Memory (RAM) 714, capable of storing various data files used in computer processing and/or communications, as well as program instructions executed by the processor 712.
In an embodiment of the present invention, the instructions of processor 712 are used to perform a related method flow for model generation or a related method flow for object detection.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method of model generation, the method comprising:
acquiring a sample medical image and a sample target frame corresponding to the sample medical image;
generating a distance field according to the sample target bounding box; the central position of the sample target frame is a bone fracture position;
learning the sample medical image and the distance field through an artificial intelligence network to obtain a mapping relation between the sample medical image and the distance field;
generating an artificial intelligent network model according to the mapping relation;
the distance field comprises a weighted combination of an intra-bounding distance field and an extra-bounding distance field;
the distance field in the frame is obtained by performing distance transformation on each pixel point in the frame of the sample target as a selected point and each pixel point outside the frame of the sample target as a background point;
the frame outer distance field is obtained by performing distance transformation on each pixel point outside the frame of the sample target as a selected point and each pixel point inside the frame of the sample target as a background point;
when the sample target bounding box is a three-dimensional bounding box, the distance field is a three-dimensional distance field,
the learning of the sample medical image and the distance field through an artificial intelligence network to obtain a mapping relationship of the sample medical image and the distance field includes: layering the three-dimensional space to obtain at least one layer of three-dimensional distance field, and learning each layer of the sample medical image and the three-dimensional distance field through an artificial intelligence network model to obtain a mapping relation between the sample medical image and the three-dimensional distance field.
2. The method of claim 1, wherein generating the distance field from the sample target bounding box comprises:
respectively carrying out binarization processing on the inside and the outside of the sample target frame, and obtaining a binarization result;
and performing distance transformation on the binarization result to obtain a distance field.
3. The method of claim 1, wherein generating the distance field from the sample target bounding box comprises: respectively carrying out binarization processing on the inside and the outside of the sample target frame to obtain binarization results; and performing three-dimensional distance transformation on the binarization result to obtain a three-dimensional distance field.
4. A method of object detection, the method comprising:
acquiring a medical image of a target region of a subject;
processing the medical image through an artificial intelligence network model to obtain a distance field corresponding to a target frame, wherein the artificial intelligence network model comprises a mapping relation between the medical image and the distance field corresponding to the target frame; when the target frame is a three-dimensional frame, the distance field is a three-dimensional distance field, and the mapping relation is obtained by layering a three-dimensional space to obtain at least one layer of the three-dimensional distance field and learning each layer of the medical image and the three-dimensional distance field through an artificial intelligence network model;
determining the target bounding box in the medical image according to the distance field; the central position of the target frame is a bone fracture position;
the distance field comprises a weighted combination of an intra-bounding distance field and an extra-bounding distance field;
the distance field in the frame is obtained by performing distance transformation on each pixel point in the target frame as a selected point and each pixel point outside the target frame as a background point;
and the distance field outside the frame is obtained by performing distance transformation on each pixel point outside the target frame as a selected point and each pixel point in the target frame as a background point.
5. The method of claim 4, wherein determining the target bounding box in the medical image based on the distance field comprises:
and performing inverse distance transform on the distance field to obtain the target frame.
6. The method of claim 5, wherein at least one of a lung nodule region and a tumor region is further included within the target border.
7. A medical imaging system, comprising a processor and a memory; the memory for storing instructions that, when executed by the processor, cause the medical imaging system to implement the method of any of claims 1 to 3.
8. A medical imaging system, comprising a processor and a memory; the memory for storing instructions that, when executed by the processor, cause the medical imaging system to implement the method of any of claims 4 to 6.
CN201810395323.3A 2018-04-27 2018-04-27 Model generation method, target detection method and medical imaging system Active CN108597589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810395323.3A 2018-04-27 2018-04-27 Model generation method, target detection method and medical imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810395323.3A 2018-04-27 2018-04-27 Model generation method, target detection method and medical imaging system

Publications (2)

Publication Number Publication Date
CN108597589A (en) 2018-09-28
CN108597589B (en) 2022-07-05

Family

ID=63610925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810395323.3A Model generation method, target detection method and medical imaging system 2018-04-27 2018-04-27 (Active)

Country Status (1)

Country Link
CN (1) CN108597589B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993066B (en) * 2019-03-06 2021-05-14 开易(北京)科技有限公司 Sideline-oriented vehicle positioning method and system
CN113221929A (en) * 2020-02-05 2021-08-06 华为技术有限公司 Image processing method and related equipment
CN114677502B (en) * 2022-05-30 2022-08-12 松立控股集团股份有限公司 License plate detection method with any inclination angle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016086744A1 (en) * 2014-12-02 2016-06-09 Shanghai United Imaging Healthcare Co., Ltd. A method and system for image processing
CN106485704A (en) * 2016-09-30 2017-03-08 上海联影医疗科技有限公司 The extracting method of vessel centerline
CN107909622A (en) * 2017-11-30 2018-04-13 上海联影医疗科技有限公司 Model generating method, the scanning planing method of medical imaging and medical image system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070116357A1 (en) * 2005-11-23 2007-05-24 Agfa-Gevaert Method for point-of-interest attraction in digital images

Also Published As

Publication number Publication date
CN108597589A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
JP6514325B2 (en) System and method for segmenting medical images based on anatomical landmark-based features
CN109685060B (en) Image processing method and device
CN107578416B (en) Full-automatic heart left ventricle segmentation method for coarse-to-fine cascade deep network
CN110176012B (en) Object segmentation method in image, pooling method, device and storage medium
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
US20230104173A1 (en) Method and system for determining blood vessel information in an image
JP2021035502A (en) System and methods for image segmentation using convolutional neural network
CN111369525B (en) Image analysis method, apparatus and storage medium
CN106462963B (en) System and method for being sketched outline automatically in adaptive radiation therapy
CN108615237A (en) A kind of method for processing lung images and image processing equipment
Tang et al. A multi-stage framework with context information fusion structure for skin lesion segmentation
CN111784700A (en) Lung lobe segmentation, model training, model construction and segmentation method, system and equipment
WO2012109630A2 (en) Image registration
CN108597589B (en) Model generation method, target detection method and medical imaging system
CN111932552B (en) Aorta modeling method and device
US20230326173A1 (en) Image processing method and apparatus, and computer-readable storage medium
Shu et al. LVC-Net: Medical image segmentation with noisy label based on local visual cues
Lei et al. Echocardiographic image multi‐structure segmentation using Cardiac‐SegNet
CN111798424B (en) Medical image-based nodule detection method and device and electronic equipment
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
RU2721078C2 (en) Segmentation of anatomical structure based on model
Feng et al. Supervoxel based weakly-supervised multi-level 3D CNNs for lung nodule detection and segmentation
CN111402278A (en) Segmentation model training method, image labeling method and related device
CN112907569A (en) Head image area segmentation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Applicant after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

GR01 Patent grant