CN113658101A - Method and device for detecting landmark points in image, terminal equipment and storage medium - Google Patents
Method and device for detecting landmark points in image, terminal equipment and storage medium
- Publication number
- CN113658101A (application CN202110812323.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- processed
- landmark
- landmark point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0012 — Biomedical image inspection
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T7/11 — Region-based segmentation
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30016 — Brain
- G06T2207/30204 — Marker
- Y02T10/40 — Engine management systems
Abstract
The application is applicable to the technical field of image processing, and provides a method and a device for detecting landmark points in an image, a terminal device, and a storage medium. The detection method comprises the following steps: acquiring a training data set; inputting the training data set to a landmark point detection network to obtain predicted landmark points; acquiring first difference information between the target landmark points and the predicted landmark points, and second difference information between a first target area divided from the target area by the target landmark points and a second target area divided from the target area by the predicted landmark points; performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network; and processing the image to be processed based on the trained landmark point detection network to obtain target predicted landmark points in the image to be processed. The method improves the detection accuracy of landmark points in images.
Description
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a landmark in an image, a terminal device, and a storage medium.
Background
Current landmark point detection, based on deep learning and large-scale image training, supports the identification of thousands of objects and scenes, and is widely applied in areas such as photo recognition, preschool science education, image classification, and medical anatomy. In medical anatomical scenarios in particular, landmark points are anatomically significant key coordinate points, typically the intersection points of different tissues and organs, or the most morphologically distinctive identification points of the object under study. Because landmark points can be used for tissue structure identification in medicine, their detection is of great significance.
Among current detection methods, manual marking of landmark points is not only time- and labor-consuming but also prone to labeling errors. Existing image-recognition-based means, mainly neural network algorithms, classification algorithms, and support vector machine algorithms, can quickly detect landmark points in an image, but the recognition results still require confirmation by professionals to ensure accuracy, which is inefficient. Moreover, given the huge number of images to be analyzed, it is impractical for professionals to confirm the result of every image, so improving the detection accuracy of landmark points in images has become an important problem to be urgently solved.
Disclosure of Invention
The embodiment of the application provides a method and a device for detecting landmark points in an image, a terminal device and a storage medium, which can improve the detection accuracy of the landmark points in the image.
A first aspect of an embodiment of the present application provides a method for detecting a landmark point in an image, where the method includes:
acquiring a training data set comprising N images with a first label, wherein the first label is used for indicating a target landmark point in a target area in the images;
extracting the target landmark points from the images contained in the training data set according to the first label, and inputting the training data set to a landmark point detection network to obtain predicted landmark points;
acquiring first difference information between the target landmark point and the predicted landmark point and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network;
and processing the image to be processed based on the trained landmark point detection network to obtain target prediction landmark points in the image to be processed.
A second aspect of an embodiment of the present application provides a device for detecting a landmark in an image, the device including:
the data acquisition module is used for acquiring a training data set which comprises N images with first labels, wherein the first labels are used for indicating target landmark points in a target area in the images;
the prediction module is used for extracting the target landmark points from the images contained in the training data set according to the first labels, and inputting the training data set into a landmark point detection network to obtain predicted landmark points;
an information acquisition module for acquiring first difference information between the target landmark point and the predicted landmark point and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
the network training module is used for performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network;
and the landmark point determining module is used for processing the image to be processed based on the trained landmark point detection network to obtain the target prediction landmark point in the image to be processed.
A third aspect of an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method for detecting landmark points in an image according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for detecting a landmark in an image according to the first aspect is implemented.
A fifth aspect of embodiments of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the method for detecting landmark points in an image according to the first aspect.
In the embodiment of the present application, by inputting the acquired training data set including N images having the target landmark points in the target region into the landmark point detection network, the predicted landmark points output by the landmark point detection network can be obtained, the first difference information is acquired from the obtained target landmark points and the predicted landmark points, and the second difference information between the first target region divided from the target region by the target landmark points and the second target region divided from the target region by the predicted landmark points is acquired. The first difference information and the second difference information can be used for a back propagation process of the landmark detection network, and then the trained landmark point detection network is obtained. The landmark point detection network is trained according to the first difference information and the second difference information, so that a more accurate landmark point detection network can be obtained, and when the image to be processed is processed based on the trained landmark point detection network, the landmark point detection network can output more accurate target predicted landmark points in the image to be processed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a first flowchart of a method for detecting a landmark in an image according to an embodiment of the present disclosure;
FIG. 2 is a diagram of the training process and iterative relationships of a landmark point detection network;
fig. 3 is a second flowchart of a method for detecting a landmark in an image according to an embodiment of the present application;
FIG. 4 is a sagittal view of an MRI brain image;
FIG. 5 is a diagram of midbrain and pons regions in an MRI brain image;
fig. 6 is a structural diagram of a landmark detecting device in an image according to an embodiment of the present disclosure;
fig. 7 is a structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
It should be understood that, the sequence numbers of the steps in this embodiment do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not constitute any limitation to the implementation process of the embodiment of the present application.
In order to explain the technical solution of the present application, the following description is given by way of specific examples.
Referring to fig. 1, a first flowchart of a method for detecting a landmark in an image according to an embodiment of the present application is shown, and as shown in fig. 1, the method for detecting a landmark in an image includes the following steps:
Step 101, acquiring a training data set.

In an embodiment of the present application, the training data set may include N images with a first label, where the first label indicates a target landmark point in a target region in each image.
In a possible implementation manner, the target landmark point in the target region may be labeled manually: a label is set on each image in the training data set by constructing a matrix from the position information of all pixel points in the image, setting the element value of each pixel point belonging to the target landmark point to 1 (i.e., the first label) and the element value of every other pixel point to 0; the target landmark point in the target region is then identified according to the element values in the matrix.
In another possible implementation, the target landmark point of the target region is manually marked by binarizing the image: the gray value of the target landmark point is set to 255 and the gray values of all other pixel points to 0, so that every pixel point with a gray value of 255 is identified as a target landmark point according to the gray values in the image.
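The two labeling schemes above can be sketched with NumPy; the image size and the landmark coordinates below are illustrative, not taken from the patent:

```python
import numpy as np

# Illustrative image size and a manually chosen landmark pixel (row, col).
h, w = 8, 8
landmark = (3, 5)

# Scheme 1: a label matrix the same size as the image, with element value 1
# (the first label) at the target landmark point and 0 everywhere else.
label_matrix = np.zeros((h, w), dtype=np.uint8)
label_matrix[landmark] = 1

# Scheme 2: a binarized image with gray value 255 at the target landmark
# point and 0 elsewhere; landmark pixels are then recovered as all pixels
# whose gray value equals 255.
binary_image = np.zeros((h, w), dtype=np.uint8)
binary_image[landmark] = 255
recovered = [(int(r), int(c)) for r, c in np.argwhere(binary_image == 255)]
```

Both schemes carry the same information; the 0/1 matrix feeds the network directly, while the 0/255 image remains viewable as a picture.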
Illustratively, if the acquired training data set is used to train a landmark point detection network that detects the key landmark points in nuclear magnetic resonance (NMR) brain images used to distinguish the midbrain from the pons, the training data set comprises N brain NMR images with first labels, where the first labels indicate key landmark points in the brainstem region, and the key landmark points are manually labeled points that correctly separate the midbrain from the pons.
It should be understood that the images in the training data set obtained in the embodiment of the present application may be adaptively transformed according to the training network, and different training data sets may be obtained according to different networks.
Step 102, extracting the target landmark points from the images contained in the training data set according to the first label, and inputting the training data set to a landmark point detection network to obtain predicted landmark points.
In the embodiment of the present application, since the target landmark points in the images contained in the training data set carry the first label, they can be extracted according to that label. To extract a target landmark point from an image, first obtain the first label of the image: if the first label is represented by the element value 1 in the matrix, obtain the position information of each element whose value is 1, which is the position of the first label in the matrix. Then, using the correspondence between matrix elements and image pixel points, extract the pixel point corresponding to each element with value 1 (i.e., the first label); that pixel point is the target landmark point in the image.
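The extraction step, i.e. locating every matrix element whose value is 1 and mapping it back to a pixel, can be sketched as follows (the matrix size and label position are illustrative):

```python
import numpy as np

def extract_target_landmarks(label_matrix):
    """Return the (row, col) coordinates of every element whose value is 1,
    i.e. every pixel carrying the first label."""
    return [(int(r), int(c)) for r, c in np.argwhere(label_matrix == 1)]

# A 10x10 label matrix with a single first label at row 4, column 7.
label_matrix = np.zeros((10, 10), dtype=np.uint8)
label_matrix[4, 7] = 1
points = extract_target_landmarks(label_matrix)  # -> [(4, 7)]
```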
The landmark detection network is a model to be trained, and parameters of the landmark detection network can be initialized randomly. Alternatively, in a possible implementation manner, the landmark detection network may also be a pre-trained model, and the parameters of the landmark detection network are parameters obtained after pre-training.
Specifically, according to the initial parameters of the landmark point detection network, when the training data set is input to the landmark point detection network, the predicted landmark points in the image included in the training data set output by the landmark point detection network can be obtained.
Illustratively, if the landmark point detection network is used to detect the key landmark points in an NMR brain image that distinguish the midbrain from the pons, then: first, obtain the matrix elements of the NMR brain image whose value is 1 (i.e., the first label); second, obtain the position information of those elements and, using the correspondence between matrix elements and pixel points, extract the corresponding pixel points in the NMR brain image, which are the manually marked key landmark points in the training data set; finally, input the training data set into the landmark point detection network, which, according to its initial parameters, outputs the predicted key landmark points for each NMR brain image in the training data set.
As a feasible implementation, the landmark point detection network may use a High Resolution Network (HRNet), in which a high-resolution representation is maintained throughout, lower-resolution convolutional streams are gradually introduced, and streams of different resolutions are connected in parallel. By continuously exchanging information among the multi-resolution representations, HRNet improves the expressive capacity of both the high- and low-resolution representations, which promote each other, so that the landmark points output by the detection network can be more accurate.
In practical application, if the position information of a predicted landmark point were identical to that of the target landmark point, the detection of the landmark point detection network would be perfectly accurate. Usually, however, there is difference information between the predicted and target landmark points, and this difference information can be used for parameter iteration of the landmark point detection network, optimizing its parameters and improving its detection accuracy.
Step 103, acquiring first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target region divided from the target region by the target landmark point and a second target region divided from the target region by the predicted landmark point.

To further improve the detection accuracy of the landmark point detection network, difference information between the first target region and the second target region is obtained in addition to the first difference information between the target landmark point and the predicted landmark point. The first target region is obtained by dividing the target region according to the target landmark point, the second target region is obtained by dividing the target region according to the predicted landmark point, and the target region may be the target region in an image contained in the training data set.
In one implementation, the target region in an image contained in the training data set may be obtained by manually labeling the target region; alternatively, the training data set may be input into an image segmentation network, which segments each image to obtain the target region.
For example, acquiring the first difference information between the target landmark point and the predicted landmark point in an NMR brain image may mean acquiring the deviation distance between the manually marked key landmark point and the key landmark point output by the landmark point detection network, where the key landmark point distinguishes the midbrain region from the pons region. Acquiring the second difference information between the first target region and the second target region may proceed as follows: first, partition the brainstem region into a first midbrain region and a first pons region according to the manually marked key landmark point; second, partition the brainstem region into a second midbrain region and a second pons region according to the key landmark point output by the landmark point detection network; then obtain the midbrain region difference value between the first and second midbrain regions and the pons region difference value between the first and second pons regions. The second difference information includes the midbrain region difference value and the pons region difference value.
It should be understood that the midbrain region difference value and the pons region difference value in the embodiments of the present application are area differences between the corresponding regions.
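As a minimal illustration of an area difference between two candidate partitions of the same region (the mask shapes and sizes are made up for the example):

```python
import numpy as np

# Two binary masks for the midbrain region: one from the manually marked
# landmark, one from the predicted landmark (illustrative shapes).
first_midbrain = np.zeros((6, 6), dtype=bool)
first_midbrain[:3, :] = True         # 18 pixels
second_midbrain = np.zeros((6, 6), dtype=bool)
second_midbrain[:2, :] = True        # 12 pixels

# The region difference value as an area (pixel-count) difference.
midbrain_diff = abs(int(first_midbrain.sum()) - int(second_midbrain.sum()))
```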
Step 104, performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network.
In this embodiment of the application, a joint loss function may be constructed based on the first difference information and the second difference information, for example from the deviation distance representing the first difference information and an XOR function representing the second difference information. Model back propagation is then performed on the landmark point detection network using the loss value calculated by the joint loss function, optimizing and updating the model parameters of the landmark point detection network until the training model converges; after training is completed, the trained landmark point detection network is obtained.
Specifically, the joint loss function may be a weighted sum of the deviation distance of the first difference information and the XOR function of the second difference information. The deviation distance of the first difference information may be obtained by calculating the distance between the coordinates of the target landmark point and those of the predicted landmark point. The XOR function of the second difference information may be expressed as the symmetric difference of the two regions normalized by their union:

Xor_loss(x, y) = |x ⊕ y| / |x ∪ y|

where Xor_loss(x, y) is the XOR function of the second difference information, x and y are the first target region and the second target region, respectively, ⊕ denotes the symmetric difference of the two regions, and |·| denotes region area.
As can be seen from the above formula, the XOR function expresses the degree of difference between the first target region and the second target region: the function value is 1 when the two regions are completely disjoint and 0 when they are exactly equal, and the larger the difference between them, the larger the function value.
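The patent does not spell out a closed form at this point, but one formulation consistent with the stated properties (value 1 for completely disjoint regions, 0 for identical ones, growing with the difference) is the symmetric difference normalized by the union, sketched here with NumPy:

```python
import numpy as np

def xor_loss(x, y):
    """Symmetric difference of two binary region masks, normalized by their
    union: 1.0 when the regions are completely disjoint, 0.0 when equal."""
    x = x.astype(bool)
    y = y.astype(bool)
    union = np.logical_or(x, y).sum()
    if union == 0:
        return 0.0  # both masks empty: no difference
    return float(np.logical_xor(x, y).sum() / union)

# Two disjoint half-plane masks and one identical pair (illustrative).
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True
b = np.zeros((4, 4), dtype=bool); b[2:, :] = True
disjoint_value = xor_loss(a, b)  # completely disjoint -> 1.0
equal_value = xor_loss(a, a)     # identical -> 0.0
```

Unlike a plain pixel count, the normalization keeps the value in [0, 1] regardless of region size, which makes it easy to combine with a distance term.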
Illustratively, when landmark point detection is performed on an NMR brain image using the landmark point detection network, the first difference information is the deviation distance between the manually marked key landmark point and the key landmark point output by the network. The XOR function of the second difference information comprises a midbrain XOR function between the first and second midbrain regions and a pons XOR function between the first and second pons regions, and its value may be the average of the midbrain XOR function value and the pons XOR function value. Based on the first difference information and the value of the XOR function of the second difference information, the optimized model parameters of the landmark point detection network are then calculated by back propagation, thereby updating the model parameters.
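The weighted-sum construction can then be sketched as below; the weights, coordinates, and XOR values are illustrative assumptions, since the passage only states that a weighted sum of the two terms is taken:

```python
import math

def joint_loss(target_pt, pred_pt, xor_midbrain, xor_pons,
               w_dist=1.0, w_xor=1.0):
    """Joint loss: weighted sum of the deviation distance (Distance_loss)
    and the averaged midbrain/pons XOR values (Xor_loss)."""
    distance_loss = math.dist(target_pt, pred_pt)  # Euclidean deviation distance
    xor_term = (xor_midbrain + xor_pons) / 2.0     # average of the two XOR values
    return w_dist * distance_loss + w_xor * xor_term

# A 3-4-5 offset gives a deviation distance of 5.0; XOR values are made up.
loss = joint_loss((10.0, 20.0), (13.0, 24.0), xor_midbrain=0.2, xor_pons=0.4)
```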
Specifically, fig. 2 is a diagram of the training process and iterative relationships of the landmark point detection network, taking as an example the acquisition of key landmark points for dividing the midbrain and the pons. The input data is a nuclear magnetic resonance brain image; UNet is the image segmentation network; ground truth 1 is the manually marked brainstem region in the nuclear magnetic resonance brain image; and Dice_loss is the loss function of the image segmentation network, which may be expressed as the difference between the manually marked brainstem region and the brainstem region output by the image segmentation network. HRNet is the landmark point detection network; ground truth 2 is the manually marked key landmark points; ground truth 3 is the first midbrain region and the first pons region obtained by dividing according to the manually marked key landmark points; and Xor_loss is the XOR function of the second difference information, which specifically comprises the midbrain XOR function between the first midbrain region and the second midbrain region and the pons XOR function between the first pons region and the second pons region. Distance_loss is the deviation distance between the manually marked key landmark points and the key landmark points output by the landmark point detection network.
The overall training process of the landmark point detection network may be expressed as follows: first, a nuclear magnetic resonance brain image is input and segmented by the image segmentation network into a brainstem region and a non-brainstem region, yielding a nuclear magnetic resonance brain image with the brainstem region labeled; second, the labeled image is input into the landmark point detection network, which is iteratively trained (i.e., by back propagation) under the joint loss function combining Xor_loss and Distance_loss until training finishes and the optimal parameters of the landmark point detection network are obtained; finally, the trained landmark point detection network outputs the key landmark points of the nuclear magnetic resonance brain image, and the brainstem region is divided by these key landmark points to obtain the final midbrain and pons regions.
And 105, processing the image to be processed based on the trained landmark point detection network to obtain target predicted landmark points in the image to be processed.
The trained landmark point detection network may obtain a target predicted landmark point in the image to be processed according to the input image to be processed, where the image to be processed may be any type of image, such as a nuclear magnetic resonance image.
In this embodiment of the application, the image to be processed may be a nuclear magnetic resonance brain image, and the specific process of processing it may be as follows: first, a target region carrying a target label is obtained in the image to be processed; the image carrying the target label is input into the landmark point detection network, which obtains the target region by identifying the target label; finally, the landmark point detection network performs convolution calculations on the target region to extract image features, and the target predicted landmark points in the image to be processed are obtained from these image features.
In the embodiment of the present application, by inputting the acquired training data set, comprising N images having target landmark points in a target region, into the landmark point detection network, the predicted landmark points output by the network can be obtained. First difference information is acquired from the target landmark points and the predicted landmark points, and second difference information is acquired between the first target region divided from the target region by the target landmark points and the second target region divided from the target region by the predicted landmark points. The first and second difference information can then be used in the back propagation process of the landmark point detection network to obtain the trained network. Because the landmark point detection network is trained on both kinds of difference information, a more accurate network is obtained, and when the image to be processed is processed by the trained landmark point detection network, it can output more accurate target predicted landmark points in the image to be processed.
Referring to fig. 3, a second flowchart of a method for detecting a landmark in an image according to an embodiment of the present application is shown, and as shown in fig. 3, the method for detecting a landmark in an image includes the following steps:
The training data set includes N images with a first label indicating a target landmark point in a target region in the images.
And 304, performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network.
Steps 301 to 304 of this embodiment are similar to steps 101 to 104 of the previous embodiment; reference may be made to the foregoing description, and details are not repeated here.
And 305, processing the image to be processed based on the trained landmark point detection network to obtain target predicted landmark points in the image to be processed.
In the embodiment of the present application, most of the different images in the obtained training data set do not share the same size and dimensions. Images of different sizes and dimensions are not directly comparable, and features computed from them by a network cannot be statistically analyzed. The images in the training data set must therefore be registered: they are aligned to a corresponding three-dimensional template space so that all images in the training data set have the same dimensions and size, and the same anatomical structure in different images is located at the same position in the three-dimensional template space.
For example, when the images in the acquired training data set are all nuclear magnetic resonance brain images, all of them may be aligned to MNI (Montreal Neurological Institute) space, so that the same structures of different brain images lie at the same positions in MNI space. MNI space is a coordinate system established from a series of nuclear magnetic resonance images of normal human brains. After all images in the training data set are registered to MNI space, the central position of the brain in the sagittal direction can be selected in MNI space, and the midsagittal slice is used as the registered image of each nuclear magnetic resonance brain image in the training data set, as shown in fig. 4. Inputting the midsagittal slice of the brain into the landmark point detection network makes the data better suited for training the network.
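The slice selection described above can be sketched as follows, assuming the registered volume is stored with the sagittal direction along the first array axis; the actual axis convention depends on how the volume is loaded and is not specified by the text:

```python
import numpy as np

def midsagittal_slice(volume):
    """Take the central slice along the sagittal (here: first) axis of a
    registered 3-D volume, e.g. an MNI-aligned brain image."""
    mid = volume.shape[0] // 2
    return volume[mid]

# A dummy volume on the typical 1 mm MNI152 grid (182 x 218 x 182 voxels).
vol = np.random.rand(182, 218, 182)
img = midsagittal_slice(vol)
print(img.shape)  # (218, 182)
```

The resulting 2-D midsagittal image is what would be fed to the landmark point detection network.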
As an implementation manner, before processing the image to be processed based on the trained landmark point detection network, the method further includes:
aligning the image to be processed to a corresponding three-dimensional template space to obtain a registered image to be processed, wherein the three-dimensional template space corresponds to the standard space position of the structural point; the structure points in the registered images to be processed are positioned on the corresponding standard space positions;
correspondingly, the processing of the image to be processed comprises:
and processing the registered image to be processed.
By aligning the image to be processed to the corresponding three-dimensional template space, each structure point in the image is placed at the standard spatial position of the corresponding structure point in the template space, and the slice in the midsagittal direction is then selected from the template space as the registered image to be processed. Because the landmark point detection network was trained on registered images, processing the registered image to be processed better matches the trained network and yields a more accurate landmark point detection result.
Optionally, processing the image to be processed based on the trained landmark point detection network to obtain a target predicted landmark point in the image to be processed, including:
carrying out segmentation processing on the image to be processed through the trained image segmentation network to obtain a target area in the image to be processed;
marking a target area of an image to be processed by adopting a target label to obtain the image to be processed with the target label;
and inputting the image to be processed carrying the target label to the trained landmark point detection network to obtain a target prediction point in the image to be processed.
The image segmentation network may adopt a neural network structure such as UNet, which comprises a U-shaped structure and skip connections. The U-shaped structure arises from repeated downsampling followed by deconvolution (upsampling). After the image to be processed is input into the UNet network, every pixel can be predicted through the convolutional layers while the spatial information of the image is preserved. The skip connections ensure that the finally recovered feature map fuses more shallow features rather than relying only on high-level semantic features, so the segmentation result is refined at multiple scales. At the same time, inputting the image to be processed into the UNet network allows the position of the target region to be located directly by segmentation, and each pixel in the target region is marked with the target label, thereby obtaining the image to be processed carrying the target label.
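The final labeling step can be sketched as follows; encoding the target label as an extra image channel is one plausible representation, as the text does not specify how the per-pixel label is carried:

```python
import numpy as np

def attach_target_label(image, region_mask, label=1):
    """Stack a label channel onto the image: pixels inside the segmented
    target region carry `label`, all others carry 0."""
    label_channel = np.where(region_mask, label, 0).astype(image.dtype)
    return np.stack([image, label_channel], axis=0)  # shape (2, H, W)

img = np.random.rand(8, 8).astype(np.float32)
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True  # 4x4 target region
labeled = attach_target_label(img, mask)
print(labeled.shape)           # (2, 8, 8)
print(int(labeled[1].sum()))   # 16 labeled pixels
```

The downstream landmark point detection network can then recover the target region by reading the label channel.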
Optionally, before the image to be processed is segmented by the trained image segmentation network to obtain the target region in the image to be processed, the method further includes:
inputting a training data set to an image segmentation network to obtain a target segmentation area;
extracting a target area from the images contained in the training data set according to the second label;
acquiring difference information between a target area and a target segmentation area;
and carrying out model back propagation on the image segmentation network based on the difference information to obtain the trained image segmentation network.
In this embodiment of the application, when the training data set is used to train the landmark point detection network, it may also be input into the image segmentation network, which segments the images in the training data set to obtain the target regions in those images. That is, the training data set may also be used to train the image segmentation network, and the trained image segmentation network is then used to obtain the target regions in the images of the training data set. To this end, the training data set further includes a second label for indicating the target region, and the image segmentation network may be trained according to this second label.
Illustratively, the brainstem region of each nuclear magnetic resonance brain image in the training data set is manually marked. The training data set is input into the image segmentation network to obtain a target segmentation region, while the brainstem region is extracted from the nuclear magnetic resonance brain image according to the manual marking. The target segmentation region is compared with the brainstem region to obtain the difference between them, which may be represented by an XOR function. The value of the XOR function between the target segmentation region and the brainstem region is calculated from this difference, and the model parameters of the image segmentation network are updated by back propagation according to the function value until the model converges, completing training and yielding the trained image segmentation network.
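For the segmentation branch, fig. 2 names a Dice_loss; a common formulation is 1 minus the Dice coefficient between the predicted and reference masks, which is an assumption here since the text gives no expression:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient between a predicted and a reference binary
    region mask; 0.0 for identical masks, approaching 1.0 for disjoint
    ones. `eps` guards against division by zero on empty masks."""
    pred = pred.astype(float).ravel()
    target = target.astype(float).ravel()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4)); a[:2] = 1   # reference brainstem mask (8 pixels)
b = np.zeros((4, 4)); b[1:3] = 1  # predicted mask overlapping half of it
print(round(dice_loss(a, a), 4))  # 0.0 (identical)
print(round(dice_loss(a, b), 4))  # 0.5 (half overlap)
```

Like the XOR formulation, this loss is zero only when predicted and manually marked regions coincide.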
Optionally, inputting the image to be processed carrying the target label to the trained landmark point detection network to obtain the target prediction point in the image to be processed, including:
inputting the image to be processed carrying the target label to a trained landmark point detection network, identifying the target label through the landmark point detection network, and determining the area marked by the target label as the target area of the image to be processed;
and carrying out landmark point detection on the target area of the image to be processed to obtain a target prediction point in the image to be processed.
After the image to be processed carrying the target label is input to the trained landmark point detection network, the network extracts the region marked by the target label and then performs landmark point detection on that target region.
It should be understood that performing landmark point detection only on the target region narrows the search range for the target predicted landmark points and reduces the amount of data the network processes, which can increase the operation speed of the landmark point detection network.
If the image to be processed is a nuclear magnetic resonance brain image, processing the image to be processed based on the trained landmark point detection network to obtain target predicted landmark points in the image to be processed, including:
aligning the nuclear magnetic resonance brain image to a corresponding three-dimensional template space to obtain a registered nuclear magnetic resonance brain image;
carrying out image segmentation on the registered nuclear magnetic resonance brain image through the trained segmentation network to obtain a brain stem region of the registered nuclear magnetic resonance brain image;
marking the brain stem region by adopting a target label to obtain a nuclear magnetic resonance brain image carrying the target label;
inputting the nuclear magnetic resonance brain image carrying the target label into a trained landmark point detection network to obtain key landmark points in the nuclear magnetic resonance brain image, wherein the key landmark points are used for distinguishing a midbrain area and a pons area from the brainstem area.
In the embodiment of the present application, if key landmark points capable of distinguishing the midbrain region and the pons region are to be obtained from the nuclear magnetic resonance brain image, the target landmark points in the training data set used to train the landmark point detection network should be manually marked key landmark points for distinguishing the midbrain and pons regions. The key landmark points may be point 1, point 2, and point 3 shown in fig. 4: point 1 is the junction edge point of the midbrain and the pons, point 2 is the termination point of the pontine protrusion edge, and point 3 is the lowest point of the quadrigeminal plate. Line A is the line connecting point 1 and point 3, and line B is parallel to line A and passes through point 2.
And step 306, dividing the target area of the image to be processed according to the target prediction landmark points in the image to be processed to obtain the final target area partitioned by the target prediction landmark points.
In the embodiment of the present application, after the target prediction landmark point in the image to be processed is obtained, the target prediction landmark point may be connected to divide the target area of the image to be processed.
Illustratively, dividing the brainstem region in the figure according to the line a and the line B in fig. 4 may result in the midbrain region 51 and the pons region 52 shown in fig. 5.
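The division of the brainstem mask by line A (through points 1 and 3) and line B (parallel to A, through point 2) can be sketched with a signed side test per pixel. Which side corresponds to the midbrain and which band to the pons depends on image orientation, so the sign conventions below are assumptions:

```python
import numpy as np

def split_brainstem(mask, p1, p2, p3):
    """Split a brainstem mask into midbrain and pons regions using line A
    (through p1 and p3) and line B (parallel to A, through p2).
    Pixels on one side of line A form the midbrain region; pixels between
    lines A and B form the pons region. Points are (row, col) tuples."""
    d = np.asarray(p3, float) - np.asarray(p1, float)  # shared line direction
    rows, cols = np.nonzero(mask)
    pts = np.stack([rows, cols], axis=1).astype(float)

    def side(origin):
        # Signed cross product: which side of the line through `origin`
        # with direction d each masked pixel lies on.
        rel = pts - np.asarray(origin, float)
        return d[0] * rel[:, 1] - d[1] * rel[:, 0]

    s_a, s_b = side(p1), side(p2)
    midbrain = np.zeros_like(mask, dtype=bool)
    pons = np.zeros_like(mask, dtype=bool)
    midbrain[rows[s_a > 0], cols[s_a > 0]] = True  # one side of line A
    between = (s_a <= 0) & (s_b > 0)               # band between A and B
    pons[rows[between], cols[between]] = True
    return midbrain, pons

# Horizontal test lines on a solid 10x10 mask: A at row 4, B at row 7.
mask = np.ones((10, 10), dtype=bool)
mid, pons = split_brainstem(mask, p1=(4, 0), p2=(7, 0), p3=(4, 9))
print(int(mid.sum()), int(pons.sum()))  # 40 30
```

The two returned masks are disjoint by construction, matching the midbrain region 51 and pons region 52 of fig. 5.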
It should be understood that after the midbrain and pons regions are obtained, the Magnetic Resonance Parkinsonism Index (MRPI) can be obtained by calculating the area ratio of the midbrain region to the pons region, which is of great significance for identifying Parkinson's disease and Parkinsonian syndromes.
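Following the text's description of the index as an area ratio of the midbrain region to the pons region, the computation can be sketched as below; taking areas as pixel counts scaled by a per-pixel physical area is an assumption:

```python
import numpy as np

def area_ratio(midbrain_mask, pons_mask, pixel_area=1.0):
    """Ratio of midbrain area to pons area, with each area taken as the
    region's pixel count scaled by the per-pixel physical area."""
    midbrain_area = midbrain_mask.sum() * pixel_area
    pons_area = pons_mask.sum() * pixel_area
    return midbrain_area / pons_area

mid = np.zeros((10, 10), dtype=bool); mid[:4] = True    # 40-pixel midbrain
pons = np.zeros((10, 10), dtype=bool); pons[4:9] = True  # 50-pixel pons
print(area_ratio(mid, pons))  # 0.8
```

A physical `pixel_area` (e.g. mm² per pixel from the scan resolution) cancels in the ratio but is kept for clarity.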
An example of the technical effect of applying the method of the above embodiment is given below.
436 nuclear magnetic resonance brain images were obtained, and the relative error between the MRPI measured by each method and the MRPI measured by manual marking was used as the metric. The relative error of the method of the embodiment of the present application was compared with that of a landmark point detection network trained using only the target landmark points, with the comparison results shown in the following table:
from the above table, it can be seen that the method of the embodiment of the present application achieves better performance and effect than the existing landmark point detection method.
Compared with the first embodiment, this embodiment adds registration of the image to be processed, adopts an image segmentation network to automatically segment the target region of the registered image, and inputs the image carrying the target label into the trained landmark point detection network, which reduces redundant data during operation of the landmark point detection network; the registration step also improves the detection accuracy of the landmark point detection network.
Referring to fig. 6, a block diagram of a landmark detection device in an image according to an embodiment of the present application is shown, and for convenience of description, only the portions related to the embodiment of the present application are shown.
The device for detecting the landmark points in the image may specifically include the following modules:
a data obtaining module 601, configured to obtain a training data set including N images with a first label, where the first label is used to indicate a target landmark point in a target area in the image;
the prediction module 602 is configured to extract a target landmark point from an image included in the training data set according to the first label, and input the training data set to a landmark point detection network to obtain a predicted landmark point;
an information obtaining module 603 configured to obtain first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target region divided from the target region by the target landmark point and a second target region divided from the target region by the predicted landmark point;
a network training module 604, configured to perform model back propagation on the landmark detection network based on the first difference information and the second difference information, to obtain a trained landmark detection network;
and a landmark point determining module 605, configured to process the image to be processed based on the trained landmark point detection network, so as to obtain a target predicted landmark point in the image to be processed.
In this embodiment of the present application, the landmark detecting device may further include the following modules:
the registration module is used for aligning the image to be processed to the corresponding three-dimensional template space to obtain the registered image to be processed, wherein the three-dimensional template space corresponds to the standard space position of the structural point; the structure points in the registered images to be processed are positioned on the corresponding standard space positions;
in an embodiment of the present application, the registration module may be specifically configured to:
and processing the registered image to be processed.
In this embodiment, the landmark point determining module 605 may specifically include the following sub-modules:
the segmentation submodule is used for carrying out segmentation processing on the image to be processed through the trained image segmentation network to obtain a target area in the image to be processed;
the marking submodule is used for marking a target area of the image to be processed by adopting the target label to obtain the image to be processed with the target label;
and the target determining submodule is used for inputting the image to be processed carrying the target label to the trained landmark point detection network to obtain a target prediction point in the image to be processed.
In this embodiment, the segmentation sub-module may specifically include the following units:
the region segmentation unit is used for inputting the training data set into the image segmentation network to obtain a target segmentation region;
the target extraction unit is used for extracting a target area from the images contained in the training data set according to the second label;
a difference acquisition unit configured to acquire difference information between the target region and the target divided region;
and the training unit is used for carrying out model back propagation on the image segmentation network based on the difference information to obtain the trained image segmentation network.
In this embodiment of the present application, the target determination sub-module may specifically include the following units:
the identification unit is used for inputting the image to be processed carrying the target label to the trained landmark point detection network, identifying the target label through the landmark point detection network and determining the area marked by the target label as the target area of the image to be processed;
and the detection unit is used for carrying out landmark point detection on the target area of the image to be processed to obtain a target prediction point in the image to be processed.
In this embodiment of the application, when the image to be processed is a nuclear magnetic resonance brain image, the landmark determining module 605 may be further configured to:
aligning the nuclear magnetic resonance brain image to a corresponding three-dimensional template space to obtain a registered nuclear magnetic resonance brain image;
segmenting the registered nuclear magnetic resonance brain image through the trained image segmentation network to obtain a brain stem region of the registered nuclear magnetic resonance brain image;
marking the brain stem region by adopting a target label to obtain a nuclear magnetic resonance brain image carrying the target label;
inputting the nuclear magnetic resonance brain image carrying the target label into a trained landmark point detection network to obtain key landmark points in the nuclear magnetic resonance brain image, wherein the key landmark points are used for distinguishing a midbrain area and a pons area from the brainstem area.
In this embodiment of the present application, the device for detecting a landmark point in an image may further include:
and the region dividing module is used for dividing the target region of the image to be processed according to the target prediction landmark points in the image to be processed to obtain the final target region partitioned by the target prediction landmark points.
The landmark point detection device in the image provided in the embodiment of the present application may be applied to the foregoing method embodiments, and for details, reference is made to the description of the foregoing method embodiments, and details are not repeated here.
Fig. 7 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 7, the terminal device 700 of this embodiment includes: at least one processor 710 (only one shown in fig. 7), a memory 720, and a computer program 721 stored in the memory 720 and operable on the at least one processor 710, wherein the processor 710, when executing the computer program 721, implements the steps in any of the embodiments of the method for detecting landmark points in an image.
The terminal device 700 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing device. The terminal device may include, but is not limited to, the processor 710 and the memory 720. Those skilled in the art will appreciate that fig. 7 is merely an example of the terminal device 700 and does not constitute a limitation of it; the terminal device may include more or fewer components than those shown, or combine some components, or include different components, such as an input-output device, a network access device, etc.
The processor 710 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 720 may in some embodiments be an internal storage unit of the terminal device 700, such as a hard disk or a memory of the terminal device 700. The memory 720 may also be an external storage device of the terminal device 700 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 700. Further, the memory 720 may also include both an internal storage unit and an external storage device of the terminal device 700. The memory 720 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The memory 720 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
When the computer program product runs on a terminal device, executing it causes the terminal device to carry out the steps of the method embodiments described above.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart in substance from the spirit and scope of the embodiments of the present application and are intended to fall within its scope of protection.
Claims (10)
1. A method for detecting a landmark point in an image, the method comprising:
acquiring a training data set comprising N images with a first label, wherein the first label is used for indicating a target landmark point in a target area in the images;
extracting the target landmark points from the images contained in the training data set according to the first label, and inputting the training data set to a landmark point detection network to obtain predicted landmark points;
acquiring first difference information between the target landmark point and the predicted landmark point and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network;
and processing the image to be processed based on the trained landmark point detection network to obtain target prediction landmark points in the image to be processed.
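The two "difference information" terms of claim 1 can be illustrated with a minimal NumPy sketch. This is not the patent's actual implementation: the function names, the choice of mean Euclidean distance, and the single-axis cut used to compare region divisions are all illustrative assumptions.

```python
import numpy as np

def landmark_loss(target_pts, pred_pts):
    # First difference information: mean Euclidean distance between
    # the labeled target landmark points and the predicted ones.
    return float(np.mean(np.linalg.norm(target_pts - pred_pts, axis=1)))

def region_division_loss(target_region, target_pts, pred_pts, axis=0):
    # Second difference information: fraction of region voxels whose
    # sub-region assignment changes when the region is split at the
    # predicted landmark instead of the target landmark (a single cut
    # along one axis, a deliberate simplification).
    t_cut = int(round(target_pts[0, axis]))
    p_cut = int(round(pred_pts[0, axis]))
    coords = np.argwhere(target_region)[:, axis]
    return float(np.mean((coords < t_cut) != (coords < p_cut)))

def total_loss(target_region, target_pts, pred_pts, w1=1.0, w2=1.0):
    # The detection network would be trained by back-propagating this
    # combined loss, per the claim's "model back propagation" step.
    return (w1 * landmark_loss(target_pts, pred_pts)
            + w2 * region_division_loss(target_region, target_pts, pred_pts))
```

Combining both terms penalizes a predicted landmark not only for being far from the target point but also for carving the target area into the wrong sub-regions.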
2. The detection method according to claim 1, wherein before processing the image to be processed based on the trained landmark point detection network, the method further comprises:
aligning the image to be processed to a corresponding three-dimensional template space to obtain the registered image to be processed, wherein the three-dimensional template space corresponds to standard spatial positions of structural points, and the structural points in the registered image to be processed are located at the corresponding standard spatial positions;
correspondingly, the processing the image to be processed includes:
and processing the registered image to be processed.
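The alignment step of claim 2 amounts to resampling the image into template space under some transform. A minimal pure-NumPy sketch of the resampling half is shown below; the function name is illustrative, and computing the transform itself (the actual registration) is assumed to be done by a prior step that is not shown.

```python
import numpy as np

def align_to_template(image, matrix, offset):
    # Pull-back resampling into template space: each template voxel
    # samples the moving image at matrix @ voxel + offset, using
    # nearest-neighbour interpolation. Voxels mapping outside the
    # moving image are left at zero.
    matrix = np.asarray(matrix)
    offset = np.asarray(offset)
    out = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        src = np.round(matrix @ np.asarray(idx) + offset).astype(int)
        if all(0 <= s < n for s, n in zip(src, image.shape)):
            out[idx] = image[tuple(src)]
    return out
```

In practice a registration toolkit would supply both the transform and an interpolating resampler; the loop above only makes the template-space sampling direction explicit.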
3. The detection method according to claim 1, wherein the processing the image to be processed based on the trained landmark point detection network to obtain the target predicted landmark points in the image to be processed comprises:
segmenting the image to be processed through the trained image segmentation network to obtain a target area in the image to be processed;
marking a target area of the image to be processed by adopting a target label to obtain the image to be processed carrying the target label;
and inputting the image to be processed carrying the target label to the trained landmark point detection network to obtain a target prediction point in the image to be processed.
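The segment-then-label-then-detect flow of claim 3 can be sketched as below. This is a hedged illustration only: `segment_fn` and `detect_fn` stand in for the trained networks, and representing the target label as an extra image channel is an assumption, not something the claim specifies.

```python
import numpy as np

def mark_target_region(image, region_mask, label_value=1.0):
    # Attach the target label as an extra channel so the landmark
    # detector can see which pixels belong to the target region.
    label_channel = np.where(region_mask, label_value, 0.0)
    return np.stack([image, label_channel], axis=0)

def detect_pipeline(image, segment_fn, detect_fn):
    # segment_fn stands in for the trained image segmentation network,
    # detect_fn for the trained landmark point detection network.
    region_mask = segment_fn(image)
    labeled = mark_target_region(image, region_mask)
    return detect_fn(labeled)
```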
4. The detection method as claimed in claim 3, wherein the training data set further includes a second label for indicating the target region, and before the segmentation processing is performed on the image to be processed through the trained image segmentation network to obtain the target region in the image to be processed, the method further includes:
inputting the training data set to the image segmentation network to obtain a target segmentation area;
extracting the target region from the images contained in the training data set according to the second label;
acquiring difference information between the target area and the target segmentation area;
and carrying out model back propagation on the image segmentation network based on the difference information to obtain the trained image segmentation network.
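Claim 4 leaves the form of the segmentation "difference information" open. One common choice, shown here purely as an assumed example, is one minus the Dice overlap between the labeled target region and the network's predicted segmentation.

```python
import numpy as np

def dice_difference(target_region, predicted_region, eps=1e-6):
    # An assumed instance of the 'difference information' between the
    # labeled target region and the segmentation output: 1 - Dice
    # coefficient, which is 0 for identical masks and approaches 1
    # for disjoint ones. eps guards against empty masks.
    t = np.asarray(target_region, dtype=bool)
    p = np.asarray(predicted_region, dtype=bool)
    intersection = np.logical_and(t, p).sum()
    return 1.0 - (2.0 * intersection + eps) / (t.sum() + p.sum() + eps)
```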
5. The detection method according to claim 3, wherein the inputting the to-be-processed image carrying the target label to the trained landmark point detection network to obtain the target prediction point in the to-be-processed image comprises:
inputting the image to be processed carrying the target label to the trained landmark point detection network, identifying the target label through the landmark point detection network, and determining the area marked by the target label as the target area of the image to be processed;
and carrying out landmark point detection on the target area of the image to be processed to obtain a target prediction point in the image to be processed.
6. The detection method according to claim 1, wherein the image to be processed is a nuclear magnetic resonance brain image, and the processing the image to be processed based on the trained landmark point detection network to obtain the target predicted landmark point in the image to be processed comprises:
aligning the nuclear magnetic resonance brain image to a corresponding three-dimensional template space to obtain the registered nuclear magnetic resonance brain image;
segmenting the registered nuclear magnetic resonance brain image through the trained image segmentation network to obtain a brain stem region of the registered nuclear magnetic resonance brain image;
marking the brain stem region by adopting a target label to obtain the nuclear magnetic resonance brain image carrying the target label;
inputting the nuclear magnetic resonance brain image carrying the target label into the trained landmark point detection network to obtain key landmark points in the nuclear magnetic resonance brain image, wherein the key landmark points are used for dividing the brainstem region into a midbrain region and a pons region.
7. The detection method according to any one of claims 1 to 6, wherein the processing the image to be processed based on the trained landmark point detection network to obtain the target predicted landmark point in the image to be processed comprises:
and dividing the target area of the image to be processed according to the target prediction landmark points in the image to be processed to obtain a final target area partitioned by the target prediction landmark points.
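The final division step of claim 7 can be sketched as a single cut of the target region at a predicted landmark's coordinate. As before, this is an assumed simplification (one landmark, one axis); the claim itself does not prescribe how the partition is computed.

```python
import numpy as np

def partition_region(region_mask, landmark, axis=0):
    # Split the target region into two sub-regions at the predicted
    # landmark's coordinate along one axis (e.g. separating a midbrain
    # part from a pons part within a brainstem mask).
    cut = int(round(landmark[axis]))
    coords = np.indices(region_mask.shape)[axis]
    return region_mask & (coords < cut), region_mask & (coords >= cut)
```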
8. A device for detecting a landmark point in an image, the device comprising:
the data acquisition module is used for acquiring a training data set which comprises N images with first labels, wherein the first labels are used for indicating target landmark points in a target area in the images;
the prediction module is used for extracting the target landmark points from the images contained in the training data set according to the first labels, and inputting the training data set into a landmark point detection network to obtain predicted landmark points;
an information acquisition module for acquiring first difference information between the target landmark point and the predicted landmark point and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
the network training module is used for performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network;
and the landmark point determining module is used for processing the image to be processed based on the trained landmark point detection network to obtain the target prediction landmark point in the image to be processed.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110812323.0A (CN113658101B) | 2021-07-19 | 2021-07-19 | Method and device for detecting landmark points in image, terminal equipment and storage medium
Publications (2)

Publication Number | Publication Date
---|---
CN113658101A | 2021-11-16
CN113658101B | 2023-06-30

Family ID: 78477475
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110812323.0A (CN113658101B, active) | Method and device for detecting landmark points in image, terminal equipment and storage medium | 2021-07-19 | 2021-07-19

Country Status (1)

Country | Link
---|---
CN | CN113658101B
Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109670525A * | 2018-11-02 | 2019-04-23 | 平安科技(深圳)有限公司 | Object detection method and system based on once shot detection
CN111104538A * | 2019-12-06 | 2020-05-05 | 深圳久凌软件技术有限公司 | Fine-grained vehicle image retrieval method and device based on multi-scale constraint
CN111310775A * | 2018-12-11 | 2020-06-19 | TCL集团股份有限公司 | Data training method and device, terminal equipment and computer readable storage medium
CN112560999A * | 2021-02-18 | 2021-03-26 | 成都睿沿科技有限公司 | Target detection model training method and device, electronic equipment and storage medium
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |