CN113658101B - Method and device for detecting landmark points in image, terminal equipment and storage medium - Google Patents


Info

Publication number
CN113658101B
Authority
CN
China
Prior art keywords
image
landmark
target
processed
detection network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110812323.0A
Other languages
Chinese (zh)
Other versions
CN113658101A (en
Inventor
唐晓颖 (Tang Xiaoying)
孙福海 (Sun Fuhai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202110812323.0A priority Critical patent/CN113658101B/en
Publication of CN113658101A publication Critical patent/CN113658101A/en
Application granted granted Critical
Publication of CN113658101B publication Critical patent/CN113658101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application is applicable to the technical field of image processing, and provides a method, a device, terminal equipment and a storage medium for detecting landmark points in an image, wherein the detection method comprises the following steps: acquiring a training data set; inputting the training data set into a landmark point detection network to obtain a predicted landmark point; acquiring first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target area divided by the target landmark point from the target area and a second target area divided by the predicted landmark point from the target area; based on the first difference information and the second difference information, performing model back propagation on the landmark detection network to obtain the trained landmark detection network; and processing the image to be processed based on the trained landmark point detection network to obtain a target prediction landmark point in the image to be processed. The method improves the detection precision of landmark points in the image.

Description

Method and device for detecting landmark points in image, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a method and device for detecting landmark points in an image, terminal equipment and a storage medium.
Background
Current landmark detection, built on deep learning and large-scale image training, supports recognition of thousands of objects and scenes, and is widely applied in photo recognition, science education, image classification, medical anatomy and other scenarios. In medical anatomy in particular, landmark points are key coordinate points of anatomical significance, typically the junction points of different tissues and organs, or the most morphologically distinctive identifiable points of the subject. Because landmark points can be used in medicine to identify tissue structures, detecting them accurately is of great importance.
Among existing landmark point detection methods, manually marking landmark points is not only time-consuming and labor-intensive but also prone to marking errors. Existing image-recognition-based approaches, mainly neural network algorithms, classification algorithms and support vector machine algorithms, can detect landmark points in an image rapidly, but their recognition results must be further confirmed by experts to guarantee accuracy, which is inefficient. Moreover, for the huge number of images to be analyzed, experts cannot feasibly confirm the result for every image, so improving the detection accuracy of landmark points in images has become an important problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides a method, a device, terminal equipment and a storage medium for detecting landmark points in an image, which can improve the detection accuracy of the landmark points in the image.
A first aspect of an embodiment of the present application provides a method for detecting a landmark point in an image, where the method includes:
acquiring a training data set, wherein the training data set comprises N images with first labels, and the first labels are used for indicating target landmark points in a target area in the images;
extracting the target landmark point from the image contained in the training data set according to the first label, and inputting the training data set into a landmark point detection network to obtain a predicted landmark point;
acquiring first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target area divided by the target landmark point from the target area and a second target area divided by the predicted landmark point from the target area;
based on the first difference information and the second difference information, performing model back propagation on the landmark detection network to obtain the trained landmark detection network;
And processing the image to be processed based on the trained landmark point detection network to obtain a target prediction landmark point in the image to be processed.
A second aspect of embodiments of the present application provides a landmark point detection apparatus in an image, where the detection apparatus includes:
the data acquisition module is used for acquiring a training data set, wherein the training data set comprises N images with first labels, and the first labels are used for indicating target landmark points in a target area in the images;
the prediction module is used for extracting the target landmark point from the image contained in the training data set according to the first label, and inputting the training data set into a landmark point detection network to obtain a predicted landmark point;
the information acquisition module is used for acquiring first difference information between the target landmark point and the predicted landmark point and second difference information between a first target area divided by the target landmark point from the target area and a second target area divided by the predicted landmark point from the target area;
the network training module is used for carrying out model back propagation on the landmark detection network based on the first difference information and the second difference information to obtain the trained landmark detection network;
And the landmark point determining module is used for processing the image to be processed based on the trained landmark point detection network to obtain a target prediction landmark point in the image to be processed.
A third aspect of the embodiments of the present application provides a terminal device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method for detecting a landmark point in an image according to the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the method for detecting a landmark point in an image according to the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product, which when run on a terminal device, causes the terminal device to perform the method for detecting a landmark point in an image according to the first aspect.
In the embodiment of the application, the acquired training data set, which includes N images containing target landmark points in a target area, is input into the landmark point detection network to obtain the predicted landmark points output by the network. First difference information is obtained between the target landmark points and the predicted landmark points, and second difference information is obtained between the first target area divided from the target area by the target landmark points and the second target area divided from the target area by the predicted landmark points. The first difference information and the second difference information can be used in the back propagation process of the landmark point detection network, yielding the trained landmark point detection network. Training the network with both kinds of difference information produces a more accurate landmark point detection network, so that when an image to be processed is processed by the trained network, the network outputs more accurate target predicted landmark points in the image to be processed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for detecting landmark points in an image according to an embodiment of the present application;
FIG. 2 is a training process and iterative relationship diagram for a landmark detection network;
fig. 3 is a second flowchart of a method for detecting landmark points in an image according to an embodiment of the present application;
FIG. 4 is a sagittal sectional view of a nuclear magnetic resonance brain image;
FIG. 5 is a midbrain and pontine region in a nuclear magnetic resonance brain image;
fig. 6 is a block diagram of a landmark detection device in an image according to an embodiment of the present application;
fig. 7 is a block diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
It should be understood that the sequence number of each step in this embodiment does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
In order to illustrate the technical solution of the present application, the following description is made by specific embodiments.
Referring to fig. 1, a flowchart of a method for detecting landmark points in an image according to an embodiment of the present application is shown, and as shown in fig. 1, a method for detecting landmark points in an image includes the following steps:
step 101, a training data set is acquired.
In an embodiment of the present application, the training data set may include N images with a first label, where the first label is used to indicate a target landmark point in a target area in an image.
In one possible implementation manner, the target landmark points of the target area in the image may be marked manually. For example, a label is manually set on the image in the training data set: a matrix is constructed according to the position information of all pixels in the image, the element values corresponding to pixels belonging to target landmark points are set to 1 (i.e., the first label), the element values corresponding to pixels of other non-target landmark points are set to 0, and the target landmark points in the target area of the image are identified according to the element values in the matrix.
In another possible implementation manner, the target landmark points of the target area in the image are marked manually by binarizing the image: the gray value of the target landmark points is set to 255 and that of other non-target landmark points to 0, so that every pixel with gray value 255 is identified as a target landmark point, and the target landmark points of the target area are identified according to the gray values of the pixels in the image.
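The two labeling schemes above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation; the function names and the tiny 4x4 "image" are invented for the example.

```python
import numpy as np

def make_landmark_label(image_shape, landmark_coords):
    """Matrix scheme: element value 1 (the first label) at target
    landmark pixels, 0 at all non-target pixels."""
    label = np.zeros(image_shape, dtype=np.uint8)
    for (row, col) in landmark_coords:
        label[row, col] = 1
    return label

def extract_landmarks_from_gray(image, landmark_value=255):
    """Binarization scheme: recover landmark coordinates as the pixels
    whose gray value equals 255."""
    rows, cols = np.nonzero(image == landmark_value)
    return list(zip(rows.tolist(), cols.tolist()))

# Tiny worked example: mark two landmarks, binarize, and recover them.
label = make_landmark_label((4, 4), [(1, 2), (3, 0)])
binarized = label * 255  # gray value 255 at landmarks, 0 elsewhere
assert extract_landmarks_from_gray(binarized) == [(1, 2), (3, 0)]
```

Either representation carries the same information; the matrix form is convenient as a network training target, while the binarized form can be stored as an ordinary mask image.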
For example, suppose the acquired training data set is used to train a landmark point detection network for detecting, in nuclear magnetic resonance brain images, the key landmark points that distinguish the midbrain from the pons. The training data set then comprises N brain nuclear magnetic resonance images with a first label, where the first label indicates the key landmark points of the brainstem region in each image, and the key landmark points are manually marked points that correctly separate the midbrain and pons regions.
It should be understood that the images in the training data set acquired in the embodiments of the present application may be adaptively transformed according to a trained network, and different training data sets may be acquired according to different networks.
Step 102, extracting a target landmark point from the image contained in the training data set according to the first label, and inputting the training data set into a landmark point detection network to obtain a predicted landmark point.
In the embodiment of the present application, since the target landmark points in the images of the training data set carry the first label, the target landmark points may be extracted from those images according to the first label. To extract a target landmark point from an image, firstly the first label of the image is acquired: if the element value corresponding to the first label in the matrix is 1, the position information of the elements whose value is 1 is acquired, which is the position information of the first label in the matrix. Secondly, the correspondence between elements in the matrix and pixels in the image is acquired, and the pixels corresponding to elements with value 1 (i.e., the first label) are extracted according to this correspondence; these pixels are the target landmark points in the image.
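The element-value lookup described above is a simple index search when the label matrix and the image share the same pixel grid. A minimal sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def extract_target_landmarks(label_matrix):
    """Return pixel coordinates of all elements whose value is 1
    (the first label); under a same-shape matrix/image correspondence
    these are the target landmark points."""
    return [tuple(p) for p in np.argwhere(label_matrix == 1)]

# 3x3 label matrix with two first-label elements.
label = np.zeros((3, 3), dtype=np.uint8)
label[0, 1] = 1
label[2, 2] = 1
assert extract_target_landmarks(label) == [(0, 1), (2, 2)]
```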
The landmark detection network is a model to be trained, and parameters of the landmark detection network can be randomly initialized. Alternatively, in a possible implementation manner, the landmark detection network may be a pre-trained model, where the parameters are parameters obtained after the pre-training.
Specifically, according to the initial parameters of the landmark detection network, when the training data set is input to the landmark detection network, the predicted landmark point in the image included in the training data set output by the landmark detection network can be obtained.
For example, if the landmark point detection network is used for detecting the key landmark points that distinguish the midbrain from the pons in a nuclear magnetic resonance brain image, firstly the elements with matrix value 1 (i.e., the first label) corresponding to the nuclear magnetic resonance brain image are acquired; secondly, the position information of these elements is acquired and, according to the correspondence between matrix elements and pixels, the pixels corresponding to the elements with value 1 are extracted; these pixels are the manually marked key landmark points in the nuclear magnetic resonance brain images of the training data set. Finally, the training data set is input into the landmark point detection network with its initial parameters, and the key landmark points of the nuclear magnetic resonance brain images output by the network are obtained.
As a possible implementation, the landmark point detection network may use a High Resolution Network (HRNet), so that the network maintains a high-resolution representation throughout, gradually introduces low-resolution convolution branches, and connects the branches of different resolutions in parallel. With HRNet, repeated information exchange among the multi-resolution representations improves the expressive power of both the high-resolution and low-resolution representations, so that they reinforce each other and the landmark points output by the detection network are more accurate.
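The multi-resolution information exchange that HRNet performs can be illustrated with a toy "exchange unit" on two parallel branches: each branch receives the other branch's features resampled to its own resolution and adds them in. This NumPy sketch only shows the resampling-and-fusion pattern; real HRNet uses learned convolutions for the exchange, and all names here are illustrative.

```python
import numpy as np

def exchange(high, low):
    """One toy HRNet-style exchange: fuse each branch with the other
    branch resampled to its resolution (assumes shapes differ by 2x)."""
    # Upsample the low-resolution map by nearest-neighbour repetition.
    up = low.repeat(2, axis=0).repeat(2, axis=1)
    # Downsample the high-resolution map by 2x2 average pooling.
    h, w = high.shape
    down = high.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return high + up, low + down

high = np.ones((4, 4))           # high-resolution branch features
low = np.full((2, 2), 2.0)       # low-resolution branch features
new_high, new_low = exchange(high, low)
assert new_high.shape == (4, 4) and new_low.shape == (2, 2)
assert float(new_high[0, 0]) == 3.0  # 1 + upsampled 2
assert float(new_low[0, 0]) == 3.0   # 2 + pooled mean of 1s
```

The point of the pattern is that both branches keep their native resolution after every exchange, so a high-resolution representation is preserved all the way to the output.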
Step 103, obtaining first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target area divided by the target landmark point from the target area and a second target area divided by the predicted landmark point from the target area.
The target landmark point is the manually marked landmark point in an image (it can be understood as the reference against which the predicted landmark point output by the landmark point detection network is judged), and the predicted landmark point is the landmark point output by the network. In practical application, if the position of a predicted landmark point coincides with that of the target landmark point, the detection of the network is exact; more often, difference information exists between the predicted landmark point and the target landmark point, and this difference information can be used for parameter iteration of the landmark point detection network, optimizing its parameters and improving its detection accuracy.
In order to further improve the detection accuracy of the landmark detection network, the first difference information between the target landmark and the predicted landmark may be acquired, and at the same time, the difference information between a first target area and a second target area may be acquired, where the first target area is an area obtained by dividing the target area according to the target landmark, and the second target area is an area obtained by dividing the target area according to the predicted landmark, and the target area may be a target area in an image included in the training dataset.
In one implementation manner, the target area in the image included in the training data set may be obtained by manually marking the target area in the image, or the training data set may be input into an image segmentation network, and the image in the training data set is segmented by the image segmentation network, so as to obtain the target area in the image included in the training data set.
For example, obtaining the first difference information between target landmark points and predicted landmark points from a nuclear magnetic resonance brain image may mean obtaining the deviation distance between the manually marked key landmark points and the key landmark points output by the landmark point detection network, where the key landmark points distinguish the midbrain and pons regions. Obtaining the second difference information between the first target area and the second target area may mean: first, a first midbrain region and a first pons region are divided from the brainstem region according to the manually marked key landmark points; second, a second midbrain region and a second pons region are divided from the brainstem region according to the key landmark points output by the network; then the midbrain region difference value between the first and second midbrain regions and the pons region difference value between the first and second pons regions are obtained, the second difference information comprising the midbrain region difference value and the pons region difference value.
It should be understood that the midbrain region difference value and the pons region difference value in the embodiments of the present application are the area differences between the respective regions.
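Treating each region as a binary pixel mask, the area difference above is just the difference in pixel counts. A minimal sketch with invented example masks:

```python
import numpy as np

def region_area_difference(region_a, region_b):
    """Area difference between two binary region masks, measured in
    pixels, as used for the midbrain and pons difference values."""
    return abs(int(region_a.sum()) - int(region_b.sum()))

# Hypothetical masks: a "first" midbrain of 8 px vs. a "second" of 4 px.
first_midbrain = np.zeros((4, 4), dtype=np.uint8)
first_midbrain[:2, :] = 1
second_midbrain = np.zeros((4, 4), dtype=np.uint8)
second_midbrain[:1, :] = 1
assert region_area_difference(first_midbrain, second_midbrain) == 4
```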
And 104, performing model back propagation on the landmark detection network based on the first difference information and the second difference information to obtain the trained landmark detection network.
In this embodiment of the present application, a joint loss function may be constructed based on the first difference information and the second difference information, for example from the deviation distance representing the first difference information and an exclusive-or function representing the second difference information. The loss function value calculated by the joint loss function is used for model back propagation of the landmark point detection network, optimizing and updating its model parameters until the training of the network converges; after training is completed, the trained landmark point detection network is obtained.
Specifically, the joint loss function can be constructed as the weighted sum of the deviation distance of the first difference information and the exclusive-or function of the second difference information, where the deviation distance can be obtained by calculating the distance between the target landmark point coordinates and the predicted landmark point coordinates, and the exclusive-or function of the second difference information can be expressed as:
XOR_loss_{x,y} = 1 - |x ∩ y| / |x ∪ y|
where XOR_loss_{x,y} is the exclusive-or function of the second difference information, x and y refer to the first target area and the second target area respectively, and x and y are different areas.
From the above formula, the exclusive or function expresses the degree of difference between the first target region and the second target region. The function value is 1 when the first target area and the second target area are completely free from intersection, and the function value is 0 when the first target area and the second target area are completely equal, and the greater the difference between the first target area and the second target area is, the greater the function value of the exclusive-or function is.
For example, when landmark point detection is performed on the nuclear magnetic resonance brain image with the landmark point detection network, the first difference information is the deviation distance between the manually marked key landmark points and the key landmark points output by the network. The exclusive-or function of the second difference information comprises a midbrain exclusive-or function between the first and second midbrain regions and a pons exclusive-or function between the first and second pons regions, and its function value may be taken as the average of the two. The model parameters of the landmark point detection network are then computed and optimized by back propagation based on the deviation distance of the first difference information and the function value of the exclusive-or function of the second difference information, thereby updating the model parameters.
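The loss terms above can be sketched directly in NumPy. The XOR loss here is written as 1 minus intersection-over-union, which matches the boundary behaviour stated in the text (0 for identical regions, 1 for disjoint regions); the exact weights of the joint sum are an assumption, as are all function names.

```python
import numpy as np

def xor_loss(region_x, region_y):
    """Exclusive-or style region loss: 0 when the two binary masks
    coincide, 1 when they are completely disjoint."""
    x = region_x.astype(bool)
    y = region_y.astype(bool)
    union = np.logical_or(x, y).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(x, y).sum() / union

def distance_loss(target_pts, pred_pts):
    """Mean Euclidean deviation between target and predicted landmarks."""
    t = np.asarray(target_pts, dtype=float)
    p = np.asarray(pred_pts, dtype=float)
    return float(np.linalg.norm(t - p, axis=1).mean())

def joint_loss(target_pts, pred_pts, midbrain_xor, pons_xor,
               w_dist=1.0, w_xor=1.0):
    """Weighted sum of Distance_loss and the average of the midbrain
    and pons XOR losses (the weights are illustrative assumptions)."""
    return (w_dist * distance_loss(target_pts, pred_pts)
            + w_xor * (midbrain_xor + pons_xor) / 2.0)

# Identical vs. disjoint 4x4 masks.
a = np.zeros((4, 4), dtype=np.uint8); a[:2, :] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[2:, :] = 1
assert xor_loss(a, a) == 0.0
assert xor_loss(a, b) == 1.0
assert distance_loss([(0, 0), (0, 0)], [(3, 4), (0, 0)]) == 2.5
assert joint_loss([(0, 0)], [(0, 0)], 0.25, 0.75) == 0.5
```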
Specifically, refer to fig. 2, which shows the training process and iterative relationships of the landmark point detection network, taking the key landmark points dividing the midbrain and pons as an example. The input data is a nuclear magnetic resonance brain image; UNet is the image segmentation network; Ground Truth 1 is the manually marked brainstem region in the image; Position_loss is the loss function of the image segmentation network, which can be expressed as the difference between the manually marked brainstem region and the brainstem region output by the segmentation network. HRNet is the landmark point detection network; Ground Truth 2 is the manually marked key landmark points; Ground Truth 3 is the first midbrain region and first pons region divided according to the manually marked key landmark points; XOR_loss is the exclusive-or function of the second difference information, comprising the midbrain exclusive-or function between the first and second midbrain regions and the pons exclusive-or function between the first and second pons regions; Distance_loss is the deviation distance between the manually marked key landmark points and the key landmark points output by the detection network.
The overall training process of the landmark point detection network may be expressed as follows: first, a nuclear magnetic resonance brain image is input and divided by the image segmentation network into a brainstem region and a non-brainstem region, producing a nuclear magnetic resonance brain image in which the brainstem region carries a label; second, the labeled nuclear magnetic resonance brain image is input into the landmark point detection network, and the landmark point detection network is iteratively trained (that is, by back propagation) according to the joint loss function of XOR_loss and Distance_loss until training is completed, yielding the optimal parameters of the landmark point detection network; finally, the key landmark points of the brain image are output by the trained landmark point detection network, and the brainstem region is divided by these key landmark points to obtain the final midbrain and pons regions.
And 105, processing the image to be processed based on the trained landmark point detection network to obtain a target prediction landmark point in the image to be processed.
The trained landmark point detection network can obtain the target predicted landmark points in the image to be processed from the input image to be processed, where the image to be processed may be any type of image, such as a nuclear magnetic resonance image.
In this embodiment of the present application, the image to be processed may be a nuclear magnetic resonance brain image, and the specific process of processing it may be as follows: first, the target area in the image to be processed is acquired, and the target area carries a target tag; the image to be processed carrying the target tag is input into the landmark point detection network, which acquires the target area by identifying the target tag; finally, the landmark point detection network performs convolution calculation on the target area, extracts image features, and obtains the target predicted landmark points in the image to be processed from those image features.
In this embodiment of the present application, the acquired training data set, which includes N images containing target landmark points in a target area, is input into the landmark point detection network to obtain the predicted landmark points output by the network. First difference information is then obtained from the target landmark points and the predicted landmark points, and second difference information is obtained between the first target area divided from the target area by the target landmark points and the second target area divided from the target area by the predicted landmark points. The first difference information and the second difference information are used in the back propagation process of the landmark point detection network, yielding the trained landmark point detection network. Training the network according to both kinds of difference information produces a more accurate landmark point detection network, so that when the image to be processed is processed based on the trained network, more accurate target predicted landmark points are output.
Referring to fig. 3, a second flowchart of a method for detecting landmark points in an image according to an embodiment of the present application is shown, and as shown in fig. 3, the method includes the following steps:
step 301, a training data set is acquired.
The training dataset includes N images with a first label, wherein the first label is used to indicate a target landmark point in a target area in the images.
Step 302, extracting a target landmark point from an image contained in the training data set according to the first label, and inputting the training data set into a landmark detection network to obtain a predicted landmark point.
Step 303, obtaining first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point.
And step 304, model back propagation is carried out on the landmark detection network based on the first difference information and the second difference information, and the trained landmark detection network is obtained.
Steps 301 to 304 of this embodiment are similar to steps 101 to 104 of the previous embodiment, and can be referred to each other, and the description of this embodiment is omitted here.
And 305, processing the image to be processed based on the trained landmark detection network to obtain the target prediction landmark in the image to be processed.
In this embodiment of the present application, most of the different images in the acquired training data set do not have the same size and dimensions. Images of different sizes and dimensions are not directly comparable, and features computed from them by the network cannot be statistically analyzed. The images in the training data set must therefore be registered: they are aligned to a corresponding three-dimensional template space so that all images in the training data set have the same dimensions and size, and the same structure in different images is located at the same position in the three-dimensional template space.
For example, when the images in the acquired training data set are all nuclear magnetic resonance brain images, all of them may be aligned into the MNI (Montreal Neurological Institute) space, so that the same structures of different brain images lie at the same positions in MNI space. The MNI space is a coordinate system established from a series of nuclear magnetic resonance images of normal human brains. After all images in the training data set have been registered to the MNI space, the mid-sagittal plane can be selected in the MNI space, and this mid-sagittal slice is used as the registered image of each nuclear magnetic resonance brain image in the training data set, as shown in fig. 4. Inputting the registered images into the landmark point detection network allows the network to be trained more effectively.
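Once a volume has been registered to the template space, taking the mid-sagittal slice reduces to indexing the middle of the sagittal axis. A minimal sketch, assuming the volume is already aligned and that axis 0 is the sagittal axis (the axis convention is an assumption for illustration; real NIfTI volumes record their orientation in the header):

```python
import numpy as np

def mid_sagittal_slice(volume):
    """Return the slice at the middle of the sagittal axis of a
    registered 3-D brain volume (axis 0 assumed sagittal here)."""
    return volume[volume.shape[0] // 2]
```

The registration itself (affine alignment to the MNI template) would be done with a dedicated tool beforehand; this sketch only covers the slice selection step described above.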
As an implementation manner, based on the trained landmark detection network, before the image to be processed is processed, the method further includes:
aligning the image to be processed into a corresponding three-dimensional template space to obtain a registered image to be processed, wherein corresponding structural points have standard spatial positions in the three-dimensional template space, and the structural points in the registered image to be processed are located at those standard spatial positions;
correspondingly, processing the image to be processed includes:
and processing the registered image to be processed.
Aligning the image to be processed into the corresponding three-dimensional template space places each structural point of the image at the standard spatial position of the corresponding structural point in the template space; a cross section in the mid-sagittal direction can then be selected from the three-dimensional template space as the registered image to be processed. Because the landmark point detection network is trained with registered images, the registered image to be processed is better adapted to the trained landmark point detection network, and a more accurate landmark point detection result is obtained.
Optionally, processing the image to be processed based on the trained landmark detection network to obtain the target prediction landmark in the image to be processed, including:
dividing the image to be processed through the trained image dividing network to obtain a target area in the image to be processed;
marking a target area of the image to be processed by using a target label to obtain the image to be processed carrying the target label;
inputting the image to be processed carrying the target label into the trained landmark point detection network to obtain the target predicted point in the image to be processed.
The image segmentation network may adopt a neural network structure, for example a UNet structure, which includes a U-shaped structure and layer-jump (skip) connections. The U-shaped structure is an encoder-decoder structure formed by repeated downsampling followed by deconvolution. After the image to be processed is input into the UNet network, each pixel can be predicted through the convolution layers while the spatial information of the image is retained. The skip connections ensure that the finally recovered feature map fuses more shallow features instead of being trained only on high-level semantic features, so that the segmentation result is finer across multiple scales. Meanwhile, when the image to be processed is input into the UNet network, the position of the target area can be located directly by segmentation, and each pixel in the target area is marked with a target tag, so that the image to be processed carrying the target tag is obtained.
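The role of the U shape and the skip connections can be illustrated without a deep learning framework. The following sketch stands in for one encoder step (downsampling), one decoder step (upsampling), and the layer-jump fusion; mean pooling, nearest-neighbour upsampling, and channel stacking are simplifications chosen for the illustration, not the UNet's actual learned convolutions:

```python
import numpy as np

def down(x):
    """2x2 mean pooling: one 'encoder' step along the U shape."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling: one 'decoder' step recovering
    spatial resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_fuse(shallow, deep_upsampled):
    """Layer-jump (skip) connection: stack the shallow encoder
    feature map with the recovered decoder map, so fine spatial
    detail is kept alongside the deeper semantic features."""
    return np.stack([shallow, deep_upsampled])
```

In the real UNet the fused channels are passed through further convolutions; the point here is only that the decoder sees both the upsampled deep features and the unmodified shallow ones.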
Optionally, the method further includes, before performing segmentation processing on the image to be processed through the trained image segmentation network to obtain the target region in the image to be processed:
inputting the training data set into an image segmentation network to obtain a target segmentation area;
extracting a target area from the image contained in the training data set according to the second label;
acquiring difference information between a target area and a target segmentation area;
and based on the difference information, performing model back propagation on the image segmentation network to obtain the trained image segmentation network.
In this embodiment of the present application, when the landmark point detection network is trained with the training data set, the training data set may also be input into an image segmentation network, which segments the images in the training data set to obtain the target area in each image. The training data set can thus be used to train the image segmentation network, and the target areas of the images in the training data set are then obtained through the trained image segmentation network. To this end, the training data set further includes a second tag for indicating the target area, and the image segmentation network is trained according to this second tag.
For example, the brainstem region of each nuclear magnetic resonance brain image in the training data set is first marked manually. The training data set is then input into the image segmentation network to obtain a target segmentation region, while the brainstem region is extracted from the nuclear magnetic resonance brain image according to the manual marking. The target segmentation region is compared with the brainstem region to obtain the difference between them, which can be represented by an exclusive-OR function; the function value of the exclusive-OR function between the target segmentation region and the brainstem region is calculated from this difference, and the model parameters of the image segmentation network are updated by a back propagation method according to this function value until the training model converges, after which the trained image segmentation network is obtained.
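The segmentation-side training signal can be sketched the same way as the landmark-side one: an exclusive-OR value between the predicted mask and the manual mask, driven toward zero until convergence. The function names and the convergence tolerance `tol` below are assumptions for the sketch, not part of the described method:

```python
import numpy as np

def seg_xor_value(pred_mask, manual_mask):
    """Function value of the exclusive-OR between the network's
    target segmentation region and the manually marked brainstem
    region: the fraction of disagreeing pixels (0.0 = identical)."""
    return float(np.mean(np.logical_xor(pred_mask, manual_mask)))

def converged(loss_history, tol=1e-3):
    """A simple stand-in for 'until the training model converges':
    stop when the last two loss values differ by less than tol."""
    return len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < tol
```

In practice the exclusive-OR is relaxed to a differentiable surrogate (e.g. a soft overlap loss on the network's probability map) so that back propagation can update the model parameters.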
Optionally, inputting the image to be processed carrying the target tag into the trained landmark detection network to obtain a target prediction point in the image to be processed, including:
inputting the image to be processed carrying the target tag into a trained landmark point detection network, identifying the target tag through the landmark point detection network, and determining the area marked by the target tag as the target area of the image to be processed;
And detecting landmark points in the target area of the image to be processed to obtain target predicted points in the image to be processed.
After the image to be processed carrying the target tag is input into the trained landmark point detection network, the landmark point detection network extracts the region marked by the target tag according to the tag, and then performs landmark point detection on that target region.
It should be understood that performing landmark point detection only on the target area narrows the detection range of the target predicted landmark points and reduces the data the network must process, so the running speed of the landmark point detection network can be increased.
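Restricting detection to the tagged region amounts to cropping the image to the tag's bounding box and remembering the offset so detected coordinates can be mapped back. A minimal sketch, with the function name assumed for illustration:

```python
import numpy as np

def crop_to_tag(image, tag_mask):
    """Crop the image to the bounding box of the tagged target
    region, shrinking the data the landmark point detection network
    must process. Returns the crop and its (row, col) offset, so a
    point detected in the crop maps back as (r + y0, c + x0)."""
    ys, xs = np.nonzero(tag_mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1], (int(y0), int(x0))
```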
If the image to be processed is a nuclear magnetic resonance brain image, processing the image to be processed based on the trained landmark detection network to obtain a target prediction landmark in the image to be processed, including:
aligning the nuclear magnetic resonance brain images to corresponding three-dimensional template spaces to obtain registered nuclear magnetic resonance brain images;
image segmentation is carried out on the registered nuclear magnetic resonance brain images through a trained segmentation network, so that brainstem areas of the registered nuclear magnetic resonance brain images are obtained;
marking the brainstem area by using a target label to obtain a nuclear magnetic resonance brain image carrying the target label;
inputting the nuclear magnetic resonance brain image carrying the target tag into the trained landmark point detection network to obtain the key landmark points in the nuclear magnetic resonance brain image, wherein the key landmark points are used for distinguishing the midbrain and pons regions within the brainstem region.
In this embodiment of the present application, if key landmark points capable of distinguishing the midbrain and pons regions are to be obtained from the nuclear magnetic resonance brain image, the target landmark points in the training data set used when training the landmark point detection network should be manually marked key landmark points for distinguishing the midbrain and pons regions. The key landmark points may be selected as points 1, 2 and 3 shown in fig. 4, where point 1 is the edge point at the junction of the midbrain and the pons, point 2 is the termination point of the protruding edge of the pons, and point 3 is the lowest point of the quadrangle; line A is the line connecting point 1 and point 3, and line B is a line parallel to line A passing through point 2.
And 306, dividing the target area of the image to be processed according to the target prediction landmark in the image to be processed to obtain a final target area partitioned by the target prediction landmark.
In this embodiment of the present application, after a target prediction landmark in an image to be processed is obtained, the target prediction landmark may be connected to divide a target area of the image to be processed.
Illustratively, dividing the brainstem region according to lines A and B of fig. 4 may yield the midbrain region 51 and the pontine region 52 shown in fig. 5.
It will be appreciated that after the midbrain and pontine regions are obtained, the Magnetic Resonance Parkinsonism Index (MRPI) can be obtained by calculating the area ratio of the midbrain region to the pontine region, which is of great significance in the identification of Parkinson's disease and parkinsonism.
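With the two region masks in hand, the area ratio reduces to a pixel count. A minimal sketch following the description above (the function name is an assumption; note that the full clinical MRPI definition involves additional measurements, so this covers only the area-ratio step stated here):

```python
import numpy as np

def midbrain_pons_area_ratio(midbrain_mask, pons_mask):
    """Pixel-count area ratio of the midbrain region to the pontine
    region partitioned by the predicted landmark points."""
    return float(np.count_nonzero(midbrain_mask) / np.count_nonzero(pons_mask))
```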
An example of the technical effects after applying the method of the above embodiment is given below.
436 nuclear magnetic resonance brain images were acquired, and the relative error between the measured MRPI and the MRPI obtained from manual marking was used as the metric. The relative error of the MRPI measured by the method of the embodiment of the present application was compared with that of a landmark point detection network trained using only the target landmark points; the comparison results are shown in the following table:
[Table BDA0003168683720000151: comparison of MRPI relative errors; reproduced in the original patent as an image.]
As can be seen from the table, the method of the embodiment of the present application achieves better performance and a better effect than the existing landmark point detection method.
Compared with the previous embodiment, this embodiment adds registration of the image to be processed, automatically segments the target area of the registered image using the image segmentation network, and inputs the image to be processed carrying the target tag into the trained landmark point detection network, thereby reducing redundant data during operation of the landmark point detection network; the registration step further increases the detection accuracy of the landmark point detection network.
Referring to fig. 6, a block diagram of a landmark detection device in an image provided in an embodiment of the present application is shown, and for convenience of explanation, only a portion related to the embodiment of the present application is shown.
The landmark point detection device in the image specifically comprises the following modules:
a data acquisition module 601, configured to acquire a training data set, including N images with a first tag, where the first tag is used to indicate a target landmark point in a target area in the images;
the prediction module 602 is configured to extract a target landmark point from an image included in the training data set according to the first label, and input the training data set to a landmark detection network to obtain a predicted landmark point;
an information obtaining module 603, configured to obtain first difference information between a target landmark point and a predicted landmark point, and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
the network training module 604 is configured to perform model back propagation on the landmark detection network based on the first difference information and the second difference information, so as to obtain a trained landmark detection network;
The landmark point determining module 605 is configured to process the image to be processed based on the trained landmark point detection network, so as to obtain a target prediction landmark point in the image to be processed.
In this embodiment of the present application, the landmark detection apparatus may specifically further include the following modules:
the registration module is used for aligning the image to be processed into a corresponding three-dimensional template space to obtain a registered image to be processed, wherein corresponding structural points have standard spatial positions in the three-dimensional template space, and the structural points in the registered image to be processed are located at those standard spatial positions;
in the embodiment of the present application, the registration module may specifically be used to:
and processing the registered image to be processed.
In the embodiment of the present application, the landmark determining module 605 may specifically include the following sub-modules:
the segmentation sub-module is used for carrying out segmentation processing on the image to be processed through the trained image segmentation network to obtain a target area in the image to be processed;
the marking sub-module is used for marking a target area of the image to be processed by adopting the target label to obtain the image to be processed carrying the target label;
the target determination submodule is used for inputting the image to be processed carrying the target label into the trained landmark point detection network to obtain a target prediction point in the image to be processed.
In the embodiment of the present application, the segmentation submodule may specifically include the following units:
the region segmentation unit is used for inputting the training data set into the image segmentation network to obtain a target segmentation region;
the target extraction unit is used for extracting a target area from the image contained in the training data set according to the second label;
a difference acquisition unit configured to acquire difference information between the target region and the target divided region;
and the training unit is used for carrying out model back propagation on the image segmentation network based on the difference information to obtain the trained image segmentation network.
In the embodiment of the present application, the targeting submodule may specifically include the following units:
the identification unit is used for inputting the image to be processed carrying the target tag into the trained landmark detection network, identifying the target tag through the landmark detection network, and determining the area marked by the target tag as the target area of the image to be processed;
the detection unit is used for carrying out landmark point detection on the target area of the image to be processed to obtain a target predicted point in the image to be processed.
In the embodiment of the present application, when the image to be processed is a nuclear magnetic resonance brain image, the landmark determining module 605 may be specifically further configured to:
Aligning the nuclear magnetic resonance brain images to corresponding three-dimensional template spaces to obtain registered nuclear magnetic resonance brain images;
segmenting the registered nuclear magnetic resonance brain image through a trained image segmentation network to obtain a brainstem region of the registered nuclear magnetic resonance brain image;
marking the brainstem area by using a target label to obtain a nuclear magnetic resonance brain image carrying the target label;
inputting the nuclear magnetic resonance brain image carrying the target label into a trained landmark detection network to obtain key landmark points in the nuclear magnetic resonance brain image, wherein the key landmark points are used for distinguishing midbrain and brain bridge areas from brainstem areas.
In this embodiment of the present application, the device for detecting a landmark point in an image may specifically further include the following modules:
the region dividing module is used for dividing the target region of the image to be processed according to the target prediction landmark point in the image to be processed to obtain a final target region partitioned by the target prediction landmark point.
The landmark point detection device in the image provided in the embodiment of the present application may be applied in the foregoing method embodiment, and details refer to descriptions of the foregoing method embodiment, which are not repeated herein.
Fig. 7 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 7, the terminal device 700 of this embodiment includes: at least one processor 710 (only one shown in fig. 7), a memory 720, and a computer program 721 stored in the memory 720 and executable on the at least one processor 710, the processor 710 implementing the steps in any of the various image landmark detection method embodiments described above when the computer program 721 is executed.
The terminal device 700 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The terminal device may include, but is not limited to, a processor 710, a memory 720. It will be appreciated by those skilled in the art that fig. 7 is merely an example of a terminal device 700 and is not limiting of the terminal device 700, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 710 may be a central processing unit (Central Processing Unit, CPU), the processor 710 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 720 may in some embodiments be an internal storage unit of the terminal device 700, such as a hard disk or a memory of the terminal device 700. The memory 720 may also be an external storage device of the terminal device 700 in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 700. Further, the memory 720 may also include both an internal storage unit and an external storage device of the terminal device 700. The memory 720 is used to store an operating system, application programs, boot Loader (Boot Loader), data, other programs, etc., such as program codes of the computer program. The memory 720 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each method embodiment described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the computer readable medium contains content that can be appropriately scaled according to the requirements of jurisdictions in which such content is subject to legislation and patent practice, such as in certain jurisdictions in which such content is subject to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The implementation of all or part of the flow of the method of the above embodiment may also be accomplished by a computer program product, which when run on a terminal device, causes the terminal device to perform the steps of the method embodiments described above.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A method for detecting landmark points in an image, characterized in that the method comprises:
acquiring a training data set, wherein the training data set comprises N images with first labels, and the first labels are used for indicating target landmark points in a target area in the images;
extracting the target landmark point from the images contained in the training data set according to the first labels, and inputting the training data set into a landmark point detection network to obtain a predicted landmark point;
acquiring first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network;
processing an image to be processed based on the trained landmark point detection network to obtain a target predicted landmark point in the image to be processed;
wherein the performing model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network comprises:
constructing a joint loss function based on a weighted sum of the first difference information and the second difference information, calculating a loss function value by using the joint loss function, and performing model back propagation on the landmark point detection network according to the loss function value so as to update model parameters of the landmark point detection network until the training of the landmark point detection network converges, so as to obtain the trained landmark point detection network;
wherein the first difference information is the deviation distance between the coordinates of the target landmark point and the coordinates of the predicted landmark point, and the second difference information is an exclusive-or function xor_loss_{x,y} constructed from the first target area and the second target area:

[formula image FDA0004188641030000011: definition of xor_loss_{x,y}]

wherein x and y denote the first target area and the second target area, respectively, and x and y are different regions.
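Outside the claim language, the joint loss above combines a coordinate-deviation term with a region XOR term. A minimal sketch of one plausible reading, using binary masks and a symmetric-difference XOR term normalised by the union size (the weights, the normalisation, and the function names are assumptions, not taken from the patent):

```python
import numpy as np

def xor_loss(region_x, region_y):
    # Voxels covered by exactly one of the two binary masks (symmetric
    # difference), normalised by the union size; x and y are different regions.
    diff = np.logical_xor(region_x, region_y)
    union = np.logical_or(region_x, region_y)
    return diff.sum() / max(union.sum(), 1)

def joint_loss(pred_pts, target_pts, pred_region, target_region,
               w_coord=1.0, w_xor=1.0):
    # First difference: mean Euclidean deviation between the predicted and
    # target landmark coordinates.
    coord_term = np.linalg.norm(pred_pts - target_pts, axis=1).mean()
    # Second difference: XOR term between the regions the landmarks carve out.
    return w_coord * coord_term + w_xor * xor_loss(pred_region, target_region)
```

In training, this scalar would drive back propagation to update the detection network's parameters until convergence.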
2. The detection method according to claim 1, wherein before the processing of the image to be processed based on the trained landmark point detection network, the method further comprises:
aligning the image to be processed to a corresponding three-dimensional template space to obtain a registered image to be processed, wherein each corresponding structural point has a standard spatial position in the three-dimensional template space, and the structural points in the registered image to be processed are located at the corresponding standard spatial positions;
correspondingly, the processing of the image to be processed comprises:
processing the registered image to be processed.
3. The detection method according to claim 1, wherein the processing of the image to be processed based on the trained landmark point detection network to obtain the target predicted landmark point in the image to be processed comprises:
segmenting the image to be processed through a trained image segmentation network to obtain a target area in the image to be processed;
marking the target area of the image to be processed with a target label to obtain the image to be processed carrying the target label;
inputting the image to be processed carrying the target label into the trained landmark point detection network to obtain a target predicted point in the image to be processed.
4. The detection method according to claim 3, wherein the training data set further comprises second labels for indicating the target area, and before the image to be processed is segmented by the trained image segmentation network, the method further comprises:
inputting the training data set into the image segmentation network to obtain a target segmentation area;
extracting the target area from the images contained in the training data set according to the second labels;
acquiring difference information between the target area and the target segmentation area;
performing model back propagation on the image segmentation network based on the difference information to obtain the trained image segmentation network.
5. The detection method according to claim 3, wherein the inputting of the image to be processed carrying the target label into the trained landmark point detection network to obtain the target predicted point in the image to be processed comprises:
inputting the image to be processed carrying the target label into the trained landmark point detection network, identifying the target label through the landmark point detection network, and determining the area marked by the target label as the target area of the image to be processed;
performing landmark point detection on the target area of the image to be processed to obtain the target predicted point in the image to be processed.
6. The detection method according to claim 1, wherein the image to be processed is a nuclear magnetic resonance brain image, and the processing of the image to be processed based on the trained landmark point detection network to obtain the target predicted landmark point in the image to be processed comprises:
aligning the nuclear magnetic resonance brain image to a corresponding three-dimensional template space to obtain a registered nuclear magnetic resonance brain image;
segmenting the registered nuclear magnetic resonance brain image through a trained image segmentation network to obtain a brainstem region of the registered nuclear magnetic resonance brain image;
marking the brainstem region with a target label to obtain the nuclear magnetic resonance brain image carrying the target label;
inputting the nuclear magnetic resonance brain image carrying the target label into the trained landmark point detection network to obtain key landmark points in the nuclear magnetic resonance brain image, wherein the key landmark points are used for distinguishing the midbrain and pons regions within the brainstem region.
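Claims 3 to 6 describe a fixed ordering: register the image to the template space, segment the target (e.g. brainstem) region, attach the target label, then detect landmarks in the labelled region. A schematic sketch of that ordering only, where all the callables are hypothetical stand-ins for the trained components:

```python
def detect_landmarks(image, register, segment, detect):
    # register/segment/detect are placeholders for the trained components in
    # claims 3-6; only the ordering is taken from the claims.
    registered = register(image)          # align to the 3-D template space
    target_mask = segment(registered)     # trained image segmentation network
    tagged = (registered, target_mask)    # "carrying the target label"
    return detect(tagged)                 # trained landmark point detection network
```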
7. The detection method according to any one of claims 1 to 6, wherein the processing of the image to be processed based on the trained landmark point detection network to obtain the target predicted landmark point in the image to be processed further comprises:
dividing the target area of the image to be processed according to the target predicted landmark point in the image to be processed to obtain final target areas partitioned by the target predicted landmark point.
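The final partition step of claim 7 can be illustrated with a toy rule: split a binary mask at the axial coordinate of a predicted landmark point. The splitting rule and names here are illustrative only; the patent does not fix a particular rule:

```python
import numpy as np

def partition_by_landmark(mask, landmark_z):
    # Divide a binary 3-D mask into two sub-regions at the axial (z)
    # coordinate of a predicted landmark point.
    zs = np.arange(mask.shape[0]).reshape(-1, 1, 1)
    upper = mask & (zs < landmark_z)
    lower = mask & (zs >= landmark_z)
    return upper, lower
```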
8. A device for detecting landmark points in an image, characterized in that the detection device comprises:
a data acquisition module, configured to acquire a training data set, wherein the training data set comprises N images with first labels, and the first labels are used for indicating target landmark points in a target area in the images;
a prediction module, configured to extract the target landmark point from the images contained in the training data set according to the first labels, and input the training data set into a landmark point detection network to obtain a predicted landmark point;
an information acquisition module, configured to acquire first difference information between the target landmark point and the predicted landmark point, and second difference information between a first target area divided from the target area by the target landmark point and a second target area divided from the target area by the predicted landmark point;
a network training module, configured to perform model back propagation on the landmark point detection network based on the first difference information and the second difference information to obtain the trained landmark point detection network; and
a landmark point determining module, configured to process an image to be processed based on the trained landmark point detection network to obtain a target predicted landmark point in the image to be processed;
wherein the network training module is further configured to:
construct a joint loss function based on a weighted sum of the first difference information and the second difference information, calculate a loss function value by using the joint loss function, and perform model back propagation on the landmark point detection network according to the loss function value so as to update model parameters of the landmark point detection network until the training of the landmark point detection network converges, so as to obtain the trained landmark point detection network;
wherein the first difference information is the deviation distance between the coordinates of the target landmark point and the coordinates of the predicted landmark point, and the second difference information is an exclusive-or function xor_loss_{x,y} constructed from the first target area and the second target area:

[formula image FDA0004188641030000041: definition of xor_loss_{x,y}]

wherein x and y denote the first target area and the second target area, respectively, and x and y are different regions.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202110812323.0A 2021-07-19 2021-07-19 Method and device for detecting landmark points in image, terminal equipment and storage medium Active CN113658101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110812323.0A CN113658101B (en) 2021-07-19 2021-07-19 Method and device for detecting landmark points in image, terminal equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113658101A (en) 2021-11-16
CN113658101B (en) 2023-06-30

Family

ID=78477475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110812323.0A Active CN113658101B (en) 2021-07-19 2021-07-19 Method and device for detecting landmark points in image, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113658101B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670525A (en) * 2018-11-02 2019-04-23 平安科技(深圳)有限公司 Object detection method and system based on once shot detection
CN111104538A (en) * 2019-12-06 2020-05-05 深圳久凌软件技术有限公司 Fine-grained vehicle image retrieval method and device based on multi-scale constraint
CN111310775A (en) * 2018-12-11 2020-06-19 Tcl集团股份有限公司 Data training method and device, terminal equipment and computer readable storage medium
CN112560999A (en) * 2021-02-18 2021-03-26 成都睿沿科技有限公司 Target detection model training method and device, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant