CN109712128B - Feature point detection method, feature point detection device, computer equipment and storage medium - Google Patents


Publication number
CN109712128B
CN109712128B (application CN201811580783.XA)
Authority
CN
China
Prior art keywords
neural network
feature point
image
feature
network model
Prior art date
Legal status
Active
Application number
CN201811580783.XA
Other languages
Chinese (zh)
Other versions
CN109712128A (en)
Inventor
姜娈 (Jiang Luan)
张剑锋 (Zhang Jianfeng)
李强 (Li Qiang)
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201811580783.XA
Publication of CN109712128A
Application granted
Publication of CN109712128B

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a feature point detection method and apparatus, a computer device, and a storage medium. The method comprises the following steps: establishing a neural network model; inputting an image of interest into the neural network model to obtain a feature point probability map; and calculating the feature point positions based on the feature point probability map. Because the trained model is a fully convolutional neural network, the entire image of interest can be input and detected directly rather than block by block, which greatly improves both the speed and the accuracy of feature point detection.

Description

Feature point detection method, feature point detection device, computer equipment and storage medium
Technical Field
The present application relates to the field of deep learning technologies, and in particular, to a method and an apparatus for detecting feature points, a computer device, and a storage medium.
Background
The detection of key feature points in medical images is important for many intelligent applications. For example, in intelligent cardiac scanning, the accurate and efficient automatic detection of key cardiac feature points and regions such as the apex, the mitral valve, and the tricuspid valve is the key to automatically locating the long-axis and short-axis directions of the heart, and is of great clinical significance. However, because of the complexity and diversity of cardiac structures, detecting these key feature points in an image is very difficult.
Disclosure of Invention
In view of the above, and in response to the technical problem that the complexity and diversity of cardiac structures make key feature points very difficult to detect in an image, it is necessary to provide a feature point detection method, apparatus, computer device, and storage medium.
A method of feature point detection, the method comprising:
establishing a neural network model, wherein image block samples near the feature points are extracted from the training images as a training set to train the neural network and obtain the neural network model, and the neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
and calculating the feature point positions based on the feature point probability map.
In one embodiment, extracting image block samples near the feature points from the training images as a training set to train the neural network includes:
taking image blocks within a set size range near the feature points as positive samples to train the neural network;
and taking image blocks outside the set size range near the feature points as negative samples to train the neural network.
In one embodiment, establishing the neural network model by extracting image block samples near the feature points from the training images as a training set and training the neural network further includes:
and preprocessing the training image.
In one embodiment, the preprocessing the training image includes:
and performing one or more of up-sampling, down-sampling, isotropic processing, denoising and enhancement processing on the training image.
In one embodiment, the inputting the image of interest into the neural network model and obtaining the feature point probability map further includes:
the image of interest is pre-processed.
In one embodiment, the calculating the position of the feature point based on the feature point probability map includes:
and clustering the position coordinates of the first characteristic region in the characteristic point probability map by adopting a clustering method to obtain a second characteristic region, wherein the part of the characteristic point probability map with the probability value larger than a set threshold value is used as the first characteristic region.
In one embodiment, the calculating the position of the feature point based on the feature point probability map further includes:
performing a weighted average over the position coordinates of the second feature region to obtain the feature position in the feature point probability map;
and obtaining the feature point position in the image of interest based on the feature position.
A feature point detection apparatus, the apparatus comprising:
a neural network establishing module, configured to establish a neural network model by extracting image block samples near the feature points from the training images as a training set and training the neural network, wherein the neural network model is a fully convolutional neural network;
an input module, configured to input an image of interest into the neural network model to obtain a feature point probability map;
and a calculation module, configured to calculate the feature point positions based on the feature point probability map.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
establishing a neural network model, wherein image block samples near the feature points are extracted from the training images as a training set to train the neural network and obtain the neural network model, and the neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
and calculating the feature point positions based on the feature point probability map.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
establishing a neural network model, wherein image block samples near the feature points are extracted from the training images as a training set to train the neural network and obtain the neural network model, and the neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
and calculating the feature point positions based on the feature point probability map.
According to the feature point detection method, apparatus, computer device, and storage medium above, a fully convolutional neural network model is obtained by training, an image of interest is input into the model to obtain a feature point probability map, and the feature point positions are calculated from that map. When detecting feature points, the neural network can therefore process the entire image of interest directly instead of detecting image blocks one at a time, which greatly improves both the speed and the accuracy of feature point detection.
Drawings
FIG. 1 is a schematic flow chart of a feature point detection method according to an embodiment;
FIG. 2 is a schematic diagram of a feature point probability map of feature points in one embodiment;
FIG. 3 is a block diagram showing the structure of a feature point detecting apparatus according to an embodiment;
FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Many traditional algorithms exist for detecting key feature points, but because of the diversity of key feature points in medical images, such purpose-built traditional algorithms are time-consuming and generalize poorly. In recent years, deep learning has made important breakthroughs in artificial intelligence, with great success in natural language processing, speech recognition, computer vision, and image and video analysis; yet there is still little work applying deep learning to key feature point detection in medical imaging.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a feature point detection method according to an embodiment of the invention.
In this embodiment, the feature point detection method includes:
Step 100, preprocessing the training image.
In this embodiment, preprocessing the training image includes performing one or more of up-sampling, down-sampling, isotropic resampling, denoising, and enhancement on the training image. One big difference between medical images and natural images is that the size of the organs in a medical image carries physical meaning, so the image cannot be freely scaled or rotated. The original images have different resolutions, so some need up-sampling and some need down-sampling to bring all training images to the same resolution; both are done by interpolation. Unifying the resolution makes the neural network easier to train and the results more robust. The resolution in the x, y, and z directions is preferably the same, i.e. the training image is resampled isotropically; illustratively, a resolution of 1mm × 1mm × 1mm is used. Because an image may contain metal artifacts or noise, denoising is also required. Meanwhile, the gray range of a medical image is generally large, while the organs of interest may occupy only a specific gray range; to reduce the difficulty of network training and obtain a good result, that specific range can be enhanced, i.e. a suitable window width and window level is chosen and the image gray levels are truncated and normalized. For example, if the gray range of the original image is -1024 to 4096 but only the range 0 to 1000 is of interest, the following formula can be used:
I' = 0 when I < 0;  I' = I / 1000 when 0 ≤ I ≤ 1000;  I' = 1 when I > 1000
it can be understood that preprocessing the training images can make all the training images have the same resolution, gray scale distribution, etc. to reduce the difficulty of model training and improve accuracy.
Step 110, establishing a neural network model: image block samples near the feature points are extracted from the training images as a training set to train the neural network and obtain the neural network model, where the neural network model is a fully convolutional neural network.
Specifically, positive and negative samples are obtained from the vicinity of the feature points in the training images and used as a training set to train the neural network. In this embodiment, this includes taking image blocks within a set size range near a feature point as positive samples, and taking image blocks outside the set size range near the feature point as negative samples, to train the neural network.
Illustratively, with the feature point as the center, the region within radius r is the positive sample region, the ring between radius r and radius r + x is regarded as a transition region, and the region outside radius r + x is regarded as not belonging to the feature point and serves as the negative sample region. Image blocks cut out that contain the positive sample region are used as positive samples, and image blocks cut out beyond radius r + x are used as negative samples.
For example, when the apex of the heart is labeled as the feature point, there is no single unambiguous point that is the apex, so any point in the determined apex region can be taken as the feature point; it follows that all image blocks in the apex region can be used as positive samples to train the neural network. Specifically, r only needs to be chosen so that the whole apex region falls into the positive sample region, and x only needs to be chosen so that the parts clearly not belonging to the apex region fall into the negative sample region.
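A minimal sketch of this sampling rule, assuming Euclidean distance from the block center to the labeled feature point. The radii r and x are free parameters of the scheme, and all names here are illustrative, not from the patent.

```python
import math

def label_sample(block_center, feature_point, r=5.0, x=3.0):
    """Classify a candidate image block by its center's distance to the feature
    point: within r it is a positive sample, within the ring (r, r + x] it is a
    transition region (discarded), and beyond r + x it is a negative sample."""
    d = math.dist(block_center, feature_point)
    if d <= r:
        return "positive"
    if d <= r + x:
        return "transition"
    return "negative"

# Example for a feature point at the origin of a 3-D volume.
print(label_sample((0, 0, 3), (0, 0, 0)))   # positive
print(label_sample((0, 0, 7), (0, 0, 0)))   # transition
print(label_sample((0, 0, 20), (0, 0, 0)))  # negative
```

Discarding the transition ring keeps ambiguous blocks (partly apex, partly background) out of both classes, which matches the role the text assigns to x.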
In this embodiment, the image blocks of the positive and negative samples are used as the training inputs and the probability that an image block contains the feature point is used as the training target; the neural network is trained on these pairs to obtain the neural network model. The model is a fully convolutional neural network whose input is an image block and whose output is the probability that the block contains the feature point.
Specifically, the neural network model is a fully convolutional neural network in which the fully connected layers are convolution layers with a kernel size of 1. An image of interest of any size can therefore be input at test time, and the model detects it directly to obtain a feature point probability map.
Referring to fig. 2, fig. 2 is a schematic diagram of a feature point probability map. The number in each cell is the probability that the corresponding part of the image of interest contains a feature point. The cell size of the probability map output by the fully convolutional network depends on the size of the image blocks used for training, while the overall size of the probability map depends on that block size and on the stride with which the image of interest is divided into blocks; the stride in turn depends on the construction parameters of the fully convolutional network. There is a correspondence between the feature point probability map and the image of interest, so each cell of the probability map can be mapped back to an image block of the image of interest.
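The relationship between the probability-map size, the training block size, and the slicing stride can be made concrete with standard sliding-window arithmetic. The formula and the function names below are illustrative assumptions consistent with the description, not taken from the patent.

```python
def prob_map_size(image_size, block_size, stride):
    """Number of sliding-window positions per axis: (image - block) // stride + 1."""
    return tuple((i - b) // s + 1 for i, b, s in zip(image_size, block_size, stride))

def cell_to_block(cell, block_size, stride):
    """Map one probability-map cell back to the image block it scores
    (start and end coordinates per axis)."""
    start = tuple(c * s for c, s in zip(cell, stride))
    end = tuple(a + b for a, b in zip(start, block_size))
    return start, end

# A 64^3 volume scanned with 16^3 blocks at stride 4 yields a 13^3 probability map.
print(prob_map_size((64, 64, 64), (16, 16, 16), (4, 4, 4)))  # (13, 13, 13)
print(cell_to_block((2, 0, 1), (16, 16, 16), (4, 4, 4)))     # ((8, 0, 4), (24, 16, 20))
```

This is the correspondence the text relies on: every probability-map cell names one image block, so a high-probability cell can be traced back to a region of the image of interest.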
Step 120, preprocessing the image of interest.
In this embodiment, preprocessing the image of interest includes one or more of up-sampling, down-sampling, isotropic resampling, denoising, and enhancement. The preprocessing applied to the image of interest is the same as that applied to the training images, so that the two are brought to the same state and the accuracy of feature point detection is improved.
Step 130, inputting the image of interest into the neural network model to obtain a feature point probability map.
In this embodiment, an image of interest of any size can be input directly into the neural network model without being divided into blocks beforehand. When the model detects the image of interest, it implicitly divides it into blocks and produces a feature point probability map containing a probability for each image block. As above, the cell size of the map depends on the block size used in training, the overall map size depends on the block size and the stride, and the stride depends on the construction parameters of the fully convolutional network.
Step 140, calculating the feature point positions based on the feature point probability map.
In this embodiment, the portion of the feature point probability map whose probability is greater than a set threshold is taken as the feature region. The feature region may include regions whose corresponding parts of the image of interest contain no feature point; these are called false positive feature regions. Around a true feature point, the probability map typically contains many adjacent feature regions, whereas false positive feature regions tend to occur in isolation. To filter out the false positives, calculating the feature point positions based on the probability map further includes processing the map with a clustering method. Specifically, an adaptive K-means clustering method is used, i.e. one that automatically adjusts the number of clusters and classifies the data with the most appropriate cluster count, thereby filtering out the isolated false positive feature regions. The adaptive K-means method clusters the position coordinates of a first feature region in the probability map to obtain a second feature region, where the portion of the map whose probability value exceeds the set threshold serves as the first feature region.
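The adaptive K-means procedure itself is not spelled out in the text. As a simplified stand-in, the sketch below groups the thresholded probability-map coordinates by single-linkage distance and keeps only the largest group, which likewise discards isolated false-positive cells. The linking distance and all names are assumptions for illustration, not the patent's algorithm.

```python
import math
from collections import deque

def largest_cluster(coords, link_dist=2.0):
    """Group coordinates so that points closer than link_dist are linked,
    then return the largest group (the presumed true feature region)."""
    coords = list(coords)
    unvisited = set(range(len(coords)))
    best = []
    while unvisited:
        seed = unvisited.pop()
        group = [seed]
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            # Pull every still-unvisited point within link_dist into this group.
            near = [j for j in unvisited if math.dist(coords[i], coords[j]) <= link_dist]
            for j in near:
                unvisited.discard(j)
                group.append(j)
                queue.append(j)
        if len(group) > len(best):
            best = group
    return [coords[i] for i in best]

# Three adjacent cells survive; the isolated cell at (10, 10) is filtered out.
region = largest_cluster([(0, 0), (0, 1), (1, 0), (10, 10)])
print(sorted(region))  # [(0, 0), (0, 1), (1, 0)]
```

The key property shared with the adaptive K-means step is that true feature regions form one dense group while false positives stand alone and fall into small groups that are dropped.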
Specifically, calculating the feature point positions based on the probability map further includes taking the probability value at each position coordinate of the second feature region as a weight and computing a weighted average of those coordinates to obtain the feature position in the probability map, then obtaining the feature point position in the image of interest from that feature position. Specifically, the set threshold may be 0.5; in other embodiments it can be set according to the actual situation. The position coordinates of a feature region may be the coordinates of its center.
In this embodiment, obtaining the feature point position in the image of interest from the feature position means computing it from the block stride used by the fully convolutional network when processing the image of interest, the block side length, and the feature position; the stride again depends on the construction parameters of the network.
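Assuming each probability-map cell scores a block whose image-space center sits at cell × stride + block side / 2, the weighted average and the mapping back to image coordinates can be sketched as follows. The exact center convention and all names are illustrative assumptions consistent with the description above.

```python
def weighted_centroid(coords, weights):
    """Probability-weighted average of the second feature region's coordinates."""
    total = sum(weights)
    dims = len(coords[0])
    return tuple(sum(c[d] * w for c, w in zip(coords, weights)) / total
                 for d in range(dims))

def map_to_image(map_pos, stride, block_side):
    """Convert a (possibly fractional) probability-map position to image
    coordinates: position * stride + block_side / 2 per axis."""
    return tuple(m * s + b / 2 for m, s, b in zip(map_pos, stride, block_side))

# Two cells with weights 1 and 2 pull the centroid toward the second cell.
center = weighted_centroid([(0.0, 0.0), (3.0, 0.0)], [1.0, 2.0])
print(center)                                  # (2.0, 0.0)
print(map_to_image(center, (4, 4), (16, 16)))  # (16.0, 8.0)
```

Weighting by probability lets cells that are more confident about containing the feature point contribute more to the final position, giving sub-cell precision in the image of interest.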
Illustratively, the feature point detection method proceeds as follows: establish a neural network model by extracting image block samples near the feature points from the preprocessed training images as a training set and training the network, yielding a fully convolutional neural network model; input the preprocessed image of interest into the model to obtain a feature point probability map; process the probability map with the adaptive K-means clustering method and take the portion of the processed map whose probability exceeds the set threshold as the second feature region; compute a weighted average of the position coordinates of the second feature region to obtain the feature position in the probability map; and obtain the feature point position in the image of interest from that feature position. Feature points are thus detected without processing image blocks one at a time: the entire image of interest is detected directly, which greatly improves both the speed and the accuracy of feature point detection.
It should be understood that although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of performance is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, there is provided a feature point detection apparatus including: a training image preprocessing module 200, a neural network establishing module 210, an image of interest preprocessing module 220, an input module 230, and a calculation module 240, wherein:
the training image preprocessing module 200 is used for preprocessing the training images.
The neural network establishing module 210 is configured to establish a neural network model by extracting image block samples near the feature points from the training images as a training set and training the neural network, where the neural network model is a fully convolutional neural network.
An image of interest preprocessing module 220 for preprocessing the image of interest.
An input module 230, configured to input the image of interest into the neural network model, so as to obtain a feature point probability map.
And a calculating module 240, configured to calculate a position of the feature point based on the feature point probability map.
For specific limitations of the feature point detection apparatus, reference may be made to the limitations of the feature point detection method above, which are not repeated here. Each module of the feature point detection apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a feature point detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
establishing a neural network model, wherein image block samples near the feature points are extracted from the training images as a training set to train the neural network and obtain the neural network model, and the neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
and calculating the feature point positions based on the feature point probability map.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
taking image blocks within a set size range near the feature points as positive samples to train the neural network;
and taking image blocks outside the set size range near the feature points as negative samples to train the neural network.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and preprocessing the training image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and performing one or more of up-sampling, down-sampling, isotropic processing, denoising and enhancement processing on the training image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the image of interest is pre-processed.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and clustering the position coordinates of the first characteristic region in the characteristic point probability map by adopting a clustering method to obtain a second characteristic region, wherein the part of the characteristic point probability map with the probability value larger than a set threshold value is used as the first characteristic region.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing a weighted average over the position coordinates of the second feature region to obtain the feature position in the feature point probability map;
and obtaining the feature point position in the image of interest based on the feature position.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
establishing a neural network model, wherein image block samples near the feature points are extracted from the training images as a training set to train the neural network and obtain the neural network model, and the neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
and calculating the feature point positions based on the feature point probability map.
In one embodiment, the computer program when executed by the processor further performs the steps of:
taking image blocks within a set size range near the feature points as positive samples to train the neural network;
and taking image blocks outside the set size range near the feature points as negative samples to train the neural network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and preprocessing the training image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and performing one or more of up-sampling, down-sampling, isotropic processing, denoising and enhancement processing on the training image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the image of interest is pre-processed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and clustering the position coordinates of the first characteristic region in the characteristic point probability map by adopting a clustering method to obtain a second characteristic region, wherein the part of the characteristic point probability map with the probability value larger than a set threshold value is used as the first characteristic region.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing a weighted average over the position coordinates of the second feature region to obtain the feature position in the feature point probability map;
and obtaining the feature point position in the image of interest based on the feature position.
According to the feature point detection method, apparatus, computer device, and storage medium above, a fully convolutional neural network model is obtained by training, an image of interest is input into the model to obtain a feature point probability map, and the feature point positions are calculated from that map. When detecting feature points, the neural network can therefore process the entire image of interest directly instead of detecting image blocks one at a time, which greatly improves both the speed and the accuracy of feature point detection.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. A method of feature point detection, the method comprising:
establishing a neural network model, wherein image block samples near the feature points are extracted from training images as a training set to train the neural network to obtain the neural network model, and the neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
calculating the positions of the feature points based on the feature point probability map;
wherein the calculating of the positions of the feature points based on the feature point probability map comprises:
clustering the position coordinates of the first feature region in the feature point probability map to obtain a second feature region, wherein the portion of the feature point probability map whose probability value is greater than a set threshold serves as the first feature region; performing a weighted average over the position coordinates of the second feature region to obtain the feature position in the feature point probability map; and obtaining the position of the feature point in the image of interest based on the feature position.
2. The method of claim 1, wherein the extracting image block samples near the feature points from the training image as a training set to train the neural network comprises:
taking image blocks within a set size range near the feature points as positive samples to train the neural network;
and taking image blocks outside the set size range near the feature points as negative samples to train the neural network.
3. The method according to claim 1, wherein before the establishing of the neural network model by extracting image block samples near the feature points from the training images as a training set to train the neural network, the method further comprises:
preprocessing the training image.
4. The method of claim 3, wherein preprocessing the training image comprises:
performing one or more of up-sampling, down-sampling, isotropic resampling, denoising, and enhancement on the training image.
5. The method of claim 1, wherein before the inputting of the image of interest into the neural network model to obtain the feature point probability map, the method further comprises:
preprocessing the image of interest.
6. A feature point detection apparatus, characterized in that the apparatus comprises:
a neural network establishing module, configured to establish a neural network model, wherein image block samples near the feature points are extracted from training images as a training set to train the neural network to obtain the neural network model, and the neural network model is a fully convolutional neural network;
an input module, configured to input an image of interest into the neural network model to obtain a feature point probability map;
a calculation module, configured to calculate the positions of the feature points based on the feature point probability map;
wherein the calculating of the positions of the feature points based on the feature point probability map comprises:
clustering the position coordinates of the first feature region in the feature point probability map to obtain a second feature region, wherein the portion of the feature point probability map whose probability value is greater than a set threshold serves as the first feature region; performing a weighted average over the position coordinates of the second feature region to obtain the feature position in the feature point probability map; and obtaining the position of the feature point in the image of interest based on the feature position.
7. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN201811580783.XA 2018-12-24 2018-12-24 Feature point detection method, feature point detection device, computer equipment and storage medium Active CN109712128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811580783.XA CN109712128B (en) 2018-12-24 2018-12-24 Feature point detection method, feature point detection device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109712128A CN109712128A (en) 2019-05-03
CN109712128B true CN109712128B (en) 2020-12-01

Family

ID=66257413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811580783.XA Active CN109712128B (en) 2018-12-24 2018-12-24 Feature point detection method, feature point detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109712128B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097011A (en) * 2019-05-06 2019-08-06 北京邮电大学 A kind of signal recognition method and device
CN110675444B (en) * 2019-09-26 2023-03-31 东软医疗系统股份有限公司 Method and device for determining head CT scanning area and image processing equipment
CN110807364B (en) * 2019-09-27 2022-09-30 中国科学院计算技术研究所 Modeling and capturing method and system for three-dimensional face and eyeball motion
JP7321953B2 (en) * 2020-02-17 2023-08-07 株式会社神戸製鋼所 Automatic welding system, welding method, learning device, method for generating learned model, learned model, estimation device, estimation method, and program
CN114018275A (en) * 2020-07-15 2022-02-08 广州汽车集团股份有限公司 Driving control method and system for vehicle at intersection and computer readable storage medium
CN112365492A (en) * 2020-11-27 2021-02-12 上海联影医疗科技股份有限公司 Image scanning method, image scanning device, electronic equipment and storage medium
CN113662660A (en) * 2021-10-22 2021-11-19 杭州键嘉机器人有限公司 Joint replacement preoperative planning method, device, equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204025B (en) * 2017-04-18 2019-10-18 华北电力大学 The adaptive clothing cartoon modeling method of view-based access control model perception
CN108230390B (en) * 2017-06-23 2021-01-01 北京市商汤科技开发有限公司 Training method, key point detection method, device, storage medium and electronic equipment
CN107464250B (en) * 2017-07-03 2020-12-04 深圳市第二人民医院 Automatic breast tumor segmentation method based on three-dimensional MRI (magnetic resonance imaging) image
CN107767419A (en) * 2017-11-07 2018-03-06 广州深域信息科技有限公司 A kind of skeleton critical point detection method and device
CN108388841B (en) * 2018-01-30 2021-04-16 浙江大学 Cervical biopsy region identification method and device based on multi-feature deep neural network
CN108564120B (en) * 2018-04-04 2022-06-14 中山大学 Feature point extraction method based on deep neural network
CN108765368A (en) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 MRI lesion locations detection method, device, computer equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 201807 Shanghai City, north of the city of Jiading District Road No. 2258

Patentee after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai City, north of the city of Jiading District Road No. 2258

Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.
