CN109712128A - Feature point detecting method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN109712128A (application number CN201811580783.XA)
- Authority
- CN
- China
- Prior art keywords
- feature point
- neural network
- image
- network model
- probability map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
This application relates to a feature point detection method, device, computer equipment and storage medium. The method includes: establishing a neural network model; inputting an image of interest into the neural network model to obtain a feature point probability map; and calculating the feature point position based on the feature point probability map. With the above feature point detection method, device, computer equipment and storage medium, a fully convolutional neural network model is obtained by training, the image of interest is input into the fully convolutional neural network model, and the feature point position is calculated, so that feature point detection is no longer performed patch by patch: the whole image of interest can be detected directly, which greatly improves the speed and accuracy of feature point detection.
Description
Technical field
This application relates to the field of deep learning, and in particular to a feature point detection method, device, computer equipment and storage medium.
Background technique
The detection of key feature points is essential to many intelligent applications in medical imaging. For example, in intelligent cardiac scanning, the accurate and efficient automatic detection of cardiac key feature points/regions such as the apex, the mitral valve and the tricuspid valve is the key to automatically locating the long and short axes of the heart, and is of great clinical significance. However, the complexity and diversity of cardiac structures make detecting these key feature points in images extremely difficult.
Summary of the invention
In view of this, it is necessary to provide a feature point detection method, device, computer equipment and storage medium that address the technical problem that the complexity and diversity of cardiac structures make key feature point detection in images extremely difficult.
A feature point detection method, the method comprising:
establishing a neural network model: image patch samples extracted near the feature points in training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
calculating the feature point position based on the feature point probability map.
In one embodiment, using the image patch samples extracted near the feature points in the training images as a training set to train the neural network includes:
training the neural network with image patches within a set range around the feature point as positive samples;
training the neural network with image patches outside the set range around the feature point as negative samples.
In one embodiment, before establishing the neural network model and training the neural network on the image patch samples extracted near the feature points in the training images, the method further includes:
preprocessing the training images.
In one embodiment, preprocessing the training images includes one or more of: upsampling, downsampling, isotropic resampling, denoising, and enhancement.
In one embodiment, before inputting the image of interest into the neural network model to obtain the feature point probability map, the method further includes:
preprocessing the image of interest.
In one embodiment, calculating the feature point position based on the feature point probability map includes:
clustering the position coordinates of the first feature regions in the feature point probability map to obtain second feature regions, where the parts of the feature point probability map whose probability values are greater than a set threshold serve as the first feature regions.
In one embodiment, calculating the feature point position based on the feature point probability map further includes:
computing a weighted average of the position coordinates of the second feature regions to obtain the feature position in the feature point probability map;
obtaining the feature point position in the image of interest based on that feature position.
A feature point detection device, the device comprising:
a neural network module, configured to establish a neural network model: image patch samples extracted near the feature points in training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
an input module, configured to input an image of interest into the neural network model to obtain a feature point probability map;
a computing module, configured to calculate the feature point position based on the feature point probability map.
A computer equipment, including a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
establishing a neural network model: image patch samples extracted near the feature points in training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
calculating the feature point position based on the feature point probability map.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the following steps:
establishing a neural network model: image patch samples extracted near the feature points in training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
calculating the feature point position based on the feature point probability map.
With the above feature point detection method, device, computer equipment and storage medium, a fully convolutional neural network model is obtained by training, the image of interest is input into the model to obtain a feature point probability map, and the feature point position is calculated from that map. As a result, feature point detection is no longer performed patch by patch: the neural network can detect the whole image of interest directly, which greatly improves the speed and accuracy of feature point detection.
Description of the drawings
Fig. 1 is a schematic flowchart of the feature point detection method in one embodiment;
Fig. 2 is a schematic diagram of a feature point probability map in one embodiment;
Fig. 3 is a structural block diagram of the feature point detection device in one embodiment;
Fig. 4 is an internal structure diagram of the computer equipment in one embodiment.
Specific embodiment
To make the objects, technical solutions and advantages of this application clearer, this application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain this application, not to limit it.
Many traditional algorithms exist for key feature point detection, but because of the diversity of key feature points in medical images, such purpose-built traditional algorithms are time-consuming and generalize poorly. In recent years, deep learning has achieved important breakthroughs in artificial intelligence, with great success in fields such as natural language processing, speech recognition, computer vision, and image and video analysis; however, there is still little research applying deep learning to key feature point detection in medical images.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the feature point detection method of one embodiment of the invention.
In this embodiment, the feature point detection method includes:
Step 100: preprocess the training images.
In this embodiment, preprocessing the training images includes one or more of: upsampling, downsampling, isotropic resampling, denoising, and enhancement. A major difference between medical images and natural images is that the organ size inside a medical image is physically meaningful, so the image cannot be arbitrarily scaled or rotated. It should be understood that images are upsampled or downsampled because the resolution of the original images varies: some need upsampling and some downsampling to unify the training images to the same resolution. Both are done by interpolation; unifying the training images to the same resolution makes the neural network easier to train and the result more robust. Ideally the resolution is also identical along the x, y and z directions, i.e. the training images are resampled isotropically. Illustratively, a resolution of 1mm × 1mm × 1mm is used. Since an image may contain metal artifacts or noise, denoising may be needed. Meanwhile, the gray-level range of a medical image is generally large, but the organ of interest may occupy only a specific gray-level range. To reduce the training difficulty of the network and obtain better results, this specific gray-level range can be enhanced, i.e. a suitable window width and window level are chosen to truncate and normalize the image gray levels. For example, if the gray-level range of the original image is -1024~4096 but only the 0~1000 range is of interest, the gray levels can be truncated to [0, 1000] and linearly normalized.
It should be understood that preprocessing the training images gives all of them the same resolution and gray-level distribution, which reduces the difficulty of model training and improves accuracy.
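The truncation normalization described above can be sketched as follows. This is a minimal illustration assuming a simple linear window; the function name and the hard-coded window values (0–1000, matching the example in the text) are illustrative, not taken from the patent.

```python
import numpy as np

def window_normalize(img, lo=0.0, hi=1000.0):
    """Truncate intensities to the window [lo, hi] and rescale to [0, 1].

    lo/hi play the role of the chosen window level/width; the defaults
    reproduce the text's example of keeping only the 0~1000 range of a
    -1024~4096 image.
    """
    img = np.clip(np.asarray(img, dtype=np.float32), lo, hi)
    return (img - lo) / (hi - lo)
```

Applied to the example range, intensities below 0 map to 0, intensities above 1000 map to 1, and the window of interest fills the full [0, 1] scale.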
Step 110: establish the neural network model. Image patch samples extracted near the feature points in the training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network.
Specifically, positive and negative samples are obtained near the feature points in the training images as the training set for the neural network. In this embodiment, this includes: training the neural network with image patches within a set range around the feature point as positive samples, and with image patches outside the set range as negative samples.
Illustratively, taking the feature point as the center, the region within radius r is the positive-sample region, the region within radius r+x is regarded as a transition region, and anything beyond radius r+x is considered not to belong to this feature point and is the negative-sample region. An image patch cut within radius r+x is a positive sample if it contains part of the positive-sample region, while a patch cut beyond radius r+x is considered a negative sample.
Illustratively, when the apex of the heart is annotated as a feature point, there is no single specific point that is the apex, so any point within the region judged to be the apex can be taken as the feature point; accordingly, it should be understood that image patches within the apex region can serve as positive samples for training the neural network. Specifically, r only needs to be chosen so that the apex region is included in the positive-sample region, and x only needs to be chosen so that the parts that clearly do not belong to the apex region fall into the negative-sample region.
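The radius-based positive/transition/negative split can be sketched as below. This is a simplification that labels a candidate patch by the distance of its center from the annotated point, whereas the text labels a patch positive if it overlaps the positive-sample region; the function name and all parameters are illustrative.

```python
import numpy as np

def label_patch(patch_center, feature_pt, r, x):
    """Label a candidate patch by distance d from the annotated feature point:
    d <= r        -> positive
    r < d <= r+x  -> transition (ambiguous band, excluded from training)
    d > r+x       -> negative
    r and x are dataset-dependent choices (see the apex example above)."""
    d = np.linalg.norm(np.asarray(patch_center, float) - np.asarray(feature_pt, float))
    if d <= r:
        return "positive"
    if d <= r + x:
        return "transition"
    return "negative"
```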
In this embodiment, the image patches of the positive and negative samples are used as the training-set input, and the probability that a patch contains the feature point is used as the training-set output, to train the neural network model. The obtained neural network model is a fully convolutional neural network whose input is an image patch and whose output is the probability that the patch contains the feature point.
Specifically, the neural network model is a fully convolutional neural network: its fully connected layers are convolutional layers with a kernel size of 1, so at test time an image of interest of arbitrary size can be input, and the neural network model detects the image of interest directly to obtain the feature point probability map.
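The key property above, that a 1×1 convolution replaces the fully connected classifier head so the same weights slide over inputs of any size, can be sketched in NumPy. The feature extractor is omitted and all shapes and names are illustrative:

```python
import numpy as np

def conv1x1(feature_map, W, b):
    """Apply a 1x1 convolution: the fully connected head of the patch
    classifier, reinterpreted so the network becomes fully convolutional.
    feature_map: (H, W, C); W: (C, K); b: (K,). Works for any H, W."""
    return feature_map @ W + b

rng = np.random.default_rng(0)
W, b = rng.standard_normal((8, 1)), np.zeros(1)
small = conv1x1(rng.standard_normal((4, 4, 8)), W, b)    # 4x4x1 logit map
large = conv1x1(rng.standard_normal((64, 80, 8)), W, b)  # same weights, larger map
```

Because the same `(C, K)` weights apply at every spatial location, an image of arbitrary size yields a probability map of matching spatial size, with no re-slicing into patches at test time.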
Referring to Fig. 2, Fig. 2 is a schematic diagram of the feature point probability map of a feature point. Specifically, the number in each cell represents the probability that the corresponding part of the image of interest contains the feature point. It should be understood that the patch size underlying the feature point probability map output by the fully convolutional neural network depends on the size of the image patches used for training, and that the size of the feature point probability map depends on that patch size and on the stride with which the image of interest is divided into patches. It should be understood that this patch stride depends on the architecture of the fully convolutional neural network. It should be understood that there is a correspondence between the feature point probability map and the image of interest, so each cell of the feature point probability map can be mapped back to an image patch of the image of interest.
Step 120: preprocess the image of interest.
In this embodiment, preprocessing the image of interest includes one or more of: upsampling, downsampling, isotropic resampling, denoising, and enhancement. It should be understood that the image of interest is preprocessed in the same way as the training images, bringing it to the same state as the training images, which improves the accuracy of feature point detection.
Step 130: input the image of interest into the neural network model to obtain the feature point probability map.
In this embodiment, an image of interest of arbitrary size can be input into the neural network model directly, without being cut into patches in advance. It should be understood that when the neural network model detects the image of interest, it effectively divides it into patches and obtains the feature point probability map, in which each patch has a probability of containing the feature point. The patch size underlying the probability map depends on the size of the image patches used for training, the size of the probability map depends on that patch size and on the patch stride over the image of interest, and the stride depends on the architecture of the fully convolutional neural network.
Step 140: calculate the feature point position based on the feature point probability map.
In this embodiment, the parts of the feature point probability map whose probability is greater than a set threshold serve as feature regions. It should be understood that the feature regions may include regions whose corresponding parts of the image of interest contain no feature point; these are called false-positive feature regions. It should be understood that multiple feature regions may exist in the part of the probability map corresponding to the feature point position in the image of interest, while false-positive feature regions are generally isolated. In this embodiment, to filter out the false-positive feature regions, calculating the feature point position based on the probability map further includes processing the probability map with a clustering method. Specifically, the probability map is processed with adaptive K-means clustering, i.e. an algorithm that automatically adjusts the number of clusters as the data change so as to classify the data with the most suitable number of clusters, filtering out the isolated false-positive feature regions. Specifically, adaptive K-means clustering is applied to the position coordinates of the first feature regions in the probability map to obtain the second feature regions, where the parts of the probability map whose probability values are greater than the set threshold serve as the first feature regions.
Specifically, calculating the feature point position based on the probability map further includes computing a weighted average of the position coordinates of the second feature regions, with the probability value corresponding to each position coordinate as its weight, to obtain the feature position in the probability map; the feature point position in the image of interest is then obtained from that feature position. Specifically, the set threshold can be 0.5; in other embodiments, the threshold can be set according to the actual situation. It should be understood that the position coordinate of a feature region can be the position coordinate of its center.
In this embodiment, obtaining the feature point position in the image of interest based on the feature position includes calculating the feature point position in the image of interest from the patch stride used by the fully convolutional neural network when processing the image of interest, the patch side length, and the feature position. It should be understood that the patch stride depends on the architecture of the fully convolutional neural network.
Illustratively, the above feature point detection method establishes a neural network model by training a neural network on image patch samples extracted near the feature points in preprocessed training images, obtaining a fully convolutional neural network model; inputs the preprocessed image of interest into that model to obtain a feature point probability map; processes the probability map with adaptive K-means clustering; takes the parts of the processed probability map whose probability exceeds the set threshold as the second feature regions; computes a weighted average of the position coordinates of the second feature regions to obtain the feature position in the probability map; and obtains the feature point position in the image of interest from that feature position. In this way, feature point detection is no longer performed patch by patch: the whole image of interest can be detected directly, which greatly improves the speed and accuracy of feature point detection.
It should be understood that although the steps in the flowchart of Fig. 1 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 3, a feature point detection device is provided, comprising: a training image preprocessing module 200, a neural network module 210, an image-of-interest preprocessing module 220, an input module 230 and a computing module 240, wherein:
the training image preprocessing module 200 is configured to preprocess the training images;
the neural network module 210 is configured to establish the neural network model: image patch samples extracted near the feature points in the training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
the image-of-interest preprocessing module 220 is configured to preprocess the image of interest;
the input module 230 is configured to input the image of interest into the neural network model to obtain the feature point probability map;
the computing module 240 is configured to calculate the feature point position based on the feature point probability map.
For the specific limitations of the feature point detection device, refer to the limitations of the feature point detection method above, which are not repeated here. Each module in the above feature point detection device can be implemented in whole or in part by software, hardware or a combination thereof. The above modules can be embedded in hardware form in, or independent of, the processor in the computer equipment, or stored in software form in the memory of the computer equipment, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer equipment is provided, which can be a terminal whose internal structure diagram can be as shown in Fig. 4. The computer equipment includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer equipment provides computing and control capability. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer equipment is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a feature point detection method. The display screen of the computer equipment can be a liquid crystal display or an electronic ink display, and the input device of the computer equipment can be a touch layer covering the display screen, a key, trackball or trackpad provided on the housing of the computer equipment, or an external keyboard, trackpad or mouse.
Those skilled in the art will understand that the structure shown in Fig. 4 is only a block diagram of part of the structure relevant to the solution of this application and does not limit the computer equipment to which the solution is applied; a specific computer equipment may include more or fewer components than shown in the figure, combine certain components, or have a different component layout.
In one embodiment, a computer equipment is provided, including a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
establishing a neural network model: image patch samples extracted near the feature points in training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
calculating the feature point position based on the feature point probability map.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
training the neural network with image patches within a set range around the feature point as positive samples;
training the neural network with image patches outside the set range around the feature point as negative samples.
In one embodiment, the processor, when executing the computer program, further implements the following step:
preprocessing the training images.
In one embodiment, the processor, when executing the computer program, further implements the following step:
performing one or more of upsampling, downsampling, isotropic resampling, denoising, and enhancement on the training images.
In one embodiment, the processor, when executing the computer program, further implements the following step:
preprocessing the image of interest.
In one embodiment, the processor, when executing the computer program, further implements the following step:
clustering the position coordinates of the first feature regions in the feature point probability map to obtain second feature regions, where the parts of the feature point probability map whose probability values are greater than a set threshold serve as the first feature regions.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
computing a weighted average of the position coordinates of the second feature regions to obtain the feature position in the feature point probability map;
obtaining the feature point position in the image of interest based on that feature position.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the following steps:
establishing a neural network model: image patch samples extracted near the feature points in training images are used as a training set to train a neural network, and the obtained neural network model is a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
calculating the feature point position based on the feature point probability map.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
training the neural network with image patches within a set range around the feature point as positive samples;
training the neural network with image patches outside the set range around the feature point as negative samples.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
preprocessing the training images.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
performing one or more of upsampling, downsampling, isotropic resampling, denoising, and enhancement on the training images.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
preprocessing the image of interest.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
clustering the position coordinates of the first feature regions in the feature point probability map to obtain second feature regions, where the parts of the feature point probability map whose probability values are greater than a set threshold serve as the first feature regions.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
computing a weighted average of the position coordinates of the second feature regions to obtain the feature position in the feature point probability map;
obtaining the feature point position in the image of interest based on that feature position.
With the above feature point detection method, device, computer equipment and storage medium, a fully convolutional neural network model is obtained by training, the image of interest is input into the model to obtain a feature point probability map, and the feature point position is calculated from that map. As a result, feature point detection is no longer performed patch by patch: the neural network can detect the whole image of interest directly, which greatly improves the speed and accuracy of feature point detection.
Those of ordinary skill in the art will understand that all or part of the processes in the above embodiment methods can be completed by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), direct Rambus RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A feature point detection method, characterized in that the method comprises:
establishing a neural network model: image patch samples extracted near feature points in training images serve as a training set for training a neural network, yielding the neural network model, the neural network model being a fully convolutional neural network;
inputting an image of interest into the neural network model to obtain a feature point probability map;
computing a feature point position based on the feature point probability map.
2. The method according to claim 1, characterized in that training the neural network with the image patch samples extracted near feature points in the training images as a training set comprises:
training the neural network with image patches within a set size range around a feature point as positive samples;
training the neural network with image patches outside the set size range around a feature point as negative samples.
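The positive/negative sampling of claim 2 might be realized as sketched below; the patent fixes neither the patch size nor the distance ranges, so every parameter and helper name here (`sample_patches`, `pos_range`, and so on) is a hypothetical choice for illustration only.

```python
import numpy as np

def sample_patches(image, feature_point, patch=5, pos_range=2, n_neg=4, rng=None):
    """Sketch of claim 2's sampling: patches centred within pos_range
    pixels of the feature point are positives; patches centred outside
    that range are negatives."""
    rng = np.random.default_rng(rng)
    half = patch // 2
    fy, fx = feature_point
    pos, neg = [], []
    lo, hi = half, image.shape[0] - half - 1   # assume a square image for brevity

    # Positives: every valid centre within the set size range of the point.
    for dy in range(-pos_range, pos_range + 1):
        for dx in range(-pos_range, pos_range + 1):
            cy, cx = fy + dy, fx + dx
            if lo <= cy <= hi and lo <= cx <= hi:
                pos.append(image[cy - half:cy + half + 1, cx - half:cx + half + 1])

    # Negatives: random centres outside the set size range (Chebyshev distance).
    while len(neg) < n_neg:
        cy, cx = rng.integers(lo, hi + 1, size=2)
        if max(abs(cy - fy), abs(cx - fx)) > pos_range:
            neg.append(image[cy - half:cy + half + 1, cx - half:cx + half + 1])
    return pos, neg

img = np.random.rand(32, 32)
pos, neg = sample_patches(img, feature_point=(16, 16), rng=0)
print(len(pos), len(neg))   # 25 positives, 4 negatives
```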
3. The method according to claim 1, characterized in that, before establishing the neural network model, extracting the image patch samples near feature points in the training images as a training set, training the neural network and obtaining the neural network model, the method further comprises:
preprocessing the training images.
4. The method according to claim 3, characterized in that preprocessing the training images comprises one or more of:
upsampling, downsampling, isotropic resampling, denoising and enhancement of the training images.
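Claim 4 names the preprocessing operations without fixing their algorithms. The sketch below shows one plausible realization, assuming nearest-neighbour resampling toward an isotropic grid and min-max scaling as the enhancement step; both choices are ours, not the patent's.

```python
import numpy as np

def resample_nn(image, factor):
    """Nearest-neighbour resampling by a per-axis factor: an illustrative
    stand-in for the claimed upsampling / downsampling / isotropic step."""
    h, w = image.shape
    ny, nx = int(round(h * factor[0])), int(round(w * factor[1]))
    yi = np.minimum((np.arange(ny) / factor[0]).astype(int), h - 1)
    xi = np.minimum((np.arange(nx) / factor[1]).astype(int), w - 1)
    return image[np.ix_(yi, xi)]

def normalize(image):
    """Min-max intensity scaling as a simple enhancement step."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

# A toy anisotropic scan: 1 mm row spacing, 2 mm column spacing,
# so doubling the number of columns yields an isotropic grid.
scan = np.random.default_rng(0).random((10, 10)) * 400.0
iso = normalize(resample_nn(scan, factor=(1.0, 2.0)))
print(iso.shape)   # (10, 20)
```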
5. The method according to claim 1, characterized in that, before inputting the image of interest into the neural network model and obtaining the feature point probability map, the method further comprises:
preprocessing the image of interest.
6. The method according to claim 1, characterized in that computing the feature point position based on the feature point probability map comprises:
clustering the position coordinates of a first feature region in the feature point probability map to obtain a second feature region, wherein the first feature region is the part of the feature point probability map whose probability values exceed a set threshold.
7. The method according to claim 6, characterized in that computing the feature point position based on the feature point probability map further comprises:
computing a weighted average of the position coordinates of the second feature region to obtain a feature location in the feature point probability map;
obtaining the feature point position in the image of interest based on the feature location.
8. A feature point detection device, characterized in that the device comprises:
a neural network module, configured to establish a neural network model by training a neural network with image patch samples extracted near feature points in training images as a training set, the neural network model being a fully convolutional neural network;
an input module, configured to input an image of interest into the neural network model to obtain a feature point probability map;
a computing module, configured to compute a feature point position based on the feature point probability map.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811580783.XA CN109712128B (en) | 2018-12-24 | 2018-12-24 | Feature point detection method, feature point detection device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109712128A true CN109712128A (en) | 2019-05-03 |
CN109712128B CN109712128B (en) | 2020-12-01 |
Family
ID=66257413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811580783.XA Active CN109712128B (en) | 2018-12-24 | 2018-12-24 | Feature point detection method, feature point detection device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109712128B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107204025A (en) * | 2017-04-18 | 2017-09-26 | 华北电力大学 | The adaptive clothing cartoon modeling method that view-based access control model is perceived |
CN107464250A (en) * | 2017-07-03 | 2017-12-12 | 深圳市第二人民医院 | Tumor of breast automatic division method based on three-dimensional MRI image |
CN107767419A (en) * | 2017-11-07 | 2018-03-06 | 广州深域信息科技有限公司 | A kind of skeleton critical point detection method and device |
CN108230390A (en) * | 2017-06-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Training method, critical point detection method, apparatus, storage medium and electronic equipment |
CN108388841A (en) * | 2018-01-30 | 2018-08-10 | 浙江大学 | Cervical biopsy area recognizing method and device based on multiple features deep neural network |
CN108564120A (en) * | 2018-04-04 | 2018-09-21 | 中山大学 | Feature Points Extraction based on deep neural network |
CN108765368A (en) * | 2018-04-20 | 2018-11-06 | 平安科技(深圳)有限公司 | MRI lesion locations detection method, device, computer equipment and storage medium |
2018-12-24: CN application CN201811580783.XA filed; granted as CN109712128B, status Active
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097011A (en) * | 2019-05-06 | 2019-08-06 | 北京邮电大学 | A kind of signal recognition method and device |
CN110675444A (en) * | 2019-09-26 | 2020-01-10 | 东软医疗系统股份有限公司 | Method and device for determining head CT scanning area and image processing equipment |
CN110675444B (en) * | 2019-09-26 | 2023-03-31 | 东软医疗系统股份有限公司 | Method and device for determining head CT scanning area and image processing equipment |
CN110807364A (en) * | 2019-09-27 | 2020-02-18 | 中国科学院计算技术研究所 | Modeling and capturing method and system for three-dimensional face and eyeball motion |
CN110807364B (en) * | 2019-09-27 | 2022-09-30 | 中国科学院计算技术研究所 | Modeling and capturing method and system for three-dimensional face and eyeball motion |
JP7321953B2 (en) | 2020-02-17 | 2023-08-07 | 株式会社神戸製鋼所 | Automatic welding system, welding method, learning device, method for generating learned model, learned model, estimation device, estimation method, and program |
WO2021166555A1 (en) * | 2020-02-17 | 2021-08-26 | 株式会社神戸製鋼所 | Automated welding system, automated welding method, learning device, method for generating learned model, learned model, estimation device, estimation method, and program |
JP2021126693A (en) * | 2020-02-17 | 2021-09-02 | 株式会社神戸製鋼所 | Automatic welding system, welding method, learning device, learned model generation method, learned model, estimation device, estimation method, and program |
CN115151367B (en) * | 2020-02-17 | 2024-01-12 | 株式会社神户制钢所 | Automatic welding system, automatic welding method, learning device, neural network system, and estimation device |
CN115151367A (en) * | 2020-02-17 | 2022-10-04 | 株式会社神户制钢所 | Automatic welding system, automatic welding method, learning device, learned model generation method, learned model, estimation device, estimation method, and program |
CN114018275A (en) * | 2020-07-15 | 2022-02-08 | 广州汽车集团股份有限公司 | Driving control method and system for vehicle at intersection and computer readable storage medium |
CN112365492A (en) * | 2020-11-27 | 2021-02-12 | 上海联影医疗科技股份有限公司 | Image scanning method, image scanning device, electronic equipment and storage medium |
CN113662660A (en) * | 2021-10-22 | 2021-11-19 | 杭州键嘉机器人有限公司 | Joint replacement preoperative planning method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109712128B (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109712128A (en) | Feature point detecting method, device, computer equipment and storage medium | |
US20220004744A1 (en) | Human posture detection method and apparatus, device and storage medium | |
CN111723860B (en) | Target detection method and device | |
Zhang et al. | Identification of maize leaf diseases using improved deep convolutional neural networks | |
CN111160269A (en) | Face key point detection method and device | |
CN110223323B (en) | Target tracking method based on depth feature adaptive correlation filtering | |
CN108647588A (en) | Goods categories recognition methods, device, computer equipment and storage medium | |
CN111860670A (en) | Domain adaptive model training method, image detection method, device, equipment and medium | |
CN110276745B (en) | Pathological image detection algorithm based on generation countermeasure network | |
CN110705425B (en) | Tongue picture multi-label classification method based on graph convolution network | |
CN109086711B (en) | Face feature analysis method and device, computer equipment and storage medium | |
CN109583325A (en) | Face samples pictures mask method, device, computer equipment and storage medium | |
CN112037263B (en) | Surgical tool tracking system based on convolutional neural network and long-term and short-term memory network | |
CN109684967A (en) | A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network | |
CN111275686B (en) | Method and device for generating medical image data for artificial neural network training | |
CN112150476A (en) | Coronary artery sequence vessel segmentation method based on space-time discriminant feature learning | |
CN111738344A (en) | Rapid target detection method based on multi-scale fusion | |
WO2021136368A1 (en) | Method and apparatus for automatically detecting pectoralis major region in molybdenum target image | |
CN111814611A (en) | Multi-scale face age estimation method and system embedded with high-order information | |
CN112307984B (en) | Safety helmet detection method and device based on neural network | |
CN113919442A (en) | Tobacco maturity state recognition model based on convolutional neural network | |
CN112861718A (en) | Lightweight feature fusion crowd counting method and system | |
EP3671635B1 (en) | Curvilinear object segmentation with noise priors | |
CN110135435B (en) | Saliency detection method and device based on breadth learning system | |
CN108154513A (en) | Cell based on two photon imaging data detects automatically and dividing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | Address after: No. 2258 Chengbei Road, Jiading District, Shanghai 201807; Patentee after: Shanghai Lianying Medical Technology Co., Ltd. Address before: No. 2258 Chengbei Road, Jiading District, Shanghai 201807; Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co., Ltd. |