CN113706514B - Focus positioning method, device, equipment and storage medium based on template image - Google Patents
Focus positioning method, device, equipment and storage medium based on template image
- Publication number
- CN113706514B CN113706514B CN202111015494.7A CN202111015494A CN113706514B CN 113706514 B CN113706514 B CN 113706514B CN 202111015494 A CN202111015494 A CN 202111015494A CN 113706514 B CN113706514 B CN 113706514B
- Authority
- CN
- China
- Prior art keywords
- image
- focus
- tissue
- feature map
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Molecular Biology (AREA)
- Evolutionary Biology (AREA)
- Biomedical Technology (AREA)
- Software Systems (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Public Health (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention relates to the fields of digital healthcare and artificial intelligence, and discloses a focus (lesion) positioning method, device, equipment and storage medium based on a template image. The method comprises the following steps: acquiring a tissue abnormality training image and performing focus area enhancement processing on it; extracting spatial image features of a plurality of sizes corresponding to each tissue abnormality image block, together with a first feature map of each block; updating the focus positioning training model until it converges, to obtain a focus positioning model; obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second feature map from the tissue abnormality image to be identified and a third feature map from the tissue structure template image; and performing similarity matching between the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining the positioning information of the matching result in the template image. The invention thereby realizes positioning and identification of the specific position of focus points.
Description
Technical Field
The invention relates to the fields of digital healthcare and artificial intelligence, and in particular to a focus positioning method, device, equipment and storage medium based on a template image.
Background
With the development of medical technology, locating the diseased position has become a key step in formulating a treatment plan for lung disease, and segmentation of the lung lobes and lung segments is the first step in locating that position. Traditionally, lung segments are delineated manually by a specialist physician, but because the lung structure is complex, traditional identification and segmentation face several problems: on the one hand, annotation of medical images can only be completed by scarce personnel with a medical background; on the other hand, annotating three-dimensional images such as lung CT takes an order of magnitude more time and effort than annotating conventional two-dimensional images, and all 18 lung segments of the complex lung structure must be annotated. With the development of computer technology and medical imaging, doctors can use related computer techniques to improve the accuracy and speed of lung segment segmentation. In recent years, with the development of deep learning, some medical imaging researchers have combined deep learning with medical image processing to perform lung segment identification and segmentation by computer, which can provide accurate anatomical structures for lung tumor diagnosis and lung segment resection surgery. This is of great significance for lung cancer diagnosis and surgical treatment, and can greatly promote the development of intelligent medical care.
Existing lung structure segmentation methods can be divided into traditional image-processing algorithms and deep learning algorithms. Traditional image-processing algorithms require little data, but their generalization ability and segmentation accuracy are very limited. Deep learning algorithms use convolutional neural networks, recurrent neural networks and similar models, and can achieve high segmentation accuracy and good generalization when a large amount of training data is available, but the large amount of annotated data they require is difficult to obtain. In short, a method for locating and identifying the specific position of a focus point is currently lacking.
Disclosure of Invention
The invention mainly aims to solve the problem that digital healthcare currently lacks a reliable and efficient method for locating, segmenting and identifying focus points. The first aspect of the present invention provides a focus positioning method based on a template image, comprising: obtaining a tissue abnormality training image, and performing focus area enhancement processing on the tissue abnormality training image through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of focus pixel points and non-focus pixel points; extracting spatial image features of a plurality of sizes corresponding to each tissue abnormality image block through a combined convolutional neural network in the focus positioning training model, and extracting a first feature map of the tissue abnormality image block based on the spatial image features of the plurality of sizes; updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model converges, to obtain a focus positioning model; obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting, through the focus positioning model, a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image, wherein the tissue abnormality image to be identified comprises the position information of the focus to be identified; and performing similarity matching between the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining positioning information of the position information in the tissue structure template image according to the similarity matching result.
Optionally, in a first implementation manner of the first aspect of the present invention, the performing focus area enhancement processing on the tissue abnormality training image through the input layer in the preset focus positioning training model to obtain a plurality of tissue abnormality image blocks includes: performing pixel normalization processing on the tissue abnormality training image according to a preset image window size through the input layer in the preset focus positioning training model to obtain a normalized tissue abnormality training image; performing image conversion on the normalized tissue abnormality training image with a preset data enhancement method through the input layer in the focus positioning training model to obtain a plurality of initial tissue abnormality training image blocks; and upsampling the focus pixel points labeled in the initial tissue abnormality training image blocks through the input layer in the focus positioning training model, and upsampling the non-focus pixel points labeled in a preset area corresponding to the focus pixel points, to obtain a plurality of focus-area-enhanced tissue abnormality pixel blocks.
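The window normalization and focus-area oversampling steps above can be sketched as follows. The window centre/width values (typical lung-CT settings) and the 4:1 oversampling ratio are illustrative assumptions, not values from the patent, and for simplicity the non-focus samples are drawn from all background voxels rather than from a specific preset neighbourhood of the focus:

```python
import numpy as np

def window_normalize(volume, center=-600.0, width=1500.0):
    """Clip CT intensities to a preset window and rescale to [0, 1]
    (the pixel normalization step of the input layer)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    vol = np.clip(volume, lo, hi)
    return (vol - lo) / (hi - lo)

def oversample_focus_voxels(labels, ratio=4, seed=0):
    """Repeat each labeled focus voxel `ratio` times and draw an equal
    number of labeled non-focus voxels, enhancing the focus area."""
    rng = np.random.default_rng(seed)
    focus = np.argwhere(labels == 1)
    focus_up = np.repeat(focus, ratio, axis=0)
    background = np.argwhere(labels == 0)
    picked = background[rng.choice(len(background), size=len(focus_up))]
    return focus_up, picked
```

In practice the normalized volume would then be cut into patches by the data-enhancement step before sampling.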
Optionally, in a second implementation manner of the first aspect of the present invention, the combined convolutional neural network in the focus positioning training model includes a three-dimensional convolutional neural network and a feature enhancement convolutional neural network, and extracting, through the combined convolutional neural network in the focus positioning training model, the spatial image features of a plurality of sizes corresponding to each tissue abnormality image block includes: performing three-dimensional convolution processing on the tissue abnormality image block through the three-dimensional convolutional neural network to obtain initial spatial image features; performing a preset number of downsampling passes on the initial spatial image features through the feature enhancement convolutional neural network to obtain shrinkage spatial image features of a plurality of sizes, and arranging the shrinkage spatial image features from large to small; performing dimension-lifting processing on the shrinkage spatial image feature at the preset arrangement position through the feature enhancement convolutional neural network to obtain a first spatial image feature; and performing dimension-lifting processing on the spatial image feature at the next arrangement position through the feature enhancement convolutional neural network to obtain a second spatial image feature, combining the first spatial image feature and the second spatial image feature to obtain a first spatial image feature of a new size, stopping when the first spatial image feature of the new size meets a preset exit condition, and taking the first spatial image features of all sizes so obtained as the spatial image features of the plurality of sizes corresponding to the tissue abnormality image block.
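A minimal sketch of the contract-then-expand feature pyramid described above, with average pooling standing in for the learned downsampling convolutions and nearest-neighbour upsampling plus addition standing in for the dimension-lifting and combination steps (all simplifications; the patent's network uses learned 3D convolutions on multi-channel features):

```python
import numpy as np

def downsample(x):
    """2x average pooling along each spatial axis (stand-in for a strided conv)."""
    d, h, w = (s // 2 for s in x.shape)
    return x[:2*d, :2*h, :2*w].reshape(d, 2, h, 2, w, 2).mean(axis=(1, 3, 5))

def upsample(x):
    """Nearest-neighbour 2x upsampling (stand-in for dimension lifting)."""
    return x.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

def multi_scale_features(patch, levels=3):
    """Contract: repeatedly downsample the initial features. Expand: upsample
    the smallest map and combine it with the next-larger one, collecting
    spatial image features of every size along the way."""
    pyramid = [patch]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    merged = pyramid[-1]
    features = [merged]
    for larger in reversed(pyramid[:-1]):
        merged = upsample(merged) + larger   # the "combine" step from the text
        features.append(merged)
    return features
```

The exit condition here is simply exhausting the pyramid; the patent leaves the exact condition preset.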
Optionally, in a third implementation manner of the first aspect of the present invention, the extracting the first feature map of the tissue abnormality image block based on the spatial image features of the plurality of sizes includes: selecting the first spatial image feature with the smallest size and the second spatial image feature with the largest size from the spatial image features of the plurality of sizes; and performing convolution processing on the first spatial image feature with a preset convolution kernel to obtain a first local feature map, performing convolution processing on the second spatial image feature with the convolution kernel to obtain a first global feature map, and taking the first local feature map and the first global feature map as the first feature map of the tissue abnormality image block.
Optionally, in a fourth implementation manner of the first aspect of the present invention, updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model converges, and obtaining the focus positioning model includes: determining a first feature vector of the focus pixel point and the non-focus pixel point on the first feature map according to the labeling information, and calculating a loss value of the focus positioning training model based on the first feature vector; judging whether the loss value is larger than a preset loss threshold value or not; if the loss value is larger than the loss threshold value, updating the focus positioning training model by adopting a preset optimization algorithm until the loss value is smaller than the loss threshold value, and obtaining the focus positioning model.
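The threshold-based convergence loop of this implementation can be sketched generically; `compute_loss` and `optimize_step` are hypothetical placeholders for the patent's loss (computed over the first feature vectors of focus and non-focus pixel points) and its preset optimization algorithm:

```python
def train_until_converged(params, compute_loss, optimize_step,
                          loss_threshold=0.05, max_iters=1000):
    """Repeatedly update the model while the loss value is larger than the
    preset loss threshold; stop (converged) once it is not."""
    loss = compute_loss(params)
    for step in range(max_iters):
        if loss <= loss_threshold:
            return params, loss, step
        params = optimize_step(params, loss)
        loss = compute_loss(params)
    return params, loss, max_iters
```

The `max_iters` guard is an engineering safeguard not mentioned in the patent.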
Optionally, in a fifth implementation manner of the first aspect of the present invention, the respectively extracting, through the focus positioning model, the second feature map of the tissue abnormality image to be identified and the third feature map of the tissue structure template image includes: extracting focus features of the focus area in the tissue abnormality image to be identified by using the focus positioning model to obtain the second feature map; and extracting all tissue features in the tissue structure template image by using the focus positioning model to obtain the third feature map.
Optionally, in a sixth implementation manner of the first aspect of the present invention, the second feature map includes a second local feature map and a second global feature map, the third feature map includes a third local feature map and a third global feature map, and performing similarity matching between the second feature map and the third feature map based on the position information to obtain a similarity matching result includes: performing tissue structure matching between the second global feature map and the third global feature map, and determining the tissue structure of each part of the second global feature map according to the tissue structure matching result; determining, according to the position information, the second local feature map and third local feature map corresponding to the tissue structure where the focus to be identified is located, and determining a second feature vector of the pixel point corresponding to the focus to be identified on the second local feature map; and calculating, from the second feature vector and the third feature vector of each pixel point in the third local feature map, the similarity between the pixel point corresponding to the focus to be identified on the second local feature map and each pixel point in the third local feature map, and taking the pixel point with the highest similarity in the third local feature map as the similarity matching result.
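The per-pixel similarity matching step might look like the following, assuming cosine similarity as the measure (the patent does not fix a specific similarity function) and a template local feature map laid out as height × width × channels:

```python
import numpy as np

def match_focus(focus_vec, template_feats):
    """Cosine similarity between the focus pixel's feature vector and the
    feature vector of every pixel in the template's local feature map; the
    highest-scoring template pixel is the similarity matching result."""
    flat = template_feats.reshape(-1, template_feats.shape[-1])
    sims = flat @ focus_vec / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(focus_vec) + 1e-8)
    idx = int(np.argmax(sims))
    pos = tuple(int(c) for c in np.unravel_index(idx, template_feats.shape[:-1]))
    return pos, float(sims[idx])
```

The returned `pos` is the positioning information of the focus within the template's coordinate frame.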
The second aspect of the present invention provides a focus positioning apparatus based on a template image, comprising: an image enhancement module for obtaining a tissue abnormality training image, and performing focus area enhancement processing on the tissue abnormality training image through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of focus pixel points and non-focus pixel points; a feature extraction module for extracting spatial image features of a plurality of sizes corresponding to each tissue abnormality image block through a combined convolutional neural network in the focus positioning training model, and extracting a first feature map of the tissue abnormality image block based on the spatial image features of the plurality of sizes; a model training module for updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model converges, to obtain a focus positioning model; a position acquisition module for obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting, through the focus positioning model, a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image, wherein the tissue abnormality image to be identified comprises the position information of the focus to be identified; and a matching identification module for performing similarity matching between the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining the positioning information of the position information in the tissue structure template image according to the similarity matching result.
Optionally, in a first implementation manner of the second aspect of the present invention, the image enhancement module includes: a normalization processing unit for performing pixel normalization processing on the tissue abnormality training image according to a preset image window size through the input layer in the preset focus positioning training model to obtain a normalized tissue abnormality training image; an image conversion unit for performing image conversion on the normalized tissue abnormality training image with a preset data enhancement method through the input layer in the focus positioning training model to obtain a plurality of initial tissue abnormality training image blocks; and a pixel sampling unit for upsampling the focus pixel points labeled in the initial tissue abnormality training image blocks through the input layer in the focus positioning training model, and upsampling the non-focus pixel points labeled in a preset area corresponding to the focus pixel points, to obtain a plurality of focus-area-enhanced tissue abnormality pixel blocks.
Optionally, in a second implementation manner of the second aspect of the present invention, the feature extraction module includes: a three-dimensional convolution unit for performing three-dimensional convolution processing on the tissue abnormality image block through the three-dimensional convolutional neural network to obtain initial spatial image features; a feature sampling unit for performing a preset number of downsampling passes on the initial spatial image features through the feature enhancement convolutional neural network to obtain shrinkage spatial image features of a plurality of sizes, and arranging the shrinkage spatial image features from large to small in size; a dimension-lifting processing unit for performing dimension-lifting processing on the shrinkage spatial image feature at the preset arrangement position through the feature enhancement convolutional neural network to obtain a first spatial image feature; and a feature enhancement unit for performing dimension-lifting processing on the spatial image feature at the next arrangement position through the feature enhancement convolutional neural network to obtain a second spatial image feature, combining the first spatial image feature and the second spatial image feature to obtain a first spatial image feature of a new size, stopping when the first spatial image feature of the new size meets a preset exit condition, and taking the first spatial image features of all sizes so obtained as the spatial image features of the plurality of sizes corresponding to the tissue abnormality image block.
Optionally, in a third implementation manner of the second aspect of the present invention, the feature extraction module further includes: a size selection unit for selecting the first spatial image feature with the smallest size and the second spatial image feature with the largest size from the spatial image features of the plurality of sizes; and a feature convolution unit for performing convolution processing on the first spatial image feature with a preset convolution kernel to obtain a first local feature map, performing convolution processing on the second spatial image feature with the convolution kernel to obtain a first global feature map, and taking the first local feature map and the first global feature map as the first feature map of the tissue abnormality image block.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the model training module further includes: the vector calculation unit is used for determining a first feature vector of the focus pixel point and the non-focus pixel point on the first feature map according to the labeling information, and calculating a loss value of the focus positioning training model based on the first feature vector; the condition judging unit is used for judging whether the loss value is larger than a preset loss threshold value or not; and the model updating unit is used for updating the focus positioning training model by adopting a preset optimization algorithm according to the loss value if the loss value is larger than the loss threshold value, and stopping updating until the loss value is smaller than the loss threshold value to obtain the focus positioning model.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the position acquisition module is further used for: extracting focus features of the focus area in the tissue abnormality image to be identified by using the focus positioning model to obtain the second feature map; and extracting all tissue features in the tissue structure template image by using the focus positioning model to obtain the third feature map.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the matching identification module further includes: a structure matching unit for performing tissue structure matching between the second global feature map and the third global feature map, and determining the tissue structure of each part of the second global feature map according to the tissue structure matching result; a feature recognition unit for determining, according to the position information, the second local feature map and third local feature map corresponding to the tissue structure where the focus to be identified is located, and determining a second feature vector of the pixel point corresponding to the focus to be identified on the second local feature map; and a similarity matching unit for calculating, from the second feature vector and the third feature vector of each pixel point in the third local feature map, the similarity between the pixel point corresponding to the focus to be identified on the second local feature map and each pixel point in the third local feature map, and taking the pixel point with the highest similarity in the third local feature map as the similarity matching result.
A third aspect of the present application provides an apparatus for lesion localization based on template images, comprising: a memory and at least one processor, the memory storing instructions; the at least one processor invokes the instructions in the memory to cause the template image based lesion localization apparatus to perform the template image based lesion localization method described above.
A fourth aspect of the present application provides a computer-readable storage medium having instructions stored therein that, when executed on a computer, cause the computer to perform the above-described template image-based lesion localization method.
According to the technical solution provided by the application, a tissue abnormality training image is obtained, and focus area enhancement processing is performed on it through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks; spatial image features of a plurality of sizes corresponding to each tissue abnormality image block are extracted through a combined convolutional neural network in the focus positioning training model, and a first feature map of the tissue abnormality image block is extracted based on those spatial image features; and the focus positioning training model is updated based on the labeling information and the first feature map until it converges, to obtain the focus positioning model. Compared with the prior art, the image and focus points are sampled from the tissue abnormality image, the focus pixel point information is convolved and its features extracted, and the resulting feature information is used to train the focus positioning model. The model can exploit the natural consistency of human tissue structure to rapidly locate and identify focus points, so that focus points are located and identified efficiently.
A tissue structure template image and a tissue abnormality image to be identified are obtained, and a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image are respectively extracted through the focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of the focus to be identified; similarity matching is then performed between the second feature map and the third feature map based on the position information to obtain a similarity matching result, and the positioning information of the position information in the tissue structure template image is determined according to the matching result. Compared with the prior art, the trained focus positioning model performs feature recognition on the abnormality image and the template image, and the focus points in the tissue abnormality image are then located through feature recognition and matching, without requiring professionals to annotate a large amount of tissue feature information, thereby realizing positioning and identification of the specific position of focus points.
Drawings
FIG. 1 is a schematic diagram of a first embodiment of the template image-based focus positioning method according to the present invention;
FIG. 2 is a schematic diagram of a second embodiment of the template image-based focus positioning method according to the present invention;
FIG. 3 is a schematic diagram of a third embodiment of the template image-based focus positioning method according to the present invention;
FIG. 4 is a schematic diagram of a fourth embodiment of the template image-based focus positioning method according to the present invention;
FIG. 5 is a schematic diagram of a fifth embodiment of the template image-based focus positioning method according to the present invention;
FIG. 6 is a schematic diagram of one embodiment of the template image-based focus positioning apparatus according to the present invention;
FIG. 7 is a schematic diagram of another embodiment of the template image-based focus positioning apparatus according to the present invention;
FIG. 8 is a schematic diagram of an embodiment of the template image-based focus positioning device according to the present invention.
Detailed Description
The embodiment of the invention provides a focus positioning method, device, equipment and storage medium based on a template image, applied to digital healthcare. A tissue abnormality training image is acquired and focus area enhancement processing is performed; spatial image features of a plurality of sizes corresponding to each tissue abnormality image block, together with a first feature map thereof, are extracted; the focus positioning training model is updated until it converges, obtaining a focus positioning model; a tissue structure template image and a tissue abnormality image to be identified are obtained, and a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image are respectively extracted; and similarity matching is performed between the second feature map and the third feature map based on the position information to obtain a similarity matching result, whose positioning information in the template image is then determined. The invention realizes positioning and identification of the specific position of focus points.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and a first embodiment of a lesion positioning method based on a template image in the embodiment of the present invention includes:
101. obtaining a tissue abnormality training image, and carrying out focus area enhancement processing on the tissue abnormality training image through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of focus pixel points and non-focus pixel points;
It will be appreciated that the execution subject of the present application may be a device for locating lesions based on template images, or a terminal or a server; this is not limited herein. The embodiments of the application are described taking a server as the execution subject by way of example.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
In this embodiment, enhancement processing means improving the quantity and quality of existing image data in order to train a better network when the image training set is scarce. The main data enhancement methods include: geometric transformations, color space enhancement, kernel filters, image mixing, random erasing, feature space enhancement, adversarial training, generative adversarial networks, neural style transfer, and meta-learning. The system first acquires a first tissue abnormality training image, and then performs image enhancement processing on the focus area of the acquired image through the input layer of the preset focus positioning training model, obtaining a plurality of tissue abnormality image blocks that include labeling information of focus pixel points and non-focus pixel points.
In practical application, the system first acquires the tissue abnormality training image required for model training and inputs it into the input layer of the preset focus positioning training model. A plurality of image blocks are generated from the same CT image in the training image using preset data enhancement operations (such as cropping, translation, rotation, flipping, Gaussian filtering, Gaussian noise, mean filtering, and brightness and contrast adjustment). The pixel points of the relevant areas of the resulting image blocks are then sampled according to the pixel sampling requirements, yielding a plurality of tissue abnormality image blocks that include labeling information of focus pixel points and non-focus pixel points.
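The patch-generation step above can be sketched as follows; this is a minimal NumPy illustration in which flips, rotations, and Gaussian noise stand in for the fuller set of enhancement operations named in the text (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def augment_ct_slice(image, n_noisy=2, seed=0):
    """Generate several augmented image blocks from one CT slice."""
    rng = np.random.default_rng(seed)
    blocks = [
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, k=1),    # 90-degree rotation
    ]
    for _ in range(n_noisy):     # additive-Gaussian-noise variants
        blocks.append(image + rng.normal(0.0, 0.05, image.shape))
    return blocks

slice_ = np.arange(16.0).reshape(4, 4)
patches = augment_ct_slice(slice_)   # 5 blocks from one slice
```

Each augmented block inherits the lesion/non-lesion labeling of the source slice (after applying the same geometric transform to the label mask), which is what makes the later pixel sampling possible.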
102. Extracting spatial image features of each tissue abnormal image block corresponding to a plurality of sizes through a combined convolutional neural network in a focus positioning training model, and extracting a first feature map of the tissue abnormal image block based on the spatial image features of the plurality of sizes;
in this embodiment, the combined convolutional neural network is obtained by combining a 3D ResNet (3D residual network) with an FPN (feature pyramid network). The 3D network part extracts 3D image features from the image blocks by convolution and can effectively exploit spatial information; the FPN part is a feature extractor designed according to the feature pyramid concept, extracting features at multiple scales and thereby enabling both global and local feature extraction. The system extracts spatial image features of a plurality of sizes corresponding to each tissue abnormality image block through the combined convolutional neural network in the focus positioning training model, and then extracts the relevant feature map based on the obtained multi-size spatial image features to obtain the first feature map of the tissue abnormality image block.
In practical application, for the plurality of tissue abnormality image blocks obtained in step 101, the system first uses the 3D residual network in the combined convolutional neural network to extract 3D image features from the spatial information of the blocks, and then uses the feature pyramid network to extract multi-scale feature information from the blocks and their 3D image features, obtaining the first feature map of each tissue abnormality image block.
103. Updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model is converged, so as to obtain a focus positioning model;
in this embodiment, the obtained labeling information and the first feature map are input into the focus positioning training model to update it, and the loss function in the model is optimized until the model meets the preset convergence condition, yielding a focus positioning model that satisfies the requirements.
In practical application, the system inputs the labeling information and the first feature maps of the plurality of tissue abnormality image blocks into the focus positioning training model to update the relevant parameters of the model until they meet the requirements. Specifically, the feature vectors of the relevant pixel points are substituted into the loss function for optimization, the result is differentiated, and the gradients thus obtained are used to optimize the model parameters with a preset Adam optimization algorithm. The system then performs data enhancement on further tissue abnormality images, trains the model on the plurality of tissue abnormality image blocks obtained after sampling the pixel points of the corresponding regions, and extracts the relevant feature maps, completing the model update when the preset convergence condition is met and obtaining a focus positioning model that satisfies the requirements.
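The Adam update and the loop-until-convergence logic described above can be illustrated on a toy one-parameter "model"; this is a generic sketch of the Adam algorithm applied to a stand-in quadratic loss, not the patent's actual training code:

```python
import numpy as np

def adam_step(param, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update; state carries the running moments (m, v) and step t.
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)        # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)        # bias-corrected second moment
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# Toy stand-in for the training loop: the "loss" is (w - 3)^2, its
# gradient is 2(w - 3), and training stops when updates become tiny.
w, state = 0.0, (0.0, 0.0, 0)
for _ in range(2000):
    grad = 2.0 * (w - 3.0)
    w_new, state = adam_step(w, grad, state)
    if abs(w_new - w) < 1e-7:        # preset convergence condition
        w = w_new
        break
    w = w_new
# w has converged close to the minimum at 3
```

In the real model the parameter is a tensor per layer rather than a scalar, but the moment bookkeeping per parameter is identical.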
104. Obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second characteristic image of the tissue abnormality image to be identified and a third characteristic image of the tissue structure template image through a focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of a focus to be identified;
in this embodiment, the system acquires a tissue structure template image and a tissue abnormality image to be identified, and uses the updated focus positioning model to extract the relevant features of the tissue abnormality image to be identified, obtaining a second feature map, and of the tissue structure template image, obtaining a third feature map. The tissue abnormality image to be identified is a tissue abnormality image containing the position information of the focus to be identified.
In practical application, the system acquires a pre-prepared tissue structure template image and a tissue abnormality image to be identified, and uses the focus positioning model trained and updated in step 103 to extract the image feature vectors: extracting those of the tissue structure template image yields the third feature map, and extracting those of the tissue abnormality image containing the focus position information to be identified yields the second feature map.
105. And carrying out similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining positioning information of the position information in the tissue structure template image according to the similarity matching result.
In this embodiment, similarity matching means that feature vectors are identified and extracted from an image through the focus positioning model to obtain a feature map, and matching is then performed between the feature vectors of the two feature maps to obtain a feature similarity matching result. Based on the position information, the system performs similarity matching of the image feature vectors between the second feature map and the third feature map to obtain the matching result of the two images, and determines the positioning information of the position information in the tissue structure template image from that result.
In practical application, based on the position information of the focus to be identified, the system matches the feature vectors of the focus position in the second feature map obtained in step 104 against the feature vectors of the third feature map of the template image, obtaining the similarity matching result of the two images for that position; it then analyzes the focus position using the tissue structure template image according to the matching result, determining the positioning information of the position information in the tissue structure template image.
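A minimal sketch of the similarity-matching step, assuming cosine similarity between feature vectors (the excerpt does not fix the similarity measure, so that choice, like the function name, is an assumption):

```python
import numpy as np

def locate_in_template(query_vec, template_fmap):
    """Match a lesion feature vector against every position of a
    template feature map by cosine similarity; return the best
    (row, col) and its score as the localisation result."""
    H, W, C = template_fmap.shape
    flat = template_fmap.reshape(-1, C)
    q = query_vec / np.linalg.norm(query_vec)
    t = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sims = t @ q                      # cosine similarity per position
    idx = int(np.argmax(sims))
    return divmod(idx, W), float(sims[idx])

rng = np.random.default_rng(1)
template = rng.normal(size=(5, 6, 8))     # toy 5x6 map, 8-dim features
query = rng.normal(size=8)
template[2, 3] = 2.0 * query              # plant a scaled copy of the query
pos, score = locate_in_template(query, template)
```

Because cosine similarity is scale-invariant, the scaled copy at (2, 3) matches with score 1, illustrating how the lesion's feature vector is located on the template.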
In the embodiment of the application, a first tissue abnormality training image is obtained, and focus area enhancement processing is performed on it through the input layer of the preset focus positioning training model to obtain a plurality of tissue abnormality image blocks; spatial image features of a plurality of sizes corresponding to each tissue abnormality image block are extracted through the combined convolutional neural network in the focus positioning training model, and the first feature map of each block is extracted based on those features; and the focus positioning training model is updated based on the labeling information and the first feature map until it converges, obtaining the focus positioning model. Compared with the prior art, the image and focus points are sampled from the tissue abnormality image, the focus pixel point information is convolved and its features extracted, and the resulting feature information is used to train a focus positioning model; the model can exploit the natural consistency of human tissue structure to position and identify focus points quickly, so that focus points are positioned and identified efficiently.
A tissue structure template image and a tissue abnormality image to be identified are obtained, and the second feature map of the tissue abnormality image to be identified and the third feature map of the tissue structure template image are extracted respectively through the focus positioning model, the tissue abnormality image to be identified comprising the position information of the focus to be identified; similarity matching is performed on the second and third feature maps based on the position information to obtain a similarity matching result, and the positioning information of the position information in the tissue structure template image is determined from that result. Compared with the prior art, the trained focus positioning model performs feature recognition on the abnormal image and the template image, and the focus points in the tissue abnormality image are positioned through feature recognition and matching, without requiring professionals to label large amounts of tissue feature information, thereby realizing positioning and identification of the specific position of the focus points.
Referring to fig. 2, a second embodiment of a lesion localization method based on a template image according to an embodiment of the present application includes:
201. carrying out pixel normalization processing on the first tissue abnormal training image according to the size of a preset image window through an input layer in a preset focus positioning training model to obtain a normalized tissue abnormal training image;
In the present embodiment, the normalization processing takes two forms: one maps values to fractions in (0, 1), and the other converts dimensional expressions into dimensionless expressions. To make the image data easier to handle, the data are mapped into the range 0-1 for processing, which is more convenient and faster and belongs to the field of digital signal processing. Through the input layer of the preset focus positioning training model, the system performs pixel normalization on the first tissue abnormality training image according to the preset image window size, normalizing the pixel values of the image to obtain a normalized tissue abnormality training image.
The system inputs the obtained first tissue abnormality training image into the preset focus positioning training model, performs pixel normalization on it according to the preset image window size condition, and maps the pixel data of the image into the range 0-1, thereby obtaining a normalized tissue abnormality training image.
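The window-based mapping of pixel data into the 0-1 range can be sketched as follows; the lung-window center and width defaults are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def normalize_window(hu, center=-600.0, width=1500.0):
    """Map raw CT intensity values to [0, 1] using a preset image
    window: values below the window floor clamp to 0, values above
    the window ceiling clamp to 1, and the window interior is scaled
    linearly."""
    lo, hi = center - width / 2, center + width / 2
    x = (np.asarray(hu, dtype=float) - lo) / (hi - lo)
    return np.clip(x, 0.0, 1.0)

vals = normalize_window(np.array([-2000.0, -600.0, 500.0]))
```

With the assumed window (floor -1350, ceiling 150), -2000 clamps to 0, the window center -600 maps to 0.5, and 500 clamps to 1.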
202. Through an input layer in the focus positioning training model, performing image conversion on the normalized tissue abnormal training image by adopting a preset data enhancement method to obtain a plurality of initial tissue abnormal training image blocks;
in this embodiment, image conversion means performing data enhancement processing on an image: the image is transformed and enhanced using preset data enhancement methods, yielding a plurality of images. Through the input layer of the preset focus positioning training model, the system converts the normalized tissue abnormality training image with a preset data enhancement method to obtain a plurality of initial tissue abnormality training image blocks.
In practical application, the system applies a preset data enhancement method to the normalized tissue abnormality training image in the input layer of the preset focus positioning training model, realizing the image conversion. For example, image rotation with a set rotation angle yields several rotated images after multiple rotations, and combining this with other data enhancement methods produces a plurality of initial tissue abnormality training image blocks.
203. Up-sampling focus pixel points marked in an initial tissue abnormal training image block through an input layer in a focus positioning training model, and up-sampling non-focus pixel points marked in a preset area corresponding to the focus pixel points to obtain a plurality of focus area enhanced tissue abnormal pixel blocks;
in the embodiment, through the input layer of the preset focus positioning training model, the system up-samples the focus pixel points labeled in the initial tissue abnormality training image blocks to obtain positive sample pixel points, and up-samples the non-focus pixel points labeled in the preset area corresponding to the focus pixel points to obtain negative sample pixel points; the positive and negative pixel points are then organized to obtain a plurality of focus-area-enhanced tissue abnormality pixel blocks.
in practical application, for the plurality of initial tissue abnormality training image blocks in the input layer of the preset focus positioning training model, the system up-samples the labeled focus pixel points to obtain positive pixel sampling points, and up-samples the non-focus pixel points labeled in the preset area corresponding to the focus pixel points to obtain negative pixel sampling points. For example, n_pos pairs of positive sample pixel points (the sampling number can be set to 100) are up-sampled in the overlapping area of the initial tissue abnormality training image blocks, and for each pair of positive sample pixel points, n_neg negative sample pixel points (the sampling number can be set to 500) are sampled in a region far from that pixel point (which can be set to more than 3 mm away). The positive and negative pixel points thus obtained are then arranged and grouped to obtain a plurality of focus-area-enhanced tissue abnormality pixel blocks.
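The positive/negative sampling scheme (about 100 positive lesion pixels, with 500 negatives farther than 3 mm away) can be sketched as below. This simplified 2D version samples positives directly from a lesion mask rather than from the overlapping area of two augmented blocks, and the isotropic 1 mm pixel spacing is an assumption:

```python
import numpy as np

def sample_pixel_pairs(lesion_mask, n_pos=100, n_neg=500,
                       min_dist_mm=3.0, spacing_mm=1.0, seed=0):
    """Sample n_pos positive (lesion) pixels and n_neg negative pixels
    lying farther than min_dist_mm from every lesion pixel."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(lesion_mask)
    pick = rng.choice(len(ys), size=n_pos, replace=True)
    pos = np.stack([ys[pick], xs[pick]], axis=1)
    # distance from each pixel to its nearest lesion pixel
    H, W = lesion_mask.shape
    yy, xx = np.mgrid[0:H, 0:W]
    d2 = np.full((H, W), np.inf)
    for y, x in zip(ys, xs):
        d2 = np.minimum(d2, (yy - y) ** 2 + (xx - x) ** 2)
    far = np.argwhere(np.sqrt(d2) * spacing_mm > min_dist_mm)
    pick = rng.choice(len(far), size=n_neg, replace=True)
    return pos, far[pick]

mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 8:12] = True                       # toy lesion region
pos_pts, neg_pts = sample_pixel_pairs(mask, n_pos=5, n_neg=10)
```

In a production pipeline the brute-force distance loop would be replaced by a distance transform, but the sampling logic is the same.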
204. Extracting spatial image features of each tissue abnormal image block corresponding to a plurality of sizes through a combined convolutional neural network in a focus positioning training model, and extracting a first feature map of the tissue abnormal image block based on the spatial image features of the plurality of sizes;
205. updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model is converged, so as to obtain a focus positioning model;
206. obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second characteristic image of the tissue abnormality image to be identified and a third characteristic image of the tissue structure template image through a focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of a focus to be identified;
207. And carrying out similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining positioning information of the position information in the tissue structure template image according to the similarity matching result.
According to the embodiment of the invention, the obtained first tissue abnormality training image is normalized according to the preset image window size condition, and data enhancement is then applied to the normalized image to obtain a plurality of tissue abnormality training images; positive and negative pixel points are sampled from these images according to a preset pixel point sampling method to obtain a plurality of focus-area-enhanced tissue abnormality pixel blocks. Compared with the prior art, normalization and data enhancement bring the data into the processing range set by the system, accelerating subsequent data processing, while the pixel point sampling of the processed images yields more feature-processing pixel blocks related to the focus points, facilitating the extraction and recognition of the corresponding focus features.
Referring to fig. 3, a third embodiment of a lesion localization method based on a template image according to an embodiment of the present invention includes:
301. obtaining a tissue abnormality training image, and carrying out focus area enhancement processing on the tissue abnormality training image through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of focus pixel points and non-focus pixel points;
302. carrying out three-dimensional convolution processing on the tissue abnormal image blocks through a three-dimensional convolution neural network to obtain initial spatial image features;
in this embodiment, the three-dimensional convolutional neural network refers to a 3D Resnet (3D residual network), and three-dimensional image features of an image can be acquired by using spatial information through convolution operation. And the system carries out three-dimensional convolution processing on the tissue abnormal image block obtained by the image processing in the step 301 through a three-dimensional convolution neural network to obtain initial spatial image characteristics.
In practical application, the system uses the three-dimensional convolutional neural network to identify and extract three-dimensional features from the spatial information of the processed tissue abnormality image blocks, obtaining the initial spatial image features.
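The basic 3D convolution operation that lets the 3D network part exploit inter-slice spatial information can be sketched directly; this is the plain operation only, not the patent's 3D ResNet, and the kernel values are illustrative:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Valid (unpadded) 3D convolution over a CT volume block:
    each output voxel aggregates a local 3D neighbourhood, so
    information from adjacent slices contributes to the feature."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

out = conv3d_valid(np.ones((4, 4, 4)), np.ones((2, 2, 2)))
```

On the all-ones toy volume, every output voxel sums a 2x2x2 neighbourhood, giving 8.0 everywhere and an output of shape (3, 3, 3).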
303. Down-sampling the initial spatial image features a preset number of times through a feature enhancement convolutional neural network to obtain contracted spatial image features of a plurality of sizes, and arranging the contracted spatial image features from large to small by size;
in this embodiment, the feature enhanced convolutional neural network refers to FPN (feature pyramid network), and the feature extractor is designed according to the concept of feature pyramid, so as to improve the accuracy and speed of operation, and generate a higher quality image feature map pyramid. The system performs downsampling processing on the initial spatial image features for preset times through the feature enhancement convolutional neural network to obtain shrinkage spatial image features with multiple sizes, and arranges the shrinkage spatial image features from large to small according to the sizes.
In practical application, the system down-samples the processed initial spatial image features a preset number of times using the feature enhancement convolutional neural network: following the feature pyramid approach, the features are repeatedly down-sampled until the preset number of times is reached, yielding contracted spatial image features of a plurality of sizes, which are arranged from large to small by size.
304. Performing dimension lifting processing on the contracted space image features at preset arrangement positions through a feature enhancement convolutional neural network to obtain first space image features;
in this embodiment, the dimension-lifting (up-scaling) processing means doubling the length and width of the feature map, starting from the smallest resolution, and likewise doubling the length and width of the other feature maps at the preset arrangement positions, so as to obtain the desired image. The system performs dimension-lifting processing on the contracted spatial image features at the preset arrangement position through the feature enhancement convolutional neural network to obtain the first spatial image features.
In practical application, the system performs dimension-lifting processing on the spatial image features at the preset arrangement position through the feature enhancement convolutional neural network, adjusting the number of channels of the processed image accordingly and doubling its length and width, which raises the resolution and enriches the features of the image, thereby obtaining the first spatial image features.
305. Performing dimension lifting processing on the spatial image features of the next arrangement position through a feature enhancement convolutional neural network to obtain second spatial image features, combining the first spatial image features and the second spatial image features to obtain new-size first spatial image features, stopping until the new-size first spatial image features meet preset exit conditions, and taking the first spatial image features and the new-size first spatial image features as spatial image features corresponding to a plurality of sizes of organization abnormal image blocks;
In this embodiment, the first spatial image features are the spatial image features of the current pyramid level. The system performs dimension-lifting processing on the spatial image feature map at the next arrangement position through the feature enhancement convolutional neural network to obtain the second spatial image features, combines the first and second spatial image features into first spatial image features of a new size, and stops when the new-size features meet the preset exit condition, taking the first spatial image features and the new-size first spatial image features as the spatial image features of the plurality of sizes corresponding to the tissue abnormality image blocks.
In practical application, the system performs dimension-lifting processing on the spatial feature map at the next arrangement position through the feature enhancement convolutional neural network, combines the up-scaled spatial image features with the spatial image features of the current size by convolution to obtain a combined result, and then up-scales that result and again combines it by convolution with the spatial feature map of the current pyramid level. This repeats until the new-size spatial image features meet the preset exit condition, yielding the spatial image features of a plurality of sizes.
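The down-sample then up-sample-and-combine loop described above follows the usual FPN top-down pattern. It can be sketched with average pooling in place of strided convolutions and nearest-neighbour doubling in place of learned up-scaling (both substitutions are simplifications for illustration):

```python
import numpy as np

def downsample2(x):
    # 2x average pooling (stride 2), standing in for a strided convolution
    H, W = x.shape
    return x[:H//2*2, :W//2*2].reshape(H//2, 2, W//2, 2).mean(axis=(1, 3))

def upsample2(x):
    # nearest-neighbour up-scaling: double length and width
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fpn_merge(image, n_levels=3):
    """Build shrinking features, then repeatedly up-sample the coarser
    map and combine it with the next finer one until the finest size
    (the 'exit condition' here) is reached."""
    feats = [image]
    for _ in range(n_levels - 1):
        feats.append(downsample2(feats[-1]))   # large -> small
    merged = feats[-1]
    for finer in reversed(feats[:-1]):
        merged = upsample2(merged) + finer     # combine into new size
    return merged

merged = fpn_merge(np.ones((8, 8)), n_levels=3)  # every entry is 3.0
```

On a constant image each of the three pyramid levels contributes 1, so the merged map is uniformly 3.0 and back at the original 8x8 size, mirroring how each up-scaled level is fused with the level below it.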
306. Selecting a first spatial image feature with the minimum size and a second spatial image feature with the maximum size from the spatial image features with the multiple sizes;
in this embodiment, the system selects the first spatial image feature of the smallest size and the second spatial image feature of the largest size from the spatial image features of the plurality of sizes.
In practical application, the system selects the first spatial image feature of the smallest size and the second spatial image feature of the largest size from the spatial image features obtained after the down-sampling followed by dimension-lifting and convolution processing.
307. Performing convolution processing on the first spatial image features with a preset convolution kernel to obtain a first local feature map, performing convolution processing on the second spatial image features with the convolution kernel to obtain a first global feature map, and taking the first local feature map and the first global feature map as the first feature map of the tissue abnormality image block;
in this embodiment, the system convolves the first spatial image features with a preset convolution kernel to obtain a first local feature map, convolves the second spatial image features with the convolution kernel to obtain a first global feature map, and uses the first local feature map and the first global feature map as the first feature map of the tissue abnormality image block.
The system applies a 3 x 3 convolution with the preset convolution kernel to the first spatial image features to obtain the first local feature map, applies a 3 x 3 convolution with the convolution kernel to the second spatial image features to obtain the first global feature map, and then takes the first local feature map and the first global feature map as the first feature map of the tissue abnormality image block.
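A "same"-padded 3 x 3 convolution of the kind applied to the smallest and largest spatial features can be sketched as follows; it is written in the cross-correlation form usual in deep learning, and the kernel values are illustrative:

```python
import numpy as np

def conv3x3(fmap, kernel):
    """'Same'-padded 3x3 convolution over a single-channel feature map:
    the output has the same spatial size as the input, so the local
    and global feature maps keep their respective resolutions."""
    assert kernel.shape == (3, 3)
    p = np.pad(fmap, 1)                      # zero padding of width 1
    H, W = fmap.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i+3, j:j+3] * kernel)
    return out

identity = np.zeros((3, 3))
identity[1, 1] = 1.0                          # identity kernel
x = np.arange(16.0).reshape(4, 4)
y = conv3x3(x, identity)                      # reproduces x exactly
```

Applying the same routine to the smallest spatial feature yields the local map and to the largest yields the global map; only the input differs.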
308. Updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model is converged, so as to obtain a focus positioning model;
309. obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second characteristic image of the tissue abnormality image to be identified and a third characteristic image of the tissue structure template image through a focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of a focus to be identified;
310. and carrying out similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining positioning information of the position information in the tissue structure template image according to the similarity matching result.
In the embodiment of the invention, the tissue abnormality image blocks are processed by convolution with the three-dimensional convolutional neural network and the feature enhancement convolutional neural network, and the resulting spatial image features undergo dimension-lifting and convolution combination to obtain a plurality of spatial image features; the image features meeting the selection conditions are then selected from them and convolved to obtain the first feature map of the tissue abnormality image block. Compared with the prior art, the combined convolution of the two networks encodes the local feature map so as to distinguish adjacent structures with similar appearance, enabling accurate positioning, while the resulting global feature map helps identify the texture information of each part of the lung for coarse positioning; obtaining more feature parameters of the tissue abnormality image blocks thus better improves the recognition accuracy of the focus positioning model for the corresponding features.
Referring to fig. 4, a fourth embodiment of a lesion localization method based on a template image according to an embodiment of the present invention includes:
401. obtaining a tissue abnormality training image, and carrying out focus area enhancement processing on the tissue abnormality training image through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of focus pixel points and non-focus pixel points;
402. extracting spatial image features of each tissue abnormal image block corresponding to a plurality of sizes through a combined convolutional neural network in a focus positioning training model, and extracting a first feature map of the tissue abnormal image block based on the spatial image features of the plurality of sizes;
403. determining first feature vectors of focus pixel points and non-focus pixel points on a first feature map according to the labeling information, and calculating a loss value of a focus positioning training model based on the first feature vectors;
In this embodiment, the system determines the first feature vectors of the focus pixel points and the non-focus pixel points on the first feature map according to the obtained labeling information, and applies a loss function in which f_i is the feature vector of the i-th positive-sample pixel pair, h_ij is the feature vector of the j-th negative-sample pixel of the i-th positive-sample pixel, f'_i is the feature vector of the enhanced image corresponding to f_i, and the hyperparameter τ = 0.5.
Further, a loss value of the focus positioning training model is calculated through the loss function.
In practical application, the system determines the first feature vectors of the focus pixel points and non-focus pixel points on the first feature map from the obtained labeling information, performs calculation with the preset loss function based on the first feature vectors to obtain the model parameters, differentiates the obtained model parameters to obtain their gradients, and thereby obtains the loss value of the focus positioning training model through the loss function and derivative calculations.
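The loss legend above — a positive pair (f_i, f'_i), negative-sample vectors h_ij and a temperature hyperparameter τ = 0.5 — has the shape of a contrastive loss. The patent text does not reproduce the formula itself, so the following plain-Python sketch shows one common form (an InfoNCE-style loss) consistent with that legend; it is an assumption, not the patent's exact equation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(f_i, f_prime_i, negatives, tau=0.5):
    """-log( exp(sim(f_i, f'_i)/tau) / (exp(pos/tau) + sum_j exp(sim(f_i, h_ij)/tau)) )."""
    pos = math.exp(cosine(f_i, f_prime_i) / tau)
    neg = sum(math.exp(cosine(f_i, h) / tau) for h in negatives)
    return -math.log(pos / (pos + neg))

# A focus pixel should agree with its enhanced view and differ from background:
loss_good = contrastive_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])  # aligned pair
loss_bad = contrastive_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])   # misaligned pair
```

A well-aligned positive pair yields a small loss, while a pair whose enhanced view drifts toward a negative sample yields a large one — exactly the pressure that pushes focus and non-focus features apart during training.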
404. Judging whether the loss value is larger than a preset loss threshold value or not;
In this embodiment, the system judges, through a comparison operation, whether the currently calculated loss value of the focus positioning training model is greater than the preset loss threshold, thereby determining whether the current loss value meets the model training requirement.
405. If the loss value is larger than the threshold value, updating the focus positioning training model by adopting a preset optimization algorithm until the loss value is smaller than the threshold value, and stopping updating to obtain a focus positioning model;
In this embodiment, if the judgment finds that the loss value is greater than the loss threshold, the system performs optimization calculation processing with a preset Adam optimization algorithm and updates the focus positioning training model accordingly, stopping the updates once the loss value is smaller than the loss threshold to obtain the focus positioning model.
In practical application, the system judges whether the loss value is larger than the preset loss threshold; if so, the obtained gradients of the model parameters are optimized with the preset Adam optimization algorithm and the model parameters are updated. The Adam algorithm can adaptively adjust the learning rate so that the model converges better once training reaches a certain stage. The system further performs data enhancement by sampling pixel points of other tissue abnormal images and their corresponding areas to obtain a plurality of tissue abnormal image blocks, performs model training after extracting the first feature maps from the related feature vectors, and stops updating once the loss value is smaller than the loss threshold, thereby obtaining a focus positioning model meeting the requirements.
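The Adam update that the text credits with adaptively adjusting the learning rate can be sketched in a few lines. This is the standard Adam rule applied to a stand-in scalar parameter with loss x², not the patent's actual model; the learning rate and stopping threshold are illustrative.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: running moment estimates adapt the effective step size."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Drive a stand-in model parameter toward the minimum of loss(x) = x**2,
# stopping once the loss falls below a stand-in loss threshold:
theta, m, v = 5.0, 0.0, 0.0
loss_threshold = 1e-2
for t in range(1, 5001):
    grad = 2 * theta                       # d/dx of x**2
    theta, m, v = adam_step(theta, grad, m, v, t)
    if theta * theta < loss_threshold:
        break
```

The loop mirrors steps 404–405: compute the loss, compare against the threshold, and keep updating with Adam until the loss falls below it.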
406. Obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second characteristic image of the tissue abnormality image to be identified and a third characteristic image of the tissue structure template image through a focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of a focus to be identified;
407. And carrying out similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining positioning information of the position information in the tissue structure template image according to the similarity matching result.
In the embodiment of the application, the system determines the first feature vectors of the focus pixel points and non-focus pixel points on the first feature map according to the labeling information obtained by processing, calculates the loss value of the focus positioning training model based on the first feature vectors, and judges whether the loss value is greater than the preset loss threshold; if the loss value is greater than the loss threshold, the focus positioning training model is updated with a preset optimization algorithm until the loss value is smaller than the loss threshold, obtaining the focus positioning model. Compared with the prior art, the feature vectors obtained by processing can be optimized with the loss function and the optimization algorithm, so that the focus positioning model recognizes the corresponding tissue abnormal image features more accurately, and the obtained focus positioning model locates the position information of the corresponding focus points more accurately.
Referring to fig. 5, a fifth embodiment of a lesion localization method based on a template image according to an embodiment of the present application includes:
501. Obtaining a tissue abnormality training image, and carrying out focus area enhancement processing on the tissue abnormality training image through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of focus pixel points and non-focus pixel points;
502. extracting spatial image features of each tissue abnormal image block corresponding to a plurality of sizes through a combined convolutional neural network in a focus positioning training model, and extracting a first feature map of the tissue abnormal image block based on the spatial image features of the plurality of sizes;
503. updating the focus positioning training model based on the labeling information and the first feature map until the focus positioning training model is converged, so as to obtain a focus positioning model;
504. obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second characteristic image of the tissue abnormality image to be identified and a third characteristic image of the tissue structure template image through a focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of a focus to be identified;
505. carrying out organization structure matching on the second global feature map and the third global feature map, and determining the organization structure of each part of the second global feature map according to the organization structure matching result;
In this embodiment, the system performs organization structure matching on the second global feature map and the third global feature map according to the second global feature map and the third global feature map obtained by extraction and identification, and determines an organization structure of each part of the second global feature map according to a result of the organization structure matching.
In practical application, the system performs feature recognition of a focus positioning model on a tissue structure template image and a tissue abnormality image to be recognized to obtain a second feature image of the tissue abnormality image to be recognized and a third feature image of the tissue structure template image, further performs tissue structure matching on the second global feature image and the third global feature image by using corresponding feature vectors in the feature images, and determines each partial tissue structure corresponding to the feature vectors of the second global feature image according to the result of the tissue structure matching.
506. According to the position information, a second local feature map and a third local feature map corresponding to the tissue structure of the focus to be identified are determined, and a second feature vector of a pixel point corresponding to the focus to be identified on the second local feature map is determined;
in this embodiment, according to the position information of the lesion to be identified, a second local feature map and a third local feature map corresponding to the tissue structure where the lesion to be identified is located are determined by using the lesion positioning model, and a second feature vector of a pixel point corresponding to the lesion to be identified on the second local feature map is determined.
In practical application, according to the position information of the focus to be identified, the focus positioning model is utilized to identify and extract the feature vector diagram of the tissue structure of the focus to be identified, so as to obtain a second local feature diagram and a third local feature diagram corresponding to the tissue structure of the focus to be identified, and the focus positioning model is utilized to identify and determine the second feature vector of the focus to be identified corresponding to the pixel point on the second local feature diagram.
507. According to the second feature vector and the third feature vector of each pixel point in the third local feature map, calculating the similarity between the corresponding pixel point on the second local feature map and each pixel point in the third local feature map of the focus to be identified, and taking the pixel point with the largest similarity in the third local feature map as a similarity matching result.
In this embodiment, the system calculates, according to the second feature vector obtained by processing and the third feature vector of each pixel point in the third local feature map, the similarity between the corresponding pixel point on the second local feature map and each pixel point in the third local feature map of the lesion to be identified by using the lesion location model, and uses the pixel point with the largest similarity in the obtained third local feature map as a similarity matching result, and further determines location information of the location information in the tissue structure template image according to the similarity matching result.
In practical application, the system calculates, with the focus positioning model, the feature-vector similarity between the corresponding pixel point on the second local feature map and each pixel point of the third local feature map, based on the second feature vector obtained by the recognition processing in steps 505 and 506 and the third feature vector of each pixel point in the third local feature map; the pixel point with the largest similarity in the third local feature map is taken as the similarity matching result, and the positioning information of the position information in the tissue structure template image is then determined according to the similarity matching result.
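The similarity matching of steps 505–507 reduces, for each focus pixel, to an argmax over feature-vector similarities against the template map. A minimal sketch follows; the patent does not name the similarity measure, so cosine similarity is an assumption, and the tiny feature vectors are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def best_match(query_vec, template_map):
    """Return the index of the template pixel whose feature vector is most
    similar to the focus pixel's vector — the similarity matching result."""
    sims = [cosine(query_vec, t) for t in template_map]
    best = max(range(len(sims)), key=sims.__getitem__)
    return best, sims[best]

# Second feature vector of the focus pixel on the second local feature map:
query = [0.9, 0.1, 0.2]
# Third feature vectors of pixels in the third (template) local feature map:
template = [[0.1, 0.9, 0.1], [0.8, 0.2, 0.1], [0.2, 0.2, 0.9]]
idx, sim = best_match(query, template)
```

The winning index locates the focus within the template's coordinate frame, which is what turns the position information into positioning information on the tissue structure template image.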
In the embodiment of the application, the tissue structure of each part of the second global feature map is determined by performing tissue structure matching on the second and third global feature maps with the focus positioning model obtained through training; further, according to the position information, the second local feature map and the third local feature map corresponding to the tissue structure where the focus to be identified is located are determined, together with the second feature vector of the corresponding pixel point on the second local feature map; the pixel point with the largest similarity in the third local feature map is taken as the similarity matching result, and the positioning information of the position information in the tissue structure template image is obtained according to that result. Compared with the prior art, the feature vectors of the tissue abnormality image to be identified are recognized and matched with the trained focus positioning model and the tissue structure template image to obtain the final positioning information, making the obtained positioning information more efficient and more accurate.
The foregoing describes a method for locating a lesion based on a template image in the embodiment of the present invention, and the following describes a device for locating a lesion based on a template image in the embodiment of the present invention, referring to fig. 6, one embodiment of the device for locating a lesion based on a template image in the embodiment of the present invention includes:
the image enhancement module 601 is configured to obtain a tissue anomaly training image, and perform focus area enhancement processing on the tissue anomaly training image by presetting an input layer in a focus positioning training model to obtain a plurality of tissue anomaly image blocks, where the tissue anomaly image blocks include labeling information of focus pixel points and non-focus pixel points;
the feature extraction module 602 is configured to extract spatial image features of each tissue abnormal image block corresponding to a plurality of sizes through a combined convolutional neural network in the focus positioning training model, and extract a first feature map of the tissue abnormal image block based on the spatial image features of the plurality of sizes;
the model training module 603 is configured to update the focus positioning training model based on the labeling information and the first feature map, and stop until the focus positioning training model converges, so as to obtain a focus positioning model;
The position obtaining module 604 is configured to obtain a tissue structure template image and a tissue abnormality image to be identified, and extract a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image respectively through a focus positioning model, where the tissue abnormality image to be identified includes position information of a focus to be identified;
the matching recognition module 605 is configured to perform similarity matching on the second feature map and the third feature map based on the location information, obtain a similarity matching result, and determine positioning information of the location information in the tissue structure template image according to the similarity matching result.
In the embodiment of the application, a tissue abnormality training image is obtained, and focus area enhancement processing is performed on it through an input layer in a preset focus positioning training model to obtain a plurality of tissue abnormal image blocks; spatial image features of a plurality of sizes corresponding to each tissue abnormal image block are extracted through the combined convolutional neural network in the focus positioning training model, and the first feature map of the tissue abnormal image block is extracted based on these spatial image features; the focus positioning training model is then updated based on the labeling information and the first feature map until it converges, obtaining the focus positioning model. Compared with the prior art, the tissue abnormal image and the focus points are sampled, the focus pixel point information is convolved and its features extracted, and the obtained feature information is used for model training to obtain the focus positioning model; the model can exploit the natural consistency of human tissue structure and rapidly locate and identify focus points, so that focus points are located and identified efficiently.
Obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second characteristic image of the tissue abnormality image to be identified and a third characteristic image of the tissue structure template image through a focus positioning model, wherein the tissue abnormality image to be identified comprises the position information of a focus to be identified; and carrying out similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining positioning information of the position information in the tissue structure template image according to the similarity matching result. Compared with the prior art, the method has the advantages that the trained focus positioning model is used for carrying out feature recognition on the abnormal image and the template image, and further, the focus points in the tissue abnormal image are positioned through feature recognition and matching, a large amount of marking of tissue feature information is not needed by related professionals, and the positioning recognition on the specific positions of the focus points is realized.
Referring to fig. 7, another embodiment of an apparatus for lesion localization based on template image according to an embodiment of the present application includes:
the image enhancement module 601 is configured to obtain a tissue anomaly training image, and perform focus area enhancement processing on the tissue anomaly training image by presetting an input layer in a focus positioning training model to obtain a plurality of tissue anomaly image blocks, where the tissue anomaly image blocks include labeling information of focus pixel points and non-focus pixel points;
The feature extraction module 602 is configured to extract spatial image features of each tissue abnormal image block corresponding to a plurality of sizes through a combined convolutional neural network in the focus positioning training model, and extract a first feature map of the tissue abnormal image block based on the spatial image features of the plurality of sizes;
the model training module 603 is configured to update the focus positioning training model based on the labeling information and the first feature map, and stop until the focus positioning training model converges, so as to obtain a focus positioning model;
the position obtaining module 604 is configured to obtain a tissue structure template image and a tissue abnormality image to be identified, and extract a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image respectively through a focus positioning model, where the tissue abnormality image to be identified includes position information of a focus to be identified;
the matching recognition module 605 is configured to perform similarity matching on the second feature map and the third feature map based on the location information, obtain a similarity matching result, and determine positioning information of the location information in the tissue structure template image according to the similarity matching result.
Specifically, the image enhancement module 601 includes:
The normalization processing unit 6011 is configured to perform pixel normalization processing on the tissue abnormality training image according to a preset image window size through the input layer in the preset focus positioning training model, so as to obtain a normalized tissue abnormality training image;
the image conversion unit 6012 is configured to perform image conversion on the normalized abnormal tissue training image by using a preset data enhancement method through an input layer in the focus positioning training model, so as to obtain a plurality of initial abnormal tissue training image blocks;
the pixel sampling unit 6013 is configured to upsample, through an input layer in the focus positioning training model, focus pixel points marked in the initial tissue anomaly training image block, and upsample non-focus pixel points marked in a preset area corresponding to the focus pixel points, so as to obtain a plurality of focus area enhanced tissue anomaly pixel blocks.
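The normalization unit 6011 and the pixel sampling unit 6013 can be illustrated together: intensities are clipped to a preset image window and rescaled, and each labeled focus pixel is paired with non-focus pixels drawn from a preset surrounding area. The window values and neighbourhood radius below are illustrative assumptions (a typical lung-style window), not values taken from the patent.

```python
def window_normalize(values, level=-600.0, width=1500.0):
    """Clip raw intensities to the image window [level - width/2, level + width/2]
    and rescale to [0, 1]; the window level/width here are illustrative."""
    lo, hi = level - width / 2, level + width / 2
    return [(min(max(v, lo), hi) - lo) / (hi - lo) for v in values]

def sample_patch_pixels(focus_idx, image_len, radius=2):
    """Pair each labeled focus pixel with the non-focus pixels in a preset
    neighbourhood around it (the focus-area enhancement sampling)."""
    pairs = []
    for i in sorted(focus_idx):
        neighbours = [j for j in range(max(0, i - radius), min(image_len, i + radius + 1))
                      if j not in focus_idx]
        pairs.append((i, neighbours))
    return pairs

pixels = window_normalize([-2000.0, -600.0, 150.0, 500.0])
pairs = sample_patch_pixels({4, 5}, image_len=10)
```

Values below the window floor saturate to 0 and values above the ceiling to 1, while each focus index comes back paired with its nearby non-focus candidates — the raw material for the positive/negative pixel pairs used in training.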
Specifically, the feature extraction module 602 includes:
the three-dimensional convolution unit 6021 is used for performing three-dimensional convolution processing on the tissue abnormal image block through a three-dimensional convolution neural network to obtain an initial spatial image feature;
the feature sampling unit 6022 is configured to perform downsampling processing on the initial spatial image features for a preset number of times through a feature enhancement convolutional neural network to obtain a plurality of shrinkage spatial image features with multiple sizes, and arrange the shrinkage spatial image features according to the sizes from large to small;
The dimension-increasing processing unit 6023 is configured to perform dimension-increasing processing on the contracted spatial image features at the preset arrangement positions through the feature-enhanced convolutional neural network, so as to obtain first spatial image features;
the feature enhancement unit 6024 is configured to perform dimension-increasing processing on the spatial image features at the next arrangement position through the feature enhancement convolutional neural network to obtain a second spatial image feature, combine the first spatial image feature and the second spatial image feature to obtain a first spatial image feature with a new size, stop until the first spatial image feature with the new size meets a preset exit condition, and use the first spatial image feature and the first spatial image feature with the new size as spatial image features with a plurality of sizes corresponding to the tissue abnormal image block.
Specifically, the feature extraction module 602 further includes:
a size selecting unit 6025 for selecting a first spatial image feature of a minimum size and a second spatial image feature of a maximum size from the spatial image features of the plurality of sizes;
the feature convolution unit 6026 is configured to perform convolution processing on the first spatial image feature by using a preset convolution check to obtain a first local feature map, and perform convolution processing on the second spatial image feature by using a convolution check to obtain a first global feature map, and use the first local feature map and the first global feature map as a first feature map for organizing an abnormal image block.
Specifically, the model training module 603 includes:
the vector calculation unit 6031 is configured to determine a first feature vector of the focus pixel point and the non-focus pixel point on the first feature map according to the labeling information, and calculate a loss value of the focus positioning training model based on the first feature vector;
a condition judgment unit 6032 for judging whether the loss value is greater than a preset loss threshold;
the model updating unit 6033 is configured to, if the loss value is larger than the loss threshold, update the focus positioning training model with a preset optimization algorithm until the loss value is smaller than the loss threshold, so as to obtain the focus positioning model.
Specifically, the position obtaining module 604 is configured to:
extract focus features of the focus area in the tissue abnormality image to be identified with the focus positioning model to obtain the second feature map; and extract all tissue features in the tissue structure template image with the focus positioning model to obtain the third feature map.
Specifically, the matching identification module 605 includes:
a structure matching unit 6051, configured to perform organization structure matching on the second global feature map and the third global feature map, and determine an organization structure of each part of the second global feature map according to a result of the organization structure matching;
The feature recognition unit 6052 is configured to determine, according to the location information, a second local feature map and a third local feature map corresponding to a tissue structure where the focus to be recognized is located, and determine a second feature vector of a pixel point corresponding to the focus to be recognized on the second local feature map;
the similarity matching unit 6053 is configured to calculate, according to the second feature vector and the third feature vector of each pixel point in the third local feature map, the similarity between the corresponding pixel point on the second local feature map and each pixel point in the third local feature map of the focus to be identified, and take the pixel point with the largest similarity in the third local feature map as a similarity matching result.
In the embodiment of the application, the recognition and extraction processing of the feature vector are carried out by adopting the tissue anomaly training image, so that the loss function calculation is carried out on the pixel points corresponding to the obtained feature vector, and the update processing of the focus positioning model is carried out, thus obtaining the focus positioning model meeting the convergence condition; and then based on the updated focus positioning model and tissue structure template image, identifying focus point position information of the tissue abnormal image to be identified, thereby obtaining positioning information of the position information in the tissue structure template image. Compared with the prior art, the method and the device have the advantages that the focus positioning model is updated by utilizing the feature images of the tissue anomaly training images, and further, the focus positioning model can be utilized to realize rapid identification of focus points of the tissue anomaly images to be identified; by utilizing the natural consistency of the human tissue structure, the most similar position is found on the template in a similarity matching mode, so that reliable and efficient positioning and identification of the focus point position is realized.
The apparatus for locating a lesion based on a template image in the embodiment of the present invention is described in detail above with reference to fig. 6 and 7 from the point of view of a modularized functional entity, and the apparatus for locating a lesion based on a template image in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 8 is a schematic structural diagram of an apparatus for locating a lesion based on a template image according to an embodiment of the present invention, where the apparatus 800 for locating a lesion based on a template image may have a relatively large difference due to configuration or performance, and may include one or more processors (central processing units, CPU) 810 (e.g., one or more processors) and a memory 820, and one or more storage media 830 (e.g., one or more mass storage devices) storing application programs 833 or data 832. Wherein memory 820 and storage medium 830 can be transitory or persistent. The program stored on the storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations in the apparatus 800 for lesion localization based on template images. Still further, the processor 810 may be configured to communicate with the storage medium 830 to execute a series of instruction operations in the storage medium 830 on the template image-based lesion localization device 800.
The template image based lesion localization apparatus 800 may also include one or more power supplies 840, one or more wired or wireless network interfaces 850, one or more input/output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc. It will be appreciated by those skilled in the art that the apparatus structure shown in fig. 8 does not constitute a limitation of the apparatus for template image based lesion localization, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The invention also provides a computer device for locating a focus based on a template image, the computer device comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the focus positioning method based on the template image in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and which may also be a volatile computer readable storage medium, having instructions stored therein, which when executed on a computer, cause the computer to perform the steps of a method for lesion localization based on a template image.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A lesion localization method based on a template image, characterized by comprising the following steps:
obtaining a tissue abnormality training image, and performing lesion area enhancement processing on the tissue abnormality training image through an input layer in a preset lesion localization training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of lesion pixel points and non-lesion pixel points;
extracting spatial image features of a plurality of sizes corresponding to each tissue abnormality image block through a combined convolutional neural network in the lesion localization training model, and extracting a first feature map of each tissue abnormality image block based on the spatial image features of the plurality of sizes;
updating the lesion localization training model based on the labeling information and the first feature map until the lesion localization training model converges, to obtain a lesion localization model;
obtaining a tissue structure template image and a tissue abnormality image to be identified, and respectively extracting a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image through the lesion localization model, wherein the tissue abnormality image to be identified comprises position information of the lesion to be identified; and
performing similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determining localization information of the position information in the tissue structure template image according to the similarity matching result.
2. The lesion localization method based on a template image according to claim 1, wherein performing lesion area enhancement processing on the tissue abnormality training image through the input layer in the preset lesion localization training model to obtain a plurality of tissue abnormality image blocks comprises:
performing pixel normalization processing on the tissue abnormality training image according to a preset image window size through the input layer in the preset lesion localization training model to obtain a normalized tissue abnormality training image;
performing image conversion on the normalized tissue abnormality training image by adopting a preset data enhancement method through the input layer in the lesion localization training model to obtain a plurality of initial tissue abnormality training image blocks; and
up-sampling the lesion pixel points labeled in the initial tissue abnormality training image blocks through the input layer in the lesion localization training model, and up-sampling the non-lesion pixel points labeled in a preset area corresponding to the lesion pixel points, to obtain a plurality of lesion-area-enhanced tissue abnormality image blocks.
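The normalization and lesion-area oversampling steps of this claim can be illustrated with a minimal NumPy sketch. The window bounds, neighbourhood margin, and oversampling factor below are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def window_normalize(img, w_min=-1000.0, w_max=400.0):
    """Clip an intensity image to a preset window and scale to [0, 1].
    The CT-style window bounds are assumed for illustration."""
    img = np.clip(img.astype(np.float32), w_min, w_max)
    return (img - w_min) / (w_max - w_min)

def oversample_lesion_pixels(labels, margin=5, lesion_factor=4):
    """Return (row, col) sample coordinates in which labeled lesion pixels
    are repeated `lesion_factor` times, while non-lesion pixels are drawn
    only from a `margin`-pixel area around the lesion bounding box
    (a stand-in for the claim's 'preset area')."""
    lesion = np.argwhere(labels == 1)
    r0, c0 = lesion.min(axis=0) - margin
    r1, c1 = lesion.max(axis=0) + margin
    r0, c0 = max(r0, 0), max(c0, 0)
    region = labels[r0:r1 + 1, c0:c1 + 1]
    # shift local coordinates back into the full-image frame
    non_lesion = np.argwhere(region == 0) + [r0, c0]
    return np.concatenate([np.repeat(lesion, lesion_factor, axis=0), non_lesion])
```

Oversampling the rare lesion pixels while restricting negatives to the lesion's neighbourhood is one common way to balance a pixel-level training set; the patent does not specify the exact scheme.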
3. The lesion localization method based on a template image according to claim 1, wherein the combined convolutional neural network in the lesion localization training model comprises a three-dimensional convolutional neural network and a feature-enhanced convolutional neural network, and extracting the spatial image features of a plurality of sizes corresponding to each tissue abnormality image block through the combined convolutional neural network in the lesion localization training model comprises:
performing three-dimensional convolution processing on the tissue abnormality image block through the three-dimensional convolutional neural network to obtain initial spatial image features;
performing down-sampling processing on the initial spatial image features a preset number of times through the feature-enhanced convolutional neural network to obtain down-sampled spatial image features of a plurality of sizes, and arranging the down-sampled spatial image features from large to small;
performing dimension-raising processing on the down-sampled spatial image feature at a preset arrangement position through the feature-enhanced convolutional neural network to obtain a first spatial image feature; and
performing dimension-raising processing on the spatial image feature at the next arrangement position through the feature-enhanced convolutional neural network to obtain a second spatial image feature, combining the first spatial image feature and the second spatial image feature to obtain a first spatial image feature of a new size, stopping when the first spatial image feature of the new size meets a preset exit condition, and taking the first spatial image features and the first spatial image features of the new sizes as the spatial image features of the plurality of sizes corresponding to the tissue abnormality image block.
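The down-sample-then-merge scheme of this claim can be sketched as a small feature pyramid. Average pooling for down-sampling, nearest-neighbour repetition for the dimension-raising step, and reaching the original size as the exit condition are all assumptions made here for illustration; the patent does not fix these operators:

```python
import numpy as np

def downsample2x(x):
    """Average-pool an (H, W, C) feature map by 2 in each spatial dim."""
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour upsampling by 2 (stand-in for dimension-raising)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def feature_pyramid(feat, n_levels=3):
    """Down-sample `n_levels` times, then walk back up the pyramid,
    merging each upsampled map with the next-larger map until the
    original size (the assumed exit condition) is reached."""
    pyramid = [feat]                        # arranged from large to small
    for _ in range(n_levels):
        pyramid.append(downsample2x(pyramid[-1]))
    merged = pyramid[-1]                    # start from the smallest map
    outputs = [merged]
    for larger in reversed(pyramid[:-1]):
        merged = upsample2x(merged) + larger   # combine the two features
        outputs.append(merged)
    return outputs                          # multi-size spatial features
```

This mirrors the contracting-then-expanding layout familiar from U-Net-style encoders and decoders, which is one plausible reading of the claim.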
4. The lesion localization method based on a template image according to claim 1, wherein extracting the first feature map of the tissue abnormality image block based on the spatial image features of the plurality of sizes comprises:
selecting a first spatial image feature with the smallest size and a second spatial image feature with the largest size from the spatial image features of the plurality of sizes; and
performing convolution processing on the first spatial image feature with a preset convolution kernel to obtain a first local feature map, performing convolution processing on the second spatial image feature with the convolution kernel to obtain a first global feature map, and taking the first local feature map and the first global feature map as the first feature map of the tissue abnormality image block.
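The local/global split of this claim can be sketched as applying one preset kernel to both the smallest-size and largest-size feature maps. The naive single-channel "same" convolution below is for illustration only; a real implementation would use a deep-learning framework's convolution:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2D convolution of one single-channel map."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=np.float32)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def local_and_global_maps(features, kernel):
    """Apply the same preset kernel to the smallest feature map (local
    feature map) and the largest feature map (global feature map)."""
    smallest = min(features, key=lambda f: f.shape[0])
    largest = max(features, key=lambda f: f.shape[0])
    return conv2d_same(smallest, kernel), conv2d_same(largest, kernel)
```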
5. The method of claim 1, wherein updating the lesion localization training model based on the labeling information and the first feature map until the lesion localization training model converges, to obtain the lesion localization model, comprises:
determining first feature vectors of the lesion pixel points and the non-lesion pixel points on the first feature map according to the labeling information, and calculating a loss value of the lesion localization training model based on the first feature vectors;
judging whether the loss value is larger than a preset loss threshold; and
if the loss value is larger than the loss threshold, updating the lesion localization training model with a preset optimization algorithm until the loss value is smaller than the loss threshold, to obtain the lesion localization model.
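The threshold-driven update loop of this claim can be sketched generically. `compute_loss`, `optimize_step`, and the `max_iters` safety cap are hypothetical stand-ins for the model's loss computation and the preset optimization algorithm, which the patent does not specify:

```python
def train_until_threshold(model_params, compute_loss, optimize_step,
                          loss_threshold=0.05, max_iters=1000):
    """Repeat: compute the loss; if it exceeds the threshold, take one
    optimization step; stop once the loss drops below the threshold
    (max_iters is an added safety cap, not part of the claim)."""
    for _ in range(max_iters):
        loss = compute_loss(model_params)
        if loss <= loss_threshold:
            break                       # model treated as converged
        model_params = optimize_step(model_params, loss)
    return model_params
```

With a toy quadratic loss and a halving "optimizer", the loop stops as soon as the loss falls below the threshold.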
6. The method according to claim 1, wherein respectively extracting the second feature map of the tissue abnormality image to be identified and the third feature map of the tissue structure template image through the lesion localization model comprises:
extracting lesion features of the lesion area in the tissue abnormality image to be identified with the lesion localization model to obtain the second feature map; and
extracting all tissue features in the tissue structure template image with the lesion localization model to obtain the third feature map.
7. The lesion localization method based on a template image according to claim 1, wherein the second feature map comprises a second local feature map and a second global feature map, the third feature map comprises a third local feature map and a third global feature map, and performing similarity matching on the second feature map and the third feature map based on the position information to obtain the similarity matching result comprises:
performing tissue structure matching on the second global feature map and the third global feature map, and determining the tissue structure of each part of the second global feature map according to the tissue structure matching result;
determining, according to the position information, the second local feature map and the third local feature map corresponding to the tissue structure of the lesion to be identified, and determining a second feature vector of the pixel point corresponding to the lesion to be identified on the second local feature map; and
calculating, according to the second feature vector and a third feature vector of each pixel point in the third local feature map, the similarity between the pixel point corresponding to the lesion to be identified on the second local feature map and each pixel point in the third local feature map, and taking the pixel point with the largest similarity in the third local feature map as the similarity matching result.
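The per-pixel similarity step of this claim can be sketched with cosine similarity (an assumed metric; the patent does not name one) between the lesion's feature vector and every pixel of the template's local feature map:

```python
import numpy as np

def match_lesion_to_template(query_vec, template_map):
    """Cosine-similarity match of one lesion feature vector against every
    pixel of an (H, W, C) template feature map; returns the (row, col) of
    the most similar pixel and its similarity score."""
    h, w, c = template_map.shape
    flat = template_map.reshape(-1, c).astype(np.float64)
    # normalize both sides (epsilon guards against zero vectors)
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    t = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sims = t @ q                       # cosine similarity per pixel
    idx = int(np.argmax(sims))
    return divmod(idx, w), float(sims[idx])
```

The returned coordinates correspond to the claim's "pixel point with the largest similarity", i.e. the lesion's localized position in the template image.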
8. A lesion localization apparatus based on a template image, wherein the apparatus comprises:
an image enhancement module, configured to obtain a tissue abnormality training image, and perform lesion area enhancement processing on the tissue abnormality training image through an input layer in a preset lesion localization training model to obtain a plurality of tissue abnormality image blocks, wherein the tissue abnormality image blocks comprise labeling information of lesion pixel points and non-lesion pixel points;
a feature extraction module, configured to extract spatial image features of a plurality of sizes corresponding to each tissue abnormality image block through a combined convolutional neural network in the lesion localization training model, and extract a first feature map of each tissue abnormality image block based on the spatial image features of the plurality of sizes;
a model training module, configured to update the lesion localization training model based on the labeling information and the first feature map until the lesion localization training model converges, to obtain a lesion localization model;
a position acquisition module, configured to obtain a tissue structure template image and a tissue abnormality image to be identified, and respectively extract a second feature map of the tissue abnormality image to be identified and a third feature map of the tissue structure template image through the lesion localization model, wherein the tissue abnormality image to be identified comprises position information of the lesion to be identified; and
a matching identification module, configured to perform similarity matching on the second feature map and the third feature map based on the position information to obtain a similarity matching result, and determine localization information of the position information in the tissue structure template image according to the similarity matching result.
9. A lesion localization device based on a template image, wherein the device comprises: a memory and at least one processor, the memory storing instructions;
the at least one processor invokes the instructions in the memory to cause the lesion localization device based on a template image to perform the steps of the lesion localization method based on a template image according to any one of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the steps of the lesion localization method based on a template image according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111015494.7A CN113706514B (en) | 2021-08-31 | 2021-08-31 | Focus positioning method, device, equipment and storage medium based on template image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113706514A CN113706514A (en) | 2021-11-26 |
CN113706514B true CN113706514B (en) | 2023-08-11 |
Family
ID=78658269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111015494.7A Active CN113706514B (en) | 2021-08-31 | 2021-08-31 | Focus positioning method, device, equipment and storage medium based on template image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113706514B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114465769B (en) * | 2021-12-28 | 2024-03-15 | 尚承科技股份有限公司 | Network equipment, processing system and method for learning network behavior characteristics |
CN114757908B (en) * | 2022-04-12 | 2024-09-13 | 深圳平安智慧医健科技有限公司 | Image processing method, device, equipment and storage medium based on CT image |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020151307A1 (en) * | 2019-01-23 | 2020-07-30 | 平安科技(深圳)有限公司 | Automatic lesion recognition method and device, and computer-readable storage medium |
CN112233117A (en) * | 2020-12-14 | 2021-01-15 | 浙江卡易智慧医疗科技有限公司 | New coronary pneumonia CT detects discernment positioning system and computing equipment |
CN112967287A (en) * | 2021-01-29 | 2021-06-15 | 平安科技(深圳)有限公司 | Gastric cancer focus identification method, device, equipment and storage medium based on image processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shaziya et al. | Automatic lung segmentation on thoracic CT scans using U-net convolutional network | |
CN106485695B (en) | Medical image Graph Cut dividing method based on statistical shape model | |
KR20210028226A (en) | Automatic determination of the normal posture of 3D objects and the superposition of 3D objects using deep learning | |
CN105719278B (en) | A kind of medical image cutting method based on statistics deformation model | |
Tang et al. | CT image enhancement using stacked generative adversarial networks and transfer learning for lesion segmentation improvement | |
CN109063710A (en) | Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features | |
CN109389584A (en) | Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN | |
CN113706514B (en) | Focus positioning method, device, equipment and storage medium based on template image | |
WO2024104035A1 (en) | Long short-term memory self-attention model-based three-dimensional medical image segmentation method and system | |
CN112258514B (en) | Segmentation method of pulmonary blood vessels of CT (computed tomography) image | |
Wang et al. | Fully automatic intervertebral disc segmentation using multimodal 3D U-Net | |
CN110570394A (en) | medical image segmentation method, device, equipment and storage medium | |
CN107274406A (en) | A kind of method and device of detection sensitizing range | |
CN115578320A (en) | Full-automatic space registration method and system for orthopedic surgery robot | |
CN116664640A (en) | Microscopic digital image registration method suitable for histopathological staining sections | |
CN110490841B (en) | Computer-aided image analysis method, computer device and storage medium | |
CN117689697A (en) | US and CT image registration method based on point cloud registration and image feature registration | |
Liu et al. | 3-D prostate MR and TRUS images detection and segmentation for puncture biopsy | |
CN109559296B (en) | Medical image registration method and system based on full convolution neural network and mutual information | |
CN110992310A (en) | Method and device for determining partition where mediastinal lymph node is located | |
CN113962957A (en) | Medical image processing method, bone image processing method, device and equipment | |
CN117809122A (en) | Processing method, system, electronic equipment and medium for intracranial large blood vessel image | |
CN113610746A (en) | Image processing method and device, computer equipment and storage medium | |
CN117152173A (en) | Coronary artery segmentation method and system based on DUNetR model | |
CN117011246A (en) | Segmented vertebra CT image segmentation method and system based on transducer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||