CN116229189A - Image processing method, device, equipment and storage medium based on fluorescence endoscope


Info

Publication number: CN116229189A (granted publication: CN116229189B)
Application number: CN202310522623.4A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, endoscope, images, feature, target
Inventor: 陆汇海
Assignee (original and current): Shenzhen Bosheng Medical Technology Co., Ltd.
Legal status: Granted; Active

Classifications

    • G06V 10/764 Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06T 7/0012 Image analysis: biomedical image inspection
    • G06T 7/11 Image analysis: region-based segmentation
    • G06T 7/73 Image analysis: determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/763 Image or video recognition using pattern recognition or machine learning: clustering, non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/806 Image or video recognition using pattern recognition or machine learning: fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82 Image or video recognition using pattern recognition or machine learning: neural networks
    • G06T 2207/10064 Image acquisition modality: fluorescence image
    • G06T 2207/10068 Image acquisition modality: endoscopic image
    • G06T 2207/30028 Subject of image (biomedical image processing): colon; small intestine

Abstract

The invention relates to the field of image processing and discloses a fluorescence endoscope-based image processing method, apparatus, device, and storage medium for improving the image processing accuracy of fluorescence endoscopes. The method comprises the following steps: performing image position relationship analysis on a plurality of endoscope region images to obtain a target image position relationship; determining, according to the target image position relationship, a plurality of feature classification boundaries between the endoscope region images and the image boundary features corresponding to each feature classification boundary; fusing the image boundary features corresponding to each feature classification boundary with at least one endoscope region image to generate a plurality of feature fusion images; inputting the feature fusion images into an endoscope image analysis model for endoscope image analysis to obtain the image analysis result corresponding to each feature fusion image; and generating an abnormality detection result for the target detection area according to the feature fusion image position relationship and the image analysis results.

Description

Image processing method, device, equipment and storage medium based on fluorescence endoscope
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a storage medium for processing an image based on a fluorescence endoscope.
Background
Fluorescence endoscopes are widely used in early screening and diagnosis of the digestive and urinary systems, but their image processing techniques still face challenges such as poor image quality and difficult feature extraction. At present, the commonly used fluorescence endoscope image processing methods rely on traditional manual segmentation and feature extraction, which demand a great deal of time and effort from trained professionals and therefore suffer from strong subjectivity, complicated operation, and unstable results.
However, existing schemes also suffer from insufficient training samples, unreasonable feature selection, and poor classifier robustness, leading to poor classification performance and a high misdiagnosis rate and hindering their adoption in practical applications.
Disclosure of Invention
The invention provides an image processing method, device and equipment based on a fluorescence endoscope and a storage medium, which are used for improving the image processing accuracy of the fluorescence endoscope.
The first aspect of the present invention provides a fluorescence endoscope-based image processing method, comprising:
acquiring at least one group of fluorescence endoscope images of a target detection area, inputting the at least one group of fluorescence endoscope images into a preset image segmentation model to carry out image area segmentation, and obtaining a plurality of endoscope area images;
performing image position relation analysis on the plurality of endoscope area images to obtain a target image position relation;
determining a plurality of feature classification boundaries among the plurality of endoscope region images according to the target image position relationship, and acquiring image boundary features corresponding to each feature classification boundary;
image feature fusion is carried out on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image, so that a plurality of feature fusion images are generated;
inputting the feature fusion images into a preset endoscope image analysis model to carry out endoscope image analysis, so as to obtain an image analysis result corresponding to each feature fusion image;
and respectively comparing analysis results of every two adjacent feature fusion images according to the position relation of the feature fusion images and the image analysis results to obtain a plurality of comparison results, and generating an abnormal detection result of the target detection area according to the plurality of comparison results.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the acquiring at least one set of fluorescence endoscope images of the target detection area, and inputting the at least one set of fluorescence endoscope images into a preset image segmentation model to perform image region segmentation, to obtain a plurality of endoscope region images, includes:
scanning a target detection area through a preset fluorescent endoscope, and collecting at least one group of fluorescent endoscope images of the target detection area;
performing image segmentation parameter analysis and image structure analysis on the at least one group of fluorescence endoscope images to obtain target segmentation parameters and image structure information;
and calling a preset image segmentation model to segment the image region of the fluorescence endoscope image according to the target segmentation parameters and the image structure information, so as to obtain a plurality of endoscope region images.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the performing image position relationship analysis on the plurality of endoscope area images to obtain a target image position relationship includes:
extracting feature points of the plurality of endoscope area images to obtain at least one first feature point corresponding to each endoscope area image, and storing the at least one first feature point into a preset feature library;
according to the first feature points in the feature library, similar region extraction is carried out on the plurality of endoscope region images, and a target region is obtained;
and according to the target area, calculating the image position relation of the plurality of endoscope area images to obtain the position relation of the target image.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the determining a plurality of feature classification boundaries between the plurality of endoscope region images according to the target image positional relationship, and acquiring an image boundary feature corresponding to each feature classification boundary, includes:
performing three-dimensional space mapping on the plurality of endoscope region images according to the target image position relationship, and determining the geometric relationship among the plurality of endoscope region images;
extracting feature vectors of the plurality of endoscope area images according to a preset classifier to obtain feature vectors corresponding to each endoscope area image;
matrix combination is carried out on the feature vectors corresponding to each endoscope area image to generate a feature matrix, and a plurality of corresponding feature classification boundaries are constructed according to the feature matrix;
and extracting the image boundary characteristics corresponding to each characteristic classification boundary according to the geometric relationship.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the performing image feature fusion on the image boundary feature corresponding to each feature classification boundary and at least one endoscope area image to generate a plurality of feature fusion images includes:
performing association identification matching on the image boundary characteristics corresponding to each characteristic classification boundary and at least one endoscope region image to obtain association identifications corresponding to each endoscope region image;
and carrying out image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image based on the associated identifier corresponding to each endoscope region image, and generating a plurality of feature fusion images.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the inputting the plurality of feature fusion images into a preset endoscope image analysis model to perform endoscope image analysis, to obtain an image analysis result corresponding to each feature fusion image, includes:
inputting the feature fusion images into a preset endoscope image analysis model respectively, wherein the endoscope image analysis model comprises: three convolutional layers, two residual error networks and a logistic regression layer;
performing endoscopic image analysis on the feature fusion images through the endoscopic image analysis model to obtain an initial recognition result corresponding to each feature fusion image;
and marking the identification information of the feature fusion images according to the initial identification result corresponding to each feature fusion image to obtain an image analysis result corresponding to each feature fusion image.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the comparing analysis results of each two adjacent feature fusion images according to the feature fusion image position relationship and the image analysis result to obtain a plurality of comparison results, and generating an abnormal detection result of the target detection area according to the plurality of comparison results includes:
according to the position relation of the feature fusion images and the image analysis result, respectively comparing analysis results of every two adjacent feature fusion images to obtain a plurality of comparison results;
according to a preset mapping relation, mapping the characteristic values of the plurality of comparison results to obtain a target value corresponding to each comparison result;
carrying out average value operation on the target value corresponding to each comparison result to obtain a target average value;
comparing the target average value with a target value corresponding to each comparison result to generate an abnormal detection result of the target detection area, wherein the abnormal detection result comprises: abnormal region and abnormal feature information.
A second aspect of the present invention provides a fluorescence endoscope-based image processing apparatus including:
the acquisition module is used for acquiring at least one group of fluorescent endoscope images of the target detection area, inputting the at least one group of fluorescent endoscope images into a preset image segmentation model to carry out image area segmentation, and obtaining a plurality of endoscope area images;
the analysis module is used for analyzing the image position relationship of the plurality of endoscope area images to obtain a target image position relationship;
the determining module is used for determining a plurality of feature classification boundaries among the plurality of endoscope region images according to the target image position relationship and acquiring image boundary features corresponding to each feature classification boundary;
the fusion module is used for carrying out image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image to generate a plurality of feature fusion images;
the processing module is used for inputting the feature fusion images into a preset endoscope image analysis model to carry out endoscope image analysis to obtain an image analysis result corresponding to each feature fusion image;
the generation module is used for comparing analysis results of every two adjacent feature fusion images according to the position relation of the feature fusion images and the image analysis results to obtain a plurality of comparison results, and generating an abnormal detection result of the target detection area according to the plurality of comparison results.
A third aspect of the present invention provides a fluorescence endoscope-based image processing device, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the fluorescence endoscope-based image processing device to perform the fluorescence endoscope-based image processing method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the above-described fluorescence endoscope-based image processing method.
In the technical scheme provided by the invention, image position relationship analysis is performed on a plurality of endoscope region images to obtain a target image position relationship; a plurality of feature classification boundaries between the endoscope region images, and the image boundary features corresponding to each feature classification boundary, are determined according to the target image position relationship; the image boundary features corresponding to each feature classification boundary are fused with at least one endoscope region image to generate a plurality of feature fusion images; the feature fusion images are input into an endoscope image analysis model for endoscope image analysis to obtain the image analysis result corresponding to each feature fusion image; and an abnormality detection result for the target detection area is generated according to the feature fusion image position relationship and the image analysis results. By combining image segmentation, feature extraction, and machine learning, the scheme automatically recognizes and classifies target areas in fluorescence endoscope images, which effectively improves the accuracy and reliability of image recognition, allows large numbers of fluorescence endoscope images to be classified and identified rapidly, and avoids the heavy expenditure of time and effort required by traditional manual operation.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a fluorescence endoscope-based image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of image position relationship analysis in an embodiment of the invention;
FIG. 3 is a flow chart of determining a plurality of feature classification boundaries in an embodiment of the invention;
FIG. 4 is a flow chart of endoscopic image analysis in an embodiment of the present invention;
FIG. 5 is a schematic view of an embodiment of a fluorescence endoscope-based image processing apparatus in an embodiment of the present invention;
fig. 6 is a schematic diagram of an embodiment of a fluorescence endoscope-based image processing apparatus in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides an image processing method, device and equipment based on a fluorescence endoscope and a storage medium, which are used for improving the image processing accuracy of the fluorescence endoscope. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to fig. 1, and one embodiment of a fluorescence endoscope-based image processing method in an embodiment of the present invention includes:
s101, acquiring at least one group of fluorescence endoscope images of a target detection area, inputting the at least one group of fluorescence endoscope images into a preset image segmentation model to carry out image area segmentation, and obtaining a plurality of endoscope area images;
It is to be understood that the execution subject of the present invention may be a fluorescence endoscope-based image processing apparatus, and may also be a terminal or a server, which is not limited herein. The embodiments of the invention are described below taking a server as the execution subject by way of example.
Specifically, the server connects to the fluorescence endoscope equipment through a corresponding interface or protocol, acquires at least one group of fluorescence endoscope images, and uploads the acquired images to the computing environment where the preset image segmentation model resides. The computing environment may be a cloud server, a data center, a local computer, or the like. The preset image segmentation model processes and analyzes the input fluorescence endoscope images to obtain a plurality of endoscope region images, which are returned to and received by the server.
S102, analyzing the image position relationship of a plurality of endoscope area images to obtain a target image position relationship;
Specifically, the endoscope region images obtained in the previous step may represent different targets or different viewing angles. In the embodiment of the invention, the server may use an image feature extraction method to convert each endoscope region image into a set of feature vectors that represent key information in the image, such as color, texture, and shape; suitable feature extraction methods include, but are not limited to, SIFT, SURF, and HOG. The positional relationship between the region images is then determined by calculating the similarity between their feature vectors, for which cosine similarity, Euclidean distance, correlation coefficients, and the like may be used. Finally, the server analyzes the positional relationships among the plurality of endoscope regions according to these similarities to obtain the target image position relationship.
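As an illustrative sketch of this step (not the patented implementation itself; the file names are assumptions, and SIFT requires opencv-python 4.4 or later), feature descriptors can be extracted and compared as follows:

```python
import cv2
import numpy as np

def region_descriptor(image_path: str) -> np.ndarray:
    """Extract SIFT keypoint descriptors and mean-pool them into one vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    # Mean-pool the per-keypoint descriptors into a single feature vector.
    return descriptors.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity between two endoscope region images; a higher score suggests
# the regions overlap or depict the same structure ("region_*.png" assumed).
sim = cosine_similarity(region_descriptor("region_1.png"),
                        region_descriptor("region_2.png"))
```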
S103, determining a plurality of feature classification boundaries among a plurality of endoscope region images according to the position relation of the target image, and acquiring image boundary features corresponding to each feature classification boundary;
The server groups the endoscope region images according to the target image position relationship obtained in the previous step, so that the images in each group share similar characteristics. For example, if a group contains endoscope region images taken from several viewpoints of the same organ, those images are likely to be similar in feature space. The features in each group of endoscope region images are then clustered or classified to determine a plurality of feature classification boundaries, the boundaries between the resulting categories being the feature classification boundaries. The server further extracts, for each feature classification boundary and the endoscope region images on both sides of it, the corresponding boundary features, for example the shape, color, and texture of the image boundary. Finally, in the embodiment of the invention, the extracted image boundary features may be screened and preprocessed according to the application scene: feature selection methods may pick the most relevant and discriminative features, and dimensionality reduction may shrink the feature dimensions to improve computational efficiency.
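A minimal clustering sketch of this grouping, under stated assumptions (random vectors stand in for the pooled region descriptors, and scikit-learn's KMeans is one possible clustering choice, not the one the patent prescribes):

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows are feature vectors of grouped endoscope region images
# (e.g. pooled color/texture/shape descriptors); values are illustrative.
features = np.random.rand(12, 128)

# Cluster the features; the partition between clusters plays the role of
# a feature classification boundary in this sketch.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_            # cluster assignment per region image
centers = kmeans.cluster_centers_  # one centroid per feature class
```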
S104, carrying out image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image to generate a plurality of feature fusion images;
Specifically, the server selects one or more images from the endoscope region images related to each feature classification boundary as fusion input, and chooses a feature fusion method suited to the application scene; fusion can be achieved through weighted averaging, maximum selection, summation, and the like. For example, a weighted average can weight the contribution of each feature to obtain the fused image. A corresponding feature vector, containing shape, texture, color, and similar information, is extracted for each selected endoscope region image, and the image boundary features corresponding to the feature classification boundaries are fused with these feature vectors to generate a plurality of feature fusion images.
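For example, a weighted-average fusion can be sketched as below; the file names and the 0.6/0.4 weights are illustrative assumptions, not values prescribed by the method:

```python
import cv2

# Fuse a boundary-feature map with a region image by weighted average;
# both images are assumed to have the same size and type.
region = cv2.imread("region_1.png")
boundary_map = cv2.imread("boundary_features.png")
fused = cv2.addWeighted(region, 0.6, boundary_map, 0.4, 0.0)
cv2.imwrite("feature_fusion_1.png", fused)
```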
S105, respectively inputting a plurality of feature fusion images into a preset endoscope image analysis model to carry out endoscope image analysis, and obtaining an image analysis result corresponding to each feature fusion image;
Specifically, the server selects a suitable model from existing endoscope image analysis models according to the task requirements and feeds each feature fusion image into the selected model for processing. A convolutional neural network extracts features from the input images, which may include shape, color, and texture information as well as high-level semantic information, and image analysis is performed on the basis of the selected model and the extracted features. In a target detection task, algorithms such as bounding-box regression and classifiers can detect targets in the image; in a target tracking task, algorithms such as Kalman filtering and particle filtering can track the motion trajectory of a target. The image analysis result corresponding to each feature fusion image is finally output to a visual interface or a database for the user to view and use.
S106, respectively comparing analysis results of every two adjacent feature fusion images according to the position relation of the feature fusion images and the image analysis results to obtain a plurality of comparison results, and generating an abnormal detection result of the target detection area according to the plurality of comparison results.
Specifically, every two adjacent feature fusion images are first preprocessed, including operations such as resizing and normalization; the similarity or distance between each adjacent pair, for example the Euclidean distance or cosine similarity, is then calculated to determine whether an abnormality exists between them. The server further gathers all comparison results to obtain the target detection areas in which an abnormality may exist and, based on the location and feature information of these areas, screens and confirms whether an abnormality is really present. Finally, the abnormal region may be marked with a red frame for manual inspection and confirmation.
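A hedged sketch of the adjacent-pair comparison (stand-in random arrays replace the real preprocessed fusion images; Euclidean distance is one of the measures named above):

```python
import numpy as np

def compare_adjacent(fused_images: list[np.ndarray]) -> list[float]:
    """Euclidean distance between each pair of adjacent fused images,
    assuming resizing and normalization have already been applied."""
    results = []
    for a, b in zip(fused_images, fused_images[1:]):
        results.append(float(np.linalg.norm(a.astype(float) - b.astype(float))))
    return results

# Illustrative stand-ins for preprocessed feature fusion images.
images = [np.random.rand(64, 64) for _ in range(5)]
comparison_results = compare_adjacent(images)
```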
In the embodiment of the invention, image position relationship analysis is performed on a plurality of endoscope region images to obtain a target image position relationship; a plurality of feature classification boundaries between the endoscope region images, and the image boundary features corresponding to each feature classification boundary, are determined according to the target image position relationship; the image boundary features corresponding to each feature classification boundary are fused with at least one endoscope region image to generate a plurality of feature fusion images; the feature fusion images are input into an endoscope image analysis model for endoscope image analysis to obtain the image analysis result corresponding to each feature fusion image; and an abnormality detection result for the target detection area is generated according to the feature fusion image position relationship and the image analysis results. By combining image segmentation, feature extraction, and machine learning, the scheme automatically recognizes and classifies target areas in fluorescence endoscope images, which effectively improves the accuracy and reliability of image recognition, allows large numbers of fluorescence endoscope images to be classified and identified rapidly, and avoids the heavy expenditure of time and effort required by traditional manual operation.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) Scanning a target detection area through a preset fluorescent endoscope, and collecting at least one group of fluorescent endoscope images of the target detection area;
(2) Performing image segmentation parameter analysis and image structure analysis on at least one group of fluorescence endoscope images to obtain target segmentation parameters and image structure information;
(3) And according to the target segmentation parameters and the image structure information, calling a preset image segmentation model to segment the image region of the fluorescence endoscope image, so as to obtain a plurality of endoscope region images.
Specifically, the region to be detected is scanned by a preset fluorescence endoscope scanner and at least one group of fluorescence endoscope images is collected. Image segmentation parameter analysis and image structure analysis are performed on the acquired images to obtain the target segmentation parameters and image structure information; this step can be implemented with various image processing tools or custom algorithms that extract the feature and structure information of the image to determine the target region. Finally, according to the target segmentation parameters and the image structure information, a preset image segmentation model is invoked to segment the image regions of the fluorescence endoscope images, yielding a plurality of endoscope region images. This step is typically implemented with a deep learning framework such as PyTorch or TensorFlow, and can also be implemented with custom algorithms written in Python. By segmenting the endoscopic image into a plurality of regions, each region can be analyzed further.
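A minimal PyTorch sketch of invoking such a segmentation model follows; the model file, input file, and per-region splitting are illustrative assumptions, not the patent's prescribed artifacts:

```python
import torch
import torchvision.transforms.functional as TF
from PIL import Image

# Load a pre-trained segmentation network saved as a TorchScript module;
# "segmentation_model.pt" is an assumed artifact name.
model = torch.jit.load("segmentation_model.pt").eval()

image = Image.open("fluorescence_frame.png").convert("RGB")
tensor = TF.to_tensor(image).unsqueeze(0)  # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(tensor)              # assumed shape: (1, num_regions, H, W)
mask = logits.argmax(dim=1).squeeze(0)  # per-pixel region label map

# Split the frame into one masked image per segmented region.
region_images = [tensor[0] * (mask == r) for r in mask.unique()]
```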
In a specific embodiment, as shown in fig. 2, the process of executing step S102 may specifically include the following steps:
s201, extracting feature points of a plurality of endoscope area images to obtain at least one first feature point corresponding to each endoscope area image, and storing the at least one first feature point into a preset feature library;
s202, extracting similar areas from a plurality of endoscope area images according to first feature points in a feature library to obtain a target area;
s203, calculating the image position relation of the plurality of endoscope area images according to the target area to obtain the position relation of the target image.
Specifically, the server extracts feature points from the plurality of endoscope region images, obtains at least one first feature point corresponding to each endoscope region image, and stores these first feature points in a preset feature library. Similar regions are extracted from the endoscope region images according to the first feature points in the feature library to obtain a target region, and the image position relationship of the endoscope region images is then calculated from the target region to obtain the target image position relationship. In the invention, the server distributes these tasks across multiple servers for simultaneous processing using distributed computing, improving computational efficiency and reducing processing time. It should be noted that in practical applications the choice and parameters of the feature point extraction algorithm, the threshold for similar region extraction, and the accuracy requirements of the position relationship calculation all need to be tuned and optimized for the specific scene. Through this procedure, the target region can be accurately extracted from the plurality of endoscope region images and the positional relationships among them calculated for subsequent analysis and processing.
In a specific embodiment, as shown in fig. 3, the process of executing step S103 may specifically include the following steps:
s301, performing three-dimensional space mapping on a plurality of endoscope region images according to the position relation of the target image, and determining the geometric relation among the plurality of endoscope region images;
s302, extracting feature vectors of a plurality of endoscope area images according to a preset classifier to obtain feature vectors corresponding to each endoscope area image;
s303, performing matrix combination on feature vectors corresponding to each endoscope region image to generate a feature matrix, and constructing a plurality of corresponding feature classification boundaries according to the feature matrix;
s304, extracting image boundary features corresponding to each feature classification boundary according to the geometric relationship.
Specifically, the server maps the plurality of endoscope region images into three-dimensional space according to the target image position relationship and determines the geometric relationships among them: two adjacent endoscope images are converted into a 3D scene through binocular stereoscopic vision, and their spatial position and orientation relationship are determined through the coordinate system of that scene. Feature vectors are then extracted from the endoscope region images with a preset classifier to obtain the feature vector corresponding to each image; in the embodiment of the invention, the feature vector extraction model can be realized with a deep learning framework such as PyTorch or TensorFlow, or with other machine learning algorithms such as SVM and KNN. The feature vectors corresponding to the endoscope region images are combined into a feature matrix, and a plurality of corresponding feature classification boundaries are constructed from that matrix.
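The classifier-based boundary construction can be sketched with scikit-learn's SVM, one of the algorithms named above; the feature matrix and labels are random stand-ins for the extracted vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Feature matrix: one row per endoscope region image (values illustrative).
feature_matrix = np.random.rand(20, 64)
labels = np.random.randint(0, 3, size=20)  # stand-in region classes

# A linear SVM; its decision function defines the feature classification
# boundaries between region classes in this sketch.
clf = SVC(kernel="linear").fit(feature_matrix, labels)
boundary_scores = clf.decision_function(feature_matrix)
```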
In a specific embodiment, the process of executing step S104 may specifically include the following steps:
(1) Performing association identification matching on the image boundary characteristics corresponding to each characteristic classification boundary and at least one endoscope region image to obtain association identifications corresponding to each endoscope region image;
(2) And carrying out image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image based on the associated identifier corresponding to each endoscope region image, and generating a plurality of feature fusion images.
Specifically, association identification matching is performed between the image boundary features corresponding to each feature classification boundary and at least one endoscope region image to obtain the association identifier corresponding to each endoscope region image. The server matches them by calculating the similarity between the two, for example cosine similarity, and stores the association identifier of the endoscope region image once the match succeeds. Based on the association identifier corresponding to each endoscope region image, the image boundary features corresponding to each feature classification boundary are then fused with at least one endoscope region image to generate a plurality of feature fusion images. For the fusion itself, various algorithms from the image processing field may be used, for example weighted averaging, the Laplacian pyramid method, or the wavelet transform method; specifically, the association identifier of each endoscope region image can serve as the weight when calculating the fused value of the image boundary features corresponding to each feature classification boundary.
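A small sketch of this association identification matching, with illustrative random features; treating the winning similarity score as both the association identifier and the fusion weight is one plausible reading of the passage, not a prescribed rule:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

boundary_feature = np.random.rand(64)               # illustrative stand-in
region_features = {"region_1": np.random.rand(64),  # assumed region IDs
                   "region_2": np.random.rand(64)}

# Associate the boundary with the most similar region image; keep the
# similarity score as the association identifier / fusion weight.
scores = {rid: cosine(boundary_feature, f) for rid, f in region_features.items()}
assoc_id, weight = max(scores.items(), key=lambda kv: kv[1])
```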
In a specific embodiment, as shown in fig. 4, the process of performing step S105 may specifically include the following steps:
s401, respectively inputting a plurality of feature fusion images into a preset endoscope image analysis model, wherein the endoscope image analysis model comprises: three convolutional layers, two residual error networks and a logistic regression layer;
s402, respectively carrying out endoscopic image analysis on a plurality of feature fusion images through an endoscopic image analysis model to obtain an initial recognition result corresponding to each feature fusion image;
s403, according to the initial recognition result corresponding to each feature fusion image, performing recognition information labeling on the feature fusion images to obtain an image analysis result corresponding to each feature fusion image.
Specifically, the server inputs the plurality of feature fusion images into a preset endoscope image analysis model comprising three convolutional layers, two residual networks, and a logistic regression layer. The server constructs this convolutional neural network (CNN) with a deep learning framework such as PyTorch or TensorFlow to extract and classify image features, and analyzes the feature fusion images through the model to obtain the initial recognition result corresponding to each image: the image data is fed into the network and computed along its structure by the forward propagation algorithm until the final recognition result is output. The server then labels the feature fusion images with identification information according to each initial recognition result, obtaining the image analysis result corresponding to each feature fusion image.
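The architecture named here (three convolutional layers, two residual networks, a logistic regression layer) admits the following PyTorch reading; the channel widths and input size are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Skip connection: output = x + F(x), the residual pattern.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class EndoscopeAnalysisModel(nn.Module):
    """Three conv layers, two residual blocks, logistic-regression head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            ResidualBlock(64),
            ResidualBlock(64),
            nn.AdaptiveAvgPool2d(1),
        )
        # Linear + sigmoid acts as the logistic regression layer.
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(64, 1),
                                        nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))

model = EndoscopeAnalysisModel().eval()
with torch.no_grad():
    score = model(torch.rand(1, 3, 224, 224))  # probability per fused image
```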
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) According to the position relation of the feature fusion images and the image analysis result, respectively comparing analysis results of every two adjacent feature fusion images to obtain a plurality of comparison results;
(2) According to a preset mapping relation, mapping characteristic values of a plurality of comparison results to obtain a target value corresponding to each comparison result;
(3) Carrying out average value operation on the target value corresponding to each comparison result to obtain a target average value;
(4) Comparing the target average value with a target value corresponding to each comparison result to generate an abnormal detection result of the target detection area, wherein the abnormal detection result comprises: abnormal region and abnormal feature information.
Specifically, according to the feature fusion image position relationship and the image analysis results, the analysis results of every two adjacent feature fusion images are compared to obtain a plurality of comparison results; the server calculates the distance or degree of overlap between the fused images and judges from the comparison whether a target object exists, together with its position, size, and other information. The feature values of the comparison results are then mapped according to a preset mapping relationship to obtain the target value corresponding to each comparison result; the server stores this mapping in a hash table data structure, for example mapping a specific comparison result to a fixed number or string. Finally, the target values corresponding to the comparison results are averaged to obtain a target mean, and the mean is compared against the target value of each comparison result to generate the abnormality detection result for the target detection region, which includes the abnormal region and abnormal feature information. In this step, a threshold can be set to judge whether the target mean is close to the target value of each comparison result, and thereby whether the target object is abnormal.
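A hedged sketch of the mapping, averaging, and threshold comparison; the bucket boundaries, mapped values, and threshold are assumptions for illustration, not values prescribed by the method:

```python
import numpy as np

# Illustrative mapping from discretized comparison results to target values,
# standing in for the hash-table mapping described above.
def to_target_value(result: float) -> float:
    return 0.0 if result < 0.5 else 1.0 if result < 1.5 else 2.0

comparison_results = [0.2, 0.4, 1.8, 0.3]  # stand-in comparison scores
targets = np.array([to_target_value(r) for r in comparison_results])
mean = targets.mean()

# Flag comparisons whose target value deviates from the mean past a threshold.
THRESHOLD = 0.8  # assumed value
anomalies = [i for i, t in enumerate(targets) if abs(t - mean) > THRESHOLD]
```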
The above describes the image processing method based on the fluorescence endoscope in the embodiment of the present invention, and the following describes the image processing apparatus based on the fluorescence endoscope in the embodiment of the present invention, referring to fig. 5, one embodiment of the image processing apparatus based on the fluorescence endoscope in the embodiment of the present invention includes:
the acquisition module 501 is configured to acquire at least one set of fluorescence endoscope images of a target detection area, and input the at least one set of fluorescence endoscope images into a preset image segmentation model to perform image area segmentation, so as to obtain a plurality of endoscope area images;
the analysis module 502 is configured to perform image position relationship analysis on the multiple endoscope area images to obtain a target image position relationship;
a determining module 503, configured to determine a plurality of feature classification boundaries between the plurality of endoscope area images according to the target image position relationship, and obtain image boundary features corresponding to each feature classification boundary;
the fusion module 504 is configured to perform image feature fusion on the image boundary feature corresponding to each feature classification boundary and at least one endoscope region image, so as to generate a plurality of feature fusion images;
the processing module 505 is configured to input the plurality of feature fusion images into a preset endoscope image analysis model to perform endoscope image analysis, so as to obtain an image analysis result corresponding to each feature fusion image;
and the generating module 506 is configured to compare analysis results of each two adjacent feature fusion images according to the feature fusion image position relationship and the image analysis results, obtain a plurality of comparison results, and generate an abnormal detection result of the target detection area according to the plurality of comparison results.
Through the cooperation of the above components, image position relationship analysis is performed on a plurality of endoscope region images to obtain a target image position relationship; a plurality of feature classification boundaries between the endoscope region images, and the image boundary features corresponding to each feature classification boundary, are determined according to the target image position relationship; the image boundary features corresponding to each feature classification boundary are fused with at least one endoscope region image to generate a plurality of feature fusion images; the feature fusion images are input into an endoscope image analysis model for endoscope image analysis to obtain the image analysis result corresponding to each feature fusion image; and an abnormality detection result for the target detection area is generated according to the feature fusion image position relationship and the image analysis results. By combining image segmentation, feature extraction, and machine learning, the apparatus automatically recognizes and classifies target areas in fluorescence endoscope images, which effectively improves the accuracy and reliability of image recognition, allows large numbers of fluorescence endoscope images to be classified and identified rapidly, and avoids the heavy expenditure of time and effort required by traditional manual operation.
The fluorescence endoscope-based image processing apparatus in the embodiment of the present invention is described in detail from the point of view of the modularized functional entity in fig. 5 above, and the fluorescence endoscope-based image processing device in the embodiment of the present invention is described in detail from the point of view of hardware processing below.
Fig. 6 is a schematic structural diagram of a fluorescence endoscope-based image processing device 600 according to an embodiment of the present invention. The device 600 may vary considerably with configuration or performance and may include one or more processors (central processing units, CPU) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632, where the memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may comprise a series of instruction operations on the device 600. Further, the processor 610 may be configured to communicate with the storage medium 630 and execute the series of instruction operations in the storage medium 630 on the fluorescence endoscope-based image processing device 600.
The fluorescence endoscope-based image processing device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the device structure shown in fig. 6 does not limit the fluorescence endoscope-based image processing device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The present invention also provides a fluorescence endoscope-based image processing apparatus including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the fluorescence endoscope-based image processing method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and which may also be a volatile computer readable storage medium, the computer readable storage medium having instructions stored therein, which when executed on a computer, cause the computer to perform the steps of the fluorescence endoscope-based image processing method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing method based on a fluorescence endoscope, characterized in that the image processing method based on the fluorescence endoscope comprises the following steps:
acquiring at least one group of fluorescence endoscope images of a target detection area, inputting the at least one group of fluorescence endoscope images into a preset image segmentation model to carry out image area segmentation, and obtaining a plurality of endoscope area images;
performing image position relation analysis on the plurality of endoscope area images to obtain a target image position relation;
determining a plurality of feature classification boundaries among the plurality of endoscope region images according to the target image position relationship, and acquiring image boundary features corresponding to each feature classification boundary;
image feature fusion is carried out on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image, so that a plurality of feature fusion images are generated;
inputting the feature fusion images into a preset endoscope image analysis model to carry out endoscope image analysis, so as to obtain an image analysis result corresponding to each feature fusion image;
and respectively comparing analysis results of every two adjacent feature fusion images according to the position relation of the feature fusion images and the image analysis results to obtain a plurality of comparison results, and generating an abnormal detection result of the target detection area according to the plurality of comparison results.
2. The fluorescence endoscope-based image processing method according to claim 1, wherein the acquiring at least one set of fluorescence endoscope images of the target detection region and inputting the at least one set of fluorescence endoscope images into a preset image segmentation model for image region segmentation to obtain a plurality of endoscope region images comprises:
scanning a target detection area through a preset fluorescent endoscope, and collecting at least one group of fluorescent endoscope images of the target detection area;
performing image segmentation parameter analysis and image structure analysis on the at least one group of fluorescence endoscope images to obtain target segmentation parameters and image structure information;
and calling a preset image segmentation model to segment the image region of the fluorescence endoscope image according to the target segmentation parameters and the image structure information, so as to obtain a plurality of endoscope region images.
3. The fluorescence endoscope-based image processing method according to claim 1, wherein the performing image positional relationship analysis on the plurality of endoscope region images to obtain a target image positional relationship comprises:
extracting feature points of the plurality of endoscope area images to obtain at least one first feature point corresponding to each endoscope area image, and storing the at least one first feature point into a preset feature library;
according to the first feature points in the feature library, similar region extraction is carried out on the plurality of endoscope region images, and a target region is obtained;
and according to the target area, calculating the image position relation of the plurality of endoscope area images to obtain the position relation of the target image.
4. The fluorescence endoscope-based image processing method according to claim 1, wherein the determining a plurality of feature classification boundaries among the plurality of endoscope region images according to the target image positional relationship, and acquiring the image boundary features corresponding to each feature classification boundary comprises:
performing three-dimensional space mapping on the plurality of endoscope region images according to the target image positional relationship, and determining the geometric relationships among the plurality of endoscope region images;
extracting feature vectors from the plurality of endoscope region images through a preset classifier to obtain a feature vector corresponding to each endoscope region image;
performing matrix combination on the feature vectors corresponding to the endoscope region images to generate a feature matrix, and constructing a plurality of corresponding feature classification boundaries according to the feature matrix;
and extracting the image boundary features corresponding to each feature classification boundary according to the geometric relationships.
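Claim 4 leaves the "preset classifier" open; the sketch below assumes it is linear, so that each feature classification boundary constructed from the stacked feature matrix is simply a separating hyperplane. This is one possible reading, not the patent's stated method.

```python
# Illustrative only: linear SVM hyperplanes as "feature classification boundaries".
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# One toy 8-D feature vector per endoscope region image, three region classes.
feature_matrix = np.vstack([rng.normal(loc=c, size=(20, 8)) for c in (0.0, 2.0, 4.0)])
labels = np.repeat([0, 1, 2], 20)

clf = LinearSVC().fit(feature_matrix, labels)
# coef_ and intercept_ encode one hyperplane per class - the plural
# "feature classification boundaries" of the claim.
for cls, (w, b) in enumerate(zip(clf.coef_, clf.intercept_)):
    print(f"class {cls}: |w| = {np.linalg.norm(w):.2f}, intercept = {b:.2f}")
```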
5. The fluorescence endoscope-based image processing method according to claim 1, wherein the performing image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image to generate a plurality of feature fusion images comprises:
performing associated-identifier matching on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image to obtain an associated identifier corresponding to each endoscope region image;
and performing image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image based on the associated identifier corresponding to each endoscope region image to generate a plurality of feature fusion images.
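A minimal reading of claim 5 in code, assuming the "associated identifier" is a shared key that links each region image to its boundary-feature map, and that the fusion is a channel stack; both choices are illustrative, not claimed.

```python
# Hypothetical sketch: identifier-keyed matching followed by channel-stack fusion.
import numpy as np

region_images = {f"region_{i}": np.random.rand(64, 64) for i in range(4)}
# Boundary features matched to regions through the same associated identifier.
boundary_features = {k: np.hypot(*np.gradient(v)) for k, v in region_images.items()}

feature_fusion_images = {
    k: np.stack([region_images[k], boundary_features[k]], axis=-1)  # H x W x 2
    for k in region_images
}
print(next(iter(feature_fusion_images.values())).shape)  # (64, 64, 2)
```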
6. The fluorescence endoscope-based image processing method according to claim 1, wherein the inputting the plurality of feature fusion images into a preset endoscope image analysis model for endoscope image analysis to obtain an image analysis result corresponding to each feature fusion image comprises:
inputting the plurality of feature fusion images into a preset endoscope image analysis model, wherein the endoscope image analysis model comprises: three convolutional layers, two residual networks and a logistic regression layer;
performing endoscope image analysis on the plurality of feature fusion images through the endoscope image analysis model to obtain an initial recognition result corresponding to each feature fusion image;
and marking identification information on the plurality of feature fusion images according to the initial recognition result corresponding to each feature fusion image to obtain an image analysis result corresponding to each feature fusion image.
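Claim 6 fixes only the topology of the analysis model (three convolutional layers, two residual networks, a logistic regression layer); the kernel sizes, channel widths, activations, and pooling in the PyTorch sketch below are assumptions of this illustration.

```python
# Hypothetical PyTorch realization of the claimed three-conv / two-residual /
# logistic-regression topology; all hyperparameters are assumed.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity skip connection

class EndoscopeAnalysisModel(nn.Module):
    def __init__(self, in_ch: int = 2, n_classes: int = 2):
        super().__init__()
        self.convs = nn.Sequential(  # three convolutional layers
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.res = nn.Sequential(ResidualBlock(32), ResidualBlock(32))  # two residual networks
        self.head = nn.Linear(32, n_classes)  # logistic regression layer (softmax in the loss)

    def forward(self, x):
        x = self.res(self.convs(x))
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

logits = EndoscopeAnalysisModel()(torch.randn(4, 2, 64, 64))  # 4 feature fusion images
print(logits.shape)  # torch.Size([4, 2])
```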
7. The fluorescence endoscope-based image processing method according to claim 1, wherein the respectively comparing the analysis results of every two adjacent feature fusion images according to the feature fusion image positional relationship and the image analysis results to obtain a plurality of comparison results, and generating an abnormality detection result of the target detection area according to the plurality of comparison results comprises:
respectively comparing the analysis results of every two adjacent feature fusion images according to the feature fusion image positional relationship and the image analysis results to obtain a plurality of comparison results;
performing feature value mapping on the plurality of comparison results according to a preset mapping relationship to obtain a target value corresponding to each comparison result;
performing an average operation on the target values corresponding to the comparison results to obtain a target average value;
and comparing the target average value with the target value corresponding to each comparison result to generate an abnormality detection result of the target detection area, wherein the abnormality detection result comprises: an abnormal region and abnormal feature information.
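Read numerically, claim 7 maps each comparison result to a value, averages the values, and flags departures from that average; the mapping table and the "above average" rule below are illustrative assumptions, not the preset mapping relationship of the patent.

```python
# Hypothetical sketch of claim 7's value mapping, averaging, and comparison.
import numpy as np

comparison_results = ["match", "match", "partial", "mismatch", "match"]
value_map = {"match": 0.0, "partial": 0.5, "mismatch": 1.0}  # preset mapping relationship

target_values = np.array([value_map[c] for c in comparison_results])
target_average = target_values.mean()

# Comparisons whose target value exceeds the average mark candidate abnormal regions.
abnormal = target_values > target_average
print(f"average = {target_average:.2f}, "
      f"abnormal comparisons at indices {np.flatnonzero(abnormal).tolist()}")
```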
8. A fluorescence endoscope-based image processing device, characterized in that the fluorescence endoscope-based image processing device comprises:
an acquisition module, used for acquiring at least one group of fluorescence endoscope images of a target detection area, and inputting the at least one group of fluorescence endoscope images into a preset image segmentation model for image region segmentation to obtain a plurality of endoscope region images;
an analysis module, used for performing image positional relationship analysis on the plurality of endoscope region images to obtain a target image positional relationship;
a determining module, used for determining a plurality of feature classification boundaries among the plurality of endoscope region images according to the target image positional relationship, and acquiring the image boundary features corresponding to each feature classification boundary;
a fusion module, used for performing image feature fusion on the image boundary features corresponding to each feature classification boundary and at least one endoscope region image to generate a plurality of feature fusion images;
a processing module, used for inputting the plurality of feature fusion images into a preset endoscope image analysis model for endoscope image analysis to obtain an image analysis result corresponding to each feature fusion image;
and a generation module, used for respectively comparing the analysis results of every two adjacent feature fusion images according to the feature fusion image positional relationship and the image analysis results to obtain a plurality of comparison results, and generating an abnormality detection result of the target detection area according to the plurality of comparison results.
9. A fluorescence endoscope-based image processing apparatus, characterized by comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the fluorescence endoscope-based image processing apparatus to perform the fluorescence endoscope-based image processing method of any one of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the fluorescence endoscope-based image processing method of any one of claims 1-7.
CN202310522623.4A 2023-05-10 2023-05-10 Image processing method, device, equipment and storage medium based on fluorescence endoscope Active CN116229189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310522623.4A CN116229189B (en) 2023-05-10 2023-05-10 Image processing method, device, equipment and storage medium based on fluorescence endoscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310522623.4A CN116229189B (en) 2023-05-10 2023-05-10 Image processing method, device, equipment and storage medium based on fluorescence endoscope

Publications (2)

Publication Number Publication Date
CN116229189A 2023-06-06
CN116229189B CN116229189B (en) 2023-07-04

Family

ID=86573572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310522623.4A Active CN116229189B (en) 2023-05-10 2023-05-10 Image processing method, device, equipment and storage medium based on fluorescence endoscope

Country Status (1)

Country Link
CN (1) CN116229189B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152362A (en) * 2023-10-27 2023-12-01 深圳市中安视达科技有限公司 Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum
CN117204950A (en) * 2023-09-18 2023-12-12 普密特(成都)医疗科技有限公司 Endoscope position guiding method, device, equipment and medium based on image characteristics
CN117204950B (en) * 2023-09-18 2024-05-10 普密特(成都)医疗科技有限公司 Endoscope position guiding method, device, equipment and medium based on image characteristics

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504720A (en) * 2015-01-07 2015-04-08 四川大学 New prostate ultrasonoscopy segmentation technique
EP3304423A1 (en) * 2015-06-05 2018-04-11 Siemens Aktiengesellschaft Method and system for simultaneous scene parsing and model fusion for endoscopic and laparoscopic navigation
CN110766643A (en) * 2019-10-28 2020-02-07 电子科技大学 Microaneurysm detection method facing fundus images
CN111161279A (en) * 2019-12-12 2020-05-15 中国科学院深圳先进技术研究院 Medical image segmentation method and device and server
US20210209755A1 (en) * 2020-01-02 2021-07-08 Nabin K. Mishra Automatic lesion border selection based on morphology and color features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BRIAN JOHNSON: "Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping", ISPRS International Journal of Geo-Information, vol. 4, no. 1, pages 172-184 *

Also Published As

Publication number Publication date
CN116229189B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
US10990191B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
Sodhi et al. In-field segmentation and identification of plant structures using 3D imaging
JP6091560B2 (en) Image analysis method
JP6216508B2 (en) Method for recognition and pose determination of 3D objects in 3D scenes
US20170220892A1 (en) Edge-based recognition, systems and methods
CN104573614B (en) Apparatus and method for tracking human face
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN113128610A (en) Industrial part pose estimation method and system
EP3012781A1 (en) Method and apparatus for extracting feature correspondences from multiple images
JP2018128897A (en) Detection method and detection program for detecting attitude and the like of object
CN110197206B (en) Image processing method and device
KR101460313B1 (en) Apparatus and method for robot localization using visual feature and geometric constraints
CN116229189B (en) Image processing method, device, equipment and storage medium based on fluorescence endoscope
CN106415606B (en) A kind of identification based on edge, system and method
CN112364881A (en) Advanced sampling consistency image matching algorithm
CN116664585B (en) Scalp health condition detection method and related device based on deep learning
CN115018886B (en) Motion trajectory identification method, device, equipment and medium
CN114399731B (en) Target positioning method under supervision of single coarse point
Bergström et al. Integration of visual cues for robotic grasping
Liu et al. Geometrized Transformer for Self-Supervised Homography Estimation
JP2010073138A (en) Feature point detector, feature point detection method, and feature point detection program
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
Le et al. Geometry-Based 3D Object Fitting and Localizing in Grasping Aid for Visually Impaired
Venkatesan et al. Supervised and unsupervised learning approaches for tracking moving vehicles
Li et al. Rethinking scene representation: A saliency-driven hierarchical multi-scale resampling for RGB-D scene point cloud in robotic applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant