CN117204950B - Endoscope position guiding method, device, equipment and medium based on image characteristics - Google Patents


Info

Publication number: CN117204950B (granted publication of application CN202311202571.9A)
Authority: CN (China)
Prior art keywords: image, initial, feature, endoscope, preset
Legal status: Active (assumed status; not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN117204950A (application publication)
Inventors: 滕长青, 俞晓红, 刘青山
Current and original assignee: Pmt Chengdu Medical Technology Co ltd
Application filed by Pmt Chengdu Medical Technology Co ltd; priority to CN202311202571.9A
Published as CN117204950A (application) and CN117204950B (grant); application granted, anticipated expiration tracked


Abstract

The application relates to the technical field of image processing and discloses an endoscope position guiding method, device, equipment and medium based on image features. The method comprises: extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to a target endoscope according to the initial feature points; acquiring a standard image, and extracting the standard feature points corresponding to the standard image based on the same preset image feature extraction model; calculating deviation information between the initial feature points and the standard feature points; and determining a target area in the effective image based on the deviation information, and guiding the target endoscope to move to the target area. In this way, once the effective image has been determined from the initial feature points, its deviation from the standard feature points in the standard image is calculated, and the endoscope is guided to the target area according to that deviation information, improving the position-guidance accuracy of surgical endoscopes.

Description

Endoscope position guiding method, device, equipment and medium based on image characteristics
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an endoscope position guiding method, device, equipment, and medium based on image features.
Background
Endoscopes are used in both the medical and industrial fields. An endoscope used in the industrial field can, by inserting a flexible and slender insertion portion into a jet engine, factory piping or the like, inspect for defects such as scratches or corrosion.
Endoscopes are detection instruments that integrate traditional optics, ergonomics, precision machinery, modern electronics, mathematics, software and the like with image sensors, optical lenses, illumination sources and mechanical devices, and can enter the stomach orally, the lungs nasally, or the body through other natural orifices. An endoscope can reveal lesions that X-rays cannot display, and can be used together with biopsy tools to perform forceps biopsy, brush biopsy and needle-aspiration biopsy of living tissue, or with surgical tools to resect tumors, polyps and other lesions. In minimally invasive surgery guided by an endoscope, image structures, colors and textures are highly similar, so the false-match rate of visual feature points is high; the diagnosis and treatment process involves multidisciplinary technologies such as neurosurgery and oncology; and during surgery the instruments must avoid cutting important tissues and organs while removing the focus, which relies on feature matching of endoscope images to obtain an accurate three-dimensional reconstruction of the soft-tissue surface. Therefore, how to improve the position-guidance accuracy of a surgical endoscope is a technical problem to be solved at present.
Disclosure of Invention
The application provides an endoscope position guiding method, device, equipment and medium based on image characteristics, so as to improve the position guiding accuracy of an endoscope for operation.
In a first aspect, the present application provides an endoscope position guidance method based on image features, the method comprising:
extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to a target endoscope according to the initial feature points;
acquiring a standard image, and extracting standard feature points corresponding to the standard image based on the preset image feature extraction model;
calculating deviation information of the initial feature points and the standard feature points; and
determining a target area in the effective image based on the deviation information, and guiding the target endoscope to move to the target area.
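The four steps above can be sketched in code. This is a minimal illustration, not the patented method: treating the "deviation information" as the mean offset between matched feature points, and the target area as the current view centre shifted by that offset, are assumptions, as are all function names.

```python
import numpy as np

def compute_deviation(initial_pts, standard_pts):
    # Hypothetical stand-in for step S30: mean (row, col) offset between
    # matched initial and standard feature points.
    initial_pts = np.asarray(initial_pts, dtype=float)
    standard_pts = np.asarray(standard_pts, dtype=float)
    return standard_pts.mean(axis=0) - initial_pts.mean(axis=0)

def target_center(current_center, deviation):
    # Hypothetical stand-in for step S40: shift the current view centre
    # by the measured deviation to obtain the target-area centre.
    return tuple(np.asarray(current_center, dtype=float) + deviation)
```

A guidance controller would then move the endoscope toward `target_center` until the deviation falls below a tolerance.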
Further, before extracting the initial feature points of the initial image by the preset image feature extraction model, the method comprises the following steps:
sampling the image to be trained in a training set through a sampling layer of the preset image feature extraction model and a convolution function of the sampling layer to generate a training image corresponding to the image to be trained, wherein the first convolution parameters of the sampling layer comprise sampling convolution kernel size parameters and/or sampling convolution step length parameters;
and carrying out feature extraction processing on the training image through a feature processing layer of the preset image feature extraction model, generating a training feature map corresponding to the training image, and finishing training of the preset image feature extraction model.
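As a rough illustration of the sampling layer described above, the sketch below applies a single strided averaging window whose kernel size and stride play the role of the first convolution parameters. The averaging kernel and the default parameter values are assumptions; the patent only states that such parameters exist, not their values or learned weights.

```python
import numpy as np

def sample_layer(img, kernel_size=2, stride=2):
    # Slide a kernel_size x kernel_size window with the given stride and
    # average each patch (an assumed stand-in for the sampling convolution).
    h, w = img.shape
    out_h = (h - kernel_size) // stride + 1
    out_w = (w - kernel_size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + kernel_size,
                        j * stride:j * stride + kernel_size]
            out[i, j] = patch.mean()
    return out
```

With kernel size 2 and stride 2, a 4x4 image to be trained is downsampled to a 2x2 training image.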
Further, extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to a target endoscope according to the initial feature points, wherein the method comprises the following steps:
And selecting at least one pair of initial feature points in the initial image, and determining the effective image based on a preset radius and the at least one pair of initial feature points.
Further, calculating deviation information of the initial feature point and the standard feature point includes:
Acquiring color channel information RGB components of all pixel points of the effective image, and generating a target color component matrix based on the RGB components;
and calculating the deviation information based on a preset standard color component matrix and the target color component matrix.
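A minimal sketch of this deviation calculation, assuming (the patent does not say) that the "target color component matrix" is the per-channel mean of the RGB components over the effective image and that the deviation information is the element-wise difference from the standard matrix:

```python
import numpy as np

def color_component_matrix(img_rgb):
    # Per-channel mean of the RGB components over all pixels of the
    # effective image (an assumed form of the color component matrix).
    return np.asarray(img_rgb, dtype=float).reshape(-1, 3).mean(axis=0)

def deviation_information(target_matrix, standard_matrix):
    # Element-wise difference from the preset standard color component matrix.
    return np.asarray(target_matrix) - np.asarray(standard_matrix)
```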
Further, before selecting at least one pair of the initial feature points in the initial image and determining the effective image based on a preset radius and the at least one pair of the initial feature points, the method includes:
image segmentation is carried out on the initial image through a preset image segmentation model, and at least one grid image is generated;
The initial feature points are extracted from at least one of the grid images.
Further, extracting the initial feature point from at least one of the grid images includes:
calculating the saturation value of each pixel point in the grid image through a preset formula, and determining the pixel point with the saturation value larger than a saturation threshold value as the initial feature point;
wherein, in the preset formula (which appears only as a graphic in the original): S(i, j) denotes the saturation value of pixel (i, j); p(a, b) denotes the pixel in row a and column b around pixel (i, j); S(a, b) denotes the saturation value of the pixel in row a and column b of the grid image; and n denotes the number of pixels near (i, j) in the grid image.
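The preset formula survives only as a graphic, so the sketch below substitutes the standard HSV-style saturation (max − min)/max and thresholds it; this substitution, the threshold value, and all names are assumptions rather than the patent's formula.

```python
import numpy as np

def saturation(img_rgb):
    # HSV-style saturation per pixel: (max - min) / max, and 0 where max == 0.
    px = np.asarray(img_rgb, dtype=float)
    mx = px.max(axis=-1)
    mn = px.min(axis=-1)
    return np.divide(mx - mn, mx, out=np.zeros_like(mx), where=mx > 0)

def initial_feature_points(img_rgb, threshold=0.5):
    # Pixels whose saturation exceeds the threshold become initial feature points.
    s = saturation(img_rgb)
    ys, xs = np.nonzero(s > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```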
Further, determining a target area in the effective image based on the deviation information, and before guiding the target endoscope to move to the target area, comprising:
initializing an image background based on a preset formula, wherein the preset formula is:
B(x) = {B1(x), B2(x), ..., Bi(x), ..., BN(x)}
wherein B(x) is the preset background model, N is the number of samples, and each sample Bi(x) is composed of a color value vi, an LBSP texture feature value LBSPi(x), a color-dimension confidence and a texture-dimension confidence (the confidence symbols and the composition formula appear only as graphics in the original).
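A minimal ViBe-style sketch of initializing the sample set B(x) = {B1(x), ..., BN(x)} per pixel. Only the color samples vi are modelled here; the LBSP texture values and the two confidence terms are omitted, and drawing samples from each pixel's 8-neighbourhood in the first frame is an assumption, not the patent's stated procedure.

```python
import numpy as np

def init_background_model(first_frame, n_samples=20, rng=None):
    # Each pixel x receives n_samples samples B_i(x) drawn at random from
    # its 8-neighbourhood in the first frame (indices clipped at borders).
    rng = np.random.default_rng(0) if rng is None else rng
    frame = np.asarray(first_frame)
    h, w = frame.shape[:2]
    model = np.empty((n_samples,) + frame.shape, dtype=frame.dtype)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for i in range(n_samples):
        dy = rng.integers(-1, 2, size=(h, w))
        dx = rng.integers(-1, 2, size=(h, w))
        ys = np.clip(rows + dy, 0, h - 1)
        xs = np.clip(cols + dx, 0, w - 1)
        model[i] = frame[ys, xs]
    return model
```

The same indexing works for color frames of shape (h, w, 3), in which case the model has shape (N, h, w, 3).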
In a second aspect, the present application also provides an endoscope position guidance device based on image features, the device comprising:
the effective image determining module is used for extracting initial feature points of an initial image through a preset image feature extraction model and determining an effective image corresponding to the target endoscope according to the initial feature points;
The standard feature point extraction module is used for acquiring a standard image and extracting standard feature points corresponding to the standard image based on the preset image feature extraction model;
The deviation information calculation module is used for calculating deviation information of the initial characteristic points and the standard characteristic points;
And the moving module is used for determining a target area in the effective image based on the deviation information and guiding the target endoscope to move to the target area.
In a third aspect, the present application also provides a computer device comprising a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and implement the endoscope position guidance method based on image features as described above when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement an endoscope position guidance method based on image features as described above.
The application discloses an endoscope position guiding method, device, equipment and medium based on image features. The method extracts initial feature points of an initial image through a preset image feature extraction model and determines the effective image corresponding to the target endoscope from those points; acquires a standard image and extracts its standard feature points with the same model; calculates the deviation information between the initial and standard feature points; and determines a target area in the effective image from the deviation information, guiding the target endoscope to move to that area. In this way, once the effective image has been determined from the initial feature points, its deviation from the standard feature points in the standard image is calculated, and the endoscope is guided to the target area according to the deviation information, improving the position-guidance accuracy of surgical endoscopes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an endoscope position guidance method based on image features provided by a first embodiment of the present application;
FIG. 2 is a schematic flow chart of an endoscope position guidance method based on image features provided by a second embodiment of the present application;
FIG. 3 is a schematic flow chart of an endoscope position guidance method based on image features provided by a third embodiment of the present application;
FIG. 4 is a schematic block diagram of an endoscope position guidance device based on image features provided by an embodiment of the present application;
Fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The embodiment of the application provides an endoscope position guiding method, device, equipment and medium based on image characteristics. The endoscope position guiding method based on the image features can be applied to a server, after an effective image is determined through initial feature points, deviation calculation is carried out on the effective image and standard feature points in standard images, and the endoscope is controlled to move to a target area according to deviation information, so that the position guiding accuracy of the endoscope for operation is improved. The server may be an independent server or a server cluster.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of an endoscope position guiding method based on image features according to a first embodiment of the present application. The endoscope position guiding method based on the image features can be applied to a server and used for performing deviation calculation on the effective image and standard feature points in the standard image after the effective image is determined through the initial feature points, and controlling the endoscope to move to a target area according to deviation information, so that the position guiding accuracy of the endoscope for operation is improved.
As shown in fig. 1, the image feature-based endoscope position guidance method specifically includes steps S10 to S40.
S10, extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to a target endoscope according to the initial feature points;
Step S20, acquiring a standard image, and extracting standard feature points corresponding to the standard image based on the preset image feature extraction model;
Step S30, calculating deviation information of the initial feature points and the standard feature points;
And step S40, determining a target area in the effective image based on the deviation information, and guiding the target endoscope to move to the target area.
In a particular embodiment, an edge is a set of pixels that forms the boundary between two image regions. In general an edge may have an arbitrary shape and may include junctions. In practice, an edge is usually defined as a subset of the points in an image that have a large gradient magnitude. Some commonly used algorithms additionally link high-gradient points together to form a more complete edge description, and may place certain restrictions on the shape of an edge.
Corners are point-like features in an image that locally have a two-dimensional structure. Early algorithms first performed edge detection and then analyzed the direction of the edges to find sudden turns (corners). Later algorithms dispense with the explicit edge-detection step and instead look for high curvature directly in the image gradient. It was subsequently found that such detectors sometimes respond in regions that have corner-like characteristics even though the image contains no actual corner there.
Feature extraction and feature selection both aim to obtain the most effective features from the original ones: features that are invariant within the same class of samples, discriminative between different classes, and robust to noise.
Feature extraction: converting the original features into a set of features with clear physical meaning (such as geometry or texture), statistical meaning, or kernels;
Feature selection: selecting the most statistically significant subset of features from the feature set, so as to reduce dimensionality.
The gray-scale-based method performs detection using local changes in the gray levels of pixels: the feature points are the pixels whose gray-scale variation is largest under a given criterion. The gray-level derivatives around a pixel can be obtained by differential operations, from which the positions of the feature points are determined.
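A gradient-magnitude detector of the kind just described can be sketched as follows; the central-difference scheme and top-k selection are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def gradient_feature_points(gray, top_k=4):
    # Central-difference gray-level derivatives; the feature points are the
    # top_k pixels with the largest gradient magnitude.
    gray = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    idx = np.argsort(mag, axis=None)[::-1][:top_k]
    return [tuple(int(v) for v in np.unravel_index(i, gray.shape)) for i in idx]
```

For a vertical step edge, the strongest responses lie in the columns straddling the step.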
The embodiment discloses an endoscope position guiding method, device, equipment and medium based on image features, wherein the method comprises the steps of extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to a target endoscope according to the initial feature points; acquiring a standard image, and extracting standard feature points corresponding to the standard image based on the preset image feature extraction model; calculating deviation information of the initial feature points and the standard feature points; and determining a target area in the effective image based on the deviation information, and guiding the target endoscope to move to the target area. Through the mode, after the effective image is determined through the initial feature points, the deviation calculation is carried out on the effective image and the standard feature points in the standard image, the endoscope is controlled to move to the target area according to the deviation information, and the position guiding accuracy of the endoscope for operation is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart of an endoscope position guiding method based on image features according to a second embodiment of the present application. The endoscope position guiding method based on the image features can be applied to a server and used for performing deviation calculation on the effective image and standard feature points in the standard image after the effective image is determined through the initial feature points, and controlling the endoscope to move to a target area according to deviation information, so that the position guiding accuracy of the endoscope for operation is improved.
Based on the embodiment shown in fig. 1, this embodiment includes steps S01 to S02 before the step S10 shown in fig. 2.
Step S01, sampling the image to be trained in a training set through a sampling layer of the preset image feature extraction model and a convolution function of the sampling layer to generate a training image corresponding to the image to be trained, wherein a first convolution parameter of the sampling layer comprises a sampling convolution kernel size parameter and/or a sampling convolution step size parameter;
And step S02, performing feature extraction processing on the training image through a feature processing layer of the preset image feature extraction model, generating a training feature map corresponding to the training image, and finishing training of the preset image feature extraction model.
Specifically, feature extraction is a low-level operation in image processing; that is, it is usually the first processing performed on an image. It examines each pixel to determine whether that pixel represents a feature. When feature extraction is part of a larger algorithm, the algorithm typically examines only the feature regions of the image. As a precondition for feature extraction, the input image is usually smoothed in scale space with a Gaussian blur kernel, after which one or more features of the image are computed by local derivative operations.
Since many computer image algorithms use feature extraction as their primary computational step, a large number of feature extraction algorithms have been developed, with a wide variety of extracted features, and with very different computational complexity and repeatability.
The embodiment discloses an endoscope position guiding method, device, equipment and medium based on image features. The method first samples each image to be trained in the training set through the sampling layer of the preset image feature extraction model and its convolution function, generating a corresponding training image, the first convolution parameters of the sampling layer comprising a sampling convolution kernel size parameter and/or a sampling convolution stride parameter; the feature processing layer then performs feature extraction on the training image to generate a training feature map, completing the training of the model. The trained model extracts initial feature points of an initial image, from which the effective image corresponding to the target endoscope is determined; a standard image is acquired and its standard feature points extracted with the same model; the deviation information between the initial and standard feature points is calculated; and a target area is determined in the effective image from the deviation information, guiding the target endoscope to move to that area. In this way, once the effective image has been determined from the initial feature points, its deviation from the standard feature points in the standard image is calculated, and the endoscope is guided to the target area according to the deviation information, improving the position-guidance accuracy of surgical endoscopes.
Based on the embodiment shown in fig. 1, in this embodiment, step S10 includes:
And selecting at least one pair of initial feature points in the initial image, and determining the effective image based on a preset radius and the at least one pair of initial feature points.
Referring to fig. 3, fig. 3 is a schematic flowchart of an endoscope position guiding method based on image features according to a third embodiment of the present application. The endoscope position guiding method based on the image features can be applied to a server and used for performing deviation calculation on the effective image and standard feature points in the standard image after the effective image is determined through the initial feature points, and controlling the endoscope to move to a target area according to deviation information, so that the position guiding accuracy of the endoscope for operation is improved.
Based on the embodiment shown in fig. 1, in this embodiment, as shown in fig. 3, the step S30 includes steps S301 to S302.
Step S301, color channel information RGB components of all pixel points of the effective image are obtained, and a target color component matrix is generated based on the RGB components;
step S302, calculating the deviation information based on a preset standard color component matrix and the target color component matrix.
In a specific embodiment, the color matrix is an effective color feature, the color distribution of the image is represented by a moment by using the concept of a moment in linear algebra, and the color distribution is described by using a first color moment, a second color moment and a third color moment. Image description using color moments does not require quantization of image features. Since each pixel has three color channels of the color space, the color moment of the image is described with 9 components.
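The nine-component color-moment descriptor described above can be computed as follows; using the standard deviation as the second moment and the cube root of the third central moment as the third is a common convention and an assumption here.

```python
import numpy as np

def color_moments(img_rgb):
    # 9-component descriptor: first moment (mean), second moment (standard
    # deviation) and cube-rooted third central moment, for each RGB channel.
    px = np.asarray(img_rgb, dtype=float).reshape(-1, 3)
    mean = px.mean(axis=0)
    std = px.std(axis=0)
    third = np.cbrt(((px - mean) ** 3).mean(axis=0))
    return np.concatenate([mean, std, third])
```

No quantization of the image is needed: the descriptor is computed directly from the raw channel values.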
Based on the embodiment shown in fig. 3, the steps before selecting at least one pair of the initial feature points in the initial image and determining the effective image based on a preset radius and at least one pair of the initial feature points include:
image segmentation is carried out on the initial image through a preset image segmentation model, and at least one grid image is generated;
The initial feature points are extracted from at least one of the grid images.
Based on the above embodiment, the present embodiment includes:
calculating the saturation value of each pixel point in the grid image through a preset formula, and determining the pixel point with the saturation value larger than a saturation threshold value as the initial feature point;
wherein, in the preset formula (which appears only as a graphic in the original): S(i, j) denotes the saturation value of pixel (i, j); p(a, b) denotes the pixel in row a and column b around pixel (i, j); S(a, b) denotes the saturation value of the pixel in row a and column b of the grid image; and n denotes the number of pixels near (i, j) in the grid image.
Based on all the above embodiments, before step S40 in this embodiment, the method includes:
initializing an image background based on a preset formula, wherein the preset formula is:
B(x) = {B1(x), B2(x), ..., Bi(x), ..., BN(x)}
wherein B(x) is the preset background model, N is the number of samples, and each sample Bi(x) is composed of a color value vi, an LBSP texture feature value LBSPi(x), a color-dimension confidence and a texture-dimension confidence (the confidence symbols and the composition formula appear only as graphics in the original).
Referring to fig. 4, fig. 4 is a schematic block diagram of an image feature-based endoscope position guidance apparatus for performing the aforementioned image feature-based endoscope position guidance method according to an embodiment of the present application. Wherein the image feature-based endoscope position guidance device may be configured at a server.
As shown in fig. 4, the image feature-based endoscope position guidance apparatus 400 includes:
The effective image determining module 10 is used for extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to the target endoscope according to the initial feature points;
the standard feature point extraction module 20 is configured to obtain a standard image, and extract standard feature points corresponding to the standard image based on the preset image feature extraction model;
a deviation information calculating module 30, configured to calculate deviation information of the initial feature point and the standard feature point;
a moving module 40 for determining a target area in the effective image based on the deviation information and guiding the target endoscope to move to the target area.
Further, the image feature-based endoscope position guidance apparatus further includes:
The training image generation module is used for carrying out sampling processing on an image to be trained in a training set through a sampling layer of the preset image feature extraction model and a convolution function of the sampling layer to generate a training image corresponding to the image to be trained, wherein a first convolution parameter of the sampling layer comprises a sampling convolution kernel size parameter and/or a sampling convolution step size parameter;
And the model training module is used for carrying out feature extraction processing on the training image through a feature processing layer of the preset image feature extraction model, generating a training feature map corresponding to the training image and finishing training of the preset image feature extraction model.
Further, the effective image determining module 10 includes:
and the effective image determining unit is used for selecting at least one pair of initial characteristic points in the initial image and determining the effective image based on a preset radius and the at least one pair of initial characteristic points.
Further, the deviation information calculating module 30 includes:
a color component matrix unit, configured to obtain color channel information RGB components of all pixel points of the effective image, and generate a target color component matrix based on the RGB components;
and the deviation information calculating unit is used for calculating the deviation information based on a preset standard color component matrix and the target color component matrix.
Further, the image feature-based endoscope position guidance apparatus further includes:
The image segmentation module is used for carrying out image segmentation on the initial image through a preset image segmentation model to generate at least one grid image;
The feature point extraction module is used for extracting the initial feature points from at least one of the grid images.
It should be noted that, for convenience and brevity of description, the specific working process of the apparatus and each module described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The apparatus described above may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a server.
With reference to FIG. 5, the computer device includes a processor, memory, and a network interface connected by a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor to perform any of the image feature-based endoscope position guidance methods.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for running the computer program stored in the non-volatile storage medium; when executed by the processor, the computer program causes the processor to perform any of the image feature-based endoscope position guidance methods.
The network interface is used for network communication, such as transmitting assigned tasks. Those skilled in the art will appreciate that the structure shown in FIG. 5 is merely a block diagram of some of the structures associated with the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein in one embodiment the processor is configured to run a computer program stored in the memory to implement the steps of:
Extracting initial feature points of an initial image through a preset image feature extraction model, and determining an effective image corresponding to a target endoscope according to the initial feature points;
Acquiring a standard image, and extracting standard feature points corresponding to the standard image based on the preset image feature extraction model;
Calculating deviation information of the initial feature points and the standard feature points;
And determining a target area in the effective image based on the deviation information, and guiding the target endoscope to move to the target area.
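A minimal end-to-end sketch of the four steps above. The saturation-based detector and the centroid-difference deviation metric are illustrative assumptions standing in for the patent's preset feature-extraction model, not the claimed implementation:

```python
import numpy as np

def extract_feature_points(image, saturation_threshold=0.5):
    # Toy stand-in for the preset feature-extraction model: treat
    # high-saturation pixels ((max - min) / max over RGB) as feature points.
    mx = image.max(axis=2).astype(float)
    mn = image.min(axis=2).astype(float)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    ys, xs = np.nonzero(sat > saturation_threshold)
    return np.stack([ys, xs], axis=1)

def guide_endoscope(initial_image, standard_image):
    # Steps 1-2: feature points of the initial and standard images.
    init_pts = extract_feature_points(initial_image)
    std_pts = extract_feature_points(standard_image)
    # Step 3: deviation information as the difference of point-set centroids.
    deviation = std_pts.mean(axis=0) - init_pts.mean(axis=0)
    # Step 4: the target-area centre the endoscope should be guided towards.
    target = init_pts.mean(axis=0) + deviation
    return deviation, target
```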
In one embodiment, before the initial feature points of the initial image are extracted through the preset image feature extraction model, the processor is configured to implement the following steps:
sampling the image to be trained in a training set through a sampling layer of the preset image feature extraction model and a convolution function of the sampling layer to generate a training image corresponding to the image to be trained, wherein a first convolution parameter of the sampling layer comprises a sampling convolution kernel size parameter and/or a sampling convolution step size parameter;
and carrying out feature extraction processing on the training image through a feature processing layer of the preset image feature extraction model, generating a training feature map corresponding to the training image, and finishing training of the preset image feature extraction model.
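The sampling layer above is parameterised by a kernel size and a stride. A sketch of such a strided sampling convolution; the uniform (averaging) kernel weights are an assumption, since the patent names only the two parameters:

```python
import numpy as np

def sample_layer(image, kernel_size=2, stride=2):
    # Strided "sampling convolution": slide a kernel_size x kernel_size
    # window with the given stride and emit one value per position.
    h, w = image.shape
    out_h = (h - kernel_size) // stride + 1
    out_w = (w - kernel_size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kernel_size,
                          j * stride:j * stride + kernel_size]
            out[i, j] = patch.mean()  # uniform kernel as a stand-in
    return out
```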
In one embodiment, when extracting the initial feature points of the initial image through the preset image feature extraction model and determining the effective image corresponding to the target endoscope according to the initial feature points, the processor is configured to implement:
And selecting at least one pair of initial feature points in the initial image, and determining the effective image based on a preset radius and the at least one pair of initial feature points.
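One plausible reading of "a preset radius and a pair of initial feature points" is a disc centred between the pair; the centring rule below is an assumption, as the patent names only the pair and the radius:

```python
import numpy as np

def effective_region(image, p1, p2, radius):
    # Centre a disc of the preset radius on the midpoint of one pair
    # of initial feature points; pixels outside the disc are zeroed.
    cy, cx = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    ys, xs = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    return np.where(mask, image, 0), mask
```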
In one embodiment, when calculating the deviation information of the initial feature points and the standard feature points, the processor is configured to implement:
Acquiring color channel information RGB components of all pixel points of the effective image, and generating a target color component matrix based on the RGB components;
and calculating the deviation information based on a preset standard color component matrix and the target color component matrix.
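A sketch of the colour-component comparison: the target matrix is built from the effective image's RGB channels and compared with the preset standard matrix. The per-channel mean absolute difference used as the deviation metric is an assumption; the patent does not specify the comparison:

```python
import numpy as np

def color_deviation(effective_image, standard_matrix):
    # Target colour-component matrix: the H x W x 3 RGB values themselves.
    target_matrix = effective_image.astype(float)
    # Deviation per channel (R, G, B) against the preset standard matrix.
    return np.abs(target_matrix - standard_matrix).mean(axis=(0, 1))
```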
In one embodiment, before selecting at least one pair of the initial feature points in the initial image and determining the effective image based on the preset radius and the at least one pair of initial feature points, the processor is configured to implement:
image segmentation is carried out on the initial image through a preset image segmentation model, and at least one grid image is generated;
The initial feature points are extracted from at least one of the grid images.
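A simple stand-in for the preset segmentation model: cut the initial image into an even grid of sub-images (the even rows x cols layout is an assumption):

```python
import numpy as np

def split_into_grids(image, rows=2, cols=2):
    # Split the initial image into rows x cols grid images,
    # discarding any remainder rows/columns.
    h, w = image.shape[:2]
    gh, gw = h // rows, w // cols
    return [image[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            for r in range(rows) for c in range(cols)]
```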
In one embodiment, when extracting the initial feature points from at least one of the grid images, the processor is configured to implement:
calculating the saturation value of each pixel point in the grid image through a preset formula, and determining the pixel point with the saturation value larger than a saturation threshold value as the initial feature point;
wherein the preset formula is:

S(i,j) = (1/n) · Σ_{p(a,b) ∈ N(i,j)} S(a,b)

wherein S(i,j) represents the saturation value of the (i,j) pixel point, p(a,b) represents the pixel point of the a-th row and the b-th column around the (i,j) pixel point, S(a,b) represents the saturation value of the pixel point of the a-th row and b-th column in the grid image, and n represents the number of pixel points near (i,j) in the grid image.
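A sketch of the neighbourhood-averaged saturation threshold described above. The 3x3 neighbourhood, the (max - min)/max saturation definition, and the 0.5 threshold are assumptions filling in details the text leaves open:

```python
import numpy as np

def pixel_saturation(rgb):
    # HSV-style saturation: (max - min) / max over the RGB channels.
    mx = rgb.max(axis=-1).astype(float)
    mn = rgb.min(axis=-1).astype(float)
    return np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)

def initial_feature_points(grid, threshold=0.5):
    # Average the saturation over each pixel's 3x3 neighbourhood
    # (edge pixels use the neighbours that exist), then keep pixels
    # whose averaged saturation exceeds the threshold.
    sat = pixel_saturation(grid)
    h, w = sat.shape
    avg = np.empty_like(sat)
    for i in range(h):
        for j in range(w):
            win = sat[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            avg[i, j] = win.mean()
    ys, xs = np.nonzero(avg > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```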
In one embodiment, before determining the target area in the effective image based on the deviation information and guiding the target endoscope to move to the target area, the processor is configured to implement:
initializing an image background based on a preset formula, wherein the preset formula is:

B(x) = {B_1(x), B_2(x), ..., B_i(x), ..., B_N(x)}

wherein B(x) is the preset background model, N is the number of samples, and each sample B_i(x) is composed of a color value v_i, an LBSP texture feature value LBSP_i(x), a color dimension confidence W_i^color and a texture dimension confidence W_i^lbsp:

B_i(x) = {v_i, LBSP_i(x), W_i^color, W_i^lbsp}
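A sketch of initialising such a per-pixel sample set on a grayscale frame. The LBSP similarity threshold, the 3x3 window, and the initial confidence values of 1.0 are assumptions; the text names only the four components of each sample:

```python
import numpy as np

def lbsp(patch, center, tau=0.1):
    # Local Binary Similarity Pattern: a bit is set where a neighbour's
    # intensity is within tau * center of the centre intensity.
    return (np.abs(patch - center) <= tau * max(center, 1)).astype(int)

def init_background_model(frame, n_samples=3):
    # Per-pixel sample set {v_i, LBSP_i(x), colour confidence, texture
    # confidence}; the colour value is copied from the frame and the
    # LBSP is computed on a 3x3 window (clipped at the image border).
    h, w = frame.shape
    model = [[[] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = frame[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            for _ in range(n_samples):
                model[y][x].append({
                    "v": float(frame[y, x]),
                    "lbsp": lbsp(patch, frame[y, x]),
                    "w_color": 1.0,
                    "w_lbsp": 1.0,
                })
    return model
```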
In an embodiment of the present application, a computer-readable storage medium is further provided. The computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, implement any of the image feature-based endoscope position guiding methods provided in the embodiments of the present application.
The computer-readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the computer device.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (4)

1. An endoscope position guidance device based on image features, comprising:
the effective image determining module is used for extracting initial feature points of an initial image through a preset image feature extraction model and determining an effective image corresponding to the target endoscope according to the initial feature points;
The standard feature point extraction module is used for acquiring a standard image and extracting standard feature points corresponding to the standard image based on the preset image feature extraction model;
The deviation information calculation module is used for calculating deviation information of the initial characteristic points and the standard characteristic points;
A moving module for determining a target area in the effective image based on the deviation information, and guiding the target endoscope to move to the target area;
wherein the effective image determining module includes:
and the effective image determining unit is used for selecting at least one pair of initial characteristic points in the initial image and determining the effective image based on a preset radius and the at least one pair of initial characteristic points.
2. The image feature-based endoscope position guidance apparatus of claim 1, further comprising:
The training image generation module is used for carrying out sampling processing on an image to be trained in a training set through a sampling layer of the preset image feature extraction model and a convolution function of the sampling layer to generate a training image corresponding to the image to be trained, wherein a first convolution parameter of the sampling layer comprises a sampling convolution kernel size parameter and/or a sampling convolution step size parameter;
And the model training module is used for carrying out feature extraction processing on the training image through a feature processing layer of the preset image feature extraction model, generating a training feature map corresponding to the training image and finishing training of the preset image feature extraction model.
3. The image feature-based endoscope position guidance apparatus of claim 1, wherein the deviation information calculation module comprises:
a color component matrix unit, configured to obtain color channel information RGB components of all pixel points of the effective image, and generate a target color component matrix based on the RGB components;
and the deviation information calculating unit is used for calculating the deviation information based on a preset standard color component matrix and the target color component matrix.
4. The image feature-based endoscope position guidance apparatus of claim 1, further comprising:
The image segmentation module is used for carrying out image segmentation on the initial image through a preset image segmentation model to generate at least one grid image;
the feature point extraction module is used for extracting the initial feature points from at least one of the grid images.
CN202311202571.9A 2023-09-18 Endoscope position guiding method, device, equipment and medium based on image characteristics Active CN117204950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311202571.9A CN117204950B (en) 2023-09-18 Endoscope position guiding method, device, equipment and medium based on image characteristics

Publications (2)

Publication Number Publication Date
CN117204950A CN117204950A (en) 2023-12-12
CN117204950B true CN117204950B (en) 2024-05-10

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999902A (en) * 2012-11-13 2013-03-27 上海交通大学医学院附属瑞金医院 Optical navigation positioning system based on CT (computed tomography) registration results and navigation method thereby
CN107977969A (en) * 2017-12-11 2018-05-01 北京数字精准医疗科技有限公司 A kind of dividing method, device and the storage medium of endoscope fluorescence image
CN111080676A (en) * 2019-12-20 2020-04-28 电子科技大学 Method for tracking endoscope image sequence feature points through online classification
CN115496703A (en) * 2021-06-18 2022-12-20 富联精密电子(天津)有限公司 Pneumonia area detection method and system
CN115944388A (en) * 2023-03-03 2023-04-11 西安市中心医院 Surgical endoscope position guiding method, surgical endoscope position guiding device, computer equipment and storage medium
CN116229189A (en) * 2023-05-10 2023-06-06 深圳市博盛医疗科技有限公司 Image processing method, device, equipment and storage medium based on fluorescence endoscope



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant