CN113487605A - Tissue cavity positioning method, device, medium and equipment for endoscope - Google Patents

Tissue cavity positioning method, device, medium and equipment for endoscope

Info

Publication number
CN113487605A
Authority
CN
China
Prior art keywords
image
image sequence
network
target
cavity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111033760.9A
Other languages
Chinese (zh)
Other versions
CN113487605B (en)
Inventor
石小周
边成
赵家英
杨志雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202111033760.9A priority Critical patent/CN113487605B/en
Publication of CN113487605A publication Critical patent/CN113487605A/en
Application granted granted Critical
Publication of CN113487605B publication Critical patent/CN113487605B/en
Priority to PCT/CN2022/104089 priority patent/WO2023029741A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Endoscopes (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a tissue cavity localization method, apparatus, medium, and device for an endoscope, the method comprising: receiving a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by the endoscope at the current position; determining a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image according to the cavity image sequence and the key point identification model; the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial characteristics of the cavity image sequence, the time circulation sub-network is used for acquiring the time characteristics of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial characteristics and the time characteristics so as to acquire the target direction point. Therefore, the direction of the tissue cavity can be predicted to provide data support for endoscope navigation.

Description

Tissue cavity positioning method, device, medium and equipment for endoscope
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method, apparatus, medium, and device for positioning a tissue cavity of an endoscope.
Background
In recent years, with the emergence of deep learning, artificial intelligence technology has developed rapidly. In many fields, artificial intelligence can take over human work, especially repetitive and tedious tasks, thereby greatly reducing the human workload.
Endoscopy, such as enteroscopy, generally comprises two stages: endoscope entering and endoscope withdrawing. Endoscope withdrawing is the stage in which the doctor examines the patient's condition, yet endoscope entering requires considerable effort and time from the physician, and blind endoscope entering can damage the intestinal mucosa and even cause perforation. In the related art, automatic navigation can shorten the endoscope entering time and reduce the doctor's workload. However, many complicated situations may arise during endoscope entering, such as occlusion by dirt, intestinal peristalsis, and anatomical differences between patients. When the intestinal lumen is not visible, a doctor usually has to intervene in the control of the automatic equipment, manually backing the enteroscope up a distance and then manually continuing the endoscope entering.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a method of tissue cavity localization for an endoscope, the method comprising:
receiving a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by an endoscope at the current position;
determining a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image according to the cavity image sequence and the key point identification model, wherein the target direction point is used for indicating the next target moving direction of the endoscope at the current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
In a second aspect, the present disclosure provides a tissue cavity positioning device for an endoscope, the device comprising:
the endoscope identification device comprises a receiving module, a judging module and a judging module, wherein the receiving module is used for receiving a cavity image sequence to be identified, the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by the endoscope at the current position;
a first determining module, configured to determine, according to the cavity image sequence and a keypoint recognition model, a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image, where the target direction point is used to indicate a next target moving direction of the endoscope at a current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect.
In a fourth aspect, an electronic device is provided, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method of the first aspect.
By the technical scheme, the target direction point of the tissue cavity at the current moment can be predicted by combining a plurality of historical cavity images, and in the process of predicting the direction based on the key point recognition model, the spatial characteristics and the time characteristics contained in the plurality of cavity images can be simultaneously used, so that on one hand, the accuracy of the predicted target direction point can be effectively improved, and data support is provided for the automatic endoscope entering navigation of the endoscope; on the other hand, the method is suitable for more complicated in-vivo environment, and the application range of the tissue cavity positioning method is widened. Moreover, by the technical scheme, the moving direction of the tissue cavity can be predicted based on the cavity image sequence, so that the method can be applied to a scene in which the cavity center point is not identified in the cavity image, the manual operation of a user is not needed, the automation level of endoscope entering is improved, and the use experience of the user is improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow chart of a method of tissue cavity localization for an endoscope provided in accordance with an implementation of the present disclosure;
FIG. 2 is a schematic structural diagram of a keypoint identification model provided in accordance with an implementation of the present disclosure;
FIG. 3 is a flow diagram of a keypoint recognition model training provided in accordance with an implementation of the present disclosure;
FIG. 4 is a schematic diagram of a standard ConvLSTM network;
FIG. 5 is a block diagram of a tissue cavity positioning device for an endoscope provided in accordance with an implementation of the present disclosure;
FIG. 6 illustrates a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flow chart of a method of positioning a tissue cavity for an endoscope, as shown in fig. 1, provided in accordance with an implementation of the present disclosure, the method comprising:
in step 11, a cavity image sequence to be identified is received, where the cavity image sequence includes a plurality of consecutive images, and a last image in the cavity image sequence is obtained by the endoscope at a current position of the endoscope.
Among them, in medical endoscope image recognition, an endoscope takes a medical endoscope video stream inside a living body, for example, a human body. Illustratively, during the endoscope entering process of the endoscope, namely, the endoscope enters the target position of the human body from the cavity channel or the closed body cavity of the human body communicated with the outside, image shooting is carried out, so that the current position of the endoscope can be determined based on the shot image or video to provide navigation for the endoscope entering process. Illustratively, the cavity communicating with the outside may be an alimentary canal, a respiratory tract, or the like, and the enclosed body cavity may be a thoracic cavity, an abdominal cavity, or the like, which may be introduced into the endoscope through an incision.
In this embodiment, images in the video stream captured during movement of the endoscope can be sampled to obtain the cavity image sequence. Therefore, the moving direction at the current (Nth) moment can be predicted based on the latest N images obtained by the endoscope, which improves the accuracy of the obtained moving direction.
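As an illustrative sketch of this sampling step (the buffer class, the sequence length and the sampling interval below are assumptions not specified in the disclosure), a fixed-length buffer of the most recently sampled frames could look as follows:

```python
from collections import deque

class CavityImageBuffer:
    """Keeps the latest N sampled frames; the newest frame is the image
    captured at the endoscope's current position (a sketch, not the patented code)."""

    def __init__(self, n_frames=5, sample_every=3):
        self.n_frames = n_frames          # length of the cavity image sequence
        self.sample_every = sample_every  # assumed sampling interval over the video stream
        self._frames = deque(maxlen=n_frames)
        self._count = 0

    def push(self, frame):
        # Sample one frame out of every `sample_every` frames of the video stream.
        if self._count % self.sample_every == 0:
            self._frames.append(frame)
        self._count += 1

    def sequence(self):
        # Returns the cavity image sequence once enough frames are available.
        if len(self._frames) < self.n_frames:
            return None
        return list(self._frames)
```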
In step 12, according to the cavity image sequence and the key point recognition model, a target direction point of the tissue cavity corresponding to the cavity image sequence relative to the last image is determined, wherein the target direction point is used for indicating a next target moving direction of the endoscope at the current position of the endoscope.
The tissue cavity corresponding to the cavity image sequence is the tissue cavity shown in the images of the cavity image sequence. The tissue cavity may be an intestinal cavity, a gastric cavity, or the like; for example, after the endoscope enters the intestinal tract, it may take images at its position to obtain a cavity image sequence, and the tissue cavity corresponding to that sequence is the intestinal cavity.
Taking an enteroscope as an example, automatic navigation of the enteroscope mainly locates the intestinal lumen in the lumen image, so that the enteroscope moves along the direction of the intestinal lumen and reaches the ileocecal region to finish endoscope entering. Because of the complex environment of the intestinal tract, such as intestinal peristalsis and the different appearances of different intestinal segments, as well as occlusion by dirt, excessive bending of the intestinal tract, adhesion of the intestinal wall, or the lens being too close to the intestinal wall, the intestinal lumen may not be visible in the currently captured lumen image, and the moving direction of the enteroscope cannot be determined from that image alone. Therefore, in the embodiment of the present disclosure, the target direction point of the tissue cavity relative to the last image is a point indicating the direction in which the tissue cavity is located: if the tissue cavity is identified in the cavity image sequence, the target direction point may be the center point of the tissue cavity, that is, the center of the spatial section enclosed by the inner wall of the tissue cavity; if the tissue cavity is not identified in the cavity image sequence, the target direction point is the predicted position of the tissue cavity center relative to the last cavity image, indicating that the endoscope should be shifted toward the target direction point, thereby providing a direction guide for advancing the endoscope.
As shown in fig. 2, the keypoint identification model includes a convolution sub-network 101, a time-cycle sub-network 102, and a decoding sub-network 103, where the convolution sub-network 101 is configured to obtain a spatial feature of the cavity image sequence Im, the time-cycle sub-network 102 is configured to obtain a temporal feature of the cavity image sequence, and the decoding sub-network 103 is configured to perform decoding based on the spatial feature and the temporal feature to obtain the target direction point.
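A minimal structural sketch of how the three sub-networks could be wired together is shown below; the channel stacking and the concatenation-based fusion follow the description in this disclosure, while the module names, shapes and interfaces are illustrative assumptions rather than the patented implementation:

```python
import torch
import torch.nn as nn

class KeypointRecognitionModel(nn.Module):
    # conv_net: extracts spatial features from the stacked image sequence
    # temporal_net: extracts temporal features across the sequence
    # decoder: decodes the fused features into a direction feature image
    def __init__(self, conv_net, temporal_net, decoder):
        super().__init__()
        self.conv_net = conv_net
        self.temporal_net = temporal_net
        self.decoder = decoder

    def forward(self, image_seq):
        # image_seq: (batch, N, 3, H, W) cavity image sequence
        b, n, c, h, w = image_seq.shape
        stacked = image_seq.reshape(b, n * c, h, w)     # stack images along channels
        spatial_feat = self.conv_net(stacked)           # spatial feature image
        temporal_feat = self.temporal_net(image_seq)    # temporal feature image
        # Assumes both feature maps share the same spatial size, so they can
        # be concatenated along the channel dimension before decoding.
        fused = torch.cat([spatial_feat, temporal_feat], dim=1)
        return self.decoder(fused)                      # direction feature image
```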
Therefore, by the technical scheme, the target direction point of the tissue cavity at the current moment can be predicted by combining a plurality of historical cavity images, and in the process of predicting the direction based on the key point recognition model, the spatial characteristics and the time characteristics contained in the plurality of cavity images can be simultaneously utilized, so that on one hand, the accuracy of the predicted target direction point can be effectively improved, and data support is provided for the automatic endoscope entering navigation of the endoscope; on the other hand, the method is suitable for more complicated in-vivo environment, and the application range of the tissue cavity positioning method is widened. Moreover, by the technical scheme, the moving direction of the tissue cavity can be predicted based on the cavity image sequence, so that the method can be applied to a scene in which the cavity center point is not identified in the cavity image, the manual operation of a user is not needed, the automation level of endoscope entering is improved, and the use experience of the user is improved.
In one possible embodiment, the method may further comprise:
transmitting the target direction point to a driving device of the endoscope to move the endoscope to the target direction point;
and returning to the step of receiving the cavity image sequence to be identified until the endoscope reaches the target position point.
The driving device of the endoscope is used for controlling the endoscope to move, and a driving device commonly used in the art can be adopted, which is not limited in the present disclosure. After the target direction point is determined, the endoscope can be controlled to shift towards the target direction point, so that the endoscope realizes the endoscope entering movement. Then, the cavity image may be obtained again in the moving process of the endoscope, and the cavity image sequence corresponding to the current position after the endoscope is moved is obtained by combining the historical cavity image, and the target moving direction of the endoscope is further determined through the above steps 11 and 12.
For example, the target position point may be determined according to the portion being examined; when the intestinal tract is examined, the target position point may be the ileocecal region of the intestinal tract. The moving operation is ended when it is determined, based on the cavity image sequence, that the target position point has been reached, thereby realizing the automatic endoscope entering operation of the endoscope.
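A rough sketch of this closed navigation loop is given below; every interface in it (driver, camera, prediction call, stop criterion) is a hypothetical placeholder, since the disclosure does not specify these APIs:

```python
def auto_advance(endoscope_driver, camera, model, buffer, reached_target):
    """Repeatedly predict the target direction point and move toward it until
    the target position point (e.g. the ileocecal region) is reached.
    All callables here are hypothetical interfaces, not part of the patent."""
    while True:
        buffer.push(camera.read())
        seq = buffer.sequence()
        if seq is None:
            continue                                    # wait until N images are available
        if reached_target(seq):
            break                                       # end the moving operation
        point = model.predict_direction_point(seq)      # (x, y) relative to the last image
        endoscope_driver.move_toward(point)             # shift the endoscope toward the point
```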
Therefore, by means of the technical scheme, automatic endoscope entering navigation of the endoscope can be achieved based on the target direction point and the driving device, technical and experience requirements of endoscope entering operation on detection personnel can be effectively reduced, the detection personnel can use the endoscope conveniently, and user experience is improved.
To help those skilled in the art understand the technical solutions provided by the embodiments of the present disclosure, the above steps and related contents are described in more detail below.
In one possible embodiment, the keypoint recognition model may be trained by the following method, as shown in fig. 3, which may include the following steps:
in step 21, a plurality of sets of training samples are obtained, where each set of training samples includes a training image sequence and a label image corresponding to the training image sequence.
The number of training images included in the training image sequence may be set according to the actual usage scenario. For example, the training image sequence may include 5 training images, that is, the position of the tissue cavity in the current state may be predicted based on the 5 most recent training images. The label image corresponding to the training image sequence is used to indicate the position of the direction point of the cavity in the last image, as predicted from the plurality of images.
In step 22, inputting the target input image sequence into a convolution sub-network to obtain a spatial feature image corresponding to the target input image sequence, and inputting the target input image sequence into a time circulation sub-network to obtain a temporal feature image corresponding to the target input image sequence, where the target input image sequence includes the training image sequence.
In this step, a training sample may be obtained, and a training image sequence in the training sample is input into a convolution sub-network, so that feature extraction is performed on the training image sequence through the convolution sub-network. Illustratively, the convolutional subnetwork may employ a Resnet18 network structure that removes the fully connected layer and the pooled layer.
For example, the input of the convolution sub-network may be the result of stacking the training images of the training image sequence along the channel dimension. If each training image is an RGB image, it can be represented as a 3-channel image, so the input of the convolution sub-network is an image with 3N channels, where N is the number of training images in the training image sequence. By inputting the training image sequence into the convolution sub-network in this way, features of the N training images can be extracted simultaneously, and each layer of the convolution sub-network performs feature fusion across the N training images, yielding the spatial feature image output by the convolution sub-network.
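A sketch of such a convolution sub-network, assuming PyTorch/torchvision: the first convolution of a ResNet18 is replaced so that it accepts the 3N stacked channels, and the global pooling and fully connected layers are dropped so that the output remains a spatial feature map.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_conv_subnetwork(n_images=5):
    backbone = resnet18(weights=None)
    # Accept the N stacked RGB images (3N channels) instead of a single image.
    backbone.conv1 = nn.Conv2d(3 * n_images, 64, kernel_size=7,
                               stride=2, padding=3, bias=False)
    # Drop the global average-pooling and fully connected layers so the
    # output stays a spatial feature map.
    return nn.Sequential(*list(backbone.children())[:-2])

# Example: 5 RGB images of 224x224 stacked along the channel dimension.
images = torch.randn(1, 5 * 3, 224, 224)
spatial_features = build_conv_subnetwork(5)(images)   # shape: (1, 512, 7, 7)
```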
Meanwhile, the training image sequence in the training sample can be input into the time circulation sub-network, so that temporal features are extracted from the training image sequence. For example, the time circulation sub-network may be an LSTM (Long Short-Term Memory) network. In the time circulation sub-network, only one training image is processed at a time, in chronological order: feature extraction is first performed on the earliest training image to obtain a feature map, and feature extraction is then performed on the basis of that feature map together with the next training image to obtain the next feature map. In other words, when processing the current training image, the network combines it with the feature map accumulated from the historical training images, so that during feature extraction the later training images carry larger weight and the extracted features better match the features of the current moment.
Fig. 4 is a schematic diagram of a standard ConvLSTM network, where $X_t$ represents the input at time $t$, $h_{t-1}$ represents the hidden-unit input at time $t-1$, $C_{t-1}$ represents the main-line (cell) memory input of the network, $f_t$ represents the output of the forget gate, $i_t$ represents the output of the input gate, $g_t$ represents the candidate supplement to the main-line memory, and $o_t$ represents the output of the output gate. In this example, 3×3 convolutions can be used uniformly in the ConvLSTM, with padding 1 and stride 1. The inputs $h_{t-1}$ and $X_t$ are fused to obtain $f_t$, which controls how much of the historical memory $C_{t-1}$ is forgotten; $i_t$ weights $g_t$ to determine how much information is taken from the cell input; and $o_t$ determines how much information is taken from the main-line memory as the cell output $h_t$. In the following formulas, $\phi$ denotes tanh, $\sigma$ denotes the Sigmoid function, $W$ denotes the corresponding convolution weights in the network, $\varepsilon$ denotes the corresponding bias (offset) term, $*$ denotes convolution, and $\circ$ denotes element-wise multiplication of matrices:

$$
\begin{aligned}
f_t &= \sigma\big(W_f * [h_{t-1}, X_t] + \varepsilon_f\big), &
i_t &= \sigma\big(W_i * [h_{t-1}, X_t] + \varepsilon_i\big),\\
g_t &= \phi\big(W_g * [h_{t-1}, X_t] + \varepsilon_g\big), &
o_t &= \sigma\big(W_o * [h_{t-1}, X_t] + \varepsilon_o\big),\\
C_t &= f_t \circ C_{t-1} + i_t \circ g_t, &
h_t &= o_t \circ \phi(C_t).
\end{aligned}
$$
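A minimal ConvLSTM cell implementing these formulas is sketched below (assuming PyTorch); the 3×3 convolution with padding 1 and stride 1 follows the example above, while the single fused gate convolution and the hidden channel count are implementation assumptions:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        # One 3x3 convolution (padding 1, stride 1) produces all four gates
        # from the concatenation of X_t and h_{t-1}.
        self.conv = nn.Conv2d(in_channels + hidden_channels,
                              4 * hidden_channels,
                              kernel_size=3, padding=1, stride=1)
        self.hidden_channels = hidden_channels

    def forward(self, x_t, h_prev, c_prev):
        gates = self.conv(torch.cat([x_t, h_prev], dim=1))
        f, i, g, o = torch.chunk(gates, 4, dim=1)
        f_t = torch.sigmoid(f)            # forget gate
        i_t = torch.sigmoid(i)            # input gate
        g_t = torch.tanh(g)               # candidate main-line memory
        o_t = torch.sigmoid(o)            # output gate
        c_t = f_t * c_prev + i_t * g_t    # C_t = f_t ∘ C_{t-1} + i_t ∘ g_t
        h_t = o_t * torch.tanh(c_t)       # h_t = o_t ∘ φ(C_t)
        return h_t, c_t
```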
in step 23, the spatial feature image and the temporal feature image are fused to obtain a fused feature image.
The spatial feature image and the temporal feature image can be spliced, for example by a concat operation that concatenates them along the feature dimension, to obtain the fused feature image.
In step 24, the fused feature image is input into a decoding subnetwork to obtain a directional feature image.
In one possible embodiment, the decoding sub-network may be implemented by a plurality of decoding layers, each including a convolution block, a self-attention module and an upsampling module. As an example, the fused feature map is input into the self-attention module and transformed by three 1×1 convolution kernels f(x), g(x) and h(x). The feature map M1 obtained through f(x) is transposed to obtain a feature map M1', which is matrix-multiplied with the feature map M2 obtained through g(x) to obtain a feature correlation representation; the correlation representation is then mapped to probabilities between 0 and 1 by softmax to obtain a probability matrix P; finally, the probability matrix P is matrix-multiplied with the feature map M3 obtained through h(x) to obtain the feature map S output by the self-attention module.
Then, a convolution operation is performed on the feature map S through a convolution block (ConvBlock) to change its number of channels, and the resulting feature map is input into an upsampling module for upsampling, yielding an output feature map U. The next decoding layer is then processed based on the feature map U; the calculation is the same as described above and is not repeated here. The output of the last decoding layer is a feature map with the same size as the original image, namely the direction feature image.
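A sketch of one such decoding layer is given below (assuming PyTorch); the channel counts, the bilinear upsampling factor and the flattening of spatial positions for the matrix products are assumptions made to keep the example runnable:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecodingLayer(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.f = nn.Conv2d(in_channels, in_channels, 1)   # f(x), 1x1 conv
        self.g = nn.Conv2d(in_channels, in_channels, 1)   # g(x), 1x1 conv
        self.h = nn.Conv2d(in_channels, in_channels, 1)   # h(x), 1x1 conv
        self.conv_block = nn.Sequential(                  # ConvBlock: change channel count
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        b, c, hgt, wid = x.shape
        m1 = self.f(x).flatten(2)                          # (b, c, hw)
        m2 = self.g(x).flatten(2)                          # (b, c, hw)
        m3 = self.h(x).flatten(2)                          # (b, c, hw)
        # M1' x M2 -> feature correlation; softmax -> probability matrix P
        p = torch.softmax(torch.bmm(m1.transpose(1, 2), m2), dim=-1)   # (b, hw, hw)
        s = torch.bmm(m3, p).reshape(b, c, hgt, wid)       # self-attention output S
        u = self.conv_block(s)                             # change the number of channels
        return F.interpolate(u, scale_factor=2,
                             mode="bilinear", align_corners=False)     # upsampled map U
```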
In step 25, a target loss of the keypoint identification model is determined based on the directional feature image and the label image corresponding to the target input image sequence.
Wherein, a Mean Square Error (MSE) may be calculated to obtain the target loss based on the directional feature image and the label image corresponding to the input training image sequence. The calculation method of the mean square error is a common method in the art, and is not described herein again.
In step 26, the parameters of the keypoint identification model are updated according to the target loss if an update condition is met.
As an example, the update condition may be that the target loss is greater than a preset loss threshold, which indicates that the recognition accuracy of the keypoint recognition model is insufficient. As another example, the update condition may be that the number of iterations is less than a preset threshold, the recognition accuracy of the keypoint recognition model being considered insufficient while the number of iterations is still small.
Accordingly, in case that the update condition is satisfied, the parameters of the keypoint identification model may be updated according to the target loss. The method for updating the parameter based on the determined target loss may adopt an updating method commonly used in the art, and is not described herein again.
Under the condition that the updating condition is not met, the identification accuracy of the key point identification model can be considered to meet the training requirement, at the moment, the training process can be stopped, and the trained key point identification model is obtained.
Therefore, by the above technical scheme, the key point identification model can be trained based on the training image sequence, so that the model combines the spatial features of multiple training images and, at the same time, makes predictions from the temporal relationship among those images. This improves the recognition accuracy of the key point identification model and makes the tissue cavity positioning method suitable for more complex and wider application scenarios. Moreover, during training, features are extracted in temporal order, so that the feature extraction over sequential data is more consistent with human subjective cognition and better fits the way a user perceives the scene; this further ensures, to a certain extent, the accuracy of the predicted direction point and provides data support for accurate navigation of the endoscope's movement.
Correspondingly, in step 12, determining a target direction point of the tissue cavity corresponding to the cavity image sequence relative to the last image according to the cavity image sequence and the key point identification model may include:
and inputting the cavity image sequence into the key point identification model, obtaining a direction characteristic image output by the key point identification model, and determining a point with the maximum corresponding characteristic value in the direction characteristic image as the target direction point.
Therefore, the target direction point corresponding to the cavity image sequence can be quickly and accurately determined based on the characteristics output by the key point recognition model, and the guide of the moving direction is provided for the automatic navigation of the endoscope.
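For instance, treating the direction feature image as a single-channel heatmap, the target direction point can be read out as the coordinates of its maximum value (a sketch assuming PyTorch tensors):

```python
import torch

def target_direction_point(heatmap):
    # heatmap: (H, W) direction feature image output by the keypoint model
    flat_idx = torch.argmax(heatmap)
    y, x = divmod(int(flat_idx), heatmap.shape[1])
    return x, y   # pixel coordinates of the point with the maximum feature value

heatmap = torch.zeros(256, 256)
heatmap[40, 200] = 1.0
print(target_direction_point(heatmap))   # (200, 40)
```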
In a possible embodiment, the target input image sequence further includes a processed image sequence, the processed image sequence is an image sequence obtained by performing preprocessing based on the training image sequence, and the tag images corresponding to the processed image sequence are images obtained by performing the same preprocessing on the tag images corresponding to the training image sequence.
Illustratively, the preprocessing mode may be data enhancement, such as color, luminance, chrominance, saturation transformation, affine transformation, and the like.
As an example, in order to improve the accuracy of image processing, the training images may be normalized before data enhancement, i.e., resized to a preset size, so that the training images are standardized.
Correspondingly, in this embodiment, the training images in the training image sequence can be preprocessed, so that the training image sequence is transformed into a processed image sequence. This increases the diversity of the training samples, effectively improves the generalization of the trained key point identification model, and makes the tissue cavity positioning method suitable for more complex and wider application scenarios. In the embodiment of the disclosure, to ensure consistency between the training image sequence and the label image, the label image may be transformed with the same preprocessing, so as to obtain the label image corresponding to the processed image sequence; the prediction error of the output image corresponding to the processed image sequence is then computed against the label image obtained after this processing. In this way, the diversity of the training images is further improved, the training efficiency and stability of the key point identification model are improved to a certain extent, and accurate data support is provided for endoscope navigation.
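A sketch of such consistent preprocessing is shown below (assuming PyTorch tensors); resizing to a preset size and a random horizontal flip are used only as examples of transforms that must be applied identically to the training images and, for geometric transforms, to the label image:

```python
import random
import torch

def augment(image_seq, label_image, size=(256, 256)):
    """image_seq: (N, 3, H, W) training image sequence; label_image: (H, W).
    Resizes both to a preset size, then applies the same random flip to both."""
    image_seq = torch.nn.functional.interpolate(
        image_seq, size=size, mode="bilinear", align_corners=False)
    label_image = torch.nn.functional.interpolate(
        label_image[None, None], size=size, mode="bilinear",
        align_corners=False)[0, 0]
    if random.random() < 0.5:
        image_seq = torch.flip(image_seq, dims=[-1])      # flip every training image
        label_image = torch.flip(label_image, dims=[-1])  # flip the label image identically
    return image_seq, label_image
```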
In one possible embodiment, an exemplary implementation of determining a target loss of the keypoint identification model from the directional feature image and the tag image corresponding to the target input image sequence is as follows, which may include:
and converting the label image into a Gaussian feature map according to the positions of each point in the label image and the direction point of the label in the label image, wherein the direction point of the label in the label image is the direction point of the tissue cavity in the training image sequence.
There is only one labeled direction point in the label image, and the feature values at all other positions are 0. If the direction feature image output by the decoding sub-network were an image of all zeros, the target loss between the direction feature image and the label image would still be small, which makes it difficult to update the parameters of the model. Therefore, in the embodiment of the present disclosure, the label image may be processed: it is converted into a Gaussian feature map based on the relationship between each point in the label image and the position of the labeled direction point, where the farther a point is from the labeled direction point, the smaller its Gaussian feature value.
Illustratively, the label image is converted into a Gaussian feature map according to the positions of each point in the label image and the labeled direction point in the label image by the following formula:

$$
G(x, y) = \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)
$$

where $G(x, y)$ represents the feature value of the Gaussian feature map at coordinate $(x, y)$; $(x, y)$ represents the coordinates of an element in the label image; $(x_0, y_0)$ represents the coordinates of the direction point labeled in the label image; and $\sigma$ represents the hyper-parameter of the Gaussian transformation, whose value can be set based on the actual application scenario, which is not limited by the disclosure.
Therefore, each point of the non-labeling direction point in the label image can be characterized through the characteristic value, and data support is provided for target loss predicted by a subsequent accurate calculation model.
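A sketch of this conversion, based on the formula above (the concrete value of the hyper-parameter sigma below is an illustrative assumption):

```python
import torch

def gaussian_label_map(height, width, x0, y0, sigma=8.0):
    """Converts a labeled direction point (x0, y0) into a Gaussian feature map
    whose values decay with distance from the labeled point."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    dist_sq = (xs - x0) ** 2 + (ys - y0) ** 2
    return torch.exp(-dist_sq / (2 * sigma ** 2))

label = gaussian_label_map(256, 256, x0=200, y0=40)
print(label[40, 200])   # 1.0 at the labeled direction point
```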
And determining the target loss according to the direction characteristic image and the Gaussian characteristic image.
Illustratively, a Mean Square Error (MSE) may be calculated based on the directional feature image and the gaussian feature map to obtain the target loss.
Therefore, according to the technical scheme, when the target loss is determined, the label image can be converted into the Gaussian image for calculation, so that the accuracy of the determined target loss can be guaranteed, the accuracy of parameter adjustment of the key point recognition model is guaranteed, the model training efficiency is improved, meanwhile, the accuracy of direction point prediction of a cavity image sequence to be recognized based on the trained key point recognition model can be improved, and decision data are provided for endoscope-entering navigation.
Optionally, the decoding subnetwork comprises a plurality of layers of feature decoding networks, and feature graphs output by each layer of feature decoding network have different sizes;
an exemplary implementation manner of determining the target loss of the keypoint identification model according to the direction feature image and the tag image corresponding to the target input image sequence is as follows, and the step may include:
and for each layer of feature decoding network, carrying out standardization processing on the feature map or the label image output by the layer of feature decoding network so as to obtain a target feature map and a target label image which correspond to the layer of feature decoding network and have the same size.
In the process of performing feature extraction coding on an input image sequence, coding is generally performed by increasing the number of channels and reducing the width and height of a feature map, so in the process of performing decoding based on the multi-layer feature decoding network, the number of channels is generally reduced and the width and height of the feature map are increased, so that the finally output feature map is the same as the size of an original input image.
For example, the feature map output by each layer of the feature decoding network may be normalized to a feature map having the same size as the label image, and the feature map obtained by normalizing each layer may be used as the target feature map corresponding to the layer, and the label image may be determined as the target label image.
As another example, the label image may be normalized instead. For each layer of the feature decoding network, the label image is normalized to the same size as the feature map output by that layer; the label image obtained for each layer is used as the target label image corresponding to that layer, and the feature map output by that layer is determined as the target feature map.
The object of the normalization is the same for every layer of the feature decoding network, that is, either the label image is normalized for every layer, or the feature map is normalized for every layer.
For each layer of the feature decoding network, the loss corresponding to that layer is determined according to the target feature map and the target label image corresponding to that layer. The loss calculation is similar to that described above and is not repeated here. In this way, the accuracy of the target direction point predicted at each layer of the decoding sub-network can be attended to during decoding, which improves the accuracy of the finally determined target direction point.
And determining the target loss of the key point identification model according to the loss corresponding to each layer of feature decoding network.
The sum of the losses corresponding to each layer of feature decoding network may be determined as a target loss, or an average value of the losses corresponding to each layer of feature decoding network may be determined as the target loss, which may be set according to an actual usage scenario.
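A sketch of this multi-scale loss is given below, here normalizing the label image to each layer's output size and summing the per-layer mean square errors (the interpolation mode and the choice of summation rather than averaging are assumptions):

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(layer_outputs, label_image):
    """layer_outputs: list of (B, 1, h_i, w_i) feature maps, one per decoding layer.
    label_image: (B, 1, H, W) Gaussian feature map built from the labeled point."""
    total = 0.0
    for feat in layer_outputs:
        target = F.interpolate(label_image, size=feat.shape[-2:],
                               mode="bilinear", align_corners=False)
        total = total + F.mse_loss(feat, target)   # loss for this decoding layer
    return total
```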
With this technical scheme, a loss can be calculated for the feature map output by every layer of the feature decoding network in the decoding sub-network, so that the target loss of the key point recognition model is determined by combining the losses of all layers. On one hand, the accuracy of the determined target loss is improved through multi-scale prediction; on the other hand, the efficiency and accuracy of adjusting the model parameters based on the target loss are improved, which improves the training efficiency of the key point recognition model. In addition, the prediction accuracy of each layer of the feature decoding network is improved and the accumulation of decoding errors across the multiple decoding layers is avoided to a certain extent, which further improves the recognition accuracy of the key point recognition model and provides a guarantee for endoscope navigation.
The present disclosure also provides a tissue cavity locating device for an endoscope, as shown in fig. 5, the device 50 comprising:
a receiving module 51, configured to receive a cavity image sequence to be identified, where the cavity image sequence includes multiple continuous images, and a last image in the cavity image sequence is obtained by an endoscope at a current position of the endoscope;
a first determining module 52, configured to determine, according to the cavity image sequence and a keypoint identification model, a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image, where the target direction point is used to indicate a next target moving direction of the endoscope at a current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
Optionally, the keypoint recognition model is trained by a training device, and the training device includes:
the acquisition module is used for acquiring a plurality of groups of training samples, wherein each group of training samples comprises a training image sequence and a label image corresponding to the training image sequence;
a first processing module, configured to input a target input image sequence into the convolution sub-network, obtain a spatial feature image corresponding to the target input image sequence, and input the target input image sequence into the time circulation sub-network, obtain a temporal feature image corresponding to the target input image sequence, where the target input image sequence includes the training image sequence;
the fusion module is used for fusing the spatial characteristic image and the time characteristic image to obtain a fusion characteristic image;
the second processing module is used for inputting the fusion characteristic image into the decoding sub-network to obtain a direction characteristic image;
the second determining module is used for determining the target loss of the key point identification model according to the direction characteristic image and the label image corresponding to the target input image sequence;
and the updating module is used for updating the parameters of the key point identification model according to the target loss under the condition that the updating condition is met.
Optionally, the target input image sequence further includes a processed image sequence, the processed image sequence is an image sequence obtained by performing preprocessing based on the training image sequence, and the tag images corresponding to the processed image sequence are images obtained by performing the same preprocessing on the tag images corresponding to the training image sequence.
Optionally, the second determining module includes:
the conversion submodule is used for converting the label image into a Gaussian feature map according to the positions of each point in the label image and the position of the direction marking point in the label image;
and the first determining submodule is used for determining the target loss according to the direction characteristic image and the Gaussian characteristic image.
Optionally, the label image is converted into a Gaussian feature map according to the positions of each point in the label image and the labeled direction point in the label image by the following formula:

$$
G(x, y) = \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)
$$

where $G(x, y)$ represents the feature value of the Gaussian feature map at coordinate $(x, y)$; $(x, y)$ represents the coordinates of an element in the label image; $(x_0, y_0)$ represents the coordinates of the direction point labeled in the label image; and $\sigma$ represents the hyper-parameter of the Gaussian transformation.
Optionally, the decoding subnetwork comprises a plurality of layers of feature decoding networks, and feature graphs output by each layer of feature decoding network have different sizes;
the second determining module includes:
the processing submodule is used for carrying out standardization processing on the feature map or the label image output by each layer of feature decoding network aiming at each layer of feature decoding network so as to obtain a target feature map and a target label image which correspond to the layer of feature decoding network and have the same size;
the second determining submodule is used for determining the loss corresponding to each layer of the feature decoding network according to the target feature image and the target label image corresponding to the layer of the feature decoding network;
and the third determining submodule is used for determining the target loss of the key point identification model according to the loss corresponding to each layer of feature decoding network.
Optionally, the apparatus further comprises:
and the sending module is used for sending the target direction point to a driving device of the endoscope so as to enable the endoscope to move to the target direction point and trigger the receiving module to receive the cavity image sequence to be identified until the endoscope reaches a target position point.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by an endoscope at the current position; determining a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image according to the cavity image sequence and the key point identification model, wherein the target direction point is used for indicating the next target moving direction of the endoscope at the current position of the endoscope; the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module does not, in some cases, constitute a limitation of the module itself; for example, the receiving module may also be described as a "module that receives a sequence of cavity images to be identified".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides a method of tissue cavity localization for an endoscope, wherein the method comprises:
receiving a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by an endoscope at the current position;
determining a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image according to the cavity image sequence and the key point identification model, wherein the target direction point is used for indicating the next target moving direction of the endoscope at the current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
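The following is a minimal, non-authoritative sketch of how a key point identification model of this kind could be assembled. A PyTorch implementation, the module sizes, the choice of a GRU for the time circulation sub-network, and the global pooling placed before it are illustrative assumptions rather than the configuration disclosed above.

```python
# Illustrative sketch only: PyTorch, the module sizes, the GRU, and the global
# pooling are assumptions made for exposition, not the disclosed implementation.
import torch
import torch.nn as nn

class KeypointDirectionModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Convolution sub-network: spatial features of each frame in the sequence.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Time circulation (temporal recurrent) sub-network over per-frame descriptors.
        self.rnn = nn.GRU(input_size=hidden, hidden_size=hidden, batch_first=True)
        # Decoding sub-network: decodes the fused features into a direction heatmap.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(2 * hidden, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.conv(frames.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, *feats.shape[1:])            # (B, T, C, H/4, W/4)
        spatial = feats[:, -1]                                   # spatial features of the last image
        temporal, _ = self.rnn(feats.mean(dim=(3, 4)))           # (B, T, C) pooled descriptors
        temporal = temporal[:, -1][:, :, None, None].expand_as(spatial)
        fused = torch.cat([spatial, temporal], dim=1)            # fuse spatial + temporal features
        return self.decode(fused)                                # (B, 1, H, W) direction heatmap
```

The predicted target direction point can then be read off the decoded heatmap, for example as the location of its maximum value.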
Example 2 provides the method of example 1, wherein the keypoint recognition model is trained by:
acquiring a plurality of groups of training samples, wherein each group of training samples comprises a training image sequence and a label image corresponding to the training image sequence;
inputting a target input image sequence into the convolution sub-network to obtain a spatial feature image corresponding to the target input image sequence, and inputting the target input image sequence into the time circulation sub-network to obtain a temporal feature image corresponding to the target input image sequence, wherein the target input image sequence comprises the training image sequence;
fusing the spatial characteristic image and the temporal characteristic image to obtain a fused characteristic image;
inputting the fusion characteristic image into the decoding sub-network to obtain a direction characteristic image;
determining the target loss of the key point identification model according to the direction characteristic image and the label image corresponding to the target input image sequence;
and updating the parameters of the key point identification model according to the target loss under the condition that an updating condition is met.
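A hedged sketch of one training step from Example 2 follows; the Adam optimizer, the mean-squared-error loss, and updating on every batch (as the "update condition") are assumptions for illustration only.

```python
# Hedged sketch of a single training step; the optimizer, the MSE loss, and
# updating on every batch (the "update condition") are assumptions.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, image_sequence, label_heatmap):
    """image_sequence: (B, T, 3, H, W); label_heatmap: (B, 1, H, W) Gaussian label."""
    direction_map = model(image_sequence)            # spatial + temporal features, decoded
    loss = F.mse_loss(direction_map, label_heatmap)  # target loss (assumed MSE here)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                 # parameter update
    return loss.item()

# Usage (model class and optimizer are assumptions):
# model = KeypointDirectionModel()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = train_step(model, optimizer, batch_images, batch_labels)
```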
Example 3 provides the method of example 2, wherein the target input image sequence further includes a processed image sequence, the processed image sequence is an image sequence obtained by performing preprocessing based on the training image sequence, and the label images corresponding to the processed image sequence are images obtained by performing the same preprocessing on the label images corresponding to the training image sequence.
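The paired preprocessing of Example 3 can be illustrated as follows; the random horizontal flip is only one possible preprocessing and is an assumption here, the point being that the label image receives exactly the same transformation as its training image sequence.

```python
# Sketch of the paired preprocessing in Example 3: whatever transform is applied
# to the training image sequence is applied identically to its label image.
# The random horizontal flip is an illustrative assumption.
import numpy as np

def augment_pair(image_sequence, label_image, rng=np.random):
    """image_sequence: (T, H, W, 3) array; label_image: (H, W) array."""
    if rng.rand() < 0.5:
        image_sequence = image_sequence[:, :, ::-1, :].copy()   # flip every frame
        label_image = label_image[:, ::-1].copy()               # same flip on the label
    return image_sequence, label_image
```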
Example 4 provides the method of example 2, wherein determining the target loss of the keypoint identification model according to the direction feature image and the label image corresponding to the target input image sequence includes:
converting the label image into a Gaussian feature map according to the position of each point in the label image and the position of the annotated direction point in the label image;
and determining the target loss according to the direction characteristic image and the Gaussian characteristic image.
Example 5 provides the method of example 4, wherein the label image is converted into a Gaussian feature map according to the position of each point in the label image and the annotated direction point in the label image by the following formula:
$$G(x, y) = \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)$$

where $G(x, y)$ represents the feature value of the Gaussian feature map at coordinate $(x, y)$; $(x, y)$ represents the coordinate value of an element in the label image; $(x_0, y_0)$ represents the coordinate value of the direction point annotated in the label image; and $\sigma$ represents the hyper-parameter of the Gaussian transformation.
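For illustration, the conversion above can be computed as in the following sketch; the value of the hyper-parameter sigma is an assumption, not a value taken from the disclosure.

```python
# Numerical sketch of the Gaussian conversion; sigma is the hyper-parameter of
# the Gaussian transformation and its value here is an assumption.
import numpy as np

def gaussian_feature_map(height, width, direction_point, sigma=8.0):
    """direction_point: (x0, y0), the annotated direction point in the label image."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))   # element coordinates
    x0, y0 = direction_point
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
```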
Example 6 provides the method of example 2, wherein the decoding subnetwork comprises multiple layers of feature decoding networks, each layer of feature decoding network outputting a feature map of a different size;
determining the target loss of the key point identification model according to the direction feature image and the label image corresponding to the target input image sequence, wherein the determining comprises:
for each layer of the feature decoding network, carrying out standardization processing on the feature map output by the layer of feature decoding network or on the label image, so as to obtain a target feature map and a target label image which correspond to the layer of feature decoding network and have the same size;
for each layer of the feature decoding network, determining the loss corresponding to the layer of feature decoding network according to the target feature map and the target label image corresponding to the layer of feature decoding network;
and determining the target loss of the key point identification model according to the loss corresponding to each layer of feature decoding network.
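A possible reading of the multi-layer loss in Example 6 is sketched below; bilinear resizing of the label image as the "standardization processing" and summing the per-layer losses are assumptions about how the per-layer losses are obtained and combined.

```python
# A possible reading of the multi-layer loss in Example 6; bilinear resizing and
# summing the per-layer losses are assumptions.
import torch.nn.functional as F

def multi_scale_loss(decoder_feature_maps, label_heatmap):
    """decoder_feature_maps: list of (B, 1, h_i, w_i) maps, one per decoding layer;
    label_heatmap: (B, 1, H, W) Gaussian label image."""
    total = 0.0
    for feature_map in decoder_feature_maps:
        target = F.interpolate(label_heatmap, size=feature_map.shape[-2:],
                               mode="bilinear", align_corners=False)  # same size as this layer
        total = total + F.mse_loss(feature_map, target)               # loss for this layer
    return total                                                      # target loss of the model
```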
Example 7 provides the method of example 1, wherein the method further comprises:
transmitting the target direction point to a driving device of the endoscope to move the endoscope to the target direction point;
and returning to the step of receiving the cavity image sequence to be identified until the endoscope reaches the target position point.
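The closed navigation loop of Example 7 might look as follows; `endoscope`, its methods, and the argmax decoding of the direction heatmap are hypothetical names introduced only to show the control flow.

```python
# Control-flow sketch of Example 7; `endoscope` and its methods (capture_sequence,
# move_toward, at_target) are hypothetical names used only to illustrate the loop.
import torch

def navigate(endoscope, model):
    while not endoscope.at_target():
        frames = endoscope.capture_sequence()          # (1, T, 3, H, W); the last frame
                                                       # is taken at the current position
        with torch.no_grad():
            heatmap = model(frames)[0, 0]              # (H, W) direction heatmap
        idx = int(torch.argmax(heatmap))
        y, x = divmod(idx, heatmap.shape[1])           # target direction point (x, y)
        endoscope.move_toward((x, y))                  # transmit to the driving device
```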
Example 8 provides a tissue cavity positioning device for an endoscope, wherein the device comprises:
a receiving module, configured to receive a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by the endoscope at the current position;
a first determining module, configured to determine, according to the cavity image sequence and a keypoint recognition model, a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image, where the target direction point is used to indicate a next target moving direction of the endoscope at a current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
In accordance with one or more embodiments of the present disclosure, Example 9 provides a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processing device, implements the steps of the method of any one of examples 1-7.
Example 10 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method of any of examples 1-7.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features disclosed in the present disclosure that have similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (10)

1. A method of tissue cavity localization for an endoscope, the method comprising:
receiving a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by an endoscope at the current position;
determining a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image according to the cavity image sequence and the key point identification model, wherein the target direction point is used for indicating the next target moving direction of the endoscope at the current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
2. The method of claim 1, wherein the keypoint recognition model is trained by:
acquiring a plurality of groups of training samples, wherein each group of training samples comprises a training image sequence and a label image corresponding to the training image sequence;
inputting a target input image sequence into the convolution sub-network to obtain a spatial feature image corresponding to the target input image sequence, and inputting the target input image sequence into the time circulation sub-network to obtain a temporal feature image corresponding to the target input image sequence, wherein the target input image sequence comprises the training image sequence;
fusing the spatial characteristic image and the temporal characteristic image to obtain a fused characteristic image;
inputting the fusion characteristic image into the decoding sub-network to obtain a direction characteristic image;
determining the target loss of the key point identification model according to the direction characteristic image and the label image corresponding to the target input image sequence;
and updating the parameters of the key point identification model according to the target loss under the condition that an updating condition is met.
3. The method according to claim 2, wherein the target input image sequence further comprises a processed image sequence, the processed image sequence is an image sequence obtained by performing preprocessing based on the training image sequence, and the label images corresponding to the processed image sequence are images obtained by performing the same preprocessing on the label images corresponding to the training image sequence.
4. The method according to claim 2, wherein determining the target loss of the keypoint identification model according to the direction feature image and the label image corresponding to the target input image sequence comprises:
converting the label image into a Gaussian feature map according to the position of each point in the label image and the position of the annotated direction point in the label image;
and determining the target loss according to the direction characteristic image and the Gaussian characteristic image.
5. The method according to claim 4, wherein the label image is converted into a Gaussian feature map according to the position of each point in the label image and the annotated direction point in the label image by the following formula:
$$G(x, y) = \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)$$

where $G(x, y)$ represents the feature value of the Gaussian feature map at coordinate $(x, y)$; $(x, y)$ represents the coordinate value of an element in the label image; $(x_0, y_0)$ represents the coordinate value of the direction point annotated in the label image; and $\sigma$ represents the hyper-parameter of the Gaussian transformation.
6. The method of claim 2, wherein the decoding subnetwork comprises a plurality of layers of feature decoding networks, each layer of feature decoding network outputting a feature map of a different size;
determining the target loss of the key point identification model according to the direction feature image and the label image corresponding to the target input image sequence, wherein the determining comprises:
for each layer of the feature decoding network, carrying out standardization processing on the feature map output by the layer of feature decoding network or on the label image, so as to obtain a target feature map and a target label image which correspond to the layer of feature decoding network and have the same size;
for each layer of the feature decoding network, determining the loss corresponding to the layer of feature decoding network according to the target feature map and the target label image corresponding to the layer of feature decoding network;
and determining the target loss of the key point identification model according to the loss corresponding to each layer of feature decoding network.
7. The method of claim 1, further comprising:
transmitting the target direction point to a driving device of the endoscope to move the endoscope to the target direction point;
and returning to the step of receiving the cavity image sequence to be identified until the endoscope reaches the target position point.
8. A tissue cavity positioning device for an endoscope, the device comprising:
a receiving module, configured to receive a cavity image sequence to be identified, wherein the cavity image sequence comprises a plurality of continuous images, and the last image in the cavity image sequence is obtained by the endoscope at the current position;
a first determining module, configured to determine, according to the cavity image sequence and a keypoint recognition model, a target direction point of a tissue cavity corresponding to the cavity image sequence relative to the last image, where the target direction point is used to indicate a next target moving direction of the endoscope at a current position of the endoscope;
the key point identification model comprises a convolution sub-network, a time circulation sub-network and a decoding sub-network, wherein the convolution sub-network is used for acquiring the spatial features of the cavity image sequence, the time circulation sub-network is used for acquiring the temporal features of the cavity image sequence, and the decoding sub-network is used for decoding based on the spatial features and the temporal features to acquire the target direction points.
9. A computer-readable medium, on which a computer program is stored, characterized in that the program, when being executed by processing means, carries out the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method according to any one of claims 1 to 7.
CN202111033760.9A 2021-09-03 2021-09-03 Tissue cavity positioning method, device, medium and equipment for endoscope Active CN113487605B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111033760.9A CN113487605B (en) 2021-09-03 2021-09-03 Tissue cavity positioning method, device, medium and equipment for endoscope
PCT/CN2022/104089 WO2023029741A1 (en) 2021-09-03 2022-07-06 Tissue cavity locating method and apparatus for endoscope, medium and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111033760.9A CN113487605B (en) 2021-09-03 2021-09-03 Tissue cavity positioning method, device, medium and equipment for endoscope

Publications (2)

Publication Number Publication Date
CN113487605A (en) 2021-10-08
CN113487605B CN113487605B (en) 2021-11-19

Family

ID=77947180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111033760.9A Active CN113487605B (en) 2021-09-03 2021-09-03 Tissue cavity positioning method, device, medium and equipment for endoscope

Country Status (2)

Country Link
CN (1) CN113487605B (en)
WO (1) WO2023029741A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200070062A (en) * 2018-12-07 2020-06-17 주식회사 포인바이오닉스 System and method for detecting lesion in capsule endoscopic image using artificial neural network
CN111666998A (en) * 2020-06-03 2020-09-15 电子科技大学 Endoscope intelligent intubation decision-making method based on target point detection
CN111915573A (en) * 2020-07-14 2020-11-10 武汉楚精灵医疗科技有限公司 Digestive endoscopy focus tracking method based on time sequence feature learning
CN112348125A (en) * 2021-01-06 2021-02-09 安翰科技(武汉)股份有限公司 Capsule endoscope image identification method, equipment and medium based on deep learning
CN112766416A (en) * 2021-02-10 2021-05-07 中国科学院深圳先进技术研究院 Digestive endoscopy navigation method and system
CN113112609A (en) * 2021-03-15 2021-07-13 同济大学 Navigation method and system for lung biopsy bronchoscope

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487605B (en) * 2021-09-03 2021-11-19 北京字节跳动网络技术有限公司 Tissue cavity positioning method, device, medium and equipment for endoscope

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023029741A1 (en) * 2021-09-03 2023-03-09 北京字节跳动网络技术有限公司 Tissue cavity locating method and apparatus for endoscope, medium and device
CN113705546A (en) * 2021-10-28 2021-11-26 武汉楚精灵医疗科技有限公司 Interference type recognition model training method, recognition method and device and electronic equipment
CN114332019A (en) * 2021-12-29 2022-04-12 小荷医疗器械(海南)有限公司 Endoscope image detection assistance system, method, medium, and electronic apparatus
WO2023124876A1 (en) * 2021-12-29 2023-07-06 小荷医疗器械(海南)有限公司 Endoscope image detection auxiliary system and method, medium and electronic device
WO2023138619A1 (en) * 2022-01-21 2023-07-27 小荷医疗器械(海南)有限公司 Endoscope image processing method and apparatus, readable medium, and electronic device
CN114332080A (en) * 2022-03-04 2022-04-12 北京字节跳动网络技术有限公司 Tissue cavity positioning method and device, readable medium and electronic equipment

Also Published As

Publication number Publication date
WO2023029741A1 (en) 2023-03-09
CN113487605B (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN113487605B (en) Tissue cavity positioning method, device, medium and equipment for endoscope
CN113496512B (en) Tissue cavity positioning method, device, medium and equipment for endoscope
CN112767329B (en) Image processing method and device and electronic equipment
CN113487608B (en) Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus
CN113496489B (en) Training method of endoscope image classification model, image classification method and device
CN110532981B (en) Human body key point extraction method and device, readable storage medium and equipment
CN112541928A (en) Network training method and device, image segmentation method and device and electronic equipment
US11417014B2 (en) Method and apparatus for constructing map
CN113658178B (en) Tissue image identification method and device, readable medium and electronic equipment
CN113470029B (en) Training method and device, image processing method, electronic device and storage medium
CN114332019B (en) Endoscopic image detection assistance system, method, medium, and electronic device
CN113469295B (en) Training method for generating model, polyp recognition method, device, medium, and apparatus
WO2023207564A1 (en) Endoscope advancing and retreating time determining method and device based on image recognition
CN112288816A (en) Pose optimization method, pose optimization device, storage medium and electronic equipment
CN113470030A (en) Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
CN114332080B (en) Tissue cavity positioning method and device, readable medium and electronic equipment
CN114937178B (en) Multi-modality-based image classification method and device, readable medium and electronic equipment
CN117237761A (en) Training method of object re-recognition model, object re-recognition method and device
CN115375884B (en) Free viewpoint synthesis model generation method, image drawing method and electronic device
CN115375657A (en) Method for training polyp detection model, detection method, device, medium, and apparatus
CN113470026B (en) Polyp recognition method, device, medium, and apparatus
CN116012875A (en) Human body posture estimation method and related device
CN114299289A (en) Image processing method and device, electronic equipment and storage medium
CN116704593A (en) Predictive model training method, apparatus, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211008

Assignee: Xiaohe medical instrument (Hainan) Co.,Ltd.

Assignor: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Contract record no.: X2021990000694

Denomination of invention: Tissue cavity positioning method, device, medium and equipment for endoscope

License type: Common License

Record date: 20211117

EE01 Entry into force of recordation of patent licensing contract