CN115553925B - Endoscope control model training method and device, equipment and storage medium - Google Patents

Endoscope control model training method and device, equipment and storage medium

Info

Publication number
CN115553925B
Authority
CN
China
Prior art keywords
endoscope
control mode
preset
image
control data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211547911.7A
Other languages
Chinese (zh)
Other versions
CN115553925A (en)
Inventor
何进雄
谭有余
谭文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Seesheen Medical Technology Co ltd
Original Assignee
Zhuhai Seesheen Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Seesheen Medical Technology Co ltd filed Critical Zhuhai Seesheen Medical Technology Co ltd
Priority to CN202211547911.7A
Publication of CN115553925A
Application granted
Publication of CN115553925B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/32 Surgical robots operating autonomously
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/70 Manipulators specially adapted for use in surgery
    • A61B 34/77 Manipulators with motion or force scaling
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B 23/28 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B 23/285 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for injections, endoscopy, bronchoscopy, sigmoidscopy, insertion of contraceptive devices or enemas

Abstract

The invention belongs to the technical field of endoscope control, and discloses an endoscope control model training method, device, equipment and storage medium.

Description

Endoscope control model training method and device, equipment and storage medium
Technical Field
The invention belongs to the technical field of endoscope control, and particularly relates to an endoscope control model training method, an endoscope control model training device, equipment and a storage medium.
Background
With the rapid development of medical technology, faster and more effective diagnosis and treatment techniques are being pursued, and bionic intelligent instruments have become a key field of attention. An endoscopic surgery assisting robot is an advanced medical device that mainly comprises three parts: a mechanical arm, a console and a three-dimensional imaging system. During an operation, the robot is placed at the operating table; the surgeon controls the mechanical arm through the console, and the arm drives the endoscope into a body cavity for diagnosis and treatment. The mechanical arm can mimic the functions of body parts such as fingers and elbow joints, flexibly avoid obstacles such as ribs, and thus replace manual operation of the endoscope with automated operation.
Automated control is steadier and less prone to tremor, allows freer rotation, and avoids the fatigue associated with manual operation, which favors the development of minimally invasive surgery; at the same time, the three-dimensional imaging system clearly presents the diseased site, increasing surgical accuracy. The endoscopic surgery assisting robot therefore offers intelligence, stability and precision, can perform complex operations in a minimally invasive manner, and is more conducive to patient recovery.
Before an operation, a planned path and the various control commands for the endoscope inside the body cavity can be set in advance. When an automatic control mode is started, the console continuously sends operation instructions to the robot, and the robot moves the endoscope automatically along the preset path according to those instructions, thereby assisting the surgical examination.
In practical applications, posture changes or other motion factors (such as intestinal peristalsis) may displace the diseased tissue. The target position preset for the diseased tissue before the operation is then no longer its actual position, that is, the current lesion target position has deviated and position compensation is required. At present, such deviations are mostly recognized by eye, after which the system is switched to a manual control mode for manual adjustment and compensation; however, visual recognition takes time and human judgment is not very accurate, so the efficiency and accuracy of the existing position compensation approach are low.
Disclosure of Invention
The invention aims to provide an endoscope control model training method, device, equipment and storage medium capable of training an endoscope control model with an automatic position compensation function, thereby improving compensation efficiency and accuracy.
A first aspect of the invention discloses an endoscope control model training method, which comprises the following steps:
acquiring first control data of an endoscope in an automatic control mode;
when it is detected that the endoscope has switched from the automatic control mode to the manual control mode, collecting second control data of the endoscope in the manual control mode;
determining an actual end point position where the endoscope stays when the manual control mode ends;
if the actual end point position is not located on the preset path, calculating the relative distance between the actual end point position and a preset target position;
if the relative distance between the actual end point position and a preset target position is smaller than a distance threshold, taking the first control data and the second control data as training samples;
and training the deep learning neural network according to the training samples to obtain a target control model.
A second aspect of the present invention discloses an endoscope control model training device, including:
the first acquisition unit is used for acquiring first control data of the endoscope in an automatic control mode;
the second acquisition unit is used for collecting second control data of the endoscope in the manual control mode when it is detected that the endoscope has switched from the automatic control mode to the manual control mode;
the positioning unit is used for determining the actual end point position where the endoscope stays when the manual control mode ends;
the calculating unit is used for calculating the relative distance between the actual end point position and a preset target position when the actual end point position is not positioned on a preset path;
a determining unit, configured to use the first control data and the second control data as training samples when the relative distance between the actual end point position and the preset target position is smaller than a distance threshold;
and the training unit is used for training the deep learning neural network according to the training samples to obtain a target control model.
A third aspect of the invention discloses an electronic device comprising a memory storing executable program code and a processor coupled to the memory; the processor calls the executable program code stored in the memory to execute the endoscope control model training method disclosed in the first aspect.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the endoscope control model training method disclosed in the first aspect.
With the endoscope control model training method, device, equipment and storage medium of the invention, first control data of the endoscope are acquired in the automatic control mode, and second control data are collected in the manual control mode when a switch from the automatic control mode to the manual control mode is detected. The actual end point position where the endoscope stays at the end of the manual control mode is then determined. If that position is not on the preset path and its relative distance to the preset target position is smaller than a distance threshold, the second control data are judged to be manual position-compensation data recorded after the target end point position deviated; the second control data, together with the first control data from the automatic control mode, are then used as training samples to train a deep learning neural network and obtain the target control model. The trained endoscope control model can thus autonomously re-plan a path after the target end point position deviates and perform position compensation automatically, without switching to the manual control mode for manual compensation, thereby improving compensation efficiency and accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles and effects of the invention.
Unless otherwise specified or defined, the same reference numerals in different figures refer to the same or similar features, and different reference numerals may be used for the same or similar features.
FIG. 1 is a flow chart of an endoscope control model training method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an endoscope control model training device according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Description of the reference numerals:
201. a first acquisition unit; 202. a second acquisition unit; 203. a positioning unit; 204. a calculation unit; 205. a determination unit; 206. a training unit; 301. a memory; 302. a processor.
Detailed Description
Unless specifically stated or otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. When the technical solutions of the present invention are applied in a realistic scenario, the terms used herein may also carry the meanings required to achieve those technical solutions. As used herein, "first", "second" and so on are used merely to distinguish names and do not denote a particular quantity or order. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As used herein, unless otherwise specified or defined, the terms "comprises", "comprising" and "including" are open-ended and mean "including, but not limited to".
It goes without saying that technical content or technical features contrary to the object of the present invention, or clearly contradicting it, are excluded.
In order to facilitate an understanding of the invention, specific embodiments thereof will be described in more detail below with reference to the accompanying drawings.
As shown in FIG. 1, an embodiment of the present invention discloses an endoscope control model training method. The method may be executed by an electronic device such as a computer, notebook computer or tablet computer, or by an endoscope control model training device embedded in such an electronic device; the present invention is not limited in this respect. In this embodiment, an electronic device is taken as the example. The method comprises the following steps S10-S90:
S10: the electronic device constructs a three-dimensional map from the imaging examination image and identifies a preset target position based on the three-dimensional map.
In the embodiment of the invention, the electronic equipment can be in communication connection with the endoscopic surgery auxiliary robot through a wireless or wired network, so that the electronic equipment can output a control signal to control the mechanical arm of the robot to drive the endoscope to move. Wherein the endoscope can be an optical endoscope such as a gastroscope, an enteroscope, a cystoscope, a bronchoscope, a thoracoscope or a laparoscope; the imaging examination image is a CT image or an MRI image, and the predetermined target position is a position of a lesion tissue (e.g., a tumor or a polyp) determined by image recognition based on the three-dimensional map.
Specifically, after the electronic device identifies the lesion tissue position based on the three-dimensional map, human-machine interaction can be used to confirm whether that position is consistent with the lesion position judged by the operator. If so, the identified lesion tissue position is taken as the preset target position; if not, the preset target position is set according to a specified end position input by the operator (i.e., the user).
S20: the electronic device plans a path from a designated starting point to the preset target position as the preset path.
Specifically, the preset target position is an end point of a preset path, and the preset path may be obtained through manual planning, automatic planning, or human-computer interactive planning.
Manual planning: the operator inputs the path parameters needed by each step through the system interface, and controls the viewpoint to move forward step by step in the virtual organ. Manual planning is a very time-consuming roaming method, and if it is an inexperienced operator, it is very likely to get lost.
Automatic planning: the whole navigation path of the virtual organ is automatically extracted before roaming. Therefore, the automatic roaming of the virtual organ can be completed quickly, and doctors are assisted to know the whole structure and pathological change condition of the organ quickly. However, the automatic roaming method under the automatic path planning technology cannot realize the interactive function between the system and the operator, and cannot allow the operator to perform detailed examination on the suspected organ area.
Man-machine interactive planning: the navigation path is extracted in real time from the start point and end point specified by the operator, enabling real-time roaming. To meet the real-time requirement, the interactive path planning algorithm must be computationally light and able to quickly extract a continuous, smooth path connecting the start and end points.
As an example of automatic planning, an optimal route from a specified start point to an end point position may be planned using a central-path algorithm based on a distance transformation. Specifically, the distance-from-boundary (DFB) value of every data voxel in the three-dimensional map is calculated to establish a DFB field; a source point is then designated as the root node of a maximum cost tree, the tree is built using the voxels' DFB values as weights, and the distance from each voxel to the source point is calculated at the same time to establish the distance-from-source (DFS) field.
Taking a segment of trachea as an example, step S20 may include a step of establishing the DFB field of the image and a step of establishing the maximum cost tree and the DFS field. Specifically, define OV (object voxel) as the set of tracheal voxels, BG (background voxel) as the set of background voxels and BV (boundary voxel) as the set of boundary voxels; let F (face) denote face-adjacent voxels, E (edge) edge-adjacent voxels and V (vertex) vertex-adjacent voxels. The DFB field is established as follows:
and S201, image binarization processing.
And (4) carrying out binarization processing on the three-dimensional image, and representing the image content by black and white, wherein the black part represents a background, and the white part is a segmented trachea.
S202: boundary voxel initialization.
Set the DFB value of every background voxel P (P ∈ BG) to 0 and the DFB value of every tracheal voxel P (P ∈ OV) to ∞. For every tracheal voxel P, search its 26 neighbouring voxels one by one; if a background voxel is found as a face-, edge- or vertex-adjacent neighbour, set the DFB value of P to 10, 14 or 17 respectively, add P to the BV set, and mark it.
S203: boundary propagation.
Search the 26 neighbouring voxels of every voxel P in the BV set one by one and assign DFB(P26) = min{ DFB(P26), DFB(P) + w }, where w = 10 for a face-adjacent, 14 for an edge-adjacent and 17 for a vertex-adjacent neighbour P26. Mark each voxel P26 whose DFB value changed and cancel the mark on P. Repeat this process until the BV set is empty, at which point the DFB field is complete.
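To make steps S201-S203 concrete, the following is a minimal sketch of the DFB-field construction, assuming the volume is a binary 3D numpy array in which True marks tracheal (OV) voxels; function and variable names such as `build_dfb_field` are illustrative, not taken from the patent.

```python
import numpy as np
from collections import deque

# Chamfer weights for face- (F), edge- (E) and vertex- (V) adjacent neighbours.
WEIGHTS = {1: 10, 2: 14, 3: 17}

def neighbours26(p, shape):
    """Yield each 26-adjacent voxel of p together with its chamfer weight."""
    x, y, z = p
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                q = (x + dx, y + dy, z + dz)
                if all(0 <= c < s for c, s in zip(q, shape)):
                    yield q, WEIGHTS[abs(dx) + abs(dy) + abs(dz)]

def build_dfb_field(ov):
    """Distance-from-boundary field of a binary volume (True = tracheal voxel)."""
    dfb = np.where(ov, np.inf, 0.0)        # S202: BG voxels -> 0, OV voxels -> inf
    bv = deque()
    for p in zip(*np.nonzero(ov)):         # S202: initialise boundary voxels
        for q, w in neighbours26(p, ov.shape):
            if not ov[q] and w < dfb[p]:   # background neighbour found
                dfb[p] = w                 # 10 / 14 / 17 by adjacency type
        if np.isfinite(dfb[p]):
            bv.append(p)                   # P joins the BV set
    while bv:                              # S203: propagate the boundary inwards
        p = bv.popleft()
        for q, w in neighbours26(p, ov.shape):
            if ov[q] and dfb[p] + w < dfb[q]:
                dfb[q] = dfb[p] + w        # DFB(P26) = min{DFB(P26), DFB(P) + w}
                bv.append(q)               # changed voxel re-enters the BV set
    return dfb
```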
With the DFB field established by the above steps, the maximum cost tree is built using the voxels' DFB values as weights, based on the property that centrally located voxels have locally maximal DFB values: voxels with locally maximal DFB values are placed on the trunk of the tree, every voxel in the lumen is treated as a tree node, and one node is designated as the root. Each node except the root points to a parent node, so that all voxels in the tracheal lumen are connected into a directed connected tree.
While building the maximum cost tree, when a node C points to a parent node, the weight of the edge connecting C to that parent is the parent's DFB value. Taking the root node of the maximum cost tree as the initial point, the DFS value from every node on the tree to the initial point is calculated, thereby establishing the DFS field.
Specifically, the establishing of the maximum cost tree and the DFS field includes the following steps:
S211: initialize a double-ended queue M and designate an initial point S.
Specifically, the parent node of S is set to null (that is, the link attribute of the initial point S is 0) and the DFS value of S is set to zero.
S212: add the initial point S to queue M, set S as the current point C, and mark S as processed.
S213: loop, traversing the nodes one by one.
S214: find the 26 neighbouring points B of the current point C; for every unprocessed neighbour B, add B to the tail of queue M and set the link attribute of B to the current point C.
The DFS value of a neighbouring point B is then the DFS value of the current point C plus the distance between B and C.
S215: mark the neighbouring point B as processed.
The parent node and DFS value of B need not be processed thereafter.
S216: delete the current node C from queue M.
S217: take the node with the largest DFB value in queue M as the current node C.
S218: repeat steps S213-S217 until queue M is empty.
Executing this algorithm builds a maximum spanning tree in the data field; backtracking from the tail node of the longest branch yields the central path of this tracheal segment, and repeating the process finally yields the whole centre line. The centre line is then smoothed (cubic Bezier curve) and Bezier-interpolated to obtain the final planned path, which serves as the preset path.
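The tree construction of steps S211-S218 and the centre-line backtracking can be sketched as follows, reusing `build_dfb_field` and `neighbours26` from the previous sketch; picking the maximum-DFB element from a set stands in for queue M, and the names are again illustrative.

```python
import math
import numpy as np

def centre_line(ov, start):
    """Trace a central path through a binary lumen volume from a start voxel."""
    dfb = build_dfb_field(ov)
    parent = {start: None}                 # S211: link attribute of S is null
    dfs = {start: 0.0}                     # S211: DFS value of S is zero
    frontier = {start}                     # stands in for queue M
    while frontier:                        # S213/S218: loop until M is empty
        # S217: take the node with the largest DFB value as the current point C
        c = max(frontier, key=lambda p: dfb[p])
        frontier.remove(c)                 # S216: delete C from the queue
        for b, _w in neighbours26(c, ov.shape):
            if ov[b] and b not in parent:  # S214: unprocessed neighbour B
                parent[b] = c              # link attribute of B is C
                dfs[b] = dfs[c] + math.dist(b, c)  # DFS(B) = DFS(C) + |BC|
                frontier.add(b)            # S215: B is now marked as processed
    # Backtrack from the farthest node of the longest branch to the root.
    tail = max(dfs, key=dfs.get)
    path = []
    while tail is not None:
        path.append(tail)
        tail = parent[tail]
    return path[::-1]                      # central path from start to tail
```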
S30: the electronic device controls the endoscope to start the automatic control mode so that the endoscope moves on the preset path.
Once the automatic control mode is started, the endoscope can move autonomously on the preset path.
S40: the electronic device acquires the first control data of the endoscope in the automatic control mode.
The first control data are the automatic control data of the endoscope in the automatic control mode, comprising the coordinate information of the endoscope during movement and the control instructions associated with that coordinate information.
Specifically, the control instructions for the endoscope are the commands given to the endoscopic surgery assisting robot, manipulator or the like to bring the endoscope from the designated starting point to the preset target position. They include, but are not limited to: speed commands, e.g. moving the endoscope faster in a straight lumen and slower in a complex one; commands based on the control environment, e.g. moving straight ahead in a straight section but steering to avoid an obstacle where tissue structure blocks the lumen; and autonomous path selection, e.g. entering the lumen dictated by the originally planned path at a multi-lumen bifurcation; and so on.
Further optionally, when the endoscope is in the automatic control mode, the electronic device may also perform the following steps S41 to S44:
and S41, controlling the endoscope to shoot to obtain a current cavity image.
While moving on the preset path, the endoscope can be controlled to shoot at a preset frequency, yielding several image frames, each corresponding to a different time point; after each shot, the most recently captured image is taken as the current cavity image, so that the current cavity image is updated in real time.
S42: calculate the matching degree between the current cavity image and a preset target image.
After each shot of the endoscope, the matching degree between the current cavity image and the preset target image is calculated. The preset target image is the image corresponding to the preset target position, i.e. it contains the lesion tissue area, so the matching degree indicates whether the endoscope is approaching the preset target position.
Generally, among the frames captured at successive time points while the endoscope moves on the preset path, later frames have a higher matching degree with the preset target image.
Specifically, a mean hash algorithm, a difference hash algorithm, or a perceptual hash algorithm may be used to calculate the similarity between the current cavity image and the preset target image, so as to obtain the matching degree between the current cavity image and the preset target image.
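As an illustration of the mean-hash option, the sketch below computes an 8x8 average hash with OpenCV and reports the fraction of matching bits as the matching degree; the helper names, the hash size and the BGR input assumption are all illustrative.

```python
import cv2
import numpy as np

def average_hash(image_bgr, size=8):
    """64-bit mean hash: bits are 1 where a pixel exceeds the mean intensity."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def matching_degree(current_cavity_image, preset_target_image):
    """Fraction of identical hash bits, in [0, 1]; 1.0 means identical hashes."""
    h1 = average_hash(current_cavity_image)
    h2 = average_hash(preset_target_image)
    return float(np.mean(h1 == h2))

# Example thresholds in the spirit of the text (S43): prompt the operator when
# FIRST_THRESHOLD <= matching degree < SECOND_THRESHOLD.
FIRST_THRESHOLD, SECOND_THRESHOLD = 0.75, 0.90
```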
Specifically, a node position can be set on the preset path, close enough to the preset target position that the endoscope can capture a complete and clear image of the lesion tissue area (the preset target position). The node position can be marked manually after the electronic device finishes planning the preset path, or a preset distance between the node position and the preset target position can be entered for the planned path: while the endoscope moves on the preset path, its relative distance to the preset target position is collected in real time by a position sensor, and the point at which this relative distance equals the preset distance is marked as the node position.
Specifically, the matching degree between an image captured in advance at the node position and the preset target image is set as a first threshold. If the matching degree is smaller than the first threshold, it can be determined that the current cavity image captured by the endoscope contains no image area matching the lesion tissue area of the preset target image, i.e. the endoscope is not yet close enough to the preset target position. The first threshold can be set to a value such as 50%, 60% or 75%.
As an alternative embodiment, the shooting step of S41 may be executed once the time the endoscope has spent in the automatic control mode reaches a set duration (for example, half the time the endoscope needs to travel the whole preset path).
Alternatively, in some other possible embodiments, while the endoscope is in the automatic control mode and before the shooting of S41, the current position of the endoscope may be obtained from a position sensor arranged in the endoscope. As the endoscope moves on the preset path, the closer its current position is to the preset target position, the higher the matching degree between the captured current cavity image and the preset target image; the shooting step can therefore be executed when the relative distance between the current position and the preset target position is smaller than or equal to the preset distance.
S43: when the matching degree reaches the first threshold but is smaller than a second threshold, output prompt information indicating that the target end point position has deviated.
The second threshold is greater than the first; if the first threshold is 75%, the second threshold may be set to 85%, 90%, 93% or the like. When the relative distance between the current position of the endoscope and the preset target position is detected to be smaller than or equal to the preset distance, the endoscope has reached a position from which a complete image of the lesion tissue area (the preset target position) can be acquired, so the current cavity image contains an image area matching the lesion tissue area of the preset target image. The endoscope then continuously evaluates the matching degree between the current cavity image and the preset target image; if the matching degree reaches the first threshold but remains below the second threshold, the overall match is still insufficient, and it can be determined that the lesion tissue position (i.e., the target end point position) has shifted and position compensation is required. The electronic device then outputs prompt information, for example by voice and/or text.
S44: when a switching instruction input by the user in response to the prompt information is received, switch the endoscope from the automatic control mode to the manual control mode.
While outputting the prompt information, the electronic device starts a timer. If the timed duration reaches a preset duration (e.g. 8, 10 or 15 seconds) without a switching instruction from the user, the electronic device takes no action; if a switching instruction is received within the preset duration, the endoscope is switched from the automatic control mode to the manual control mode. The switching instruction may be input by voice, touch, gesture, text or any other interactive method, which is not limited by the present invention.
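A sketch of this prompt-and-timeout behaviour of steps S43-S44 follows; `wait_for_user_switch` is a hypothetical helper that polls for the user's switching instruction, and the threshold and timeout values are examples taken from the text.

```python
import time

def maybe_switch_to_manual(matching_degree, wait_for_user_switch,
                           first=0.75, second=0.90, timeout_s=10.0):
    """Return "manual" if the user confirms the switch in time, else "auto"."""
    if not (first <= matching_degree < second):
        return "auto"                      # no target deviation detected (S43)
    print("Prompt: target end point position appears to have deviated")
    deadline = time.monotonic() + timeout_s    # preset duration, e.g. 8-15 s
    while time.monotonic() < deadline:
        # hypothetical: poll the UI (voice / touch / gesture / text) briefly
        if wait_for_user_switch(timeout=0.1):
            return "manual"                # S44: user confirmed the switch
    return "auto"                          # timeout elapsed: take no action
```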
S50: when it is detected that the endoscope has switched from the automatic control mode to the manual control mode, the electronic device collects second control data of the endoscope in the manual control mode.
Specifically, when the endoscope is switched from the automatic control mode to the manual control mode, all manual control data in the manual control mode may be collected as second control data, which at least includes coordinate information of the endoscope during manual control movement and a manually input control instruction for the endoscope.
Specifically, because the diseased tissue has shifted, the preset target position no longer accurately describes its location, and control instructions must be input manually to drive the endoscope to the actual position of the diseased tissue (i.e., the target end point position). The instructions can be entered manually into the endoscopic surgery assisting robot or manipulator, or the endoscope can be operated directly by hand, until it reaches the target end point position.
Preferably, when it is detected that the endoscope has switched from the automatic control mode to the manual control mode, and before the electronic device collects the second control data, the following steps S501 to S503 may also be performed:
S501: when it is detected that the endoscope has switched from the automatic control mode to the manual control mode, the electronic device takes the current moment as the switching time.
S502: the electronic device acquires displacement information of the endoscope between a historical time and the switching time.
The historical time precedes the switching time by a specified duration. For example, if the switching time is 4:00 pm and the specified duration is 1 minute, the historical time is 3:59 pm, and step S502 acquires the displacement information of the endoscope between 3:59 pm and 4:00 pm.
The displacement information is then compared with the preset path to identify whether the endoscope deviated from its route during the historical period before the mode switch. If the displacement information does not match the preset path, the endoscope probably strayed off course; in that case the switch to the manual control mode serves to correct the deviation, i.e., to guide the endoscope back onto the preset path through human-machine interaction. The target end point position then remains the preset target position, which is different from the case in which the lesion tissue has moved.
S503: if the displacement information matches the preset path, the electronic device collects the second control data of the endoscope in the manual control mode.
If the displacement information matches the preset path, the route did not deviate before the switching time, so a switch to manual control at that moment suggests that the lesion tissue position has changed. It can thus be further determined that the lesion tissue position (i.e., the target end point position) deviated at the switching time, and the second control data of the endoscope in the manual control mode are then collected.
By implementing steps S501-S503, the historical displacement information is used to rule out route deviation when a switch from the automatic control mode to the manual control mode is detected, before judging whether the target end point position has deviated. This reduces misjudgments and improves the accuracy and relevance of the collected manual control data.
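The displacement check of steps S501-S503 could be implemented along the following lines; the patent does not specify how "matching" is measured, so the nearest-point distance test and the 3 mm tolerance are assumptions.

```python
import numpy as np

def displacement_matches_path(trajectory, preset_path, tol_mm=3.0):
    """True if every recorded endoscope position lies within tol_mm of the path."""
    traj = np.asarray(trajectory, dtype=float)   # (N, 3) positions since the
    path = np.asarray(preset_path, dtype=float)  # historical time; (M, 3) path
    # distance from each trajectory point to its nearest preset-path point
    d = np.linalg.norm(traj[:, None, :] - path[None, :, :], axis=-1)
    return bool(d.min(axis=1).max() <= tol_mm)

# S503: collect the second control data only when the trajectory matches,
# i.e. the endoscope did not stray off course before the switching time.
```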
S60: the electronic device determines the actual end point position where the endoscope stays when the manual control mode ends.
If the lesion tissue position (i.e., the target end point position) has deviated, the endoscope is guided manually to the new location of the lesion tissue; that location is the actual end point position in the manual control mode.
It should be noted that in practical scenarios, when a posture change or a change in the internal environment (e.g., intestinal peristalsis) moves the diseased tissue, the offset is generally small (4 cm, 5 cm or 6 cm can be set as the maximum offset limit). This means the target (i.e., diseased tissue) change is usually discovered when the endoscope is about to reach the preset target position on the preset path, so only fine adjustment in the manual control mode is needed to guide the endoscope to the shifted target end point position.
Therefore, whether the endoscope merely strayed from its route can be judged more accurately by checking whether the actual end point position where it stays on exiting the manual control mode lies on the preset path. If the actual end point position is on the preset path, the case is identified as a route deviation; the manual control data collected in that case are mainly deviation-correction data rather than re-planning data for a moved target (i.e., lesion tissue), and are therefore excluded, which improves the specificity of the method.
S70: if the actual end point position is not located on the preset path, the electronic device calculates the relative distance between the actual end point position and the preset target position.
If the actual end point position is not on the preset path, it can be judged that the lesion tissue position (i.e., the target end point position) has deviated; the manual control data are then mainly path re-planning data, and the first control data and the second control data can be used as training samples. Preferably, in the embodiment of the present invention, the relative distance between the actual end point position and the preset target position is further calculated, and only when this relative distance is smaller than the distance threshold (i.e., the maximum offset limit) is it determined that the lesion tissue position (i.e., the target end point position) has deviated and the first and second control data taken as training samples. This improves the accuracy and relevance of the collected manual control data.
S80: if the relative distance between the actual end point position and the preset target position is smaller than the distance threshold, the electronic device takes the first control data and the second control data as training samples.
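Steps S60-S80 amount to a simple acceptance rule for training samples, sketched below; the on-path tolerance is an assumption, while the maximum offset limit follows the 4-6 cm range mentioned earlier in the text.

```python
import numpy as np

DISTANCE_THRESHOLD_MM = 50.0   # maximum offset limit, e.g. 4, 5 or 6 cm

def is_compensation_sample(actual_end, preset_path, preset_target,
                           on_path_tol_mm=3.0):
    """True if the manual data look like position compensation, not correction."""
    end = np.asarray(actual_end, dtype=float)
    path = np.asarray(preset_path, dtype=float)
    # S60/S70: an end point on the preset path means mere deviation correction
    if np.linalg.norm(path - end, axis=1).min() <= on_path_tol_mm:
        return False
    # S70/S80: accept only if the end point stayed within the offset limit
    rel = np.linalg.norm(end - np.asarray(preset_target, dtype=float))
    return bool(rel < DISTANCE_THRESHOLD_MM)
```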
S90: the electronic device trains the deep learning neural network according to the training samples to obtain the target control model.
The deep learning neural network comprises an input layer and a plurality of operation layers stacked on it; taken as a whole, the operation layers can be viewed as one complex function, for example a fully connected neural network (FCNN). Training the network propagates the error, driven by the loss function, back to each layer so as to update the network parameters.
In step S90, the electronic device may determine a target loss function corresponding to the training samples from a plurality of pre-stored loss functions. During end-to-end training of the deep learning neural network on the training samples, the target loss function is used to back-propagate the error to each layer of the network; training is deemed complete when the target loss function converges, yielding the target control model, which can subsequently be used for automatic control of the endoscope.
The target loss function may include, but is not limited to, a data loss and a regularization loss on the model parameters.
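As a sketch of step S90 in PyTorch, the code below trains a small fully connected network with an MSE data loss and a weight-decay regularization loss on the model parameters; the layer sizes, input/output dimensions and the choice of MSE are assumptions, since the patent fixes none of them.

```python
import torch
import torch.nn as nn

class EndoscopeControlNet(nn.Module):
    """Input layer plus stacked operation layers, forming a small FCNN."""
    def __init__(self, in_dim=12, out_dim=6):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

def train(model, loader, epochs=50, lr=1e-3, weight_decay=1e-4):
    """Fixed-epoch training loop; weight_decay adds the regularization loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    data_loss = nn.MSELoss()
    for _ in range(epochs):
        for states, commands in loader:   # samples built from first/second data
            opt.zero_grad()
            loss = data_loss(model(states), commands)
            loss.backward()               # propagate the error back to each layer
            opt.step()
    return model
```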
In summary, by implementing the embodiment of the present invention, the manual control data used for path re-planning when the lesion tissue position (i.e., the target end point position) deviates are collected and used as training samples. The endoscope control model obtained by training can therefore autonomously re-plan a path after the target end point position deviates and perform position compensation automatically, without switching to the manual control mode for manual compensation, thereby improving compensation efficiency and accuracy.
As shown in FIG. 2, an embodiment of the present invention discloses an endoscope control model training device, which includes a first acquiring unit 201, a second acquiring unit 202, a positioning unit 203, a calculating unit 204, a determining unit 205 and a training unit 206, wherein:
a first acquiring unit 201, configured to acquire the first control data of the endoscope in the automatic control mode;
a second acquiring unit 202, configured to collect the second control data of the endoscope in the manual control mode when it is detected that the endoscope has switched from the automatic control mode to the manual control mode;
the positioning unit 203, configured to determine the actual end point position where the endoscope stays when the manual control mode ends;
a calculating unit 204, configured to calculate a relative distance between the actual end point position and the preset target position when the actual end point position is not located on the preset path;
a determining unit 205, configured to take the first manipulation data and the second manipulation data as training samples when a relative distance between the actual end point position and the preset target position is smaller than a distance threshold;
and the training unit 206 is configured to train the deep learning neural network according to the training samples to obtain a target control model.
Optionally, the endoscope manipulation model training apparatus may further include the following units, not shown:
the identification unit is used for constructing a three-dimensional image according to the imaging examination image and identifying a preset target position based on the three-dimensional image before the first acquisition unit 201 acquires the first control data of the endoscope in the automatic control mode;
the planning unit is used for planning a path from the designated starting point to a preset target position as a preset path;
and the control unit is used for controlling the endoscope to start an automatic control mode so as to enable the endoscope to move on a preset path.
Optionally, the endoscope manipulation model training apparatus may further include the following units, not shown:
the shooting unit is used for controlling the endoscope to shoot to obtain a current cavity image when the endoscope is in an automatic control mode;
the matching unit is used for calculating the matching degree between the current cavity image and a preset target image;
the prompting unit is used for outputting prompting information for representing the deviation of the target end point position when the matching degree reaches a first threshold value and is smaller than a second threshold value;
and the switching unit is used for switching the endoscope from the automatic control mode to the manual control mode when receiving a switching instruction input by a user aiming at the prompt information.
As an optional embodiment, the shooting unit is specifically configured to acquire a current position of the endoscope in real time when the endoscope is in an automatic manipulation mode; and when the relative distance between the current position and the preset target position is smaller than or equal to the preset distance, controlling the endoscope to shoot to obtain a current cavity image.
As an alternative implementation, the second obtaining unit 202 may include the following sub-units, which are not shown in the drawing:
the determining subunit is used for determining that the current moment is the switching moment when the endoscope is detected to be switched from the automatic control mode to the manual control mode;
the displacement acquisition subunit is used for acquiring displacement information of the endoscope between the historical time and the switching time; the historical time is prior to the switching time, and the time length of the difference between the historical time and the switching time is the designated time length;
and the acquisition subunit is used for acquiring second control data of the endoscope in a manual control mode when the displacement information is matched with the preset path.
As shown in FIG. 3, an embodiment of the present invention discloses an electronic device, which includes a memory 301 storing executable program code and a processor 302 coupled to the memory 301;
the processor 302 calls the executable program code stored in the memory 301 to execute the endoscope control model training method described in the above embodiments.
An embodiment of the present invention further discloses a computer-readable storage medium, which stores a computer program that causes a computer to execute the endoscope control model training method described in each of the above embodiments.
The above embodiments are provided to illustrate, reproduce and deduce the technical solutions of the present invention, and to fully describe the technical solutions, the objects and the effects of the present invention, so as to make the public more thoroughly and comprehensively understand the disclosure of the present invention, and not to limit the protection scope of the present invention.
The above examples are not intended to be exhaustive of the invention and there may be many other embodiments not listed. Any alterations and modifications without departing from the spirit of the invention are within the scope of the invention.

Claims (9)

1. An endoscope control model training method, comprising:
acquiring first control data of an endoscope in an automatic control mode; the first control data is automatic control data of the endoscope in an automatic control mode;
when the endoscope is detected to be switched from the automatic control mode to the manual control mode, determining the current moment as a switching moment;
acquiring displacement information of the endoscope between the historical time and the switching time; the historical time is prior to the switching time, and the time length of the difference between the historical time and the switching time is a specified time length;
if the displacement information is matched with a preset path, acquiring second control data of the endoscope in the manual control mode; the second control data is manual control data of the endoscope in a manual control mode;
determining an actual end point position where the endoscope stays when the manual control mode ends;
if the actual end point position is not located on the preset path, calculating the relative distance between the actual end point position and a preset target position;
if the relative distance between the actual end point position and a preset target position is smaller than a distance threshold, taking the first control data and the second control data as training samples; wherein the distance threshold is used to characterize a maximum excursion limit;
and training the deep learning neural network according to the training samples to obtain a target control model.
2. The endoscope control model training method of claim 1, wherein prior to acquiring the first control data of the endoscope in the automatic control mode, the method further comprises:
constructing a three-dimensional map according to the image examination image, and identifying a preset target position based on the three-dimensional map;
planning a path from the designated starting point to the preset target position as the preset path;
controlling an endoscope to start an automatic manipulation mode so that the endoscope moves on the preset path.
3. The endoscope control model training method of claim 2, wherein the method further comprises:
when the endoscope is in the automatic control mode, controlling the endoscope to shoot to obtain a current cavity image;
calculating the matching degree between the current cavity image and a preset target image;
when the matching degree reaches a first threshold and is smaller than a second threshold, outputting prompt information for representing the deviation of the target end point position;
and when a switching instruction input by a user for the prompt information is received, switching the endoscope from the automatic control mode to a manual control mode.
4. The endoscope control model training method according to claim 3, wherein when the endoscope is in the automatic control mode and before controlling the endoscope to shoot to obtain a current cavity image, the method further comprises:
when the endoscope is in the automatic control mode, acquiring the current position of the endoscope in real time;
and when the relative distance between the current position and the preset target position is smaller than or equal to a preset distance, executing the step of controlling the endoscope to shoot to obtain a current cavity image.
5. An endoscope control model training device, comprising:
the first acquisition unit is used for acquiring first control data of the endoscope in an automatic control mode; the first control data is automatic control data of the endoscope in an automatic control mode;
the second acquisition unit is used for acquiring second control data of the endoscope in the manual control mode when the endoscope is detected to be switched from the automatic control mode to the manual control mode; the second control data is manual control data of the endoscope in a manual control mode;
the positioning unit is used for determining the actual end point position where the endoscope stays when the manual control mode ends;
the calculating unit is used for calculating the relative distance between the actual end point position and a preset target position when the actual end point position is not positioned on a preset path;
a determining unit, configured to use the first manipulation data and the second manipulation data as training samples when a relative distance between the actual end point position and a preset target position is smaller than a distance threshold; wherein the distance threshold is used to characterize a maximum excursion limit;
the training unit is used for training the deep learning neural network according to the training samples to obtain a target control model;
wherein the second acquisition unit includes:
the determining subunit is used for determining that the current moment is the switching moment when the endoscope is detected to be switched from the automatic control mode to the manual control mode;
the displacement acquisition subunit is used for acquiring displacement information of the endoscope between the historical time and the switching time; the historical time is prior to the switching time, and the time length of the difference between the historical time and the switching time is a specified time length;
and the acquisition subunit is used for acquiring second control data of the endoscope in the manual control mode when the displacement information is matched with the preset path.
6. The endoscope control model training device according to claim 5, further comprising:
the identification unit is used for constructing a three-dimensional image according to the imaging examination image before the first acquisition unit acquires the first control data of the endoscope in the automatic control mode, and identifying a preset target position based on the three-dimensional image;
the planning unit is used for planning a path from the specified starting point to the preset target position as the preset path;
and the control unit is used for controlling the endoscope to start an automatic control mode so as to enable the endoscope to move on the preset path.
7. The endoscope control model training device according to claim 6, further comprising:
the shooting unit is used for controlling the endoscope to shoot to obtain a current cavity image when the endoscope is in the automatic control mode;
the matching unit is used for calculating the matching degree between the current cavity image and a preset target image;
the prompting unit is used for outputting prompting information for representing the deviation of the target end point position when the matching degree reaches a first threshold value and is smaller than a second threshold value;
and the switching unit is used for switching the endoscope from the automatic control mode to the manual control mode when receiving a switching instruction input by a user aiming at the prompt information.
8. An electronic device comprising a memory storing executable program code and a processor coupled to the memory; the processor calls the executable program code stored in the memory for performing the endoscope control model training method of any one of claims 1 to 4.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program causes a computer to execute the endoscope control model training method according to any one of claims 1 to 4.
CN202211547911.7A 2022-12-05 2022-12-05 Endoscope control model training method and device, equipment and storage medium Active CN115553925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211547911.7A CN115553925B (en) 2022-12-05 2022-12-05 Endoscope control model training method and device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211547911.7A CN115553925B (en) 2022-12-05 2022-12-05 Endoscope control model training method and device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115553925A CN115553925A (en) 2023-01-03
CN115553925B (en) 2023-03-21

Family

ID=84770521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211547911.7A Active CN115553925B (en) 2022-12-05 2022-12-05 Endoscope control model training method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115553925B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761075A (en) * 2023-05-09 2023-09-15 深圳显融医疗科技有限公司 Image processing method and device based on endoscope, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103479320B (en) * 2013-09-25 2015-08-26 深圳先进技术研究院 Active snakelike endoscope robot system
JP2017153815A (en) * 2016-03-03 2017-09-07 拡史 瀬尾 Endoscope training apparatus, endoscope training method, and program
US11571107B2 (en) * 2019-03-25 2023-02-07 Karl Storz Imaging, Inc. Automated endoscopic device control systems
CN112164026B (en) * 2020-09-01 2022-10-25 上海交通大学 Endoscope polyp real-time detection method, system and terminal

Also Published As

Publication number Publication date
CN115553925A (en) 2023-01-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant