WO2023097944A1 - Bronchoscope position determination method and apparatus, system, device, and medium - Google Patents


Info

Publication number
WO2023097944A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
image
bronchial tree
target object
data
Prior art date
Application number
PCT/CN2022/086429
Other languages
French (fr)
Chinese (zh)
Inventor
李楠宇
陈日清
余坤璋
刘润南
徐宏
苏晨晖
Original Assignee
杭州堃博生物科技有限公司
Priority date
Filing date
Publication date
Application filed by 杭州堃博生物科技有限公司
Publication of WO2023097944A1

Classifications

    • A61B 1/00147 Instruments for visual or photographical inspection of body cavities, e.g. endoscopes; holding or positioning arrangements
    • A61B 1/2676 Bronchoscopes
    • A61B 34/20 Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G06N 3/044 Neural network architectures; recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Neural network architectures; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • A61B 2034/2065 Tracking using image or pattern recognition
    • G06T 2207/20081 Image analysis; training; learning
    • G06T 2207/20084 Image analysis; artificial neural networks [ANN]
    • G06T 2207/30061 Biomedical image processing; lung

Definitions

  • the invention relates to the field of bronchoscopes, and in particular to a method, apparatus, system, device, and medium for determining the position of a bronchoscope.
  • Bronchoscope navigation refers to providing guidance for the video images actually captured during an operation by determining the position of the bronchoscope.
  • the invention provides a method, apparatus, system, device, and medium for determining the position of a bronchoscope, so as to address the problem that existing navigation results struggle to meet clinical needs.
  • a method for determining the position of a bronchoscope comprising:
  • the intraoperative image being captured when a bronchoscope travels within the target object
  • identifying the bifurcation nodes of the virtual bronchial tree of the target object includes:
  • the node recognition neural network is trained in the following manner:
  • each training sample in the training sample set is extracted, and the virtual bronchial tree contained in the training sample is marked with a label; the label marks the actual location of each bifurcation node in the virtual bronchial tree contained in the training sample;
  • the predicted position of each bifurcation node contained in the virtual bronchial tree contained in the training sample output by the node recognition neural network is obtained;
  • the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object is obtained, including:
  • the identification information is determined by matching the bifurcation nodes identified in the virtual bronchial tree of the target object with the knowledge map of the bronchial tree.
  • the identification information is determined by matching the bifurcation nodes identified in the virtual bronchial tree of the target object with the knowledge map of the bronchial tree, including:
  • identification information of each lung segment and a bifurcation node in the virtual bronchial tree of the target object is determined.
  • the second graph convolutional network is configured to be able to calculate the probability that any lung segment in the third graph data corresponds to each bifurcation node, and the bifurcation node to which any lung segment belongs is the bifurcation node with the highest probability.
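The assignment rule described above can be illustrated with a minimal sketch: assuming the second graph convolutional network has already produced a matrix of raw scores of each lung segment against each bifurcation node (the network itself, and the score values below, are assumptions for illustration), the probabilities and the final assignment reduce to a softmax followed by an argmax:

```python
import numpy as np

def assign_segments_to_nodes(scores):
    """For each lung segment (row), convert raw scores against each
    bifurcation node (column) into probabilities and pick the most
    likely node, mirroring the highest-probability rule above."""
    # Softmax over the bifurcation-node axis, shifted for stability.
    shifted = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return probs, probs.argmax(axis=1)

# 3 lung segments scored against 2 bifurcation nodes (made-up values).
probs, owners = assign_segments_to_nodes(np.array([[2.0, 0.1],
                                                   [0.3, 1.5],
                                                   [4.0, 1.0]]))
```

In the described system the score matrix would come from the second graph convolutional network rather than being hand-written.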
  • determining the target virtual slice map matching the intraoperative image by matching the intraoperative image with the virtual slice map of the virtual bronchial tree of the target object includes:
  • the first map data to be matched includes the number and distribution of lung segment openings in the virtual slice map
  • obtaining the first graph data to be matched corresponding to any virtual slice map includes:
  • the center of each virtual opening area is used as a vertex, and the lines between the centers are used as edges, to construct the first graph data;
  • the first graph data includes the number and positions of the vertices in the virtual slice map, and the relative distances between the vertices;
  • the first graph data is mapped to the first graph data to be matched of the virtual slice map.
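As a sketch of the graph construction above, assuming the centers of the virtual opening areas are given as 2D pixel coordinates (the patent does not fix a coordinate convention), the first graph data can be built as:

```python
import numpy as np

def build_graph_data(centers):
    """Each opening-area center is a vertex, every pair of centers is
    joined by an edge, and the edge attribute is the relative distance
    between the two centers, as described in the text above."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    # Pairwise relative distances between all vertices.
    diffs = centers[:, None, :] - centers[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return {"num_vertices": n, "positions": centers,
            "edges": edges, "distances": distances}

g = build_graph_data([(10, 20), (30, 20), (20, 40)])
```

The same construction applies to the second graph data, with the actual opening areas of the intraoperative image in place of the virtual ones.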
  • acquiring the second graph data to be matched corresponding to the intraoperative image includes:
  • each actual opening area corresponds to a lung segment opening in the intraoperative image
  • the second graph data includes the number and position of the vertices in the intraoperative image, and the relative distance between the vertices;
  • the second graph data is mapped to the second graph data to be matched of the intraoperative image.
  • the first graph convolutional network is an anti-deformation convolutional network
  • the anti-deformation convolution network includes a spatial transformation layer and a convolution processing unit:
  • the spatial transformation layer is configured to: obtain the first graph data and the second graph data, perform a spatial transformation on the first graph data and/or the second graph data, and obtain the first graph data to be convolved and the second graph data to be convolved;
  • the convolution processing unit is configured to convolve the first graph data to be convolved to obtain the first graph data to be matched of the virtual slice map, and to convolve the second graph data to be convolved to obtain the second graph data to be matched of the intraoperative image.
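A minimal sketch of the spatial-transformation layer, assuming it applies a learned 2x3 affine matrix to vertex coordinates (the patent does not specify the transformation's form; an affine warp in the style of spatial transformer networks is one common choice):

```python
import numpy as np

def spatial_transform(points, theta):
    """Spatial-transformation layer sketch: apply a 2x3 affine matrix
    `theta` to 2D vertex coordinates, letting the network compensate
    for deformation between virtual and intraoperative views."""
    pts = np.asarray(points, dtype=float)
    # Homogeneous coordinates so translation is part of the matrix.
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])
    return homogeneous @ np.asarray(theta, dtype=float).T

# The identity transform leaves vertex coordinates unchanged.
identity = [[1, 0, 0], [0, 1, 0]]
out = spatial_transform([(10, 20), (30, 40)], identity)
```

In the full network `theta` would be predicted by a small sub-network and the transformed graph data would then be passed to the convolution processing unit.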
  • determining the target virtual slice map matching the intraoperative image by matching the intraoperative image with the virtual slice map of the virtual bronchial tree of the target object includes:
  • the historical matching information represents: the position and slice angle of the virtual slice map matched by the historical intraoperative image in the virtual bronchial tree of the target object;
  • the current matching range represents: the position range of the target virtual slice map in the virtual bronchial tree of the target object;
  • the target virtual slice map is determined by matching the intraoperative image with the virtual slice map corresponding to the current matching range.
  • determining the current matching range includes:
  • the target virtual slice map is determined by matching the intraoperative image with the corresponding virtual slice map within the current matching range, including:
  • by inputting the second graph data to be matched corresponding to the intraoperative image into a long short-term memory network, the spliced graph data to be matched output by the long short-term memory network is obtained; the spliced graph data to be matched refers to: the graph data obtained by splicing the second graph data to be matched corresponding to the intraoperative image with the second graph data to be matched corresponding to at least one historical intraoperative image;
  • the target virtual slice map is determined by matching the spliced graph data to be matched with the first graph data to be matched corresponding to the virtual slice maps within the current matching range.
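The splice-then-match step can be sketched as follows, with the long short-term memory network replaced by a plain concatenation of current and historical features purely to illustrate the data flow (the feature vectors and candidates below are invented for the example):

```python
import numpy as np

def splice_and_match(current, history, candidates):
    """Splice the current to-be-matched features with historical ones,
    then pick the candidate virtual slice whose feature vector is
    closest.  The patent uses an LSTM to produce the spliced data;
    this concatenation is only a stand-in for the shape of the step."""
    spliced = np.concatenate([current] + list(history))
    dists = [np.linalg.norm(spliced - np.asarray(c, dtype=float))
             for c in candidates]
    return int(np.argmin(dists))  # index of the target virtual slice

best = splice_and_match(np.array([1.0, 0.0]),
                        [np.array([0.0, 1.0])],
                        [[1.0, 0.0, 0.0, 1.0], [0.0, 5.0, 5.0, 0.0]])
```

Restricting `candidates` to the current matching range is what reduces the amount of data to be processed, as the text notes.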
  • a device for determining the position of a bronchoscope comprising:
  • the bronchial tree acquisition module is used to acquire the virtual bronchial tree of the target object
  • An identification module configured to identify a bifurcation node of the virtual bronchial tree of the target object, and obtain identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object based on the identified bifurcation node;
  • the intraoperative image acquisition module is used to acquire the intraoperative image of the target object, and the intraoperative image is captured when the bronchoscope travels in the human body;
  • An image matching module configured to determine a target virtual slice map matching the intraoperative image by matching the intraoperative image with a virtual slice map of the virtual bronchial tree of the target object;
  • An identification matching module configured to determine the identification information of the corresponding lung segments and bifurcation nodes in the virtual bronchial tree of the target object that match the target virtual slice map, where the determined identification information is used to characterize the current location of the bronchoscope within the target object.
  • an electronic device including a processor and a memory
  • the memory is used to store codes
  • the processor is configured to execute the codes in the memory to implement the methods involved in the first aspect and alternative solutions thereof.
  • a storage medium on which a computer program is stored, and when the program is executed by a processor, the method involved in the first aspect and its optional solution is implemented.
  • a bronchoscopic navigation system comprising: a bronchoscope and a data processing unit, the data processing unit is configured to implement the methods involved in the first aspect and its optional solutions.
  • in the method, apparatus, system, device, and medium for determining the position of a bronchoscope provided by the present invention, the bifurcation nodes of the virtual bronchial tree of the target object are identified, and on this basis the identification information of each lung segment and bifurcation node in the virtual bronchial tree is determined;
  • compared with schemes that directly use the virtual bronchial tree for navigation and positioning without identifying the bifurcation nodes and determining identification information, the present invention can accurately and effectively locate and display which lung segment and bifurcation the bronchoscope has reached, meeting the needs of bronchoscopic navigation. Furthermore, when the identification and determination are based on a trained model, various virtual bronchial tree configurations can be effectively taken into account.
  • the current matching range is first determined according to the historical matching information, and then the matching target virtual slice map is searched for the current matching range.
  • this scheme can effectively reduce the amount of data to be processed during matching, thereby improving processing efficiency.
  • Fig. 1 is a structural schematic diagram of a bronchoscope navigation system in an exemplary embodiment of the present invention.
  • Fig. 2 is a schematic flowchart of a method for determining the position of a bronchoscope in an exemplary embodiment of the present invention
  • Fig. 3 is a schematic flow diagram of identifying fork nodes in an exemplary embodiment of the present invention.
  • Fig. 4 is a schematic diagram of the principle of a node recognition neural network in an exemplary embodiment of the present invention.
  • Fig. 5 is a schematic flowchart of determining identification information in an exemplary embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a knowledge map in an exemplary embodiment of the present invention.
  • Fig. 7 is a schematic diagram of another knowledge graph in an exemplary embodiment of the present invention.
  • Fig. 8 is a schematic flowchart of determining identification information in another exemplary embodiment of the present invention.
  • Fig. 9 is a schematic diagram of the result after completion of partial naming of lung segments and bifurcation nodes of the virtual bronchial tree in an exemplary embodiment of the present invention.
  • Fig. 10 is a schematic flow chart of determining a target virtual slice map in an exemplary embodiment of the present invention.
  • Figure 11 is a schematic diagram of the opening of a lung segment in an exemplary embodiment of the present invention.
  • Fig. 12 is a schematic flowchart of determining the first graph data to be matched in an exemplary embodiment of the present invention.
  • Fig. 13 is a schematic diagram of a virtual opening area and an actual opening area in an exemplary embodiment of the present invention.
  • Fig. 14 is a schematic flow chart of determining the second graph data to be matched in an exemplary embodiment of the present invention.
  • Fig. 15 is a schematic diagram of the principle of determining a target virtual slice map in an exemplary embodiment of the present invention.
  • Fig. 16 is a schematic flow chart of determining a target virtual slice map in another exemplary embodiment of the present invention.
  • Fig. 17 is a schematic diagram of the program modules of the device for determining the position of the bronchoscope in an exemplary embodiment of the present invention.
  • Fig. 18 is a schematic structural diagram of an electronic device in an exemplary embodiment of the present invention.
  • an embodiment of the present invention provides a bronchoscope navigation system 100 , including: a bronchoscope 101 and a data processing unit 102 .
  • the bronchoscope 101 may include an image acquisition unit, and the bronchoscope 101 may be understood as a device or a combination of devices that can use the image acquisition unit to acquire corresponding images after entering the trachea of a human body.
  • the bronchoscope 101 may also include a curved tube (such as an active curved tube and/or a passive curved tube), and the image acquisition unit may be located at one end of the curved tube.
  • the data processing unit 102 can be understood as any device or combination of devices with data processing capabilities. In the embodiment of the present invention, the data processing unit 102 can be used to implement the position determination method described below. Furthermore, the data processing unit 102 can directly or indirectly exchange data with the image acquisition unit in the bronchoscope 101, so that the data processing unit 102 can receive intraoperative images.
  • an embodiment of the present invention provides a method for determining the position of a bronchoscope, including:
  • the virtual bronchial tree is 3D, which can also be understood as a 3D virtual model of the bronchial tree.
  • the virtual bronchial tree may be a 3D virtual model of the bronchial tree obtained by reconstructing CT data.
  • the virtual bronchial tree may also be obtained in other ways, which is not limited in this specification.
  • the target object can be understood as the human body that currently needs to be navigated in the body.
  • S202 Identify fork nodes of the virtual bronchial tree of the target object, and determine identification information of each lung segment and fork node in the virtual bronchial tree of the target object based on the identified fork nodes;
  • a bifurcation node can be understood as any node capable of describing the position of a bifurcation of the virtual bronchial tree, and for each bifurcation, a bifurcation node can be formed to represent it.
  • the coordinates of the central position of the bifurcation in the virtual bronchial tree may be used as the coordinates of the bifurcation node.
  • the process of identifying the bifurcation node can also be regarded as a process of marking the bifurcation opening in the virtual bronchial tree of the target object, and can also be regarded as a process of determining the position of the bifurcation node.
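Under the convention above, computing a bifurcation node's coordinates can be sketched as taking the centroid of the points belonging to that bifurcation in the virtual bronchial tree (the voxel coordinates below are invented for the example):

```python
import numpy as np

def bifurcation_node_coordinate(bifurcation_voxels):
    """Sketch of the convention above: use the center of the voxels
    belonging to one bifurcation as the coordinates of its node."""
    return np.asarray(bifurcation_voxels, dtype=float).mean(axis=0)

node = bifurcation_node_coordinate([(10, 10, 4), (12, 10, 4), (11, 13, 4)])
```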
  • the identification information can be understood as any information capable of identifying the bifurcation nodes and lung segments, so that different lung segments and different bifurcation nodes can be identified differently.
  • the identification information includes the name of the lung segment and the identification of the bifurcation node.
  • the intraoperative image is captured while the bronchoscope is traveling within the corresponding target object;
  • S204 Determine a target virtual slice map matching the intraoperative image by matching the intraoperative image with a virtual slice map of the virtual bronchial tree of the target object;
  • the virtual slice map of a position in the virtual bronchial tree can be understood as: a map formed by slicing the position of the virtual bronchial tree.
  • the virtual slice diagram may include a slice diagram and/or a cross-section diagram and the like.
  • S205 Determine the identification information of corresponding lung segments and bifurcation nodes in the virtual bronchial tree of the target object that match the target virtual slice map;
  • the determined identification information is used to characterize the current position of the bronchoscope in the target object.
  • the current position matches the position of the target virtual slice map in the virtual bronchial tree of the target object. That is, the position of the target virtual slice map in the virtual bronchial tree of the target object may reflect the current position of the bronchoscope in the target object.
  • the complete virtual bronchial tree can be displayed, with the identification information and the current position of the bronchoscope in the target object shown in the displayed virtual bronchial tree;
  • alternatively, a local virtual bronchial tree near the current position of the bronchoscope, together with the corresponding identification information, may be displayed.
  • a first interface can also be used to display a complete virtual bronchial tree, and another second interface can be used to display a partial virtual bronchial tree.
  • identification information can also be displayed in the two interfaces. The current position of the bronchoscope in the target object is displayed in the first interface;
  • the above solution can provide richer information for the navigation of the bronchoscope.
  • for the virtual bronchial tree of the target object, its bifurcation nodes are identified, and on this basis the identification information of each lung segment and bifurcation node in the virtual bronchial tree is determined.
  • in contrast, schemes that directly use the virtual bronchial tree for navigation and positioning do not identify the bifurcation nodes or determine the identification information of bifurcation nodes and lung segments.
  • the present invention can therefore accurately and effectively locate and display which lung segment and bifurcation the bronchoscope has reached, meeting the needs of bronchoscopic navigation.
  • the process of identifying the bifurcation nodes of the virtual bronchial tree of the target object may include:
  • S301 Input the obtained virtual bronchial tree of the target object into a pre-trained node recognition neural network, and obtain the positions of each branch node contained in the virtual bronchial tree of the target object output by the node recognition neural network.
  • Step S301 can be understood as an implementation of the process of identifying the bifurcation nodes of the virtual bronchial tree of the target object in step S202 shown in Fig. 2; content already described there will not be repeated here.
  • the node recognition neural network can be any neural network capable of identifying bifurcation nodes in the input virtual bronchial tree; for example, it can be a convolutional neural network, and in other examples it can also be realized using a perceptron neural network, a recurrent neural network, etc.
  • Taking the convolutional neural network as an example, during training the weight values of each layer in the convolutional neural network can be updated based on forward propagation and backpropagation algorithms; then, given sufficient training samples, the recognition accuracy of the node recognition neural network can be effectively guaranteed.
  • the node recognition neural network is trained in the following way:
  • each training sample in the training sample set is extracted, and the virtual bronchial tree contained in the training sample is marked with a label; the label marks the actual location of each bifurcation node in the virtual bronchial tree contained in the training sample;
  • the predicted position of each bifurcation node contained in the virtual bronchial tree contained in the training sample output by the node recognition neural network is obtained;
  • the function value of the cost function used in the node identification neural network training matches the difference information
  • the difference information represents: for the virtual bronchial tree contained in the training sample, the error between the actual locations of the bifurcation nodes marked in the label and the locations of the bifurcation nodes predicted by the node recognition neural network.
  • the sum of the variances of the errors of the positions of all branch nodes of the virtual bronchial tree can be used as the function value of the cost function of the node recognition neural network.
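The "sum of the variances of the errors" admits more than one reading; one plausible sketch computes, per coordinate axis, the variance of the prediction error across all bifurcation nodes and sums over the axes (an interpretation, not a formula the text spells out):

```python
import numpy as np

def node_position_cost(predicted, actual):
    """One reading of the cost above: per-axis variance of the node
    position errors across all bifurcation nodes, summed over axes."""
    errors = np.asarray(predicted, float) - np.asarray(actual, float)
    return float(errors.var(axis=0).sum())

# Two nodes, 3D coordinates: one predicted exactly, one off by (2,2,2).
cost = node_position_cost([(0, 0, 0), (2, 2, 2)], [(0, 0, 0), (0, 0, 0)])
```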
  • the virtual bronchial tree of a normal person generally has 18 lung segments and 17 bifurcations, but because each patient has individual anatomical particularities, many variations are still encountered during examination.
  • the training samples therefore cover various configurations of the bronchial tree, so that the node recognition results can effectively account for this variation, including the typical case of 17 bifurcations and 18 lung segments as well as its variants; in this way, the trained node recognition neural network can recognize the bifurcation nodes and lung segments contained in various virtual bronchial trees, improving the accuracy of its output.
  • the following uses a 3D convolutional neural network as an example of a node recognition neural network to describe a process of training and establishing the 3D convolutional neural network:
  • Step a: construct a deep learning data set; the data in the data set are 3D bronchial trees (i.e., the virtual bronchial trees included in the training samples) reconstructed and rendered from collected patient CT data. Labels are formed by marking the bifurcation node positions of each virtual bronchial tree; each virtual bronchial tree together with its label then constitutes a training sample, and the set of training samples forms the data set.
  • Step b Take 50% of the data set as the training set, 10% as the validation set, and 40% as the test set.
  • data sets can also be allocated using other ratios.
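The 50%/10%/40% train/validation/test split can be sketched as follows (the ratios are from the text; the shuffle and fixed seed are assumptions):

```python
import random

def split_dataset(samples, seed=0):
    """Split a sample list into 50% training, 10% validation and 40%
    test subsets, shuffling first for an unbiased partition."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train, n_val = int(0.5 * n), int(0.1 * n)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

train, val, test = split_dataset(range(100))
```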
  • Step c establish a 3D convolutional neural network, and use the Xavier initialization method to initialize the weight parameters of the 3D convolutional neural network.
  • After step c, for the input data X formed by each training sample, the subsequent steps d to f may be performed cyclically.
  • Step d: perform maximum-minimum normalization on the input data X; the formula is: X' = (X - min(X)) / (max(X) - min(X)).
  • the input data X can be understood as follows: after the virtual bronchial tree is converted into a three-dimensional matrix, each point of the virtual bronchial tree is an element of the matrix, and the value of each element (representing, for example, color, grayscale, or pixel value) constitutes the input data X; correspondingly, X' is the normalized data.
  • min(X) can be understood as the smallest value among all points of the three-dimensional matrix of the virtual bronchial tree
  • max(X) can be understood as the largest value among all points of the three-dimensional matrix of the virtual bronchial tree.
  • the data can be mapped to the range of 0 to 1 for processing, which facilitates the speed and convenience of subsequent processing.
  • the above step d can be carried out by the 3D convolutional neural network itself after the input data X is fed in, or the normalization can be performed before each input X enters the 3D convolutional neural network, with the normalized data then fed into the network.
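Step d's maximum-minimum normalization, X' = (X - min(X)) / (max(X) - min(X)), can be sketched directly:

```python
import numpy as np

def min_max_normalize(volume):
    """Map every element of the matrix X into [0, 1] via
    X' = (X - min(X)) / (max(X) - min(X)), as in step d above."""
    volume = np.asarray(volume, dtype=float)
    lo, hi = volume.min(), volume.max()
    return (volume - lo) / (hi - lo)

x_norm = min_max_normalize([[0, 5], [10, 20]])
```

Mapping the data into the 0-to-1 range in this way is what the text credits with speeding up and simplifying subsequent processing.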
  • Step e: after the normalized data is input into the 3D convolutional neural network and the network predicts the bifurcation nodes, the difference between the bifurcation node positions predicted by the network and the positions marked by the label (that is, the position error information) can be calculated through the forward propagation algorithm, so as to compute the value of the loss function, where the loss function can also be described as a cost function.
  • that is, the forward propagation process uses the cost function to calculate the error between the positions (such as coordinates) of the bifurcation nodes marked by the label and the predicted positions (such as coordinates) of the bifurcation nodes; the difference between the corresponding positions can be used.
  • Step f apply the function value of the cost function calculated in the forward propagation process to the error back propagation algorithm, thereby optimizing the weight parameters of the 3D convolutional neural network.
  • Steps d to f above are repeated until the set number of training rounds (for example, 200 rounds) is completed. Each round of training is validated on the validation set; after the 200 rounds, the 3D convolutional neural network with the best validation result is evaluated on the test set, and the trained 3D convolutional neural network is used as the node recognition neural network.
  • the ultimate goal is to obtain the gradient value of the overall loss value (that is, the function value of the cost function) relative to the parameters w and b.
  • to obtain the gradient values of these two parameters, the gradient of the loss with respect to each neuron's input must first be computed from the gradient with respect to the neuron's output after the activation function. Let δ_j^L = ∂C/∂z_j^L denote the gradient of the overall loss value C with respect to the input z_j^L of the j-th neuron in the last layer L of the network; since the output a_j^L = σ(z_j^L) is the value obtained after activation, the chain rule gives δ_j^L = (∂C/∂a_j^L) · σ'(z_j^L).
  • after the backpropagation process is completed, the gradient descent algorithm is used to update the parameters w and b according to the computed gradient values, that is: w^l ← w^l - (η/n) Σ_x δ^{x,l} (a^{x,l-1})^T and b^l ← b^l - (η/n) Σ_x δ^{x,l}, where η is the learning rate, which can be a manually set value.
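The gradient-descent update above can be sketched for a single layer, assuming the per-sample gradients δ and previous-layer activations are already available from backpropagation (shapes and values below are invented for the example):

```python
import numpy as np

def sgd_update(w, b, delta, a_prev, eta, n):
    """One gradient-descent step matching the update rules above:
    w^l <- w^l - (eta/n) * sum_x delta^{x,l} (a^{x,l-1})^T
    b^l <- b^l - (eta/n) * sum_x delta^{x,l}"""
    grad_w = sum(np.outer(d, a) for d, a in zip(delta, a_prev))
    grad_b = sum(delta)
    return w - (eta / n) * grad_w, b - (eta / n) * grad_b

w = np.zeros((2, 2)); b = np.zeros(2)
# Single-sample batch: delta = [1, 0], previous activation = [2, 0].
w2, b2 = sgd_update(w, b, [np.array([1.0, 0.0])], [np.array([2.0, 0.0])],
                    eta=0.5, n=1)
```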
  • the trained 3D convolutional neural network (which can also be understood as a virtual bronchial tree node segmentation model) is thus obtained, and it can be used to identify the bifurcation nodes of various virtual bronchial trees.
  • the process of determining the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object may include:
  • S501 Determine the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object with the knowledge map of the bronchial tree.
  • Step S501 can be understood as an implementation of the process, in step S202 shown in Fig. 2, of determining the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object based on the identified bifurcation nodes; content already described in the embodiment shown in Fig. 2 will not be repeated here.
  • the knowledge graph of the bronchial tree can be any information carrier that can characterize the connection relationship between the intersection (or intersection node) and the lung segment to a certain extent.
  • the number of knowledge graphs of the bronchial tree can be one or more; this specification does not limit this.
  • FIG. 6 shows an example of a knowledge graph
  • FIG. 7 also shows an example of a knowledge graph.
  • the actual bronchial situation and the objective knowledge of the bronchial tree can be effectively combined to accurately match the identification information of each lung segment and intersection in the virtual bronchial tree.
  • starting directly from the main airway, each bifurcation node of the virtual bronchial tree is traversed and matched against the name and identifier of each bifurcation and lung segment in the knowledge graph to obtain the identification information;
  • the correspondence between bifurcation nodes and lung segments can be further confirmed based on a neural network (such as the second graph convolutional network). For example, if the positioning and recognition results of the bifurcation nodes deviate, errors and omissions may occur in the correspondence between bifurcation nodes and lung segments; confirmation and adjustment ensure that the correspondence between each bifurcation node and lung segment can be corrected, so that the identification information is matched more accurately.
  • the process of determining the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object with the knowledge map of the bronchial tree may include:
  • the third graph data is a matrix that can characterize the connection relationship between vertices (that is, the identified bifurcation nodes); it can be represented as G(V, E), where in the third graph data V can be understood as a bifurcation or bifurcation node, and E can be understood as a lung segment;
  • S802 Input the third graph data into the second graph convolutional network, so as to use the second graph convolutional network to determine the correspondence between bifurcation nodes and lung segments in the virtual bronchial tree of the target object;
  • the second graph convolutional network can be understood as a neural network capable of processing the third graph data; specifically, it can be configured to calculate the probability that any lung segment corresponds to each bifurcation node, and to select the bifurcation node with the highest probability as the bifurcation node to which that lung segment belongs.
  • S803 According to the corresponding relationship and the knowledge map, determine the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object.
  • steps S801 to S803 can be, for example:
  • Step a through the identification of the bifurcation nodes (characterizing the bifurcation openings) (that is, node segmentation, such as the identification in step S301), and since each bifurcation opening of the bronchial tree lies between adjacent lung segments, the third graph data can be constructed as: G(V, E), where V is a bifurcation and E is a lung segment.
  • This step a is an implementation of step S801.
  • Step b Since the bifurcations have been marked (that is, the bifurcation nodes are identified), the next thing to resolve is the lung segments.
  • the second graph convolutional network can be used to perform edge classification, that is, to classify edges to their corresponding vertices, thereby determining the correspondence between edges and vertices.
  • the input of the second graph convolutional network is the third graph data G(V, E), and the output is the correspondence between V and E; the graph convolutional network can play a smoothing role and better establish the relationship between nodes and edges. This step b is an implementation of step S802.
  • the processing method of a commonly used graph convolutional network can also be applied here to determine the correspondence between edges and vertices, that is, a commonly used graph convolutional network serves as the second graph convolutional network; correspondingly, when such a network is used, the correspondence between edges and vertices is determined by classifying vertices to edges.
  • the graph data of the virtual bronchial tree contained in a training sample can be marked with a label, which marks the actual probability that any vertex (bifurcation node) belongs to (that is, corresponds to) each edge (lung segment); after training, the commonly used graph convolutional network can output the probability that any vertex of the third graph data is classified to (that is, corresponds to) any edge.
  • the second graph convolutional network can classify edges to corresponding vertices, thereby determining the corresponding relationship between edges and vertices.
  • the graph data of the virtual bronchial tree contained in the training samples can be marked with labels, which mark the actual probability that any edge (lung segment) belongs to (that is, corresponds to) each vertex (bifurcation node); the trained second graph convolutional network can then output the probability that any edge of the third graph data is classified to (that is, corresponds to) any vertex.
  • the processing in the second graph convolutional network can be as shown in the following steps c to e:
  • Step c first solve the adjacency matrix A according to G(V, E); in a standard form, A_ij = 1 if vertices v_i and v_j are connected by an edge, and A_ij = 0 otherwise.
  • Step d Define the input of the lth layer of the second graph convolutional network as X^l and the output as X^{l+1}; a standard graph-convolution relationship consistent with the definitions below is: X^{l+1} = σ(D^{-1/2} A D^{-1/2} X^l W^l + b^l), where:
  • σ(·) is the nonlinear transformation layer
  • A is the adjacency matrix
  • D is the degree matrix of each edge
  • W^l, b^l are the weights of the lth layer of the second graph convolutional network.
  • Step e Define the output of the second graph convolution and classify the edges; the formula used is, in a standard form: Z = softmax(X W_z + b_z), where:
  • Z is the probability that each edge corresponds to each vertex (that is, the probability that any lung segment corresponds to each bifurcation node); then, for each edge, the vertex with the highest probability can be selected as its corresponding vertex (that is, the bifurcation node with the highest probability is selected as the bifurcation node to which that lung segment belongs). softmax is the activation function of the output layer, and W_z and b_z are the weights of the output layer, which can be updated using the backpropagation algorithm.
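The graph-convolution update and the softmax edge classification in steps c to e can be sketched in numpy as follows (all shapes, weights and the ReLU choice are assumptions; A is assumed to include self-loops so no degree is zero):

```python
import numpy as np

def gcn_layer(A, X, W, b):
    """X_{l+1} = relu(D^{-1/2} A D^{-1/2} X_l W_l + b_l)."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A @ d_inv_sqrt @ X @ W + b)

def classify_edges(X_edges, Wz, bz):
    """Z = softmax(X W_z + b_z): each row gives one edge's (lung segment's)
    probability over vertices (bifurcation nodes); argmax picks the vertex."""
    logits = X_edges @ Wz + bz
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    Z = e / e.sum(axis=1, keepdims=True)
    return Z, Z.argmax(axis=1)
```

Selecting the argmax per row mirrors the text: the bifurcation node with the highest probability becomes the one to which the lung segment belongs.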
  • steps S801 and S802 can be realized.
  • step S803 may be, for example, shown in step f below.
  • Step f According to the knowledge graph summarized by doctors, traverse each bifurcation in the knowledge graph one by one, and match the identification information of the lung segments and bifurcations in the knowledge graph to the bifurcations (that is, the bifurcation nodes) and lung segments of the virtual bronchial tree, so as to complete the names of the edges and nodes of the virtual bronchial tree of the target object (that is, the identification information: the names of the lung segments and the identifiers of the bifurcations/bifurcation nodes). The results can be seen in FIG. 9, which shows the result of local name completion of lung segments and bifurcation nodes of the virtual bronchial tree.
  • the specific matching naming process can be, for example:
  • the first one is the first bifurcation, which separates the left and right main bronchi;
  • each edge (that is, each lung segment of the virtual bronchial tree)
  • each vertex (that is, each bifurcation or bifurcation node of the virtual bronchial tree)
  • each edge and node in the virtual bronchial tree (that is, each lung segment, bifurcation or bifurcation node of the virtual bronchial tree)
  • the order of traversal is not limited to the above example.
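As an illustration of the traversal-and-naming idea in step f, a toy sketch follows (the data structures and the assumption that both trees branch in matching order are hypothetical; the embodiment does not prescribe a representation):

```python
from collections import deque

def complete_names(tree_children, kg_children, kg_names, tree_root, kg_root):
    """Breadth-first co-traversal starting from the main airway: walk the
    virtual bronchial tree and the knowledge graph in step, copying the
    knowledge graph's identifiers onto the tree's bifurcation nodes."""
    names = {}
    queue = deque([(tree_root, kg_root)])
    while queue:
        t, k = queue.popleft()
        names[t] = kg_names[k]  # copy the bifurcation/lung-segment identifier
        for tc, kc in zip(tree_children.get(t, ()), kg_children.get(k, ())):
            queue.append((tc, kc))
    return names
```

Deviating branching orders would need the confirmation/adjustment step described earlier; this sketch only shows the happy path.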
  • the features of the images can be extracted in advance to form a representation more suitable for matching.
  • the embodiment of the present invention does not exclude a scheme that matches the images directly.
  • the image data to be matched will be formed based on the virtual slice image and the intraoperative image, which may include the following steps:
  • the execution process of step S1001 is the same as that of step S201 in the embodiment shown in FIG. 2, and the execution process of step S1002 is the same as that of step S203 in the embodiment shown in FIG. 2; these will not be repeated here.
  • after step S1001, the method may also include:
  • after step S1002, the method may also include:
  • S1004 Acquire second image data to be matched corresponding to the intraoperative image.
  • the number and distribution of lung segment openings in the corresponding image can be characterized by using the image data to be matched.
  • step S1005 may then be executed: compare the first to-be-matched graph data with the extracted second to-be-matched graph data, and determine the target virtual slice map according to the comparison result;
  • the virtual slice map whose corresponding first to-be-matched graph data matches can be determined as the target virtual slice map.
  • the execution process of the above step S1005 is the same as that of step S204 in the embodiment shown in FIG. 2, and will not be repeated here.
  • the opening of the lung segment can be understood as the entrance of the lung segment, which can be displayed as a closed-loop opening in the two-dimensional virtual slice or intraoperative image.
  • the opening of the lung segment can be understood as the entrance of the second lung segment and the entrance of the third lung segment.
  • the lung segment openings can be, for example, the lung segment opening 1101 , the lung segment opening 1102 and the lung segment opening 1103 shown in FIG. 11 .
  • the first to-be-matched map data of the virtual slice map can at least represent the number and distribution of virtual lung segment openings in the virtual slice map
  • the second to-be-matched graph data of the intraoperative image can at least represent the number and distribution of the real lung segment openings in the intraoperative image.
  • the distribution mode therein refers to, for example, the relative position and distance between each two lung segment openings; the distribution mode may also refer to, for example, the positions of the centers of the lung segment openings in the virtual slice map.
  • the above method of matching the first to-be-matched graph data with the second to-be-matched graph data can prevent information in the intraoperative image and the virtual slice map that is irrelevant to the lung segment openings (such as color in the image, tracheal-wall texture unrelated to the openings, etc.) from interfering with the matching results.
  • Both the first to-be-matched graph data and the second to-be-matched graph data can be represented as matrices; of course, they can also be represented in other ways without departing from the scope of the embodiments of the present invention.
  • the process of forming the first image data to be matched of the virtual slice image may include:
  • Each virtual opening area corresponds to a lung segment opening in the virtual slice map
  • the first graph data is a matrix capable of characterizing the number and position characteristics of vertices in any virtual slice graph, and the relative distance between vertices;
  • step S1201 to step S1203 can be regarded as an implementation of step S1003 in the embodiment shown in FIG. 10 , and the content already described in the embodiment shown in FIG. 10 will not be repeated here.
  • the virtual opening area (and the actual opening area hereinafter) can be understood with reference to the closed-loop graph shown on the right side of FIG. 13 .
  • the identification of the virtual opening area (which can also be understood as delimitation and segmentation) can be performed using an opening identification model.
  • the determination of the virtual opening area and the actual opening area can also be achieved by extracting and screening lines in the image, and extracting closed lines, without resorting to an opening recognition model, which is not limited in this specification.
  • step S1201 may include: using an opening identification model to identify an opening area in the virtual slice map, so as to determine the virtual opening area.
  • the opening recognition model can be understood as any model that can segment and delineate the corresponding lung segment openings. Since the openings are the salient content in the image, the opening recognition model can also be understood as a model that segments the salient regions in the image, which can also be described as a neural network for saliency detection. Each bifurcation contains one or more lung segment openings, and each lung segment opening corresponds to a lung segment; the texture, size and shape information of the lung segment openings, individually and relative to one another, are unique.
  • the neural network for saliency detection can be a convolutional neural network, which can be used for saliency segmentation (that is, identification of the opening regions) of both virtual slice maps and intraoperative images. In some examples, two neural networks can also be used to perform saliency segmentation (that is, opening recognition) on the virtual slice map and the intraoperative image respectively.
  • an example of the establishment and training process of this convolutional neural network (that is, the opening recognition model) is given below:
  • Step a use image-labeling software with a graphical interface (such as the labelme software) to mark the salient areas of the virtual slice maps and the intraoperative images (which can also be understood as marking the lung segment openings in the pictures); the labeling result of each marked area can be used as a label. Each virtual slice map and its corresponding label can then be used as a sample, and likewise each intraoperative image and its corresponding label; a set of such samples constitutes the data set, with 50% of the data set used as the training set, 10% as the validation set, and 40% as the test set.
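The 50/10/40 split described above can be sketched as follows (the function name is hypothetical, and the sample list is assumed to be shuffled beforehand):

```python
def split_dataset(samples):
    """Split samples into 50% training, 10% validation, 40% test."""
    n = len(samples)
    n_train, n_val = int(n * 0.5), int(n * 0.1)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```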
  • Step b normalize the samples in the data set so that the size of the images in the samples is unified to 500×500; that is, each image has the same size;
  • Step c establish a convolutional neural network, and use the Xavier initialization method to initialize its weight parameters;
  • Step d Set the output matrix size of the convolutional neural network to 500×500×26; allowing for anatomical variation, the human body can have up to 25 categories of lung segments, giving 26 categories in total including the background.
  • the samples can be input to the convolutional neural network one by one, and the convolutional neural network can implement the following steps e and f.
  • Step e The difference between the output matrix predicted by the convolutional neural network and the label is the function value of the loss function;
  • Step f apply the function value of the loss function to the error backpropagation algorithm to optimize the weight parameters of the convolutional neural network.
  • Steps e and f are repeated until the set number of training rounds (for example, 1000 rounds) is completed. Each round of training is verified on the validation set. After 1000 rounds of training, the convolutional neural network with the best validation results is tested on the test set, and the trained convolutional neural network is used as the opening recognition model.
  • the process of obtaining the data of the second image to be matched corresponding to the intraoperative image may include:
  • Each actual opening area corresponds to a lung segment opening in the intraoperative image
  • the second image data is a matrix capable of characterizing the number of vertices in the intraoperative image, their location features, and the relative distance between vertices; it can be understood with reference to the content of the first image data;
  • step S1301 may include: using an opening identification model to identify an opening area in the intraoperative image, so as to determine the actual opening area.
  • Steps S1401 to S1403 are similar to the embodiment shown in FIG. 12 , and will not be repeated here.
  • the opening areas mentioned therein can also be closed elliptical, circular or olive-shaped curves as shown in FIG. 15. The middle column shows a virtual slice map, where the area inside each closed curve is a virtual opening area; its center serves as a vertex when constructing the first graph data, and the lines connecting the centers serve as the edges. The column on the right shows the intraoperative image, where the area inside each closed curve is an actual opening area; its center serves as a vertex when constructing the second graph data, and the lines connecting the centers serve as the edges.
  • the first graph convolutional network can adopt a graph convolutional network without deformation resistance (for example, one without the spatial transformation layer mentioned later); the first graph convolutional network then only performs convolution on the first graph data and the second graph data.
  • the first graph convolutional network used above can be an anti-deformation convolutional network
  • the features of the graph data can be extracted so as to better reflect the characteristics of the first graph data and the second graph data and make the matching result more accurate.
  • the anti-deformation convolutional network may include:
  • the space transformation layer is configured to: obtain the first graph data and the second graph data, transform the first graph data and/or the second graph data, and obtain first to-be-convolved graph data corresponding to the first graph data and second to-be-convolved graph data corresponding to the second graph data;
  • the first image data to be convoluted is the image data after the first image data transformation, and the second image data to be convoluted is the second image data;
  • the second graph data to be convoluted is the graph data after the transformation of the second graph data, and the first graph data to be convoluted is the first graph data;
  • the first to-be-convolved graph data is the graph data transformed from the first graph data, and the second to-be-convolved graph data is the graph data transformed from the second graph data.
  • the transformation includes: alignment transformation
  • the alignment transformation refers to: when the intraoperative image matches any virtual slice map, transforming the vertex positions represented by the first graph data and the vertex positions represented by the second graph data so that they become consistent or similar;
  • the effects of the transformation may include: rotating or translating the triangles formed by the vertices, rotating or translating the line segments formed by the vertices, and so on.
  • in other embodiments, the spatial transformation layer can also transform the first graph data and/or the second graph data in other ways; since this does not affect the final matching result between the intraoperative image and the virtual slice map, no matter how the transformation is performed, it does not depart from the scope of the embodiments of the present invention.
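The rotation/translation effects just described can be illustrated with a toy 2-D rigid transform on vertex positions (hypothetical; the actual spatial transformation layer learns its transformation rather than applying a fixed one):

```python
import numpy as np

def align_vertices(points, theta, t):
    """Rotate 2-D vertex positions (N x 2) by angle theta and translate by t."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + t
```

Applying such a transform to the opening-center vertices of one graph can bring them into rough agreement with the other graph's vertices before matching.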
  • the convolution processing unit is configured to convolve the first image data to be convolved to obtain the first image data to be matched of any virtual slice image, and perform convolution on the second image data to be convolved Convolve to obtain the second to-be-matched image data of the intraoperative image.
  • the convolution processing unit includes:
  • the embedding layer is used to convert the data representing the relative distances between vertices in the first to-be-convolved graph data into a fixed vector to obtain third to-be-convolved graph data, and to convert the data representing the relative distances between vertices in the second to-be-convolved graph data into the fixed vector to obtain fourth to-be-convolved graph data;
  • a graph convolution layer configured to convolve the third graph data to be convolved, obtain the first graph data to be matched of any virtual slice graph, and convolve the fourth graph data to be convolved , to obtain the second image data to be matched of the intraoperative image.
  • in other embodiments, the anti-deformation convolutional network may include only a space transformation layer and a graph convolution layer without an embedding layer, so that the first to-be-convolved graph data and the second to-be-convolved graph data can be convolved directly.
  • the alignment transformation between graph data can be realized, and the graph data to be matched obtained on this basis can accurately and effectively realize the matching of images.
  • the process that needs to be completed may include, for example:
  • Step a Generate a 500×500 0-1 matrix based on the 500×500×26 output matrix of the convolutional neural network for saliency detection (that is, the opening recognition model), where 0 represents the background (that is, areas outside the virtual opening areas and the actual opening areas) and 1 represents the salient areas (that is, the areas inside the virtual opening areas and the actual opening areas).
  • Step b After converting the virtual slice map and the intraoperative image into matrices, multiply each by its corresponding 0-1 matrix to obtain the first matrix and the second matrix respectively.
  • Step c For the first matrix and the second matrix, take each lung segment opening as a connected domain and calculate its center position; take the center point of each lung segment opening as a vertex and the lines connecting the center points as edges, and construct the graph data (that is, the first graph data and the second graph data).
  • performing the above steps a, b and c on the virtual slice map is an implementation of step S1202, and performing them on the intraoperative image is an implementation of step S1402.
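Steps a to c can be sketched in numpy as follows (a simplified illustration; the connected-domain labelling itself is assumed to be done by a separate labelling routine, and all names are hypothetical):

```python
import numpy as np

def build_graph_data(image, mask01, labels):
    """Mask the image with the 0-1 saliency matrix, take each connected
    domain's centre as a vertex, and use pairwise centre distances as edge
    features. `labels` assigns each opening pixel an integer domain id
    (0 = background)."""
    masked = image * mask01  # step b: the first/second matrix (illustrative)
    centers = []
    for lab in range(1, labels.max() + 1):  # step c: one vertex per opening
        ys, xs = np.nonzero(labels == lab)
        centers.append((ys.mean(), xs.mean()))
    centers = np.array(centers)
    # edges: relative distances between every pair of centre points
    diff = centers[:, None, :] - centers[None, :, :]
    distances = np.sqrt((diff ** 2).sum(axis=-1))
    return centers, distances
```

The `centers` array plays the role of the vertex positions and `distances` the role of the edge (relative-distance) information in the first/second graph data.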
  • the anti-deformation graph convolutional network contains three layers in total, where the first layer is a spatial transformation layer, the second layer is an embedding layer, and the third layer is a graph convolutional layer.
  • the first layer, the spatial transformation layer, can be used to perform spatial transformation on the first graph data and/or the second graph data; its effect can be reflected in a six-degree-of-freedom transformation of the figures and line segments constructed from the center points of the lung segment openings in the corresponding image (such as the intraoperative image or the virtual slice map).
  • the role of the second layer, the embedding layer, can be understood as follows: because the lung segments differ in size, they are unified into a fixed vector through the embedding layer, which benefits the calculation of the following graph convolution layer;
  • the function of the third layer, the graph convolution layer, is as follows:
  • define the input of the lth layer of the graph convolution as X^l and the output as X^{l+1}; a standard relationship consistent with the definitions below is: X^{l+1} = σ(D^{-1/2} A D^{-1/2} X^l W^l + b^l), where:
  • σ(·) is the nonlinear transformation layer
  • A is the adjacency matrix
  • D is the degree matrix of each edge
  • W^l, b^l are the graph convolution weights.
  • the alignment and feature smoothing of the to-be-matched map data of the virtual slice map and the to-be-matched map data of the intraoperative image are realized.
  • the to-be-matched graph data (the first to-be-matched graph data and the second to-be-matched graph data) are matched, and through the matching between graph data, the matching between the virtual slice map and the intraoperative image is realized.
  • the graph data of the virtual slice maps (such as the first graph data and the to-be-matched graph data) all correspond to a specific position and slice angle of the virtual bronchial tree, so through graph-data matching, the target virtual slice map matching the intraoperative image of the target object can be found, the current position of the bronchoscope in the human body can be judged based on the position of the matched target virtual slice map in the virtual bronchial tree, and navigation can be realized.
  • the transformed first image data and the second image data can be directly used as the image data to be matched, so as to perform matching.
  • graph data mentioned above can be represented by a matrix.
  • the historical matching information represents: the position and slice angle of the virtual slice map matched by the historical intraoperative image in the virtual bronchial tree of the target object;
  • S1602 Determine the current matching range according to the historical matching information
  • the current matching range represents: the position range of the target virtual slice map in the virtual bronchial tree
  • S1603 Determine the target virtual slice map by matching the intraoperative image with a corresponding virtual slice map within the current matching range.
  • Step S1601 to step S1603 can be understood as an implementation of step S204 shown in FIG. 2 , and the content already described in the embodiment shown in FIG. 2 will not be repeated here.
  • vision-based bronchoscopic navigation follows a spatio-temporal logic; for example, if the bronchoscope is currently at the 15th bifurcation, it can only proceed to the 16th bifurcation or return to the 13th bifurcation. Adding inductive reasoning over multiple intraoperative images can narrow the matching range of the virtual slice maps, so that the intraoperative image need not be matched against all virtual slice maps of the target object's virtual bronchial tree; this effectively reduces the amount of matching data, speeds up matching, and eliminates some illogical solutions.
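The spatio-temporal constraint described above can be sketched as follows (a toy illustration with a hypothetical adjacency table; the embodiment derives the range from the LSTM rather than a fixed lookup):

```python
def current_matching_range(adjacency, last_fork):
    """Candidate bifurcations = the last matched fork plus its neighbours;
    e.g. from fork 15 only forks 13 and 16 are reachable."""
    return sorted(adjacency.get(last_fork, set()) | {last_fork})
```

Restricting the candidate set this way means the intraoperative image is only compared against virtual slice maps near the last matched position.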
  • step S1602 may include:
  • the six-degree-of-freedom information of the intraoperative image (representing its position and slice angle in the virtual bronchial tree) and the corresponding shooting time can be vectorized; for example, these items of information can be concatenated one by one to form a current vector.
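The vectorization just described amounts to simple concatenation, for example (the field order is an assumption for illustration):

```python
def make_current_vector(pose_6dof, shoot_time):
    """Juxtapose the 6-DoF information (position and slice angle in the
    virtual bronchial tree) and the shooting time into one current vector."""
    return list(pose_6dof) + [shoot_time]
```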
  • during training, the long short-term memory network can use various training intraoperative images and training virtual slice maps as material, and gradually acquires the ability to output a matching range as its weights are updated.
  • the matching in step S1602 may also be realized based on the to-be-matched graph data determined in steps S1003 and S1004 of the embodiment shown in FIG. 10; for example, step S1603 may include:
  • inputting the second to-be-matched graph data corresponding to the intraoperative image into the long short-term memory network to obtain the spliced to-be-matched graph data output by the long short-term memory network; wherein the spliced to-be-matched graph data refers to: the to-be-matched graph data obtained by splicing the second to-be-matched graph data corresponding to the intraoperative image with the second to-be-matched graph data corresponding to at least one historical intraoperative image;
  • the target virtual slice map is determined by matching the spliced to-be-matched graph data with the first to-be-matched graph data of the virtual slice maps within the current matching range.
  • the above scheme can effectively improve the accuracy of matching compared with only using intraoperative images for matching.
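The splicing of current and historical to-be-matched graph data can be sketched as follows (the array shapes, stacking axis and history length are assumptions; the embodiment leaves the splicing format open):

```python
import numpy as np

def splice_to_be_matched(current, history, max_history=3):
    """Concatenate the current frame's to-be-matched graph data with the most
    recent historical frames' data along a new leading axis."""
    frames = history[-max_history:] + [current]
    return np.stack(frames, axis=0)
```

The spliced result carries the recent history alongside the current frame, which is what lets the matching exploit the spatio-temporal logic.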
  • in other embodiments, splicing may not be performed; instead, the target virtual slice map is determined by matching the second to-be-matched graph data of the intraoperative image with the first to-be-matched graph data of the virtual slice maps within the current matching range.
  • the embodiment of the present invention also provides a device 1700 for determining the position of a bronchoscope, including:
  • a bronchial tree acquiring module 1701 configured to acquire the virtual bronchial tree of the target object
  • the identification module 1702 is configured to identify the bifurcation node of the virtual bronchial tree of the target object, and obtain the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object based on the identified bifurcation node;
  • An intraoperative image acquisition module 1703 configured to acquire an intraoperative image of the target object, where the intraoperative image is captured when the bronchoscope travels in the human body;
  • An image matching module 1704 configured to determine a target virtual slice map matching the intraoperative image by matching the intraoperative image with the virtual slice map of the virtual bronchial tree of the target object;
  • the identification matching module 1705 is configured to determine the identification information of the corresponding lung segments and bifurcation nodes in the virtual bronchial tree of the target object that match the target virtual slice map, and the determined identification information is used to characterize the current position of the bronchoscope within the target object.
  • the identification module 1702 is specifically used for:
  • the node recognition neural network is trained in the following manner:
  • each training sample in the training sample set is extracted respectively; the virtual bronchial tree contained in a training sample is marked with a label, and the label is used to mark the actual position of each bifurcation node in the virtual bronchial tree contained in the training sample;
  • the predicted position of each bifurcation node contained in the virtual bronchial tree contained in the training sample output by the node recognition neural network is obtained;
  • the identification module 1702 is specifically used for:
  • the identification information is determined by matching the bifurcation nodes identified in the virtual bronchial tree of the target object with the knowledge map of the bronchial tree.
  • the identification module 1702 is specifically used for:
  • identification information of each lung segment and a bifurcation node in the virtual bronchial tree of the target object is determined.
  • the second graph convolutional network is configured to be able to calculate the probability that any lung segment in the third graph data corresponds to each bifurcation node, and the bifurcation node to which any lung segment belongs is the bifurcation node with the highest probability.
  • the image matching module 1704 is specifically used for:
  • the first image data to be matched includes the number and distribution of lung segment openings in the virtual slice image
  • the image matching module 1704 is specifically used for:
  • the first graph data is constructed with the centers of the virtual opening areas as vertices and the lines between the centers as edges;
  • the first graph data includes the number and positions of the vertices in the virtual slice map, and the relative distances between the vertices;
  • the first graph data is mapped to the first to-be-matched graph data of the virtual slice map.
  • the image matching module 1704 is specifically used for:
  • each actual opening area corresponds to one lung segment opening in the intraoperative image;
  • the second graph data includes the number and positions of the vertices in the intraoperative image, and the relative distances between the vertices;
  • the second graph data is mapped to the second to-be-matched graph data of the intraoperative image.
  • the first graph convolutional network is a deformation-resistant convolutional network;
  • the deformation-resistant convolutional network includes a spatial transformation layer and a convolution processing unit:
  • the spatial transformation layer is configured to obtain the first graph data and the second graph data, and to spatially transform the first graph data and/or the second graph data to obtain the first to-be-convolved graph data corresponding to the first graph data and the second to-be-convolved graph data corresponding to the second graph data;
  • the convolution processing unit is configured to convolve the first to-be-convolved graph data to obtain the first to-be-matched graph data of the virtual slice map, and to convolve the second to-be-convolved graph data to obtain the second to-be-matched graph data of the intraoperative image.
  • the image matching module 1704 is specifically used for:
  • the historical matching information characterizes the position and slice angle, within the virtual bronchial tree of the target object, of the virtual slice maps matched by historical intraoperative images;
  • the current matching range characterizes the range of positions within the virtual bronchial tree of the target object in which the target virtual slice map lies;
  • the target virtual slice map is determined by matching the intraoperative image with the virtual slice maps corresponding to the current matching range.
  • the image matching module 1704 is specifically used for:
  • the image matching module 1704 is specifically used for:
  • the spliced to-be-matched graph data output by the long short-term memory network is obtained by inputting the second to-be-matched graph data corresponding to the intraoperative image into the long short-term memory network; the spliced to-be-matched graph data refers to the concatenation of the second to-be-matched graph data corresponding to the intraoperative image and the second to-be-matched graph data corresponding to at least one historical intraoperative image;
  • the target virtual slice map is determined by matching the spliced to-be-matched graph data with the first to-be-matched graph data of the virtual slice maps within the current matching range.
  • an electronic device 1800 including:
  • a memory 1802, configured to store instructions executable by the processor;
  • the processor 1801 is configured to execute the above-mentioned methods by executing the executable instructions.
  • the processor 1801 can communicate with the memory 1802 through the bus 1803.
  • An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; the above-mentioned method is implemented when the program is executed by a processor.
  • the aforementioned program can be stored in a computer-readable storage medium.
  • when executed, the program performs the steps of the above-mentioned method embodiments; the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Pulmonology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Otolaryngology (AREA)
  • Physiology (AREA)
  • Robotics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Endoscopes (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a bronchoscope position determination method and apparatus, a system, a device, and a medium. The bronchoscope position determination method comprises: obtaining a virtual bronchial tree of a target object; identifying bifurcated nodes of the virtual bronchial tree of the target object, and obtaining identification information of lung segments and the bifurcated nodes in the virtual bronchial tree of the target object on the basis of the identified bifurcated nodes; obtaining an intraoperative image of the target object; matching the intraoperative image with a virtual slice image of the virtual bronchial tree of the target object, and determining a target virtual slice image matched with the intraoperative image; and determining identification information of the corresponding lung segments and bifurcated nodes matched with the target virtual slice image in the virtual bronchial tree of the target object, the determined identification information being used for representing a current position of the bronchoscope in the target object.

Description

Method, Device, System, Equipment and Medium for Determining the Position of a Bronchoscope

Technical Field

The present invention relates to the field of bronchoscopes, and in particular to a method, device, system, equipment and medium for determining the position of a bronchoscope.

Background

Bronchoscope navigation refers to providing navigation guidance for the video images actually captured during an operation by determining the position of the bronchoscope.

However, in the existing related art, when determining the position of the bronchoscope, it is not possible to accurately locate and describe which lung segment the bronchoscope is in. The information fed back during navigation and positioning in the prior art is therefore rather limited and can hardly meet the needs of bronchoscope navigation.

Summary of the Invention

The present invention provides a method, device, system, equipment and medium for determining the position of a bronchoscope, so as to solve the problem that navigation results can hardly meet the demand.
According to a first aspect of the present invention, there is provided a method for determining the position of a bronchoscope, comprising:

obtaining a virtual bronchial tree of a target object;

identifying bifurcation nodes of the virtual bronchial tree of the target object, and obtaining, based on the identified bifurcation nodes, identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object;

obtaining an intraoperative image of the target object, the intraoperative image being captured while a bronchoscope travels within the target object;

determining a target virtual slice map matching the intraoperative image by matching the intraoperative image with virtual slice maps of the virtual bronchial tree of the target object;

determining identification information of the corresponding lung segment and bifurcation node in the virtual bronchial tree of the target object that match the target virtual slice map, the determined identification information being used to characterize the current position of the bronchoscope within the target object.
Optionally, identifying the bifurcation nodes of the virtual bronchial tree of the target object comprises:

inputting the acquired virtual bronchial tree of the target object into a pre-trained node recognition neural network, and obtaining the position of each bifurcation node in the virtual bronchial tree of the target object output by the node recognition neural network.
Optionally, the node recognition neural network is trained as follows:

extracting sample features from each training sample in a training sample set, the virtual bronchial tree contained in each training sample being annotated with a label that marks the actual position of each bifurcation node in that virtual bronchial tree;

inputting the extracted sample features into the node recognition neural network to obtain the predicted position, output by the network, of each bifurcation node in the virtual bronchial tree contained in the training sample;

adjusting the node recognition neural network according to the difference between the actual positions and the predicted positions, to obtain the trained node recognition neural network.
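The training procedure described above can be sketched as follows. This is an illustrative stand-in only: a toy linear model plays the role of the node recognition neural network, and mean squared error between predicted and labeled node positions is an assumed loss — the patent does not specify the network architecture or loss function.

```python
def mse(pred, actual):
    """Mean squared error between predicted and labeled node positions."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

def train_step(weights, features, actual_positions, lr=0.05):
    """One adjustment of the model from the difference between predicted
    and actual bifurcation-node positions (toy linear stand-in model)."""
    pred = [w * f for w, f in zip(weights, features)]
    n = len(weights)
    # gradient of MSE with respect to each weight
    grads = [2 * (p - a) * f / n for p, a, f in zip(pred, actual_positions, features)]
    return [w - lr * g for w, g in zip(weights, grads)]

# toy sample features and labeled (actual) bifurcation-node positions
features = [1.0, 2.0, 3.0]
actual = [2.0, 4.0, 6.0]
weights = [0.0, 0.0, 0.0]
for _ in range(200):
    weights = train_step(weights, features, actual)
pred = [w * f for w, f in zip(weights, features)]
print(mse(pred, actual))  # the loss shrinks toward zero as training proceeds
```

The loop implements exactly the cycle in the text: predict positions, measure the difference against the labels, and adjust the model accordingly.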
Optionally, obtaining, based on the identified bifurcation nodes, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object comprises:

determining the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against a knowledge graph of the bronchial tree.
Optionally, determining the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against the knowledge graph of the bronchial tree comprises:

constructing third graph data with the identified bifurcation nodes as vertices and the lung segments connecting the identified bifurcation nodes in the virtual bronchial tree of the target object as edges;

inputting the third graph data into a pre-trained second graph convolutional network, so as to use the second graph convolutional network to determine the correspondence between bifurcation nodes and lung segments in the virtual bronchial tree of the target object;

determining, according to the correspondence and the knowledge graph, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object.
Optionally, the second graph convolutional network is configured to compute, for any lung segment in the third graph data, the probability that it corresponds to each bifurcation node, the bifurcation node to which the lung segment belongs being the one with the highest probability.
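The highest-probability assignment described above can be illustrated as follows. The softmax scoring and the per-node logits are assumptions for illustration; the patent only states that a probability is computed per bifurcation node and the maximum is taken.

```python
import math

def softmax(logits):
    """Turn per-node scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def assign_segment(node_logits):
    """Assign a lung segment to the bifurcation node with the highest
    probability, as the configuration above describes."""
    probs = softmax(node_logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs[best]

# hypothetical scores of one lung segment against three bifurcation nodes
node, p = assign_segment([0.2, 2.1, -0.5])
print(node, round(p, 3))
```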
Optionally, determining the target virtual slice map matching the intraoperative image by matching the intraoperative image with the virtual slice maps of the virtual bronchial tree of the target object comprises:

obtaining first to-be-matched graph data corresponding to any virtual slice map, the first to-be-matched graph data including the number and distribution of lung segment openings in that virtual slice map;

obtaining second to-be-matched graph data corresponding to the intraoperative image, the second to-be-matched graph data including the number and distribution of lung segment openings in the intraoperative image;

comparing the first to-be-matched graph data with the extracted second to-be-matched graph data, and determining the target virtual slice map according to the comparison result.
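The comparison step can be sketched as follows. Cosine similarity between the to-be-matched data vectors is an illustrative choice; the patent does not specify the comparison metric, and the vectors shown are hypothetical outputs of the graph convolutional network.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_slice(candidates, intraop_vec):
    """Return the virtual slice map whose to-be-matched data is most
    similar to the intraoperative image's to-be-matched data."""
    return max(candidates, key=lambda name: cosine(candidates[name], intraop_vec))

# hypothetical to-be-matched vectors for two candidate virtual slice maps
slices = {"slice_A": [0.9, 0.1, 0.0], "slice_B": [0.1, 0.8, 0.3]}
print(best_slice(slices, [0.2, 0.7, 0.4]))
```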
Optionally, obtaining the first to-be-matched graph data corresponding to any virtual slice map comprises:

determining the virtual opening areas in the virtual slice map, each virtual opening area corresponding to one lung segment opening in the virtual slice map;

constructing first graph data with the centers of the virtual opening areas as vertices and the lines between the centers as edges, the first graph data including the number and positions of the vertices in the virtual slice map and the relative distances between the vertices;

mapping, using a first graph convolutional network, the first graph data to the first to-be-matched graph data of the virtual slice map.
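The construction of the graph data from opening centers can be sketched as follows. The 2D coordinates and the fully connected edge set are illustrative assumptions; the source only specifies centers as vertices, center-to-center lines as edges, and vertex count, positions, and relative distances as the recorded data.

```python
import math

def build_graph_data(centers):
    """Construct graph data from opening centers: centers are vertices,
    the line between each pair of centers is an edge, and the data
    records vertex count, vertex positions, and relative distances."""
    n = len(centers)
    distances = {}
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            distances[(i, j)] = math.hypot(x1 - x2, y1 - y2)
    return {"num_vertices": n, "positions": list(centers), "distances": distances}

# hypothetical centers of three lung-segment opening areas in one slice map
graph = build_graph_data([(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)])
print(graph["num_vertices"], graph["distances"][(0, 1)])
```

The same construction applies to the second graph data below, with the actual opening areas of the intraoperative image in place of the virtual ones.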
Optionally, obtaining the second to-be-matched graph data corresponding to the intraoperative image comprises:

determining actual opening areas in the intraoperative image, each actual opening area corresponding to one lung segment opening in the intraoperative image;

constructing second graph data with the centers of the actual opening areas as vertices and the lines between the centers as edges, the second graph data including the number and positions of the vertices in the intraoperative image and the relative distances between the vertices;

mapping, using the first graph convolutional network, the second graph data to the second to-be-matched graph data of the intraoperative image.
Optionally, the first graph convolutional network is a deformation-resistant convolutional network;

the deformation-resistant convolutional network comprises a spatial transformation layer and a convolution processing unit:

the spatial transformation layer is configured to obtain the first graph data and the second graph data, and to spatially transform the first graph data and/or the second graph data to obtain first to-be-convolved graph data corresponding to the first graph data and second to-be-convolved graph data corresponding to the second graph data;

the convolution processing unit is configured to convolve the first to-be-convolved graph data to obtain the first to-be-matched graph data of the virtual slice map, and to convolve the second to-be-convolved graph data to obtain the second to-be-matched graph data of the intraoperative image.
Optionally, determining the target virtual slice map matching the intraoperative image by matching the intraoperative image with the virtual slice maps of the virtual bronchial tree of the target object comprises:

obtaining historical matching information, which characterizes the position and slice angle, within the virtual bronchial tree of the target object, of the virtual slice maps matched by historical intraoperative images;

determining a current matching range according to the historical matching information, the current matching range characterizing the range of positions within the virtual bronchial tree of the target object in which the target virtual slice map lies;

determining the target virtual slice map by matching the intraoperative image with the virtual slice maps corresponding to the current matching range.
Optionally, determining the current matching range according to the historical matching information comprises:

converting the historical matching information and the capture times of the historical intraoperative images into a vector to obtain a current vector, and inputting the current vector into a pre-trained long short-term memory network, so as to use the long short-term memory network to determine the current matching range.
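The vector conversion can be sketched as follows. The triple layout (position index, slice angle, capture time) is an illustrative encoding, and `predict_range` is a simple heuristic stand-in for the pre-trained long short-term memory network, whose architecture the patent does not specify.

```python
def to_vector(history):
    """Flatten historical matches, given as (position index, slice angle,
    capture time) triples, into the current vector fed to the LSTM."""
    vec = []
    for pos, angle, t in history:
        vec.extend([float(pos), float(angle), float(t)])
    return vec

def predict_range(history, window=2):
    """Stand-in for the pre-trained LSTM: restrict the search to a band
    of positions around the most recent match instead of the whole tree."""
    last_pos = history[-1][0]
    return (max(0, last_pos - window), last_pos + window)

# hypothetical history: two earlier matches with slice angles and timestamps
history = [(10, 30.0, 0.0), (12, 35.0, 1.0)]
print(to_vector(history))
print(predict_range(history))
```

Restricting the match to such a band is what makes this scheme cheaper than the global matching criticized in the summary above.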
Optionally, determining the target virtual slice map by matching the intraoperative image with the virtual slice maps within the current matching range comprises:

inputting the second to-be-matched graph data corresponding to the intraoperative image into the long short-term memory network, and obtaining the spliced to-be-matched graph data output by the network, the spliced to-be-matched graph data being the concatenation of the second to-be-matched graph data corresponding to the intraoperative image and the second to-be-matched graph data corresponding to at least one historical intraoperative image;

determining the target virtual slice map by matching the spliced to-be-matched graph data with the first to-be-matched graph data of the virtual slice maps within the current matching range.
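The splicing and range-restricted matching can be sketched as follows. Negative L1 distance as a similarity score, and integer position keys for the candidate slice maps, are illustrative assumptions standing in for the actual matching criterion.

```python
def splice(current_vec, history_vecs):
    """Concatenate the to-be-matched data of historical images with that
    of the current image (what the spliced LSTM output contains)."""
    out = []
    for v in history_vecs:
        out.extend(v)
    out.extend(current_vec)
    return out

def match_in_range(spliced, candidates, lo, hi):
    """Match only virtual slice maps whose position lies in [lo, hi];
    negative L1 distance serves as an illustrative similarity score."""
    in_range = {pos: vec for pos, vec in candidates.items() if lo <= pos <= hi}
    return max(in_range,
               key=lambda pos: -sum(abs(a - b) for a, b in zip(spliced, in_range[pos])))

# hypothetical vectors: current image plus one historical image
spliced = splice([1.0, 0.0], [[0.9, 0.1]])
candidates = {3: [0.9, 0.1, 1.0, 0.0], 4: [0.0, 1.0, 0.0, 1.0], 9: [1.0, 0.0, 1.0, 0.0]}
print(match_in_range(spliced, candidates, 2, 5))  # position 9 is outside the range
```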
According to a second aspect of the present invention, there is provided a device for determining the position of a bronchoscope, comprising:

a bronchial tree acquisition module, configured to obtain a virtual bronchial tree of a target object;

an identification module, configured to identify bifurcation nodes of the virtual bronchial tree of the target object and to obtain, based on the identified bifurcation nodes, identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object;

an intraoperative image acquisition module, configured to obtain an intraoperative image of the target object, the intraoperative image being captured while a bronchoscope travels within the human body;

an image matching module, configured to determine a target virtual slice map matching the intraoperative image by matching the intraoperative image with virtual slice maps of the virtual bronchial tree of the target object;

an identification matching module, configured to determine identification information of the corresponding lung segment and bifurcation node in the virtual bronchial tree of the target object that match the target virtual slice map, the determined identification information being used to characterize the current position of the bronchoscope within the target object.
According to a third aspect of the present invention, there is provided an electronic device comprising a processor and a memory,

the memory being configured to store code;

the processor being configured to execute the code in the memory to implement the method of the first aspect and its optional solutions.

According to a fourth aspect of the present invention, there is provided a storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of the first aspect and its optional solutions.

According to a fifth aspect of the present invention, there is provided a bronchoscope navigation system comprising a bronchoscope and a data processing unit, the data processing unit being configured to implement the method of the first aspect and its optional solutions.
In the method, device, system, equipment and medium for determining the position of a bronchoscope provided by the present invention, the bifurcation nodes of the virtual bronchial tree of the target object are identified, and the identification information of each lung segment and bifurcation node in the virtual bronchial tree is determined on that basis. Compared with solutions that navigate directly on the virtual bronchial tree without identifying bifurcation nodes or determining identification information, the present invention can accurately and effectively locate and display which lung segment and which bifurcation the bronchoscope has reached, meeting the needs of bronchoscope navigation. Furthermore, when the identification and determination are performed with trained models, a wide variety of virtual bronchial tree configurations can be handled effectively.

In a further optional solution, the current matching range is first determined from the historical matching information, and the matching target virtual slice map is then searched for within that range. Compared with the global matching of the prior art, this solution effectively reduces the amount of data that must be processed for matching and improves processing efficiency.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed to describe the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic structural diagram of a bronchial navigation system in an exemplary embodiment of the present invention;

Fig. 2 is a schematic flowchart of a method for determining the position of a bronchoscope in an exemplary embodiment of the present invention;

Fig. 3 is a schematic flowchart of identifying bifurcation nodes in an exemplary embodiment of the present invention;

Fig. 4 is a schematic diagram of the principle of a node recognition neural network in an exemplary embodiment of the present invention;

Fig. 5 is a schematic flowchart of determining identification information in an exemplary embodiment of the present invention;

Fig. 6 is a schematic diagram of one knowledge graph in an exemplary embodiment of the present invention;

Fig. 7 is a schematic diagram of another knowledge graph in an exemplary embodiment of the present invention;

Fig. 8 is a schematic flowchart of determining identification information in another exemplary embodiment of the present invention;

Fig. 9 is a schematic diagram of the result after completing the local naming of lung segments and bifurcation nodes of a virtual bronchial tree in an exemplary embodiment of the present invention;

Fig. 10 is a schematic flowchart of determining a target virtual slice map in an exemplary embodiment of the present invention;

Fig. 11 is a schematic diagram of lung segment openings in an exemplary embodiment of the present invention;

Fig. 12 is a schematic flowchart of determining the first to-be-matched graph data in an exemplary embodiment of the present invention;

Fig. 13 is a schematic diagram of virtual opening areas and actual opening areas in an exemplary embodiment of the present invention;

Fig. 14 is a schematic flowchart of determining the second to-be-matched graph data in an exemplary embodiment of the present invention;

Fig. 15 is a schematic diagram of the principle of determining a target virtual slice map in an exemplary embodiment of the present invention;

Fig. 16 is a schematic flowchart of determining a target virtual slice map in another exemplary embodiment of the present invention;

Fig. 17 is a schematic diagram of the program modules of a device for determining the position of a bronchoscope in an exemplary embodiment of the present invention;

Fig. 18 is a schematic structural diagram of an electronic device in an exemplary embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The terms "first", "second", "third", "fourth", etc. (if any) in the description, claims and drawings of the present invention are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the invention described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to that process, method, product or device.

The technical solution of the present invention is described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments.
Referring to Fig. 1, an embodiment of the present invention provides a bronchoscope navigation system 100, comprising a bronchoscope 101 and a data processing unit 102.

The bronchoscope 101 may include an image acquisition unit; the bronchoscope 101 can be understood as a device, or combination of devices, that can use the image acquisition unit to capture corresponding images after entering the trachea of a human body. The bronchoscope 101 may also include a bending tube (for example, an actively bending tube and/or a passively bending tube), and the image acquisition unit may be arranged at one end of the bending tube; moreover, whatever kind of bronchoscope is used does not depart from the scope of the embodiments of the present invention.

The data processing unit 102 can be understood as any device, or combination of devices, with data processing capability. In the embodiments of the present invention, the data processing unit 102 can be used to implement the position determination method described below; further, the data processing unit 102 can exchange data, directly or indirectly, with the image acquisition unit in the bronchoscope 101, so that the data processing unit 102 can receive intraoperative images.
Referring to FIG. 2, an embodiment of the present invention provides a method for determining the position of a bronchoscope, comprising:

S201: Obtain a virtual bronchial tree of a target object.

In one embodiment, the virtual bronchial tree is three-dimensional, i.e., a 3D virtual model of the bronchial tree. The virtual bronchial tree may be a 3D virtual model reconstructed from CT data; of course, it may also be obtained in other ways, which this specification does not limit. Correspondingly, the target object can be understood as the human body in which in-vivo navigation is currently required.

S202: Identify bifurcation nodes of the virtual bronchial tree of the target object and, based on the identified bifurcation nodes, determine identification information for each lung segment and bifurcation node in the virtual bronchial tree of the target object.

In one embodiment, a bifurcation node can be understood as any node that describes the position of a bifurcation of the virtual bronchial tree; one bifurcation node may be formed to represent each bifurcation. For example, the coordinates of the center of the bifurcation region in the virtual bronchial tree may be used as the coordinates of the bifurcation node. The process of identifying bifurcation nodes can accordingly be regarded as marking the bifurcations in the virtual bronchial tree of the target object, or as determining the positions of the bifurcation nodes.

In one embodiment, the identification information can be understood as any information that labels the bifurcation nodes and lung segments such that different lung segments, and different bifurcation nodes, are distinguishably identified. In one example, the identification information includes the name of each lung segment and the label of each bifurcation node.

S203: Acquire an intraoperative image of the target object.

The intraoperative image is captured while the bronchoscope travels inside the corresponding target object.
S204: Determine a target virtual slice image matching the intraoperative image by matching the intraoperative image against virtual slice images of the virtual bronchial tree of the target object.

A virtual slice image at a given position of the virtual bronchial tree can be understood as an image formed by slicing the virtual bronchial tree at that position; it may include a sectional view and/or a cross-sectional view, among others.

S205: Determine the identification information of the lung segment and bifurcation node in the virtual bronchial tree of the target object that match the target virtual slice image.

The determined identification information is used to characterize the current position of the bronchoscope within the target object.

The current position matches the position of the target virtual slice image within the virtual bronchial tree of the target object. That is, the position of the target virtual slice image in the virtual bronchial tree reflects the current position of the bronchoscope inside the target object.

In one example, based on step S205, the complete virtual bronchial tree can be displayed, with the identification information and the current position of the bronchoscope in the target object shown on it. In another example, a local portion of the virtual bronchial tree near the current position of the bronchoscope, together with the corresponding identification information, may be displayed. In yet another example, a first interface may display the complete virtual bronchial tree while a second interface displays the local virtual bronchial tree; the identification information may be shown in both interfaces, and the current position of the bronchoscope in the target object is shown in the first interface.

Compared with a solution that only displays the current position, the above solutions provide richer information for bronchoscope navigation.

In the above solutions, the bifurcation nodes of the virtual bronchial tree of the target object are identified, and the identification information of each lung segment and bifurcation node in the virtual bronchial tree is determined on that basis. Compared with schemes that use the virtual bronchial tree directly for navigation and positioning, without identifying and determining the identification information of bifurcation nodes and lung segments, the present invention can accurately and effectively locate and display which lung segment or bifurcation the bronchoscope has reached, meeting the needs of bronchoscopic navigation.
In one implementation, referring to FIG. 3, the process of identifying the bifurcation nodes of the virtual bronchial tree of the target object may include:

S301: Input the obtained virtual bronchial tree of the target object into a pre-trained node recognition neural network, and obtain the positions of the bifurcation nodes contained in the virtual bronchial tree of the target object as output by the node recognition neural network.

Step S301 can be understood as one implementation of the process of identifying the bifurcation nodes of the virtual bronchial tree of the target object in step S202 of FIG. 2; content already described for the embodiment of FIG. 2 is not repeated here.

The node recognition neural network may be any neural network capable of identifying bifurcation nodes in an input virtual bronchial tree, for example a convolutional neural network; in other examples it may also be implemented with a perceptron network, a recurrent neural network, and so on. Taking a convolutional neural network as an example, during training the weights of each layer can be updated by forward propagation and backpropagation; given sufficient training samples, the recognition accuracy of the node recognition neural network can be effectively guaranteed.

The node recognition neural network is trained as follows:

Extract the sample features of each training sample in a training sample set, where the virtual bronchial tree contained in each training sample is annotated with a label marking the actual position of each bifurcation node in that virtual bronchial tree.

Input the extracted sample features into the node recognition neural network to obtain the predicted position of each bifurcation node in the virtual bronchial tree of the training sample, as output by the network.

Adjust the node recognition neural network according to the difference information between the actual positions and the predicted positions, obtaining the trained node recognition neural network.

The value of the cost function used in training the node recognition neural network matches this difference information, which characterizes, for the virtual bronchial tree of a training sample, the error between the actual bifurcation-node positions marked in the label and the bifurcation-node positions predicted by the network. For example, the sum of the variances of the position errors over all bifurcation nodes of the virtual bronchial tree may be used as the value of the cost function.

Meanwhile, a normal person's bronchial tree generally has 18 lung segments and 17 bifurcations, but because each patient is anatomically specific, variations are still frequently encountered in examinations. For this reason, when training the node recognition neural network, the training samples cover a wide range of bronchial-tree configurations, so that the node recognition results can effectively accommodate them, including the typical case of 17 bifurcations and 18 lung segments as well as other variants. The trained node recognition neural network can therefore identify the bifurcation nodes and lung segments of diverse virtual bronchial trees, improving the accuracy of its output.

Taking a 3D convolutional neural network as the node recognition neural network, one process of training and building this network is described below:
Step a: Construct a deep learning dataset. The data in the dataset are 3D bronchial trees (i.e., the virtual bronchial trees of the training samples) reconstructed and rendered from collected patient CT data. Marking the bifurcation-node positions of each virtual bronchial tree produces a label; each virtual bronchial tree and its corresponding label together form a training sample, and the samples form the training sample set, i.e., the dataset.

Step b: Take 50% of the dataset as the training set, 10% as the validation set, and 40% as the test set. In other examples, the dataset may be split in other proportions.

Steps a and b above complete the construction and partitioning of the training sample set.
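The 50/10/40 split of step b can be sketched as follows. This is a minimal illustration in which the sample collection is represented by plain indices rather than actual virtual bronchial trees; the ratios and seed are adjustable.

```python
import random

def split_dataset(samples, train=0.5, val=0.1, seed=0):
    """Shuffle and split samples into train/validation/test subsets.
    Whatever remains after the train and validation portions becomes
    the test set (here 1 - 0.5 - 0.1 = 40%)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 50 10 40
```

Shuffling before splitting keeps the three subsets statistically comparable, which matters when patient anatomy varies as widely as the bronchial-tree cases described above.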
Step c: Build the 3D convolutional neural network and initialize its weight parameters using the Xavier initialization method.
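The Xavier initialization of step c can be sketched as follows. The layer sizes are placeholder values; the sketch uses the standard Glorot uniform variant, drawing weights from U(−limit, limit) with limit = sqrt(6 / (fan_in + fan_out)).

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=0):
    """Xavier/Glorot uniform initialization: the bound sqrt(6/(fan_in+fan_out))
    keeps the variance of activations roughly stable across layers."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

W = xavier_uniform(128, 64)
print(W.shape)  # (64, 128)
```

The same bound is applied to each convolutional kernel of the 3D network, with fan_in/fan_out computed from the kernel dimensions.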
After step c, the subsequent steps d through f may be performed cyclically for the input data X formed from each training sample.
Step d: Perform max–min normalization of the input data X, using the formula:

X' = (X − min(X)) / (max(X) − min(X))

The input data X can be understood as follows: after the virtual bronchial tree is converted into a three-dimensional matrix, each point of the tree is one element of that matrix, and the value of each element (for example, a value representing color, grayscale, or pixel intensity) constitutes the input data X; correspondingly, X' is the normalized data.

Here min(X) is the smallest value over all points of the three-dimensional matrix of the virtual bronchial tree, and max(X) is the largest. Through this max–min normalization, the data are mapped into the range [0, 1] for processing, which improves the speed and convenience of subsequent processing.

Step d may be performed by the 3D convolutional neural network itself after the input data X are fed into it, or the normalization may be performed before each input of X into the network, with the normalized data then fed in.
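The max–min normalization of step d can be sketched as follows; the small random 3D array stands in for the voxelized virtual bronchial tree.

```python
import numpy as np

def min_max_normalize(x):
    """Map all values of a 3D volume into [0, 1]:
    X' = (X - min(X)) / (max(X) - min(X))."""
    x = np.asarray(x, dtype=np.float64)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

# Placeholder volume: random 8-bit intensities on a 4x4x4 grid.
volume = np.random.default_rng(0).integers(0, 256, size=(4, 4, 4))
normed = min_max_normalize(volume)
print(normed.min(), normed.max())  # 0.0 1.0
```

A guard for the degenerate case max(X) = min(X) (a constant volume) would be needed in practice to avoid division by zero.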
Step e: After the normalized data are input to the 3D convolutional neural network and the network predicts the bifurcation nodes, the difference (i.e., the position error information) between the bifurcation-node positions predicted by the network and the bifurcation-node positions marked by the labels can be computed through the forward propagation pass, from which the loss function value is calculated; the loss function may also be described as the cost function.
The forward pass uses the cost function to compute the error between the positions (e.g., coordinates) of the bifurcation nodes marked by the labels and the predicted bifurcation-node positions. For ease of understanding, the mean squared error between corresponding positions may be used as the value C of the cost function:

C = (1 / 2n) Σ_x ‖y(x) − a^L(x)‖²

If the number of input samples is n = 1, then:

C = (1/2) Σ_j (y_j − a^L_j)²

where j indexes the j-th bifurcation node, y is the coordinate of the bifurcation node marked in the label, a^L_j represents the predicted value, i.e., the coordinate of the identified bifurcation node, and L is the last (maximum) layer index of the neural network.
Step f: Apply the cost function value computed in the forward pass to the error backpropagation algorithm, thereby optimizing the weight parameters of the 3D convolutional neural network.

Repeat steps d through f above until the set number of training rounds (for example, 200) is completed. Each round is validated on the validation set; after the 200 rounds, the 3D convolutional neural network achieving the best validation result is evaluated on the test set, and the trained network is used as the node recognition neural network.
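The round-by-round validation and best-model selection described above can be sketched as follows. `train_one_round` and `validate` are hypothetical stand-ins for the actual training and evaluation steps; the toy example below merely simulates a validation loss that bottoms out at round 120.

```python
def select_best_model(train_one_round, validate, rounds=200):
    """Train for a fixed number of rounds, validating after each round,
    and keep the parameters that achieved the lowest validation loss."""
    best_loss, best_params = float("inf"), None
    for r in range(rounds):
        params = train_one_round(r)
        loss = validate(params)
        if loss < best_loss:
            best_loss, best_params = loss, params
    return best_params, best_loss

# Toy example: round r yields parameters "p_r"; the simulated validation
# loss is minimized (zero) at round 120.
best, loss = select_best_model(
    train_one_round=lambda r: f"p_{r}",
    validate=lambda p: abs(int(p[2:]) - 120),
    rounds=200)
print(best, loss)  # p_120 0
```

Only the model selected on the validation set is then run once on the held-out test set, so the test result remains an unbiased estimate.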
The error backpropagation optimization algorithm is as follows:
Taking FIG. 4 as an example, in the error backpropagation algorithm, w^l_{jk} denotes the weight between the j-th neuron of layer l and the k-th neuron of layer l−1, and b^l_j denotes the bias of the j-th neuron of layer l. The output of the j-th neuron of layer l is a^l_j = σ(z^l_j), where σ(·) is the activation function and z^l_j is the input value of the j-th neuron of layer l, given by:

z^l_j = Σ_k w^l_{jk} a^{l−1}_k + b^l_j

so that the outputs a^{l−1}_k of layer l−1 serve as the input data of layer l.

During backpropagation, the ultimate goal is to obtain the gradient of the overall loss value (i.e., the cost function value) with respect to the parameters w and b. To compute these two gradients, the derivative of the loss with respect to each neuron's input value, and with respect to its output after the activation function, must first be computed. Let δ^L_j denote the gradient of the overall loss with respect to the input value z^L_j of the j-th neuron of the last layer L. Since a^L_j is the value obtained by activating z^L_j, by the chain rule:

δ^L_j = ∂C/∂z^L_j = (∂C/∂a^L_j) · σ'(z^L_j)

Considering all neurons of layer L simultaneously in matrix or vector form:

δ^L = ∇_a C ⊙ σ'(z^L)

where the symbol ⊙ denotes the Hadamard product. Unlike the last layer L, the output of a single neuron in a hidden layer l feeds into multiple neurons of the following layer l+1, hence:

δ^l_j = Σ_k w^{l+1}_{kj} δ^{l+1}_k σ'(z^l_j)

Likewise, in matrix or vector form:

δ^l = ((w^{l+1})^T δ^{l+1}) ⊙ σ'(z^l)

Finally, the gradient values of w and b can be computed directly from the above results:

∂C/∂w^l_{jk} = a^{l−1}_k δ^l_j
∂C/∂b^l_j = δ^l_j

At this point the backpropagation pass is complete. The gradient descent algorithm is then used to update the parameters w and b according to the gradient values:

w^l → w^l − (η/n) Σ_x δ^{x,l} (a^{x,l−1})^T
b^l → b^l − (η/n) Σ_x δ^{x,l}

where η is the learning rate, which may be a manually set value.
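The δ-recursion and gradient formulas above can be sketched in NumPy on a tiny two-layer sigmoid network and checked against a numerical gradient; the layer sizes and data here are placeholders, not the actual 3D network.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny 2-layer network, 3 -> 4 -> 2, one input x with target y.
x, y = rng.normal(size=3), rng.normal(size=2)
W = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
b = [rng.normal(size=4), rng.normal(size=2)]

def forward(W, b):
    a, zs = [x], []
    for Wl, bl in zip(W, b):
        zs.append(Wl @ a[-1] + bl)     # z^l = W^l a^{l-1} + b^l
        a.append(sigmoid(zs[-1]))      # a^l = sigma(z^l)
    return zs, a

def cost(W, b):
    _, a = forward(W, b)
    return 0.5 * np.sum((y - a[-1]) ** 2)   # C = 1/2 sum_j (y_j - a_j^L)^2

# Backpropagation: delta^L = (a^L - y) * sigma'(z^L),
# delta^l = (W^{l+1})^T delta^{l+1} * sigma'(z^l),
# dC/dW^l = outer(delta^l, a^{l-1}), dC/db^l = delta^l.
zs, a = forward(W, b)
sp = [sigmoid(z) * (1 - sigmoid(z)) for z in zs]      # sigma'(z)
delta = [(a[-1] - y) * sp[-1]]
delta.insert(0, (W[1].T @ delta[0]) * sp[0])
dW = [np.outer(delta[l], a[l]) for l in range(2)]

# Numerical check of dC/dW^0[0, 0] by central differences.
eps = 1e-6
Wp = [W[0].copy(), W[1].copy()]; Wp[0][0, 0] += eps
Wm = [W[0].copy(), W[1].copy()]; Wm[0][0, 0] -= eps
num = (cost(Wp, b) - cost(Wm, b)) / (2 * eps)
print(abs(num - dW[0][0, 0]) < 1e-6)  # True
```

The central-difference check confirms that the analytic gradient from the δ-recursion matches the numerical derivative, which is the standard way to validate a hand-written backpropagation pass.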
Through the above steps, the trained 3D convolutional neural network (which can also be understood as a virtual bronchial tree node segmentation model) is obtained; it can be used to identify the bifurcation nodes of diverse virtual bronchial trees.

In one implementation, referring to FIG. 5, the process of determining, based on the identified bifurcation nodes, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object may include:

S501: Determine the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against a knowledge graph of the bronchial tree.

Step S501 can be understood as one implementation of the process in step S202 of FIG. 2 of determining, based on the identified bifurcation nodes, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object; content already described for the embodiment of FIG. 2 is not repeated here.

The knowledge graph of the bronchial tree may be any information carrier capable of characterizing, to some degree, the connection relationships between bifurcations (or bifurcation nodes) and lung segments; there may be one or more such knowledge graphs, and this specification does not limit their number. For example, FIG. 6 and FIG. 7 each illustrate an example of a knowledge graph.

Through the identified bifurcation nodes and the knowledge graph, the actual bronchial anatomy can be effectively combined with objective knowledge of the bronchial tree, so as to accurately match the identification information of each lung segment and bifurcation in the virtual bronchial tree.

In one embodiment, based on the identified bifurcation nodes, matching may start directly from the main airway and proceed along each bifurcation node of the virtual bronchial tree, matching the names and labels of the bifurcations and lung segments in the knowledge graph to obtain the identification information.

In another embodiment, as shown in FIG. 8, before the identification information is matched, the correspondence between bifurcation nodes and lung segments may first be further confirmed by a neural network (for example, a second graph convolutional network). For instance, if the localization or recognition of the bifurcation nodes deviates, the correspondence between bifurcation nodes and lung segments may contain errors or omissions; in that case, the further confirmation and adjustment of this correspondence by the neural network ensures that the correspondence between each bifurcation node and lung segment can be adjusted and corrected, so that the identification information is matched more accurately.

In the embodiment shown in FIG. 8, the process of determining the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against the knowledge graph of the bronchial tree may include:
S801: Construct third graph data, taking the identified bifurcation nodes as vertices and the lung segments connecting the bifurcation nodes in the virtual bronchial tree of the target object as edges.

The third graph data is a matrix capable of characterizing the connection relationships between the vertices (i.e., the identified bifurcation nodes); it may be denoted G(V, E), where V can be understood as the bifurcations (bifurcation nodes) and E as the lung segments.

S802: Input the third graph data into the second graph convolutional network, so as to determine, using the second graph convolutional network, the correspondence between the bifurcation nodes and the lung segments in the virtual bronchial tree of the target object.

The second graph convolutional network can be understood as a neural network capable of processing the third graph data. Specifically, the second graph convolutional network may be configured to compute, for any lung segment in the third graph data, the probability that it corresponds to each bifurcation node, and to select the bifurcation node with the highest probability as the one to which that lung segment belongs.

S803: Determine, according to the correspondence and the knowledge graph, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object.
The specific process of steps S801 to S803 may be, for example, as follows:

Step a: Through the identification of the bifurcation nodes, which characterize the bifurcations (i.e., node segmentation, such as the identification in step S301), and since each bifurcation of the bronchial tree lies between adjacent lung segments, the third graph data G(V, E) can be constructed, where V is the set of bifurcations and E the set of lung segments. This step a is one implementation of step S801.

Step b: Since the bifurcations have been marked (i.e., the bifurcation nodes have been identified), what remains to be solved is the lung segments. Unlike the common use of graph convolution for vertex classification, the second graph convolutional network may here be used for edge classification, i.e., assigning each edge to its corresponding vertex, thereby determining the correspondence between edges and vertices. The input of the second graph convolutional network is the third graph data G(V, E), and the output is the correspondence between V and E. The graph convolutional network has a smoothing effect and can better establish the relationship between nodes and edges. This step b is one implementation of step S802.

In other implementations, the processing of a conventional graph convolutional network may also be applied here to determine the edge–vertex correspondence, i.e., the conventional graph convolutional network serves as the second graph convolutional network. Correspondingly, if a conventional graph convolutional network is used, the correspondence can be determined by classifying vertices to edges: when training such a network, the graph data of the virtual bronchial tree in each training sample may be annotated with a label marking the actual probability that any vertex (bifurcation node) belongs to (i.e., corresponds to) each edge (lung segment); after training, the network can output the probability that any vertex of the third graph data is classified to (i.e., corresponds to) any edge.

In contrast to such a scheme using a conventional graph convolutional network, the second graph convolutional network classifies edges to their corresponding vertices, thereby determining the edge–vertex correspondence. When training the second graph convolutional network, the graph data of the virtual bronchial tree in each training sample may be annotated with a label marking the actual probability that any edge (lung segment) belongs to (i.e., corresponds to) each vertex (bifurcation node); the trained second graph convolutional network can then output the probability that any edge of the third graph data is classified to (i.e., corresponds to) any vertex.

After the third graph data are input to the second graph convolutional network, the processing within the network may proceed, for example, as in the following steps c to e:
Step c: First compute the adjacency matrix A from G(V, E); for example, the entry A_ij may be set to 1 if vertices v_i and v_j are connected by an edge, and to 0 otherwise.
Step d: Define the input of the l-th layer of the second graph convolutional network as X^l and its output as X^{l+1}; their relationship is:

X^{l+1} = σ(D^{−1/2} A D^{−1/2} X^l W^l + b^l)

where σ is the nonlinear transformation layer, A is the adjacency matrix, D is the degree matrix, and W^l, b^l are the weights of the l-th layer of the second graph convolutional network.
Step e: Define the output of the second graph convolution to perform edge classification, using:

Z = softmax(X^l W_z + b_z)

where Z is, for each edge, the probability that it corresponds to each vertex (i.e., the probability that any lung segment corresponds to each bifurcation node). Then, for each edge, the vertex with the highest probability may be selected as its corresponding vertex (i.e., the bifurcation node with the highest probability is selected as the one to which the lung segment belongs). Here softmax is the activation function of the output layer, and W_z, b_z are the weights of the output layer, which can be updated using the backpropagation algorithm.
Steps a through e above implement steps S801 and S802.
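Steps c through e can be sketched in NumPy as follows. The graph, feature, and weight values are random placeholders; the way an edge is featurized (concatenating the features of its two endpoint vertices) is an assumption for illustration, as is the use of tanh for the nonlinear layer σ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Third graph data G(V, E): 3 vertices (bifurcations) and 3 edges
# (lung segments), each edge given as a pair of vertices.
edges = [(0, 1), (1, 2), (0, 2)]
n_vertices = 3

# Step c: adjacency matrix A (A_ij = 1 iff v_i and v_j share an edge).
A = np.zeros((n_vertices, n_vertices))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Step d: one propagation layer X^{l+1} = sigma(D^{-1/2} A D^{-1/2} X^l W^l + b^l).
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
X = rng.normal(size=(n_vertices, 4))              # initial vertex features
W_l, b_l = rng.normal(size=(4, 4)), rng.normal(size=4)
X = np.tanh(D_inv_sqrt @ A @ D_inv_sqrt @ X @ W_l + b_l)

# Step e: edge classification Z = softmax(X^l W_z + b_z); here each edge
# is represented by the concatenated features of its two endpoints.
edge_feats = np.stack([np.concatenate([X[i], X[j]]) for i, j in edges])
W_z, b_z = rng.normal(size=(8, n_vertices)), rng.normal(size=n_vertices)
logits = edge_feats @ W_z + b_z
Z = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# For each edge (lung segment), pick the most probable vertex
# (bifurcation node) as the one it belongs to.
assignment = Z.argmax(axis=1)
print(Z.shape, np.allclose(Z.sum(axis=1), 1.0))  # (3, 3) True
```

Each row of Z is a probability distribution over vertices, so the argmax per row realizes the "select the bifurcation node with the highest probability" rule of step e.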
Furthermore, a specific implementation of step S803 may be, for example, as shown in step f below.

Step f: According to the knowledge graph summarized by physicians, traverse each bifurcation in the knowledge graph one by one, matching the identification information of the lung segments and bifurcations in the knowledge graph to the bifurcations (i.e., bifurcation nodes) and lung segments of the virtual bronchial tree, thereby completing the naming of the edges and nodes of the virtual bronchial tree of the target object (i.e., the identification information, namely the names of the lung segments and the labels of the bifurcations/bifurcation nodes). The result may be seen in FIG. 9, which shows the virtual bronchial tree after local name completion of its lung segments and bifurcation nodes.

The specific matching and naming process may be, for example:

Initialize the position, starting from the main airway; according to the knowledge graph, the first node is the first bifurcation, which separates the left and right main bronchi.

Proceed downward in turn from the left main bronchus, completing the name of each edge (i.e., each lung segment of the virtual bronchial tree) and each vertex (i.e., each bifurcation or bifurcation node of the virtual bronchial tree) according to the knowledge graph.

Proceed downward in turn from the right main bronchus, completing the name of each edge and node according to the knowledge graph.

That is, by traversing the knowledge graph, each edge and node of the virtual bronchial tree (i.e., each lung segment, bifurcation, or bifurcation node) is matched to the identification information in the knowledge graph; the traversal order is not limited to the above example.
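The traversal described above can be sketched as a breadth-first walk from the main airway that copies names from the knowledge graph onto the matched tree. The tree structure and names below are simplified placeholders, not real bronchial anatomy or a physicians' knowledge graph.

```python
from collections import deque

# Simplified knowledge graph: vertex -> (bifurcation label,
# {child vertex: lung-segment name for the connecting edge}).
knowledge = {
    "carina": ("bifurcation 1", {"left_main": "left main bronchus",
                                 "right_main": "right main bronchus"}),
    "left_main": ("bifurcation 2", {"LB1": "left segment 1"}),
    "right_main": ("bifurcation 3", {"RB1": "right segment 1"}),
    "LB1": (None, {}),   # leaves: no further bifurcation
    "RB1": (None, {}),
}

def complete_names(root="carina"):
    """Starting from the main airway, assign the knowledge-graph label to
    every bifurcation node and the segment name to every edge."""
    node_names, edge_names = {}, {}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        label, children = knowledge[v]
        if label is not None:
            node_names[v] = label
        for child, segment in children.items():
            edge_names[(v, child)] = segment
            queue.append(child)
    return node_names, edge_names

nodes, edges = complete_names()
print(nodes["carina"], "|", edges[("carina", "left_main")])
```

Because the walk starts at the first bifurcation and visits each branch in turn, every edge and node receives exactly one name, mirroring the left-then-right completion order described above.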
It can be seen that in the specific scheme of steps S801 to S803 above, the second graph convolutional network generally classifies the vertices V (i.e., bifurcations or bifurcation nodes) of the third graph data G(V, E); each bifurcation of the virtual bronchial tree aggregates several lung segments E. The second graph convolutional network then performs edge aggregation, thereby establishing the relationship between vertices and edges, i.e., the correspondence between bifurcations (bifurcation nodes) and lung segments. On this basis, by introducing the knowledge graph, name completion (i.e., determination of the identification information) can be performed on the classified edges and vertices.
In one implementation, in order to match the virtual slice images with the intraoperative image accurately and effectively, features of the images can be extracted in advance into a form better suited for matching; of course, the embodiments of the present invention do not exclude schemes that match the images directly. In the embodiment shown in Fig. 10, the graph data to be matched is formed based on the virtual slice images and the intraoperative image, which may include the following steps:
S1001: Obtain the virtual bronchial tree of the target object;
S1002: Acquire the intraoperative image;
In one embodiment, step S1001 is performed in the same way as step S201 in the embodiment shown in Fig. 2 above, and step S1002 is performed in the same way as step S203 in the embodiment shown in Fig. 2, which will not be repeated here.
After step S1001, the method may further include:
S1003: Obtain the first graph data to be matched corresponding to any virtual slice image;
After step S1002, the method may further include:
S1004: Obtain the second graph data to be matched corresponding to the intraoperative image.
The graph data to be matched can characterize the number and distribution of the lung-segment openings in the corresponding image.
After steps S1003 and S1004, step S1005 may be performed: comparing the first graph data to be matched with the extracted second graph data to be matched, and determining the target virtual slice image according to the comparison result;
For example, when the comparison result indicates that the first graph data to be matched is identical or similar to the second graph data to be matched, the virtual slice image corresponding to that first graph data to be matched can be determined as the target virtual slice image.
Step S1005 above is performed in the same way as step S204 in the embodiment shown in Fig. 2, which will not be repeated here.
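The comparison in step S1005 can be illustrated with a toy nearest-match selection, assuming (purely for illustration) that each piece of graph data to be matched has already been reduced to a flat feature vector; the `best_match` helper and the squared-distance criterion are hypothetical choices, not prescribed by the embodiment.

```python
def best_match(candidates, query):
    """Pick the virtual slice whose feature vector is closest
    (squared Euclidean distance) to the intraoperative feature
    vector. `candidates` is a list of (slice_id, feature_vector)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda item: d2(item[1], query))[0]
```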
The lung-segment opening can be understood as the entrance of a lung segment, which appears as a closed-loop opening in a two-dimensional virtual slice image or intraoperative image. For example, at the bifurcation where a first lung segment splits into a second and a third lung segment, the lung-segment openings can be understood as the entrance of the second lung segment and the entrance of the third lung segment. Taking the intraoperative image shown in Fig. 11 (which can also be regarded as a virtual slice image) as an example, the lung-segment openings may be, for example, openings 1101, 1102 and 1103 shown in Fig. 11.
Furthermore, some graph data to be matched may also characterize information such as the shape and size of the lung-segment openings. The first graph data to be matched of a virtual slice image can characterize at least the number and distribution of the virtual lung-segment openings in that virtual slice image, and the second graph data to be matched of the intraoperative image can characterize at least the number and distribution of the real lung-segment openings in the intraoperative image. The distribution refers, for example, to the relative position and distance between every two lung-segment openings, or to the positions of the opening centers in the virtual slice image.
Compared with schemes that match the intraoperative image directly against the virtual slice images, matching the first graph data to be matched against the second graph data to be matched avoids interference from information in the images that is irrelevant to the lung-segment openings (for example, colors, or airway-wall textures unrelated to the openings).
Both the first data to be matched and the second data to be matched can be represented as matrices; of course, other representations are also possible without departing from the scope of the embodiments of the present invention.
Referring to the embodiment shown in Fig. 12, the process of forming the first graph data to be matched of a virtual slice image may include:
S1201: Determine the virtual opening areas in the virtual slice image;
Each virtual opening area represents one lung-segment opening in the virtual slice image;
S1202: Construct the first graph data with the centers of the virtual opening areas as nodes and the lines connecting the centers as edges;
The first graph data is a matrix capable of characterizing the number and position features of the vertices in the virtual slice image, as well as the relative distances between the vertices;
S1203: Using the first graph convolutional network, map the first graph data to the first graph data to be matched of the virtual slice image.
Steps S1201 to S1203 above can be regarded as one implementation of step S1003 in the embodiment shown in Fig. 10; content already described for the embodiment of Fig. 10 will not be repeated here.
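Step S1202 (opening centers as nodes, center-to-center connections as edges, with a matrix of relative distances) can be sketched as below; the dictionary layout returned by the hypothetical `build_graph_data` helper is an illustrative assumption, not the patent's actual matrix encoding.

```python
import math

def build_graph_data(centers):
    """Build a simple graph representation from lung-segment opening
    centers: every center is a vertex, and every pair of centers is
    joined by an edge weighted with their Euclidean distance."""
    n = len(centers)
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            (xi, yi), (xj, yj) = centers[i], centers[j]
            dist[i][j] = math.hypot(xi - xj, yi - yj)
    return {"num_vertices": n, "positions": list(centers), "distances": dist}
```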
The virtual opening areas (and the actual opening areas mentioned later) can be understood with reference to the closed-loop shapes shown on the right side of Fig. 13. The identification of a virtual opening area (which can also be understood as delimitation or segmentation) may be performed with an opening recognition model. Of course, the virtual and actual opening areas can also be determined by extracting and screening lines in the image and extracting closed contours, without resorting to an opening recognition model; this specification places no limitation on this.
In one embodiment, step S1201 may include: using an opening recognition model to identify the opening areas in the virtual slice image, so as to determine the virtual opening areas.
The opening recognition model can be understood as any model capable of segmenting and delimiting the lung-segment openings corresponding to lung segments. Since the openings are the salient content of the image, the opening recognition model can also be understood as a model that segments out the salient regions of the image, and may be described as a saliency-detection neural network. Each bifurcation contains one or more lung-segment openings, each opening corresponds to one lung segment, and the texture, size and shape information of the openings, both individually and between openings, is distinctive.
The saliency-detection neural network (i.e., the opening recognition model) may be a convolutional neural network used to perform saliency segmentation (i.e., identification of the opening areas) on the virtual slice images and the intraoperative image; in some examples, two separate neural networks may be used for the virtual slice images and the intraoperative image respectively. An example of building and training this convolutional neural network (i.e., the opening recognition model) is as follows:
Step a: Use image annotation software with a graphical interface (for example, the labelme software) to mark the salient regions of the virtual slice images and intraoperative images (which can also be understood as marking the lung-segment openings in the images). The marked results serve as labels; each virtual slice image with its label, and each intraoperative image with its label, is then taken as a sample. The set of samples forms the dataset, of which 50% is used as the training set, 10% as the validation set, and 40% as the test set.
Step b: Normalize the samples in the dataset so that the images are a uniform size of 500×500.
Step c: Build the convolutional neural network and initialize its weight parameters with Xavier initialization.
Step d: Set the size of the network's output matrix to 500×500×26; 26 means that, allowing for anatomical variation, the human body has at most 25 lung-segment classes, which together with the background gives 26 classes in total.
After step d, the samples can be fed into the convolutional neural network one by one, and the network performs the following steps e and f.
Step e: Take the difference between the output matrix predicted by the network and the label as the value of the loss function.
Step f: Apply the loss value to the error back-propagation algorithm to optimize the network's weight parameters.
Repeat steps e and f until a set number of training rounds (for example, 1000) is completed. Each round is validated on the validation set; after the 1000 rounds, the convolutional neural network that performs best on the validation set is tested on the test set, and the trained network is taken as the opening recognition model.
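The 50%/10%/40% dataset split of step a can be sketched as follows; the shuffling seed and the `split_dataset` helper are illustrative assumptions.

```python
import random

def split_dataset(samples, seed=0):
    """Shuffle labelled samples and split them into training,
    validation and test sets in 50% / 10% / 40% proportions."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    n = len(samples)
    n_train = n * 50 // 100
    n_val = n * 10 // 100
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```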
Corresponding to the process of obtaining the first graph data to be matched, and referring to Fig. 14, the process of obtaining the second graph data to be matched corresponding to the intraoperative image may include:
S1401: Determine the actual opening areas in the intraoperative image;
Each actual opening area represents one lung-segment opening in the intraoperative image;
S1402: Construct the second graph data with the centers of the actual opening areas as vertices and the lines connecting the centers as edges;
The second graph data is a matrix capable of characterizing the number and position features of the vertices in the intraoperative image, as well as the relative distances between the vertices; it can be understood with reference to the description of the first graph data;
S1403: Using the first graph convolutional network, map the second graph data to the second graph data to be matched of the intraoperative image.
In one embodiment, step S1401 may include: using an opening recognition model to identify the opening areas in the intraoperative image, so as to determine the actual opening areas.
Steps S1401 to S1403 are similar to the embodiment shown in Fig. 12 and will not be repeated here.
The opening areas mentioned here (for example, the actual and virtual opening areas) may also be the closed elliptical, circular or olive-shaped curves shown in Fig. 15. The middle column shows virtual slice images, in which the regions inside the closed curves are the virtual opening areas; their centers serve as the vertices when constructing the first graph data, and the lines connecting the centers serve as its edges. The right column shows intraoperative images, in which the regions inside the closed curves are the actual opening areas; their centers serve as the vertices when constructing the second graph data, and the lines connecting the centers serve as its edges.
In one embodiment, the first graph convolutional network may be a non-deformation-resistant convolutional network (for example, one without the spatial transformation layer mentioned later), in which case the first graph convolutional network only performs convolution on the first and second graph data.
In another embodiment, considering that there may be rigid-transformation differences between the virtual slice images and the intraoperative image, such as left-right rotation or being upside down, the first graph convolutional network used above may be a deformation-resistant convolutional network.
In either implementation, convolution by the first graph convolutional network extracts the features of the graph data, better reflecting the characteristics of the first and second graph data and making the matching result more accurate.
The deformation-resistant convolutional network may include:
a spatial transformation layer, configured to: obtain the first graph data and the second graph data, and transform the first graph data and/or the second graph data to obtain the first graph data to be convolved corresponding to the first graph data and the second graph data to be convolved corresponding to the second graph data;
wherein:
if the first graph data is transformed and the second graph data is not, the first graph data to be convolved is the transformed first graph data, and the second graph data to be convolved is the second graph data;
if the second graph data is transformed and the first graph data is not, the second graph data to be convolved is the transformed second graph data, and the first graph data to be convolved is the first graph data;
if both the first and second graph data are transformed, the first graph data to be convolved is the transformed first graph data, and the second graph data to be convolved is the transformed second graph data;
the transformation includes an alignment transformation;
the alignment transformation means: when the intraoperative image matches the virtual slice image, transforming the positions of the vertices characterized by the first graph data and those characterized by the second graph data so that they coincide or nearly coincide;
Taking the triangles and line segments formed by connecting the vertices in Fig. 15 as an example, the effects of the transformation may include: rotating a triangle formed by the vertices, translating such a triangle, rotating a line segment formed by the vertices, translating such a line segment, and so on.
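The rotation and translation effects described above amount to a 2-D rigid transform of the opening-center points; a minimal sketch follows. The `rigid_transform` helper is hypothetical, and a real spatial transformation layer would learn its parameters rather than receive them as arguments.

```python
import math

def rigid_transform(points, angle_rad, dx, dy):
    """Apply a 2-D rigid transform (rotation about the origin
    followed by a translation) to a list of (x, y) center points."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]
```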
When the intraoperative image does not match the virtual slice image, the spatial transformation layer may still transform the first and/or second graph data; since this does not affect the final matching result between the intraoperative image and the virtual slice images, no such transformation departs from the scope of the embodiments of the present invention.
The convolution processing unit is configured to convolve the first graph data to be convolved to obtain the first graph data to be matched of the virtual slice image, and to convolve the second graph data to be convolved to obtain the second graph data to be matched of the intraoperative image.
In one implementation, the convolution processing unit includes:
an embedding layer, configured to convert the data characterizing the relative distances between vertices in the first graph data to be convolved into a fixed vector, obtaining the third graph data to be convolved, and to convert the data characterizing the relative distances between vertices in the second graph data to be convolved into the fixed vector, obtaining the fourth graph data to be convolved;
a graph convolution layer, configured to convolve the third graph data to be convolved to obtain the first graph data to be matched of the virtual slice image, and to convolve the fourth graph data to be convolved to obtain the second graph data to be matched of the intraoperative image.
In another implementation, the convolution processing unit may include the spatial transformation layer and the graph convolution layer without the embedding layer, directly convolving the first and second graph data to be convolved.
Thus, the deformation-resistant convolutional network above realizes alignment transformation between graph data, and the graph data to be matched obtained on this basis enables accurate and effective image matching.
Before using the deformation-resistant convolutional network, the preparatory process may include, for example:
Step a: From the 500×500×26 output matrix of the saliency-detection convolutional neural network (i.e., the opening recognition model), generate a 500×500 0-1 matrix, where 0 represents the background (i.e., regions outside the virtual or actual opening areas) and 1 represents the salient regions (i.e., regions inside the virtual or actual opening areas).
Step b: Convert the virtual slice image and the intraoperative image into matrices and multiply each by its corresponding 0-1 matrix, obtaining a first matrix and a second matrix.
Step c: For the first and second matrices, take each lung-segment opening as a connected component and compute its center position; with the center point of each opening as a vertex and the lines connecting the center points as edges, construct the graph data (i.e., the first graph data and the second graph data).
Performing steps a, b and c above on a virtual slice image is one implementation of step S1202; performing them on the intraoperative image is one implementation of step S1402.
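The connected-component part of step c can be sketched as follows: given the 0-1 mask from step a, each 4-connected region of 1s is treated as one lung-segment opening and its centroid is computed. The 4-connectivity choice and the `opening_centers` helper are illustrative assumptions.

```python
from collections import deque

def opening_centers(mask):
    """Treat each 4-connected region of 1s in a binary mask as one
    lung-segment opening and return the (row, col) centroid of every
    region, in row-major discovery order."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] == 1 and not seen[r][c]:
                queue, cells = deque([(r, c)]), []
                seen[r][c] = True
                while queue:  # flood-fill one connected component
                    y, x = queue.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = sum(y for y, _ in cells) / len(cells)
                xs = sum(x for _, x in cells) / len(cells)
                centers.append((ys, xs))
    return centers
```

The resulting centers would then feed the graph construction of steps S1202/S1402.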
Suppose the deformation-resistant graph convolutional network contains three layers in total: the first is a spatial transformation layer, the second an embedding layer, and the third a graph convolution layer.
The first layer, the spatial transformation layer, can spatially transform the first and/or second graph data; its effect appears as a 6-DoF transformation of the shapes and line segments formed by the opening center points in the corresponding image (for example, an intraoperative image or a virtual slice image).
The role of the second layer, the embedding layer, can be understood as follows: because lung segments differ in size, the embedding layer unifies them into a fixed vector, which facilitates the computation of the subsequent graph convolution layer.
The third layer, the graph convolution layer, works as follows:
First, solve the adjacency matrix A from G(V, E):

$$A_{ij} = \begin{cases} 1, & (v_i, v_j) \in E \\ 0, & \text{otherwise} \end{cases}$$

Define the input of the l-th graph convolution layer as $X^{l}$ and its output as $X^{l+1}$; their relationship is:

$$X^{l+1} = \sigma\left(D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X^{l} W^{l} + b^{l}\right)$$

where σ is a non-linear transformation, A is the adjacency matrix, D is the degree matrix, and $W^{l}$, $b^{l}$ are the graph convolution weights.
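A graph convolution step of this kind can be sketched in plain Python. Note the simplifications relative to the layer described above, all of which are assumptions made to keep the sketch short: mean (degree-normalized) aggregation with self-loops replaces the symmetric normalization, the bias term is omitted, and ReLU stands in for σ.

```python
def gcn_layer(A, X, W):
    """One simplified graph-convolution step:
    X' = ReLU(D^{-1} A_hat X W), where A_hat = A + I adds self-loops
    and D is the degree matrix of A_hat (mean aggregation)."""
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in A_hat]
    # aggregate neighbour features, normalised by degree
    agg = [[sum(A_hat[i][k] * X[k][j] for k in range(n)) / deg[i]
            for j in range(len(X[0]))] for i in range(n)]
    # linear transform + ReLU
    out_dim = len(W[0])
    return [[max(0.0, sum(agg[i][k] * W[k][j] for k in range(len(W))))
             for j in range(out_dim)] for i in range(n)]
```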
After the deformation-resistant convolutional network, the graph data to be matched of the virtual slice images and of the intraoperative image are aligned and feature-smoothed; matching can then be performed between the graph data to be matched (i.e., between the first and second graph data to be matched), and through this matching of graph data, the matching between virtual slice images and the intraoperative image is realized.
Since the graph data of each virtual slice image (for example, the first graph data or the graph data to be matched) corresponds to a specific position and slice angle in the virtual bronchial tree, matching the graph data finds the target virtual slice image that matches the target object's intraoperative image; the current position of the bronchoscope in the human body can then be determined from the position of the matched target virtual slice image in the virtual bronchial tree, realizing navigation.
In some schemes, after the alignment transformation of the first and second graph data, the transformed first and second graph data may be used directly as the graph data to be matched.
In addition, all of the graph data mentioned above can be represented as matrices.
In one implementation, in order to effectively improve the matching effect, referring to Fig. 16, the process of determining the target virtual slice image matching the intraoperative image by matching the intraoperative image against virtual slice images of the target object's virtual bronchial tree may include:
S1601: Obtain historical matching information;
The historical matching information characterizes the positions and slice angles, in the target object's virtual bronchial tree, of the virtual slice images matched by historical intraoperative images;
S1602: Determine the current matching range according to the historical matching information;
The current matching range characterizes the range of positions in the virtual bronchial tree where the target virtual slice image may be located;
S1603: Determine the target virtual slice image by matching the intraoperative image with the virtual slice images within the current matching range.
Steps S1601 to S1603 can be understood as one implementation of step S204 shown in Fig. 2; content already described for that embodiment will not be repeated here.
Vision-based bronchoscope navigation follows spatio-temporal logic: for example, if the bronchoscope is currently at bifurcation 15, it can next reach only, say, bifurcation 16 or bifurcation 13. Inductive reasoning over multiple intraoperative images can therefore narrow the matching range of virtual-real image matching, so the intraoperative image does not need to be matched against all virtual slice images of the target object's virtual bronchial tree, effectively reducing the amount of matching data, speeding up matching, and excluding illogical solutions.
It can be seen that the above scheme effectively reduces the amount of data to be processed for matching and improves processing efficiency.
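The spatio-temporal restriction can be sketched as a toy candidate-set computation over a hypothetical bifurcation adjacency table (assuming the bronchoscope may stay at, or move to a neighbor of, the last matched bifurcation):

```python
def candidate_bifurcations(adjacency, last_position):
    """Restrict the matching range to the last matched bifurcation
    and the bifurcations directly reachable from it."""
    return {last_position} | set(adjacency.get(last_position, ()))
```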
In one embodiment, the specific process of step S1602 may include:
converting the historical matching information and the capture times of the historical intraoperative images into a vector to obtain a current vector, and inputting the current vector into a long short-term memory network, so as to determine the current matching range using the long short-term memory network.
Specifically, the 6-DoF information of an intraoperative image (representing its position and slice angle in the virtual bronchial tree) and the corresponding capture time can be vectorized, for example by concatenating these pieces of information one by one to form the current vector.
During training, the long short-term memory network can be trained on training intraoperative images and training virtual slice images, gradually updating its ability to determine the matching range through the weights it outputs.
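The vectorization feeding the long short-term memory network can be sketched as a simple concatenation of the 6-DoF pose and the capture time; the `to_current_vector` helper and its layout are illustrative assumptions, not the patent's actual encoding.

```python
def to_current_vector(pose_6d, timestamp):
    """Concatenate a 6-DoF pose (position and slice angles in the
    virtual bronchial tree) and the capture time into one flat
    vector, as input for the long short-term memory network."""
    if len(pose_6d) != 6:
        raise ValueError("expected a 6-DoF pose")
    return [float(v) for v in pose_6d] + [float(timestamp)]
```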
Matching can be performed based on the graph data to be matched determined in steps S1003 and S1004 of the embodiment shown in Fig. 10. For example, step S1603 may include:
inputting the second graph data to be matched corresponding to the intraoperative image into the long short-term memory network, and obtaining the concatenated graph data to be matched output by the network, where the concatenated graph data to be matched refers to the second graph data to be matched of the intraoperative image concatenated with the second graph data to be matched of at least one historical intraoperative image;
determining the target virtual slice image by matching the concatenated graph data to be matched against the first graph data to be matched of the virtual slice images within the current matching range.
Since both the intraoperative image and historical intraoperative images are used, this scheme effectively improves matching accuracy compared with matching using the intraoperative image alone.
In other examples, concatenation may be omitted, and the target virtual slice image may be determined by locally matching the second graph data to be matched of the intraoperative image against the first graph data to be matched of the virtual slice images within the current matching range.
Referring to FIG. 17, an embodiment of the present invention further provides a bronchoscope position determination apparatus 1700, including:
a bronchial tree acquisition module 1701, configured to acquire a virtual bronchial tree of a target object;
an identification module 1702, configured to identify bifurcation nodes of the virtual bronchial tree of the target object and, based on the identified bifurcation nodes, obtain identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object;
an intraoperative image acquisition module 1703, configured to acquire an intraoperative image of the target object, the intraoperative image being captured while a bronchoscope travels within the human body;
an image matching module 1704, configured to determine a target virtual slice map matching the intraoperative image by matching the intraoperative image against virtual slice maps of the virtual bronchial tree of the target object;
an identification matching module 1705, configured to determine identification information of the lung segment and bifurcation node in the virtual bronchial tree of the target object that correspond to the target virtual slice map, the determined identification information being used to characterize the current position of the bronchoscope within the target object.
Optionally, the identification module 1702 is specifically configured to:
input the acquired virtual bronchial tree of the target object into a pre-trained node recognition neural network, and obtain the position of each bifurcation node contained in the virtual bronchial tree of the target object as output by the node recognition neural network.
Optionally, the node recognition neural network is trained as follows:
extracting sample features of each training sample in a training sample set, the virtual bronchial tree contained in each training sample being annotated with labels marking the actual positions of the bifurcation nodes in that virtual bronchial tree;
inputting the extracted sample features into the node recognition neural network to obtain the predicted positions of the bifurcation nodes in the virtual bronchial tree contained in the training sample, as output by the node recognition neural network;
adjusting the node recognition neural network according to difference information between the actual positions and the predicted positions, to obtain the trained node recognition neural network.
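The training procedure above (predict, compare with labels, adjust) can be sketched with a deliberately minimal stand-in: a linear map from sample features to predicted bifurcation positions, adjusted by gradient descent on the prediction error. The patent does not specify the network architecture; everything here is an assumption for illustration.

```python
import numpy as np

# Hypothetical minimal "node recognition network": a linear map from
# sample features to predicted 2-D bifurcation positions.
features = np.vstack([np.eye(3)] * 4)             # 12 training samples, 3 features each
true_map = np.array([[1.0, 0.5], [-0.5, 1.0], [0.2, -0.3]])
actual_positions = features @ true_map            # labelled actual positions

W = np.zeros((3, 2))                              # untrained network weights
for _ in range(1000):
    predicted = features @ W                      # forward pass: predicted positions
    diff = predicted - actual_positions           # difference information
    W -= 0.1 * features.T @ diff / len(features)  # adjust the network

final_loss = float(np.mean((features @ W - actual_positions) ** 2))
```

After training, `final_loss` is driven close to zero, i.e. the predicted bifurcation positions agree with the annotated actual positions on the training set.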
Optionally, the identification module 1702 is specifically configured to:
determine the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against a knowledge graph of the bronchial tree.
Optionally, the identification module 1702 is specifically configured to:
construct third graph data with the identified bifurcation nodes as vertices and the lung segments connecting the identified bifurcation nodes in the virtual bronchial tree of the target object as edges;
input the third graph data into a pre-trained second graph convolutional network, so as to determine, using the second graph convolutional network, the correspondence between bifurcation nodes and lung segments in the virtual bronchial tree of the target object;
determine, according to the correspondence and the knowledge graph, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object.
Optionally, the second graph convolutional network is configured to compute, for any lung segment in the third graph data, the probability that the lung segment corresponds to each bifurcation node; the bifurcation node to which the lung segment belongs is the one with the highest probability.
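The highest-probability assignment in the last paragraph amounts to a softmax-plus-argmax over per-node scores. A sketch of that final step, assuming the graph convolutional network emits one raw score per bifurcation node (the scores and the softmax head are assumptions, not the patent's specification):

```python
import numpy as np

def assign_segment(scores):
    """Turn one lung segment's raw per-node scores into probabilities and
    pick the bifurcation node with the highest probability."""
    e = np.exp(scores - np.max(scores))   # numerically stable softmax
    probs = e / e.sum()
    return probs, int(np.argmax(probs))
```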
Optionally, the image matching module 1704 is specifically configured to:
acquire first to-be-matched map data corresponding to any virtual slice map, the first to-be-matched map data including the number and distribution of lung segment openings in that virtual slice map;
acquire second to-be-matched map data corresponding to the intraoperative image, the second to-be-matched map data including the number and distribution of lung segment openings in the intraoperative image;
compare the first to-be-matched map data with the extracted second to-be-matched map data, and determine the target virtual slice map according to the comparison result.
Optionally, the image matching module 1704 is specifically configured to:
determine virtual opening regions in the virtual slice map, each virtual opening region corresponding to one lung segment opening in the virtual slice map;
construct first graph data with the centers of the virtual opening regions as vertices and the lines connecting the centers as edges, the first graph data including the number and positions of the vertices in the virtual slice map as well as the relative distances between vertices;
map, using a first graph convolutional network, the first graph data to the first to-be-matched map data of the virtual slice map.
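The graph-construction step above (opening centres as vertices, connecting lines as edges, relative distances as edge attributes) can be sketched directly. The dictionary layout and the choice of a fully connected edge set are illustrative assumptions.

```python
import numpy as np

def build_graph_data(centers):
    """Build graph data from opening-region centres: each centre is a
    vertex, every pair of centres is joined by an edge, and the pairwise
    relative distances are recorded alongside the vertex count and
    positions."""
    pts = np.asarray(centers, dtype=float)
    n = len(pts)
    distances = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return {"num_vertices": n, "positions": pts,
            "edges": edges, "distances": distances}
```

The same construction serves both the virtual slice map (first graph data) and the intraoperative image (second graph data), which is what lets a single first graph convolutional network map both into a common matching space.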
Optionally, the image matching module 1704 is specifically configured to:
determine actual opening regions in the intraoperative image, each actual opening region representing one lung segment opening in the intraoperative image;
construct second graph data with the centers of the actual opening regions as vertices and the lines connecting the centers as edges, the second graph data including the number and positions of the vertices in the intraoperative image as well as the relative distances between vertices;
map, using the first graph convolutional network, the second graph data to the second to-be-matched map data of the intraoperative image.
Optionally, the first graph convolutional network is a deformation-resistant convolutional network;
the deformation-resistant convolutional network includes a spatial transformation layer and a convolution processing unit:
the spatial transformation layer is configured to acquire the first graph data and the second graph data, perform spatial transformation on the first graph data and/or the second graph data, and obtain first to-be-convolved graph data corresponding to the first graph data and second to-be-convolved graph data corresponding to the second graph data;
the convolution processing unit is configured to convolve the first to-be-convolved graph data to obtain the first to-be-matched map data of the virtual slice map, and convolve the second to-be-convolved graph data to obtain the second to-be-matched map data of the intraoperative image.
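One plausible behaviour for a spatial transformation layer on 2-D opening centres is to factor out translation and rotation before convolution, so that the same opening pattern seen from a rotated camera produces comparable to-be-convolved data. The sketch below uses a fixed centroid-plus-principal-axis normalisation; the patent's layer is learned, so this is only an assumed analogue.

```python
import numpy as np

def spatial_transform(points):
    """Normalise a set of opening centres: translate to the centroid and
    rotate the principal axis onto the x-axis. A hand-crafted stand-in
    for a learned spatial transformation layer."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)              # remove translation
    _, vecs = np.linalg.eigh(np.cov(pts.T))   # principal directions
    axis = vecs[:, -1]                        # leading eigenvector
    ang = np.arctan2(axis[1], axis[0])
    c, s = np.cos(-ang), np.sin(-ang)
    return pts @ np.array([[c, -s], [s, c]]).T
```

Because the transform is rigid, the relative distances between vertices, which the graph data depends on, are preserved exactly.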
Optionally, the image matching module 1704 is specifically configured to:
acquire historical matching information characterizing the position and slice angle, within the virtual bronchial tree of the target object, of the virtual slice maps matched by historical intraoperative images;
determine a current matching range according to the historical matching information, the current matching range characterizing the range of positions in the virtual bronchial tree of the target object within which the target virtual slice map lies;
determine the target virtual slice map by matching the intraoperative image against the virtual slice maps corresponding to the current matching range.
Optionally, the image matching module 1704 is specifically configured to:
convert the historical matching information and the capture times of the historical intraoperative images into a vector to obtain a current vector, and input the current vector into a pre-trained long short-term memory network, so as to determine the current matching range using the long short-term memory network.
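A sketch of the vectorisation step and of what the matching-range prediction accomplishes. The (position index, slice angle, capture time) encoding and the "last position ± spread" range are assumptions standing in for the pre-trained LSTM, purely for illustration.

```python
import numpy as np

def history_to_vector(history):
    """Flatten historical matches -- assumed here to be (position index,
    slice angle, capture time) triples -- into one current vector of the
    kind that could be fed to the pre-trained LSTM."""
    return np.array([v for entry in history for v in entry], dtype=float)

def naive_matching_range(history, spread=1):
    """Trivial stand-in for the LSTM's output: centre the current
    matching range on the most recently matched position, exploiting the
    fact that the bronchoscope moves continuously along the tree."""
    last_position = history[-1][0]
    return (last_position - spread, last_position + spread)
```

Restricting the search to such a range is what lets the module compare the intraoperative image against a handful of nearby virtual slice maps instead of the whole virtual bronchial tree.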
Optionally, the image matching module 1704 is specifically configured to:
input the second to-be-matched map data corresponding to the intraoperative image into the long short-term memory network to obtain spliced to-be-matched map data output by the long short-term memory network, where the spliced to-be-matched map data refers to the data obtained by splicing the second to-be-matched map data corresponding to the intraoperative image with the second to-be-matched map data corresponding to at least one historical intraoperative image;
determine the target virtual slice map by matching the spliced to-be-matched map data against the first to-be-matched map data of the corresponding virtual slice maps within the current matching range.
Referring to FIG. 18, an electronic device 1800 is provided, including:
a processor 1801; and
a memory 1802 configured to store instructions executable by the processor;
wherein the processor 1801 is configured to perform the methods described above by executing the executable instructions.
The processor 1801 can communicate with the memory 1802 via a bus 1803.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the methods described above.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be accomplished by program-instructed hardware. The aforementioned program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some or all of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (17)

  1. A method for determining the position of a bronchoscope, comprising:
    acquiring a virtual bronchial tree of a target object;
    identifying bifurcation nodes of the virtual bronchial tree of the target object, and obtaining, based on the identified bifurcation nodes, identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object;
    acquiring an intraoperative image of the target object, the intraoperative image being captured while a bronchoscope travels within the target object;
    determining a target virtual slice map matching the intraoperative image by matching the intraoperative image against virtual slice maps of the virtual bronchial tree of the target object;
    determining identification information of the lung segment and bifurcation node in the virtual bronchial tree of the target object that correspond to the target virtual slice map, the determined identification information being used to characterize the current position of the bronchoscope within the target object.
  2. The method for determining the position of a bronchoscope according to claim 1, wherein identifying bifurcation nodes of the virtual bronchial tree of the target object comprises:
    inputting the acquired virtual bronchial tree of the target object into a pre-trained node recognition neural network, and obtaining the position of each bifurcation node contained in the virtual bronchial tree of the target object as output by the node recognition neural network.
  3. The method for determining the position of a bronchoscope according to claim 2, wherein the node recognition neural network is trained by:
    extracting sample features of each training sample in a training sample set, the virtual bronchial tree contained in each training sample being annotated with labels marking the actual positions of the bifurcation nodes in that virtual bronchial tree;
    inputting the extracted sample features into the node recognition neural network to obtain predicted positions of the bifurcation nodes in the virtual bronchial tree contained in the training sample, as output by the node recognition neural network;
    adjusting the node recognition neural network according to difference information between the actual positions and the predicted positions, to obtain a trained node recognition neural network.
  4. The method for determining the position of a bronchoscope according to claim 1, wherein obtaining, based on the identified bifurcation nodes, identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object comprises:
    determining the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against a knowledge graph of the bronchial tree.
  5. The method for determining the position of a bronchoscope according to claim 4, wherein determining the identification information by matching the bifurcation nodes identified in the virtual bronchial tree of the target object against the knowledge graph of the bronchial tree comprises:
    constructing third graph data with the identified bifurcation nodes as vertices and the lung segments connecting the identified bifurcation nodes in the virtual bronchial tree of the target object as edges;
    inputting the third graph data into a pre-trained second graph convolutional network, so as to determine, using the second graph convolutional network, a correspondence between bifurcation nodes and lung segments in the virtual bronchial tree of the target object;
    determining, according to the correspondence and the knowledge graph, the identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object.
  6. The method for determining the position of a bronchoscope according to claim 5, wherein the second graph convolutional network is configured to compute, for any lung segment in the third graph data, the probability that the lung segment corresponds to each bifurcation node, the bifurcation node to which the lung segment belongs being the one with the highest probability.
  7. The method for determining the position of a bronchoscope according to claim 1, wherein determining a target virtual slice map matching the intraoperative image by matching the intraoperative image against virtual slice maps of the virtual bronchial tree of the target object comprises:
    acquiring first to-be-matched map data corresponding to any virtual slice map, the first to-be-matched map data including the number and distribution of lung segment openings in that virtual slice map;
    acquiring second to-be-matched map data corresponding to the intraoperative image, the second to-be-matched map data including the number and distribution of lung segment openings in the intraoperative image;
    comparing the first to-be-matched map data with the extracted second to-be-matched map data, and determining the target virtual slice map according to the comparison result.
  8. The method for determining the position of a bronchoscope according to claim 7, wherein acquiring first to-be-matched map data corresponding to any virtual slice map comprises:
    determining virtual opening regions in the virtual slice map, each virtual opening region corresponding to one lung segment opening in the virtual slice map;
    constructing first graph data with the centers of the virtual opening regions as vertices and the lines connecting the centers as edges, the first graph data including the number and positions of the vertices in the virtual slice map as well as the relative distances between vertices;
    mapping, using a first graph convolutional network, the first graph data to the first to-be-matched map data of the virtual slice map.
  9. The method for determining the position of a bronchoscope according to claim 8, wherein acquiring second to-be-matched map data corresponding to the intraoperative image comprises:
    determining actual opening regions in the intraoperative image, each actual opening region representing one lung segment opening in the intraoperative image;
    constructing second graph data with the centers of the actual opening regions as vertices and the lines connecting the centers as edges, the second graph data including the number and positions of the vertices in the intraoperative image as well as the relative distances between vertices;
    mapping, using the first graph convolutional network, the second graph data to the second to-be-matched map data of the intraoperative image.
  10. The method for determining the position of a bronchoscope according to claim 9, wherein the first graph convolutional network is a deformation-resistant convolutional network;
    the deformation-resistant convolutional network includes a spatial transformation layer and a convolution processing unit:
    the spatial transformation layer being configured to acquire the first graph data and the second graph data, perform spatial transformation on the first graph data and/or the second graph data, and obtain first to-be-convolved graph data corresponding to the first graph data and second to-be-convolved graph data corresponding to the second graph data;
    the convolution processing unit being configured to convolve the first to-be-convolved graph data to obtain the first to-be-matched map data of the virtual slice map, and convolve the second to-be-convolved graph data to obtain the second to-be-matched map data of the intraoperative image.
  11. The method for determining the position of a bronchoscope according to claim 1, wherein determining a target virtual slice map matching the intraoperative image by matching the intraoperative image against virtual slice maps of the virtual bronchial tree of the target object comprises:
    acquiring historical matching information characterizing the position and slice angle, within the virtual bronchial tree of the target object, of the virtual slice maps matched by historical intraoperative images;
    determining a current matching range according to the historical matching information, the current matching range characterizing the range of positions in the virtual bronchial tree of the target object within which the target virtual slice map lies;
    determining the target virtual slice map by matching the intraoperative image against the virtual slice maps corresponding to the current matching range.
  12. The method for determining the position of a bronchoscope according to claim 11, wherein determining a current matching range according to the historical matching information comprises:
    converting the historical matching information and the capture times of the historical intraoperative images into a vector to obtain a current vector, and inputting the current vector into a pre-trained long short-term memory network, so as to determine the current matching range using the long short-term memory network.
  13. The method for determining the position of a bronchoscope according to claim 11, wherein determining the target virtual slice map by matching the intraoperative image against the corresponding virtual slice maps within the current matching range comprises:
    inputting the second to-be-matched map data corresponding to the intraoperative image into a long short-term memory network to obtain spliced to-be-matched map data output by the long short-term memory network, the spliced to-be-matched map data being the data obtained by splicing the second to-be-matched map data corresponding to the intraoperative image with the second to-be-matched map data corresponding to at least one historical intraoperative image;
    determining the target virtual slice map by matching the spliced to-be-matched map data against the first to-be-matched map data of the corresponding virtual slice maps within the current matching range.
  14. An apparatus for determining the position of a bronchoscope, comprising:
    a bronchial tree acquisition module, configured to acquire a virtual bronchial tree of a target object;
    an identification module, configured to identify bifurcation nodes of the virtual bronchial tree of the target object and, based on the identified bifurcation nodes, obtain identification information of each lung segment and bifurcation node in the virtual bronchial tree of the target object;
    an intraoperative image acquisition module, configured to acquire an intraoperative image of the target object, the intraoperative image being captured while a bronchoscope travels within the human body;
    an image matching module, configured to determine a target virtual slice map matching the intraoperative image by matching the intraoperative image against virtual slice maps of the virtual bronchial tree of the target object;
    an identification matching module, configured to determine identification information of the lung segment and bifurcation node in the virtual bronchial tree of the target object that correspond to the target virtual slice map, the determined identification information being used to characterize the current position of the bronchoscope within the target object.
  15. An electronic device, comprising a processor and a memory,
    the memory being configured to store code;
    the processor being configured to execute the code in the memory to implement the method according to any one of claims 1 to 13.
  16. A storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method according to any one of claims 1 to 13.
  17. A bronchoscope navigation system, comprising a bronchoscope and a data processing unit, the data processing unit being configured to implement the method according to any one of claims 1 to 13.
PCT/CN2022/086429 2021-12-03 2022-04-12 Bronchoscope position determination method and apparatus, system, device, and medium WO2023097944A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111460651.5A CN113855242B (en) 2021-12-03 2021-12-03 Bronchoscope position determination method, device, system, equipment and medium
CN202111460651.5 2021-12-03

Publications (1)

Publication Number Publication Date
WO2023097944A1 true WO2023097944A1 (en) 2023-06-08

Family

ID=78985612

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/086429 WO2023097944A1 (en) 2021-12-03 2022-04-12 Bronchoscope position determination method and apparatus, system, device, and medium

Country Status (2)

Country Link
CN (1) CN113855242B (en)
WO (1) WO2023097944A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113855242B (en) * 2021-12-03 2022-04-19 杭州堃博生物科技有限公司 Bronchoscope position determination method, device, system, equipment and medium
CN114041741B (en) * 2022-01-13 2022-04-22 杭州堃博生物科技有限公司 Data processing unit, processing device, surgical system, surgical instrument, and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070167714A1 (en) * 2005-12-07 2007-07-19 Siemens Corporate Research, Inc. System and Method For Bronchoscopic Navigational Assistance
US20120203065A1 (en) * 2011-02-04 2012-08-09 The Penn State Research Foundation Global and semi-global registration for image-based bronchoscopy guidance
CN102883651A (en) * 2010-01-28 2013-01-16 宾夕法尼亚州研究基金会 Image-based global registration system and method applicable to bronchoscopy guidance
CN105787439A (en) * 2016-02-04 2016-07-20 广州新节奏智能科技有限公司 Depth image human body joint positioning method based on convolution nerve network
US20180271358A1 (en) * 2017-05-23 2018-09-27 Parseh Intelligent Surgical System Navigating an imaging instrument in a branched structure
CN112741692A (en) * 2020-12-18 2021-05-04 上海卓昕医疗科技有限公司 Rapid navigation method and system for realizing device navigation to target tissue position
CN113112609A (en) * 2021-03-15 2021-07-13 同济大学 Navigation method and system for lung biopsy bronchoscope
CN113855242A (en) * 2021-12-03 2021-12-31 杭州堃博生物科技有限公司 Bronchoscope position determination method, device, system, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6199267B2 (en) * 2014-09-29 2017-09-20 富士フイルム株式会社 Endoscopic image display device, operating method thereof, and program
US20210052240A1 (en) * 2019-08-19 2021-02-25 Covidien Lp Systems and methods of fluoro-ct imaging for initial registration

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070167714A1 (en) * 2005-12-07 2007-07-19 Siemens Corporate Research, Inc. System and Method For Bronchoscopic Navigational Assistance
CN102883651A (en) * 2010-01-28 2013-01-16 宾夕法尼亚州研究基金会 Image-based global registration system and method applicable to bronchoscopy guidance
US20120203065A1 (en) * 2011-02-04 2012-08-09 The Penn State Research Foundation Global and semi-global registration for image-based bronchoscopy guidance
CN105787439A (en) * 2016-02-04 2016-07-20 广州新节奏智能科技有限公司 Depth image human body joint positioning method based on convolutional neural network
US20180271358A1 (en) * 2017-05-23 2018-09-27 Parseh Intelligent Surgical System Navigating an imaging instrument in a branched structure
CN112741692A (en) * 2020-12-18 2021-05-04 上海卓昕医疗科技有限公司 Rapid navigation method and system for realizing device navigation to target tissue position
CN113112609A (en) * 2021-03-15 2021-07-13 同济大学 Navigation method and system for lung biopsy bronchoscope
CN113855242A (en) * 2021-12-03 2021-12-31 杭州堃博生物科技有限公司 Bronchoscope position determination method, device, system, equipment and medium

Also Published As

Publication number Publication date
CN113855242B (en) 2022-04-19
CN113855242A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
WO2023097944A1 (en) Bronchoscope position determination method and apparatus, system, device, and medium
US11450063B2 (en) Method and apparatus for training object detection model
US11586851B2 (en) Image classification using a mask image and neural networks
US10885352B2 (en) Method, apparatus, and device for determining lane line on road
Naseer et al. Deep regression for monocular camera-based 6-DoF global localization in outdoor environments
JP6596164B2 (en) Unsupervised matching in fine-grained datasets for single view object reconstruction
CN108256479B (en) Face tracking method and device
CN112990211B (en) Training method, image processing method and device for neural network
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
CN113159283B (en) Model training method based on federal transfer learning and computing node
CN111985376A (en) Remote sensing image ship contour extraction method based on deep learning
KR20180055070A (en) Method and device to perform to train and recognize material
EP3905194A1 (en) Pose estimation method and apparatus
CN110838122B (en) Point cloud segmentation method and device and computer storage medium
Armagan et al. Learning to align semantic segmentation and 2.5D maps for geolocalization
CN112053441B (en) Full-automatic layout recovery method for indoor fisheye image
CN110837861B (en) Image matching method, device, equipment and storage medium
JP2019008571A (en) Object recognition device, object recognition method, program, and trained model
WO2023115915A1 (en) Gan-based remote sensing image cloud removal method and device, and storage medium
CN113781519A (en) Target tracking method and target tracking device
CN113379748B (en) Point cloud panorama segmentation method and device
Wen et al. Cooperative indoor 3D mapping and modeling using LiDAR data
CN112633222B (en) Gait recognition method, device, equipment and medium based on countermeasure network
CN111563916B (en) Long-term unmanned aerial vehicle tracking and positioning method, system and device based on stereoscopic vision
US20220180548A1 (en) Method and apparatus with object pose estimation

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22899767

Country of ref document: EP

Kind code of ref document: A1