CN117281616A - Operation control method and system based on mixed reality - Google Patents
Operation control method and system based on mixed reality
- Publication number
- CN117281616A (application CN202311487810.XA)
- Authority
- CN
- China
- Prior art keywords
- displacement
- frame
- range
- pixel
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/108—Computer aided selection or customisation of medical implants or cutting guides
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Health & Medical Sciences (AREA)
- Surgery (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Robotics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to the field of surgical control, and in particular to a mixed reality-based surgical control method and system, wherein the method comprises the following steps: acquiring head information, marking the head position, performing three-dimensional modeling on the head information, generating a video stream, and preprocessing it; obtaining a plurality of gray images from the preprocessed video stream; calculating a same-range index value for each pixel of two adjacent frames of gray images; determining the displacement variation range of each pixel of the two adjacent frames according to the same-range index values; calculating the moving speed of the pixels across the two frames according to the displacement variation range; and setting a frame-supplementing flow according to the moving speed. By supplementing frames in the video stream, the invention shortens the time needed to generate the three-dimensional model, reduces the wait for a specific frame count to stabilize, and increases the video frame count, so that doctors can observe the operation information clearly.
Description
Technical Field
The present invention relates generally to the field of surgical control. More particularly, the invention relates to a mixed reality-based surgical control method and system.
Background
Mixed reality surgery combines virtual reality technology with real-time surgical operation, superimposing virtual information on the actual surgical scene through a head-mounted display or other device to improve surgical efficiency and accuracy. Mixed reality surgery helps doctors better understand the patient's condition and anatomy and improves the accuracy and safety of the operation. For example, in neurosurgery, a physician may use mixed reality techniques to visualize the patient's brain structure and better locate tumors or other abnormal regions. Mixed reality surgery can also improve collaboration and training among doctors: physicians can share virtual information to better understand surgical procedures and steps, improving the efficiency and accuracy of the entire team.
The prior art patent CN111419398B discloses a virtual reality-based auxiliary control system for thoracic surgery: a three-dimensional reconstruction module reconstructs a three-dimensional model of the human organs in the operating area from medical image data, and a projection coordinate generation module obtains the three-dimensional coordinates of the surgical guidance material within that model, so that the position and size parameters of the organs yield projection coordinates for the projection module, which helps improve the efficiency and success rate of the operation.
At present, when a three-dimensional model is used in mixed reality surgery, low frame rates or misplaced pixels occur, so that when the scalpel is operated, the surgeon must wait for a specific number of frames to stabilize; the operation may therefore miss the optimal moment, and the operation position may even be misjudged. A mixed reality-based surgical control method and system are therefore needed.
Disclosure of Invention
In order to solve one or more of the above technical problems, the present invention divides displacement variation ranges according to the characteristics of the three-dimensional model and, when the sampled part of the patient in the model is moved, supplements frames of the video stream synchronously according to the displacement variation ranges, thereby improving the smoothness and positional accuracy of the three-dimensional model during movement.
In a first aspect, head information is acquired, a plurality of mark points are generated by marking the head positions, three-dimensional modeling is performed on the head information, a video stream is generated, and preprocessing is performed to obtain a plurality of gray images;
constructing target displacement vectors of pixel points in the gray level images of two adjacent frames; calculating the same-range index value of each pixel point of the gray level images of two adjacent frames; determining the displacement variation range of each pixel point of two adjacent frames of gray images according to the index value in the same range; calculating the moving speed of the pixel points of the front and rear two frames of gray images according to the displacement variation range; and setting a frame supplementing flow according to the moving speed.
In one embodiment, a target displacement vector of a pixel point in the gray-scale image of two adjacent frames is constructed, wherein the target displacement vector comprises: a center displacement vector of the marked point and a displacement vector of the non-marked point;
by adopting the technical scheme, the time for generating the three-dimensional model is reduced by supplementing frames in any two adjacent front and rear frames of gray images in the video stream.
In one embodiment, a central displacement vector of the marking point in two adjacent frames of gray images is constructed, and the central displacement vector satisfies the following relation:
$\vec{V}_i = P_i' - P_i$

wherein $\vec{V}_i$ denotes the center displacement vector of marker pixel $i$, $P_i$ denotes the position of the marker point in the previous frame image, and $P_i'$ denotes the position of the marker point in the following frame image.
Displacement vectors of the non-marker points in the two adjacent frames of gray images are determined: the position of each non-marker pixel relative to the marker point is consistent between the previous frame image and the following frame image, so the position of each non-marker pixel in the following frame image can be determined, yielding the displacement vector of the non-marker point.
In one embodiment, calculating the same range index value of each pixel point of the gray scale images of two adjacent frames includes:
calculating the similarity between the displacement vector of each non-marker pixel of the gray image and the center vector of the marker point, the similarity serving as the same-range index value;
the same-range index value satisfies the following relation:
$D = \dfrac{\vec{V}_i \cdot \vec{V}_l}{\lVert \vec{V}_i \rVert\,\lVert \vec{V}_l \rVert}$

wherein $D$ denotes the same-range index, $\vec{V}_i$ denotes the center displacement vector of pixel $i$, and $\vec{V}_l$ denotes the target displacement vector of any pixel $l$ within the neighborhood of pixel $i$.
By adopting this technical scheme, the similarity between the displacement vector at each pixel position in the neighborhood of the marker point and the center vector is calculated; the higher the similarity, the more the marker point and the neighboring pixel share the same displacement change, and this similarity is used as the same-range index.
In one embodiment, calculating the moving speed of the pixels of the two frames of gray images comprises: the moving speed is the ratio of the center displacement vector of the marker point to the time difference between the two frames of gray images.
In one embodiment, according to the moving speed, a loss function is used to calculate the speed ratios between the displacement variation ranges of two adjacent frames of gray images, and the loss function satisfies the following relation:
$L = \sum\limits_{a \neq b}^{n} \left| \dfrac{v_a}{v_b} - \dfrac{v_a'}{v_b'} \right|$

wherein $L$ denotes the value of the loss function, $v_a$ denotes the speed of displacement variation range $a$ between the previous and following frames, $v_b$ denotes the speed of displacement variation range $b$ between the previous and following frames, $v_a'$ denotes the speed of displacement variation range $a$ between the previous frame and the supplemental frame, $v_b'$ denotes the speed of displacement variation range $b$ between the previous frame and the supplemental frame, and $n$ denotes the number of displacement variation ranges of newly added pixels in the supplemental frame.
In one embodiment, processing the displacement variation range further comprises:
and calculating the average value vector of each pixel point in the displacement variation range, wherein the average value vector is the linear characteristic of each pixel point in the displacement variation range.
And determining the displacement of the pixel point of the supplementary frame in the moving direction according to the linear characteristic, wherein the displacement is obtained by multiplying the time difference between the gray level image of the previous frame and the supplementary frame by the moving speed of the pixel point of the gray level image.
By adopting this technical scheme, the frames of each pixel within a range are supplemented according to the displacement magnitude and moving direction of the pixels sharing one displacement variation range across the two adjacent frames, and the supplemented pixels lie on the displacement vectors between the previous-frame pixels and the following-frame pixels, which facilitates determining the displacement of the supplemental-frame pixels in the moving direction.
In one embodiment, setting the frame-supplementing flow according to the moving speed comprises:
generating a blank image and determining the insertion position of the blank image;
determining the marker point and the displacement variation range from the previous frame image, and calculating, for the displacement variation range of the marker point, the moving speed between the previous and following image frames;
determining the displacement distance of the displacement variation range, projecting the displacement variation range of the previous frame image onto the blank image, and moving it to its new position according to the displacement distance; and
traversing the position of each displacement variation range on the blank image.
In a second aspect, a mixed reality-based surgical control system comprises: a processor and a memory storing computer program instructions which, when executed by the processor, implement any one of the above mixed reality-based surgical control methods.
The application has the following effects:
1. according to the method and the device, the frame supplementing processing is carried out on the video stream, so that the time for generating the three-dimensional model is reduced, the time for waiting for the stabilization of the specific frame number is reduced, the reduction of the frame number of the video stream when the three-dimensional model is moved is avoided, the video stream after frame supplementing is smoother and clearer, the treatment risk is reduced, and the doctor can clearly observe the operation information.
2. When the displacement variation ranges are moved during generation of the supplemental frame, the present application obtains the speed ratios of the displacement variation ranges between the previous frame and the supplemental frame and keeps them consistent with the speed ratios of the displacement variation ranges between the previous and following frames, which keeps the displacement variation ranges in the supplemental frame image displaced synchronously and thus makes the video stream smoother.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
fig. 1 is a method flow diagram of steps S1-S6 in a mixed reality-based surgical control method according to an embodiment of the present application.
Fig. 2 is a flowchart of a method of steps S30-S31 in a mixed reality-based surgical control method according to an embodiment of the present application.
Fig. 3 is a flowchart of a method of steps S50-S53 in a mixed reality-based surgical control method according to an embodiment of the present application.
Fig. 4 is a flowchart of a method of steps S60-S63 in a mixed reality-based surgical control method according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of a complementary frame in a mixed reality-based surgical control method according to an embodiment of the present application.
Fig. 6 is a block diagram of a mixed reality-based surgical control system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a mixed reality-based surgical control method, applicable to surgical control of any part of the body, includes steps S1 to S6, as follows:
S1: acquiring head information, marking the head position to generate a plurality of marker points, performing three-dimensional modeling on the head information, generating a video stream, and preprocessing it to obtain a plurality of gray images.
For example, when the three-dimensional model is rotated, the pixels on the skull are affected by the height of their position, so the movement amplitude at different positions in the video stream is inconsistent: the higher a position on the skull, the longer the displacement distance of its pixels across two adjacent frames, and conversely, the shorter the displacement distance.
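By way of an illustrative sketch only (the disclosure contains no code and names no library), the decoding and preprocessing of step S1 could be implemented as follows, assuming OpenCV is available; the function name is hypothetical:

```python
import cv2  # assumed dependency; the disclosure does not specify a library

def video_to_gray_frames(path: str) -> list:
    """Decode a video stream and preprocess each frame into a gray image (step S1)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the stream
            break
        # Preprocessing: convert the color frame to a single-channel gray image.
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return frames
```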
S2: and constructing target displacement vectors of pixel points in two adjacent frames of gray images.
The target displacement vector includes: the center displacement vector of the marker point and the displacement vectors of the non-marker points.
S3: the calculation of the same-range index value of each pixel point of two adjacent frames of gray images, referring to fig. 2, includes steps S30-S31:
s30: constructing the central displacement vector of the marking point in the adjacent front and back frames of gray images, wherein the central displacement vector meets the following relation:
$\vec{V}_i = P_i' - P_i$

wherein $\vec{V}_i$ denotes the center displacement vector of marker pixel $i$, $P_i$ denotes the position of the marker point in the previous frame image, and $P_i'$ denotes the position of the marker point in the following frame image.
S31: and determining displacement vectors of non-marking points in two adjacent gray level images, wherein the positions of the pixels of the non-marking points in the front frame image are consistent with those of the pixels of the non-marking points in the rear frame image, and determining the positions of the pixels of the non-marking points in the rear frame image to obtain the displacement vectors of the non-marking points.
By way of example, taking any two adjacent frames of gray images in the video, the marker points are tracked across the two frames, the gray value at each marker-point position and the gray value at each non-marker position are determined, the position changes of all pixels are kept consistent, and a stable displacement variation range is divided for the images.
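A minimal sketch of steps S30-S31 under stated assumptions: the marker point is tracked directly, and each non-marker pixel is located in the following frame by matching gray values around the position predicted from the marker's motion. The function names and the search radius are illustrative, not from the disclosure:

```python
import numpy as np

def center_displacement(p_prev, p_next):
    """Step S30: center displacement vector of a marker point, i.e. its position
    in the following frame minus its position in the previous frame."""
    return np.asarray(p_next, dtype=float) - np.asarray(p_prev, dtype=float)

def non_marker_displacement(gray_prev, gray_next, q_prev, v_center, radius=2):
    """Step S31 (one possible reading): predict the non-marker pixel's position in
    the following frame from the marker's motion, then refine it by matching gray
    values within a small neighborhood of the prediction."""
    qy, qx = q_prev
    py, px = int(round(qy + v_center[0])), int(round(qx + v_center[1]))
    target = float(gray_prev[qy, qx])  # gray value the pixel should keep
    h, w = gray_next.shape
    best, best_err = (py, px), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = py + dy, px + dx
            if 0 <= y < h and 0 <= x < w:
                err = abs(float(gray_next[y, x]) - target)
                if err < best_err:
                    best, best_err = (y, x), err
    return np.array([best[0] - qy, best[1] - qx], dtype=float)
```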
S4: and determining the displacement variation range of each pixel point of the two adjacent frames of gray images according to the index value in the same range.
Calculating the similarity between the displacement vector of each non-marker pixel of the gray image and the center vector of the marker point, the similarity serving as the same-range index value;
the same range index value satisfies the following relationship:
$D = \dfrac{\vec{V}_i \cdot \vec{V}_l}{\lVert \vec{V}_i \rVert\,\lVert \vec{V}_l \rVert}$

wherein $D$ denotes the same-range index, $\vec{V}_i$ denotes the center displacement vector of pixel $i$, and $\vec{V}_l$ denotes the target displacement vector of any pixel $l$ within the neighborhood of pixel $i$.
Illustratively, the similarity between the displacement vector of every pixel in the neighborhood around the marker-point position and the center vector is calculated; the higher the similarity, the more the marker-point position shares the same displacement change with the neighboring pixels, and within the same displacement variation range this similarity is taken as the same-range index.
A displacement variation range is a set of pixels of the two adjacent frames of gray images that share the same displacement change. When the same-range index $D$ is greater than 0.9, the pixels of the two adjacent frames belong to the same displacement variation range, and the displacement vector $\vec{V}_l$ of pixel $l$ is taken as the new center vector and the calculation is repeated; otherwise, the pixels do not belong to the same displacement variation range.
When the same-range index $D$ is less than 0.9, the points gathered so far compose one displacement variation range.
The positions of the non-marker pixels in the following frame image are determined according to the relative positions of the non-marker pixels and the marker point in the previous frame image.
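Sketched below is one way step S4 could be realized, assuming the same-range index is the cosine similarity reconstructed above; a neighboring pixel whose index against the current center vector exceeds 0.9 is absorbed into the range and its vector is reused as the next center vector. Here `vectors` (pixel coordinate to target displacement vector) and `neighbors` (adjacency function) are placeholder representations:

```python
import numpy as np

def same_range_index(v_center, v_pixel):
    """Same-range index between a center displacement vector and a neighboring
    pixel's displacement vector (assumed here to be cosine similarity)."""
    denom = np.linalg.norm(v_center) * np.linalg.norm(v_pixel)
    return float(np.dot(v_center, v_pixel) / denom) if denom > 0 else 0.0

def grow_displacement_range(seed, vectors, neighbors, thresh=0.9):
    """Step S4: gather pixels whose displacement change matches the marker's into
    one displacement variation range; each accepted pixel becomes a new center."""
    in_range, frontier = {seed}, [seed]
    while frontier:
        center = frontier.pop()
        for p in neighbors(center):
            if p not in in_range and same_range_index(vectors[center], vectors[p]) > thresh:
                in_range.add(p)       # same displacement variation range
                frontier.append(p)    # its vector is the next center vector
    return in_range
```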
S5: according to the displacement variation range, the moving speed of the pixel point of the two frames of gray images is calculated, referring to fig. 3, including steps S50-S53:
s50: the moving speed is the ratio of the center displacement vector of the mark point to the time difference of the pixel points of the two-frame gray scale image.
S51: according to the moving speed, calculating the speed ratio between each displacement change range of two adjacent frames of gray images by using a loss function, wherein the loss function satisfies the following relation:
$L = \sum\limits_{a \neq b}^{n} \left| \dfrac{v_a}{v_b} - \dfrac{v_a'}{v_b'} \right|$

wherein $L$ denotes the value of the loss function, $v_a$ denotes the speed of displacement variation range $a$ between the previous and following frames, $v_b$ denotes the speed of displacement variation range $b$ between the previous and following frames, $v_a'$ denotes the speed of displacement variation range $a$ between the previous frame and the supplemental frame, $v_b'$ denotes the speed of displacement variation range $b$ between the previous frame and the supplemental frame, and $n$ denotes the number of displacement variation ranges of newly added pixels in the supplemental frame.
S52: and calculating the mean value vector of each pixel point in the displacement variation range, wherein the mean value vector is the linear characteristic of each pixel point in the displacement variation range.
S53: and determining the displacement of the pixel point of the supplementary frame in the moving direction according to the linear characteristic, wherein the displacement is obtained by multiplying the time difference between the gray image of the previous frame and the supplementary frame by the moving speed of the pixel point of the gray image.
Illustratively, the supplemental frame lies between the previous and following frames. The positions of all pixels in the supplemental frame are determined by moving the original pixel positions to form the supplemental frame, for which the moving direction and the moved-to position of each original pixel must be obtained.
The displacement variation range gives the displacement magnitude and moving direction of the pixels sharing one range across the two adjacent frames, according to which the frames of each pixel within the range are supplemented, so that the supplemented pixels lie on the displacement vectors between the previous-frame pixels and the following-frame pixels.
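The speed and supplemental displacement of steps S50-S53 can be sketched as follows; the pairwise-ratio form of `speed_ratio_loss` is an assumption consistent with the relation reconstructed for step S51, not a formula confirmed by the disclosure:

```python
import numpy as np

def moving_speed(v_center, dt):
    """Step S50: moving speed as the ratio of the marker's center displacement
    vector to the time difference between the two frames of gray images."""
    return np.asarray(v_center, dtype=float) / dt

def supplemental_displacement(range_vectors, dt_frames, dt_supplement):
    """Steps S52-S53: the mean vector of the pixels in a displacement variation
    range is its linear feature; each pixel of the range is displaced by
    speed x time toward the supplemental frame along that direction."""
    linear_feature = np.mean(np.asarray(range_vectors, dtype=float), axis=0)
    return (linear_feature / dt_frames) * dt_supplement

def speed_ratio_loss(v_orig, v_supp):
    """Step S51 (assumed form): penalize any mismatch between the speed ratios of
    the displacement variation ranges in the original frame pair and in the
    (previous frame, supplemental frame) pair; zero means synchronous motion."""
    n = len(v_orig)
    return sum(abs(v_orig[a] / v_orig[b] - v_supp[a] / v_supp[b])
               for a in range(n) for b in range(n) if a != b)
```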
S6: setting the frame supplementing flow according to the moving speed, referring to fig. 4, includes steps S60-S63:
s60: generating a blank image and determining the insertion position of the blank image;
s61: determining a mark point and a displacement change range in the previous frame image, and calculating the displacement change range of the mark point to obtain the moving speed in the previous and the next image frames;
s62: determining a displacement distance of the displacement change range, projecting the displacement change range of the previous frame image onto the position of the blank image, and moving to a new position according to the displacement distance;
s63: the positions of the respective displacement variation ranges on the blank image are traversed.
For example, referring to fig. 5, the above procedure determines the positions in the supplemental frame as if the pixels moved uniformly, but in the actual three-dimensional model the user will not always rotate the head model at a uniform speed. Therefore, when the displacement variation ranges are moved while generating the supplemental frame, the speed ratios of the displacement variation ranges between the previous frame and the supplemental frame are obtained and kept consistent with the speed ratios of the displacement variation ranges between the previous and following frames, that is, their difference is 0; the displacement variation ranges in the supplemental frame image are then displaced synchronously.
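A compact sketch of the frame-supplementing flow of steps S60-S63, assuming each displacement variation range is given as a list of pixel coordinates with one displacement per range (the representation is illustrative):

```python
import numpy as np

def build_supplemental_frame(prev_gray, ranges, displacements):
    """S60: generate a blank image at the insertion position; S61-S62: take each
    displacement variation range of the previous frame with its displacement
    distance and project it onto its new position on the blank image;
    S63: traverse every displacement variation range."""
    supp = np.zeros_like(prev_gray)
    h, w = prev_gray.shape
    for pixels, d in zip(ranges, displacements):
        dy, dx = int(round(d[0])), int(round(d[1]))
        for y, x in pixels:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                supp[ny, nx] = prev_gray[y, x]
    return supp
```

The zip over per-range displacements is what keeps the ranges moving synchronously once the speed ratios of the previous/supplemental pair have been matched to those of the previous/following pair.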
The invention also provides a mixed reality-based surgical control system. As shown in fig. 6, the system comprises a processor and a memory storing computer program instructions which, when executed by the processor, implement the mixed reality-based surgical control method according to the first aspect of the invention.
The system further comprises other components known to those skilled in the art, such as communication buses and communication interfaces, the arrangement and function of which are known in the art and therefore will not be described in detail herein.
In the context of this patent, the foregoing memory may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, the computer-readable storage medium may be any suitable magnetic or magneto-optical storage medium, such as resistive random-access memory (RRAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), enhanced dynamic random-access memory (EDRAM), high-bandwidth memory (HBM), or hybrid memory cube (HMC), or any other medium that may be used to store the desired information and that may be accessed by an application, a module, or both. Any such computer storage media may be part of, accessible by, or connectable to the device. Any of the applications or modules described herein may be implemented using computer-readable/executable instructions that may be stored or otherwise maintained by such computer-readable media.
In the description of the present specification, the meaning of "a plurality", "a number" or "a plurality" is at least two, for example, two, three or more, etc., unless explicitly defined otherwise.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.
Claims (9)
1. A mixed reality-based surgical control method, comprising:
acquiring head information, marking the head position to generate a plurality of marking points, performing three-dimensional modeling on the head information, generating a video stream, and preprocessing to obtain a plurality of gray images;
constructing target displacement vectors of pixel points in the gray level images of two adjacent frames;
calculating the same-range index value of each pixel point of the gray level images of two adjacent frames;
determining the displacement variation range of each pixel point of two adjacent frames of gray images according to the index value in the same range;
calculating the moving speed of the pixel points of the front and rear two frames of gray images according to the displacement variation range;
and setting a frame supplementing flow according to the moving speed.
2. The mixed reality-based surgical control method according to claim 1, wherein constructing a target displacement vector of a pixel point in the gray scale image of two adjacent frames, the target displacement vector comprises: a center displacement vector of the mark point and a displacement vector of the non-mark point.
3. The mixed reality-based surgical control method according to claim 1, wherein a center displacement vector of the marker point in two adjacent front and rear frames of gray scale images is constructed, and the center displacement vector satisfies the following relation:
$\vec{V}_i = P_i' - P_i$

wherein $\vec{V}_i$ denotes the center displacement vector of marker pixel $i$, $P_i$ denotes the position of the marker point in the previous frame image, and $P_i'$ denotes the position of the marker point in the following frame image;
and determining displacement vectors of the non-marker points in the two adjacent frames of gray images, wherein the position of each non-marker pixel relative to the marker point is consistent between the previous frame image and the following frame image, and the position of each non-marker pixel in the following frame image is determined to obtain the displacement vector of the non-marker point.
4. The mixed reality-based surgical control method according to claim 1, wherein calculating the same range index value of each pixel point of the gray scale images of two adjacent frames comprises:
calculating the similarity between the displacement vector of each non-marker pixel of the gray image and the center vector of the marker point, wherein the similarity serves as the same-range index value;
the same-range index value satisfies the following relation:
$D = \dfrac{\vec{V}_i \cdot \vec{V}_l}{\lVert \vec{V}_i \rVert\,\lVert \vec{V}_l \rVert}$

wherein $D$ denotes the same-range index, $\vec{V}_i$ denotes the center displacement vector of pixel $i$, and $\vec{V}_l$ denotes the target displacement vector of any pixel $l$ within the neighborhood of pixel $i$.
5. The method according to claim 1, wherein calculating the moving speed of the pixels of the two-frame gray-scale image comprises:
the moving speed is the ratio of the center displacement vector of the mark point to the time difference of the pixel points of the two-frame gray level images.
6. The mixed reality-based surgical control method according to claim 5, wherein a speed ratio between each displacement variation range of two adjacent frames of gray images is calculated using a loss function according to the moving speed, the loss function satisfying the following relation:
$L = \sum\limits_{a \neq b}^{n} \left| \dfrac{v_a}{v_b} - \dfrac{v_a'}{v_b'} \right|$

wherein $L$ denotes the value of the loss function, $v_a$ denotes the speed of displacement variation range $a$ between the previous and following frames, $v_b$ denotes the speed of displacement variation range $b$ between the previous and following frames, $v_a'$ denotes the speed of displacement variation range $a$ between the previous frame and the supplemental frame, $v_b'$ denotes the speed of displacement variation range $b$ between the previous frame and the supplemental frame, and $n$ denotes the number of displacement variation ranges of newly added pixels in the supplemental frame.
7. The mixed reality-based surgical control method of claim 1, further comprising:
calculating the mean value vector of each pixel point in the displacement variation range, wherein the mean value vector is the linear characteristic of each pixel point in the displacement variation range;
and determining the displacement of the pixel point of the supplementary frame in the moving direction according to the linear characteristic, wherein the displacement is obtained by multiplying the time difference between the gray level image of the previous frame and the supplementary frame by the moving speed of the pixel point of the gray level image.
8. The mixed reality-based surgical control method according to claim 1, wherein setting a frame-supplementing flow according to the moving speed comprises:
generating a blank image and determining the insertion position of the blank image;
determining the marker point and the displacement variation range from the previous frame image, and calculating, for the displacement variation range of the marker point, the moving speed between the previous and following image frames;
determining a displacement distance of the displacement change range, projecting the displacement change range of the previous frame image onto the position of the blank image, and moving to a new position according to the displacement distance;
and traversing the positions of the displacement variation ranges on the blank image.
9. A mixed reality-based surgical control system, comprising: a processor and a memory storing computer program instructions that when executed by the processor implement the mixed reality based surgical control method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311487810.XA CN117281616B (en) | 2023-11-09 | 2023-11-09 | Operation control method and system based on mixed reality |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311487810.XA CN117281616B (en) | 2023-11-09 | 2023-11-09 | Operation control method and system based on mixed reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117281616A (en) | 2023-12-26
CN117281616B CN117281616B (en) | 2024-02-06 |
Family
ID=89248218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311487810.XA Active CN117281616B (en) | 2023-11-09 | 2023-11-09 | Operation control method and system based on mixed reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117281616B (en) |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140051921A1 (en) * | 2012-08-15 | 2014-02-20 | Intuitive Surgical Operations, Inc. | Methods and systems for optimizing video streaming |
CN111419398B (en) * | 2020-04-22 | 2022-02-11 | 蚌埠医学院第一附属医院(蚌埠医学院附属肿瘤医院) | Thoracic surgery operation auxiliary control system based on virtual reality |
CN112348851A (en) * | 2020-11-04 | 2021-02-09 | 无锡蓝软智能医疗科技有限公司 | Moving target tracking system and mixed reality operation auxiliary system |
WO2022170562A1 (en) * | 2021-02-10 | 2022-08-18 | 中国科学院深圳先进技术研究院 | Digestive endoscope navigation method and system |
WO2022181919A1 (en) * | 2021-02-26 | 2022-09-01 | (주)휴톰 | Device and method for providing virtual reality-based operation environment |
CN113225589A (en) * | 2021-04-30 | 2021-08-06 | 北京凯视达信息技术有限公司 | Video frame insertion processing method |
CN114668495A (en) * | 2021-10-15 | 2022-06-28 | 汕头市超声仪器研究所股份有限公司 | Biplane free arm three-dimensional reconstruction method and application thereof |
WO2023082306A1 (en) * | 2021-11-12 | 2023-05-19 | 苏州瑞派宁科技有限公司 | Image processing method and apparatus, and electronic device and computer-readable storage medium |
CN114066885A (en) * | 2022-01-11 | 2022-02-18 | 北京威高智慧科技有限公司 | Lower limb skeleton model construction method and device, electronic equipment and storage medium |
WO2023163768A1 (en) * | 2022-02-28 | 2023-08-31 | Microsoft Technology Licensing, Llc. | Advanced temporal low light filtering with global and local motion compensation |
CN114886521A (en) * | 2022-05-16 | 2022-08-12 | 上海睿刀医疗科技有限公司 | Device and method for determining the position of a puncture needle |
CN116248955A (en) * | 2022-12-30 | 2023-06-09 | 联通灵境视讯(江西)科技有限公司 | VR cloud rendering image enhancement method based on AI frame extraction and frame supplement |
CN116407276A (en) * | 2023-01-13 | 2023-07-11 | 杭州华匠医学机器人有限公司 | Target tracking method, endoscope system, and computer-readable medium |
CN116421313A (en) * | 2023-04-14 | 2023-07-14 | 郑州大学第一附属医院 | Augmented reality fusion method in navigation of lung tumor resection operation under thoracoscope |
CN116966381A (en) * | 2023-07-20 | 2023-10-31 | 复旦大学附属眼耳鼻喉科医院 | Tracheal intubation robot autonomous navigation method based on self-supervision monocular depth estimation |
Also Published As
Publication number | Publication date |
---|---|
CN117281616B (en) | 2024-02-06 |
Similar Documents
Publication | Title
---|---
CN111281540B | Real-time visual navigation system based on virtual-actual fusion in minimally invasive surgery of orthopedics department
JP2022527360A | Registration between spatial tracking system and augmented reality display
WO2019040493A1 | Systems and methods for augmented reality guidance
JP4518470B2 | Automatic navigation for virtual endoscopy
JP4732925B2 | Medical image display method and program thereof
EP3986314A1 | Augmented reality system and method for tele-proctoring a surgical procedure
JP2013505778A | Computer-readable medium, system, and method for medical image analysis using motion information
EP4094706A1 | Intraoperative planning adjustment method, apparatus and device for total knee arthroplasty
Ma et al. | 3D visualization and augmented reality for orthopedics
JP2017102927A | Mapping 3d to 2d images
CN112515763A | Target positioning display method, system and device and electronic equipment
CN109767458A | A kind of sequential optimization method for registering of semi-automatic segmentation
CN113648061B | Head-mounted navigation system based on mixed reality and navigation registration method
CN117281616B | Operation control method and system based on mixed reality
Shao et al. | Augmented reality navigation with real-time tracking for facial repair surgery
CN113100941B | Image registration method and system based on SS-OCT (scanning and optical coherence tomography) surgical navigation system
Zhang et al. | 3D augmented reality based orthopaedic interventions
CN108010587A | The preparation method of operation on pelvis vision guided navigation simulation video based on CT images
KR102213412B1 | Method, apparatus and program for generating a pneumoperitoneum model
CN2857869Y | Real-time guiding device in operation based on local anatomic structure
CN110189407 | A kind of human body three-dimensional reconstruction model system based on HOLOLENS
CN114565646A | Image registration method, device, electronic device and readable storage medium
CN113662663A | Coordinate system conversion method, device and system of AR holographic surgery navigation system
CN114092643A | Soft tissue self-adaptive deformation method based on mixed reality and 3DGAN
CN111544113A | Target tracking and distance dynamic graphical display method and device in surgical navigation
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |