CN107680103A - Method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system - Google Patents
Method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system
- Publication number: CN107680103A
- Application number: CN201710818977.8A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G06T19/006 — Mixed reality
- G06T19/003 — Navigation within 3D models or images
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/73 — Deblurring; Sharpening
- G06T7/11 — Region-based segmentation
- G06T7/13 — Edge detection
- G06T2207/20221 — Image fusion; Image merging
- G06T2207/30092 — Stomach; Gastric
- G06T2207/30096 — Tumor; Lesion
- G06T2207/30101 — Blood vessel; Artery; Vein; Vascular
Abstract
The invention discloses a method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system. The method comprises the following steps: in an offline phase, the depth information and color information of the laparoscopic image are computed and the occluding target is accurately extracted; in an online phase, depth values are looked up through feature point tracking and contour tracking of feature points, and the occlusion relationship is determined automatically. The method quickly determines the positional relationship between the virtual and real scenes, achieves real-time and precise fusion of the virtual model with the surgical scene, improves the accuracy of navigation, accurately guides the surgical procedure, raises the surgical success rate, and facilitates the adoption and popularization of laparoscopic gastrointestinal surgery.
Description
Technical field
The invention belongs to the fields of minimally invasive gastrointestinal surgery, medical imaging, mixed reality, and image processing, and relates to a method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system.
Background technology
Gastric cancer is one of the most common tumors in China, and radical resection is its main treatment. Laparoscopic techniques, owing to low trauma and faster postoperative recovery, are used increasingly in gastrointestinal surgery. However, the laparoscope has inherent limitations: a tubular field of view and the absence of tactile and depth perception. The perigastric vasculature runs a complex course and anatomical variation is common; intraoperative vascular injury during lymph node dissection is one of the severe complications of laparoscopic gastric cancer surgery and a major cause of unplanned second operations, which has limited the popularization of the technique.
The development and application of optical tracking technology has made the shift of navigation from "static" to "dynamic" possible, achieving real-time navigation. Optical tracking offers high measurement accuracy and a wide working range, but it can only achieve coarse scene matching; laparoscopic gastrointestinal surgery involves organs that are unfixed and easily deformed, so optical tracking cannot achieve accurate registration. Mixed reality technology can superimpose virtual and real scenes in real time and can improve the accuracy of navigation matching. However, when a virtual object is superimposed on a real scene, a definite spatial relationship exists between them: one occludes the other. Simply overlaying a three-dimensional vascular model on the real laparoscopic scene may cause the virtual vascular model to occlude the real scene, or the real scene to occlude the virtual vessels, leaving the observer disoriented and confused about spatial position.
This project investigates an automatic mutual-occlusion processing method. The positional relationship between the virtual and real scenes is determined from the real-time registration information of image feature points and the real-time deformation information of the vascular model; in the offline phase, the depth information and color information of the laparoscopic image are computed and the occluding target is accurately extracted from these two kinds of information; in the intraoperative online phase, depth values are looked up by tracking feature points and the occlusion relationship is determined automatically. The positional relationship between the virtual and real scenes is determined quickly while meeting accuracy requirements, achieving real-time fusion of the two. The method draws on recent results from digital image processing, pattern recognition, computer vision, and nonlinear optimization; centered on automatic real-time occlusion processing and aimed at surgical navigation applications, it guarantees the effect of mutual-occlusion processing and the real-time performance of the system while maximizing the usability of the mixed reality system.
Content of the invention
The object of the invention is to develop a method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system. On the basis of optical tracking, automatic virtual-real occlusion processing for mixed reality superimposes the computer-generated virtual information onto the real natural scene captured by the image acquisition device, improving the accuracy of navigation, accurately guiding the surgical procedure, raising the surgical success rate, and facilitating the adoption and popularization of laparoscopic gastrointestinal surgery.
The method of the invention for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system comprises the following steps:
A. In the offline process, the left and right images are first captured with the binocular laparoscope, the depth value of each pixel in the scene is computed, and the depth values are then refined so that a relatively coarse occlusion edge is extracted; at the same time, the value of each pixel in the scene in HSV color space is computed, and enhancement such as sharpening is applied to obtain clearer contours; the coarse occlusion edge and the contour information are then fused to obtain a higher-precision occlusion edge;
B. In the online process, feature points are tracked first, the displacement of the target contour is computed from the displacement of the feature points to obtain an approximate contour, and the precise contour of the target object is then sought within a band-shaped region centered on the approximate contour;
C. A virtual-real composite image with the correct occlusion relationship is obtained using a redrawing technique; processing continues with the next frame, taking the target contour obtained in the current frame as the initial contour of the next frame, and the above steps are repeated. The positional relationship between the virtual model and the real scene is determined quickly while meeting accuracy requirements, achieving real-time fusion of the two and providing, in real time, vessel course information that laparoscopic surgery cannot otherwise show.
According to a further feature of the method of the invention, in step A, the data acquisition and extraction of the occluding target comprise the following steps:
(1) Construct the energy function: the Potts energy function is improved by changing its basic optimization unit from the disparity of individual pixels to matched point pairs, which remedies its inability to make effective use of the pixel-matching assumption. The energy function is constructed as follows:
E(f) = E_data(f) + E_occ(f) + E_smooth(f),
where the data term E_data(f) expresses the photometric similarity between matched pixels, the occlusion term E_occ(f) describes a fixed penalty for occluded pixels, and the smoothness term E_smooth(f) makes the disparities of neighboring pixels approach one another;
(2) Minimize the energy function: depth values are obtained by minimizing the energy function; once the depth value of every pixel of the real scene has been obtained, all pixels in regions where real objects occlude the virtual object can be extracted by comparing depth values;
(3) Refine the depth map: the image is segmented with the fast Mean Shift method into regions of similar properties, and the depth value of each pixel within a region is replaced by the average depth of all pixels in that region, ensuring that the depth information of all pixels on the same object is consistent and that the target object is extracted from the background in its entirety;
(4) By comparing the depth values of the virtual object and the real object, the occlusion relationship between them is determined, and the contour of the occluding object is extracted;
(5) Enhancement such as sharpening is applied to the pixel values of the image in HSV color space, making the object contours more distinct.
According to a further feature of the method of the invention, in step (5), a Gaussian kernel model G_C of the color image probability distribution is first constructed, a spatial-distribution Gaussian kernel G_S of the image is then constructed from the depth information, and the color and spatial Gaussian kernels are finally fused as h = Σ G_C G_S to extract a high-precision occlusion edge.
According to a further feature of the method of the invention, in step B, the feature point tracking comprises the following steps: the feature points in the target region of the previous frame are tracked; the contour of the previous frame is then translated according to the displacement of the feature points to obtain the approximate contour of the target object in the current frame; the precise contour of the target is then obtained within a band-shaped region centered on the approximate contour, thereby achieving accurate tracking of the object contour.
According to a further feature of the method of the invention, in step B, the feature point extraction uses a fast corner detection method.
According to a further feature of the method of the invention, in step B, the feature point matching rejects feature points outside the target region using a bidirectional optical flow method, while the target feature point set is updated and corrected using a feature matching algorithm; finally, the positional information between feature points is used to determine the change factor and scale of the target, thereby achieving target localization.
According to a further feature of the method of the invention, in step B, the estimation of the approximate contour comprises the following steps: from the obtained contour of the target object, the feature points in the contour region of the previous frame, and the feature points traced in the current frame, the average displacement of the mutually matched feature points is computed, and the approximate contour of the target object in the current frame is then computed from the translation of the contour. The average displacement D_t of the matched feature points is computed as
D_t = ( Σ_{(i,j)∈M} ( f_{i,j} − f_{i,j−1} ) ) / |M|,
where f_{i,j−1} denotes a feature point in the known contour region of the target object at time t−1, f_{i,j} denotes the corresponding feature point traced at time t, M is the set of matched point pairs, and |M| is the number of feature points traced at time t. By translating the pixels on the known contour at time t−1 by D_t, the approximate contour of the target object in the video frame at time t is obtained.
According to a further feature of the method of the invention, in step B, the extraction of the precise contour comprises the following step: the band-shaped region is segmented using the max-flow/min-cut method to obtain the accurate target contour.
The navigation system of the invention has the following advantages:
(1) It overcomes erroneous judgments of the occlusion relationship and accurately guides the surgical procedure in real time, reducing complications from intraoperative vascular injury, increasing surgical safety, improving procedural efficiency, and shortening operating time.
(2) It promotes postoperative recovery, reduces the postoperative complication rate, and shortens the hospital stay, thereby lowering hospitalization costs and reducing the medical cost of laparoscopic gastric cancer surgery.
(3) The method can shorten the learning curve of beginners, facilitating the adoption and popularization of laparoscopic gastric cancer surgery.
(4) It helps realize precision treatment of gastric cancer and has considerable scientific value and social benefit.
(5) Automatic virtual-real occlusion processing improves the effectiveness of mixed reality for virtual surgery, enabling surgical navigation to provide more functions and making the prediction of surgical outcomes more reliable.
Brief description of the drawings
Fig. 1 shows the process of the automatic virtual-real occlusion handling in mixed reality according to the invention.
Embodiment
The process of automatic virtual-real occlusion handling in mixed reality according to the invention is shown in Fig. 1 and described in detail as follows:
(1) Computation of image depth values in the offline phase
With depth computation, the three-dimensional models of the objects in the scene need not be known in advance; mutual occlusion processing can be completed simply by looking up and comparing depth values.
The depth values are computed as follows:
Constructing the energy function: the Potts energy function is improved by changing its basic optimization unit from the disparity of individual pixels to matched point pairs, which remedies its inability to make effective use of the pixel-matching assumption. The energy function is constructed as follows:
E(f) = E_data(f) + E_occ(f) + E_smooth(f),
where the data term E_data(f) expresses the photometric similarity between matched pixels, the occlusion term E_occ(f) describes a fixed penalty for occluded pixels, and the smoothness term E_smooth(f) makes the disparities of neighboring pixels approach one another.
Minimizing the energy function: depth values are obtained by minimizing the energy function; once the depth value of every pixel of the real scene has been obtained, all pixels in regions where real objects occlude the virtual object can be extracted by comparing depth values.
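For concreteness, the following NumPy sketch evaluates such an energy for a candidate integer disparity map over a rectified stereo pair. It only illustrates how the three terms decompose: the penalty weights `lambda_occ` and `lambda_smooth` are assumed values, and the actual method minimizes the improved Potts energy over matched point pairs rather than scoring single-pixel labelings as done here.

```python
import numpy as np

def energy(left, right, disp, lambda_occ=10.0, lambda_smooth=2.0):
    """Evaluate E(f) = E_data + E_occ + E_smooth for an integer disparity map."""
    h, w = disp.shape
    rows, cols = np.indices((h, w))
    src = cols - disp                       # column each pixel matches in the right image
    occluded = (src < 0) | (disp < 0)       # disp < 0 marks pixels labeled "no match"
    src = np.clip(src, 0, w - 1)
    e_data = np.abs(left.astype(np.float64) - right[rows, src].astype(np.float64))
    e_data[occluded] = 0.0                  # occluded pixels pay the fixed penalty instead
    e_occ = lambda_occ * np.count_nonzero(occluded)
    e_smooth = lambda_smooth * (np.abs(np.diff(disp, axis=0)).sum()
                                + np.abs(np.diff(disp, axis=1)).sum())
    return e_data.sum() + e_occ + e_smooth
```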
Refining the depth map: the image is segmented with the fast Mean Shift method into regions of similar properties, and the depth value of each pixel within a region is replaced by the average depth of all pixels in that region, ensuring that the depth information of all pixels on the same object is consistent and that the target object is extracted from the background in its entirety.
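A minimal sketch of this refinement step, with OpenCV's `pyrMeanShiftFiltering` standing in for the "fast Mean Shift" segmentation named in the text; the quantization step and the spatial/range radii are illustrative assumptions, and grouping pixels by quantized filtered color is a simplification of a true region labeling.

```python
import cv2
import numpy as np

def refine_depth(bgr, depth, sp=15, sr=30):
    """Replace each pixel's depth by the mean depth of its color region."""
    shifted = cv2.pyrMeanShiftFiltering(bgr, sp, sr)   # flattens texture within regions
    key = (shifted // 8).astype(np.int64)              # quantize the filtered colors
    flat = (key[..., 0] * 1024 + key[..., 1] * 32 + key[..., 2]).ravel()
    d = depth.ravel().astype(np.float64)
    out = np.empty_like(d)
    for k in np.unique(flat):
        idx = flat == k
        out[idx] = d[idx].mean()                       # consistent depth per region
    return out.reshape(depth.shape)
```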
Extracting the contour of the occluding object: by comparing the depth values of the virtual object and the real object, the occlusion relationship between them is determined.
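A minimal sketch of that comparison: wherever the real scene lies closer to the camera than the virtual object, the real scene must occlude it. Here `virtual_depth` is assumed to come from the renderer's z-buffer; the patent does not name its source.

```python
import cv2
import numpy as np

def occluder_mask(real_depth, virtual_depth):
    """Pixels where the real scene lies in front of the virtual object."""
    mask = (real_depth < virtual_depth).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours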
(2) Color image processing in the offline phase
Enhancement such as sharpening is applied to the pixel values of the image in HSV color space, making the object contours more distinct. A Gaussian kernel model G_C of the color image probability distribution is constructed first, a spatial-distribution Gaussian kernel G_S of the image is then constructed from the depth information, and the color and spatial Gaussian kernels are fused as h = Σ G_C G_S to finally extract a high-precision occlusion edge.
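The text gives h = Σ G_C G_S without explicit kernel formulas; the sketch below reads it as a joint bilateral-style weight, with a color Gaussian G_C on HSV differences fused with a Gaussian G_S built from depth differences. The sigma values and neighborhood radius are assumptions; a low response h marks a pixel whose neighborhood spans a joint color-depth discontinuity, i.e. a candidate occlusion edge.

```python
import numpy as np

def fused_kernel_response(hsv, depth, y, x, radius=5, sigma_c=12.0, sigma_s=8.0):
    """h = sum over a neighborhood of G_C (color kernel) * G_S (depth kernel)."""
    ys = slice(max(y - radius, 0), y + radius + 1)
    xs = slice(max(x - radius, 0), x + radius + 1)
    patch = hsv[ys, xs].astype(np.float64)
    dpatch = depth[ys, xs].astype(np.float64)
    g_c = np.exp(-((patch - hsv[y, x]) ** 2).sum(axis=-1) / (2 * sigma_c ** 2))
    g_s = np.exp(-((dpatch - depth[y, x]) ** 2) / (2 * sigma_s ** 2))
    return float((g_c * g_s).sum())
```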
(3) Feature point tracking in the online phase
The feature points in the target region of the previous frame are tracked; the contour of the previous frame is then translated according to the displacement of the feature points to obtain the approximate contour of the target object in the current frame; the precise contour of the target is then sought within a band-shaped region centered on the approximate contour, thereby achieving accurate tracking of the object contour. The method not only tracks the object contour in real time, but also achieves accurate contour tracking when the scene is complex or the foreground and background colors are similar.
Feature point extraction: a fast corner detection method is used. It is fast to compute, robust to illumination changes, occlusion, and complex backgrounds, and meets the real-time requirement of the online process.
The basic principle of the extraction operator is that the local region associated with an image point has uniform brightness: if the intensity of every pixel within a window region is the same as or similar to that of the window center, the window region is called a USAN (univalue segment assimilating nucleus). Computing the USAN of every image pixel then provides a test for edges: the USAN of a pixel on an edge is small, and the USAN of a pixel on a corner is smaller still, so corners can be found simply by searching for minimal USANs. Because no image gradients need to be computed, the method is strongly resistant to noise. The corner detected by the algorithm is defined by there being, in the neighborhood of a pixel, sufficiently many pixels that differ from that pixel; applied to a grayscale image, sufficiently many pixels must have gray values greater than, or less than, the gray value of the point. For example, consider the 16 pixels on a circle of radius 3 around a point A: if there are 12 contiguous pixels whose gray values differ from that of A by more than a threshold, A is taken to be a corner.
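The segment test just described (16 pixels on a radius-3 circle, 12 contiguous ones differing from the center by a threshold) is what OpenCV implements as the FAST detector; a minimal usage sketch, with an assumed threshold value:

```python
import cv2

def detect_corners(gray, thresh=25):
    """FAST corners: enough contiguous circle pixels differ from the center."""
    fast = cv2.FastFeatureDetector_create(threshold=thresh, nonmaxSuppression=True)
    return fast.detect(gray, None)          # list of cv2.KeyPoint
```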
Feature point matching: feature points outside the target region are rejected using a bidirectional optical flow method.
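A hedged sketch of the bidirectional (forward-backward) check: each point is tracked to the current frame and back with pyramidal Lucas-Kanade flow, and points whose round trip drifts are rejected. The 1.0 px tolerance is an illustrative assumption; the surviving pairs also feed the displacement computation in step (4) below.

```python
import cv2
import numpy as np

def bidirectional_filter(prev_gray, cur_gray, pts, max_fb_err=1.0):
    """Keep only points whose forward-backward Lucas-Kanade track closes."""
    p0 = pts.reshape(-1, 1, 2).astype(np.float32)
    p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    p0r, st2, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, p1, None)
    fb_err = np.linalg.norm((p0 - p0r).reshape(-1, 2), axis=1)
    ok = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < max_fb_err)
    return p0.reshape(-1, 2)[ok], p1.reshape(-1, 2)[ok]
```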
(4) Approximate contour estimation
From the obtained contour of the target object, the feature points in the contour region of the previous frame, and the feature points traced in the current frame, the average displacement of the mutually matched feature points is computed, and the approximate contour of the target object in the current frame is then computed from the translation of the contour. The average displacement D_t of the matched feature points is computed as
D_t = ( Σ_{(i,j)∈M} ( f_{i,j} − f_{i,j−1} ) ) / |M|,
where f_{i,j−1} denotes a feature point in the known contour region of the target object at time t−1, f_{i,j} denotes the corresponding feature point traced at time t, M is the set of matched point pairs, and |M| is the number of feature points traced at time t. By translating the pixels on the known contour at time t−1 by D_t, the approximate contour of the target object in the video frame at time t is obtained.
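The equation above, as a minimal sketch; `pts_t1` and `pts_t` are the matched point pairs surviving the bidirectional flow check.

```python
import numpy as np

def approximate_contour(contour_t1, pts_t1, pts_t):
    """Translate the t-1 contour by the mean matched-point displacement D_t."""
    d_t = (pts_t - pts_t1).sum(axis=0) / len(pts_t)   # D_t = sum(f_ij - f_i,j-1) / |M|
    return contour_t1 + d_t
```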
(5) Precise contour extraction
The band-shaped region is segmented using the max-flow/min-cut method to obtain the accurate target contour.
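The patent names only "max-flow/min-cut" without specifying the solver; in the sketch below OpenCV's grabCut (itself graph-cut based) stands in for it. Pixels well inside the approximate contour are fixed as foreground, pixels outside the band as background, and only the band is left for the cut to decide. The band width is an assumed parameter.

```python
import cv2
import numpy as np

def precise_contour(bgr, approx, band=10):
    """Graph-cut segmentation restricted to a band around the approximate contour."""
    mask = np.full(bgr.shape[:2], cv2.GC_BGD, np.uint8)
    filled = np.zeros(bgr.shape[:2], np.uint8)
    cv2.drawContours(filled, [approx.astype(np.int32)], -1, 255, -1)
    inner = cv2.erode(filled, None, iterations=band)
    outer = cv2.dilate(filled, None, iterations=band)
    mask[inner > 0] = cv2.GC_FGD                        # well inside: certain object
    mask[(outer > 0) & (inner == 0)] = cv2.GC_PR_FGD    # the band: left to the cut
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    seg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)
```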
(6) A virtual-real composite image with the correct occlusion relationship is obtained using a redrawing technique. Processing continues with the next frame, taking the target contour obtained in the current frame as the initial contour of the next frame, and repeating the above steps. The positional relationship between the virtual model and the real scene is determined quickly while meeting accuracy requirements, achieving real-time fusion of the two and providing the physician, in real time, with vessel course information that ordinary laparoscopic surgery cannot show.
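A minimal sketch of the redraw step, under the assumption that the renderer supplies the vessel image and its alpha mask and that `occluder` is the mask derived from the precise contour: the rendered vessels are composited over the laparoscopic frame everywhere except where real tissue lies in front, which keeps the occlusion relationship correct.

```python
import numpy as np

def composite(frame, rendered, rendered_alpha, occluder):
    """Overlay the rendered vessel model except where real tissue occludes it."""
    visible = (rendered_alpha > 0) & (occluder == 0)    # vessel pixels not occluded
    out = frame.copy()
    out[visible] = rendered[visible]
    return out
```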
3. Conclusions
3.1 The automatic virtual-real occlusion mixed reality technique based on depth analysis can more effectively improve the accuracy of navigation.
3.2 The method facilitates the popularization of laparoscopic gastric cancer surgery and has good prospects for clinical application.
Claims (8)
1. A method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system, characterized by comprising the following steps:
A. In the offline process, the left and right images are first captured with the binocular laparoscope, the depth value of each pixel in the scene is computed, and the depth values are then refined so that a relatively coarse occlusion edge is extracted; at the same time, the value of each pixel in the scene in HSV color space is computed, and enhancement such as sharpening is applied to obtain clearer contours; the coarse occlusion edge and the contour information are then fused to obtain a higher-precision occlusion edge;
B. In the online process, feature points are tracked first, the displacement of the target contour is computed from the displacement of the feature points to obtain an approximate contour, and the precise contour of the target object is then sought within a band-shaped region centered on the approximate contour;
C. A virtual-real composite image with the correct occlusion relationship is obtained using a redrawing technique; processing continues with the next frame, taking the target contour obtained in the current frame as the initial contour of the next frame, and the above steps are repeated; the positional relationship between the virtual model and the real scene is determined quickly while meeting accuracy requirements, achieving real-time fusion of the two and providing, in real time, vessel course information that laparoscopic surgery cannot otherwise show.
2. The method according to claim 1, characterized in that, in step A, the data acquisition and extraction of the occluding target comprise the following steps:
(1) constructing the energy function: the Potts energy function is improved by changing its basic optimization unit from the disparity of individual pixels to matched point pairs, which remedies its inability to make effective use of the pixel-matching assumption; the energy function is constructed as follows:
E(f) = E_data(f) + E_occ(f) + E_smooth(f),
where the data term E_data(f) expresses the photometric similarity between matched pixels, the occlusion term E_occ(f) describes a fixed penalty for occluded pixels, and the smoothness term E_smooth(f) makes the disparities of neighboring pixels approach one another;
(2) minimizing the energy function: depth values are obtained by minimizing the energy function; once the depth value of every pixel of the real scene has been obtained, all pixels in regions where real objects occlude the virtual object can be extracted by comparing depth values;
(3) refining the depth map: the image is segmented with the fast Mean Shift method into regions of similar properties, and the depth value of each pixel within a region is replaced by the average depth of all pixels in that region, ensuring that the depth information of all pixels on the same object is consistent and that the target object is extracted from the background in its entirety;
(4) by comparing the depth values of the virtual object and the real object, the occlusion relationship between them is determined, and the contour of the occluding object is extracted;
(5) enhancement such as sharpening is applied to the pixel values of the image in HSV color space, making the object contours more distinct.
3. The method according to claim 2, characterized in that, in step (5), a Gaussian kernel model G_C of the color image probability distribution is first constructed, a spatial-distribution Gaussian kernel G_S of the image is then constructed from the depth information, and the color and spatial Gaussian kernels are finally fused as h = Σ G_C G_S to extract a high-precision occlusion edge.
4. The method according to claim 1, characterized in that, in step B, the feature point tracking comprises the following steps: the feature points in the target region of the previous frame are tracked; the contour of the previous frame is then translated according to the displacement of the feature points to obtain the approximate contour of the target object in the current frame; the precise contour of the target is then obtained within a band-shaped region centered on the approximate contour, thereby achieving accurate tracking of the object contour.
5. The method according to claim 1, characterized in that, in step B, the feature point extraction uses a fast corner detection method.
6. The method according to claim 1, characterized in that, in step B, the feature point matching rejects feature points outside the target region using a bidirectional optical flow method, while the target feature point set is updated and corrected using a feature matching algorithm; finally, the positional information between feature points is used to determine the change factor and scale of the target, thereby achieving target localization.
7. The method according to claim 1, characterized in that, in step B, the estimation of the approximate contour comprises the following steps: from the obtained contour of the target object, the feature points in the contour region of the previous frame, and the feature points traced in the current frame, the average displacement of the mutually matched feature points is computed, and the approximate contour of the target object in the current frame is then computed from the translation of the contour; the average displacement D_t of the matched feature points is computed as
D_t = ( Σ_{(i,j)∈M} ( f_{i,j} − f_{i,j−1} ) ) / |M|,
where f_{i,j−1} denotes a feature point in the known contour region of the target object at time t−1, f_{i,j} denotes the corresponding feature point traced at time t, M is the set of matched point pairs, and |M| is the number of feature points traced at time t; by translating the pixels on the known contour at time t−1 by D_t, the approximate contour of the target object in the video frame at time t is obtained.
8. The method according to claim 1, characterized in that, in step B, the extraction of the precise contour comprises the following step: the band-shaped region is segmented using the max-flow/min-cut method to obtain the accurate target contour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710818977.8A CN107680103A (en) | 2017-09-12 | 2017-09-12 | Method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107680103A (en) | 2018-02-09 |
Family
ID=61134700
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710818977.8A Pending CN107680103A (en) | 2017-09-12 | 2017-09-12 | Method for automatic virtual-real occlusion handling in mixed reality for a gastric cancer laparoscopic intelligent surgery real-time navigation system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107680103A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104378582A (en) * | 2013-08-16 | 2015-02-25 | 北京博思廷科技有限公司 | Intelligent video analysis system and method based on PTZ video camera cruising |
CN106236263A (en) * | 2016-08-24 | 2016-12-21 | 李国新 | Scene-decomposition-based gastrointestinal surgery navigation method and system |
CN106236264A (en) * | 2016-08-24 | 2016-12-21 | 李国新 | Gastrointestinal surgery navigation method and system based on optical tracking and image matching |
Non-Patent Citations (4)
Title |
---|
卢胜男 (Lu Shengnan) et al., "Feature point matching vehicle tracking method combining bidirectional optical flow constraints", Journal of Transportation Systems Engineering and Information Technology *
徐维鹏 (Xu Weipeng) et al., "A survey of virtual-real occlusion handling in augmented reality", Journal of Computer-Aided Design & Computer Graphics *
田元 (Tian Yuan), "Research on virtual-real occlusion handling methods in augmented reality", Wanfang Data Knowledge Service Platform *
霍薪 (Huo Xin), "Research on 3D scene reconstruction based on binocular stereo vision", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110930361A (en) * | 2019-10-22 | 2020-03-27 | 西安理工大学 | Method for detecting occlusion of virtual and real objects |
CN110930361B (en) * | 2019-10-22 | 2022-03-25 | 西安理工大学 | Method for detecting occlusion of virtual and real objects |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180209 |