CN112866670A - Binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method - Google Patents

Binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method

Info

Publication number
CN112866670A
Authority
CN
China
Prior art keywords
binocular
time
image
video
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110020768.5A
Other languages
Chinese (zh)
Other versions
CN112866670B (en)
Inventor
Fang Wei (方维)
Li Haiyuan (李海源)
Zhang Bin (张斌)
Zhao Lei (赵磊)
Liu Baoguo (刘宝国)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202110020768.5A
Publication of CN112866670A
Application granted
Publication of CN112866670B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/156: Mixing image signals
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50: Constructional details
    • H04N 23/555: Constructional details for picking-up images in sites inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition

Abstract

The invention discloses a binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method. The method comprises the following steps: step one, constructing a spatio-temporally associated data model, specifically dividing the 3D imaging system of a binocular endoscope into a time-correlation-based mathematical model and a space-correlation-based mathematical model; step two, in the process of synthesizing the two-channel endoscopic 3D video, constructing a binocular spatio-temporal adaptive 3D video synthesis model based on the matching relationship between the time sequence and the space sequence, so as to build a stabilized global optimization model. The method removes the jitter present in the endoscopic 3D video during surgery and provides the surgeon with a stable, continuous 3D video for guiding the operation; it also remains robust for the weakly textured tissue encountered intraoperatively.

Description

Binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method
Technical Field
The invention relates to the field of medical image processing, and in particular to a binocular spatio-temporal adaptation-based surgical 3D video image stabilization system and synthesis method.
Background
During medical examination and surgical navigation, images of a patient's lesion often need to be acquired. Compared with traditional 2D imaging, restoring the surgical field through 3D endoscopic imaging captures the depth information of the scene and reflects its real condition, so that the surgeon can perceive the surgical site as if present at the scene.
However, when a 3D endoscope images the surgical field, the uncertainty of the surgical environment inevitably introduces jitter into the imaging process. This jitter severely degrades the surgeon's 3D view of the surgical field during the operation and introduces large errors into subsequent processing of the image sequence, such as lesion localization. A continuous and reliable surgical 3D video stabilization method therefore has important practical significance for improving intraoperative guidance.
Existing image stabilization methods fall mainly into two categories: those based on 2D temporal image sequences and those based on 3D spatial reconstruction. Methods based on 2D image sequences mainly use motion estimation between adjacent image frames and model the inter-frame motion with a similarity or affine transformation; when the field of view or depth range changes markedly during surgical navigation, their stabilization performance is poor. Methods based on 3D spatial reconstruction use the 3D information obtained from scene reconstruction to filter and smooth the camera motion path and thus compensate for video jitter; however, they rely heavily on scene features, and their robustness is poor for the weakly textured tissue encountered in surgery.
Disclosure of Invention
The invention aims to solve the jitter problem of existing surgical 3D video, and provides a binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method, which remove the jitter present in the endoscopic 3D video during surgery and provide the surgeon with a stable, continuous 3D video for guiding the operation.
The binocular spatio-temporal adaptive surgical 3D video image stabilization and synthesis method provided by the invention comprises the following steps:
Step one, constructing a spatio-temporally associated data model;
specifically, the 3D imaging system of the binocular endoscope is divided into a time-correlation-based mathematical model and a space-correlation-based mathematical model.
Construction of the time-correlation mathematical model:
S11, extracting corresponding Shi-Tomasi corner features from the gray-scale image sequence of the surgical field acquired by each camera of the binocular endoscope, and setting the corresponding number of feature points N and the uniform-distribution parameter ρ according to the surgical-field scene;
S12, in the process of binocular endoscopic surgical guidance, constructing a time-correlation-based tracking mathematical model from the assumption that corresponding feature points keep consistent gray levels between adjacent image frames:
I(x, y, t) = I(x + Δx, y + Δy, t + Δt)
where I(x, y, t) is the gray-scale value at the feature coordinate point (x, y) at time t, and I(x + Δx, y + Δy, t + Δt) is the gray-scale value at the new coordinate point (x + Δx, y + Δy) after the time interval Δt.
S13, expanding the right-hand side of the expression in step S12 as a first-order Taylor series gives:
I(x + Δx, y + Δy, t + Δt) ≈ I(x, y, t) + I_x·Δx + I_y·Δy + I_t·Δt
which, together with the gray-level consistency assumption, yields the optical-flow constraint I_x·u + I_y·v + I_t = 0, where (u, v) = (Δx/Δt, Δy/Δt) is the motion velocity of the feature point at time t. A (2w + 1) × (2w + 1) neighborhood window around the feature point I(x, y) is selected, and the motion direction vector (u, v) of the feature point I(x, y) at time t is obtained by solving the constraint over the window.
S14, from the motion direction vector (u, v) of the feature point obtained in step S13 and the inter-frame interval Δt, the corresponding feature point coordinates (x + Δx, y + Δy) in the adjacent frame are solved. Feature point sets X_t and X_{t+1} are constructed at times t and t + 1 respectively, and the homography transformation matrix H_t between adjacent video frames is solved from:
X_{t+1} = H_t X_t
thereby obtaining the time-sequence-based single-channel video stabilization model function H(H_t) of the endoscope.
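As an illustration of steps S11-S14, the following is a minimal sketch using OpenCV (an assumed toolchain; function and variable names such as track_and_estimate_homography and prev_gray are illustrative rather than taken from the patent). Shi-Tomasi corners are detected in the previous gray-scale frame, tracked into the current frame with pyramidal Lucas-Kanade optical flow, and the inter-frame homography H_t is estimated from the matched point sets.

```python
# Minimal sketch of S11-S14; assumes OpenCV.
import cv2

def track_and_estimate_homography(prev_gray, curr_gray, n_features=200, min_dist=15):
    # S11: Shi-Tomasi corners; maxCorners plays the role of N, and minDistance
    # loosely enforces the uniform-distribution parameter rho.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=n_features,
                                       qualityLevel=0.01, minDistance=min_dist)
    # S12-S13: pyramidal Lucas-Kanade tracking, built on the gray-level
    # consistency assumption and its first-order Taylor expansion.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None,
                                                   winSize=(21, 21), maxLevel=3)
    ok = status.flatten() == 1
    X_t, X_t1 = pts_prev[ok], pts_curr[ok]
    # S14: homography H_t between adjacent frames, X_{t+1} = H_t X_t (RANSAC).
    H_t, _ = cv2.findHomography(X_t, X_t1, cv2.RANSAC, 3.0)
    return H_t, X_t, X_t1
```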
Construction of the space-correlation mathematical model:
S21, the intrinsic parameters K_L and K_R of the two-channel cameras and the relative pose [R, t] between them are obtained by calibrating the binocular endoscope system before the operation.
S22, when the optical flow of the binocular endoscope system is initialized, the fundamental matrix F between the left and right cameras is constructed from the intrinsic and extrinsic parameters calibrated by the system, associating the left optical-flow feature point I_L(x, y) with the right optical-flow feature point I_R(x, y) as follows:
I_L(x, y) = F_t I_R(x, y).
The fundamental matrix F_t over the time sequence is thus solved in real time from the matching of image feature points in the binocular surgical field.
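For reference, the fundamental matrix can also be obtained in closed form from the calibration results of S21 through the standard relation F = K_R^(-T) [t]_x R K_L^(-1); the patent does not spell out this formula, so the sketch below is an assumption, and the function name is illustrative.

```python
# Assumed closed-form derivation of the fundamental matrix from the
# preoperative stereo calibration (convention X_R = R X_L + t, so that
# x_R^T F x_L = 0 for corresponding left/right image points).
import numpy as np

def fundamental_from_calibration(K_L, K_R, R, t):
    t = np.asarray(t, dtype=float).ravel()
    # Skew-symmetric matrix [t]_x such that [t]_x @ v == np.cross(t, v).
    t_x = np.array([[0.0, -t[2], t[1]],
                    [t[2], 0.0, -t[0]],
                    [-t[1], t[0], 0.0]])
    E = t_x @ R                                        # essential matrix
    F = np.linalg.inv(K_R).T @ E @ np.linalg.inv(K_L)  # fundamental matrix
    return F / F[2, 2]                                 # scale-normalized

# F_t can alternatively be re-estimated online from matched binocular feature
# points, e.g. cv2.findFundamentalMat(pts_L, pts_R, cv2.FM_RANSAC).
```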
Step two, in the process of synthesizing the two-channel endoscopic 3D video, a binocular spatio-temporal adaptive 3D video synthesis model is constructed based on the matching relationship between the time sequence and the space sequence, so as to build a stabilized global optimization model f_t:
f_t = α·H(H_t) + β·g(F)
where H(H_t) is the projection-error correlation function of the single-channel endoscope time sequence, g(F) is the evaluation function of the left-right spatial correlation constraint after initialization, and α and β are the correlation parameters of the time sequence and the space sequence, respectively; the adaptive stabilized synthesis of the surgical 3D video is obtained by solving the global optimization.
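One plausible concrete reading of this objective, offered only as an illustration and not as the patent's exact formulation, is a weighted sum of a temporal reprojection error (how well H_t explains the tracked points) and a spatial epipolar residual (how well the left-right matches satisfy the fundamental matrix):

```python
# Illustrative cost f_t = alpha * H(H_t) + beta * g(F); an assumed reading,
# not the patent's exact optimization. Point arrays are (N, 2) or (N, 1, 2).
import numpy as np

def _homogeneous(pts):
    pts = np.asarray(pts, dtype=float).reshape(-1, 2)
    return np.hstack([pts, np.ones((pts.shape[0], 1))])

def temporal_cost(H_t, X_t, X_t1):
    # Mean reprojection error of X_{t+1} = H_t X_t over tracked feature pairs.
    proj = (H_t @ _homogeneous(X_t).T).T
    proj = proj[:, :2] / proj[:, 2:3]
    target = np.asarray(X_t1, dtype=float).reshape(-1, 2)
    return float(np.mean(np.linalg.norm(proj - target, axis=1)))

def spatial_cost(F, pts_L, pts_R):
    # Mean algebraic epipolar residual |x_R^T F x_L| over left/right matches.
    xL, xR = _homogeneous(pts_L), _homogeneous(pts_R)
    return float(np.mean(np.abs(np.sum(xR * (F @ xL.T).T, axis=1))))

def combined_cost(H_t, F, X_t, X_t1, pts_L, pts_R, alpha=0.5, beta=0.5):
    return alpha * temporal_cost(H_t, X_t, X_t1) + beta * spatial_cost(F, pts_L, pts_R)
```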
To implement the above method, the invention provides a binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system, which comprises:
a binocular surgical-field real-time video acquisition module, used to capture the video streams of the binocular endoscope during the operation and obtain a time-stamped binocular image stream sequence;
an image stereo rectification module, used to undistort and stereo-rectify the raw image streams acquired by the binocular surgical-field real-time video acquisition module to obtain distortion-free rectified image streams (an illustrative rectification sketch follows this module list);
an image feature extraction module, used to extract features from the distortion-free rectified image streams of the endoscopic surgical field output by the image stereo rectification module to obtain feature information on the surgical-field image streams;
a temporal anti-shake module, used to track the left and right image-stream features extracted by the image feature extraction module along the time-sequence dimension to obtain the homography transformation matrix between single-camera image sequences over time, i.e. the single-camera temporal constraint;
a spatial anti-shake module, used to add the binocular geometric constraint to the single-camera image sequences obtained by the temporal anti-shake module, constructing a collaborative stabilization framework for the tracking of the left and right surgical-field images in the spatial domain and obtaining a spatial feature-information constraint with stronger constraining power; and
a spatio-temporal adaptive anti-shake module, used to combine the single-camera temporal constraint output by the temporal anti-shake module with the binocular-camera spatial constraint output by the spatial anti-shake module to obtain a spatio-temporal adaptive anti-shake model under the different feature-point distributions of the surgical field.
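As a sketch of what the image stereo rectification module could do with the preoperative calibration, the rectification maps are computed once and then applied to every incoming frame pair. OpenCV is assumed, and the names build_rectifier, dist_L and dist_R are illustrative, not mandated by the patent.

```python
# Sketch of the image stereo rectification module; assumes OpenCV and a
# preoperative calibration (K_L, dist_L, K_R, dist_R, R, t).
import cv2

def build_rectifier(K_L, dist_L, K_R, dist_R, R, t, image_size):
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_L, dist_L, K_R, dist_R,
                                                image_size, R, t)
    map_L = cv2.initUndistortRectifyMap(K_L, dist_L, R1, P1, image_size, cv2.CV_32FC1)
    map_R = cv2.initUndistortRectifyMap(K_R, dist_R, R2, P2, image_size, cv2.CV_32FC1)

    def rectify(frame_L, frame_R):
        # Undistort and stereo-rectify one time-stamped binocular frame pair.
        rect_L = cv2.remap(frame_L, map_L[0], map_L[1], cv2.INTER_LINEAR)
        rect_R = cv2.remap(frame_R, map_R[0], map_R[1], cv2.INTER_LINEAR)
        return rect_L, rect_R

    return rectify
```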
The invention further provides a computer device, comprising a memory and a processor; the memory stores a computer program, and the processor runs the computer program to implement the above surgical 3D video image stabilization and synthesis method.
The binocular spatio-temporal adaptive surgical 3D video image stabilization system and synthesis method of the invention have the following advantages and effects: they remove the jitter present in the endoscopic 3D video during surgery, provide the surgeon with a stable, continuous 3D video for guiding the operation, and remain robust for the weakly textured tissue encountered intraoperatively.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It should be understood that the drawings in the following description show only embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic view of the structure of the present invention.
FIG. 2 is a block diagram of the system of the present invention.
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth to provide a better understanding of the present invention, but that the technical solution claimed in the present invention can also be implemented without these details, and that various changes and modifications can be made on the basis of the following embodiments.
The embodiment of the invention provides a binocular spatio-temporal adaptive surgical 3D video image stabilization and synthesis method. The verification environment of this embodiment is an Intel i7-10710U processor with a 1.10 GHz base frequency and 16 GB of memory, and the software environment is Visual Studio 2017. The method comprises the following specific steps:
Step one, constructing a spatio-temporally associated data model;
the 3D imaging system of the binocular endoscope is subdivided into mathematical models based on time correlation and space correlation.
For the time-correlation mathematical model: S11, extract corresponding Shi-Tomasi corner features from the gray-scale image sequence of the surgical field acquired by each camera of the binocular endoscope, and set the corresponding number of feature points N and the uniform-distribution parameter ρ according to the surgical-field scene;
S12, in the process of binocular endoscopic surgical guidance, constructing a time-correlation-based tracking mathematical model from the assumption that corresponding feature points keep consistent gray levels between adjacent image frames:
I(x, y, t) = I(x + Δx, y + Δy, t + Δt)
where I(x, y, t) is the gray-scale value at the feature coordinate point (x, y) at time t, and I(x + Δx, y + Δy, t + Δt) is the gray-scale value at the new coordinate point (x + Δx, y + Δy) after the time interval Δt.
S13, expanding the right-hand side of the expression in step S12 as a first-order Taylor series gives:
I(x + Δx, y + Δy, t + Δt) ≈ I(x, y, t) + I_x·Δx + I_y·Δy + I_t·Δt
which, together with the gray-level consistency assumption, yields the optical-flow constraint I_x·u + I_y·v + I_t = 0, where (u, v) = (Δx/Δt, Δy/Δt) is the motion velocity of the feature point at time t. A (2w + 1) × (2w + 1) neighborhood window around the feature point I(x, y) is selected, and the motion direction vector (u, v) of the feature point I(x, y) at time t is obtained by solving the constraint over the window.
S14, from the motion direction vector (u, v) of the feature point obtained in step S13 and the inter-frame interval Δt, the corresponding feature point coordinates (x + Δx, y + Δy) in the adjacent frame are solved. Feature point sets X_t and X_{t+1} are constructed at times t and t + 1 respectively, and the homography transformation matrix H_t between adjacent video frames is solved from:
X_{t+1} = H_t X_t
thereby obtaining the time-sequence-based single-channel video stabilization model function H(H_t) of the endoscope.
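The text above yields the homography chain H_t but does not spell out how it is turned into stabilized frames. A common way to use such a chain, shown purely as an assumed illustration (the smoothing scheme and function names are not from the patent), is to accumulate the inter-frame homographies, low-pass filter the accumulated trajectory, and warp each frame by the difference between its raw and smoothed trajectory:

```python
# Assumed illustration of applying the inter-frame homographies H_t for
# single-channel stabilization: accumulate, smooth, compensate.
import cv2
import numpy as np

def stabilize_channel(frames, homographies, window=15):
    # Accumulated trajectory C_t = H_{t-1} ... H_1 H_0 (identity for frame 0).
    traj = [np.eye(3)]
    for H in homographies:
        traj.append(H @ traj[-1])
    traj = np.stack(traj)

    # Moving-average smoothing of the trajectory (a simple stand-in for the
    # global optimization described later in the patent).
    smooth = np.empty_like(traj)
    for i in range(len(traj)):
        lo, hi = max(0, i - window), min(len(traj), i + window + 1)
        smooth[i] = traj[lo:hi].mean(axis=0)

    h, w = frames[0].shape[:2]
    out = []
    for frame, C, S in zip(frames, traj, smooth):
        # Warp by S C^{-1} so the frame follows the smoothed trajectory.
        out.append(cv2.warpPerspective(frame, S @ np.linalg.inv(C), (w, h)))
    return out
```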
For the space-correlation mathematical model: S21, the intrinsic parameters K_L and K_R of the two-channel cameras and the relative pose [R, t] between them are obtained by calibrating the binocular endoscope system before the operation.
S22, when the optical flow of the binocular endoscope system is initialized, the fundamental matrix F between the left and right cameras is constructed from the intrinsic and extrinsic parameters calibrated by the system, associating the left optical-flow feature point I_L(x, y) with the right optical-flow feature point I_R(x, y) as follows:
I_L(x, y) = F_t I_R(x, y).
The fundamental matrix F_t over the time sequence is thus solved in real time from the matching of image feature points in the binocular surgical field.
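Since this step calls for solving F_t in real time from the binocular feature matches, the following is a minimal sketch (OpenCV assumed; the inlier-ratio output is an illustrative extra, not something the patent specifies):

```python
# Sketch of real-time estimation of F_t from matched left/right surgical-field
# feature points; assumes OpenCV.
import cv2
import numpy as np

def estimate_F_t(pts_L, pts_R, ransac_thresh=1.0):
    pts_L = np.asarray(pts_L, dtype=np.float32).reshape(-1, 2)
    pts_R = np.asarray(pts_R, dtype=np.float32).reshape(-1, 2)
    F_t, mask = cv2.findFundamentalMat(pts_L, pts_R, cv2.FM_RANSAC,
                                       ransac_thresh, 0.99)
    # Fraction of matches consistent with the epipolar geometry.
    inlier_ratio = float(mask.mean()) if mask is not None else 0.0
    return F_t, inlier_ratio
```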
Step two, in the process of synthesizing the two-channel endoscopic 3D video, a binocular spatio-temporal adaptive 3D video synthesis model is constructed based on the matching relationship between the time sequence and the space sequence, so as to build a stabilized global optimization model f_t:
f_t = α·H(H_t) + β·g(F)
where H(H_t) is the projection-error correlation function of the single-channel endoscope time sequence, g(F) is the evaluation function of the left-right spatial correlation constraint after initialization, and α and β are the correlation parameters of the time sequence and the space sequence, respectively. Solving the global optimization yields the spatio-temporal fusion parameters (α, β) corresponding to the adaptive stabilized synthesis of the 3D video. These parameters adaptively adjust how the temporal anti-shake module and the spatial anti-shake module are fused according to the spatio-temporal matching of the visual feature points in the surgical field, and thus meet the stabilization requirements of 3D navigation video in different surgical-field environments and motion states.
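The patent text does not give a formula for choosing α and β; as one illustrative heuristic (an assumption, not the patented method), the weights could be derived from the relative quality of the temporal tracking and the binocular matching, for example from their inlier ratios:

```python
# Assumed heuristic for the spatio-temporal fusion weights (alpha, beta):
# weight each constraint by its matching quality and normalize.
def adaptive_weights(temporal_quality, stereo_quality, eps=1e-6):
    total = temporal_quality + stereo_quality + eps
    alpha = temporal_quality / total   # trust in the temporal constraint H(H_t)
    beta = stereo_quality / total      # trust in the spatial constraint g(F)
    return alpha, beta
```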
The binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system comprises the following modules (an illustrative pipeline sketch tying them together follows the list):
a binocular surgical-field real-time video acquisition module, used to capture the video streams of the binocular endoscope during the operation and obtain a time-stamped binocular image stream sequence;
an image stereo rectification module, used to undistort and stereo-rectify the raw image streams acquired by the binocular surgical-field real-time video acquisition module to obtain distortion-free rectified image streams;
an image feature extraction module, used to extract features from the distortion-free rectified image streams of the endoscopic surgical field output by the image stereo rectification module to obtain feature information on the surgical-field image streams;
a temporal anti-shake module, used to track the left and right image-stream features extracted by the image feature extraction module along the time-sequence dimension to obtain the homography transformation matrix between single-camera image sequences over time, i.e. the single-camera temporal constraint;
a spatial anti-shake module, used to add the binocular geometric constraint to the single-camera image feature sequences obtained by the temporal anti-shake module, constructing a collaborative stabilization framework for the tracking of the left and right surgical-field images in the spatial domain and obtaining a spatial feature-information constraint with stronger constraining power; and
a spatio-temporal adaptive anti-shake module, used to combine the single-camera temporal constraint output by the temporal anti-shake module with the binocular-camera spatial constraint output by the spatial anti-shake module to obtain a spatio-temporal adaptive anti-shake model under the different feature-point distributions of the surgical field.
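To show how these modules could chain together, the following skeleton is offered as an assumption-laden sketch: it reuses the helpers sketched earlier in this description (build_rectifier, track_and_estimate_homography, estimate_F_t, combined_cost, adaptive_weights), and the class, method and variable names are illustrative rather than the patent's.

```python
# Illustrative pipeline skeleton chaining the modules described above;
# assumes OpenCV and the helper functions sketched earlier.
import cv2

class BinocularStabilizationPipeline:
    def __init__(self, rectify, n_features=200):
        self.rectify = rectify              # callable from build_rectifier(...)
        self.n_features = n_features        # requested Shi-Tomasi corner count N
        self.prev_L = self.prev_R = None    # previous rectified gray frames

    def step(self, frame_L, frame_R):
        # Binocular real-time acquisition + image stereo rectification modules.
        rect_L, rect_R = self.rectify(frame_L, frame_R)
        gray_L = cv2.cvtColor(rect_L, cv2.COLOR_BGR2GRAY)
        gray_R = cv2.cvtColor(rect_R, cv2.COLOR_BGR2GRAY)
        result = {"rectified": (rect_L, rect_R)}
        if self.prev_L is not None:
            # Feature extraction + temporal anti-shake: per-channel homographies.
            H_L, XtL, Xt1L = track_and_estimate_homography(self.prev_L, gray_L,
                                                           self.n_features)
            H_R, _, _ = track_and_estimate_homography(self.prev_R, gray_R,
                                                      self.n_features)
            # Spatial anti-shake: match the tracked left corners into the right
            # rectified frame (LK across the stereo pair), then estimate F_t.
            pts_R, st, _ = cv2.calcOpticalFlowPyrLK(gray_L, gray_R, Xt1L, None)
            ok = st.flatten() == 1
            F_t, stereo_quality = estimate_F_t(Xt1L[ok], pts_R[ok])
            # Spatio-temporal adaptive anti-shake: fuse the two constraints.
            temporal_quality = min(1.0, len(XtL) / float(self.n_features))
            alpha, beta = adaptive_weights(temporal_quality, stereo_quality)
            if H_L is not None and F_t is not None:
                result.update({"H_L": H_L, "H_R": H_R, "F_t": F_t,
                               "alpha": alpha, "beta": beta,
                               "cost": combined_cost(H_L, F_t, XtL, Xt1L,
                                                     Xt1L[ok], pts_R[ok],
                                                     alpha=alpha, beta=beta)})
        self.prev_L, self.prev_R = gray_L, gray_R
        return result
```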
An embodiment of the invention also provides a computer device. As shown in FIG. 3, the device includes a memory and a processor;
the memory stores instructions executable by the processor to implement the surgical 3D video image stabilization and synthesis method of the foregoing embodiments.
The computer device includes one or more processors and a memory; one processor is taken as an example in FIG. 3. The processor and the memory may be connected by a bus or by other means; connection by a bus is taken as an example in FIG. 3. The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules. By running the non-volatile software programs, instructions and modules stored in the memory, the processor executes the various functional applications and data processing of the device, that is, it implements the image processing method described above.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the surgical 3D video image stabilization and synthesis method of any of the embodiments described above.
The computer device of this embodiment first constructs mathematical models based on temporal correlation and spatial correlation, and then, in the two-channel endoscopic 3D video synthesis process, builds a binocular spatio-temporal adaptive 3D video synthesis model from the matching relationship between the time sequence and the space sequence, so as to construct a stabilized global optimization model. The embodiment of the invention can therefore remove the jitter in the endoscopic 3D video during surgery, provide the surgeon with stable, continuous 3D video for guiding the operation, and remain robust for the weakly textured tissue encountered intraoperatively.
The above device can execute the method provided by the embodiments of the present invention and has the corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present invention.
An embodiment of the present application further provides a non-volatile storage medium for storing a computer-readable program, where the computer-readable program is used for a computer to execute some or all of the above method embodiments.
That is, those skilled in the art can understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for causing a device (such as a single-chip microcomputer or a chip) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (5)

1. A binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis method, characterized by comprising the following steps:
step one, constructing a spatio-temporally associated data model, specifically dividing the 3D imaging system of a binocular endoscope into a time-correlation-based mathematical model and a space-correlation-based mathematical model;
step two, in the process of synthesizing the two-channel endoscopic 3D video, constructing a binocular spatio-temporal adaptive 3D video synthesis model based on the matching relationship between the time sequence and the space sequence, so as to build a stabilized global optimization model f_t:
f_t = α·H(H_t) + β·g(F)
wherein H(H_t) is the projection-error correlation function of the single-channel endoscope time sequence, g(F) is the evaluation function of the left-right spatial correlation constraint after initialization, and α and β are the correlation parameters of the time sequence and the space sequence, respectively, the adaptive stabilized synthesis of the surgical 3D video being obtained by solving the global optimization.
2. The binocular spatio-temporal adaptation-based surgical 3D video image stabilization method according to claim 1, characterized in that the time-correlation mathematical model is constructed as follows:
S11, extracting corresponding Shi-Tomasi corner features from the gray-scale image sequence of the surgical field acquired by each camera of the binocular endoscope, and setting the corresponding number of feature points N and the uniform-distribution parameter ρ according to the surgical-field scene;
S12, in the process of binocular endoscopic surgical guidance, constructing a time-correlation-based tracking mathematical model from the assumption that corresponding feature points keep consistent gray levels between adjacent image frames:
I(x, y, t) = I(x + Δx, y + Δy, t + Δt)
wherein I(x, y, t) is the gray-scale value at the feature coordinate point (x, y) at time t, and I(x + Δx, y + Δy, t + Δt) is the gray-scale value at the new coordinate point (x + Δx, y + Δy) after the time interval Δt;
S13, expanding the right-hand side of the expression in step S12 as a first-order Taylor series gives:
I(x + Δx, y + Δy, t + Δt) ≈ I(x, y, t) + I_x·Δx + I_y·Δy + I_t·Δt
which, together with the gray-level consistency assumption, yields the optical-flow constraint I_x·u + I_y·v + I_t = 0, wherein (u, v) = (Δx/Δt, Δy/Δt) is the motion velocity of the feature point at time t; a (2w + 1) × (2w + 1) neighborhood window around the feature point I(x, y) is selected, and the motion direction vector (u, v) of the feature point I(x, y) at time t is obtained by solving the constraint over the window;
S14, from the motion direction vector (u, v) of the feature point obtained in step S13 and the inter-frame interval Δt, solving to obtain the corresponding feature point coordinates (x + Δx, y + Δy) in the adjacent frame; constructing feature point sets X_t and X_{t+1} at times t and t + 1 respectively, and solving for the homography transformation matrix H_t between adjacent video frames:
X_{t+1} = H_t X_t
thereby obtaining the time-sequence-based single-channel video stabilization model function H(H_t) of the endoscope.
3. The binocular spatio-temporal adaptation-based surgical 3D video image stabilization method according to claim 1, characterized in that the space-correlation mathematical model is constructed as follows:
S21, obtaining the intrinsic parameters K_L and K_R of the two-channel cameras and the relative pose [R, t] between them by calibrating the binocular endoscope system before the operation;
S22, when the optical flow of the binocular endoscope system is initialized, constructing the fundamental matrix F between the left and right cameras from the intrinsic and extrinsic parameters calibrated by the system, associating the left optical-flow feature point I_L(x, y) with the right optical-flow feature point I_R(x, y) as follows:
I_L(x, y) = F_t I_R(x, y);
the fundamental matrix F_t over the time sequence thus being solved in real time from the matching of image feature points in the binocular surgical field.
4. A binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system implementing the method of any one of claims 1 to 3, comprising:
a binocular surgical-field real-time video acquisition module, used to capture the video streams of the binocular endoscope during the operation and obtain a time-stamped binocular image stream sequence;
an image stereo rectification module, used to undistort and stereo-rectify the raw image streams acquired by the binocular surgical-field real-time video acquisition module to obtain distortion-free rectified image streams;
an image feature extraction module, used to extract features from the distortion-free rectified image streams of the endoscopic surgical field output by the image stereo rectification module to obtain feature information on the surgical-field image streams;
a temporal anti-shake module, used to track the left and right image-stream features extracted by the image feature extraction module along the time-sequence dimension to obtain the homography transformation matrix between single-camera image sequences over time, i.e. the single-camera temporal constraint;
a spatial anti-shake module, used to add the binocular geometric constraint to the single-camera image sequences obtained by the temporal anti-shake module, constructing a collaborative stabilization framework for the tracking of the left and right surgical-field images in the spatial domain and obtaining a spatial feature-information constraint with stronger constraining power; and
a spatio-temporal adaptive anti-shake module, used to combine the single-camera temporal constraint output by the temporal anti-shake module with the binocular-camera spatial constraint output by the spatial anti-shake module to obtain a spatio-temporal adaptive anti-shake model under the different feature-point distributions of the surgical field.
5. A computer device, comprising a memory and a processor, the memory storing a computer program that is executed by the processor to implement the surgical 3D video image stabilization and synthesis method according to any one of claims 1 to 3.
CN202110020768.5A 2021-01-07 2021-01-07 Binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method Active CN112866670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110020768.5A CN112866670B (en) 2021-01-07 2021-01-07 Binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method

Publications (2)

Publication Number Publication Date
CN112866670A 2021-05-28
CN112866670B 2021-11-23

Family

ID=76005107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110020768.5A Active CN112866670B (en) Binocular spatio-temporal adaptation-based surgical 3D video image stabilization and synthesis system and method

Country Status (1)

Country Link
CN (1) CN112866670B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140133762A1 (en) * 2012-11-14 2014-05-15 Seiko Epson Corporation Point Set Matching with Outlier Detection
CN106534833A (en) * 2016-12-07 2017-03-22 上海大学 Space and time axis joint double-viewpoint three dimensional video stabilizing method
CN107864374A (en) * 2017-11-17 2018-03-30 电子科技大学 A kind of binocular video digital image stabilization method for maintaining parallax
CN108765317A (en) * 2018-05-08 2018-11-06 北京航空航天大学 A kind of combined optimization method that space-time consistency is stablized with eigencenter EMD adaptive videos

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116935009A (en) * 2023-09-19 2023-10-24 中南大学 Operation navigation system for prediction based on historical data analysis
CN116935009B (en) * 2023-09-19 2023-12-22 中南大学 Operation navigation system for prediction based on historical data analysis

Also Published As

Publication number Publication date
CN112866670B (en) 2021-11-23

Similar Documents

Publication Publication Date Title
Allan et al. Stereo correspondence and reconstruction of endoscopic data challenge
US9105207B2 (en) Four dimensional image registration using dynamical model for augmented reality in medical applications
CN109040575B (en) Panoramic video processing method, device, equipment and computer readable storage medium
JP5153940B2 (en) System and method for image depth extraction using motion compensation
JP7127785B2 (en) Information processing system, endoscope system, trained model, information storage medium, and information processing method
Collins et al. Towards live monocular 3D laparoscopy using shading and specularity information
US8872928B2 (en) Methods and apparatus for subspace video stabilization
US9615075B2 (en) Method and device for stereo base extension of stereoscopic images and image sequences
CN108090954A (en) Abdominal cavity environmental map based on characteristics of image rebuilds the method with laparoscope positioning
CN112866670B (en) Operation 3D video image stabilization synthesis system and method based on binocular space-time self-adaptation
Ansari et al. Scalable dense monocular surface reconstruction
Lee et al. Fast 3D video stabilization using ROI-based warping
Zhou et al. Synthesis of stereoscopic views from monocular endoscopic videos
WO2020155024A1 (en) Method and apparatus for missing data processing of three dimensional trajectory data
Parchami et al. Endoscopic stereo reconstruction: A comparative study
CN115082537A (en) Monocular self-monitoring underwater image depth estimation method and device and storage medium
CN115797991A (en) Method for generating recognizable face image according to face side image
JP7105370B2 (en) Tracking device, learned model, endoscope system and tracking method
CN111178501B (en) Optimization method, system, electronic equipment and device for dual-cycle countermeasure network architecture
Ke et al. Towards real-time, multi-view video stereopsis
JP7105369B2 (en) Tracking device, learned model, endoscope system and tracking method
CN114494445A (en) Video synthesis method and device and electronic equipment
WO2017109997A1 (en) Image processing device, image processing method, and program
Wang et al. Adaptive video stabilization based on feature point detection and full-reference stability assessment
Weld et al. Regularising disparity estimation via multi task learning with structured light reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant