EP3549093A1 - Image processing device and method for producing in real time a digital composite image from a sequence of digital images of an interior of a hollow structure - Google Patents
Image processing device and method for producing in real time a digital composite image from a sequence of digital images of an interior of a hollow structure
- Publication number
- EP3549093A1 (application number EP16805382.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- transformation
- transforming
- coordinate system
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 239000002131 composite material Substances 0.000 title claims abstract description 41
- 238000012545 processing Methods 0.000 title claims abstract description 31
- 238000004519 manufacturing process Methods 0.000 title claims description 5
- 230000009466 transformation Effects 0.000 claims abstract description 165
- 230000001131 transforming effect Effects 0.000 claims abstract description 91
- 238000013507 mapping Methods 0.000 claims abstract description 26
- 238000005304 joining Methods 0.000 claims abstract description 14
- 238000001514 detection method Methods 0.000 claims abstract description 8
- 238000000034 method Methods 0.000 claims description 41
- 210000003932 urinary bladder Anatomy 0.000 claims description 13
- 238000005070 sampling Methods 0.000 claims description 12
- 210000000056 organ Anatomy 0.000 claims description 11
- 238000004590 computer program Methods 0.000 claims description 7
- 230000009183 running Effects 0.000 claims description 3
- 230000033001 locomotion Effects 0.000 description 20
- 238000000844 transformation Methods 0.000 description 16
- 238000004422 calculation algorithm Methods 0.000 description 14
- 239000011159 matrix material Substances 0.000 description 9
- 239000000203 mixture Substances 0.000 description 6
- 230000009471 action Effects 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 230000001965 increasing effect Effects 0.000 description 4
- 210000001525 retina Anatomy 0.000 description 4
- 239000013598 vector Substances 0.000 description 4
- 238000013459 approach Methods 0.000 description 3
- 238000001839 endoscopy Methods 0.000 description 3
- 238000003860 storage Methods 0.000 description 3
- 238000013519 translation Methods 0.000 description 3
- 230000004075 alteration Effects 0.000 description 2
- 238000002574 cystoscopy Methods 0.000 description 2
- 230000010339 dilation Effects 0.000 description 2
- 238000009472 formulation Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 238000012795 verification Methods 0.000 description 2
- 238000000342 Monte Carlo simulation Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000008602 contraction Effects 0.000 description 1
- 238000000354 decomposition reaction Methods 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000001681 protective effect Effects 0.000 description 1
- 238000012892 rational function Methods 0.000 description 1
- 238000007670 refining Methods 0.000 description 1
- 238000000611 regression analysis Methods 0.000 description 1
- 230000009897 systematic effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
Definitions
- the present invention relates to real-time digital image processing.
- Digital image stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high- resolution composite image.
- the invention relates to producing in real time a digital composite image from a sequence of digital images of an interior of a hollow structure recorded by an endoscopic camera device.
- the shape of hollow structures may be approximated better by a sphere than by a plane.
- spherical image stitching algorithms therefore seem more appropriate than planar image stitching algorithms when a digital composite image is to be produced from a sequence of digital images of an interior of a hollow structure recorded by an endoscopic camera device.
- An object of the present invention is to provide an improved image processing device for producing in real time a digital composite image from a sequence of digital images of an interior of a hollow structure recorded by an endoscopic camera device.
- an image processing device for producing in real time a digital composite image from a sequence of digital images of an interior of a hollow structure recorded by an endoscopic camera device, in particular of an interior of a hollow organ, such as a urinary bladder, recorded by a medical endoscopic camera device, so that the composite image has a wider field of view than the images of the sequence of images
- the image processing device comprising: a selecting unit configured for selecting a reference image and a further image from the sequence of images, wherein the reference image is specified in a global coordinate system of the composite image as a stereographic projection of a part of the interior of the hollow structure in a complex plane, wherein the further image is specified in a local coordinate system of the further image as a projection of a further part of the interior of the hollow structure in a projective space, and wherein the further image is overlapping the reference image; a key point detection unit configured for detecting global key points in the reference image and for detecting local key points in the further image; a transforming unit configured for transforming the further image into the global coordinate system based on the global key points and the local key points in order to produce a transformed further image
- the present invention may be useful in all applications in which a composite image of an interior of a hollow structure needs to be produced.
- the main applications of the invention may be seen in the field of medical endoscopy of an interior of a hollow organ, such as a urinary bladder, recorded by a medical endoscopic camera device.
- the invention allows producing composite images of an interior of a hollow structure which have fewer perspective distortions than composite images produced with prior art devices using a linear or quadratic stitching method. This is beneficial in all cases in which a composite image of an interior of a hollow structure needs to be produced.
- the invention may be used especially in the field of medical endoscopy of an interior of a hollow organ, such as a urinary bladder, as the techniques involved require a high degree of orientation, coordination, and fine motor skills on the part of the medical practitioner, due to the very limited field of view provided by the endoscope and the lack of relation between the orientation of the image and the physical environment.
- the device according to the invention needs to use fewer parameters, so that the computational effort is lowered. This leads to a reduced processing time for adding a further image to the global image. Furthermore, the inventive device is more reliable, as the needed parameters are determined by using a method which is more stable even if the field of view is small, so that the results are more robust in the sense that the likelihood of a misalignment of the further image is reduced.
- Each point X on the sphere is mapped onto the plane by extending the ray from the north pole through X onto the plane.
- the sphere is interpreted as the Riemann sphere and the projection plane as the complex plane C extended by the additional number infinity, denoted as C∞.
- C∞ denotes the complex plane extended by the additional number infinity
- the inverse mapping is defined as
- the stereographic projection s transforms the south pole (0, 0, -1)^T to the origin of the complex plane z = 0, the equator of the sphere to a circle with radius r = 2, and the north pole (0, 0, 1)^T to ∞.
- the point ∞ can be imagined to lie at a "very large distance" from the origin, and this point turns the complex plane into a geometrical surface of the nature of a sphere. Mapping the surface of a sphere onto a plane is free of distortion at the center of the projection plane, and distortion increases with the distance from the center. Angles are locally preserved [7, pp. 22; 8, pp. 162].
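- Equations (1) and (2) are not reproduced in this excerpt. As a hedged illustration only, the following sketch uses the standard formulas for projecting from the north pole of the unit sphere onto the plane tangent at the south pole, which match the stated properties (south pole to 0, equator to a circle of radius 2, north pole to ∞); the function names are illustrative, not taken from the patent.

```python
import numpy as np

def stereographic_project(X):
    """Project a point X on the unit sphere from the north pole (0, 0, 1)
    onto the plane tangent at the south pole, returned as a complex number."""
    x1, x2, x3 = X
    return 2.0 * (x1 + 1j * x2) / (1.0 - x3)   # the north pole itself maps to infinity

def stereographic_inverse(z):
    """Inverse mapping: take a complex number z back onto the unit sphere."""
    r2 = (z * np.conj(z)).real
    d = 4.0 + r2
    return np.array([4.0 * z.real / d, 4.0 * z.imag / d, (r2 - 4.0) / d])

# sanity checks consistent with the text: south pole -> 0, equator -> |z| = 2
assert abs(stereographic_project((0.0, 0.0, -1.0))) < 1e-12
assert abs(abs(stereographic_project((1.0, 0.0, 0.0))) - 2.0) < 1e-12
```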
- the invention addresses this problem by using a transformation for transforming the further image into the global coordinate system, wherein the transformation comprises a Möbius transformation in the complex plane, an isomorphic mapping between the complex plane and the projective space, and a perspective transformation in the projective space.
- the Möbius transformation is a rational function of the complex plane, defined as
- Möbius transformations are bijective conformal mappings of the Riemann sphere to itself.
- any bijective conformal automorphism of the Riemann sphere is a Möbius transformation. Therefore, any rigid motion of the Riemann sphere can be expressed as a Möbius transformation. These motions include translation in any direction and rotation about any axis. This implies that any transformation according to (7) of the complex plane corresponds to some movement of the Riemann sphere [7, Chap. 2; 8, Chap. 3].
- Möbius transformations are conformal mappings; they preserve angles and map circles to circles. We can see a relation to similarity transformations in the Euclidean case, which also preserve angles. Similarity transformations can only describe the action of a camera with the optical axis perpendicular to the scene plane. Analogously, a Möbius transformation is able to model optical flow that results from a camera moving along the surface of the sphere with the optical axis perpendicular to the plane that is tangent to the sphere's surface. This can be explained by the characteristics of stereographic projection. In (3), the stereographic projection has been shown to be equivalent to the action of a projective camera located at the north pole. (1) and (3) only describe one possible way of defining a stereographic projection.
- any point on the sphere can be chosen as the projection center C.
- the projection plane can then be any plane perpendicular to the diameter through C (i.e. the projection plane is parallel to the plane through C tangential to the sphere) [9]. So, the projection by any projective camera positioned at C with viewing direction through the sphere's center and focal length f ≠ 0 is equivalent to a stereographic projection.
- a Möbius transformation has six degrees of freedom and can therefore be determined from three point correspondences. While a Möbius transformation is defined by four complex coefficients a, b, c, d, these are only unique up to a common scale factor.
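- As an illustration of determining a Möbius transformation from three point correspondences, the following hedged sketch sets up the linear constraint a·z + b − c·z·w − d·w = 0 implied by w = (az + b)/(cz + d) and solves for the four complex coefficients up to scale; function names and the test values are illustrative, not taken from the patent.

```python
import numpy as np

def mobius_from_three_points(z, w):
    """Estimate Möbius coefficients (a, b, c, d), up to a common complex scale,
    from three correspondences z[k] -> w[k] via the constraint
    a*z + b - c*z*w - d*w = 0, i.e. the right null vector of a 3x4 complex matrix."""
    A = np.array([[zk, 1.0, -zk * wk, -wk] for zk, wk in zip(z, w)], dtype=complex)
    _, _, vh = np.linalg.svd(A)
    return vh[-1].conj()                      # null vector = (a, b, c, d)

def apply_mobius(coeffs, z):
    a, b, c, d = coeffs
    return (a * z + b) / (c * z + d)

# three correspondences fix the six real degrees of freedom
z = np.array([0.0 + 0j, 1.0 + 0j, 1j])
w = apply_mobius((1.0, 2j, 0.0, 1.0), z)      # synthetic ground truth: z -> z + 2i
est = mobius_from_three_points(z, w)
assert np.allclose(apply_mobius(est, z), w)
```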
- Such a perspective transformation can be represented by a 3 × 3 matrix H, mapping homogeneous pixel coordinates.
- H is called a projectivity or homography.
- the relation between two projections of a world point X ∈ P³ by two independent perspective cameras is given by
- the general two-dimensional perspective transformation has 8 degrees of freedom.
- the general homography can be used to model image motion which results from a perspective camera undergoing arbitrary motion. Detailed derivations of this relationship from general perspective projections can be found in Hartley and Zisserman [6, pp. 325] and Szeliski [11, pp. 56].
- this perspective transform "virtually" aligns the projection plane (image sensor of the camera) with the surface patch.
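- Equation (10) is not reproduced in the excerpt; as a minimal sketch of what "mapping homogeneous pixel coordinates" means in practice, a 3 × 3 homography is applied to (x, y, 1) and the result is dehomogenized (the example matrix is illustrative only).

```python
import numpy as np

def apply_homography(H, x):
    """Map a 2-D pixel coordinate x through a 3x3 homography H:
    lift to homogeneous coordinates (x, y, 1), multiply, then dehomogenize."""
    u, v, w = H @ np.array([x[0], x[1], 1.0])
    return np.array([u / w, v / w])

# example: a pure translation by (5, -3) expressed as a homography
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(apply_homography(H, (2.0, 2.0)), [7.0, -1.0])
```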
- Let z ∈ C∞ be the stereographic projection of a point X on the sphere according to (1).
- the relation between an image point x viewed by a projective camera located inside the unit sphere and the point z ∈ C∞ can be expressed by the concatenation of a perspective transformation and a Möbius transformation.
- Let the perspective transformation h be defined in terms of the homography H as
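- The defining formula is missing from the excerpt. A hedged sketch of the described concatenation, under the assumption that h applies the homography H to the pixel and interprets the dehomogenized result as a complex number, could look as follows (all names are illustrative).

```python
import numpy as np

def perspective_to_complex(H, x):
    """h: apply the homography H to pixel x and read the result as a complex number
    (assumption: first coordinate = real part, second coordinate = imaginary part)."""
    u, v, w = H @ np.array([x[0], x[1], 1.0])
    return (u + 1j * v) / w

def mobius(coeffs, z):
    a, b, c, d = coeffs
    return (a * z + b) / (c * z + d)

def local_to_global(H, coeffs, x):
    """Concatenation described in the text: perspective transformation h,
    followed by a Möbius transformation m, i.e. z = m(h(x))."""
    return mobius(coeffs, perspective_to_complex(H, x))
```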
- the transformation determination unit is configured in such a way that the Möbius transformation is a simplified Möbius transformation.
- the combined Möbius and perspective transform may be defined in such a way that the unconstrained homography is applied to the image coordinates and the Möbius transformation is restricted to an inversion, which corresponds to a rotation of the Riemann sphere.
- Rotation of the Riemann sphere can be defined by a Möbius transformation in the following way: for any point, its antipode on the Riemann sphere
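- The formula is cut off in the excerpt. A standard formulation, stated here only as background and not as the patent's own equation, uses the unit-equator convention, in which the antipode of a point z is −1/z̄ and every rotation of the Riemann sphere is a Möbius transformation with a unitary coefficient matrix (the constants rescale for the radius-2 convention used above):

$$ m_R(z) = \frac{a\,z + b}{-\bar{b}\,z + \bar{a}}, \qquad |a|^2 + |b|^2 = 1. $$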
- the transformation determination unit is configured in such a way that the perspective transformation is a reduced perspective transformation.
- This transformation may be called a Möbius affine transform. Table 1 summarizes the motion models for spherical stitching.
- the transformation determination unit is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs by using a direct linear transformation.
- DLT algorithm (Direct Linear Transform algorithm)
- a complex linear system of equations can be set up from n > 3 point correspondences to determine the transformation parameters:
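- The patent's complex system of equations is not reproduced here. As a hedged illustration of the DLT principle on the simpler, standard homography case (two real equations per correspondence, solved via the SVD null vector), a sketch might look as follows.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: estimate a 3x3 homography from n >= 4
    correspondences src[i] -> dst[i] by stacking two linear equations per
    correspondence and taking the right null vector of the stacked matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, vh = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vh[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the arbitrary overall scale
```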
- the transformation determination unit is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs by using a least squares method.
- the method of least squares is an approach in regression analysis to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation.
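- A minimal numerical illustration of this idea (not the patent's specific system): an overdetermined linear system A·p = b is solved so that the sum of the squared residuals over all equations is minimized.

```python
import numpy as np

# four equations, two unknowns: least squares picks the parameters p that
# minimize the sum of squared errors made in every single equation
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.1, 0.9, 2.1, 2.9])
p, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```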
- the transformation determination unit is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs by using a random sampling consensus method. It is unavoidable that the feature matching algorithm produces some false matches. Random sample consensus (RANSAC) has been established to identify and remove such outliers.
- RANSAC Random sample consensus
- the original RANSAC algorithm was introduced in 1981 by Fischler and Bolles [12]. It is still one of the most widely used robust estimators in the field of computer vision [13]. Although it works well in practice, many different contributions improve the original algorithm, aiming either at faster processing or higher robustness.
- Examples are MSAC and MLESAC by Torr and Zisserman [14], and locally optimized RANSAC and PROSAC by Chum et al. [13].
- RANSAC is a hypothesize-and-verify method.
- a model is generated based on a minimal set of point correspondences randomly chosen from all correspondences. This model is verified by the remaining point correspondences. Let, for example, the model be represented by a homography, calculated from four point correspondences.
- RANSAC calculates an error measure between the model hypothesis and each remaining point correspondence. If this error measure is below a given threshold, the point correspondence is considered an inlier correspondence, otherwise an outlier correspondence. The quality of the current model hypothesis is given by the number of inliers. This hypothesize-and-verify procedure is repeated iteratively until no further improvement of the model is expected.
- a theoretical discussion of the optimal termination criterion can be found in [6, pp. 120-121].
- the final model is accepted if a minimal number of inliers is reached and if the ratio of inliers to outliers exceeds a given threshold. If a model has been found which satisfies both conditions, a final refinement step re-calculates the model from all inlier correspondences by least squares optimization.
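- A hedged sketch of this hypothesize-and-verify scheme is shown below for a 2-D affine motion model; the patent applies the same scheme to its combined Möbius/perspective model, and the thresholds used here are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform (3x2 parameter matrix) mapping src to dst."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac(src, dst, n_min=3, iters=500, thresh=2.0, min_inliers=10, min_ratio=0.5, rng=None):
    """Hypothesize-and-verify: fit a model on a minimal random sample, count
    correspondences whose error is below `thresh` (inliers), keep the best
    hypothesis, and finally refit it on all of its inliers."""
    rng = np.random.default_rng() if rng is None else rng
    ones = np.ones((len(src), 1))
    best = None
    for _ in range(iters):
        idx = rng.choice(len(src), n_min, replace=False)
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(np.hstack([src, ones]) @ M - dst, axis=1)
        inliers = err < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    n_in = int(best.sum())
    # acceptance test: minimal inlier count and minimal inlier/outlier ratio
    if n_in < min_inliers or n_in / max(len(src) - n_in, 1) < min_ratio:
        return None, best
    return fit_affine(src[best], dst[best]), best
```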
- the transformation determination unit is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs by using a guided sampling method.
- the guided sampling method was proposed by Tordoff and Murray [15] and adapted for PROSAC by Chum et al. [13]. It is applied here in order to speed up the search for the image transformation.
- Tordoff and Murray replaced the random sampling of the original RANSAC by a guided sampling. It uses information about the quality of point correspondences which is readily available during feature-based image registration. A correspondence score is often calculated during feature matching, as e.g.
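- In a hedged sketch, guided sampling replaces the uniform draw of the minimal sample by a draw weighted with the matching score of each correspondence; the score values and the proportional weighting shown here are illustrative assumptions.

```python
import numpy as np

def guided_sample(scores, n_min, rng):
    """Draw a minimal sample with probability proportional to each
    correspondence's matching score instead of sampling uniformly."""
    p = np.asarray(scores, dtype=float)
    return rng.choice(len(scores), size=n_min, replace=False, p=p / p.sum())

# drop-in replacement for the uniform sample inside a RANSAC-style loop
rng = np.random.default_rng(0)
idx = guided_sample(scores=[0.9, 0.2, 0.8, 0.7, 0.1, 0.6], n_min=3, rng=rng)
```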
- the invention provides an endoscopic camera system for producing in real-time a digital composite image
- the endoscopic camera system comprising: an endoscopic camera device configured for recording a sequence of digital images of an interior of a hollow structure, in particular a medical endoscopic camera device configured for recording a sequence of digital images of an interior of a hollow organ, such as a urinary bladder; and an image processing device according to the invention.
- the invention provides a method for producing in real time a digital composite image from a sequence of digital images of an interior of a hollow structure recorded by an endoscopic camera device, in particular of an interior of a hollow organ, such as a urinary bladder, recorded by a medical endoscopic camera device, so that the composite image has a wider field of view than the images of the sequence of images
- the method comprising: selecting a reference image and a further image from the sequence of images by using a selecting unit, wherein the reference image is specified in a global coordinate system of the composite image as a stereographic projection of a part of the interior of the hollow structure in a complex plane, wherein the further image is specified in a local coordinate system of the further image as a projection of a further part of the interior of the hollow structure in a projective space, and wherein the further image is overlapping the reference image; detecting global key points in the reference image and detecting local key points in the further image by using a key point detection unit; transforming the further image into the global coordinate system by using a transforming unit
- the invention provides a computer program for, when running on a processor, executing the method according to the invention.
- Fig. 1 illustrates an embodiment of an endoscopic camera system
- Fig. 2 depicts an example of a stereographic projection to a complex plane, wherein the projection center is located at the north pole of a unit sphere, and wherein the complex plane is tangent to the south pole of the unit sphere;
- Fig. 3 illustrates that an action of a fixed camera being positioned at a north pole of a unit sphere is identical to the stereographic projection shown in Fig. 2;
- Fig. 4 depicts an example of mapping image points of a movable camera being positioned at an arbitrary position within the unit sphere and points on the sphere being represented by their respective complex equivalent;
- Fig. 5 depicts an example of a stereographic projection to a complex plane, wherein the projection center is located at an arbitrary position on a unit sphere, and wherein the complex plane is arbitrary, but perpendicular to a diameter starting at the respective projection center;
- Figs. 6 to 8 illustrate the transformation of a further image into the global coordinate system by using the transformation for transforming the further image into the global coordinate system.
- Fig. 1 illustrates an embodiment of an endoscopic camera system comprising an image processing device 1 according to the invention in a schematic view.
- the invention provides an image processing device 1 for producing in real time a digital composite image CI from a sequence SI of digital images of an interior of a hollow structure HS (see Figs. 2 to 8) recorded by an endoscopic camera device 2, in particular of an interior of a hollow organ HS, such as a urinary bladder HS, recorded by a medical endoscopic camera device 2, so that the composite image CI has a wider field of view than the images of the sequence SI of images, the image processing device 1 comprising: a selecting unit 3 configured for selecting a reference image RI and a further image FI from the sequence of images SI, wherein the reference image RI is specified in a global coordinate system of the composite image CI as a stereographic projection of a part of the interior of the hollow structure HS in a complex plane CP (see Figs.
- the further image FI is specified in a local coordinate system of the further image FI as a projection of a further part of the interior of the hollow structure HS in a projective space PS (see Figs. 4 and 6), and wherein the further image FI is overlapping the reference image RI; a key point detection unit 4 configured for detecting global key points GKP in the reference image RI and for detecting local key points LKP in the further image FI; a transforming unit 5 configured for transforming the further image FI into the global coordinate system based on the global key points GKP and based on the local key points LKP in order to produce a transformed further image TFI, wherein the transforming unit 5 comprises a key point matching unit 6 configured for determining key point pairs KPP, wherein each of the key point pairs KPP comprises one global key point GKP of the global key points GKP and one local key point LKP of the local key points LKP, wherein the global key point GKP and the local key point LKP of each of the
- the transformation determination unit 7 is configured in such a way that the Möbius transformation is a simplified Möbius transformation. According to a preferred embodiment of the invention, the transformation determination unit 7 is configured in such a way that the perspective transformation is a reduced perspective transformation.
- the transformation determination unit 7 is configured in such a way that the parameters of the transformation for transforming the further image FI into the global coordinate system are determined from the at least some of the key point pairs KPP by using a direct linear transformation.
- the transformation determination unit 7 is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs KPP by using a least squares method.
- the transformation determination unit 7 is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs KPP by using a random sampling consensus method.
- the transformation determination unit 7 is configured in such a way that the parameters of the transformation for transforming the further image into the global coordinate system are determined from the at least some of the key point pairs KPP by using a guided sampling method.
- the invention provides an endoscopic camera system for producing in real time a digital composite image CI, the endoscopic camera system comprising: an endoscopic camera device 2 configured for recording a sequence SI of digital images of an interior of a hollow structure HS, in particular a medical endoscopic camera device 2 configured for recording a sequence SI of digital images of an interior of a hollow organ HS, such as a urinary bladder; and an image processing device 1 according to the invention.
- the invention provides a method for producing in real time a digital composite image CI from a sequence SI of digital images of an interior of a hollow structure HS recorded by an endoscopic camera device 2, in particular of an interior of a hollow organ HS, such as a urinary bladder HS, recorded by a medical endoscopic camera device 2, so that the composite image CI has a wider field of view than the images of the sequence SI of images, the method comprising: selecting a reference image RI and a further image FI from the sequence of images SI by using a selecting unit 3, wherein the reference image RI is specified in a global coordinate system of the composite image CI as a stereographic projection of a part of the interior of the hollow structure HS in a complex plane CP, wherein the further image FI is specified in a local coordinate system of the further image FI as a projection of a further part of the interior of the hollow structure HS in a projective space PS, and wherein the further image FI is overlapping the reference image RI
- the invention provides a computer program for, when running on a processor, executing the method according to the invention.
- Fig. 2 depicts an example of a stereographic projection to a complex plane CP, wherein the projection center C is located at the north pole of a unit sphere, which is an approximation for the shape of a hollow structure HS, and wherein the complex plane is tangent to the south pole of the unit sphere.
- the stereographic projection maps points X on the unit sphere to points z in the complex plane.
- the stereographic projection may be described according to (1) and (2).
- Fig. 3 illustrates that an action of an imaginary fixed camera FC being positioned at a north pole of a unit sphere is identical to the stereographic projection shown in Fig. 2.
- the imaginary fixed camera FC may have the properties P0 as mathematically described by (3). Projecting a point X ∈ R³ by this camera may be described according to (4).
- Fig. 4 depicts an example of mapping image points x of a movable camera MC being positioned at an arbitrary position within the unit sphere and points X on the sphere being represented by their respective complex equivalent z in the complex plane CP.
- mapping between image points x of the movable camera MC and points X on the sphere represented by their respective complex equivalent points z can be described by a homography, assuming that the sphere is planar within the field of view of the movable camera MC.
- Such perspective transformation can be represented by a 3 x 3 matrix H, mapping homogeneous pixel coordinates as defined in (10).
- Fig. 5 depicts an example of a stereographic projection to a projection plane, which may be a complex plane CP as discussed above, wherein the projection center C is located at an arbitrary position on a unit sphere, and wherein the projection plane CP' is arbitrary, but perpendicular to a diameter starting at the respective projection center C. It has to be noted that any definition with the projection center C on the surface of the unit sphere and the projection plane CP' perpendicular to the respective diameter is a valid definition of the stereographic projection.
- the projection by any projective camera positioned on the unit sphere with viewing direction through the sphere's center and focal length f ≠ 0 is equivalent to a stereographic projection. So, changing the projection center C as well as the projection plane CP' is tantamount to moving a projective camera along the sphere's surface (and altering its focal length).
- the camera located at the north pole projects the world point X to the image point represented by z.
- the camera located at projection center C projects the world point X to the image point represented by z'. Points z may be transformed to points z' using a Möbius transform as defined in (8).
- Figs. 6 to 8 illustrate the transformation of a further image FI into the global coordinate system by using the transformation for transforming the further image into the global coordinate system.
- Fig. 6 illustrates a first step of the transformation.
- the further image FI is specified in a local coordinate system of the further image FI as a projection of a part of the interior of the hollow structure HS to an image plane IP in a projective space PS.
- a perspective projection which is the inverse of the perspective projection specified in (10) transforms each point x of the further image FI to a point of a further image plane FIP in the projective space PS', which locally approximates the interior surface of the hollow structure HS.
- the perspective projection uses a 3 × 3 matrix which is the inverse of the matrix H defined in (10).
- Fig. 7 illustrates a second step of the transformation.
- the isomorphic mapping used in this step is the inverse of the isomorphic mapping between the projective space and the complex plane defined above.
- the isomorphic mapping maps each point of the further image plane FIP to a point z' in an intermediate complex plane CP'. Position and orientation of the intermediate complex plane CP' are identical to those of the further image plane FIP shown in Fig. 6.
- the position of the intermediate projection center C may be determined by a Möbius transform as defined in (8) or (14).
- Fig. 8 illustrates a third step of the transformation.
- the Möbius transformation m⁻¹, which may be the inverse of the full Möbius transformation m as defined in (8) or the inverse of the reduced Möbius transformation m as defined in (14), maps each point z' to a point z of the complex plane CP in which the reference image RI is specified, so that each point z is transformed into the global coordinate system of the reference image RI.
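- Taken together, the three steps of Figs. 6 to 8 can be sketched for a single pixel as follows; this is a hedged illustration under the same assumptions as the earlier sketches, since equations (8), (10) and (14) are not reproduced in the excerpt.

```python
import numpy as np

def warp_point_to_global(H, mobius_coeffs, x):
    """Three-step warp of Figs. 6-8 for one pixel x of the further image:
    (1) inverse perspective projection with H^-1, (2) isomorphic mapping of the
    plane point to a complex number z', (3) inverse Möbius transformation m^-1
    mapping z' into the global complex plane of the reference image."""
    # step 1: inverse of the homography (10)
    u, v, w = np.linalg.inv(H) @ np.array([x[0], x[1], 1.0])
    # step 2: read the dehomogenized plane point as a complex number z'
    z_prime = (u + 1j * v) / w
    # step 3: inverse Möbius transformation; if m(z) = (a z + b) / (c z + d),
    # then m^-1(z') = (d z' - b) / (-c z' + a)
    a, b, c, d = mobius_coeffs
    return (d * z_prime - b) / (-c * z_prime + a)
```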
- embodiments of the inventive device and system can be implemented in hardware and/or in software.
- the implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that one or more or all of the functionalities of the inventive device or system is performed.
- a programmable logic device for example a field programmable gate array
- a field programmable gate array may cooperate with a microprocessor in order to perform one or more or all of the functionalities of the devices and systems described herein.
- Although aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Depending on certain implementation requirements, embodiments of the inventive method can be implemented using an apparatus comprising hardware and/or software.
- the implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
- a digital storage medium, for example a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
- embodiments of the inventive method can be implemented using an apparatus comprising hardware and/or software.
- Some or all of the method steps may be executed by (or using) a hardware apparatus, like a microprocessor, a programmable computer or an electronic circuit. One or more of the most important method steps may be executed by such an apparatus.
- Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system such that one of the methods described herein is performed.
- embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer.
- the program code may for example be stored on a machine readable carrier.
- Further embodiments comprise the computer program for performing one of the methods described herein, which is stored on a machine readable carrier or a non-transitory storage medium.
- a further embodiment comprises a processing means, for example a computer, or a programmable logic device, in particular a processor comprising hardware, configured or adapted to perform one of the methods described herein.
- a further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
- the methods are advantageously performed by any apparatus comprising hardware and/or software.
- A. Can, C. V. Stewart, B. Roysam, and H. L. Tanenbaum. A feature-based, robust, hierarchical algorithm for registering pairs of images of the curved human retina. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):347-364, 2002.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2016/079323 WO2018099556A1 (fr) | 2016-11-30 | 2016-11-30 | Dispositif de traitement d'image et procédé de production en temps réel d'une image composite numérique à partir d'une séquence d'images numériques d'un intérieur d'une structure creuse |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3549093A1 true EP3549093A1 (fr) | 2019-10-09 |
Family
ID=57471859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16805382.5A Withdrawn EP3549093A1 (fr) | 2016-11-30 | 2016-11-30 | Dispositif de traitement d'image et procédé de production en temps réel d'une image composite numérique à partir d'une séquence d'images numériques d'un intérieur d'une structure creuse |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3549093A1 (fr) |
WO (1) | WO2018099556A1 (fr) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108876858A (zh) * | 2018-07-06 | 2018-11-23 | 北京字节跳动网络技术有限公司 | 用于处理图像的方法和装置 |
CN109697734B (zh) * | 2018-12-25 | 2021-03-09 | 浙江商汤科技开发有限公司 | 位姿估计方法及装置、电子设备和存储介质 |
JP7110397B2 (ja) * | 2019-01-09 | 2022-08-01 | オリンパス株式会社 | 画像処理装置、画像処理方法および画像処理プログラム |
CN110443154B (zh) * | 2019-07-15 | 2022-06-03 | 北京达佳互联信息技术有限公司 | 关键点的三维坐标定位方法、装置、电子设备和存储介质 |
CN111524071B (zh) * | 2020-04-24 | 2022-09-16 | 安翰科技(武汉)股份有限公司 | 胶囊内窥镜图像拼接方法、电子设备及可读存储介质 |
CN113362438A (zh) * | 2021-06-30 | 2021-09-07 | 北京百度网讯科技有限公司 | 全景渲染的方法、装置、电子设备、介质及程序 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2961218A1 (fr) * | 2014-09-17 | 2016-03-24 | Taris Biomedical Llc | Methodes et systemes de cartographie diagnostique de la vessie |
-
2016
- 2016-11-30 EP EP16805382.5A patent/EP3549093A1/fr not_active Withdrawn
- 2016-11-30 WO PCT/EP2016/079323 patent/WO2018099556A1/fr unknown
Also Published As
Publication number | Publication date |
---|---|
WO2018099556A1 (fr) | 2018-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3549093A1 (fr) | Dispositif de traitement d'image et procédé de production en temps réel d'une image composite numérique à partir d'une séquence d'images numériques d'un intérieur d'une structure creuse | |
US9729787B2 (en) | Camera calibration and automatic adjustment of images | |
US11568516B2 (en) | Depth-based image stitching for handling parallax | |
US10334168B2 (en) | Threshold determination in a RANSAC algorithm | |
EP3428875A1 (fr) | Procédés et appareils de traitement d'image panoramique | |
CN110070598B (zh) | 用于3d扫描重建的移动终端及其进行3d扫描重建方法 | |
US20120306874A1 (en) | Method and system for single view image 3 d face synthesis | |
CN103839227B (zh) | 鱼眼图像校正方法和装置 | |
CN110070564A (zh) | 一种特征点匹配方法、装置、设备及存储介质 | |
GB2567245A (en) | Methods and apparatuses for depth rectification processing | |
Bastanlar et al. | Multi-view structure-from-motion for hybrid camera scenarios | |
Wan et al. | Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform | |
CN117173012A (zh) | 无监督的多视角图像生成方法、装置、设备及存储介质 | |
Manda et al. | Image stitching using ransac and bayesian refinement | |
WO2018150086A2 (fr) | Procédés et appareils pour la détermintion de positions d'appareils de capture d'image multidirectionnelle | |
Zhu et al. | Homography estimation based on order-preserving constraint and similarity measurement | |
Xu et al. | Real-time keystone correction for hand-held projectors with an RGBD camera | |
Ju et al. | Panoramic image generation with lens distortions | |
Yu et al. | Plane-based calibration of cameras with zoom variation | |
JPWO2019244200A1 (ja) | 学習装置、画像生成装置、学習方法、画像生成方法及びプログラム | |
Shimizu et al. | Robust and accurate image registration with pixel selection | |
Dib et al. | A real time visual SLAM for RGB-D cameras based on chamfer distance and occupancy grid | |
Lee et al. | Fast panoramic image generation method using morphological corner detection | |
Sakamoto et al. | Homography optimization for consistent circular panorama generation | |
Venjarski et al. | Automatic Image Stitching for Stereo Spherical Image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20190522 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: ERNST, ANDREAS Inventor name: AVENHAUS, MALTE Inventor name: MUENZENMAYER, CHRISTIAN Inventor name: ZILLY, FREDERIK Inventor name: BENZ, MICHAELA Inventor name: WITTENBERG, THOMAS Inventor name: BERGEN, TOBIAS |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20210408 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20210819 |