CN109493352B - Three-dimensional image area contour generation method based on GPU parallel acceleration - Google Patents

Three-dimensional image area contour generation method based on GPU parallel acceleration Download PDF

Info

Publication number
CN109493352B
CN109493352B CN201811215584.9A CN201811215584A CN109493352B CN 109493352 B CN109493352 B CN 109493352B CN 201811215584 A CN201811215584 A CN 201811215584A CN 109493352 B CN109493352 B CN 109493352B
Authority
CN
China
Prior art keywords
view image
pixel set
left view
triangulated
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811215584.9A
Other languages
Chinese (zh)
Other versions
CN109493352A (en
Inventor
黄辉
赵汉理
吴承文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN201811215584.9A priority Critical patent/CN109493352B/en
Publication of CN109493352A publication Critical patent/CN109493352A/en
Application granted granted Critical
Publication of CN109493352B publication Critical patent/CN109493352B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a three-dimensional image area contour generation method based on GPU parallel acceleration, which comprises the steps of: giving a left view image, a right view image and a region contour pixel set in the left view image; obtaining a robustly matched feature pixel set pair according to GPU-accelerated scale-invariant feature transform (SIFT) and a GPU-accelerated random sample consensus (RANSAC) check; uniformly sampling the edge lines to obtain an edge-sampled pixel set in the left view image; performing additional random sampling to obtain a triangulated pixel set in the left view image; calculating a planar triangular mesh of the left view image according to a GPU-accelerated Delaunay triangulation method; obtaining a triangulated pixel set in the right view image by using a GPU-accelerated conjugate gradient method; and calculating a region contour pixel set in the right view image, wherein the closed loop line formed by this region contour pixel set is the region contour result in the right view image. By implementing the method, the parallelism of region contour generation can be effectively improved, and the calculation efficiency can be improved.

Description

Three-dimensional image area contour generation method based on GPU parallel acceleration
Technical Field
The invention belongs to the technical field of graphic image processing, and particularly relates to a three-dimensional image region contour generation method based on GPU parallel acceleration, which is used for solving the problem of region contour consistency in left and right views of the existing three-dimensional image.
Background
When people observe a surrounding three-dimensional scene, because the positions of the left eye and the right eye differ, the two eyes observe a left view image and a right view image with a certain horizontal parallax. The human brain can perceive a three-dimensional scene with a sense of depth from these two images. The left and right view images with horizontal parallax together form a stereoscopic image, which provides a three-dimensional viewing experience that a traditional single image cannot convey; stereoscopic images therefore have wide and important applications in fields such as virtual reality and 3D video games.
With the increasing popularity of stereoscopic images, people expect to edit them as conveniently as ordinary images. For an ordinary image, if a certain area in the image is to be selected, its outline can be edited directly with the mouse. For a stereoscopic image, if one wants to select a certain region, the corresponding regions must be determined in the left view and the right view of the stereoscopic image at the same time. However, directly applying conventional image editing techniques to a stereoscopic image is likely to break the consistency between the left and right views it contains (see T.J. Mu, J.H. Wang, S.P. Du, et al. Stereoscopic image completion and depth recovery [J]. The Visual Computer, 2014, 30(6-8): 833-843). Therefore, in the existing method, the region contour in the left view of the stereoscopic image is edited with the mouse, and a contour migration technique is then applied to automatically compute the right view region contour corresponding to the left view region contour. However, the existing method is implemented as a serial CPU method, and its execution efficiency is low when processing higher-resolution images, which degrades the interactive experience during editing.
GPU stands for Graphics Processing Unit. Owing to their highly parallel processing capability, GPUs have become high-performance parallel coprocessors for CPUs. Nowadays, more and more CPU-based serial methods are being optimized and reimplemented as GPU parallel accelerated methods in order to improve running efficiency. An image, as a two-dimensional lattice of pixels, is very well suited as data for GPU parallel processing. However, the existing stereoscopic image region contour generation method is implemented as a serial CPU method, and its execution efficiency is low when processing higher-resolution images.
Disclosure of Invention
In view of the defects of the prior art, the embodiment of the invention aims to provide a stereoscopic image region contour generation method based on GPU parallel acceleration, which can effectively improve the calculation efficiency of stereoscopic image region contour generation.
The embodiment of the invention provides a three-dimensional image area contour generation method based on GPU parallel acceleration, which comprises the following steps:
Step S101, giving a left view image S and a right view image T of a stereoscopic image, and a region contour pixel set C_S in the left view image S; the region contour line in S is the closed loop line formed by the region contour pixel set C_S.
Step S102, according to the left view image S and the right view image T in step S101, calculating a matched feature pixel set pair with a GPU parallel accelerated scale-invariant feature transform (SIFT) method, and removing feature pixels with excessive matching errors with a GPU parallel accelerated random sample consensus (RANSAC) check to obtain a robustly matched feature pixel set pair, denoted as (F_S, F_T).
Step S103, according to the given left view image S, uniformly sampling pixels on the four edge lines (top, bottom, left and right) of the left view image S to obtain an edge-sampled pixel set in the left view image S, denoted as B_S.
Step S104, combining the region contour pixel set C_S in the left view image S given in step S101, the robustly matched feature pixel set F_S in the left view image S obtained in step S102, and the edge-sampled pixel set B_S in the left view image S obtained in step S103 into an initial pixel set in the left view image S, denoted as A_S, that is, A_S = F_S ∪ C_S ∪ B_S. According to A_S, additional random sampling is performed on the left view image S to obtain an additionally random-sampled pixel set in the left view image S, denoted as R_S, and A_S and R_S are combined into the triangulated pixel set in the left view image S, denoted as T_S, that is, T_S = A_S ∪ R_S.
Step S105, according to the triangulated pixel set T_S in the left view image S obtained in step S104 and the region contour pixel set C_S in the left view image S given in step S101, calculating a planar triangular mesh of the left view image S, denoted as P_S, with the closed loop line formed by the region contour pixel set C_S as constraint edges of the triangulation, using a Delaunay triangulation method based on GPU parallel acceleration.
Step S106, according to the robustly matched feature pixel set F_T in the right view image T obtained in step S102, the triangulated pixel set T_S in the left view image S obtained in step S104 and the planar triangular mesh P_S of the left view image S obtained in step S105, establishing an energy function E for calculating the triangulated pixel set in the right view image T, denoted as T_T; minimizing the energy function E with the least squares method to obtain a sparse linear equation set whose unknowns are the two-dimensional coordinates of the pixels in the triangulated pixel set T_T in the right view image T; solving the sparse linear equation set efficiently and in parallel with a conjugate gradient method in compressed sparse row storage format based on GPU parallel acceleration, completing the minimization of the energy function and obtaining the triangulated pixel set T_T in the right view image T.
Step S107, according to the region contour pixel set C_S in the left view image S given in step S101, the triangulated pixel set T_S in the left view image S obtained in step S104 and the triangulated pixel set T_T in the right view image T calculated in step S106, finding the position of C_T in T_T by using the correspondence between T_S and T_T together with the position of C_S in T_S, and collecting the pixels at the corresponding positions from T_T to form the set C_T. The obtained C_T is thus the region contour pixel set in the right view image T of the stereoscopic image that matches the region contour pixel set C_S in the left view image S. The region contour line in T is the closed loop line formed by the region contour pixel set C_T.
The embodiment of the invention has the following beneficial effects:
The method adopts a GPU parallel accelerated scale-invariant feature transform method to calculate the matched feature pixel set pair, a GPU parallel accelerated random sample consensus check to remove feature pixels with excessive matching errors, a Delaunay triangulation method based on GPU parallel acceleration to calculate the planar triangular mesh of the left view image, and a conjugate gradient method in compressed sparse row storage format based on GPU parallel acceleration to solve the sparse linear equation set, thereby effectively improving the parallelism of stereoscopic image region contour generation and the calculation efficiency of stereoscopic image region contour generation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for generating a contour of a stereoscopic image area based on GPU parallel acceleration according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, in an embodiment of the present invention, a method for generating a contour of a stereo image region based on GPU parallel acceleration is provided, where the method includes:
Step S101, giving a left view image S and a right view image T of a stereoscopic image, and a region contour pixel set C_S in the left view image S; the region contour line in S is the closed loop line formed by the region contour pixel set C_S.
The region contour pixel set in the right view image T of the stereoscopic image that matches the region contour pixel set C_S in the left view image S is denoted as C_T; the region contour line in T is the closed loop line formed by the region contour pixel set C_T. The goal of the method is therefore to find C_T, the region contour pixel set in the right view image T that matches the region contour pixel set C_S in the left view image S.
Step S102, based on the left view image S and the right view image T in step S101, the GPU parallel accelerated scale-invariant feature transform (SIFT) method computes the matched feature pixel set pair (F0_S, F0_T), where F0_S denotes the matched feature pixel set in the left view image S and F0_T denotes the matched feature pixel set in the right view image T. The GPU parallel accelerated scale-invariant feature transform comprises two steps: first, the feature pixel set of the left view image S (denoted as F1_S) and the feature pixel set of the right view image T (denoted as F1_T) are extracted respectively; then, according to F1_S and F1_T, mutually matched feature pixels are extracted to obtain the matched feature pixel set pair (F0_S, F0_T). For the specific process of the GPU parallel accelerated scale-invariant feature transform, refer to the document: M. Björkman, N. Bergström, D. Kragic. Detecting, segmenting and tracking unknown objects using multi-label MRF inference [J]. Computer Vision and Image Understanding, 2014, 118: 111-127.
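For illustration only, the following is a minimal CPU sketch of this matching step in Python using OpenCV's SIFT together with a brute-force matcher and Lowe's ratio test; it stands in for the GPU-accelerated SIFT cited above, and the function name, matcher choice and 0.75 ratio threshold are assumptions of the sketch rather than part of the patented method.

```python
import cv2
import numpy as np

def match_sift_features(left_bgr, right_bgr, ratio=0.75):
    """Return matched pixel sets (F0_S, F0_T) as (N, 2) arrays of (x, y)."""
    sift = cv2.SIFT_create()
    kp_s, desc_s = sift.detectAndCompute(cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY), None)
    kp_t, desc_t = sift.detectAndCompute(cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY), None)

    # Match descriptors S -> T and keep the matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_s, desc_t, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    f0_s = np.float32([kp_s[m.queryIdx].pt for m in good])
    f0_t = np.float32([kp_t[m.trainIdx].pt for m in good])
    return f0_s, f0_t
```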
Then, according to the matched feature pixel set pair (F0_S, F0_T), feature pixels with excessive matching errors are removed with the GPU parallel accelerated random sample consensus (RANSAC) check to obtain the robustly matched feature pixel set pair (F_S, F_T). Obviously, F_S and F_T are subsets of F0_S and F0_T, respectively. Thus F_S denotes the robustly matched feature pixel set in the left view image S, and F_T denotes the robustly matched feature pixel set in the right view image T. The original random sample consensus procedure is described in the document: M.A. Fischler, R.C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography [J]. Communications of the ACM, 1981, 24(6): 381-395. Given the number of random sample consensus trials, denoted as F_N, and the matched feature pixel set pair (F0_S, F0_T), the GPU parallel accelerated random sample consensus check of the invention proceeds as follows: first, F_N GPU parallel threads are launched; each GPU thread generates random numbers in parallel with the GPU-accelerated random number generation function curand_init, randomly selects 4 matched feature pixel pairs from (F0_S, F0_T), computes a feature pixel transformation matrix from the selected 4 pairs, computes the matching error of every matched feature pixel pair under this transformation matrix, and counts the number of feature pixel pairs with a small matching error; second, the maximum of these counts over all threads is computed with the max_element function in the GPU parallel accelerated thrust library; finally, the feature pixel transformation matrix corresponding to this maximum is retrieved, and the feature pixel pairs with excessive matching errors are removed from (F0_S, F0_T), yielding the robustly matched feature pixel set pair (F_S, F_T). F_S and F_T guarantee a robust matching relation between corresponding feature pixels in the left view image S and the right view image T.
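The thread-per-hypothesis structure of this check can be sketched as follows; the Python loop below is a serial stand-in for the F_N GPU threads, the running maximum plays the role of thrust's max_element, and the homography model, the 3-pixel inlier tolerance and all function names are assumptions of the sketch.

```python
import cv2
import numpy as np

def ransac_filter(f0_s, f0_t, f_n=2000, inlier_tol=3.0, seed=0):
    """Keep only robustly matched pairs (F_S, F_T) from (F0_S, F0_T)."""
    rng = np.random.default_rng(seed)
    n = len(f0_s)
    best_inliers = np.zeros(n, dtype=bool)

    for _ in range(f_n):                          # one hypothesis per GPU thread in the patent
        idx = rng.choice(n, size=4, replace=False)
        H = cv2.getPerspectiveTransform(f0_s[idx].astype(np.float32),
                                        f0_t[idx].astype(np.float32))
        proj = cv2.perspectiveTransform(f0_s.reshape(-1, 1, 2).astype(np.float32), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - f0_t, axis=1)
        inliers = err < inlier_tol                # pairs with a small matching error
        if inliers.sum() > best_inliers.sum():    # maximum over all hypotheses
            best_inliers = inliers

    return f0_s[best_inliers], f0_t[best_inliers]
```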
Step S103, according to the given left view image S, uniformly sampling pixels on the four edge lines (top, bottom, left and right) of the left view image S to obtain an edge-sampled pixel set in the left view image S, denoted as B_S. B_S ensures that the planar triangular mesh of the left view image described in step S105 covers the entire area of the left view image.
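A possible reading of this sampling step, written as a small Python/NumPy helper; the sampling step of 20 pixels is an arbitrary illustrative choice.

```python
import numpy as np

def sample_border_pixels(width, height, step=20):
    """Uniformly sample pixels on the four edge lines of the left view (B_S)."""
    xs = np.arange(0, width, step)
    ys = np.arange(0, height, step)
    top    = np.stack([xs, np.zeros_like(xs)], axis=1)
    bottom = np.stack([xs, np.full_like(xs, height - 1)], axis=1)
    left   = np.stack([np.zeros_like(ys), ys], axis=1)
    right  = np.stack([np.full_like(ys, width - 1), ys], axis=1)
    # Merge the four edges and drop the duplicated corner pixels.
    return np.unique(np.concatenate([top, bottom, left, right]), axis=0).astype(np.float64)
```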
Step S104, the region contour pixel set C_S in the left view image S given in step S101, the robustly matched feature pixel set F_S in the left view image S obtained in step S102 and the edge-sampled pixel set B_S in the left view image S obtained in step S103 are combined into an initial pixel set in the left view image S, denoted as A_S, that is, A_S = F_S ∪ C_S ∪ B_S. According to A_S, additional random sampling is performed on the left view image S: the left view image S is uniformly divided into small squares, and if some small squares contain no pixel of A_S, one pixel is randomly sampled inside each of these squares, yielding an additionally random-sampled pixel set in the left view image S, denoted as R_S. A_S and R_S are then combined into the triangulated pixel set in the left view image S, denoted as T_S, that is, T_S = A_S ∪ R_S. R_S prevents the triangles of the planar triangular mesh of the left view image described in step S105 from becoming too large.
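The grid-filling random sampling can be sketched as below; the 32-pixel cell size is an assumption of the sketch (the patent does not fix the square size), and A_S is taken to be an (N, 2) array of (x, y) coordinates.

```python
import numpy as np

def build_t_s(a_s, width, height, cell=32, seed=0):
    """Return T_S = A_S ∪ R_S, where R_S holds one random pixel for every
    cell×cell square of the image that contains no pixel of A_S."""
    rng = np.random.default_rng(seed)
    occupied = set(zip((a_s[:, 0] // cell).astype(int),
                       (a_s[:, 1] // cell).astype(int)))
    extra = []
    for gx in range(int(np.ceil(width / cell))):
        for gy in range(int(np.ceil(height / cell))):
            if (gx, gy) not in occupied:
                x = rng.integers(gx * cell, min((gx + 1) * cell, width))
                y = rng.integers(gy * cell, min((gy + 1) * cell, height))
                extra.append((x, y))
    r_s = np.array(extra, dtype=np.float64).reshape(-1, 2)
    return np.concatenate([a_s, r_s], axis=0)
```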
Step S105, according to the triangulated pixel set T_S in the left view image S obtained in step S104 and the region contour pixel set C_S in the left view image S given in step S101, calculating a planar triangular mesh of the left view image S, denoted as P_S, with the closed loop line formed by the region contour pixel set C_S as constraint edges of the triangulation, using the Delaunay triangulation method based on GPU parallel acceleration proposed by Cao et al. For the specific process of this GPU parallel accelerated Delaunay triangulation method, refer to the document: T.T. Cao, A. Nanjappa, M. Gao, T.S. Tan. A GPU accelerated algorithm for 3D Delaunay triangulation [C]. Proceedings of the Symposium on Interactive 3D Graphics, ACM, 2014, pp. 47-54.
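For orientation, an unconstrained CPU triangulation of T_S can be obtained with SciPy as below; unlike the GPU method of Cao et al. used by the patent, this sketch does not enforce the contour edges of C_S as constraint edges.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate(t_s):
    """Return P_S as an (M, 3) array of index triples (i, j, k) into T_S."""
    return Delaunay(np.asarray(t_s)).simplices
```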
Step S106, according to the robustly matched feature pixel set F_T in the right view image T obtained in step S102, the triangulated pixel set T_S in the left view image S obtained in step S104 and the planar triangular mesh P_S of the left view image S obtained in step S105, an energy function E is established for calculating the triangulated pixel set in the right view image T (denoted as T_T). The energy function is a weighted sum of three energy terms, specifically defined as:
E = w1×E1 + w2×E2 + w3×E3
E1 = ∑_{i=1..N_F} ||T_T(i) - F_T(i)||^2
E2 = ∑_{i=1..N_T} ||T_T(i)_y - T_S(i)_y||^2
E3 = ∑_{(i,j,k)∈P_S} ||T_T(i) - Trans(T_T(j), T_T(k))||^2
In the formula, w1, w2 and w3 denote three given weight coefficients, whose values are set in this patent to w1 = 10, w2 = 10 and w3 = 1; N_F denotes the number of pixels in the robustly matched feature pixel set F_S in the left view image S; N_T denotes the number of pixels in the triangulated pixel set T_S in the left view image S. Note that the planar triangular mesh P_S of the left view image S is a set of planar triangles; each element of the set is a triple, written (i, j, k) ∈ P_S, where i, j and k denote the positions of the i-th, j-th and k-th pixels in T_S. T_T(i) denotes the i-th pixel value in T_T, F_T(i) denotes the i-th pixel value in F_T, T_T(j) denotes the j-th pixel value in T_T, and T_T(k) denotes the k-th pixel value in T_T. An image is essentially a two-dimensional matrix of pixels, so each pixel has a two-dimensional coordinate; T_T(i)_y denotes the ordinate of the i-th pixel in T_T, and T_S(i)_y denotes the ordinate of the i-th pixel in T_S. The function Trans denotes a local coordinate transformation, which keeps each triangle shape in T_T as close as possible to the corresponding triangle shape in T_S; for the specific calculation process of this function, refer to the document: T. Igarashi, T. Moscovich, J.F. Hughes. As-rigid-as-possible shape manipulation [J]. ACM Transactions on Graphics, 2005, 24(3): 1134-1141.
E1 ensures that the first N_F pixels in the triangulated pixel set T_T in the right view image T stay as close as possible, in two-dimensional coordinates, to the pixels of the robustly matched feature pixel set F_T in the right view image T. E2 ensures that the ordinates of the pixels in the triangulated pixel set T_T in the right view image T stay as close as possible to the ordinates of the pixels of the triangulated pixel set T_S in the left view image S. E3 ensures that the triangle shapes of the planar triangular mesh of the right view image T stay as close as possible to the triangle shapes of the planar triangular mesh P_S of the left view image S. The energy function E is the weighted sum of these three energy terms; for details, refer to the document: S.J. Luo, I.C. Shen, B.Y. Chen, W.H. Cheng, Y.Y. Chuang. Perspective-aware warping for seamless stereoscopic image cloning [J]. ACM Transactions on Graphics, 2012, 31(6): 182:1-182:8.
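An explicit reading of the three energy terms, including the local transform Trans in its as-rigid-as-possible form (vertex i expressed in the local frame of edge (j, k) of the source triangle), is sketched below in Python; the assumption that the first N_F entries of T_T correspond to F_T follows the construction of A_S, and the concrete form of Trans is the sketch's assumption based on the cited Igarashi et al. paper.

```python
import numpy as np

def rot90(v):
    """Rotate a 2-vector by 90 degrees (second basis vector of the local frame)."""
    return np.array([-v[1], v[0]])

def energy(t_t, t_s, f_t, p_s, w1=10.0, w2=10.0, w3=1.0):
    """Evaluate E = w1*E1 + w2*E2 + w3*E3 for a candidate T_T (arrays of (x, y))."""
    n_f = len(f_t)
    e1 = np.sum((t_t[:n_f] - f_t) ** 2)          # stay on the matched features F_T
    e2 = np.sum((t_t[:, 1] - t_s[:, 1]) ** 2)    # keep the left-view ordinates
    e3 = 0.0
    for i, j, k in p_s:                          # keep every triangle shape
        e = t_s[k] - t_s[j]
        rel = t_s[i] - t_s[j]
        d = e @ e
        x, y = (rel @ e) / d, (rel @ rot90(e)) / d   # local coordinates in the source triangle
        trans = t_t[j] + x * (t_t[k] - t_t[j]) + y * rot90(t_t[k] - t_t[j])
        e3 += np.sum((t_t[i] - trans) ** 2)
    return w1 * e1 + w2 * e2 + w3 * e3
```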
Minimizing the energy function E with the least squares method yields a sparse linear equation set whose unknowns are the two-dimensional coordinates of the pixels in the triangulated pixel set T_T in the right view image T; solving this system gives the triangulated pixel set T_T in the right view image T. The invention stores the sparse linear equation set in compressed sparse row format, keeping only the nonzero coefficients and discarding the zero coefficients. The sparse linear equation set can then be solved efficiently and in parallel with the conjugate gradient method in compressed sparse row storage format based on GPU parallel acceleration in the cuSPARSE library, completing the minimization of the energy function and yielding the triangulated pixel set T_T in the right view image T.
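As a CPU counterpart of this step, the sketch below assembles the linear least-squares system for T_T row by row and solves its normal equations with SciPy's conjugate gradient solver instead of the GPU cuSPARSE-based solver described above; the interleaved (x, y) unknown layout and all names are assumptions of the sketch.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import cg

def solve_t_t(t_s, f_t, p_s, w1=10.0, w2=10.0, w3=1.0):
    """Minimize E in the least-squares sense and return T_T as an (N, 2) array."""
    n, n_f = len(t_s), len(f_t)
    X = lambda i: 2 * i          # column index of the abscissa of vertex i
    Y = lambda i: 2 * i + 1      # column index of the ordinate of vertex i
    A = lil_matrix((2 * n_f + n + 2 * len(p_s), 2 * n))
    b = np.zeros(A.shape[0])
    r = 0

    # E1: the first N_F vertices should land on the matched features F_T.
    for i in range(n_f):
        A[r, X(i)] = np.sqrt(w1); b[r] = np.sqrt(w1) * f_t[i, 0]; r += 1
        A[r, Y(i)] = np.sqrt(w1); b[r] = np.sqrt(w1) * f_t[i, 1]; r += 1

    # E2: every vertex keeps the ordinate it had in the left view.
    for i in range(n):
        A[r, Y(i)] = np.sqrt(w2); b[r] = np.sqrt(w2) * t_s[i, 1]; r += 1

    # E3: each triangle keeps its shape (Trans is linear in the unknowns).
    s3 = np.sqrt(w3)
    for i, j, k in p_s:
        e, rel = t_s[k] - t_s[j], t_s[i] - t_s[j]
        d = e @ e
        x, y = (rel @ e) / d, (rel[0] * -e[1] + rel[1] * e[0]) / d
        A[r, X(i)] = s3; A[r, X(j)] = -s3 * (1 - x); A[r, X(k)] = -s3 * x
        A[r, Y(j)] = -s3 * y; A[r, Y(k)] = s3 * y; r += 1
        A[r, Y(i)] = s3; A[r, Y(j)] = -s3 * (1 - x); A[r, Y(k)] = -s3 * x
        A[r, X(j)] = s3 * y; A[r, X(k)] = -s3 * y; r += 1

    A = csr_matrix(A)
    sol, _ = cg(A.T @ A, A.T @ b, x0=np.asarray(t_s, dtype=float).ravel())
    return sol.reshape(-1, 2)
```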
Step S107, according to the region contour pixel set C_S in the left view image S given in step S101, the triangulated pixel set T_S in the left view image S obtained in step S104 and the triangulated pixel set T_T in the right view image T calculated in step S106, the position of C_T in T_T can be conveniently found by using the correspondence between T_S and T_T together with the position of C_S in T_S, and the pixels at the corresponding positions are collected from T_T to form the set C_T. The obtained C_T is thus the region contour pixel set in the right view image T of the stereoscopic image that matches the region contour pixel set C_S in the left view image S. The region contour line in T is the closed loop line formed by the region contour pixel set C_T.
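A minimal sketch of this lookup, under the assumption that the pixels of C_S were copied verbatim into T_S when A_S was assembled (so exact coordinate matching identifies their indices):

```python
import numpy as np

def contour_in_right_view(c_s, t_s, t_t):
    """Return C_T: the pixels of T_T at the indices where C_S sits inside T_S."""
    index = {tuple(p): i for i, p in enumerate(np.asarray(t_s))}
    idx = [index[tuple(p)] for p in np.asarray(c_s)]
    return np.asarray(t_t)[idx]      # same ordering as C_S, so it forms the closed contour
```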
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (2)

1. A method for generating a contour of a stereo image area based on GPU parallel acceleration is characterized by comprising the following steps:
step S101, giving a left view image S and a right view image T of a stereo image, and a region contour pixel set C_S in the left view image S; the region contour line in the left view image S is a closed loop line formed by the region contour pixel set C_S;
step S102, according to the left view image S and the right view image T in step S101, respectively, calculating a matched feature pixel set pair by using a GPU parallel accelerated scale-invariant feature transform method, and removing feature pixels with excessive matching errors by using a GPU parallel accelerated random sample consensus check to obtain a robustly matched feature pixel set pair, denoted respectively as F_S and F_T;
step S103, according to the given left view image S, uniformly sampling pixels on the top, bottom, left and right edge lines of the left view image S to obtain an edge-sampled pixel set in the left view image S, denoted as B_S;
step S104, combining the region contour pixel set C_S in the left view image S given in step S101, the robustly matched feature pixel set F_S in the left view image S obtained in step S102, and the edge-sampled pixel set B_S in the left view image S obtained in step S103 into an initial pixel set in the left view image S, denoted as A_S, i.e., A_S = F_S ∪ C_S ∪ B_S; performing additional random sampling on the left view image S to obtain an additionally random-sampled pixel set in the left view image S, denoted as R_S; and combining A_S and R_S into the triangulated pixel set in the left view image S, denoted as T_S, i.e., T_S = A_S ∪ R_S;
step S105, according to the triangulated pixel set T_S in the left view image S obtained in step S104 and the region contour pixel set C_S in the left view image S given in step S101, calculating a planar triangular mesh of the left view image S, denoted as P_S, with the closed loop line formed by the region contour pixel set C_S as constraint edges of the triangulation, by a Delaunay triangulation method based on GPU parallel acceleration;
step S106, according to the robustly matched feature pixel set F_T in the right view image T obtained in step S102, the triangulated pixel set T_S in the left view image S obtained in step S104 and the planar triangular mesh P_S of the left view image S obtained in step S105, establishing an energy function E for calculating the triangulated pixel set in the right view image T, the triangulated pixel set in the right view image T being denoted as T_T; minimizing the energy function E by the least squares method to obtain a sparse linear equation set whose unknowns are the two-dimensional coordinates of the pixels in the triangulated pixel set T_T in the right view image T; and solving the sparse linear equation set in parallel by a conjugate gradient method in compressed sparse row storage format based on GPU parallel acceleration, completing the minimization of the energy function and obtaining the triangulated pixel set T_T in the right view image T;
step S107, according to the region contour pixel set C_S in the left view image S given in step S101, the triangulated pixel set T_S in the left view image S obtained in step S104, and the triangulated pixel set T_T in the right view image T calculated in step S106, finding the position of C_T in T_T by using the correspondence between T_S and T_T and the position of C_S in T_S, and collecting the pixels at the corresponding positions from T_T to form a set C_T, wherein the obtained C_T is the region contour pixel set in the right view image T of the stereoscopic image that matches the region contour pixel set C_S in the left view image S of the stereoscopic image, and the region contour line in the right view image T is a closed loop line formed by the region contour pixel set C_T.
2. The method for generating a contour of a stereoscopic image region based on GPU parallel acceleration according to claim 1, wherein the energy function E in step S106 is a weighted sum of three energy terms, which is specifically defined as:
E = w1×E1 + w2×E2 + w3×E3
E1 = ∑_{i=1..N_F} ||T_T(i) - F_T(i)||^2
E2 = ∑_{i=1..N_T} ||T_T(i)_y - T_S(i)_y||^2
E3 = ∑_{(i,j,k)∈P_S} ||T_T(i) - Trans(T_T(j), T_T(k))||^2
in the formula, w1, w2, w3 denote three given weight coefficients whose values are set to w1=10, w2=10, w3=1; N_F represents the number of pixels in the robustly matched feature pixel set F_S in the left view image S; N_T represents the number of pixels in the triangulated pixel set T_S in the left view image S; T_T(i) denotes the i-th pixel value in T_T, F_T(i) denotes the i-th pixel value in F_T; T_T(i)_y denotes the ordinate of the i-th pixel in T_T, and T_S(i)_y denotes the ordinate of the i-th pixel in T_S.
CN201811215584.9A 2018-10-18 2018-10-18 Three-dimensional image area contour generation method based on GPU parallel acceleration Active CN109493352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811215584.9A CN109493352B (en) 2018-10-18 2018-10-18 Three-dimensional image area contour generation method based on GPU parallel acceleration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811215584.9A CN109493352B (en) 2018-10-18 2018-10-18 Three-dimensional image area contour generation method based on GPU parallel acceleration

Publications (2)

Publication Number Publication Date
CN109493352A CN109493352A (en) 2019-03-19
CN109493352B true CN109493352B (en) 2020-01-14

Family

ID=65691546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811215584.9A Active CN109493352B (en) 2018-10-18 2018-10-18 Three-dimensional image area contour generation method based on GPU parallel acceleration

Country Status (1)

Country Link
CN (1) CN109493352B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719316A (en) * 2015-09-29 2016-06-29 温州大学 Interactive three-dimensional image segmentation method
CN106600526A (en) * 2016-12-12 2017-04-26 温州大学 Grayscale image colorizing method based on GPU acceleration
CN108665470A (en) * 2018-05-14 2018-10-16 华南理工大学 A kind of interactive mode contour extraction method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719316A (en) * 2015-09-29 2016-06-29 温州大学 Interactive three-dimensional image segmentation method
CN106600526A (en) * 2016-12-12 2017-04-26 温州大学 Grayscale image colorizing method based on GPU acceleration
CN108665470A (en) * 2018-05-14 2018-10-16 华南理工大学 A kind of interactive mode contour extraction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Reconstruction of contour lines in TPS using Delaunay Triangulation; ZHEN Xin et al.; 2007 1st International Conference on Bioinformatics and Biomedical Engineering; 2007-07-08; pp. 1023-1024 *

Also Published As

Publication number Publication date
CN109493352A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
Ham et al. Computer vision based 3D reconstruction: A review
US9013482B2 (en) Mesh generating apparatus, method and computer-readable medium, and image processing apparatus, method and computer-readable medium
EP2622581B1 (en) Multi-view ray tracing using edge detection and shader reuse
US9443345B2 (en) Method and apparatus for rendering three-dimensional (3D) object
CN103530907B (en) Complicated three-dimensional model drawing method based on images
KR20120093063A (en) Techniques for rapid stereo reconstruction from images
IL256458A (en) Fast rendering of quadrics
CN110781937B (en) Point cloud feature extraction method based on global visual angle
CN109191510B (en) 3D reconstruction method and device for pathological section
IL256459A (en) Fast rendering of quadrics and marking of silhouettes thereof
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
KR20130120730A (en) Method for processing disparity space image
Archirapatkave et al. GPGPU acceleration algorithm for medical image reconstruction
CN112819726A (en) Light field rendering artifact removing method
JP6901885B2 (en) Foreground extractor and program
CN109493352B (en) Three-dimensional image area contour generation method based on GPU parallel acceleration
CN110020987B (en) Medical image super-resolution reconstruction method based on deep learning
JP7026029B2 (en) Image processing equipment, methods and programs
JP2021033682A (en) Image processing device, method and program
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
Waizenegger et al. Parallel high resolution real-time visual hull on gpu
Berent et al. Unsupervised Extraction of Coherent Regions for Image Based Rendering.
CN112767548B (en) Three-dimensional threshold value stereo graph unfolding method
Chen et al. A quality controllable multi-view object reconstruction method for 3D imaging systems
Zeng et al. Archaeology drawing generation algorithm based on multi-branch feature cross fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190319

Assignee: Big data and Information Technology Research Institute of Wenzhou University

Assignor: Wenzhou University

Contract record no.: X2020330000098

Denomination of invention: A method of stereo image region contour generation based on GPU parallel acceleration

Granted publication date: 20200114

License type: Common License

Record date: 20201115

EE01 Entry into force of recordation of patent licensing contract