CN113112537A - Stereoscopic vision generation method and device based on deep network object segmentation - Google Patents

Stereoscopic vision generation method and device based on deep network object segmentation

Info

Publication number
CN113112537A
Authority
CN
China
Prior art keywords
image
stereoscopic vision
original image
shadow
outline
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110437387.7A
Other languages
Chinese (zh)
Inventor
李毅
陈轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN202110437387.7A priority Critical patent/CN113112537A/en
Publication of CN113112537A publication Critical patent/CN113112537A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/02 Affine transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20056 Discrete and fast Fourier transform, [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20061 Hough transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a stereoscopic vision generation method based on deep network target segmentation, which comprises the steps of obtaining an original image; determining a key feature perception area of the original image, and, according to the key feature perception area, using the U2-Net network structure to segment and splice the most visually attractive object in the original image to obtain a contour map; applying a preset affine transformation matrix to the contour map to obtain a shadow; and fusing the original image with the obtained shadow to obtain a stereoscopic vision image. By implementing the method, segmentation of the most visually attractive object can be realized, solving the problems of low robustness, low efficiency and low precision in conventional stereoscopic vision generation methods.

Description

Stereoscopic vision generation method and device based on deep network object segmentation
Technical Field
The invention relates to the technical field of image processing, in particular to a stereoscopic vision generation method and device based on deep network target segmentation.
Background
Currently, in the field of computer vision, more and more researchers are focusing on virtual stereoscopic vision simulation technology. In image processing, virtual shadows and image deflection can be established conveniently and quickly using techniques such as image edge feature segmentation, feature extraction, and affine transformation, so that a three-dimensional visual impression can be constructed from two-dimensional images. Meanwhile, combined with artificial neural network technology, local image features can be extracted and analyzed to improve the user's natural stereoscopic interaction experience and enable lightweight multi-terminal augmented reality.
Currently, object segmentation and motion-coherent layer reconstruction based on edge features are important techniques for stereoscopic vision generation from salient objects. Salient object detection (SOD) attempts to segment the most visually appealing object in an image, and most salient object detection methods are evaluated with different evaluation scores and datasets to compare existing models and assess their strengths and weaknesses. For example, Chang proposes a graphical-model construction method that explains the correlation between objectness and saliency by iteratively optimizing an energy function. As another example, DHS is a deep hierarchical saliency network for salient object detection, based on a hierarchical recurrent convolutional neural network (HRCNN) that is trained directly to generate saliency maps using the entire region of the image.
However, conventional salient-object-based stereoscopic vision generation methods cannot segment the most visually attractive object, which leads to stereoscopic vision images with low robustness, low efficiency and low precision.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a stereoscopic vision generation method and apparatus based on deep network object segmentation, which can realize segmentation of the most visually attractive object and solve the problems of low robustness, low efficiency and low precision in existing stereoscopic vision generation methods.
In order to solve the above technical problem, an embodiment of the present invention provides a stereoscopic vision generating method based on deep network object segmentation, where the method includes the following steps:
s1, acquiring an original image;
s2, determining a key feature sensing area of the original image, and adopting U according to the key feature sensing area2The network structure divides and splices the object with the most visual attraction in the original image to obtain a contour map;
s3, calculating the outline drawing by adopting a preset affine transformation matrix to obtain a shadow;
and S4, fusing the original image and the obtained shadow to obtain a stereoscopic image.
In step S2, the key feature perception area of the original image is determined by a pixel image boundary prediction technique and a contour detection technique.
Wherein, between the step S2 and the step S3, the method further comprises the following steps:
and carrying out image correction on the contour map.
Wherein, the specific step of performing image correction on the contour map comprises the following steps:
graying the contour map, stretching the grayed contour map to a preset size, and further performing Fourier transform on the stretched contour map to obtain a frequency domain image;
carrying out binarization processing on the frequency domain image, and further carrying out Hough line transformation on the binarized frequency domain image to obtain the corresponding straight lines;
and calculating an offset angle according to the obtained straight lines, and further carrying out affine transformation on the contour map based on the offset angle to obtain the contour map after image correction.
Wherein, the step S4 specifically includes:
after binarization and inverse color processing are carried out on the original image, image fusion is carried out on the original image after binarization and inverse color processing and the shadow according to a preset first weight ratio, and a fusion image is obtained;
and fusing the obtained fused image and the original image according to a preset second weight ratio to obtain the stereoscopic vision image.
Wherein the preset first weight ratio is 1:1, and the preset second weight ratio is 0.85:0.15.
The embodiment of the invention also provides a stereoscopic vision generating device based on deep network target segmentation, which comprises:
an original image acquisition unit for acquiring an original image;
an image network segmentation unit, configured to determine the key feature perception region of the original image and, according to the key feature perception region, use the U2-Net network structure to segment and splice the most visually attractive object in the original image to obtain a contour map;
the shadow calculation unit is used for calculating the contour map by adopting a preset affine transformation matrix to obtain a shadow;
and the stereoscopic vision image forming unit is used for fusing the original image and the obtained shadow to obtain a stereoscopic vision image.
Wherein, the device further comprises an image correction unit, wherein
the image correction unit is used for performing image correction on the contour map.
The embodiment of the invention has the following beneficial effects:
The invention introduces the U2-Net network structure to segment the most visually attractive object in the image, and combines adaptive image correction with affine-transformation-matrix shadow generation to realize virtual simulation generation of interactive image stereoscopic vision, thereby solving the problems of low robustness, low efficiency and low precision in traditional stereoscopic vision generation methods.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.
Fig. 1 is a flowchart of a stereoscopic vision generation method based on deep network object segmentation according to an embodiment of the present invention;
fig. 2 is a schematic diagram of the model structure of the U2-Net network used in step S2 of a stereoscopic vision generation method based on deep network object segmentation according to an embodiment of the present invention;
fig. 3 is a result graph of image segmentation performed by the U2-Net network in step S2 of a stereoscopic vision generation method based on deep network object segmentation according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a stereoscopic vision generating apparatus based on deep network object segmentation according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, in an embodiment of the present invention, a stereoscopic vision generation method based on deep network object segmentation is provided, where the method includes the following steps:
step S1, acquiring an original image;
the specific process is that firstly, an original image is input as an image to be processed, and the large difference of the complexity of foreground and background of different segmented sample images is considered, and the change conditions of the color and the light and shadow attributes in depth and direction are considered.
Step S2, determining a key feature perception area of the original image, and, according to the key feature perception area, using the U2-Net network structure to segment and splice the most visually attractive object in the original image to obtain a contour map;
the specific process is that the image segmentation has the function of separating the foreground and the background of the image and can directly process the outline of the image. The invention adopts U2The network structure divides the original image, which is a novel depth network for detecting the salient objectCollaterals, utilize U2The network captures richer local and global information from the shallow layer and the deep layer, can obtain segmented region images according to the key feature perception region, and can splice to obtain a contour map.
The main structure of U2-Net is a two-level nested U-structure, which achieves a deep network efficiently at a small scale. U2-Net is a simple deep structural model with rich multi-scale features and relatively low computation and memory overhead. It can be trained from scratch to achieve competitive performance while maintaining high-resolution feature maps. In the two-level nested U-structure, the exponent in U2 denotes nesting rather than the cascaded stacking of Un-style networks. The main processing flow for generating a stereoscopic image with U2-Net is shown in fig. 2, and the network segmentation result is shown in fig. 3. A minimal usage sketch follows.
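As an illustration of step S2, the following Python sketch performs salient-object segmentation with U2-Net. It assumes the public PyTorch reference implementation accompanying the U2-Net paper cited below; the U2NET class name, its multi-output forward pass, the 320×320 input size, and the weights file name are assumptions taken from that reference code, not details fixed by this patent:

```python
# Hedged sketch: segment the most visually attractive object with U2-Net.
# Assumes the reference PyTorch implementation of the U2-Net paper is on the
# path (U2NET class and "u2net.pth" weights are assumptions, not part of
# this patent's disclosure).
import cv2
import numpy as np
import torch
from model import U2NET  # assumed module layout of the reference repository

def segment_salient_object(image_bgr, weights_path="u2net.pth"):
    net = U2NET(3, 1)
    net.load_state_dict(torch.load(weights_path, map_location="cpu"))
    net.eval()
    # Resize to the network's assumed input size and scale to [0, 1].
    inp = cv2.resize(image_bgr, (320, 320)).astype(np.float32) / 255.0
    inp = torch.from_numpy(inp.transpose(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        d1, *_ = net(inp)  # d1 is the fused saliency output of the nested U's
    sal = d1[0, 0].numpy()
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)  # normalize to [0, 1]
    mask = cv2.resize(sal, (image_bgr.shape[1], image_bgr.shape[0]))
    # Binary region map from which the contour map is spliced.
    return (mask > 0.5).astype(np.uint8) * 255
```

The binary map returned here plays the role of the spliced contour map used in the later steps.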
Step S3, calculating the contour map by adopting a preset affine transformation matrix to obtain a shadow;
the specific process is that before the outline image is subjected to affine transformation calculation to obtain the shadow, the outline image can be subjected to image correction. The image correction has the effect that the algorithm can be more widely applied to various images, and some images with offset distortion are corrected, so that the generated effect is more real.
The steps of correcting the image of the contour map are as follows:
(1) graying the contour map, stretching the grayed contour map to a preset size, and further performing Fourier transform on the stretched contour map to obtain a frequency domain image;
It should be noted that stretching the image to a suitable size increases the speed of the operation. Meanwhile, the Fourier transform converts the image from the spatial domain to the frequency domain. In the frequency domain, the high-frequency part of an image represents its detail and texture information, while the low-frequency part represents its contour information. The two-dimensional discrete Fourier transform used to decompose the image is

    F(k, l) = Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} f(i, j) · e^{-i2π(ki/N + lj/N)}

where f is the spatial-domain value, F is the frequency-domain value, and e^{ix} = cos x + i sin x. The frequency-domain values after conversion are complex numbers, so displaying the result of the Fourier transform requires either a real image plus an imaginary image, or a magnitude image plus a phase image.
(2) Carrying out binarization processing on the frequency domain image, and further carrying out Hough line transformation on the binarized frequency domain image to obtain the corresponding straight lines.
(3) Calculating an offset angle according to the obtained straight lines, and further carrying out affine transformation on the contour map based on the offset angle to obtain the corrected contour map. A hedged code sketch of steps (1)-(3) follows.
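The following OpenCV sketch mirrors steps (1)-(3), assuming a BGR input image; the binarization threshold, the Hough vote count, and the use of the strongest detected line as the skew estimate are illustrative assumptions rather than values fixed by the patent:

```python
# Hedged sketch of the image-correction pipeline: grayscale -> stretch ->
# Fourier spectrum -> binarize -> Hough lines -> offset angle -> rotation.
import cv2
import numpy as np

def deskew_contour_map(contour_map_bgr):
    gray = cv2.cvtColor(contour_map_bgr, cv2.COLOR_BGR2GRAY)
    # (1) Stretch to an FFT-friendly size to speed up the transform.
    h, w = gray.shape
    gray = cv2.resize(gray, (cv2.getOptimalDFTSize(w), cv2.getOptimalDFTSize(h)))
    # Fourier transform; keep the log-magnitude spectrum, center-shifted.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    mag = np.log1p(np.abs(spectrum))
    mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # (2) Binarize the frequency-domain image, then Hough line transform.
    _, binary = cv2.threshold(mag, 150, 255, cv2.THRESH_BINARY)  # assumed threshold
    lines = cv2.HoughLines(binary, 1, np.pi / 180, 200)          # assumed vote count
    if lines is None:
        return contour_map_bgr  # no dominant line found; leave image unchanged
    # (3) Offset angle from the strongest line, then rotate (affine transform).
    theta = lines[0][0][1]
    angle = np.degrees(theta) - 90.0  # skew relative to the vertical axis
    center = (contour_map_bgr.shape[1] / 2.0, contour_map_bgr.shape[0] / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(contour_map_bgr, M,
                          (contour_map_bgr.shape[1], contour_map_bgr.shape[0]))
```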
When performing the affine transformation calculation on the contour map, the purpose is to transform all points on the image by a consistent mapping determined from a few selected points. An affine transformation represents a mapping between two images.
For example, an affine transformation is typically represented using a 2×3 matrix:

    A = | a00  a01 |     B = | b00 |     M = [A B] = | a00  a01  b00 |
        | a10  a11 |         | b10 |                 | a10  a11  b10 |

Using the matrices A and B, a two-dimensional vector X = [x, y]^T is transformed as

    T = A · [x, y]^T + B

which can therefore also be represented in the following form:

    T = M · [x, y, 1]^T
The affine transformation essentially captures the relation between two pictures. Once M and X are known, T is obtained by applying T = M·X. The information of this relation can be given either directly as an explicit 2×3 matrix M, or through the geometric correspondence between points of the two pictures. Because the matrix M links the two pictures via the correspondence of three points in each, an affine transformation can be computed from these two sets of points and then applied to all points in the image.
In one embodiment, two sets of points are selected in the initially input image. The first set consists of two points at the bottom of the object and one point at the top of the object. For the second set, the same two bottom points are kept, and a third point is chosen as the position where the top of the shadow should fall after the affine transformation. Taking the first and second sets of points as parameters yields the transformation matrix, which is then applied to the obtained binary image to produce the shadow, so that the bottom of the shadow obtained by the affine transformation stays essentially aligned with the bottom of the original object, as sketched below.
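This construction can be expressed with OpenCV's getAffineTransform, which computes the 2×3 matrix M from exactly three point correspondences; the coordinates in the usage comment are assumptions for a hypothetical 400×400 mask:

```python
# Hedged sketch: cast a shadow by shearing the object's top point while
# pinning its two bottom points, so the shadow base stays on the object base.
import cv2
import numpy as np

def cast_shadow(binary_mask, bottom_left, bottom_right, top, shadow_tip):
    # First point set: two bottom points and the top of the object.
    src = np.float32([bottom_left, bottom_right, top])
    # Second point set: same bottom points; the top maps to the shadow tip.
    dst = np.float32([bottom_left, bottom_right, shadow_tip])
    M = cv2.getAffineTransform(src, dst)  # the 2x3 matrix M = [A | B]
    h, w = binary_mask.shape[:2]
    return cv2.warpAffine(binary_mask, M, (w, h))

# Usage with assumed coordinates for a 400x400 mask:
# shadow = cast_shadow(mask, (120, 380), (280, 380), (200, 60), (330, 150))
```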
And step S4, fusing the original image and the obtained shadow to obtain a stereoscopic image.
Firstly, the original image is binarized and color-inverted, and the binarized, color-inverted image is fused with the shadow according to a preset first weight ratio to obtain a fused image, wherein the preset first weight ratio is 1:1.
Finally, the obtained fused image is fused with the original image according to a preset second weight ratio to obtain the stereoscopic vision image, wherein the preset second weight ratio is 0.85:0.15.
In one embodiment, the binary image obtained by binarizing the original image is color-inverted, exchanging its black and white areas, and is then fused with the shadow obtained by the affine transformation at a weight ratio of w1:w2, where w1 and w2 are both 1, so that the overlapping portion is removed.
The obtained fused image is then blended with the original image at a weight ratio of w3:w4, where w3 is 0.85 and w4 is 0.15, so that the shadow appears on the original image, achieving the effect that the shaded image produces a visual three-dimensional impression, i.e., the result image (the stereoscopic vision image). Meanwhile, mean filtering is applied to regions where the segmentation edges are less distinct, which effectively softens the transition edges. A sketch of this two-stage fusion follows.
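The following hedged OpenCV sketch mirrors the two-stage blend; the 1:1 and 0.85:0.15 weights come from the text above, while Otsu thresholding and the 5×5 mean-filter kernel are assumptions:

```python
# Hedged sketch of step S4: binarize + invert the original, blend with the
# shadow at 1:1, then blend the result with the original at 0.85:0.15.
import cv2

def fuse_stereo(original_bgr, shadow_gray):
    # shadow_gray: single-channel shadow mask, same size as original_bgr.
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    inverted = cv2.bitwise_not(binary)  # exchange black and white areas
    # Stage 1: inverted binary image + shadow, weights w1 = w2 = 1.
    stage1 = cv2.addWeighted(inverted, 1.0, shadow_gray, 1.0, 0)
    # Stage 2: fused image + original image, weights w3 = 0.85, w4 = 0.15.
    stage1_bgr = cv2.cvtColor(stage1, cv2.COLOR_GRAY2BGR)
    result = cv2.addWeighted(stage1_bgr, 0.85, original_bgr, 0.15, 0)
    # Mean filtering to soften weak segmentation edges (assumed 5x5 kernel).
    return cv2.blur(result, (5, 5))
```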
Fig. 2 is an application scene diagram of a stereoscopic vision generation method based on deep network object segmentation according to an embodiment of the present invention. In this scene, the local feature contour region of the image is referenced for stereoscopic vision generation, images are acquired and features corrected in real time via multi-target template matching, and color-space fusion is performed on local edge-region samples using nearest-region interactive shadow simulation and a binarization filtering technique.
As shown in fig. 4, in an embodiment of the present invention, a stereoscopic vision generating apparatus based on deep network object segmentation includes:
an original image acquisition unit 110 for acquiring an original image;
an image network segmentation unit 120, configured to determine the key feature sensing region of the original image and, according to the key feature sensing region, use the U2-Net network structure to segment and splice the most visually attractive object in the original image to obtain a contour map;
a shadow calculating unit 130, configured to calculate the contour map by using a preset affine transformation matrix to obtain a shadow;
and a stereoscopic image forming unit 140 for obtaining a stereoscopic image by fusing the original image and the obtained shadow.
Wherein, the apparatus further comprises an image correction unit, wherein
the image correction unit is used for performing image correction on the contour map.
The embodiment of the invention has the following beneficial effects:
The invention introduces the U2-Net network structure to segment the most visually attractive object in the image, and combines adaptive image correction with affine-transformation-matrix shadow generation to realize virtual simulation generation of interactive image stereoscopic vision, thereby solving the problems of low robustness, low efficiency and low precision in traditional stereoscopic vision generation methods.
It should be noted that, in the above device embodiment, each included unit is only divided according to functional logic, but is not limited to the above division as long as the corresponding function can be achieved; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above disclosure describes only preferred embodiments of the present invention, and it is therefore to be understood that the scope of the invention is not limited to these embodiments but is defined by the appended claims.

Claims (8)

1. A stereoscopic vision generation method based on deep network object segmentation is characterized by comprising the following steps:
s1, acquiring an original image;
s2, determining a key feature sensing area of the original image, and adopting U according to the key feature sensing area2The network structure divides and splices the object with the most visual attraction in the original image to obtain a contour map;
s3, calculating the outline drawing by adopting a preset affine transformation matrix to obtain a shadow;
and S4, fusing the original image and the obtained shadow to obtain a stereoscopic image.
2. The method for generating stereoscopic vision based on object segmentation in depth network as claimed in claim 1, wherein in step S2, the key feature perception area of the original image is determined by pixel image boundary prediction technique and contour detection technique.
3. The method for generating stereoscopic vision based on deep network object segmentation of claim 1, wherein between the step S2 and the step S3, the method further comprises the steps of:
and carrying out image correction on the contour map.
4. The method for stereoscopic vision generation based on deep network object segmentation as claimed in claim 3, wherein the specific step of image rectification on the contour map comprises:
graying the contour map, stretching the grayed contour map to a preset size, and further performing Fourier transform on the stretched contour map to obtain a frequency domain image;
carrying out binarization processing on the frequency domain image, and further carrying out Hough line transformation on the binarized frequency domain image to obtain the corresponding straight lines;
and calculating an offset angle according to the obtained straight lines, and further carrying out affine transformation on the contour map based on the offset angle to obtain the contour map after image correction.
5. The stereoscopic vision generating method based on the deep network object segmentation as claimed in claim 1, wherein the step S4 specifically includes:
after binarization and inverse color processing are carried out on the original image, image fusion is carried out on the original image after binarization and inverse color processing and the shadow according to a preset first weight ratio, and a fusion image is obtained;
and fusing the obtained fused image and the original image according to a preset second weight ratio to obtain the stereoscopic vision image.
6. The method for generating stereoscopic vision based on deep network object segmentation as claimed in claim 5, wherein the preset first weight ratio is 1:1, and the preset second weight ratio is 0.85:0.15.
7. A stereoscopic vision generating apparatus based on deep network object segmentation, comprising:
an original image acquisition unit for acquiring an original image;
an image network segmentation unit, configured to determine the key feature perception region of the original image and, according to the key feature perception region, use the U2-Net network structure to segment and splice the most visually attractive object in the original image to obtain a contour map;
the shadow calculation unit is used for calculating the contour map by adopting a preset affine transformation matrix to obtain a shadow;
and the stereoscopic vision image forming unit is used for fusing the original image and the obtained shadow to obtain a stereoscopic vision image.
8. The apparatus for generating stereoscopic vision based on deep network object segmentation of claim 7, further comprising: an image correction unit; wherein,
and the image correction unit is used for carrying out image correction on the contour map.
CN202110437387.7A 2021-04-22 2021-04-22 Stereoscopic vision generation method and device based on deep network object segmentation Pending CN113112537A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110437387.7A CN113112537A (en) 2021-04-22 2021-04-22 Stereoscopic vision generation method and device based on deep network object segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110437387.7A CN113112537A (en) 2021-04-22 2021-04-22 Stereoscopic vision generation method and device based on deep network object segmentation

Publications (1)

Publication Number Publication Date
CN113112537A true CN113112537A (en) 2021-07-13

Family

ID=76719773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110437387.7A Pending CN113112537A (en) 2021-04-22 2021-04-22 Stereoscopic vision generation method and device based on deep network object segmentation

Country Status (1)

Country Link
CN (1) CN113112537A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6157733A (en) * 1997-04-18 2000-12-05 At&T Corp. Integration of monocular cues to improve depth perception
TW201211936A (en) * 2010-09-10 2012-03-16 Oriental Inst Technology System and method for compensating binocular vision deficiency

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6157733A (en) * 1997-04-18 2000-12-05 At&T Corp. Integration of monocular cues to improve depth perception
TW201211936A (en) * 2010-09-10 2012-03-16 Oriental Inst Technology System and method for compensating binocular vision deficiency

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XUEBIN QIN et al.: "U2-Net: Going deeper with nested U-structure for salient object detection", Pattern Recognition, vol. 106, pages 1-12 *
WANG XIAOJUN; CHEN RUI; DIAO YANHUA: "A rotation correction technique for skewed text images", Journal of Handan Polytechnic College, vol. 31, no. 3, pages 61-64 *

Similar Documents

Publication Publication Date Title
CN109658515B (en) Point cloud meshing method, device, equipment and computer storage medium
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN107369204B (en) Method for recovering basic three-dimensional structure of scene from single photo
CN110827312B (en) Learning method based on cooperative visual attention neural network
JPH03218581A (en) Picture segmentation method
CN103248911A (en) Virtual viewpoint drawing method based on space-time combination in multi-view video
CN113223070A (en) Depth image enhancement processing method and device
CN107194985A (en) A kind of three-dimensional visualization method and device towards large scene
CN112734914A (en) Image stereo reconstruction method and device for augmented reality vision
CN107507263B (en) Texture generation method and system based on image
CN113297988A (en) Object attitude estimation method based on domain migration and depth completion
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning
CN117994480A (en) Lightweight hand reconstruction and driving method
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
CN111325184A (en) Intelligent interpretation and change information detection method for remote sensing image
Hung et al. Multipass hierarchical stereo matching for generation of digital terrain models from aerial images
CN113724329A (en) Object attitude estimation method, system and medium fusing plane and stereo information
CN110766609B (en) Depth-of-field map super-resolution reconstruction method for ToF camera
CN113112537A (en) Stereoscopic vision generation method and device based on deep network object segmentation
CN116385577A (en) Virtual viewpoint image generation method and device
CN114049423B (en) Automatic realistic three-dimensional model texture mapping method
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
Yang et al. RIFO: Restoring images with fence occlusions
CN108154485B (en) Ancient painting restoration method based on layering and stroke direction analysis
Sun Colorization of gray scale images based on convolutional block attention and Pix2Pix network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination