CN109688397B - Method for converting 2D (two-dimensional) video into 3D video - Google Patents
- Publication number: CN109688397B
- Application number: CN201710968815.2A
- Authority
- CN
- China
- Prior art keywords
- image
- sequence
- black
- camera
- white
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Abstract
The invention relates to the technical field of video processing, in particular to a method for converting 2D video into 3D video. The method uses the motion tracks of highly identifiable feature regions in a planar image sequence to back-solve the spatial position and motion track of the camera and obtain the feature points of the image sequence in space; it matches the camera motion-track data against the displacement data of the three-dimensional feature points, identifies and corrects erroneous feature points, and so obtains accurate image-sequence feature points; it computes the black-and-white depth-of-field information of a single frame from the differing distances of the feature points to the camera; it then computes the black-and-white depth-of-field information of the whole image sequence from the single-frame information and the continuity of the source image sequence; finally, the corresponding regions of the source image sequence are stereoscopically separated according to the black-and-white information sequence map, realizing the real three-dimensional space of the camera view. The conversion time is greatly shortened; the feature points are unique, the conversion is accurate and efficient, and the processing method is simple.
Description
Technical Field
The invention relates to the technical field of video processing, in particular to a method for converting 2D (two-dimensional) video into 3D video.
Background
With the continuous progress of science and technology, audiences expect more from movies, and the visual effect of 3D movies is greatly favored. Therefore, to achieve greater returns, many producers of 2D movies want to convert them into 3D movies through corresponding post-production processes.
However, in the current movie field, 2D-to-3D conversion is technically difficult, inefficient, and complex to carry out. The prevailing 2D-to-3D workflow first uses tracking software to track the camera and imports the tracked camera information together with the corresponding video sequence into three-dimensional production software; it then builds models and animation according to the positions of objects in the scene, which amounts to reconstructing the scene and the animation and consumes an enormous amount of time; finally, the existing images are mapped onto the newly built animated scene, and re-rendering the whole scene costs a great deal of additional time, so the whole process is slow and inefficient.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an efficient, accurate and fast method for converting 2D video into 3D video.
To achieve the above object, a method for converting 2D video into 3D video is designed, characterized by comprising the following processing steps:
a. introducing a single-shot image sequence, obtaining the motion tracks of several feature region blocks from the displacement, between the first and last frames, of highly identifiable feature regions in the planar image sequence, back-solving the spatial position and motion track of the camera, and obtaining a number of image feature points: assume the image resolution is m × n and define the spatial coordinate of any Source Point of the source image as (f(x), f(y), z), where z = 0, x ∈ [−m/2, m/2], y ∈ [−n/2, n/2]; the center point of the source image satisfies |f(x)| + |f(y)| = 0; let the position of an arbitrary feature point in space be (g(x), g(y), z), where z = 0, x ∈ (−m/2, m/2), y ∈ (−n/2, n/2); the center point of the plane passing through the feature point parallel to the source image plane satisfies |g(x)| + |g(y)| = 0; the distance from the camera image plane to the source image plane along the central axis is h1, and the distance from the camera image plane to the feature-point plane is h2; with θ the angle between the central axis and the line segment from any feature point to the source image plane, the equation [f(x) − g(x)]/(h1 − h2) = tan θ always holds;
b. carrying out error correction on all the obtained image feature points:
b1. matching the information carried by the feature points against the information carried by the camera, setting the matching coefficient r, and computing the error threshold p and the feature-point error value q, as the first round of error correction;
b2. collecting the information carried by all feature points into a set of curves, with the x axis as time and the y axis as amplitude; letting ΔW be the average amplitude variation over all curves and ΔA the amplitude variation of a Single Point: when ΔA < ΔW the point is a correct feature point, and when ΔA > ΔW it is an erroneous one; correct feature points fluctuate gently in amplitude while erroneous ones fluctuate strongly, so erroneous feature points are trimmed manually and the errors corrected;
b3. entering the error-threshold judgment: when the feature-point error value q is smaller than the error threshold p, going to step e; when q is larger than p, a moving object exists in the sequence images, and a moving-object separation step is carried out: selecting a single frame of high identifiability in the image sequence, manually separating the local region of the erroneous feature points exceeding the error threshold, and performing a second round of error correction; setting the x1 axis as time and the y1 axis as amplitude, letting ΔW1 be the average amplitude variation over all curves and ΔA1 the amplitude variation of a Single Point: when ΔA1 < ΔW1 the point is a correct feature point, and when ΔA1 > ΔW1 it is an erroneous one;
c. connecting, frame by frame, the shape information of the one or more continuous moving objects separated in step b with the camera motion-track information back-solved in step a, to obtain the size change of each moving-object piece across the image sequence and the position change of the feature points adjacent to the moving object;
d. using the change in distance from the one or more continuous moving-object pieces obtained in step c to the camera image plane, computing the black-and-white sequence information changes of the moving-object pieces across the whole image sequence, defining the black-and-white sequence information Depth Of Field as λ, λ ∈ [0, 100], so that g(x) = u/λ, where λ = 0 is pure black, λ = 100 is pure white, and u is a proportionality constant;
e. acquiring the black-and-white depth-of-field sequence of the full image sequence:
e1. selecting a single frame of high identifiability in the image sequence;
e2. computing the black-and-white information image of the single-frame camera view in the image sequence from the distance relationship between the static feature points of the single frame, relative to the camera view area slice, and the camera image plane, defining the black-and-white sequence information Depth Of Field as λ, λ ∈ [0, 100], with g(x) = u/λ, where λ = 0 is pure black, λ = 100 is pure white, and u is a proportionality constant;
e3. tracking and solving the black-and-white depth-of-field sequence of the full image sequence using the single-frame black-and-white depth-of-field information image and the camera motion-track information;
f. if the feature-point error value q in step b is below the error threshold p, going directly to step g; if q exceeds p, connecting the black-and-white depth-of-field sequence of the moving object obtained in step d with the full black-and-white depth-of-field sequence of the camera view area obtained in step e, to obtain the black-and-white depth-of-field information image of the full source sequence;
g. based on the obtained full-sequence black-and-white depth-of-field information image, with the depth image and the source-image feature points in correspondence, using the brightness information of the depth image to stretch and separate each single frame in order; the brightness of the depth image is inversely proportional to the distance from a feature point to the camera image plane, so the whiter the depth image, the closer the corresponding feature point is to the camera image plane, i.e. g(x) = (h1 − h2)/λ;
h. applying Stereo Camera offset processing to the three-dimensional space image obtained in step g, and manually adjusting the specific values of the stereo Interocular distance and the zero-plane Convergence according to the required presentation effect, to obtain the final stereo effect.
The three-dimensional software used is 3ds Max.
The technical advantages of the invention are that the conversion time is greatly shortened, the feature points are unique, the conversion is accurate and efficient, and the processing method is simple.
Drawings
Fig. 1 is a schematic view of the distance relationship between the depth of field and the feature points in the present invention.
Fig. 2 is a black-and-white depth-of-field information map.
Detailed Description
The invention will now be further described with reference to the accompanying drawings and examples.
The design principle of the invention is as follows: use the motion tracks of highly identifiable feature regions in the planar image sequence to back-solve the spatial position and motion track of the camera and obtain the feature points of the image sequence in space; match the camera motion-track data against the displacement data of the three-dimensional feature points, identify and correct erroneous feature points, and obtain accurate image-sequence feature points; compute the black-and-white depth-of-field information of a single frame from the differing distances of the feature points to the camera; compute the black-and-white depth-of-field information of the whole image sequence from the single-frame information and the continuity of the source image sequence; stereoscopically separate the corresponding regions of the source image sequence according to the black-and-white information sequence map, realizing the real three-dimensional space of the camera view. The conversion time is greatly shortened; the feature points are unique, the conversion is accurate and efficient, and the processing method is simple.
Example 1
Referring to figs. 1 and 2, a method for converting 2D video into 3D video adopts the following processing steps:
a. introducing a single-shot image sequence, obtaining the motion tracks of several feature region blocks from the displacement, between the first and last frames, of highly identifiable feature regions in the planar image sequence, back-solving the spatial position and motion track of the camera, and obtaining a number of image feature points: assume the image resolution is m × n and define the spatial coordinate of any Source Point of the source image as (f(x), f(y), z), where z = 0, x ∈ [−m/2, m/2], y ∈ [−n/2, n/2]; the center point of the source image satisfies |f(x)| + |f(y)| = 0; let the position of an arbitrary feature point in space be (g(x), g(y), z), where z = 0, x ∈ (−m/2, m/2), y ∈ (−n/2, n/2); the center point of the plane passing through the feature point parallel to the source image plane satisfies |g(x)| + |g(y)| = 0; the distance from the camera image plane to the source image plane along the central axis is h1, and the distance from the camera image plane to the feature-point plane is h2; with θ the angle between the central axis and the line segment from any feature point to the source image plane, the equation [f(x) − g(x)]/(h1 − h2) = tan θ always holds;
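To make the geometry of step a concrete, the following minimal Python sketch back-solves h2 from the relation [f(x) − g(x)]/(h1 − h2) = tan θ. The function name and the numeric values are illustrative assumptions, not part of the patent.

```python
import math

def feature_plane_distance(f_x, g_x, h1, theta):
    """Back-solve h2, the distance from the camera image plane to the
    feature-point plane, from [f(x) - g(x)]/(h1 - h2) = tan(theta).
    f_x: source-point x-coordinate; g_x: feature-point x-coordinate;
    h1: distance from the camera image plane to the source image plane;
    theta: angle (radians) between the feature-point ray and the central axis."""
    return h1 - (f_x - g_x) / math.tan(theta)

# Example: a feature point offset 120 px from its source point, h1 = 500,
# seen 30 degrees off the central axis
h2 = feature_plane_distance(f_x=200.0, g_x=80.0, h1=500.0, theta=math.radians(30))
print(f"h2 = {h2:.1f}")  # ~292.2: the feature plane sits closer to the camera
```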
b. carrying out error correction on all the obtained feature points:
b1. matching the information carried by the feature points against the information carried by the camera, setting the matching coefficient r, and computing the error threshold p and the feature-point error value q, as the first round of error correction;
b2. collecting the information carried by all feature points into a set of curves, with the x axis as time and the y axis as amplitude; letting ΔW be the average amplitude variation over all curves and ΔA the amplitude variation of a Single Point: when ΔA < ΔW the point is a correct feature point, and when ΔA > ΔW it is an erroneous one; correct feature points fluctuate gently in amplitude while erroneous ones fluctuate strongly, so erroneous feature points are trimmed manually and the errors corrected (see the sketch after step b3);
b3. entering the error-threshold judgment: when the feature-point error value q is smaller than the error threshold p, going to step e; when q is larger than p, a moving object exists in the sequence images, and a moving-object separation step is carried out: selecting a single frame of high identifiability in the image sequence, manually separating the local region of the erroneous feature points exceeding the error threshold, and performing a second round of error correction; setting the x1 axis as time and the y1 axis as amplitude, letting ΔW1 be the average amplitude variation over all curves and ΔA1 the amplitude variation of a Single Point: when ΔA1 < ΔW1 the point is a correct feature point, and when ΔA1 > ΔW1 it is an erroneous one;
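The ΔA-versus-ΔW test of steps b2 and b3 might look as follows in Python; reading the "amplitude variable" as the mean absolute frame-to-frame change is our assumption, since the patent does not pin the statistic down.

```python
import numpy as np

def split_feature_points(curves):
    """curves: array of shape (num_points, num_frames) holding each feature
    point's amplitude curve over time (x axis: time, y axis: amplitude).
    A point is kept when its amplitude variation dA stays below the average
    variation dW of all curves, per steps b2/b3."""
    dA = np.abs(np.diff(curves, axis=1)).mean(axis=1)  # per-point variation
    dW = dA.mean()                                     # average over all curves
    correct = np.flatnonzero(dA < dW)    # stable curves -> correct points
    wrong = np.flatnonzero(dA >= dW)     # strong fluctuation -> erroneous points
    return correct, wrong

# Example: four tracked points over six frames; the last curve fluctuates strongly
curves = np.array([
    [0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
    [1.0, 1.1, 1.2, 1.2, 1.3, 1.4],
    [2.0, 2.0, 2.1, 2.1, 2.2, 2.2],
    [0.0, 3.0, -2.0, 4.0, -1.0, 5.0],
])
correct, wrong = split_feature_points(curves)
print(correct, wrong)  # [0 1 2] [3] -- point 3 would be trimmed manually
```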
c. connecting, frame by frame, the shape information of the one or more continuous moving objects separated in step b with the camera motion-track information back-solved in step a, to obtain the size change of each moving-object piece across the image sequence and the position change of the feature points adjacent to the moving object;
d. using the change in distance from the one or more continuous moving-object pieces obtained in step c to the camera image plane, computing the black-and-white sequence information changes of the moving-object pieces across the whole image sequence, defining the black-and-white sequence information Depth Of Field as λ, λ ∈ [0, 100], so that g(x) = u/λ, where λ = 0 is pure black, λ = 100 is pure white, and u is a proportionality constant;
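A minimal sketch of the distance-to-grayscale mapping of step d, under the reading λ = u/d (from g(x) = u/λ with g(x) taken as the distance to the camera image plane); the value of u and the function name are illustrative assumptions.

```python
import numpy as np

def depth_to_grayscale(distances, u=1000.0):
    """Map distances from object pieces to the camera image plane into the
    Depth Of Field value lambda in [0, 100]: near pieces come out white
    (lambda near 100), far pieces black (lambda near 0)."""
    lam = u / np.asarray(distances, dtype=float)
    return np.clip(lam, 0.0, 100.0)

# Example: three object pieces at increasing distance from the image plane
print(depth_to_grayscale([10.0, 50.0, 1000.0]))  # [100.  20.  1.]
```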
e. acquiring the black-and-white depth-of-field sequence of the full image sequence:
e1. selecting a single frame of high identifiability in the image sequence;
e2. computing the black-and-white information image of the single-frame camera view in the image sequence from the distance relationship between the static feature points of the single frame, relative to the camera view area slice, and the camera image plane, defining the black-and-white sequence information Depth Of Field as λ, λ ∈ [0, 100], with g(x) = u/λ, where λ = 0 is pure black, λ = 100 is pure white, and u is a proportionality constant;
e3. tracking and solving the black-and-white depth-of-field sequence of the full image sequence using the single-frame black-and-white depth-of-field information image and the camera motion-track information;
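Step e3 propagates the single-frame depth image through the sequence using the camera track. Below is a sketch under a deliberately simplified assumption (pure forward motion along the optical axis); the patent's solver also follows lateral motion via the tracked feature trajectories, which this sketch omits.

```python
import numpy as np

def propagate_depth(base_distances, camera_z_track, u=1000.0):
    """Re-evaluate, frame by frame, the distance of the static feature points
    to the moving camera image plane and re-map it with lambda = u/d,
    yielding the full-sequence black-and-white depth-of-field values."""
    frames = []
    for z in camera_z_track:                  # camera displacement per frame
        d = np.asarray(base_distances, dtype=float) - z
        frames.append(np.clip(u / np.maximum(d, 1e-6), 0.0, 100.0))
    return frames

# Example: camera dollying forward 0, 5, 10 units past points at 20 and 200 units
for lam in propagate_depth([20.0, 200.0], [0.0, 5.0, 10.0]):
    print(lam)  # the near point brightens toward pure white as the camera closes in
```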
f. if the feature-point error value q in step b is below the error threshold p, going directly to step g; if q exceeds p, connecting the black-and-white depth-of-field sequence of the moving object obtained in step d with the full black-and-white depth-of-field sequence of the camera view area obtained in step e, to obtain the black-and-white depth-of-field information image of the full source sequence;
g. based on the obtained full-sequence black-and-white depth-of-field information image, with the depth image and the source-image feature points in correspondence, using the brightness information of the depth image to stretch and separate each single frame in order; the brightness of the depth image is inversely proportional to the distance from a feature point to the camera image plane, so the whiter the depth image, the closer the corresponding feature point is to the camera image plane, i.e. g(x) = (h1 − h2)/λ;
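One way step g's ordered stretch-and-separate could be sketched: split each frame into depth-ordered layers by binning the grayscale depth values, each layer then being placed at its own distance from the camera image plane. The bin count and function name are illustrative assumptions, not the patent's procedure.

```python
import numpy as np

def separate_layers(frame, depth_lambda, num_layers=8):
    """Split a frame into depth-ordered layers using its grayscale depth
    image (lambda in [0, 100]); layer i keeps only the pixels whose depth
    value falls in bin i, ordered far (black) to near (white)."""
    edges = np.linspace(0.0, 100.0, num_layers + 1)
    layers = []
    for i, (near, far) in enumerate(zip(edges[:-1], edges[1:])):
        # include the upper edge in the last bin so lambda == 100 is kept
        upper = depth_lambda <= far if i == num_layers - 1 else depth_lambda < far
        mask = (depth_lambda >= near) & upper
        layer = np.zeros_like(frame)
        layer[mask] = frame[mask]   # copy only this depth slice of the image
        layers.append(layer)
    return layers
```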
h. applying Stereo Camera offset processing to the three-dimensional space image obtained in step g, and manually adjusting the specific values of the stereo Interocular distance and the zero-plane Convergence according to the required presentation effect, to obtain the final stereo effect.
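Finally, a hedged sketch of step h's Stereo Camera offset: left/right views generated by a horizontal disparity shift driven by the depth image, with the zero-plane Convergence and the Interocular distance as the two manually tuned parameters. The linear disparity model and nearest-neighbour resampling are our simplifications, not the patent's exact procedure.

```python
import numpy as np

def stereo_offset(frame, depth_lambda, interocular=6.0, convergence=50.0):
    """Build a stereo pair from one frame plus its grayscale depth image
    (lambda in [0, 100]). Pixels with lambda == convergence sit on the zero
    plane (no shift); whiter (nearer) pixels shift outward and darker
    (farther) pixels inward, scaled by the interocular distance."""
    h, w = depth_lambda.shape
    disparity = (depth_lambda - convergence) / 100.0 * interocular
    cols = np.arange(w)
    left, right = np.empty_like(frame), np.empty_like(frame)
    for row in range(h):
        d = disparity[row]
        # nearest-neighbour scanline resampling; occlusion holes are clamped
        # to the image border rather than inpainted in this sketch
        left[row] = frame[row, np.clip(np.round(cols - d).astype(int), 0, w - 1)]
        right[row] = frame[row, np.clip(np.round(cols + d).astype(int), 0, w - 1)]
    return left, right
```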
Further, the three-dimensional software used is 3ds Max.
Claims (2)
1. A method for converting 2D video into 3D video, characterized by comprising the following processing steps:
a. introducing an image sequence of a single shot, obtaining the motion tracks of several feature region blocks from the displacement, between the first and last frames, of highly identifiable feature regions in the image sequence, back-solving the spatial position and motion track of the camera, and obtaining a number of image feature points: assume the image resolution is m × n and define the spatial coordinate of any Source Point of the source image as (f(x), f(y), z), where z = 0, x ∈ [−m/2, m/2], y ∈ [−n/2, n/2]; the center point of the source image satisfies |f(x)| + |f(y)| = 0; let the position of an arbitrary feature point in space be (g(x), g(y), z), where z = 0, x ∈ (−m/2, m/2), y ∈ (−n/2, n/2); the center point of the plane passing through the feature point parallel to the source image plane satisfies |g(x)| + |g(y)| = 0; the distance from the camera image plane to the source image plane along the central axis is h1, and the distance from the camera image plane to the feature-point plane is h2; with θ the angle between the central axis and the line segment from any feature point to the source image plane, the equation [f(x) − g(x)]/(h1 − h2) = tan θ always holds;
b. carrying out error correction on all the obtained image feature points:
b1. matching the information carried by the feature points against the information carried by the camera, setting the matching coefficient r, and computing the error threshold p and the feature-point error value q, as the first round of error correction;
b2. collecting the information carried by all feature points into a set of curves, with the x axis as time and the y axis as amplitude; letting ΔW be the average amplitude variation over all curves and ΔA the amplitude variation of a Single Point: when ΔA < ΔW the point is a correct feature point, and when ΔA > ΔW it is an erroneous one; correct feature points fluctuate gently in amplitude while erroneous ones fluctuate strongly, so erroneous feature points are trimmed manually and the errors corrected;
b3. entering the error-threshold judgment: when the feature-point error value q is smaller than the error threshold p, going to step e; when q is larger than p, a moving object exists in the sequence images, and a moving-object separation step is carried out: selecting a single frame of high identifiability in the image sequence, manually separating the local region of the erroneous feature points exceeding the error threshold, and performing a second round of error correction; setting the x1 axis as time and the y1 axis as amplitude, letting ΔW1 be the average amplitude variation over all curves and ΔA1 the amplitude variation of a Single Point: when ΔA1 < ΔW1 the point is a correct feature point, and when ΔA1 > ΔW1 it is an erroneous one;
c. connecting, frame by frame, the shape information of the one or more continuous moving objects separated in step b with the camera motion-track information back-solved in step a, to obtain the size change of each moving-object piece across the image sequence and the position change of the feature points adjacent to the moving object;
d. using the change in distance from the one or more continuous moving-object pieces obtained in step c to the camera image plane, computing the black-and-white sequence information changes of the moving-object pieces across the whole image sequence, defining the black-and-white sequence information Depth Of Field as λ, λ ∈ [0, 100], so that g(x) = u/λ, where λ = 0 is pure black, λ = 100 is pure white, and u is a proportionality constant;
e. acquiring the black-and-white depth-of-field sequence of the full image sequence:
e1. selecting a single frame of high identifiability in the image sequence;
e2. computing the black-and-white information image of the single-frame camera view in the image sequence from the distance relationship between the static feature points of the single frame, relative to the camera view area slice, and the camera image plane, defining the black-and-white sequence information Depth Of Field as λ, λ ∈ [0, 100], with g(x) = u/λ, where λ = 0 is pure black, λ = 100 is pure white, and u is a proportionality constant;
e3. tracking and solving the black-and-white depth-of-field sequence of the full image sequence using the single-frame black-and-white depth-of-field information image and the camera motion-track information;
f. if the feature-point error value q in step b is below the error threshold p, going directly to step g; if q exceeds p, connecting the black-and-white depth-of-field sequence of the moving object obtained in step d with the full black-and-white depth-of-field sequence of the camera view area obtained in step e, to obtain the black-and-white depth-of-field information image of the full source sequence;
g. based on the obtained full-sequence black-and-white depth-of-field information image, with the depth image and the source-image feature points in correspondence, using the brightness information of the depth image to stretch and separate each single frame in order; the brightness of the depth image is inversely proportional to the distance from a feature point to the camera image plane, so the whiter the depth image, the closer the corresponding feature point is to the camera image plane, i.e. g(x) = (h1 − h2)/λ;
h. applying Stereo Camera offset processing to the three-dimensional space image obtained in step g, and manually adjusting the specific values of the stereo Interocular distance and the zero-plane Convergence according to the required presentation effect, to obtain the final stereo effect.
2. The method for converting 2D video into 3D video according to claim 1, wherein the processing uses 3ds Max three-dimensional software.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710968815.2A | 2017-10-18 | 2017-10-18 | Method for converting 2D (two-dimensional) video into 3D video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109688397A (en) | 2019-04-26 |
CN109688397B (en) | 2021-10-22 |
Family
ID=66183179
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710968815.2A (Active) | Method for converting 2D (two-dimensional) video into 3D video | 2017-10-18 | 2017-10-18 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109688397B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9414048B2 (en) * | 2011-12-09 | 2016-08-09 | Microsoft Technology Licensing, Llc | Automatic 2D-to-stereoscopic video conversion |
2017-10-18: Application CN201710968815.2A filed in China; granted as CN109688397B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102469322A (en) * | 2010-11-18 | 2012-05-23 | Tcl集团股份有限公司 | Image processing method for plane stereoscopic bodies |
US9661307B1 (en) * | 2011-11-15 | 2017-05-23 | Google Inc. | Depth map generation using motion cues for conversion of monoscopic visual content to stereoscopic 3D |
CN102630033A (en) * | 2012-03-31 | 2012-08-08 | 彩虹集团公司 | Method for converting 2D (Two Dimension) into 3D (Three Dimension) based on dynamic object detection |
CN103581650A (en) * | 2013-10-21 | 2014-02-12 | 四川长虹电器股份有限公司 | Method for converting binocular 3D video into multicast 3D video |
CN106447718A (en) * | 2016-08-31 | 2017-02-22 | 天津大学 | 2D-to-3D depth estimation method |
Non-Patent Citations (2)
Title |
---|
Single-image 2D-to-3D conversion by weighted-SIFT-flow depth transfer; 袁红星 (Yuan Hongxing); 《电子学报》 (Acta Electronica Sinica); 2015-02-28; full text *
2D-to-3D optimization method based on non-local random walks and motion compensation; 张凝 (Zhang Ning); 《光电子·激光》 (Journal of Optoelectronics·Laser); 2017-06-30; full text *
Also Published As
Publication number | Publication date |
---|---|
CN109688397A (en) | 2019-04-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
2022-04-18 | TR01 | Transfer of patent right | Effective date of registration: 2022-04-18. Address after: 201400 No. 299, Zhuangwu Road, Zhuangxing Town, Fengxian District, Shanghai; Patentee after: ZHIZHUN ELECTRONIC SCIENCE AND TECHNOLOGY Co.,Ltd. SHANGHAI. Address before: 201415 room 130, No. 410, Zhuangbei Road, Zhuangxing Town, Fengxian District, Shanghai; Patentee before: SHANGHAI ZHIZUN CULTURE MEDIA DEVELOPMENT CO.,LTD. |