CN103533332A - Image processing method for converting 2D video into 3D video - Google Patents

Info

Publication number
CN103533332A
Authority
CN
China
Prior art keywords
image, video, feature point, region, to-be-identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310501309.4A
Other languages
Chinese (zh)
Other versions
CN103533332B (en)
Inventor
Wang Haoqian (王好谦)
Zhang Xin (张新)
Shao Hang (邵航)
Dai Qionghai (戴琼海)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Application filed by Shenzhen Graduate School Tsinghua University
Priority: CN201310501309.4A
Publication of application: CN103533332A
Application granted; publication of grant: CN103533332B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

An image processing method for converting 2D (two-dimensional) video into 3D (three-dimensional) video comprises: taking images of a target object shot from different angles as sample images; computing color-invariant images of the sample images and of an image to be identified; extracting feature points from the color-invariant images; matching the feature points of the to-be-identified image's color-invariant image against those of the sample images' color-invariant images; judging, from the distribution of the successfully matched feature points, whether the target object is present in the image to be identified; and, if it is, segmenting the target object's contour based on the successfully matched feature points. The method can identify the target object during 2D-to-3D video conversion, segments the object contour accurately, and greatly reduces the amount of manual work.

Description

Image processing method for converting 2D video into 3D video
Technical Field
The present invention relates to the field of computer vision, and in particular to an image processing method for converting 2D video into 3D video.
Background Art
Object recognition is the task of identifying a target object in a complex environment, giving the computer a preliminary cognitive ability. If the objects in an image can be identified, part of the three-dimensional information in the image can be recovered. Both image parsing and object tracking require a well-performing object recognition algorithm as their foundation. Object recognition has important applications in fields such as image retrieval, security surveillance, medical imaging, and robot vision, and progress in object recognition technology further pushes these fields toward intelligent, automated solutions.
In computer-assisted manual conversion of 2D video to 3D video, an operator must mark out objects by hand and segment complete object contours with an image segmentation method. The amount of manual work is large and the labor cost is considerable: converting the film Titanic from 2D to 3D cost as much as 18 million US dollars in labor.
Summary of the Invention
The object of the present invention is to overcome the deficiencies of the prior art by providing an image processing method for converting 2D video into 3D video that can identify a target object and accurately segment its contour during the conversion, greatly reducing the amount of manual work.
To achieve the above object, the present invention adopts the following technical solution:
An image processing method for converting 2D video into 3D video comprises the following steps:
a. taking images of a target object shot from different angles as sample images;
b. computing the color-invariant image of each sample image and extracting the feature points of the sample images' color-invariant images as a training sample set;
c. computing the color-invariant image of the image to be identified and extracting the feature points of its color-invariant image;
d. matching the feature points of the to-be-identified image's color-invariant image against the feature points of the sample images' color-invariant images;
e. judging, from the distribution of the successfully matched feature points, whether the target object is present in the image to be identified;
f. if the target object is present, segmenting the target object's contour based on the successfully matched feature points.
In step b, computing the color-invariant image of each sample image comprises:
computing the spectral radiance E of the object together with its first derivative E_λ and second derivative E_λλ with respect to wavelength. The RGB components of the image and (E, E_λ, E_λλ) are approximately related by

$$\begin{pmatrix} E \\ E_\lambda \\ E_{\lambda\lambda} \end{pmatrix} = \begin{pmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.60 & 0.17 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}$$

where R, G, B are the gray values of the three channels of the RGB image, E is the spectral radiance of the object, and E_λ and E_λλ are its first and second derivatives. The color-invariant image is then computed as

$$H = \frac{E_\lambda}{E_{\lambda\lambda}}$$

where H is the color-invariant image.
In steps b and c, the affine scale-invariant feature transform (ASIFT) is used to extract ASIFT feature points from the color-invariant images and to generate the SIFT descriptors of those feature points, as follows:
(1) according to an affine camera model, apply affine transformations with different parameters to the input image, simulating the image deformations caused by different shooting angles; varying the affine parameters yields a series of images;
(2) detect the ASIFT feature points in the affine-transformed images obtained in step (1) using the SIFT algorithm;
(3) compute the SIFT descriptor, preferably 128-dimensional, of each feature point detected in step (2), and build the feature point set.
Step (2) comprises the following steps:
(2.1) build a Gaussian pyramid of the input image; differencing adjacent levels of the Gaussian pyramid yields the difference-of-Gaussians (DoG) image D(x, y, δ), where x, y are image coordinates and δ is the standard deviation of the Gaussian used to build the pyramid;
(2.2) detect the local extrema in the DoG images;
(2.3) discard the unstable extrema according to established criteria; the remaining extrema are the feature points finally detected by SIFT.
Computing the SIFT descriptors in step (3) comprises the following steps:
(3.1) compute the gradient directions and magnitudes of the image around each feature point to obtain the feature point's dominant orientation;
(3.2) choose an orientation-collection region near the feature point to bound its range of influence; the size of this region depends on the scale of the image the point lies in, with larger scales giving larger regions;
(3.3) accumulate the gradient magnitudes and directions of the pixels in the collection region into histograms, and apply Gaussian weighting to obtain the 128-dimensional SIFT descriptor.
In step d, a nearest-neighbor matching algorithm is used, matching feature points by Euclidean distance.
In step d, let s^m denote the m-th feature point in the training sample set and w^n the n-th feature point in the image to be identified; feature points are matched by

$$n' = \arg\min_n \sum_{i=1}^{128} \left(s_i^m - w_i^n\right)^2$$

That is, the n'-th feature point of the image to be identified minimizes the Euclidean distance; if this distance is below a set threshold, the n'-th feature point of the image to be identified and the m-th feature point of the training sample are considered successfully matched.
The same operation is performed for every feature point in the training sample set until matching is complete.
Step e comprises:
dividing the image to be identified into M×N rectangular regions and counting the matched feature points in each region; if at least one rectangular block contains more successfully matched feature points than a set threshold, the target object is judged to be present in the image to be identified, and otherwise absent, where M, N ≥ 2.
Step f comprises:
(1) for each rectangular region, if the number of matched feature points exceeds the set threshold, the feature points in that region are considered valid; all valid feature points in the image to be identified are collected, and the feature points not judged valid are discarded;
(2) computing the convex hull region A formed by the valid feature points and dilating it with a morphological algorithm, preferably to 1.5 times its original area, to obtain region B; the annular region B minus A is region C, and the part of the image outside region B is region D;
(3) treating region A as the foreground region, region C as the undetermined region, and region D as the background region;
(4) segmenting the complete object contour according to the region partition of step (3).
In step (4), the GrabCut algorithm is initialized with the foreground, undetermined, and background regions and then run to segment the object contour.
The present invention accurately locates the target object and segments it automatically, making object recognition and segmentation convenient; for computer-assisted semi-automatic 2D-to-3D video conversion it can greatly reduce the amount of manual work.
The present invention extracts local invariant features from the sample images of the object to build an object feature set, and matches the local invariant features of the image to be identified against that set; this determines whether the object to be identified is present in the image, and if it is, the target object is segmented according to the matching result.
Because the color of the sample object and the color of the object in the image to be identified are essentially consistent, the present invention preferably uses an ASIFT algorithm based on color invariants. ASIFT, built on an affine camera model, simulates all possible affine distortions by imitating changes in the camera's optical-axis direction, so it has good affine invariance while retaining SIFT's scale invariance. By feeding the color-invariant image into ASIFT, the method makes full use of both the object's color and its local invariant features; it therefore achieves good recognition accuracy, detects more feature points than plain SIFT, and helps segment the contour of the object to be identified more accurately.
Brief Description of the Drawings
Fig. 1 is a flow chart of an embodiment of the image processing method for converting 2D video into 3D video according to the present invention;
Fig. 2 is a schematic diagram of the SIFT algorithm detecting local extrema in a DoG image in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the region partition based on valid feature points in an embodiment of the present invention.
Detailed Description
Embodiments of the invention are described in detail below with reference to the drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope or application of the invention.
Referring to Fig. 1, an image processing method for converting 2D video into 3D video comprises the following steps:
a. taking images of a target object shot from different angles as sample images;
b. computing the color-invariant image of each sample image and extracting the feature points of the sample images' color-invariant images;
c. computing the color-invariant image of the image to be identified and extracting the feature points of its color-invariant image;
d. matching the feature points of the to-be-identified image's color-invariant image against the feature points of the sample images' color-invariant images;
e. judging, from the distribution of the successfully matched feature points, whether the target object is present in the image to be identified;
f. if the target object is present, segmenting the target object's contour based on the successfully matched feature points.
In a specific embodiment, the image processing method comprises the following steps:
1. Compute the color-invariant image of the target object.
Images of the target object shot from different angles are used as sample images, and the color-invariant image H_k (k = 1, 2, …, n, where n is the number of sample images) of each sample image is computed according to Kubelka-Munk theory.
The color-invariant image is computed as follows: first compute the spectral radiance E of the object and its first and second derivatives E_λ and E_λλ. Under the CIE 1964 XYZ reference conditions, the RGB components of a color image and (E, E_λ, E_λλ) are approximately related by

$$\begin{pmatrix} E \\ E_\lambda \\ E_{\lambda\lambda} \end{pmatrix} = \begin{pmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.60 & 0.17 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}$$

where R, G, B are the gray values of the three channels of the RGB image and (E, E_λ, E_λλ) are the spectral radiance and its first and second derivatives. The color-invariant image is then

$$H = \frac{E_\lambda}{E_{\lambda\lambda}}$$

where H is the color-invariant image.
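As a concrete illustration, the color-invariant computation of step 1 can be sketched in a few lines of NumPy. The function and variable names are ours, not from the patent, and a small `eps` is added to guard against division by zero, which the patent does not specify:

```python
import numpy as np

# 3x3 matrix relating the RGB channels to (E, E_lambda, E_lambdalambda),
# taken directly from the formula in the description
KM = np.array([[0.06,  0.63,  0.27],
               [0.30,  0.04, -0.35],
               [0.34, -0.60,  0.17]])

def color_invariant(rgb, eps=1e-6):
    """rgb: H x W x 3 float array in (R, G, B) channel order.
    Returns the color-invariant image H = E_lambda / E_lambdalambda."""
    e = np.tensordot(rgb, KM.T, axes=1)   # per-pixel (E, E_l, E_ll)
    return e[..., 1] / (e[..., 2] + eps)
```

For a gray pixel (R = G = B) both numerator and denominator are small multiples of the same intensity, so H stays constant across gray levels; this scale-free ratio is the sense in which the quantity is invariant.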
2. Build the target object feature point set as the training sample set. Using the affine scale-invariant feature transform (ASIFT), extract the ASIFT feature points of each color-invariant image H_k and compute their descriptors. The feature points are described with SIFT descriptors, which are 128-dimensional feature vectors.
3. Obtain the color-invariant image H of the image to be identified with the method of step 1, then use the ASIFT algorithm of step 2 to extract the feature points of that color-invariant image and compute their descriptors.
The concrete steps of ASIFT are:
(1) according to an affine camera model, apply affine transformations with different parameters to the input image, simulating the image deformations caused by different shooting angles; varying the affine parameters yields a series of images;
(2) detect the ASIFT feature points in the affine-transformed images obtained in step (1) using the SIFT algorithm;
(3) compute the SIFT descriptors of the feature points detected in step (2) and build the target object feature point set.
SIFT (Scale-Invariant Feature Transform) is an important algorithm for detecting scale-invariant feature points in images and is highly robust: the feature points it detects remain stable under moderate rotation, scale change, and illumination change.
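The affine simulation of step (1) can be sketched as follows. The tilt-and-rotation sampling (tilts √2^k, rotation step 72°/t) follows the published ASIFT scheme; the function names are ours:

```python
import numpy as np

def asift_affine_matrix(tilt, phi_rad):
    """2x2 map A = T_tilt @ R_phi: rotate the image by phi, then
    compress one axis by `tilt`, simulating a camera whose optical
    axis is inclined by arccos(1/tilt) from the frontal view."""
    r = np.array([[np.cos(phi_rad), -np.sin(phi_rad)],
                  [np.sin(phi_rad),  np.cos(phi_rad)]])
    t = np.array([[1.0 / tilt, 0.0],
                  [0.0,        1.0]])
    return t @ r

def asift_pose_samples(max_k=3):
    """Enumerate (tilt, phi) pairs covering the affine half-sphere:
    tilts sqrt(2)^k and rotations in steps of 72/tilt degrees."""
    poses = [(1.0, 0.0)]                      # the untransformed image
    for k in range(1, max_k + 1):
        tilt = 2.0 ** (k / 2.0)
        step = np.deg2rad(72.0 / tilt)
        phi = 0.0
        while phi < np.pi - 1e-9:             # rotations in [0, 180)
            poses.append((tilt, phi))
            phi += step
    return poses
```

Each sampled pose is applied to the input image as an affine warp, SIFT is run on every warped copy, and the union of the detections forms the ASIFT feature set.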
Preferably, the concrete steps of detecting the ASIFT feature points with the SIFT algorithm are:
(1) build a Gaussian pyramid of the input image; differencing adjacent levels yields the difference-of-Gaussians (DoG) image D(x, y, δ), where x, y are image coordinates and δ is the standard deviation of the Gaussian used to build the pyramid;
(2) detect the local extrema in the DoG images. As shown in Fig. 2, a preferred way to detect a local extremum is to compare a pixel first with its 8 neighbors in the same level and then with the 9 pixels at the corresponding positions in each of the two adjacent scale levels; if its gray value is larger than, or smaller than, the gray values of all 26 neighboring pixels, the pixel is taken as a local extremum;
(3) discard the extrema judged insufficiently stable according to established criteria; the remaining extrema are the feature points finally detected by SIFT.
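The 26-neighbor comparison of step (2) can be sketched directly on a stack of DoG levels. This is a minimal sketch with names of our choosing; `dog` is assumed to be a 3-D array of scale levels, and border handling is omitted:

```python
import numpy as np

def is_local_extremum(dog, s, y, x):
    """True if dog[s, y, x] is strictly larger, or strictly smaller,
    than all 26 neighbours: 8 in its own level plus 9 in each of the
    two adjacent DoG levels."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].ravel()
    v, others = cube[13], np.delete(cube, 13)  # index 13 is the centre
    return bool((v > others).all() or (v < others).all())
```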
Preferably, the concrete steps of computing the SIFT descriptors are:
(1) compute the gradient directions and magnitudes of the image around each feature point to obtain the feature point's dominant orientation;
(2) choose an orientation-collection region near the feature point to bound its range of influence; the size of this region depends on the scale of the image the point lies in, with larger scales giving larger regions;
(3) accumulate the gradient magnitudes and directions of the pixels in the collection region into histograms, and apply Gaussian weighting to obtain the 128-dimensional SIFT descriptor.
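The core of steps (1)-(3) is a Gaussian-weighted gradient-orientation histogram. A minimal NumPy sketch for a single patch follows; in the full descriptor, sixteen such 8-bin histograms over a 4×4 grid of sub-cells are concatenated into the 128 dimensions (the names here are ours):

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """8-bin gradient-orientation histogram of a square patch,
    Gaussian-weighted from the patch centre."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    sigma = w / 2.0
    weight = np.exp(-((yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2)
                    / (2 * sigma ** 2))
    bins = np.minimum((ang * n_bins / (2 * np.pi)).astype(int),
                      n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), (mag * weight).ravel())
    return hist
```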
4. Match the feature points of the image to be identified against the target object feature points.
Preferably, a nearest-neighbor matching algorithm is used, and descriptors are matched by Euclidean distance.
Let s^m be the m-th feature point in the target object training sample set and w^n the n-th feature point in the image to be identified. The matching feature point of s^m is obtained from the Euclidean distance by the optimization

$$n' = \arg\min_n \sum_{i=1}^{128} \left(s_i^m - w_i^n\right)^2$$

where i indexes the dimensions of the descriptor, preferably 128-dimensional. The formula expresses that the n'-th feature point of the image to be identified minimizes the Euclidean distance; if this distance is below a set threshold, the n'-th feature point of the image to be identified and the m-th feature point of the training sample are considered successfully matched.
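Assuming the descriptors are stored row-wise in NumPy arrays, the nearest-neighbor match of step 4 reduces to one argmin per training descriptor. This brute-force sketch uses names of our choosing; at scale a k-d tree or FLANN index would replace the loop:

```python
import numpy as np

def match_features(train_desc, query_desc, threshold):
    """For each training descriptor s^m, find the query descriptor
    w^{n'} minimising the Euclidean distance; keep the pair (m, n')
    only when that distance is below `threshold`."""
    matches = []
    for m, s in enumerate(train_desc):
        d2 = ((query_desc - s) ** 2).sum(axis=1)  # squared distances
        n_best = int(np.argmin(d2))
        if np.sqrt(d2[n_best]) < threshold:
            matches.append((m, n_best))
    return matches
```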
5. Judge from the matching results whether the target object is present in the image to be identified.
The concrete judgment method is:
divide the image to be identified into M×N rectangular regions according to its aspect ratio, with M, N ≥ 2, and count the matched feature points in each region. If at least one rectangular block contains more successfully matched feature points than a set threshold, the target object is considered present in the image; otherwise it is considered absent. Preferably, each rectangular region covers 20×20 to 30×30 pixels, with M and N determined by the image dimensions and the region size, and the threshold is 10.
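Step 5 reduces to counting matched points per grid cell; a minimal sketch with assumed names:

```python
import numpy as np

def object_present(points, img_w, img_h, m, n, min_count=10):
    """Divide the image into an m x n grid and report True when some
    cell holds at least `min_count` matched feature points (the
    patent suggests a threshold of 10 and cells of 20x20-30x30 px)."""
    counts = np.zeros((m, n), dtype=int)
    for x, y in points:
        i = min(int(y * m / img_h), m - 1)   # row index
        j = min(int(x * n / img_w), n - 1)   # column index
        counts[i, j] += 1
    return bool((counts >= min_count).any())
```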
6. If the target object is present, segment the object contour based on the successfully matched feature points.
Preferably, the matched feature points are used as seed points to initialize the GrabCut algorithm, which is then run to segment the object contour accurately.
The concrete steps of initializing GrabCut from the feature points are:
(1) take the rectangular regions of step 5; if the number of matched feature points in a region exceeds the threshold, the feature points in that region are considered valid. All valid feature points in the image are collected, and the others are discarded;
(2) as shown in Fig. 3, compute with computational-geometry techniques the convex hull region A formed by the valid feature points, and dilate it with a morphological algorithm to a set multiple of its original area, preferably 1.5 times, to obtain region B. The annular region B minus A is denoted C, and the part of the image outside region B is denoted D;
(3) use region A as the sure foreground required by GrabCut initialization, region C as the undetermined region, and region D as the background region.
GrabCut is then run to segment the complete object contour.
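The region partition of steps (1)-(3) maps directly onto the mask values OpenCV's GrabCut expects. Here is a sketch of building that initial mask from two boolean masks (the hull region A and its dilation B); the actual `cv2.grabCut` call is shown only as a comment, since it additionally needs the color image and model buffers:

```python
import numpy as np

def grabcut_trimap(hull_mask, dilated_mask):
    """Initial GrabCut mask: region A (convex hull of valid feature
    points) is sure foreground, the ring C = B minus A is left for
    GrabCut to decide, and the rest (region D) is sure background.
    Values follow OpenCV: 0 = GC_BGD, 1 = GC_FGD, 3 = GC_PR_FGD."""
    GC_BGD, GC_FGD, GC_PR_FGD = 0, 1, 3
    mask = np.full(hull_mask.shape, GC_BGD, dtype=np.uint8)
    mask[dilated_mask] = GC_PR_FGD   # ring C: undetermined
    mask[hull_mask] = GC_FGD         # region A: sure foreground
    return mask

# then, with OpenCV:
#   cv2.grabCut(img, mask, None, bg_model, fg_model,
#               5, cv2.GC_INIT_WITH_MASK)
#   contour_mask = (mask == 1) | (mask == 3)
```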
The present invention can detect the target object fairly accurately even when the shooting angle in the target image differs considerably from the shooting angles of the training samples, and can segment the complete contour of the object. For computer-assisted 2D-to-3D video conversion, it greatly reduces manual intervention, improves efficiency, and lowers labor cost.
The above further describes the present invention in conjunction with specific preferred embodiments, but the specific implementation of the invention is not limited to these descriptions. For those of ordinary skill in the art, simple deductions or substitutions made without departing from the concept of the invention shall all be considered to fall within the protection scope of the invention.

Claims (10)

1. An image processing method for converting 2D video into 3D video, characterized by comprising the following steps:
a. taking images of a target object shot from different angles as sample images;
b. computing the color-invariant image of each sample image and extracting the feature points of the sample images' color-invariant images as a training sample set;
c. computing the color-invariant image of the image to be identified and extracting the feature points of its color-invariant image;
d. matching the feature points of the to-be-identified image's color-invariant image against the feature points of the sample images' color-invariant images;
e. judging, from the distribution of the successfully matched feature points, whether the target object is present in the image to be identified;
f. if the target object is present, segmenting the target object's contour based on the successfully matched feature points.
2. The image processing method for converting 2D video into 3D video of claim 1, characterized in that, in step b, computing the color-invariant image of each sample image comprises:
computing the spectral radiance E of the object and its first and second derivatives E_λ and E_λλ; the RGB components of the image and (E, E_λ, E_λλ) are approximately related by

$$\begin{pmatrix} E \\ E_\lambda \\ E_{\lambda\lambda} \end{pmatrix} = \begin{pmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.60 & 0.17 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}$$

where R, G, B are the gray values of the three channels of the RGB image, E is the spectral radiance of the object, and E_λ and E_λλ are its first and second derivatives; the color-invariant image is computed as

$$H = \frac{E_\lambda}{E_{\lambda\lambda}}$$

where H is the color-invariant image.
3. The image processing method for converting 2D video into 3D video of claim 1, characterized in that, in steps b and c, the affine scale-invariant feature transform (ASIFT) is used to extract the ASIFT feature points of the color-invariant images and to generate their SIFT descriptors, the process comprising:
(1) according to an affine camera model, applying affine transformations with different parameters to the input image, simulating the image deformations caused by different shooting angles, whereby varying the affine parameters yields a series of images;
(2) detecting the ASIFT feature points in the affine-transformed images obtained in step (1) using the SIFT algorithm;
(3) computing the SIFT descriptor, preferably 128-dimensional, of each feature point detected in step (2), and building the feature point set.
4. The image processing method for converting 2D video into 3D video of claim 3, characterized in that step (2) comprises:
(2.1) building a Gaussian pyramid of the input image and differencing adjacent levels to obtain the difference-of-Gaussians (DoG) image D(x, y, δ), where x, y are image coordinates and δ is the standard deviation of the Gaussian used to build the pyramid;
(2.2) detecting the local extrema in the DoG images;
(2.3) discarding the unstable extrema according to established criteria, the remaining extrema being the feature points finally detected by SIFT.
5. The image processing method for converting 2D video into 3D video of claim 3, characterized in that computing the SIFT descriptors in step (3) comprises:
(3.1) computing the gradient directions and magnitudes of the image around each feature point to obtain the feature point's dominant orientation;
(3.2) choosing an orientation-collection region near the feature point to bound its range of influence, the size of the region depending on the scale of the image the point lies in, with larger scales giving larger regions;
(3.3) accumulating the gradient magnitudes and directions of the pixels in the collection region into histograms, and applying Gaussian weighting to obtain the 128-dimensional SIFT descriptor.
6. The image processing method for converting 2D video into 3D video of claim 1, characterized in that, in step d, a nearest-neighbor matching algorithm is used, matching feature points by Euclidean distance.
7. The image processing method for converting 2D video into 3D video of claim 6, characterized in that, in step d, with s^m denoting the m-th feature point in the training sample set and w^n the n-th feature point in the image to be identified, feature points are matched by

$$n' = \arg\min_n \sum_{i=1}^{128} \left(s_i^m - w_i^n\right)^2$$

the formula expressing that the n'-th feature point of the image to be identified minimizes the Euclidean distance; if this distance is below a set threshold, the n'-th feature point of the image to be identified and the m-th feature point of the training sample are considered successfully matched,
and the same operation is performed for every feature point in the training sample set until matching is complete.
8. The image processing method for converting 2D video into 3D video of any one of claims 1 to 7, characterized in that step e comprises:
dividing the image to be identified into M×N rectangular regions and counting the matched feature points in each region; if at least one rectangular block contains more successfully matched feature points than a set threshold, judging that the target object is present in the image to be identified, and otherwise absent, where M, N ≥ 2.
9. The image processing method for converting 2D video into 3D video of claim 8, characterized in that step f comprises:
(1) for each rectangular region, if the number of matched feature points exceeds the set threshold, considering the feature points in that region valid, collecting all valid feature points in the image to be identified, and discarding the feature points not judged valid;
(2) computing the convex hull region A formed by the valid feature points and dilating it with a morphological algorithm, preferably to 1.5 times its original area, to obtain region B, the annular region B minus A being region C and the part of the image outside region B being region D;
(3) using region A as the foreground region, region C as the undetermined region, and region D as the background region;
(4) segmenting the complete object contour according to the region partition of step (3).
10. The image processing method for converting 2D video into 3D video of claim 9, characterized in that, in step (4), the GrabCut algorithm is initialized with said foreground region, said undetermined region, and said background region and then run to segment the object contour.
CN201310501309.4A 2013-10-22 2013-10-22 Image processing method for converting 2D video into 3D video Active CN103533332B (en)

Priority Applications (1)

Application CN201310501309.4A, priority date 2013-10-22, filing date 2013-10-22: Image processing method for converting 2D video into 3D video
Publications (2)

CN103533332A (application publication), 2014-01-22
CN103533332B (granted patent), 2016-01-20

Family

ID=49934950

Family Applications (1)

Application CN201310501309.4A (Active), priority date 2013-10-22, filing date 2013-10-22: Image processing method for converting 2D video into 3D video

Country Status (1)

CN: CN103533332B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020489A1 * 2015-08-03 2017-02-09 BOE Technology Group Co., Ltd. Virtual reality display method and system
CN108090891A * 2017-11-01 2018-05-29 Zhejiang A&F University Method and system for detecting omitted cell regions and newly added cell regions
CN108629261A * 2017-03-24 2018-10-09 Wistron Corporation Remote identity recognition method and system and computer readable recording medium
CN110580691A * 2019-09-09 2019-12-17 BOE Technology Group Co., Ltd. Dynamic image processing method, apparatus and device, and computer-readable storage medium
CN110610453A * 2019-09-02 2019-12-24 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101657839A (en) * 2007-03-23 2010-02-24 Thomson Licensing System and method for region classification of 2D images for 2D-to-3D conversion
CN102012939A (en) * 2010-12-13 2011-04-13 National University of Defense Technology Method for automatic annotation and matching of animation scenes by jointly using global color features and local invariant features
CN102231191A (en) * 2011-07-17 2011-11-02 Xidian University Multimodal image feature extraction and matching method based on ASIFT (affine scale invariant feature transform)
US20120051625A1 (en) * 2010-08-23 2012-03-01 Texas Instruments Incorporated Method and Apparatus for 2D to 3D Conversion Using Scene Classification and Face Detection
US20120235988A1 (en) * 2010-09-10 2012-09-20 Dd3D, Inc. Systems and methods for converting two-dimensional images into three-dimensional images

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020489A1 (en) * 2015-08-03 2017-02-09 BOE Technology Group Co., Ltd. Virtual reality display method and system
US9881424B2 (en) 2015-08-03 2018-01-30 Boe Technology Group Co., Ltd. Virtual reality display method and system
CN108629261A (en) * 2017-03-24 2018-10-09 Wistron Corporation Remote identity recognition method and system and computer readable recording medium
CN108629261B (en) * 2017-03-24 2021-04-09 Wistron Corporation Remote identity recognition method and system and computer readable recording medium
CN108090891A (en) * 2017-11-01 2018-05-29 Zhejiang A&F University Method and system for detecting omitted cell regions and newly added cell regions
CN110610453A (en) * 2019-09-02 2019-12-24 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device and computer readable storage medium
CN110580691A (en) * 2019-09-09 2019-12-17 BOE Technology Group Co., Ltd. Image dynamic processing method, apparatus and device, and computer readable storage medium

Also Published As

Publication number Publication date
CN103533332B (en) 2016-01-20

Similar Documents

Publication Publication Date Title
CN104331682B Automatic building identification method based on Fourier descriptors
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN103077521B Region-of-interest extraction method for video surveillance
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN104835175B Object detection method in nuclear environments based on a visual attention mechanism
CN108171715B (en) Image segmentation method and device
CN104091324A (en) Quick checkerboard image feature matching algorithm based on connected domain segmentation
CN102592288B Method for matching and tracking pedestrian targets under changing illumination conditions
CN104268853A (en) Infrared image and visible image registering method
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN104537689B Target tracking method based on joint local-contrast saliency features
CN103533332B Image processing method for converting 2D video into 3D video
CN104517095A Head segmentation method based on depth images
CN110245566B (en) Infrared target remote tracking method based on background features
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
Malekabadi et al. A comparative evaluation of combined feature detectors and descriptors in different color spaces for stereo image matching of tree
CN106709432B Human head detection and counting method based on binocular stereo vision
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN104361573B SIFT feature matching algorithm fusing color information and global information
CN109410272B (en) Transformer nut recognition and positioning device and method
CN106355576A (en) SAR image registration method based on MRF image segmentation algorithm
CN109544608B (en) Unmanned aerial vehicle image acquisition characteristic registration method
CN103186899A Method for extracting affine-scale-invariant feature points
Jung et al. Estimation of 3D head region using gait motion for surveillance video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant