CN103824303A - Image perspective distortion adjusting method and device based on position and direction of photographed object - Google Patents

Image perspective distortion adjusting method and device based on position and direction of photographed object

Info

Publication number
CN103824303A
Authority
CN
China
Prior art keywords
image
point
information
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410096007.8A
Other languages
Chinese (zh)
Inventor
赵立新
焉逢运
史嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Galaxycore Shanghai Ltd Corp
Original Assignee
Galaxycore Shanghai Ltd Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Galaxycore Shanghai Ltd Corp filed Critical Galaxycore Shanghai Ltd Corp
Priority to CN201410096007.8A priority Critical patent/CN103824303A/en
Publication of CN103824303A publication Critical patent/CN103824303A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image perspective distortion adjusting method and device based on the position and direction of a photographed object. The method comprises the following steps: step A, photographing the object with an imaging module while adjusting the focus from infinity to the nearest point, so as to capture a plurality of original images; step B, obtaining depth information of the object by performing computation on the imaging information of the original images; and step C, performing perspective distortion processing on a first original image among the original images using the depth information. With the method and device, several pictures of the same scene at different shooting angles are obtained with the imaging module, the scene depth is calculated by performing computation on the imaging information of the original images, and the captured image is warped on the basis of the obtained scene depth, thereby overcoming perspective distortion.

Description

Method and apparatus for adjusting image perspective distortion based on the position and direction of a photographed object
Technical field
The present invention relates to image processing technology, and in particular to a method and apparatus for adjusting image perspective distortion based on the position and direction of a photographed object.
Background art
Owing to restrictions on the shooting position, or to composition needs, it is often impossible to keep the camera sensor parallel to the subject. As shown in Fig. 1, taking the shooting of a building as an example, the whole building cannot be photographed because of the imaging geometry, so the shooting angle of the camera has to be adjusted. If, however, the shooting angle is adjusted in the manner shown in Fig. 2, the bottom of the building is closer to the lens while the top is farther away, so the depth of each part relative to the lens changes. Correspondingly, the imaged size of each part on the sensor also changes, producing perspective distortion.
With the popularity of smartphones, applications such as video calling and self-portraits are in wide use. Existing shooting techniques, however, do not take into account the above imaging-geometry problem caused by the shooting distance, so the captured picture deviates from reality to some extent; for example, a face in a self-portrait generally looks fuller than it actually is, which degrades the shooting result and the user experience.
Summary of the invention
The problem solved by the embodiments of the present invention is how to overcome the perspective distortion produced during imaging.
To address the above problem, an embodiment of the present invention provides a method for adjusting image perspective distortion based on the position and direction of a photographed object, comprising the following steps. Step A: photograph the object with an imaging module while adjusting the focus from infinity to the nearest point, capturing several original images. Step B: obtain depth information of the object by performing computation on the imaging information of the original images. Step C: perform perspective distortion processing on a first original image among the original images using the depth information.
Optionally, when the focal plane and the object do not lie entirely in one plane, step B further comprises: matching based on features of the imaging information, matching based on regions of the imaging information, or matching based on phase of the imaging information, and, after corresponding intermediate parameter information has been obtained, further calculating the depth information.
Optionally, the feature-based matching of the imaging information comprises the following steps.

Step B1: a horizontal dual camera is adopted, and the captured original images are $I_L$ and $I_R$ respectively.

Step B2 (feature extraction step): the detection value $C$ of a single pixel in the template is obtained as

$$C(x,y)=\begin{cases}1, & |I(x,y)-I(x_0,y_0)|\le t\\ 0, & |I(x,y)-I(x_0,y_0)|> t,\end{cases}$$

the detection being carried out for every pixel in the template, where $I(x_0,y_0)$ is the gray value of the template centre, $I(x,y)$ is the gray value of another pixel in the template, $t$ is the threshold determining the degree of similarity, and $(x,y)$ are coordinates in a coordinate system whose origin is the lower-left corner of the source image $I$. The detection values $C$ of the points belonging to the template $A$ are summed, giving the output run sum

$$S(x_0,y_0)=\sum_{(x,y)\in A}C(x,y).$$

The feature value $R$ of the corresponding point $(x_0,y_0)$ of the source image $I$ is

$$R(x_0,y_0)=\begin{cases}h-S(x_0,y_0), & S(x_0,y_0)<h\\ 0, & S(x_0,y_0)\ge h,\end{cases}$$

where $h$ is a geometric threshold with $h=3S_{\max}/4$ and $S_{\max}$ is the maximal value the run sum $S$ can take. The two original images $I_L$ and $I_R$ are processed in this way, giving the feature maps $H_L$ and $H_R$ respectively.

Step B3 (disparity matrix computation step): a rectangular window $Q$ of width $m$ and height $n$ is created centred on the point $(x_0,y_0)$ to be matched in $H_L$; in $H_R$, at a horizontal offset $dx$ within the disparity range, another rectangular window $Q'$ of the same size $m\times n$ is taken adjacent to the point $(x_0,y_0)$ to be matched; the rectangular window $Q$ of the first feature map $H_L$ is compared with the rectangular window $Q'$ of the second feature map $H_R$. The matching coefficient between the $m\times n$ window centred on $(x_0,y_0)$ in $H_L$ and the window of corresponding size at horizontal offset $dx$ in $H_R$ is

$$\Gamma_{dx}(x_0,y_0)=\sum_{(i,j)\in Q}\left[H_R(x_0+i+dx,\,y_0+j)-H_L(x_0+i,\,y_0+j)\right]^2,$$

where $(i,j)$ are coordinates in the coordinate system of the rectangular window $Q$. A geometric threshold $k$ is preset; if $\Gamma_{dx}(x_0,y_0)\le k$, the match is successful. Here $dx$ is the value at which $\Gamma_{dx}(x_0,y_0)$ attains its minimum, and the offset value $dx$ of a successfully matched point is recorded in the disparity matrix $D$: $D(x_0,y_0)=dx$. After the feature map $H_L$ has been traversed, the disparity matrix $D$ is interpolated to estimate coordinates for the successfully matched feature points and the unmatched extracted feature points; the offset information contained in the disparity matrix $D$ is used to compute depth.

Step B4: for a point $P_{LT}(x_1,y_1,z_1)$ on $I_L$, by matching the point $P_{RT}(x_2,y_2,z_2)$ on $I_R$, the depth $z_0$ of the space point $P_W(x_0,y_0,z_0)$ is calculated. For any point $(x,y)$ on the source image $I_L$, given the optical-axis baseline length $b$ of the imaging module and the lens focal length $f$, the depth of its corresponding space point is

$$Z(x,y)=\frac{f\times b}{D(x,y)},$$

where $D$ is the disparity matrix containing the offset information.
Optionally, when the focal plane and the object lie in one plane parallel to the lens of the imaging module, the depth information of the object is calculated from the formula $1/L' - 1/L = 1/F'$, where $L$ is the object distance, $L'$ is the image distance and $F'$ is the lens focal length; the depth information is the object distance.
Optionally, in step C, for the point $P_{LT}=(x_1,y_1,z_1)$, its space coordinate in the coordinate system whose origin is the sensor is $P_W=(x_0,y_0,z_0)$. If the space is rotated by an angle $\theta$, the projected point is $P_W'=(x_0',y_0',z_0')$, with

$$\alpha=\arctan\left(\frac{y_0}{z_0}\right);\quad y_0'=\sqrt{y_0^2+z_0^2}\times\sin(\theta+\alpha);\quad z_0'=\sqrt{y_0^2+z_0^2}\times\cos(\theta+\alpha);\quad x_0'=x_0.$$

The theoretical image point $P_{LT}'=(x_1',y_1',z_1')$ of $P_W'=(x_0',y_0',z_0')$ on the left sensor is then recalculated:

$$x_1'=\frac{x_0'\times f}{z_0'};\qquad y_1'=\frac{y_0'\times f}{z_0'};\qquad z_1'=f.$$

Every point is calculated in turn to obtain an image E; the image E is interpolated and cropped to obtain the processed first original image G.

Optionally, the rotation angle $\theta$ is obtained in any one of the following ways: an acceleration sensor, user specification, or image modelling.
An embodiment of the present invention also provides a device for adjusting image perspective distortion based on the position and direction of a photographed object, comprising: a shooting module, which photographs the object from multiple angles and captures several original images; a depth calculation module, which obtains depth information of the object by performing computation on the imaging information of the original images; and an image processing module, which performs perspective distortion processing on a first original image among the original images using the depth information.
Optionally, when the focal plane and the object do not lie entirely in one plane, the depth calculation module can, through matching based on features of the imaging information, matching based on regions of the imaging information, or matching based on phase of the imaging information, further calculate the depth information after corresponding intermediate parameter information has been obtained.
Optionally, the feature-based matching of the imaging information by the depth calculation module comprises: a horizontal dual camera is adopted, and the captured original images are $I_L$ and $I_R$ respectively; the detection value $C$ of a single pixel in the template is obtained as

$$C(x,y)=\begin{cases}1, & |I(x,y)-I(x_0,y_0)|\le t\\ 0, & |I(x,y)-I(x_0,y_0)|> t,\end{cases}$$

the detection being carried out for every pixel in the template, where $I(x_0,y_0)$ is the gray value of the template centre, $I(x,y)$ is the gray value of another pixel in the template, $t$ is the threshold determining the degree of similarity, and $(x,y)$ are coordinates in a coordinate system whose origin is the lower-left corner of the source image $I$. The detection values $C$ of the points belonging to the template $A$ are summed, giving the output run sum

$$S(x_0,y_0)=\sum_{(x,y)\in A}C(x,y).$$

The feature value $R$ of the corresponding point $(x_0,y_0)$ of the source image $I$ is

$$R(x_0,y_0)=\begin{cases}h-S(x_0,y_0), & S(x_0,y_0)<h\\ 0, & S(x_0,y_0)\ge h,\end{cases}$$

where $h$ is a geometric threshold with $h=3S_{\max}/4$ and $S_{\max}$ is the maximal value the run sum $S$ can take. The two original images $I_L$ and $I_R$ are processed in this way, giving the feature maps $H_L$ and $H_R$ respectively.

Disparity matrix computation step: a rectangular window $Q$ of width $m$ and height $n$ is created centred on the point $(x_0,y_0)$ to be matched in $H_L$; in $H_R$, at a horizontal offset $dx$ within the disparity range, another rectangular window $Q'$ of the same size $m\times n$ is taken adjacent to the point $(x_0,y_0)$ to be matched; the rectangular window $Q$ of the first feature map $H_L$ is compared with the rectangular window $Q'$ of the second feature map $H_R$. The matching coefficient between the $m\times n$ window centred on $(x_0,y_0)$ in $H_L$ and the window of corresponding size at horizontal offset $dx$ in $H_R$ is

$$\Gamma_{dx}(x_0,y_0)=\sum_{(i,j)\in Q}\left[H_R(x_0+i+dx,\,y_0+j)-H_L(x_0+i,\,y_0+j)\right]^2,$$

where $(i,j)$ are coordinates in the coordinate system of the rectangular window. A geometric threshold $k$ is preset; if $\Gamma_{dx}(x_0,y_0)\le k$, the match is successful, and the offset value $dx$ of a successfully matched point is recorded in the disparity matrix $D$: $D(x_0,y_0)=dx$. After the feature map $H_L$ has been traversed, the disparity matrix $D$ is interpolated to estimate coordinates for the successfully matched feature points and the unmatched extracted feature points; the offset information contained in the disparity matrix $D$ is used to compute depth.

For a point $P_{LT}(x_1,y_1,z_1)$ on $I_L$, by matching the point $P_{RT}(x_2,y_2,z_2)$ on $I_R$, the depth $z_0$ of the space point $P_W(x_0,y_0,z_0)$ is calculated; for any point $(x,y)$ on the source image $I_L$, given the optical-axis baseline length $b$ of the imaging module and the lens focal length $f$, the depth of its corresponding space point is

$$Z(x,y)=\frac{f\times b}{D(x,y)},$$

where $D$ is the disparity matrix containing the offset information.
Optionally, when the focal plane and the object lie in one plane parallel to the lens of the imaging module, the depth information of the object is calculated from the formula $1/L' - 1/L = 1/F'$, where $L$ is the object distance, $L'$ is the image distance and $F'$ is the lens focal length; the depth information is the object distance.
Optionally, in step C, for the point $P_{LT}=(x_1,y_1,z_1)$, its space coordinate in the coordinate system whose origin is the sensor is $P_W=(x_0,y_0,z_0)$, with

$$x_0=\frac{x_1\times z_0}{f};\qquad y_0=\frac{y_1\times z_0}{f}.$$

If the space is rotated by an angle $\theta$, the projected point is $P_W'=(x_0',y_0',z_0')$, with

$$\alpha=\arctan\left(\frac{y_0}{z_0}\right);\quad y_0'=\sqrt{y_0^2+z_0^2}\times\sin(\theta+\alpha);\quad z_0'=\sqrt{y_0^2+z_0^2}\times\cos(\theta+\alpha);\quad x_0'=x_0.$$

The theoretical image point $P_{LT}'=(x_1',y_1',z_1')$ of $P_W'=(x_0',y_0',z_0')$ on the left sensor is recalculated:

$$x_1'=\frac{x_0'\times f}{z_0'};\qquad y_1'=\frac{y_0'\times f}{z_0'};\qquad z_1'=f.$$

Every point is calculated in turn to obtain an image E; the image E is interpolated and cropped to obtain the processed first original image G.

Optionally, the rotation angle $\theta$ is obtained in any one of the following ways: an acceleration sensor, user specification, or image modelling.
Compared with the prior art, the technical solution of the embodiments of the present invention has the following advantages: an imaging module is used to obtain multiple pictures of the same scene at different shooting angles; the scene depth is calculated by performing computation on the imaging information of the original images; and the captured image is warped on the basis of the obtained scene depth, thereby overcoming perspective distortion.
Brief description of the drawings
Fig. 1 is a schematic diagram of photographing a building with an imaging device in the prior art;
Fig. 2 is a schematic diagram of shooting after the shooting angle has been changed in the prior art;
Fig. 3 is a schematic flowchart of a method for adjusting image perspective distortion based on the position and direction of a photographed object according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a SUSAN detection template used in an embodiment of the present invention;
Fig. 5 is a schematic diagram of feature matching in an embodiment of the present invention;
Fig. 6 is a schematic diagram of depth information calculation in an embodiment of the present invention, taking the horizontal dual-camera scheme as an example;
Fig. 7 is a schematic diagram of the horizontal-plane projection of horizontal dual-camera imaging in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the horizontal-plane projection of longitudinal dual-camera imaging in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the focus remapping calculation in an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a device for adjusting image perspective distortion based on the position and direction of a photographed object according to an embodiment of the present invention;
Fig. 11 shows one application form of the present invention in a photographing device, in which one ranging sensor is placed on each of the left and right sides of the photographing device;
Fig. 12 shows another application form of the present invention in a photographing device, in which one side of the photographing device carries a periscope-type master sensor and the other side a miniature sensor for ranging;
Fig. 13 shows another application form of the present invention in a photographing device, in which the photographing device comprises symmetrical periscope-type sensors;
Fig. 14 shows one application form of the present invention in a video device, in which one side of the video device carries the master sensor and the other side a miniature sensor for ranging.
Detailed description of the embodiments
In the existing technical solutions, the shooting angle of the imaging device has to be adjusted during shooting, which brings part of the subject closer to the lens and moves part of it farther away; the depth of each part relative to the lens therefore changes, causing perspective distortion.
In the embodiments of the present invention, an imaging module is used to obtain multiple pictures of the same scene at different shooting angles, and the scene depth is calculated by performing computation on the imaging information of the original images, so that the perspective of the image can be adjusted and perspective distortion prevented.
To make the above objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for adjusting image perspective distortion based on the position and direction of a photographed object. With reference to Fig. 3, it is described in detail through the following concrete steps.
Step A: photograph the object with the imaging module while adjusting the focus from infinity to the nearest point, capturing several original images.
In a concrete implementation, the imaging module can be a single camera or multiple cameras. For repeated imaging with a single camera, a horizontal or vertical displacement of the photographing position between exposures can be arranged with the operator, or a forward or backward displacement of the photographing position between exposures can be arranged. For a dual camera, a predetermined horizontal or vertical displacement can be set by the specific device and the horizontal dual-camera scheme applied to calculate the disparity matrix, or a predetermined forward or backward displacement can be set by the specific device and the longitudinal dual-camera scheme applied to calculate the disparity matrix. Repeated imaging with still more cameras is similar to the dual-camera case: the multiple cameras are applied to calculate the disparity matrix.
Step B: obtain the depth information of the object by performing computation on the imaging information of the original images.
In one specific embodiment, when the object lies in one plane parallel to the lens of the imaging module, the depth information of the object is calculated from the formula $1/L' - 1/L = 1/F'$, where $L$ is the object distance, $L'$ is the image distance and $F'$ is the lens focal length; the depth information here is the object distance, i.e. the focusing distance.
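As a concrete illustration, the sketch below solves the thin-lens relation above for the object distance; the function name and the numeric values are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: solving the thin-lens relation 1/L' - 1/L = 1/F'
# for the object distance L, given the image distance L' and the focal
# length F'. Under this sign convention L comes out negative for a real
# object in front of the lens; its magnitude is the object distance.

def object_distance(image_dist: float, focal_len: float) -> float:
    """Solve 1/L' - 1/L = 1/F' for L (same length unit throughout)."""
    return 1.0 / (1.0 / image_dist - 1.0 / focal_len)

# Illustrative numbers: a 5 mm lens focused with the image plane at 5.05 mm
# corresponds to an object roughly half a metre in front of the lens.
L = object_distance(image_dist=5.05, focal_len=5.0)
print(f"L = {L:.1f} mm")   # ≈ -505.0 mm
```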
In another specific embodiment, when the object does not lie entirely in one plane and the lens, the sensor and the object are not parallel, the depth information can be further calculated through a matching algorithm after corresponding intermediate parameter information has been obtained. Specifically, the matching methods used for ranging fall into three classes: matching based on features of the imaging information, matching based on regions of the imaging information, and matching based on phase of the imaging information.
Among these, the matching primitives used by feature-based matching carry rich statistical properties and offer flexibility in algorithm design, and the method is easy to realise in hardware. Region-based matching is better suited to environments with salient features, such as indoor scenes; it has larger limitations and needs other artificial-intelligence methods to assist. Phase-based matching produces errors in the disparity map owing to, for example, periodic patterns, smooth regions and the occlusion effect, and also needs other methods for error detection and correction, making it relatively complicated.
In the specific embodiment below, a feature-based matching method is used to set forth one concrete realisation of the present invention, in which the disparity computation comprises feature extraction and feature matching. It should be understood, however, that the present invention is not limited to feature-based matching.
Step B1: first the source images I need to be obtained. Taking the horizontal dual camera as an example, the source images obtained by the left and right sensors are $I_L$ and $I_R$ respectively. After pre-processing such as image enhancement, filtering and scaling, the source images enter feature extraction, which can be realised by the following steps.
Step B2: the detection value C of a single pixel in the template is obtained by feature point detection. The feature points are extreme points of the image that essentially possess translation, rotation, scaling and affine invariance, such as pixel gray values, corner points, edges and inflection points. Common feature point detection methods include SUSAN (Smallest Univalue Segment Assimilating Nucleus) corner extraction, Harris corner extraction and SIFT (Scale-Invariant Feature Transform) extraction. As an example, SUSAN corner extraction is described here.
In SUSAN corner extraction, the univalue segment assimilating nucleus is the region within the template that has the same gray level as the nucleus of the template. As shown in Fig. 4, using 37 pixels of the original image as the detection template, the detection value C of each single pixel in the template is obtained through the formula

$$C(x,y)=\begin{cases}1, & |I(x,y)-I(x_0,y_0)|\le t\\ 0, & |I(x,y)-I(x_0,y_0)|> t,\end{cases}$$

where $I(x_0,y_0)$ is the gray value of the template centre, $I(x,y)$ is the gray value of another pixel in the detection template, $t$ is the threshold determining the degree of similarity, and $(x,y)$ are coordinates in a coordinate system whose origin is the lower-left corner of the source image $I$.
After the detection value of each point in the detection template has been calculated, the detection values C are summed, giving the output run sum

$$S(x_0,y_0)=\sum_{(x,y)\in A}C(x,y).$$

Further, the feature value $R(x_0,y_0)$ of the corresponding point $(x_0,y_0)$ of the original image $I$ is calculated according to the formula

$$R(x_0,y_0)=\begin{cases}h-S(x_0,y_0), & S(x_0,y_0)<h\\ 0, & S(x_0,y_0)\ge h,\end{cases}$$

where $h$ is a geometric threshold with $h=3S_{\max}/4$, and $S_{\max}$ is the maximal value the run sum $S$ can take.
Through the above steps, this computation is carried out on the two original images $I_L$ and $I_R$ respectively, giving the corresponding feature maps $H_L$ and $H_R$.
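The following minimal sketch illustrates the SUSAN extraction described above; the circular template radius, the similarity threshold t and the function name are illustrative assumptions.

```python
import numpy as np

def susan_feature_map(img: np.ndarray, t: float = 27.0) -> np.ndarray:
    """Sketch of the SUSAN extraction above: C sums to the run sum S over a
    37-pixel circular template, and R = h - S where S < h, else 0, with
    h = 3 * S_max / 4. Assumes a float grayscale image."""
    # Offsets of the 37-pixel circular template around the nucleus.
    offsets = [(dx, dy) for dy in range(-3, 4) for dx in range(-3, 4)
               if dx * dx + dy * dy <= 3.4 ** 2]
    s_max = float(len(offsets))        # S_max: maximal value of the run sum
    h = 3.0 * s_max / 4.0              # geometric threshold h = 3 * S_max / 4
    H = np.zeros_like(img, dtype=np.float64)
    rows, cols = img.shape
    for y in range(3, rows - 3):
        for x in range(3, cols - 3):
            nucleus = img[y, x]        # template centre gray value I(x0, y0)
            # S(x0, y0) = sum of C(x, y), C = 1 where |I(x,y)-I(x0,y0)| <= t
            s = sum(1.0 for dx, dy in offsets
                    if abs(img[y + dy, x + dx] - nucleus) <= t)
            H[y, x] = h - s if s < h else 0.0
    return H
```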
Step B3: on this basis, further feature matching needs to be done on the obtained feature maps to obtain the disparity matrix. Feature matching can be realised as follows.
As shown in Fig. 5, a rectangular window Q of width m and height n is created centred on the point $(x_0,y_0)$ to be matched in the first feature map $H_L$. In the second feature map $H_R$, at a horizontal offset dx within the disparity range, another rectangular window Q' of the same size m × n is taken adjacent to the point $(x_0,y_0)$ to be matched (the reference point). The rectangular window Q of the first feature map $H_L$ is then compared with the rectangular window Q' of the second feature map $H_R$. If corresponding points with maximal similarity can be matched between the two rectangular windows, the match can be judged optimal.
The two rectangular windows of the feature maps can be matched by a variety of methods. Taking the sum-of-squared gray differences algorithm as an example, the matching coefficient between the m × n window centred on the point $(x_0,y_0)$ to be matched in $H_L$ and the window Q' of corresponding size at horizontal offset dx in $H_R$ is

$$\Gamma_{dx}(x_0,y_0)=\sum_{(i,j)\in Q}\left[H_R(x_0+i+dx,\,y_0+j)-H_L(x_0+i,\,y_0+j)\right]^2,$$

where $(i,j)$ are the coordinates of a point in the coordinate system of the rectangular window.
The origin of this coordinate system is at corresponding positions of the rectangular windows Q and Q'. For example, it can be the coordinate system whose origin is the lower-left corner of Q and Q', or the coordinate system whose origin is the lower-right corner of Q and Q', and so on.
$\Gamma_{dx}(x_0,y_0)$ is compared with the preset geometric threshold k. When $\Gamma_{dx}(x_0,y_0)\le k$, the match can be judged successful; when $\Gamma_{dx}(x_0,y_0)$ attains its minimum value, the templates match completely. The offset value dx of a successfully matched point is recorded in the disparity matrix D: $D(x_0,y_0)=dx$.
The above method is repeated until the whole feature map $H_L$ has been traversed and compared against the feature map $H_R$; the disparity matrix D is then interpolated to estimate coordinates for the successfully matched feature points and the unmatched extracted feature points, forming the complete disparity matrix D.
It is understandable that the above method of obtaining the disparity matrix can equally choose the rectangular window in $H_R$, match it against the corresponding rectangular window in $H_L$, and calculate the complete disparity matrix D by traversing $H_R$ instead. For convenience, the description below is based on the traversal of the feature map $H_L$ in this step.
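A minimal sketch of the window matching above follows; the window size, the disparity search range and the geometric threshold k are illustrative assumptions, and unmatched entries are left as NaN for later interpolation.

```python
import numpy as np

def disparity_matrix(H_L: np.ndarray, H_R: np.ndarray, m: int = 7, n: int = 7,
                     max_dx: int = 32, k: float = 1e4) -> np.ndarray:
    """For each point of H_L, slide an m x n window over H_R by a horizontal
    offset dx and keep the dx minimising the sum of squared differences,
    accepting it only when the cost is at most k (assumes odd m, n)."""
    rows, cols = H_L.shape
    ry, rx = n // 2, m // 2
    D = np.full((rows, cols), np.nan)
    for y in range(ry, rows - ry):
        for x in range(rx, cols - rx - max_dx):
            Q = H_L[y - ry:y + ry + 1, x - rx:x + rx + 1]
            best_dx, best_cost = None, k
            for dx in range(max_dx + 1):
                Qp = H_R[y - ry:y + ry + 1, x + dx - rx:x + dx + rx + 1]
                cost = float(np.sum((Qp - Q) ** 2))  # matching coefficient
                if cost <= best_cost:                # Gamma_dx <= k required
                    best_dx, best_cost = dx, cost
            if best_dx is not None:
                D[y, x] = best_dx                    # D(x0, y0) = dx
    return D   # NaN entries are then filled by interpolation
```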
Step B4: based on the offset information contained in the disparity matrix D, the depth, i.e. the depth of field, is computed. In an optional specific embodiment, taking the horizontal dual-camera scheme as an example, the depth can be computed through the triangulation formula.
As shown in Figs. 6-8, for two sensors placed in parallel, a point $P_{LT}(x_1,y_1,z_1)$ on the source image $I_L$ obtained by the left sensor is matched to the point $P_{RT}(x_2,y_2,z_2)$ on the right sensor imaging the same space point $P_W(x_0,y_0,z_0)$, from which the depth $z_0$ of the space point $P_W(x_0,y_0,z_0)$ can be calculated. For any point $(x,y)$ on the source image $I_L$, given the baseline length b between the optical-axis centres of the two sensors and the lens focal length f, the depth of its corresponding space point is

$$Z(x,y)=\frac{f\times b}{D(x,y)},$$

where D is the disparity matrix containing the offset information calculated in feature matching. By traversing the source image $I_L$, the depth matrix Z of the depth of field is obtained.
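In code, the triangulation step is one element-wise division; the pixel-unit focal length and millimetre baseline in the usage line are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(D: np.ndarray, f: float, b: float) -> np.ndarray:
    """Z(x, y) = f * b / D(x, y); with f in pixels and b in mm, Z is in mm."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return f * b / D

# Illustrative usage: Z = depth_from_disparity(D, f=800.0, b=60.0)
```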
Step C: perform perspective distortion processing on the first original image among the original images using the depth information, improving the effect of the processed first original image.
Taking a ranging sensor arranged on the left side of the imaging device as an example, for a point $P_{LT}=(x_1,y_1,z_1)$ on $I_L$, its space coordinate in the coordinate system whose origin is the sensor is $P_W=(x_0,y_0,z_0)$, where

$$x_0=\frac{x_1\times z_0}{f};\qquad y_0=\frac{y_1\times z_0}{f}.$$

Every point of three-dimensional space is then remapped about a straight line in that space as the axis, forming the final target image. The rotation can be based on the horizontal axis or the vertical axis of the sensor on the left side of the imaging device. The embodiment of the present invention takes rotation about the horizontal axis as an example. Suppose the rotation angle is $\theta$; the projected point is then $P_W'=(x_0',y_0',z_0')$. As can be seen from Fig. 9,

$$\alpha=\arctan\left(\frac{y_0}{z_0}\right);\quad y_0'=\sqrt{y_0^2+z_0^2}\times\sin(\theta+\alpha);\quad z_0'=\sqrt{y_0^2+z_0^2}\times\cos(\theta+\alpha);\quad x_0'=x_0.$$

The theoretical image point $P_{LT}'=(x_1',y_1',z_1')$ of $P_W'=(x_0',y_0',z_0')$ on the left sensor is recalculated, giving

$$x_1'=\frac{x_0'\times f}{z_0'};\qquad y_1'=\frac{y_0'\times f}{z_0'};\qquad z_1'=f.$$

Every point of the photographed image is calculated in turn to obtain a new image E, and the new image E is interpolated and cropped, giving the final target image G.
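A per-pixel sketch of this remapping is given below, assuming the coordinate conventions above; the function name is an assumption.

```python
import math

def reproject_point(x1: float, y1: float, z0: float,
                    f: float, theta: float) -> tuple:
    """Step C for one pixel: back-project the image point (x1, y1) with
    measured depth z0, rotate the space point by theta about the horizontal
    axis through the optical centre, and re-image it (z1' = f)."""
    # Back-projection onto the space point P_w = (x0, y0, z0).
    x0 = x1 * z0 / f
    y0 = y1 * z0 / f
    # Rotation in the y-z plane: alpha = arctan(y0 / z0), radius preserved.
    alpha = math.atan2(y0, z0)
    r = math.hypot(y0, z0)
    y0p = r * math.sin(theta + alpha)
    z0p = r * math.cos(theta + alpha)
    x0p = x0
    # Theoretical image point of the rotated space point.
    return x0p * f / z0p, y0p * f / z0p

# Applying this to every pixel yields image E, which is then interpolated
# and cropped to give the corrected image G.
```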
The above rotation angle $\theta$ is, as described in the background art, the angle by which the user adjusts the shooting direction of the imaging device in order to photograph the object completely. It can be seen that, although the shooting angle has been adjusted, the perspective distortion caused by the change of shooting angle can be overcome through the spatial remapping processing of the embodiment of the present invention.
In the above embodiment, the rotation angle $\theta$ can be obtained by a variety of methods. For example, it can be obtained with an acceleration sensor (G-sensor). An acceleration sensor is an electronic device capable of measuring acceleration forces and includes, for example, a gyroscope that detects changes in angular velocity. Through the acceleration sensor, the rotation angle of the imaging device can be acquired automatically, and the corresponding theoretical image point positions then calculated according to the embodiments of the present invention.
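As a hedged illustration of the G-sensor route (not taken from the patent), the pitch angle can be read off the gravity vector reported by the accelerometer:

```python
import math

def pitch_from_gsensor(ay: float, az: float) -> float:
    """Rotation angle theta (radians) about the horizontal axis, assuming
    gravity lies along the sensor y axis when the device is upright."""
    return math.atan2(az, ay)

theta = pitch_from_gsensor(ay=9.0, az=3.9)        # device tilted back
print(f"theta = {math.degrees(theta):.1f} deg")   # ≈ 23.4 deg
```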
The above embodiment of obtaining the rotation angle through an acceleration sensor is only one example; other methods that can obtain the rotation angle all belong to the technical idea of the present invention. For example, the rotation angle can be specified by the user, or calculated through image modelling.
In the above embodiments of the present invention, arranging the sensor on the left side of the imaging device is likewise only one example; the sensor can also be arranged on the right side or at other positions of the imaging device, or multiple sensors can be provided on the imaging device. It is understandable that all other embodiments can follow the technical idea of the embodiments of the present invention, that is, based on the depth parameters obtained in the ranging step, obtain the theoretical image points on the corresponding sensor through spatial remapping, thereby completing the adjustment of the image perspective distortion; they are therefore not repeated one by one here.
In summary, the embodiments of the present invention perform depth-of-field ranging and complete image correction based on the obtained depth-of-field parameters, which can compensate for the perspective distortion caused by the lens and sensor not being parallel to the object. The depth-of-field ranging obtains the depth data (i.e. the depth of field) of each feature point in the scene through steps such as shooting, matching and depth calculation.
An embodiment of the present invention also provides a corresponding device for adjusting image perspective distortion based on the position and direction of a photographed object. As shown in Fig. 10, it comprises: a shooting module 101, which photographs the object from multiple angles and captures several original images; a depth calculation module 102, which obtains the depth information of the object by performing computation on the imaging information of the original images; and an image processing module 103, which performs perspective distortion processing on the first original image among the original images using the depth information, making the processed first original image consistent with the shape of the object and improving its effect.
In a concrete example, when the focal plane and the object do not lie entirely in one plane, the depth calculation module 102 can, through matching based on features of the imaging information, matching based on regions of the imaging information, or matching based on phase of the imaging information, further calculate the depth information after corresponding intermediate parameter information has been obtained.
In a concrete example, the feature-based matching of the imaging information by the depth calculation module comprises: a horizontal dual camera is adopted, and the captured original images are $I_L$ and $I_R$ respectively; the detection value C of a single pixel in the template is obtained as

$$C(x,y)=\begin{cases}1, & |I(x,y)-I(x_0,y_0)|\le t\\ 0, & |I(x,y)-I(x_0,y_0)|> t,\end{cases}$$

the detection being carried out for every pixel in the template, where $I(x_0,y_0)$ is the gray value of the template centre, $I(x,y)$ is the gray value of another pixel in the template, $t$ is the threshold determining the degree of similarity, and $(x,y)$ are coordinates in a coordinate system whose origin is the lower-left corner of the source image $I$.
The detection values C of the points belonging to the template A are summed, giving the output run sum

$$S(x_0,y_0)=\sum_{(x,y)\in A}C(x,y).$$

The feature value R of the corresponding point $(x_0,y_0)$ of the source image I is

$$R(x_0,y_0)=\begin{cases}h-S(x_0,y_0), & S(x_0,y_0)<h\\ 0, & S(x_0,y_0)\ge h,\end{cases}$$

where h is a geometric threshold with $h=3S_{\max}/4$ and $S_{\max}$ is the maximal value the run sum S can take. The two original images $I_L$ and $I_R$ are processed in this way, giving the feature maps $H_L$ and $H_R$ respectively.
Disparity matrix computation step: a rectangular window Q of width m and height n is created centred on the point $(x_0,y_0)$ to be matched in $H_L$; in $H_R$, at a horizontal offset dx within the disparity range, another rectangular window Q' of the same size m × n is taken adjacent to the point $(x_0,y_0)$ to be matched; the rectangular window Q of the first feature map $H_L$ is compared with the rectangular window Q' of the second feature map $H_R$.
The matching coefficient between the m × n window centred on the point $(x_0,y_0)$ to be matched in $H_L$ and the window of corresponding size at horizontal offset dx in $H_R$ is

$$\Gamma_{dx}(x_0,y_0)=\sum_{(i,j)\in Q}\left[H_R(x_0+i+dx,\,y_0+j)-H_L(x_0+i,\,y_0+j)\right]^2,$$

where $(i,j)$ are coordinates in the coordinate system of the rectangular window. A geometric threshold k is preset; if $\Gamma_{dx}(x_0,y_0)\le k$, the match is successful, and the offset value dx of a successfully matched point is recorded in the disparity matrix D: $D(x_0,y_0)=dx$.
After the feature map $H_L$ has been traversed, the disparity matrix D is interpolated to estimate coordinates for the successfully matched feature points and the unmatched extracted feature points; the offset information contained in the disparity matrix D is used to compute depth.
For a point $P_{LT}(x_1,y_1,z_1)$ on $I_L$, by matching the point $P_{RT}(x_2,y_2,z_2)$ on $I_R$, the depth $z_0$ of the space point $P_W(x_0,y_0,z_0)$ is calculated; for any point $(x,y)$ on the source image $I_L$, given the optical-axis baseline length b of the imaging module and the lens focal length f, the depth of its corresponding space point is

$$Z(x,y)=\frac{f\times b}{D(x,y)},$$

where D is the disparity matrix containing the offset information.
In a concrete example, when the focal plane and the object lie in one plane parallel to the lens of the imaging module, the depth information of the object is calculated from the formula $1/L' - 1/L = 1/F'$, where L is the object distance, L' is the image distance and F' is the lens focal length; the depth information is the object distance.
In a concrete example, in the perspective distortion processing, for a point $P_{LT}=(x_1,y_1,z_1)$ on $I_L$, its space coordinate in the coordinate system whose origin is the left sensor is $P_W=(x_0,y_0,z_0)$, with

$$x_0=\frac{x_1\times z_0}{f};\qquad y_0=\frac{y_1\times z_0}{f}.$$

If the space is rotated by an angle $\theta$, the projected point is $P_W'=(x_0',y_0',z_0')$, with

$$\alpha=\arctan\left(\frac{y_0}{z_0}\right);\quad y_0'=\sqrt{y_0^2+z_0^2}\times\sin(\theta+\alpha);\quad z_0'=\sqrt{y_0^2+z_0^2}\times\cos(\theta+\alpha);\quad x_0'=x_0.$$

The theoretical image point $P_{LT}'=(x_1',y_1',z_1')$ of $P_W'=(x_0',y_0',z_0')$ on the left sensor is recalculated:

$$x_1'=\frac{x_0'\times f}{z_0'};\qquad y_1'=\frac{y_0'\times f}{z_0'};\qquad z_1'=f.$$

According to the above method, every point of the photographed image is calculated in turn to obtain an image E; the image E is interpolated and cropped, giving the processed first original image G.
In a concrete example, the rotation angle $\theta$ is obtained through an acceleration sensor, user specification, or image modelling calculation.
It is understandable that the above embodiments of the present invention can be applied to all kinds of imaging devices, for example smartphones, cameras or video cameras. Since the embodiments of the present invention require no large-size sensor, no long-focal-length large-aperture lens and no devices such as a movable lens axis, they can be used in miniaturised imaging devices, significantly reducing the hardware cost and space requirements of the imaging device.
Specifically, concrete applications of the present invention can include, but are not limited to, the following forms:
1. Photographing devices (for example: compact cameras, mobile phones, etc.)
Fig. 11 shows one application form of the present invention in a photographing device, in which one ranging sensor is placed on each of the left and right sides of the photographing device.
Fig. 12 shows another application form of the present invention in a photographing device, in which one side of the photographing device carries a periscope-type master sensor and the other side a miniature sensor for ranging.
Fig. 13 shows another application form of the present invention in a photographing device, in which the photographing device comprises symmetrical periscope-type sensors.
2. Video devices (for example: video cameras, etc.)
Fig. 14 shows one application form of the present invention in a video device, in which one side of the video device carries the master sensor and the other side a miniature sensor for ranging.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, which can include: a ROM, a RAM, a magnetic disk or an optical disc, etc.
Although the present invention is disclosed as above, it is not limited thereto. Any person skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, and the protection scope of the present invention shall therefore be subject to the scope defined by the claims.

Claims (12)

1. A method for adjusting image perspective distortion based on the position and direction of a photographed object, characterised in that it comprises the following steps:
Step A: photographing the object with an imaging module while adjusting the focus from infinity to the nearest point, capturing several original images;
Step B: obtaining depth information of the object by performing computation on the imaging information of the original images;
Step C: performing perspective distortion processing on a first original image among the original images using the depth information.
2. The method for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 1, characterised in that, when the focal plane and the object do not lie entirely in one plane, step B further comprises: matching based on features of the imaging information, matching based on regions of the imaging information, or matching based on phase of the imaging information, and, after corresponding intermediate parameter information has been obtained, further calculating the depth information.
3. The method for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 2, characterised in that the matching based on features of the imaging information comprises:
Step B1: adopting a horizontal dual camera, the captured original images being $I_L$ and $I_R$ respectively;
Step B2 (feature extraction step): obtaining the detection value C of a single pixel in the template:

$$C(x,y)=\begin{cases}1, & |I(x,y)-I(x_0,y_0)|\le t\\ 0, & |I(x,y)-I(x_0,y_0)|> t,\end{cases}$$

the detection being carried out for every pixel in the template, where $I(x_0,y_0)$ is the gray value of the template centre, $I(x,y)$ is the gray value of another pixel in the template, $t$ is the threshold determining the degree of similarity, and $(x,y)$ are coordinates in a coordinate system whose origin is the lower-left corner of the source image $I$; summing the detection values C of the points belonging to the template A, giving the output run sum

$$S(x_0,y_0)=\sum_{(x,y)\in A}C(x,y);$$

the feature value R of the corresponding point $(x_0,y_0)$ of the source image I being

$$R(x_0,y_0)=\begin{cases}h-S(x_0,y_0), & S(x_0,y_0)<h\\ 0, & S(x_0,y_0)\ge h,\end{cases}$$

where h is a geometric threshold with $h=3S_{\max}/4$ and $S_{\max}$ is the maximal value the run sum S can take; processing the two original images $I_L$ and $I_R$ to obtain the feature maps $H_L$ and $H_R$ respectively;
Step B3 (disparity matrix computation step): creating a rectangular window Q of width m and height n centred on the point $(x_0,y_0)$ to be matched in $H_L$; in $H_R$, at a horizontal offset dx within the disparity range, taking another rectangular window Q' of the same size m × n adjacent to the point $(x_0,y_0)$ to be matched; comparing the rectangular window Q of the first feature map $H_L$ with the rectangular window Q' of the second feature map $H_R$;
the matching coefficient between the m × n window centred on the point $(x_0,y_0)$ to be matched in $H_L$ and the window of corresponding size at horizontal offset dx in $H_R$ being

$$\Gamma_{dx}(x_0,y_0)=\sum_{(i,j)\in Q}\left[H_R(x_0+i+dx,\,y_0+j)-H_L(x_0+i,\,y_0+j)\right]^2,$$

where $(i,j)$ are coordinates in the coordinate system of the rectangular window;
presetting a geometric threshold k, the match being successful if $\Gamma_{dx}(x_0,y_0)\le k$;
here dx being the value at which $\Gamma_{dx}(x_0,y_0)$ attains its minimum, and the offset value dx of a successfully matched point being recorded in the disparity matrix D,
$D(x_0,y_0)=dx$;
after the feature map $H_L$ has been traversed, interpolating the disparity matrix D to estimate coordinates for the successfully matched feature points and the unmatched extracted feature points;
using the offset information contained in the disparity matrix D to compute depth;
Step B4: for a point $P_{LT}(x_1,y_1,z_1)$ on $I_L$, calculating, by matching the point $P_{RT}(x_2,y_2,z_2)$ on $I_R$, the depth $z_0$ of the space point $P_W(x_0,y_0,z_0)$; for any point $(x,y)$ on the source image $I_L$, given the optical-axis baseline length b of the imaging module and the lens focal length f, the depth of its corresponding space point being

$$Z(x,y)=\frac{f\times b}{D(x,y)},$$

where D is the disparity matrix containing the offset information.
4. The method for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 1, characterised in that, when the focal plane and the object lie in one plane parallel to the lens of the imaging module, the depth information of the object is calculated from the formula $1/L' - 1/L = 1/F'$, where L is the object distance, L' is the image distance and F' is the lens focal length, and the depth information is the object distance.
5. The method for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 3, characterised in that, in step C, for the point $P_{LT}=(x_1,y_1,z_1)$, its space coordinate in the coordinate system whose origin is the sensor is $P_W=(x_0,y_0,z_0)$, with

$$x_0=\frac{x_1\times z_0}{f};\qquad y_0=\frac{y_1\times z_0}{f};$$

if the space is rotated by an angle $\theta$, the projected point is $P_W'=(x_0',y_0',z_0')$, with

$$\alpha=\arctan\left(\frac{y_0}{z_0}\right);\quad y_0'=\sqrt{y_0^2+z_0^2}\times\sin(\theta+\alpha);\quad z_0'=\sqrt{y_0^2+z_0^2}\times\cos(\theta+\alpha);\quad x_0'=x_0;$$

the theoretical image point $P_{LT}'=(x_1',y_1',z_1')$ of $P_W'=(x_0',y_0',z_0')$ on the left sensor is recalculated:

$$x_1'=\frac{x_0'\times f}{z_0'};\qquad y_1'=\frac{y_0'\times f}{z_0'};\qquad z_1'=f;$$

every point is calculated in turn to obtain an image E;
the image E is interpolated and cropped to obtain the processed first original image G.
6. The method for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 5, characterised in that the rotation angle $\theta$ is obtained in any one of the following ways: an acceleration sensor, user specification, or image modelling.
7. A device for adjusting image perspective distortion based on the position and direction of a photographed object, characterised in that it comprises:
a shooting module, which photographs the object from multiple angles and captures several original images;
a depth calculation module, which obtains depth information of the object by performing computation on the imaging information of the original images; and an image processing module, which performs perspective distortion processing on a first original image among the original images using the depth information.
8. The device for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 7, characterised in that, when the focal plane and the object do not lie entirely in one plane, the depth calculation module can, through matching based on features of the imaging information, matching based on regions of the imaging information, or matching based on phase of the imaging information, further calculate the depth information after corresponding intermediate parameter information has been obtained.
9. The device for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 8, characterised in that the feature-based matching of the imaging information by the depth calculation module comprises:
adopting a horizontal dual camera, the captured original images being $I_L$ and $I_R$ respectively;
obtaining the detection value C of a single pixel in the template:

$$C(x,y)=\begin{cases}1, & |I(x,y)-I(x_0,y_0)|\le t\\ 0, & |I(x,y)-I(x_0,y_0)|> t,\end{cases}$$

the detection being carried out for every pixel in the template, where $I(x_0,y_0)$ is the gray value of the template centre, $I(x,y)$ is the gray value of another pixel in the template, $t$ is the threshold determining the degree of similarity, and $(x,y)$ are coordinates in a coordinate system whose origin is the lower-left corner of the source image $I$; summing the detection values C of the points belonging to the template A, giving the output run sum

$$S(x_0,y_0)=\sum_{(x,y)\in A}C(x,y);$$

the feature value R of the corresponding point $(x_0,y_0)$ of the source image I being

$$R(x_0,y_0)=\begin{cases}h-S(x_0,y_0), & S(x_0,y_0)<h\\ 0, & S(x_0,y_0)\ge h,\end{cases}$$

where h is a geometric threshold with $h=3S_{\max}/4$ and $S_{\max}$ is the maximal value the run sum S can take;
processing the two original images $I_L$ and $I_R$ to obtain the feature maps $H_L$ and $H_R$ respectively;
a disparity matrix computation step: creating a rectangular window Q of width m and height n centred on the point $(x_0,y_0)$ to be matched in $H_L$; in $H_R$, at a horizontal offset dx within the disparity range, taking another rectangular window Q' of the same size m × n adjacent to the point $(x_0,y_0)$ to be matched; comparing the rectangular window Q of the first feature map $H_L$ with the rectangular window Q' of the second feature map $H_R$;
the matching coefficient between the m × n window centred on the point $(x_0,y_0)$ to be matched in $H_L$ and the window of corresponding size at horizontal offset dx in $H_R$ being

$$\Gamma_{dx}(x_0,y_0)=\sum_{(i,j)\in Q}\left[H_R(x_0+i+dx,\,y_0+j)-H_L(x_0+i,\,y_0+j)\right]^2,$$

where $(i,j)$ are coordinates in the coordinate system of the rectangular window;
presetting a geometric threshold k, the match being successful if $\Gamma_{dx}(x_0,y_0)\le k$;
recording the offset value dx of a successfully matched point in the disparity matrix D,
$D(x_0,y_0)=dx$;
after the feature map $H_L$ has been traversed, interpolating the disparity matrix D to estimate coordinates for the successfully matched feature points and the unmatched extracted feature points;
using the offset information contained in the disparity matrix D to compute depth;
for a point $P_{LT}(x_1,y_1,z_1)$ on $I_L$, calculating, by matching the point $P_{RT}(x_2,y_2,z_2)$ on $I_R$, the depth $z_0$ of the space point $P_W(x_0,y_0,z_0)$; for any point $(x,y)$ on the source image $I_L$, given the optical-axis baseline length b of the imaging module and the lens focal length f, the depth of its corresponding space point being

$$Z(x,y)=\frac{f\times b}{D(x,y)},$$

where D is the disparity matrix containing the offset information.
10. The device for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 7, characterised in that, when the focal plane and the object lie in one plane parallel to the lens of the imaging module, the depth information of the object is calculated from the formula $1/L' - 1/L = 1/F'$, where L is the object distance, L' is the image distance and F' is the lens focal length, and the depth information is the object distance.
11. The device for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 9, characterised in that, in step C, for the point $P_{LT}=(x_1,y_1,z_1)$, its space coordinate in the coordinate system whose origin is the sensor is $P_W=(x_0,y_0,z_0)$, with

$$x_0=\frac{x_1\times z_0}{f};\qquad y_0=\frac{y_1\times z_0}{f};$$

if the space is rotated by an angle $\theta$, the projected point is $P_W'=(x_0',y_0',z_0')$, with

$$\alpha=\arctan\left(\frac{y_0}{z_0}\right);\quad y_0'=\sqrt{y_0^2+z_0^2}\times\sin(\theta+\alpha);\quad z_0'=\sqrt{y_0^2+z_0^2}\times\cos(\theta+\alpha);\quad x_0'=x_0;$$

the theoretical image point $P_{LT}'=(x_1',y_1',z_1')$ of $P_W'=(x_0',y_0',z_0')$ on the left sensor is recalculated:

$$x_1'=\frac{x_0'\times f}{z_0'};\qquad y_1'=\frac{y_0'\times f}{z_0'};\qquad z_1'=f;$$

every point is calculated in turn to obtain an image E;
the image E is interpolated and cropped to obtain the processed first original image G.
12. The device for adjusting image perspective distortion based on the position and direction of a photographed object according to claim 11, characterised in that the rotation angle $\theta$ is obtained in any one of the following ways: an acceleration sensor, user specification, or image modelling.
CN201410096007.8A 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object Pending CN103824303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410096007.8A CN103824303A (en) 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410096007.8A CN103824303A (en) 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object

Publications (1)

Publication Number Publication Date
CN103824303A true CN103824303A (en) 2014-05-28

Family

ID=50759344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410096007.8A Pending CN103824303A (en) 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object

Country Status (1)

Country Link
CN (1) CN103824303A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867113A (en) * 2015-03-31 2015-08-26 酷派软件技术(深圳)有限公司 Method and system for perspective distortion correction of image
CN105227948A (en) * 2015-09-18 2016-01-06 广东欧珀移动通信有限公司 A kind of method and device searching distorted region in image
CN105335958A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Processing method and device for light supplement of flash lamp
CN105335959A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Quick focusing method and device for imaging apparatus
WO2016150291A1 (en) * 2015-03-24 2016-09-29 Beijing Zhigu Rui Tuo Tech Co., Ltd. Imaging control methods and apparatuses, and imaging devices
CN107222737A (en) * 2017-07-26 2017-09-29 维沃移动通信有限公司 The processing method and mobile terminal of a kind of depth image data
CN111614890A (en) * 2019-02-26 2020-09-01 佳能株式会社 Image pickup apparatus, control method of image pickup apparatus, and storage medium
CN113826376A (en) * 2019-05-24 2021-12-21 Oppo广东移动通信有限公司 User equipment and strabismus correction method
CN114827561A (en) * 2022-03-07 2022-07-29 成都极米科技股份有限公司 Projection control method, projection control device, computer equipment and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7161614B1 (en) * 1999-11-26 2007-01-09 Sanyo Electric Co., Ltd. Device and method for converting two-dimensional video to three-dimensional video
CN101813467A (en) * 2010-04-23 2010-08-25 哈尔滨工程大学 Blade running elevation measurement device and method based on binocular stereovision technology
CN102867304A (en) * 2012-09-04 2013-01-09 南京航空航天大学 Method for establishing relation between scene stereoscopic depth and vision difference in binocular stereoscopic vision system
CN103353663A (en) * 2013-06-28 2013-10-16 北京智谷睿拓技术服务有限公司 Imaging adjustment apparatus and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7161614B1 (en) * 1999-11-26 2007-01-09 Sanyo Electric Co., Ltd. Device and method for converting two-dimensional video to three-dimensional video
CN101813467A (en) * 2010-04-23 2010-08-25 哈尔滨工程大学 Blade running elevation measurement device and method based on binocular stereovision technology
CN102867304A (en) * 2012-09-04 2013-01-09 南京航空航天大学 Method for establishing relation between scene stereoscopic depth and vision difference in binocular stereoscopic vision system
CN103353663A (en) * 2013-06-28 2013-10-16 北京智谷睿拓技术服务有限公司 Imaging adjustment apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
詹曙 (ZHAN Shu) et al.: "Medical image graph cuts algorithm fusing SUSAN features", Journal of Electronic Measurement and Instrumentation (《电子测量与仪器学报》), vol. 27, no. 6, 30 June 2013 (2013-06-30), page 5 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335958B (en) * 2014-08-15 2018-12-28 格科微电子(上海)有限公司 The processing method and equipment of flash lighting
CN105335959B (en) * 2014-08-15 2019-03-22 格科微电子(上海)有限公司 Imaging device quick focusing method and its equipment
CN105335958A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Processing method and device for light supplement of flash lamp
CN105335959A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Quick focusing method and device for imaging apparatus
WO2016150291A1 (en) * 2015-03-24 2016-09-29 Beijing Zhigu Rui Tuo Tech Co., Ltd. Imaging control methods and apparatuses, and imaging devices
US10298835B2 (en) 2015-03-24 2019-05-21 Beijing Zhigu Rui Tuo Tech Co., Ltd. Image control methods and apparatuses, and imaging devices with control of deformation of image sensor
CN104867113B (en) * 2015-03-31 2017-11-17 酷派软件技术(深圳)有限公司 The method and system of perspective image distortion correction
CN104867113A (en) * 2015-03-31 2015-08-26 酷派软件技术(深圳)有限公司 Method and system for perspective distortion correction of image
CN105227948A (en) * 2015-09-18 2016-01-06 广东欧珀移动通信有限公司 A kind of method and device searching distorted region in image
CN107222737A (en) * 2017-07-26 2017-09-29 维沃移动通信有限公司 The processing method and mobile terminal of a kind of depth image data
CN111614890A (en) * 2019-02-26 2020-09-01 佳能株式会社 Image pickup apparatus, control method of image pickup apparatus, and storage medium
US11184519B2 (en) 2019-02-26 2021-11-23 Canon Kabushiki Kaisha Image pickup apparatus, control method of image pickup apparatus, program, and storage medium
CN111614890B (en) * 2019-02-26 2022-07-12 佳能株式会社 Image pickup apparatus, control method of image pickup apparatus, and storage medium
CN113826376A (en) * 2019-05-24 2021-12-21 Oppo广东移动通信有限公司 User equipment and strabismus correction method
CN113826376B (en) * 2019-05-24 2023-08-15 Oppo广东移动通信有限公司 User equipment and strabismus correction method
CN114827561A (en) * 2022-03-07 2022-07-29 成都极米科技股份有限公司 Projection control method, projection control device, computer equipment and computer-readable storage medium
CN114827561B (en) * 2022-03-07 2023-03-28 成都极米科技股份有限公司 Projection control method, projection control device, computer equipment and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN103824303A (en) Image perspective distortion adjusting method and device based on position and direction of photographed object
CN110285793B (en) Intelligent vehicle track measuring method based on binocular stereo vision system
JP3539788B2 (en) Image matching method
CN103761737B Robot motion's method of estimation based on dense optical flow
US20170214846A1 (en) Auto-Focus Method and Apparatus and Electronic Device
CN107084680B (en) A kind of target depth measurement method based on machine monocular vision
CN106529495A (en) Obstacle detection method of aircraft and device
CN108335337B (en) method and device for generating orthoimage picture
CN110334701B (en) Data acquisition method based on deep learning and multi-vision in digital twin environment
CN106570899B (en) Target object detection method and device
CN103053154A (en) Autofocus for stereo images
CN108470356B (en) Target object rapid ranging method based on binocular vision
WO2011005783A2 (en) Image-based surface tracking
JP5672112B2 (en) Stereo image calibration method, stereo image calibration apparatus, and computer program for stereo image calibration
CN103971378A (en) Three-dimensional reconstruction method of panoramic image in mixed vision system
TWI587241B (en) Method, device and system for generating two - dimensional floor plan
CN106485753A (en) Method and apparatus for the camera calibration of pilotless automobile
CN109214254B (en) Method and device for determining displacement of robot
CN109059868A (en) A kind of binocular distance measuring method based on Adaptive matching window
CN105335959B (en) Imaging device quick focusing method and its equipment
CN107680035B (en) Parameter calibration method and device, server and readable storage medium
Kruger et al. In-factory calibration of multiocular camera systems
WO2020173194A1 (en) Image feature point tracking method and apparatus, image feature point matching method and apparatus, and coordinate obtaining method and apparatus
CN104504691A (en) Camera position and posture measuring method on basis of low-rank textures
CN110012236A (en) A kind of information processing method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140528