CN110232715A - Self-calibration method, apparatus and system for multiple depth cameras - Google Patents
Self-calibration method, apparatus and system for multiple depth cameras
- Publication number
- CN110232715A (application number CN201910379483.3A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- camera
- depth image
- visual field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to the fields of optics and electronics and provides a self-calibration method, apparatus and system for multiple depth cameras. The method comprises: receiving multiple depth images acquired by multiple depth cameras, wherein the multiple depth images have a common field of view; determining whether each depth image is deformed; and, according to the result of determining whether each depth image is deformed, updating the camera pose parameters of the depth camera whose depth image is deformed. Embodiments of the present invention enable multiple depth cameras to be calibrated quickly and accurately during use.
Description
Technical field
The present invention relates to the fields of optics and electronics, and in particular to a self-calibration method, apparatus and system for multiple depth cameras.
Background technique
A depth camera can acquire a depth image of a target, on which functions such as 3D modeling, human-computer interaction, obstacle-avoidance navigation and face recognition can be further built. Depth cameras have therefore been widely used in robotics, consumer electronics, AR/VR and other fields. However, during use a depth camera inevitably accumulates measurement error due to factors such as temperature change and/or structural deformation; these factors degrade the measurement accuracy of the depth camera and make its results unreliable.
To address this problem, the prior art offers some technical solutions, such as correcting the depth camera with a series of error parameters computed in advance from depth images, or correcting it against a pre-stored reference image. These methods are comparatively cumbersome and cannot achieve fast, accurate calibration of a multi-camera system during use.
Summary of the invention
In view of this, embodiments of the present invention provide a self-calibration method, apparatus and system for multiple depth cameras, so as to improve the efficiency and precision of multi-depth-camera self-calibration.
A first aspect of the present invention provides a self-calibration method for multiple depth cameras, comprising:
receiving multiple depth images acquired by multiple depth cameras, wherein the multiple depth images have a common field of view;
determining whether each depth image is deformed;
according to the result of determining whether each depth image is deformed, updating the camera pose parameters of the depth camera whose depth image is deformed.
A second aspect of the present invention provides a self-calibration apparatus for multiple depth cameras, comprising a memory and a processor, the memory storing a computer program executable on the processor; when the processor executes the computer program, the steps of the method of the first aspect are carried out.
A third aspect of the present invention provides a self-calibration system for multiple depth cameras, comprising multiple depth cameras for acquiring multiple depth images, and the apparatus of the second aspect.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the method of the first aspect.
In embodiments of the present invention, multiple depth images acquired by multiple depth cameras and having a common field of view are received; whether each depth image is deformed is then determined; and, according to the result of that determination, the camera pose parameters of the depth camera whose depth image is deformed are updated, so that the multiple depth cameras are calibrated quickly and accurately during use.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the principle of multi-depth-camera self-calibration provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a self-calibration method for multiple depth cameras provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of multi-depth-camera deformation correction provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of another multi-depth-camera deformation correction provided by an embodiment of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention may also be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the invention.
In the description of the embodiments of the present invention, an infrared structured-light depth camera is taken as an example to explain the multi-camera self-calibration method; it should be understood that the present invention is equally applicable to the self-calibration of any other depth camera. In addition, "multiple" means two or more, unless specifically defined otherwise.
In a depth image acquired by a depth camera, the value of each pixel represents the distance from the corresponding spatial point to the depth camera. The quality of a depth image comprises precision and accuracy. Precision refers to the difference between multiple depth images acquired while the relative position of the depth camera and the target is fixed: the smaller the difference, the higher the precision, i.e. the higher the measurement consistency and stability of the depth camera. Accuracy refers to the gap between the measured value and the true value: the smaller the gap, the higher the accuracy. The measured value is the value represented on the depth image; the true value is the actual distance between the target and the depth camera.
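As a minimal numeric illustration of the precision/accuracy distinction above (the measurement values are invented for the sketch, not data from the patent), precision can be estimated as the spread of repeated measurements while the camera and target are fixed, and accuracy as the gap between their mean and the true distance:

```python
import numpy as np

# Invented repeated measurements (in mm) of a target whose true distance is 1000 mm.
true_depth = 1000.0
measurements = np.array([1003.1, 1002.8, 1003.3, 1002.9, 1003.0])

# Precision: consistency of repeated measurements (smaller spread = higher precision).
precision_spread = measurements.std()
# Accuracy: closeness of the measurements to the truth (smaller gap = higher accuracy).
accuracy_gap = abs(measurements.mean() - true_depth)

print(f"spread {precision_spread:.2f} mm, gap {accuracy_gap:.2f} mm")
```

This toy camera reads consistently (spread of roughly 0.2 mm) but about 3 mm too far: it is precise yet not accurate, which is exactly the distinction drawn above.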
In some applications it is desirable to use multiple depth cameras simultaneously to obtain a depth image with a larger field of view: each depth camera acquires its own depth image, and the multiple depth images are then stitched and fused. When a depth camera is deformed by various factors, the precision of the depth images it acquires declines and errors appear in the computed depth values, so that a true wide-field three-dimensional image can no longer be obtained.
Taking a structured-light depth camera as an example: when there is no deformation, the optical axes of the projection module and the acquisition module are parallel to each other, and their relative position can be determined by extrinsic calibration parameters; when the reference speckle pattern is matched against the actual speckle pattern, the matching computation only needs to search along the direction of the baseline between the two modules. When the relative position of the projection module and the acquisition module is deformed (including rotational deformation and translational deformation), the extrinsic calibration parameters change; during matching, the deformation between the two may mean that no point of high similarity can be found along the baseline, or that a large error remains even when one is found. Embodiments of the present invention therefore describe technical solutions to the inaccurate depth maps caused when, during use, factors such as temperature or a fall deform the relationship between the projection module and the acquisition module (including displacement and/or deflection).
Fig. 1 is a schematic diagram of the multi-depth-camera self-calibration principle of an embodiment of the present invention. As shown in Fig. 1, when at least two depth cameras 101 and 102 acquire images of a scene target simultaneously, partial depth images 103 and 104 of the target scene are obtained respectively. In general, because of the relative position in which the multiple depth cameras are arranged, depth cameras 101 and 102 acquire scene images with a shared field of view, i.e. depth images 103 and 104 have a common field of view 105. If depth camera 101 or 102 is deformed, depth images 103 and 104 will deviate from the real target scene; moreover, even though a common field of view remains, the depth data of the common field of view 105 will differ between the two depth images.
When depth cameras 101 and 102 are structured-light depth cameras, their projection modules 106 and 107 respectively project infrared structured light onto the target scene, and acquisition modules 108 and 109 collect the corresponding speckle images 103 and 104, which share the common field of view 105. Processing unit 110 receives the speckle images from the acquisition modules and analyzes and processes them to correct the deformation error. Processing unit 110 may be connected separately to depth cameras 101 and 102 to form an independent processing system, or may be integrated into a depth camera. It should be pointed out that, in other embodiments of the present invention, processing unit 110 may be part of a terminal device, or a terminal device with computing capability.
When at least one depth camera is deformed and at least one is not, the idea of this embodiment is as follows: using the feature pixels of the common field of view 105, take the undeformed depth image as a reference image and use it to correct the camera pose parameters of the deformed camera, so that the deformed camera computes correct depth for subsequent frame images with the corrected pose parameters. The pose parameters are generally the position matrices of the camera, such as the rotation and translation matrices. When measuring the depth of a target, world coordinates must be converted into camera coordinates before the depth value of the target can be computed; this conversion is realized by the camera's intrinsic and extrinsic parameters, where the intrinsic parameters generally describe the camera's internal optical elements and the extrinsic parameters are the rotation matrix R and translation matrix T. Self-calibration of multiple cameras is therefore the correction of the rotation and translation matrices.
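The world-to-camera conversion described above can be sketched as follows; the values of R, T and the world point are toy placeholders, not calibration data from the patent:

```python
import numpy as np

# A world point is mapped into camera coordinates by the extrinsics,
# p_cam = R @ p_world + T; the depth value is the camera-frame z component.
def world_to_camera(p_world, R, T):
    return R @ p_world + T

R = np.eye(3)                       # toy camera: no rotation
T = np.array([0.0, 0.0, 2.0])       # camera frame shifted 2 m along z
p_world = np.array([0.5, -0.2, 3.0])

p_cam = world_to_camera(p_world, R, T)
depth = p_cam[2]
print(depth)                        # 5.0
```

Self-calibration then amounts to replacing R and T in this same mapping with the corrected R' and T', so that subsequent depth computations use the corrected extrinsics.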
Fig. 2 is a flowchart of a self-calibration method for multiple depth cameras provided by an embodiment of the present invention; the method is executed by the processing unit 110 shown in Fig. 1. As shown in Fig. 2, the method for correcting the deformation error comprises:
Step 201: receive multiple depth images acquired by multiple depth cameras.
Here each depth camera acquires one depth image, and the received depth images are those of the respective depth cameras; the depth images acquired by the depth cameras have a common field of view.
As an embodiment of the present invention, referring to Fig. 1, depth images 103 and 104 can be collected with two depth cameras 101 and 102. The depth cameras may be adjacent or non-adjacent, but the collected depth maps should have a common field of view.
In other embodiments of the present invention, the number of depth cameras may be greater than two. Each depth camera acquires at least one depth image and sends it to the processing unit, and the processing unit receives at least one depth image from each depth camera; the depth images acquired by the different depth cameras have a common field of view.
Step 202: determine whether the current multiple depth images are deformed.
After the depth images of the multiple depth cameras are received, it is determined whether the current multiple depth images are deformed; that is, the depth image acquired by each depth camera is examined in turn. When a depth image is determined to be deformed, the corresponding depth camera, i.e. the camera that acquired that depth image, is deformed. It should be noted that when no depth image is determined to be deformed, none of the depth cameras is deformed, and the correction stops.
Specifically, feature extraction is performed on the reference image and on each depth image, and whether each depth image is deformed is determined from the extracted feature points.
Illustratively, for each depth image, if the extracted feature points are unevenly distributed, or the number of holes exceeds a first preset threshold, or the average similarity value obtained by matching is below a second preset threshold, or the similarity change value is below a third preset threshold, it is determined that the depth image is deformed.
As an embodiment of the present invention, still referring to Fig. 1, after depth images 103 and 104 are received, whether the currently acquired depth maps are deformed can be judged by analyzing the feature-point differences between each depth map and the reference speckle pattern.
Feature extraction is performed on the reference speckle pattern and on the spots of the actually acquired speckle pattern, and from the extracted feature points it is judged whether the projection module and acquisition module of the current depth camera are deformed. When one or more of the following features appear, the current depth map, and hence the depth camera, can be deemed deformed:
A: the spatial density distribution of the spots in the actually acquired speckle pattern is uneven. If the depth camera is not deformed, the acquired speckle pattern should be evenly distributed; when deformation occurs, the speckle distribution becomes uneven, so deformation can be detected by measuring the spatial density distribution of the speckle pattern.
B: matching is computed along the baseline direction; if the number of holes or noise points (a hole corresponds to a point for which no point meeting the similarity condition can be found; noise is generally interference such as ambient light) exceeds a certain threshold, deformation is deemed to have occurred.
C: matching is computed along the baseline direction, and the average similarity value or the similarity change value of the matched feature points is calculated; if the average similarity value is below a set threshold, or the similarity change value is below a set threshold, deformation is deemed to have occurred.
D: matching is computed within a band centered on the baseline; when points off the baseline are found with higher similarity, deformation is deemed to have occurred.
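Criteria A and B above can be sketched as follows. This is a hedged toy implementation: the patent names the criteria but not their thresholds, so the grid size, density limit and similarity cutoffs below are invented. Spot density is binned into a grid and its spread measured (criterion A), and low-similarity matches are counted as holes (criterion B):

```python
import numpy as np

def is_deformed(points, similarities, grid=4, density_cv_max=0.5,
                hole_thresh=0.6, max_holes=10):
    # Criterion A: bin speckle points into a grid and measure density spread
    # via the coefficient of variation of the per-cell counts.
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=grid,
                                  range=[[0, 1], [0, 1]])
    density_cv = counts.std() / counts.mean()
    # Criterion B: a "hole" is a point whose best match similarity is too low.
    holes = int((similarities < hole_thresh).sum())
    return bool(density_cv > density_cv_max or holes > max_holes)

rng = np.random.default_rng(0)
uniform_pts = rng.uniform(0, 1, size=(400, 2))   # well-spread speckle pattern
good_sims = rng.uniform(0.8, 1.0, size=400)      # strong matches throughout
print(is_deformed(uniform_pts, good_sims))       # expect False
```

Criteria C and D would be tested analogously on the matched similarity values; they are omitted here to keep the sketch short.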
Step 203: according to the result of determining whether each depth image is deformed, update the camera pose parameters of the depth camera whose depth image is deformed.
The depth images that are and are not deformed having been determined in step 202, the camera pose parameters of the deformed depth cameras are updated on the basis of that determination.
As an embodiment of the present invention, step 203 comprises: if it is determined that at least one of the multiple depth images is not deformed and at least one is deformed, the undeformed depth image is set as the reference image, and deformation-error correction is performed on each deformed depth image on the basis of the reference image, so as to update the camera pose parameters of the corresponding depth cameras.
Illustratively, referring to Fig. 1, if in the first acquired frame depth image 103 is deformed and depth image 104 is not, then depth image 104 is set as the reference image and the deformation error of depth image 103 is computed against it, so that the camera pose parameters R and T of depth camera 101, which corresponds to depth image 103, are updated in real time.
Since depth image 104 serves as the reference image, it must be guaranteed that at least part of the region acquired by the deformed camera coincides with the reference image; only then can error correction be performed by comparing the difference between the two, i.e. by computing new extrinsic calibration parameters reflecting the translation and rotation orientation. A specific implementation can compute them by minimizing a cost function, where the cost function reflects the difference between known coordinates and computed coordinates.
The above error computation must therefore be based on the depth information of the pixel point cloud in the common field of view 105: the common field of view of the multiple depth images must first be found, either from the relative position relationship of the multiple depth cameras or by using a three-dimensional point-cloud registration algorithm (ICP). The relative position relationship may be the arranged position relationship of the multiple depth cameras (including adjacent or non-adjacent positions) or the baseline position relationship, etc.; these position relationships guarantee that the collected depth images have an overlapping part, i.e. a common field of view. Alternatively, the ICP algorithm can find the spatial transform between two point-cloud data sets, i.e. the rotation and translation vectors, and then transform the two point clouds into the same coordinate system so that their intersection regions overlap, thereby determining the common field of view of the multiple depth images.
After the common field of view is determined, the undeformed depth image belonging to the common field of view is taken as the reference image, from which the camera pose parameters are further computed.
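The ICP idea described above can be illustrated with a hedged, minimal point-to-point ICP step (brute-force nearest neighbours followed by a Kabsch/Procrustes fit of R and T) on a toy grid cloud; this is a sketch of the generic algorithm, not the registration implementation the patent assumes:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match nearest neighbours, then fit a rigid R, T."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]          # nearest dst point for each src point
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)    # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mu_m - R @ mu_s
    return R, T

g = np.arange(4) * 0.25
cloud_b = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # 64-point grid cloud
true_T = np.array([0.004, -0.002, 0.003])
cloud_a = cloud_b - true_T       # the same scene seen from a slightly shifted camera

R, T = icp_step(cloud_a, cloud_b)            # one step recovers the small shift
aligned = cloud_a @ R.T + T
overlap = np.linalg.norm(aligned - cloud_b, axis=1) < 1e-6
print(overlap.all())                         # True: the clouds coincide after alignment
```

Once the clouds are in one coordinate system, the overlapping points play the role of the common field of view from which the reference image is taken.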
As another embodiment of the present invention, step 203 comprises: if it is determined that all the multiple depth images are deformed, the common field of view of the multiple depth images is still first found from the relative position relationship of the multiple depth cameras or by using a three-dimensional point-cloud registration algorithm; then, according to the depth-image information of the common field of view, the cost function value is minimized so as to update the camera pose parameters of each depth camera.
This case is slightly more complex than the case in which at least one depth map is undeformed. Minimizing the cost function over the depth information measured by the deformed cameras may yield multiple sets of camera pose parameters, from which the parameters to use can be selected optimally according to the actual situation.
Optionally, in other embodiments of the present invention, step 203 is followed by step 204: computing the depth information of subsequent frame images according to the updated camera pose parameters.
Illustratively, after depth-camera calibration is performed on the basis of the first frame depth image of each depth camera, the next, second frame image is a new depth image obtained after the error correction, and its depth information is computed according to the updated camera pose parameters.
The specific scheme for correcting the deformation error is described below with emphasis through embodiments.
Fig. 3 is a schematic diagram of multi-depth-camera deformation correction provided according to an embodiment of the present invention. This case corresponds to at least one of the depth images being undeformed. As shown in Fig. 3, let the reference image acquired by the undeformed depth camera be 310; the depth image 311 acquired by the deformed depth camera then deviates from reference image 310. The depth value measured by the undeformed camera is Z0; the depth value measured by the deformed camera is Z1.
The correction follows the idea that the depth images obtained by different depth cameras in the common field of view should agree as closely as possible, i.e. the idea of minimizing their difference. In this embodiment, the deformation-error correction of the depth camera computes the camera pose parameters specifically by minimizing the value of a cost function, thereby realizing the self-calibration of the multiple cameras.
Specifically, according to the depth values Z0 of the reference image, the depth values Z1 of the deformed depth image in the common field of view are computed such that the value of the cost function J = Σ_{i=1}^{m} k (Z0_i − Z1_i)² is minimized, yielding the camera pose parameters of the corresponding depth camera. Here k is the deformation coefficient, i is the index of a pixel node in the common field of view, and Z1 = f(R, T), where R and T are the camera pose parameters of the depth camera corresponding to the deformed depth image in the common field of view.
In the cost function J = Σ_{i=1}^{m} k (Z0_i − Z1_i)², the deformation coefficient k is a constant, generally taking any value between 0.3 and 0.6 and preferably 0.5; i is the index of a pixel node in the common field of view, and m normally does not exceed the total number of nodes in the common field of view. From Z0, a suitable Z1 is computed such that the value of the cost function J is minimized, yielding updated camera pose parameters R', T'; in the next frame image, the deformed depth camera then performs correct depth computation according to R', T'. Methods for minimizing the value of the cost function J include gradient descent, Newton iteration, the normal-equation method (normal equations), etc.; the specific solution is not repeated here.
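As a hedged sketch of the gradient-descent option, the pose model f(R, T) is reduced below to a single axial offset t, so that Z1_i = z_i + t. This simplification (and all the numbers) is an assumption made to keep the example self-contained; it is not the patent's full rotation-and-translation model:

```python
import numpy as np

# Minimize J(t) = sum_i k * (Z0_i - (z_i + t))^2 by gradient descent,
# where Z0 are the reference depths and z the deformed camera's raw depths.
def calibrate_offset(Z0, z_meas, k=0.5, lr=0.1, steps=200):
    t = 0.0
    for _ in range(steps):
        grad = np.sum(2 * k * (z_meas + t - Z0))   # dJ/dt
        t -= lr * grad / len(Z0)                   # normalized step for stability
    return t

Z0 = np.array([1.00, 1.20, 0.95, 1.10])    # reference depths (undeformed camera)
z_meas = Z0 + 0.03                          # deformed camera reads 3 cm too far
t = calibrate_offset(Z0, z_meas)
print(round(t, 4))                          # -0.03, cancelling the bias
```

The recovered offset plays the role of the updated pose parameters R', T': applying it to subsequent frames (z + t) reproduces the reference depths.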
Fig. 4 is a schematic diagram of another multi-depth-camera deformation correction provided according to an embodiment of the present invention. This case corresponds to all the depth images being deformed. Let the depth images acquired by the deformed depth cameras be 321 and 322, both deviating from the true image 320. Within their common field of view, depth images 321 and 322 differ by a depth deviation ΔZ. In a specific embodiment, m pixels in the common field of view are chosen, where m is less than the total number of pixels in the common field of view; the actually measured depth values are substituted into the cost function and the cost function value is minimized, thereby carrying out the error correction.
Here the cost function is J = Σ_{i=1}^{m} k (ΔZ_i)², where k is the deformation coefficient, i is the index of a pixel node in the common field of view, and m is less than the total number of pixel nodes in the common field of view. The depth deviation value is ΔZ_i = Z1_i − Z2_i, where R1 and T1 are the camera pose parameters of the depth camera corresponding to one deformed depth image, Z1 is the depth value of that depth image, and Z1 = f(R1, T1); R2 and T2 are the camera pose parameters of the depth camera corresponding to the other deformed depth image, Z2 is the depth value of that depth image, and Z2 = f(R2, T2).
It should be noted that Z1 and Z2 are the current actual depth values measured by the deformed depth cameras; the depth deviation value ΔZ is computed from the current depth values Z1 and Z2, and the value of the cost function J is minimized so as to obtain the camera pose parameters. Methods for minimizing the value of the cost function J include gradient descent, Newton iteration, the normal-equation method (normal equations), etc.; the specific solution is not repeated here.
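The all-deformed case can be sketched in the same simplified one-offset-per-camera form. Since J = Σ k (Z1_i − Z2_i)² alone cannot distinguish a common shift of both cameras (the text above notes that multiple parameter sets may minimize J), a small pull toward the prior offsets of zero is added here as one way of selecting among them; that regularizer, like the numbers, is an assumption of this sketch, not a rule from the patent:

```python
import numpy as np

# Jointly adjust both cameras' axial offsets t1, t2 to minimize
# J = sum_i k * ((z1_i + t1) - (z2_i + t2))^2 + reg * (t1^2 + t2^2).
def calibrate_pair(z1, z2, k=0.5, reg=0.01, lr=0.2, steps=500):
    t1 = t2 = 0.0
    for _ in range(steps):
        diff = (z1 + t1) - (z2 + t2)
        g1 = np.sum(2 * k * diff) / len(z1) + 2 * reg * t1   # dJ/dt1
        g2 = -np.sum(2 * k * diff) / len(z1) + 2 * reg * t2  # dJ/dt2
        t1, t2 = t1 - lr * g1, t2 - lr * g2
    return t1, t2

true = np.array([1.0, 1.5, 2.0])
z1, z2 = true + 0.04, true - 0.04       # cameras biased in opposite directions
t1, t2 = calibrate_pair(z1, z2)
print(round(t1, 3), round(t2, 3))       # -0.04 0.04
```

The symmetric, opposite corrections drive the two cameras' depths in the common field of view together, which is exactly the difference-minimization principle stated above.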
The above method may be carried out by a processor, or by a terminal device comprising a processor. A multi-depth-camera deformation-error correction system comprises depth cameras and a processor, wherein each depth camera comprises a projection module and an acquisition module for obtaining depth images. Besides the processor, the system also comprises a memory; in addition to storing the reference image, such as a reference speckle image, the memory also stores the deformation coefficient k, etc. The memory may be a computer-readable storage medium storing a computer program; when the computer program is executed by the processor, the steps of the method described above may be implemented.
After receiving the current speckle image transmitted by an acquisition module, the processor matches it against the reference speckle image and judges whether the current multiple depth images are deformed. If some of the depth images are judged deformed while at least one is not, the undeformed depth image is set as the reference image and deformation-error correction is performed on that basis: the difference between the deformed depth map and the reference map is compared on the principle of difference minimization, the cost function value is minimized, and the camera pose parameters are thereby updated. Alternatively, if all the current depth maps are judged deformed, difference minimization is still performed on the multiple depth maps so that the cost function value is minimized and the camera pose parameters are updated. In addition, the real depth information of subsequent frame images is computed according to the updated camera pose parameters, thereby realizing the correction of the deformation error. The depth-computing processor may be integrated into a depth camera, or may reside in another computing device independent of the depth cameras.
The above depth-camera deformation-error correction system may also include a correction engine located in the processor, so that after receiving a depth map the processor can begin the correction directly, without exporting the depth image to a separate correction engine.
The processor may include one or a combination of, for example, a digital signal processor (DSP), a multimedia application processor (MAP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.; the memory may include one or a combination of, for example, random access memory (RAM), read-only memory (ROM), flash memory (Flash), etc. The control and data-processing instructions executed by the processing unit may be stored in the memory in the form of software or firmware and called by the processor when needed; instructions may also be cured directly into circuits to form special-purpose circuits (or special-purpose processors) that execute the corresponding instructions; they may also be realized by a combination of software and special-purpose circuits. The processing unit may also comprise an input/output interface and/or a network interface supporting network communication. In some embodiments of the invention, the processed data are transmitted through an interface to other units in the device or to other equipment or systems, such as a display unit or an external terminal device. In some other embodiments of the present invention, a display unit may be combined with one or more processors in the processing device.
In order to construct a specific error-processing mathematical model, the methods of the present invention described above adopt simplifications; the corresponding errors in practical applications are more complex. When the methods described herein are applied to specific complex scenarios, they may be applied directly, or reasonably adapted on the basis of the inventive concept and then reapplied, and they can improve the precision of depth cameras to a certain extent. Reasonable adaptations made for specific application scenarios on the basis of the inventive concept should be considered within the scope of protection of the present invention.
The foregoing is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific practice of the invention is not to be considered limited to these descriptions. For those skilled in the art to which the present invention belongs, several equivalent substitutions or obvious modifications with identical performance or use may be made without departing from the inventive concept, and all should be considered within the scope of protection of the present invention.
Claims (12)
1. A self-calibration method for multiple depth cameras, characterized by comprising:
receiving multiple depth images acquired by multiple depth cameras, wherein the multiple depth images have a common field of view;
determining whether each depth image is deformed;
according to the result of determining whether each depth image is deformed, updating the camera pose parameters of the depth camera whose depth image is deformed.
2. The method of claim 1, characterized in that determining whether each depth image is deformed comprises:
performing feature extraction on a reference image and on each depth image, and determining from the extracted feature points whether each depth image is deformed.
3. The method according to claim 2, characterized in that judging, according to the extracted feature points, whether each depth image is deformed comprises:
for each depth image, determining that the depth image is deformed if the extracted feature points are unevenly distributed, or the number of holes exceeds a first preset threshold, or the average similarity value obtained by matching computation is lower than a second preset threshold, or the similarity change value is lower than a third preset threshold.
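The four tests named in claim 3 can be combined into a single predicate. The sketch below is a hedged illustration only: the threshold values and the quadrant-based unevenness measure are hypothetical choices, not taken from the patent.

```python
import numpy as np

def is_deformed(feature_pts, depth_img, match_sims, prev_mean_sim=None,
                spread_thresh=0.5, hole_thresh=500,
                sim_thresh=0.7, sim_change_thresh=0.1):
    """Hedged sketch of the four deformation tests of claim 3.
    All threshold values here are hypothetical, not from the patent."""
    h, w = depth_img.shape
    xs, ys = feature_pts[:, 0], feature_pts[:, 1]
    # Test 1: uneven feature-point distribution (per-quadrant counts).
    quads = np.array([
        np.sum((xs < w / 2) & (ys < h / 2)),
        np.sum((xs >= w / 2) & (ys < h / 2)),
        np.sum((xs < w / 2) & (ys >= h / 2)),
        np.sum((xs >= w / 2) & (ys >= h / 2)),
    ])
    uneven = quads.std() > spread_thresh * max(quads.mean(), 1.0)
    # Test 2: hole count (zero-depth pixels) exceeds the first threshold.
    too_many_holes = np.count_nonzero(depth_img == 0) > hole_thresh
    # Test 3: average match similarity below the second threshold.
    mean_sim = float(np.mean(match_sims))
    low_sim = mean_sim < sim_thresh
    # Test 4: similarity dropped by more than the third threshold.
    sim_dropped = (prev_mean_sim is not None
                   and prev_mean_sim - mean_sim > sim_change_thresh)
    return bool(uneven or too_many_holes or low_sim or sim_dropped)
```

In practice the feature points and match similarities would come from a detector/matcher (e.g. ORB with descriptor matching); here they are plain arrays so the heuristics stay self-contained.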
4. The method according to claim 1, characterized in that updating, according to the judgment result of whether each depth image is deformed, the camera pose parameters of the depth camera whose depth image is deformed comprises:
if it is determined that at least one of the plurality of depth images is not deformed and at least one depth image is deformed, setting a depth image that is not deformed as a benchmark image, and performing deformation error correction on each deformed depth image based on the benchmark image, so as to update the camera pose parameters of the corresponding depth camera.
5. The method according to claim 1, characterized in that updating, according to the judgment result of whether each depth image is deformed, the camera pose parameters of the depth camera whose depth image is deformed comprises:
if it is determined that at least one of the plurality of depth images is not deformed and at least one depth image is deformed, finding the common field of view of the plurality of depth images according to the relative positional relationship of the plurality of depth cameras or by using a three-dimensional point cloud registration algorithm; and
taking a depth image that is not deformed and belongs to the common field of view as the benchmark image, and performing deformation error correction on the deformed depth images in the common field of view based on the benchmark image, so as to update the camera pose parameters of the corresponding depth cameras.
6. The method according to claim 5, characterized in that finding the common field of view of the plurality of depth images according to the relative positional relationship of the plurality of depth cameras or by using a three-dimensional point cloud registration algorithm comprises:
determining the common field of view of the plurality of depth images according to the adjacent or non-adjacent positional relationship and the baseline positional relationship of the plurality of depth cameras.
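For the position-based route in claim 6, the overlap of two cameras with a known baseline can be estimated from simple geometry. A minimal sketch for two parallel, identical cameras (an assumption; the claim also covers non-adjacent and rotated arrangements, which additionally need the rotational part of the extrinsics):

```python
import math

def common_fov_width(baseline_m, hfov_deg, depth_m):
    """Width (in metres) of the overlapping field of view of two parallel
    depth cameras separated by `baseline_m`, each with horizontal FOV
    `hfov_deg`, at a plane `depth_m` in front of them."""
    half_width = depth_m * math.tan(math.radians(hfov_deg) / 2.0)
    # Each camera covers an interval of width 2*half_width centred on its
    # own optical axis; shifting one centre by the baseline shrinks the
    # intersection by exactly that amount.
    return max(0.0, 2.0 * half_width - baseline_m)
```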
7. The method according to claim 5, characterized in that performing deformation error correction, based on the benchmark image, on the deformed depth images in the common field of view so as to update the camera pose parameters of the corresponding depth camera comprises:
according to the depth value Z0 of the benchmark image, calculating the depth value Z1 of the deformed depth image in the common field of view at which the value of the cost function J is minimized, so as to obtain the camera pose parameters of the corresponding depth camera; wherein k is a deformation coefficient, i is the index of a pixel node in the common field of view, and the value of m is less than the total number of pixel nodes in the common field of view; Z1 = f(R, T), where R and T are the camera pose parameters of the depth camera corresponding to the deformed depth image in the common field of view.
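The formula for J appears only as an embedded image in this publication and is not recoverable from the text. Purely as an illustration, suppose J takes the squared-residual form J = Σᵢ (Z0,i − k·Z1,i)² over the m sampled pixel nodes, which is consistent with the symbols named in the claim; the sketch below minimizes such a cost with a scalar scale/offset pair standing in for the pose parameters R and T:

```python
import numpy as np

def correct_deformed_depth(z0, z_raw, k=1.0):
    """Fit a hypothetical linear depth model z1 = a*z_raw + b for the
    deformed camera so that the assumed cost
        J = sum_{i=1}^{m} (z0_i - k * z1_i)**2
    over m sampled pixel nodes is minimized (closed-form least squares).
    The pair (a, b) stands in for the pose update z1 = f(R, T)."""
    A = np.column_stack([k * z_raw, np.full_like(z_raw, k)])
    (a, b), *_ = np.linalg.lstsq(A, z0, rcond=None)
    z1 = a * z_raw + b
    J = float(np.sum((z0 - k * z1) ** 2))
    return a, b, z1, J
```

A full implementation would instead parameterize the rigid transform (R, T), reproject the deformed camera's points into the benchmark view, and minimize J with a nonlinear solver.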
8. The method according to claim 1, characterized in that updating, according to the judgment result of whether each depth image is deformed, the camera pose parameters of the depth camera whose depth image is deformed comprises:
if the plurality of depth images are all deformed, finding the common field of view of the plurality of depth images according to the relative positional relationship of the plurality of depth cameras or by using a three-dimensional point cloud registration algorithm; and
minimizing the value of the cost function according to the information of the depth images in the common field of view, so as to update the camera pose parameters of each depth camera.
9. The method according to claim 8, characterized in that the cost function is defined in terms of the depth deviation value; wherein k is a deformation coefficient, i is the index of a pixel node in the common field of view, and the value of m is less than the total number of pixel nodes in the common field of view; R1 and T1 are the camera pose parameters of the depth camera corresponding to one deformed depth image, Z1 is the depth value of that depth image, and Z1 = f(R1, T1); R2 and T2 are the camera pose parameters of the depth camera corresponding to another deformed depth image, Z2 is the depth value of that depth image, and Z2 = f(R2, T2).
10. A self-calibration device for multiple depth cameras, comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that, when the processor executes the computer program, the steps of the method according to any one of claims 1 to 9 are implemented.
11. A self-calibration system for multiple depth cameras, characterized by comprising: a plurality of depth cameras for acquiring a plurality of depth images, and the device according to claim 10.
12. A computer-readable storage medium, the computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 9 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910379483.3A CN110232715B (en) | 2019-05-08 | 2019-05-08 | Method, device and system for self calibration of multi-depth camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110232715A true CN110232715A (en) | 2019-09-13 |
CN110232715B CN110232715B (en) | 2021-11-19 |
Family
ID=67861169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910379483.3A Active CN110232715B (en) | 2019-05-08 | 2019-05-08 | Method, device and system for self calibration of multi-depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110232715B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101064780A (en) * | 2006-04-30 | 2007-10-31 | 台湾新力国际股份有限公司 | Method and apparatus for improving image joint accuracy using lens distortion correction |
CN107079141A (en) * | 2014-09-22 | 2017-08-18 | 三星电子株式会社 | Image mosaic for 3 D video |
CN107730561A (en) * | 2017-10-17 | 2018-02-23 | 深圳奥比中光科技有限公司 | The bearing calibration of depth camera temperature error and system |
CN108447097A (en) * | 2018-03-05 | 2018-08-24 | 清华-伯克利深圳学院筹备办公室 | Depth camera scaling method, device, electronic equipment and storage medium |
CN108780504A (en) * | 2015-12-22 | 2018-11-09 | 艾奎菲股份有限公司 | Three mesh camera system of depth perception |
US20180342110A1 (en) * | 2017-05-27 | 2018-11-29 | Fujitsu Limited | Information processing method and information processing device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028294A (en) * | 2019-10-20 | 2020-04-17 | 深圳奥比中光科技有限公司 | Multi-distance calibration method and system based on depth camera |
CN111028294B (en) * | 2019-10-20 | 2024-01-16 | 奥比中光科技集团股份有限公司 | Multi-distance calibration method and system based on depth camera |
Similar Documents
Publication | Title
---|---
AU2019432052B2 | Three-dimensional image measurement method, electronic device, storage medium, and program product
CN111199564B | Indoor positioning method and device of intelligent mobile terminal and electronic equipment
CN111735439B | Map construction method, map construction device and computer-readable storage medium
CN109523595A | Vision measuring method for straight-line corner spacing in construction engineering
CN112489099B | Point cloud registration method and device, storage medium and electronic equipment
CN110500954A | Aircraft pose measuring method based on circle features and the P3P algorithm
CN108038885A | Multi-depth camera calibration method
CN115376109B | Obstacle detection method, obstacle detection device, and storage medium
CN112946679B | Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence
JP2016217941A | Three-dimensional evaluation device, three-dimensional data measurement system and three-dimensional measurement method
CN116778288A | Multi-mode fusion target detection system and method
CN111123242A | Combined calibration method based on laser radar and camera and computer-readable storage medium
CN114140539A | Method and device for acquiring position of indoor object
Xinmei et al. | Passive measurement method of tree height and crown diameter using a smartphone
CN105787464A | Viewpoint calibration method for a large number of pictures in a three-dimensional scene
CN115187612A | Plane area measuring method, device and system based on machine vision
CN113012238B | Method for quick calibration and data fusion of multi-depth camera
CN114137564A | Automatic indoor object identification and positioning method and device
CN110232715A | Self-calibration method, apparatus and system for multiple depth cameras
CN116935013A | Circuit board point cloud large-scale splicing method and system based on three-dimensional reconstruction
CN117197775A | Object labeling method, object labeling device and computer-readable storage medium
CN111899277A | Moving object detection method and device, storage medium and electronic device
CN115457130A | Electric vehicle charging port detection and positioning method based on depth key point regression
CN115100287A | External reference calibration method and robot
CN113865506A | Automatic three-dimensional measurement method and system for non-marker-point splicing
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB02 | Change of applicant information | Address after: 11-13/F, Joint Headquarters Building, High-tech Zone, 63 Xuefu Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000; Applicant after: Obi Zhongguang Technology Group Co., Ltd. Address before: 12/F, Joint Headquarters Building, High-tech Zone, 63 Xuefu Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000; Applicant before: SHENZHEN ORBBEC Co., Ltd.
| GR01 | Patent grant |