CN110088563A - Method for calculating image depth, image processing apparatus and three-dimensional measurement system - Google Patents
- Publication number
- CN110088563A CN110088563A CN201980000341.XA CN201980000341A CN110088563A CN 110088563 A CN110088563 A CN 110088563A CN 201980000341 A CN201980000341 A CN 201980000341A CN 110088563 A CN110088563 A CN 110088563A
- Authority
- CN
- China
- Prior art keywords
- image
- parallax
- calibration image
- pixel
- calibration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/2433—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
A method for calculating image depth, an image processing apparatus, and a three-dimensional measurement system are provided. The method for calculating image depth includes: according to a structured image, a first calibration image and a second calibration image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, determining a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels of the structured light on the first calibration image and the second calibration image; and calculating, according to the first parallax and the second parallax, the depth of the target pixel in the structured image, where the first calibration reference plane corresponds to the upper limit of the measurement distance and the second calibration reference plane corresponds to the lower limit of the measurement distance. The method makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoids introducing additional error, and removes the positive correlation between the number of calibration reference planes and measurement accuracy.
Description
Technical field
The present application relates to the technical field of data processing, and in particular to a method for calculating image depth, an image processing apparatus, and a three-dimensional measurement system.
Background technique
In three-dimensional measurement, structured-light systems generally rely on the triangulation principle. In a measurement process based on extrinsic-parameter calibration, the extrinsic parameters of the projector-camera system are calibrated first, a disparity map is then computed, and the depth of each point is finally derived from the disparity map. In a measurement process based on reference-plane calibration, multiple reference planes within the effective range are calibrated, and the local stability of the specific structured light is exploited to obtain the depth of each point by matching against the reference planes. Both calibrating extrinsic parameters and calibrating reference planes have significant limitations. First, the calibration accuracy of the extrinsic parameters generally depends on how accurately corner points are extracted; multiple images are usually needed to improve accuracy, and epipolar rectification is usually required before the disparity map can be obtained, which introduces additional error into the depth calculation. Second, calibrating multiple reference planes is laborious and significantly increases production time and carrying cost, while the accuracy is also limited by the number of calibration planes, making higher-precision three-dimensional measurement difficult.
Summary of the invention
In view of this, one of the technical problems solved by the embodiments of the present invention is to provide a method for calculating image depth, an image processing apparatus and a three-dimensional measurement system, so as to overcome the above drawbacks in the prior art.
An embodiment of the present application provides a method for calculating image depth, comprising:
according to a structured image, a first calibration image and a second calibration image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, determining a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels of the structured light on the first calibration image and the second calibration image;
calculating, according to the first parallax and the second parallax, the depth of the target pixel in the structured image, where the first calibration reference plane corresponds to the upper limit of the measurement distance and the second calibration reference plane corresponds to the lower limit of the measurement distance.
Optionally, in any embodiment of the present application, the method further includes: projecting the first parallax and the second parallax onto the baseline direction to obtain a first projected parallax and a second projected parallax;
accordingly, calculating the depth of the target pixel in the structured image according to the first parallax and the second parallax comprises: calculating the depth of the target pixel in the structured image according to the first projected parallax and the second projected parallax.
Optionally, in any embodiment of the present application, the method further includes: establishing a first fitting model and a second fitting model for the different stripes on the first calibration image and the second calibration image;
determining the first parallax and the second parallax then comprises: determining the first parallax and the second parallax according to the positions of the target pixel in the structured image, the first calibration image and the second calibration image, together with the first fitting model and the second fitting model.
Optionally, in any embodiment of the present application, the method further includes: determining the center pixels of each stripe on the first calibration image and the second calibration image, in order to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
Optionally, in any embodiment of the present application, the method further includes: assigning a mask label to the center pixels of each stripe on the first calibration image and the second calibration image, in order to establish the first fitting model and the second fitting model of the different stripes.
Optionally, in any embodiment of the present application, the method further includes: performing a neighborhood pixel search with the center pixels of each stripe on the first calibration image and the second calibration image as references, in order to establish the first fitting model and the second fitting model of the different stripes.
Optionally, in any embodiment of the present application, a statistical analysis is performed on the pixels found in the neighborhood search to judge whether the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image need to be established.
Optionally, in any embodiment of the present application, the method further includes: determining fitted pixels according to the first fitting model and the second fitting model, and determining a fitting error from the fitted pixels and the corresponding actual pixels.
Optionally, in any embodiment of the present application, the method further includes: extracting the peak values in the first calibration image and the second calibration image, to determine the center pixels of each stripe on the first calibration image and the second calibration image.
An embodiment of the present application provides an image processing apparatus, comprising:
a parallax unit, configured to determine, according to a structured image, a first calibration image and a second calibration image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels of the structured light on the first calibration image and the second calibration image;
a depth calculation unit, configured to calculate, according to the first parallax and the second parallax, the depth of the target pixel in the structured image, where the first calibration reference plane corresponds to the upper limit of the measurement distance and the second calibration reference plane corresponds to the lower limit of the measurement distance.
An embodiment of the present application provides a three-dimensional measurement system, comprising: a projection device, a camera device and the image processing apparatus of any embodiment of the present application, where the projection device is configured to project a coded image onto a target object by means of structured light, and the camera device is configured to capture the structured image formed by the coded image projected onto the target object.
In the embodiments of the present application, according to a structured image, a first calibration image and a second calibration image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels on the first calibration image and the second calibration image are determined; the depth of the target pixel in the structured image is then calculated from the first parallax and the second parallax, where the first calibration reference plane corresponds to the upper limit of the measurement distance and the second calibration reference plane corresponds to its lower limit. As a result, the depth is independent of the intrinsic and extrinsic parameters of the measurement system, which avoids introducing additional error and removes the positive correlation between the number of calibration reference planes and measurement accuracy.
Brief description of the drawings
Some specific embodiments of the present application are described in detail below, by way of example and not limitation, with reference to the accompanying drawings. The same reference numerals denote the same or similar parts in the drawings. Those skilled in the art should understand that the drawings are not necessarily drawn to scale. In the drawings:
Fig. 1 is a schematic diagram of the use of the three-dimensional measurement system in Embodiment 1 of the present application;
Fig. 2 is a flow diagram of the method for calculating image depth in Embodiment 2 of the present application;
Fig. 3 is a schematic diagram of the parallax principle in Embodiment 3 of the present application;
Fig. 4 is a schematic diagram of the depth calculation principle in Embodiment 4 of the present application.
Specific embodiment
Any particular technical solution implementing an embodiment of the present invention need not achieve all of the above advantages at the same time. Specific implementations of the embodiments of the present invention are further described below with reference to the accompanying drawings.
In the embodiments of the present application, according to a structured image, a first calibration image and a second calibration image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels on the first calibration image and the second calibration image are determined; the depth of the target pixel in the structured image is then calculated from the first parallax and the second parallax, where the first calibration reference plane corresponds to the upper limit of the measurement distance and the second calibration reference plane corresponds to its lower limit. The method for calculating image depth thus makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoids introducing additional error, and removes the positive correlation between the number of calibration reference planes and measurement accuracy.
Fig. 1 is a schematic diagram of the use of the three-dimensional measurement system in Embodiment 1 of the present application. As shown in Fig. 1, the system comprises: a projection device, a camera device and an image processing apparatus (not shown). The projection device is configured to project a coded image onto a target object by means of structured light, and the camera device is configured to capture the structured image formed by the coded image projected onto the target object. The image processing apparatus is configured to determine, according to the structured image, the first calibration image and the second calibration image formed by projecting the structured light onto the target object surface, the first calibration reference plane and the second calibration reference plane respectively, the first parallax and the second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels on the first calibration image and the second calibration image, and to calculate the depth of the target pixel in the structured image from the first parallax and the second parallax, where the first calibration reference plane corresponds to the upper limit of the measurement distance of the three-dimensional measurement system and the second calibration reference plane corresponds to its lower limit.
Fig. 2 is a flow diagram of the method for calculating image depth in Embodiment 2 of the present application. As shown in Fig. 2, the method comprises:
S201: projecting the structured light onto the target object surface to form the structured image, and, according to the upper and lower limits of the measurement distance, projecting the structured light onto the first calibration reference plane and the second calibration reference plane respectively to form the first calibration image and the second calibration image.
In this embodiment, the structured light is projected onto the first calibration reference plane and the second calibration reference plane respectively, thereby forming the first calibration image and the second calibration image; similarly, the structured light is projected onto the target object surface to form the structured image.
In addition, in this embodiment, step S201 may further include: determining the center pixels of each stripe on the first calibration image and the second calibration image, and establishing the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image. For illustration, the following takes forming the first fitting model from the first calibration image as an example. Forming the first fitting model comprises the following steps S211-S291:
S211: determining the center pixels of each stripe on the first calibration image.
In this embodiment, the structured light is taken to be stripe-coded as an example. Therefore, in step S211 the peak values of the first calibration image are extracted, and the center pixels of each stripe are determined from these peak values; the center pixels are the pixels at the geometric center of a stripe on the first calibration image.
S221: assigning a mask label to the center pixels of the i-th stripe on the first calibration image.
In this embodiment, considering that the width of the coded stripes of the structured light may vary with distance, the center pixels of each stripe are extracted and, to distinguish center pixels from non-center pixels, each center pixel of every stripe is assigned a mask label, for example 1, while all other pixels on the first calibration image are assigned a mask label of 0. The mask label is a flag bit of a center pixel, and a center pixel is a pixel on the center line of a stripe.
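The peak extraction of S211 and the mask labeling of S221 can be sketched together as follows. This is a minimal illustration assuming bright, non-intersecting, roughly vertical stripes; the function name, threshold parameter and toy image are all inventions of this example, not taken from the patent:

```python
import numpy as np

def stripe_center_mask(img, min_intensity=0.5):
    """Label per-row intensity peaks of vertical coded stripes with 1.

    A pixel gets mask label 1 if it is a local maximum along its row
    and brighter than min_intensity; all other pixels keep label 0,
    mirroring the center/non-center distinction of step S221.
    """
    mask = np.zeros(img.shape, dtype=np.uint8)
    for r in range(img.shape[0]):
        row = img[r]
        for c in range(1, img.shape[1] - 1):
            if row[c] >= row[c - 1] and row[c] > row[c + 1] and row[c] > min_intensity:
                mask[r, c] = 1  # a stripe center pixel in this row
    return mask

# Toy "calibration image": two vertical stripes peaking at columns 3 and 8.
img = np.zeros((5, 12))
img[:, 2:5] = [0.6, 1.0, 0.6]   # stripe 1, peak at column 3
img[:, 7:10] = [0.5, 0.9, 0.5]  # stripe 2, peak at column 8
mask = stripe_center_mask(img)
# mask holds exactly two labeled pixels per row, at columns 3 and 8
```

In practice a sub-pixel peak estimate (for example a parabolic fit through the three samples around each maximum) would typically replace this integer local-maximum test, since the stripe width varies with distance.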
S231: performing a neighborhood pixel search with the center pixels of the i-th stripe on the first calibration image as reference.
In this embodiment, in order to record the positions of the coded stripes one by one, a neighborhood pixel search is performed. As before, suppose there are N stripes in total and they do not intersect; each stripe is to be searched. The search proceeds from top to bottom: the topmost pixel labeled 1 is first found as the starting point of a given stripe, and an 8-neighborhood search is then carried out downward from it. During the search, starting from the topmost center pixel whose mask label is 1, a continuous 8-neighborhood search is carried out downward, and all pixels found with mask label 1 are added to a set. Here, the 8-neighborhood refers to the 8 pixels surrounding the center pixel; for a downward search, only the 3 pixels below in the 8-neighborhood actually need to be considered, and among these the pixels whose mask label is 1 are determined. If, during the downward search, a pixel with mask label 1 is found, the search continues downward with that pixel as the new reference point, determining further pixels whose mask label is 1, and so on, until in the final search step no pixel below in the 8-neighborhood has mask label 1.
It should be noted that the size of the neighborhood is not limited to 8 in this embodiment and can be set flexibly according to the application scenario.
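The downward search of S231 can be sketched as follows; per the description, only the three lower pixels of the 8-neighborhood are examined at each step. The function name, mask layout and start point are illustrative assumptions:

```python
def trace_stripe(mask, start):
    """Collect one stripe's center pixels by a downward 8-neighborhood search.

    Starting from the topmost pixel labeled 1, only the three pixels
    directly below the current reference point are inspected; the trace
    stops when none of them carries mask label 1.
    """
    rows, cols = len(mask), len(mask[0])
    stripe = [start]
    r, c = start
    while True:
        found = None
        for dc in (-1, 0, 1):              # the 3 lower neighbors of (r, c)
            rr, cc = r + 1, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc] == 1:
                found = (rr, cc)
                break
        if found is None:                  # no labeled pixel below: stripe ends
            return stripe
        stripe.append(found)
        r, c = found

# A 4x4 mask containing one slightly slanted stripe of center pixels.
mask = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
]
pixels = trace_stripe(mask, (0, 1))
# pixels == [(0, 1), (1, 1), (2, 2), (3, 2)]
```

The set returned here is what step S241 would count against the quantity threshold to decide whether the stripe is legal.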
S241: performing a statistical analysis on the pixels found in the neighborhood search.
In this embodiment, the pixels with mask label 1 reflect the positions of the coded stripes, and the fitting models are established per coded stripe, i.e. each coded stripe corresponds to one first fitting model. Therefore, in step S241, when the center pixels found in the neighborhood search are statistically analyzed, all pixels that lie on the same stripe center line and were found within the neighborhood are counted; step S251 then judges whether the number of these pixels exceeds a set quantity threshold. If it does, the coded stripe is a legal stripe, i.e. there is no stripe break or non-stripe caused by image quality or similar problems, and the first fitting model of the corresponding stripe needs to be established; otherwise step S273B is executed: the mask labels of the found pixels are cleared, so that the mask labels of these pixels become 0. Stripes for which no first fitting model needs to be established can be discarded directly.
S251: judging whether the first fitting model of the i-th stripe in the first calibration image needs to be established; if so, step S261 is executed; otherwise step S273B is executed.
S261: establishing the first fitting model of the i-th stripe on the first calibration image.
In this embodiment, the first fitting model mainly reflects the mathematical relationship between the position of a coded stripe and the coordinates of its pixels; the position of the corresponding stripe can be known from these pixel coordinates. With the top-to-bottom search described above, taking the upper-left corner of the image as the coordinate origin, the downward (vertical) direction as the abscissa direction and the rightward (horizontal) direction as the ordinate direction, a stripe can be fitted as a straight line or a curve in this coordinate system, so that the position of the stripe on the first calibration image, and likewise on the second calibration image, is determined.
Further, in a concrete application scenario, several pixels may be selected arbitrarily from all the pixels with mask label 1 obtained by the neighborhood search based on the center pixels, and the fit is computed from the coordinates of these selected pixels, thereby obtaining the relationship between the position of the corresponding stripe and the coordinates of its pixels.
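The fitting of S261 can be sketched with an ordinary polynomial fit over a stripe's collected center pixels; np.polyfit stands in for whatever fitting routine an implementation would actually use, and the degree and sample coordinates are assumptions of this example:

```python
import numpy as np

def fit_stripe(pixels, degree=1):
    """Fit a stripe's column (ordinate) as a function of its row (abscissa).

    pixels is the set of (row, col) center pixels collected for one
    stripe by the neighborhood search; the returned polynomial plays
    the role of the stripe's fitting model (a line or a curve).
    """
    rows = np.array([p[0] for p in pixels], dtype=float)
    cols = np.array([p[1] for p in pixels], dtype=float)
    return np.poly1d(np.polyfit(rows, cols, degree))

# Center pixels of a slightly slanted stripe.
pixels = [(0, 10.0), (1, 10.5), (2, 11.0), (3, 11.5)]
model = fit_stripe(pixels, degree=1)
# model(4) recovers the stripe's column in row 4, i.e. 12.0
```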
S271: determining fitted pixels according to the first fitting model, and determining the fitting error from the fitted pixels and the corresponding actual pixels.
In this embodiment, in order to verify the validity of the first fitting model, after the first fitting model is obtained the coordinates of the stripe are known, and the coordinates of a pixel can be predicted back from the model to determine whether that pixel lies on the stripe. If the error between the predicted pixel coordinates and the actual coordinates of the pixel is greater than a set fitting-error threshold, the accuracy of the first fitting model is poor and the fitting needs to be redone. When refitting, several pixels are again selected arbitrarily from the above set, and the fit is recomputed from the coordinates of the newly selected pixels, thereby re-obtaining the relationship between the stripe coordinates and its pixel coordinates, until the fitting error is less than the set fitting-error threshold.
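The validity check of S271 amounts to comparing model-predicted coordinates against the actual center pixels; a minimal sketch, where the helper name and the 0.2-pixel threshold are invented for illustration:

```python
import numpy as np

def fitting_error(model, pixels):
    """Largest gap between predicted and actual column over a stripe's
    center pixels; compared against the fitting-error threshold."""
    return max(abs(model(r) - c) for r, c in pixels)

# Noisy center pixels of one stripe, and a line fitted through them.
pixels = [(0, 10.0), (1, 10.4), (2, 11.1), (3, 11.5)]
model = np.poly1d(np.polyfit([p[0] for p in pixels],
                             [p[1] for p in pixels], 1))
err = fitting_error(model, pixels)
ok = err < 0.2   # the set fitting-error threshold; refit if this is False
```

If ok is False, an implementation would clear the mask labels and re-select pixels, matching the refit loop of steps S281/S291B below.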
S281: judging whether the fitting error is less than the set fitting-error threshold; if so, step S291A is executed; otherwise step S291B is executed.
S291A: saving the first fitting model.
In this embodiment, the saved first fitting model is mainly used in the subsequent processing that computes depth from parallax.
S291B: clearing the mask labels of the pixels and jumping back to step S231.
In this embodiment, when the fitting error is greater than the set fitting-error threshold, the mask labels of the pixels in the above set are changed from 1 back to 0, and the process jumps back to S231 to search the neighborhood again for pixels with mask label 1, taking the center pixels of the i-th stripe as reference, so as to re-establish the first fitting model.
In this embodiment, the second fitting model is established similarly to steps S211-S291B above and is not described again in detail.
S202: determining, according to the structured image, the first calibration image and the second calibration image formed by projecting the structured light onto the target object surface, the first calibration reference plane and the second calibration reference plane respectively, the first parallax and the second parallax of the target pixel of the structured light in the structured image relative to the corresponding pixels of the structured light on the first calibration image and the second calibration image.
Referring to Fig. 3, a schematic diagram of the parallax principle in Embodiment 3 of the present application, take as an example the parallax corresponding to a target pixel P0 on some stripe (say stripe 1) in the structured image. As shown in Fig. 3, the pixel corresponding to target pixel P0 on the second calibration image is denoted P1, and the corresponding pixel on the first calibration image is P2. In fact, for the target pixel P0 on stripe 1 of the structured image, it can only be determined that its correspondences on the first calibration image and the second calibration image lie on stripe 1; their specific locations on stripe 1 require further analysis.
Referring to Fig. 3, as the measurement distance increases from near to far, the trajectory of target pixel P0 actually follows the oblique-line direction in Fig. 3; the parallaxes of target pixel P0 relative to pixels P1 and P2 are shown as d1 and d0 in the figure.
It should be noted that, in other embodiments, the above steps of establishing the fitting models may also be included in step S202.
S203: calculating the depth of the structured light in the structured image according to the first parallax and the second parallax.
In this embodiment, step S203 may specifically include:
S213: projecting the first parallax and the second parallax onto the baseline direction to obtain a first projected parallax and a second projected parallax.
Further, referring again to Fig. 3, since the specific locations of P1 and P2 cannot be determined, the parallaxes d0 and d1 along the oblique-line direction cannot directly participate in the depth calculation. Therefore, the parallaxes d0 and d1 along the oblique-line direction are projected onto the baseline direction, yielding the projected parallaxes dy0 and dy1.
Specifically, as described above, a fitting model has been established for every stripe. Hence, as long as it is known on which stripe target pixel P0 lies on the first calibration image and the second calibration image, the ordinate values of the pixels P1 and P2 corresponding to target pixel P0 on the first calibration image and the second calibration image can be determined directly from that stripe's first fitting model and second fitting model. As for the abscissa, since target pixel P0 and pixels P1 and P2 are mutually matched points, the three pixels agree in the row direction, i.e. in which image row they lie. If the coordinates of target pixel P0 are (x, y0), then projecting the coordinates of pixels P1 and P2 on the first calibration image and the second calibration image onto the baseline direction gives projected coordinates (x, y1) and (x, y2) respectively, and the corresponding parallaxes are dy0 = y0 - y1 and dy1 = y2 - y0, where y1 and y2 are in fact given by the first fitting model and the second fitting model. It can be seen that the first fitting model and the second fitting model characterize the relationship between a pixel's lateral coordinate and its column coordinate; thus, once x is known and the fitting model is known, the ordinate of the pixel on each calibration image can be obtained.
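The computation of dy0 and dy1 from the fitting models can be sketched as below, following the coordinate assignment (x, y0), (x, y1), (x, y2) of the paragraph above; the linear models and sample numbers are invented for illustration:

```python
import numpy as np

def projected_parallaxes(p0, fit1, fit2):
    """Projected parallaxes of structured-image pixel P0 = (x, y0).

    fit1 and fit2 map the shared abscissa x to the stripe's ordinate
    on the first and second calibration image, giving y1 and y2, so
    that dy0 = y0 - y1 and dy1 = y2 - y0 as in the description.
    """
    x, y0 = p0
    y1 = fit1(x)   # ordinate of the corresponding pixel on calibration image 1
    y2 = fit2(x)   # ordinate of the corresponding pixel on calibration image 2
    return y0 - y1, y2 - y0

# Toy linear stripe models for the two calibration images.
fit1 = np.poly1d([0.5, 10.0])   # y1 = 0.5*x + 10
fit2 = np.poly1d([0.5, 16.0])   # y2 = 0.5*x + 16
dy0, dy1 = projected_parallaxes((4, 14.0), fit1, fit2)
# dy0 == 2.0 and dy1 == 4.0
```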
In fact, referring to Fig. 4, since d1/d0 = dy1/dy0, dy1 and dy0 can be used in place of d1 and d0; the detailed reason can be found in the explanation of the calculation formula for the depth Z below.
S223: calculating the depth of the structured light in the structured image according to the first projected parallax and the second projected parallax.
Fig. 4 is a schematic diagram of the depth calculation principle in Embodiment 4 of the present application. As shown in Fig. 4, the starting distance and ending distance respectively indicate the lower and upper limits of the measurement distance, and the corresponding calibration images are in fact the second calibration image and the first calibration image respectively. The spatial points A, C and F are the projections of the same point projected by the projector onto planes at different distances; the imaging point of point C on the structured image corresponds to the target pixel P0 above, the imaging point of point A on the first calibration image corresponds to the pixel P1 above, and the imaging point of point F on the second calibration plane corresponds to the pixel P2 above. The parallaxes of target pixel P0 relative to pixel P1 and pixel P2 are parallax d0 and parallax d1 respectively, and projecting them onto the direction of baseline B in the pixel coordinate system (which in the pixel coordinate system actually appears as the direction perpendicular to the stripes) yields the corresponding projected parallaxes dy0 and dy1. Indeed, from the relationship of similar triangles, the following formula holds:

Z=Z1*Z2*(d0+d1)/(Z1*d0+Z2*d1) (1)
From the above formula it can be seen that the depth Z of the target pixel P0 is independent of the intrinsic and extrinsic parameters of the measurement system and is related only to the parallaxes d0 and d1, so the intrinsic and extrinsic parameters of the measurement system need not be calibrated. Moreover, since the formula is unchanged when d0 and d1 are scaled by a common factor, this embodiment equally computes the depth from the projected parallaxes obtained by projecting the oblique-direction parallaxes onto the baseline direction.
Referring to Fig. 4 above, the derivation of the formula for the depth Z is as follows. From the theory of similar triangles, for the point C on the structured image, formula (2) holds; similarly, for the point at the lower limit Z1 of the measurement distance, formula (3) holds on the first calibration image; at the same time, formula (4) holds; combining formulas (2) and (3) gives formula (5); and from formulas (4) and (5) it follows that:

(Z2-Z1)d1*Z=Z1*(d0+d1)(Z2-Z) (6)

Solving formula (6) yields formula (1) above.
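Solving formula (6) for Z gives a closed form that uses only the two parallaxes and the calibrated distance limits, which is the sense in which the depth is independent of the intrinsic and extrinsic parameters. The sketch below checks the solution numerically against formula (6); the variable names and sample values are illustrative:

```python
def depth_from_parallaxes(d0, d1, z1, z2):
    """Depth Z of a target pixel from its two parallaxes.

    z1 / z2 are the lower / upper limits of the measurement distance.
    Formula (6) implies d0 = 0 at the lower limit and d1 = 0 at the
    upper limit; solving it for Z gives the closed form below, with
    no intrinsic or extrinsic camera parameters involved.
    """
    return z1 * z2 * (d0 + d1) / (z1 * d0 + z2 * d1)

z1, z2 = 400.0, 800.0    # measurement range (units arbitrary, e.g. mm)
d0, d1 = 3.0, 5.0        # parallaxes in pixels
z = depth_from_parallaxes(d0, d1, z1, z2)

# Check formula (6): (Z2 - Z1)*d1*Z == Z1*(d0 + d1)*(Z2 - Z)
lhs = (z2 - z1) * d1 * z
rhs = z1 * (d0 + d1) * (z2 - z)
# lhs and rhs agree up to rounding, and z lies between z1 and z2
```

Because the formula is unchanged when d0 and d1 are scaled by a common factor, the projected parallaxes dy0 and dy1 may be substituted for d0 and d1, as the description notes.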
An embodiment of the present application provides an image processing apparatus, comprising:
a parallax unit, configured to determine, according to a structured image, a first calibration image and a second calibration image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the corresponding pixels of the structured light on the first calibration image and the second calibration image;
a depth calculation unit, configured to calculate, according to the first parallax and the second parallax, the depth of the target pixel in the structured image, where the first calibration reference plane corresponds to the upper limit of the measurement distance and the second calibration reference plane corresponds to the lower limit of the measurement distance.
Here, it should be noted that the above image processing apparatus may be implemented on an image processing chip or on other chips.
This concludes the description of specific embodiments of the subject matter. Other embodiments fall within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve the desired results. Moreover, the processes depicted in the drawings do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as a hardware improvement (for example, an improvement to circuit structures such as diodes, transistors or switches) or a software improvement (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this kind of programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation must likewise be written in a particular programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly programming the method flow in logic using the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for realizing various functions may also be regarded as structures within the hardware component. Indeed, the means for realizing various functions may even be regarded both as software modules for implementing the method and as structures within the hardware component.
For convenience of description, the above apparatus is described in terms of separate units divided by function. Of course, when implementing the present application, the functions of the units may be realized in one or more pieces of software and/or hardware.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface and memory.
The memory may include non-persistent memory in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The above is merely an example of the present application and is not intended to limit it. Various modifications and variations of the present application are possible for those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.
Claims (11)
1. A method for calculating image depth, characterized by comprising:
determining, from a structured image, a first uncalibrated image and a second uncalibrated image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the target pixel of the structured light on the first uncalibrated image and on the second uncalibrated image respectively;
calculating, according to the first parallax and the second parallax, a depth of the target pixel in the structured image, wherein the first calibration reference plane corresponds to an upper limit of a measurement distance and the second calibration reference plane corresponds to a lower limit of the measurement distance.
2. The method according to claim 1, characterized by further comprising: projecting the first parallax and the second parallax onto a baseline direction to obtain a first projected parallax and a second projected parallax;
correspondingly, calculating the depth of the target pixel in the structured image according to the first parallax and the second parallax comprises: calculating the depth of the target pixel in the structured image according to the first projected parallax and the second projected parallax.
3. The method according to claim 1, characterized by further comprising: establishing first fitting models and second fitting models of different stripes on the first uncalibrated image and the second uncalibrated image;
wherein determining, from the structured image, the first uncalibrated image and the second uncalibrated image formed by projecting the structured light onto the target object surface, the first calibration reference plane and the second calibration reference plane respectively, the first parallax and the second parallax of the target pixel of the structured light in the structured image relative to the target pixel of the structured light on the first uncalibrated image and on the second uncalibrated image comprises: determining the first parallax and the second parallax according to the positions of the target pixel in the structured image, the first uncalibrated image and the second uncalibrated image respectively, together with the first fitting models and the second fitting models.
4. The method according to claim 3, characterized by further comprising: determining the center pixels of each stripe on the first uncalibrated image and the second uncalibrated image, in order to establish the first fitting models and the second fitting models of different stripes on the first uncalibrated image and the second uncalibrated image.
5. The method according to claim 4, characterized by further comprising: determining masks assigned to the center-pixel distribution of each stripe on the first uncalibrated image and the second uncalibrated image, in order to establish the first fitting models and the second fitting models of different stripes on the first uncalibrated image and the second uncalibrated image.
6. The method according to claim 5, characterized by further comprising: performing a neighborhood pixel search with the center pixels of each stripe on the first uncalibrated image and the second uncalibrated image as references, in order to establish the first fitting models and the second fitting models of different stripes on the first uncalibrated image and the second uncalibrated image.
7. The method according to claim 6, characterized in that pixel statistics of the searched neighborhood are analyzed to judge whether the first fitting models and the second fitting models of different stripes on the first uncalibrated image and the second uncalibrated image need to be established.
8. The method according to claim 4, characterized by further comprising: determining fitted pixels according to the first fitting models and the second fitting models, and determining a fitting error from the fitted pixels and the corresponding actual pixels.
9. The method according to any one of claims 4-8, characterized by further comprising: extracting peak values in the first uncalibrated image and the second uncalibrated image to determine the center pixels of each stripe on the first uncalibrated image and the second uncalibrated image.
10. An image processing apparatus, characterized by comprising:
a parallax unit, configured to determine, from a structured image, a first uncalibrated image and a second uncalibrated image formed by projecting structured light onto a target object surface, a first calibration reference plane and a second calibration reference plane respectively, a first parallax and a second parallax of a target pixel of the structured light in the structured image relative to the target pixel of the structured light on the first uncalibrated image and on the second uncalibrated image respectively;
a depth calculation unit, configured to calculate, according to the first parallax and the second parallax, a depth of the target pixel in the structured image, wherein the first calibration reference plane corresponds to an upper limit of a measurement distance and the second calibration reference plane corresponds to a lower limit of the measurement distance.
11. A three-dimensional measurement system, characterized by comprising: a projection device, a camera device and the image processing apparatus according to claim 10, wherein the projection device is configured to project a coded image formed by structured light onto a target object, and the camera device is configured to capture the structured image formed by the coded image projected onto the target object.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/077993 WO2020181524A1 (en) | 2019-03-13 | 2019-03-13 | Image depth calculation method, image processing device, and three-dimensional measurement system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110088563A true CN110088563A (en) | 2019-08-02 |
CN110088563B CN110088563B (en) | 2021-03-19 |
Family
ID=67424510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980000341.XA Active CN110088563B (en) | 2019-03-13 | 2019-03-13 | Image depth calculation method, image processing device and three-dimensional measurement system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110088563B (en) |
WO (1) | WO2020181524A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085752A (en) * | 2020-08-20 | 2020-12-15 | 浙江华睿科技有限公司 | Image processing method, device, equipment and medium |
CN113099120A (en) * | 2021-04-13 | 2021-07-09 | 南昌虚拟现实研究院股份有限公司 | Depth information acquisition method and device, readable storage medium and depth camera |
CN113592706A (en) * | 2021-07-28 | 2021-11-02 | 北京地平线信息技术有限公司 | Method and device for adjusting homography matrix parameters |
CN114160961A (en) * | 2021-12-14 | 2022-03-11 | 深圳快造科技有限公司 | System and method for calibrating laser processing parameters |
CN112752088B (en) * | 2020-07-28 | 2023-03-28 | 腾讯科技(深圳)有限公司 | Depth image generation method and device, reference image generation method and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1888815A (en) * | 2006-07-13 | 2007-01-03 | 上海交通大学 | Projecting structural optical space position and shape multi-point fitting calibrating method |
US20140055560A1 (en) * | 2012-08-24 | 2014-02-27 | Microsoft Corporation | Depth Data Processing and Compression |
CN104408732A (en) * | 2014-12-10 | 2015-03-11 | 东北大学 | Large-view-field depth measuring system and method based on omni-directional structured light |
KR20150098035A (en) * | 2014-02-19 | 2015-08-27 | 엘지전자 주식회사 | Device for estimating three-dimensional shape of object and method thereof |
CN106875435A (en) * | 2016-12-14 | 2017-06-20 | 深圳奥比中光科技有限公司 | Obtain the method and system of depth image |
CN109405765A (en) * | 2018-10-23 | 2019-03-01 | 北京的卢深视科技有限公司 | A kind of high accuracy depth calculation method and system based on pattern light |
CN109461181A (en) * | 2018-10-17 | 2019-03-12 | 北京华捷艾米科技有限公司 | Depth image acquisition method and system based on pattern light |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5986364B2 (en) * | 2011-10-17 | 2016-09-06 | キヤノン株式会社 | Three-dimensional shape measuring apparatus, control method for three-dimensional shape measuring apparatus, and program |
2019
- 2019-03-13 WO PCT/CN2019/077993 patent/WO2020181524A1/en active Application Filing
- 2019-03-13 CN CN201980000341.XA patent/CN110088563B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1888815A (en) * | 2006-07-13 | 2007-01-03 | 上海交通大学 | Projecting structural optical space position and shape multi-point fitting calibrating method |
US20140055560A1 (en) * | 2012-08-24 | 2014-02-27 | Microsoft Corporation | Depth Data Processing and Compression |
KR20150098035A (en) * | 2014-02-19 | 2015-08-27 | 엘지전자 주식회사 | Device for estimating three-dimensional shape of object and method thereof |
CN104408732A (en) * | 2014-12-10 | 2015-03-11 | 东北大学 | Large-view-field depth measuring system and method based on omni-directional structured light |
CN106875435A (en) * | 2016-12-14 | 2017-06-20 | 深圳奥比中光科技有限公司 | Obtain the method and system of depth image |
CN109461181A (en) * | 2018-10-17 | 2019-03-12 | 北京华捷艾米科技有限公司 | Depth image acquisition method and system based on pattern light |
CN109405765A (en) * | 2018-10-23 | 2019-03-01 | 北京的卢深视科技有限公司 | A kind of high accuracy depth calculation method and system based on pattern light |
Non-Patent Citations (1)
Title |
---|
JIA Tong et al.: "A Depth Measurement Method Based on Omni-directional Structured Light", Acta Automatica Sinica *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112752088B (en) * | 2020-07-28 | 2023-03-28 | 腾讯科技(深圳)有限公司 | Depth image generation method and device, reference image generation method and electronic equipment |
CN112085752A (en) * | 2020-08-20 | 2020-12-15 | 浙江华睿科技有限公司 | Image processing method, device, equipment and medium |
CN112085752B (en) * | 2020-08-20 | 2024-01-30 | 浙江华睿科技股份有限公司 | Image processing method, device, equipment and medium |
CN113099120A (en) * | 2021-04-13 | 2021-07-09 | 南昌虚拟现实研究院股份有限公司 | Depth information acquisition method and device, readable storage medium and depth camera |
CN113099120B (en) * | 2021-04-13 | 2023-04-18 | 南昌虚拟现实研究院股份有限公司 | Depth information acquisition method and device, readable storage medium and depth camera |
CN113592706A (en) * | 2021-07-28 | 2021-11-02 | 北京地平线信息技术有限公司 | Method and device for adjusting homography matrix parameters |
CN113592706B (en) * | 2021-07-28 | 2023-10-17 | 北京地平线信息技术有限公司 | Method and device for adjusting homography matrix parameters |
CN114160961A (en) * | 2021-12-14 | 2022-03-11 | 深圳快造科技有限公司 | System and method for calibrating laser processing parameters |
CN114160961B (en) * | 2021-12-14 | 2023-10-13 | 深圳快造科技有限公司 | System and method for calibrating laser processing parameters |
Also Published As
Publication number | Publication date |
---|---|
WO2020181524A1 (en) | 2020-09-17 |
CN110088563B (en) | 2021-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110088563A (en) | Calculation method, image processing apparatus and the three-dimension measuring system of picture depth | |
Vespa et al. | Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping | |
Zhu et al. | Nicer-slam: Neural implicit scene encoding for rgb slam | |
US9098909B2 (en) | Three-dimensional distance measurement apparatus, three-dimensional distance measurement method, and non-transitory computer-readable storage medium | |
US8447099B2 (en) | Forming 3D models using two images | |
US8452081B2 (en) | Forming 3D models using multiple images | |
US9053571B2 (en) | Generating computer models of 3D objects | |
US8711143B2 (en) | System and method for interactive image-based modeling of curved surfaces using single-view and multi-view feature curves | |
CN106033621B (en) | A kind of method and device of three-dimensional modeling | |
JP5881743B2 (en) | Self-position estimation of mobile camera using depth map | |
US20120176478A1 (en) | Forming range maps using periodic illumination patterns | |
CN102446343B (en) | Reconstruction of sparse data | |
KR102197067B1 (en) | Method and Apparatus for rendering same region of multi frames | |
KR101893788B1 (en) | Apparatus and method of image matching in multi-view camera | |
US20200336673A1 (en) | Phase detect auto-focus three dimensional image capture system | |
JP2011242183A (en) | Image processing device, image processing method, and program | |
CN103530861B (en) | A kind of core image splicing and amalgamation method | |
CN109993826A (en) | A kind of structural light three-dimensional image rebuilding method, equipment and system | |
CN112416133A (en) | Hand motion capture method and device, electronic equipment and storage medium | |
KR20110032366A (en) | Image processing apparatus and method | |
CN110851978A (en) | Camera position optimization method based on visibility | |
Ling et al. | Real‐time dense mapping for online processing and navigation | |
CN103559710B (en) | A kind of scaling method for three-dimensional reconstruction system | |
CN113532266B (en) | Box volume measuring method, system, equipment and storage medium based on three-dimensional vision | |
US20200041262A1 (en) | Method for performing calibration by using measured data without assumed calibration model and three-dimensional scanner calibration system for performing same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||