CN105352482A - Bionic compound eye microlens technology-based 3-3-2 dimension object detection method and system


Info

Publication number
CN105352482A
Authority
CN
China
Prior art keywords
image
target
object detection
microlens
bionic compound
Prior art date
Legal status
Granted
Application number
CN201510732346.5A
Other languages
Chinese (zh)
Other versions
CN105352482B (en)
Inventor
晏磊 (Yan Lei)
景欣 (Jing Xin)
赵红颖 (Zhao Hongying)
杨鹏 (Yang Peng)
万杰 (Wan Jie)
孙华波 (Sun Huabo)
高鹏骐 (Gao Pengqi)
罗博仁 (Luo Boren)
刘飒 (Liu Sa)
Current Assignee
Peking University
Original Assignee
Peking University
Priority date: 2015-11-02
Filing date: 2015-11-02
Publication date: 2016-02-24
Application filed by Peking University
Priority to CN201510732346.5A
Publication of CN105352482A
Application granted
Publication of CN105352482B
Status: Active


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Abstract

The invention relates to a 3-3-2 dimension object detection method and system based on bionic compound eye microlens technology. Using a low-resolution data acquisition method built on a bionic compound eye microlens system, the target region is captured and imaged: from the microlens array images taken by two microlens devices, a low-resolution image is constructed by a linear weighted average method, and the three-dimensional profile of the target is reconstructed by a forward intersection measuring method. If the target is effectively captured, that is, it appears in the low-resolution image, a high-resolution image of the target region is reconstructed by a regularization method with the microlens array images as base data; after the high-resolution two-dimensional image of the target region is acquired, the target is precisely recognized with a texture-gradient-based GAC model. By adding a step that acquires the target's three-dimensional profile from a low-resolution image, meaningless processing of redundant images is effectively avoided, and the real-time processing efficiency and accuracy of the system are improved.

Description

3-3-2 dimension object detection method and system based on bionic compound eye microlens technology
Technical field
The present invention relates to a method and system for improving target detection efficiency, and in particular to a 3-3-2 dimension object detection method and system based on bionic compound eye microlens technology, namely a high-precision, high-efficiency object detection method and system that uses the two-level-resolution data acquisition of a microlens system to perform low-resolution capture of the target's three-dimensional profile and high-resolution two-dimensional staring imaging.
Background art
Traditional object detection methods photograph the target with a single imaging device and perform detection on the mass of high-resolution images thus obtained. The more images the camera takes and the higher its imaging resolution, the larger the acquired data volume and the more information it contains; automatic processing by the computer therefore takes longer and efficiency drops. In addition, because the target's position information is uncertain, a moving target is not always effectively captured, so the high-resolution imagery contains a large number of redundant images. Traditional methods do not consider whether an image contains a target at all; they process every image uniformly, which inevitably reduces the efficiency of computer-based target detection. Current researchers concentrate on the detection algorithms themselves; although the proposed algorithms can improve algorithmic detection efficiency, the mass of redundant data still keeps the system's real-time processing efficiency low. As to how to avoid meaningless processing of redundant images, research at home and abroad is still almost blank.
Summary of the invention
To overcome the traditional methods' need to process a mass of redundant images meaninglessly, the present invention provides a 3-3-2 dimension object detection method and system based on bionic compound eye microlens technology.
To achieve the above object, the present invention adopts the following technical solutions:
A microlens system is used to build two-level-resolution data acquisition, performing low-resolution three-dimensional profile capture of the target and high-resolution two-dimensional staring imaging. By adding a step that captures the target's three-dimensional profile from a low-resolution image, the system can judge more efficiently whether a target is present in the target region, effectively avoiding meaningless processing of redundant images and improving the system's real-time processing efficiency and accuracy.
Specifically, the 3-3-2 dimension object detection method based on bionic compound eye microlens technology comprises the following steps:
1) using the microlens system based on the bionic compound eye structure as the imaging system, capturing and imaging the target region, and reconstructing a low-resolution image from the captured microlens array images by the linear weighted average method;
2) based on the low-resolution image reconstructed in step 1), computing the three-dimensional coordinates of target points by the forward intersection measuring method, thereby performing low-resolution capture of the target's three-dimensional profile;
3) if the target has been effectively captured, reconstructing a high-resolution image by a regularization method with the microlens array images from step 1) as base data, and performing high-resolution two-dimensional staring imaging of the target region; otherwise moving the microlens system and returning to step 1);
4) after the high-resolution two-dimensional image of the target region is obtained, accurately recognizing the target with the texture-gradient-based GAC model to complete target detection.
In step 1), the linear weighted average method is as follows:

$$\bar{g} = p_1 g_1 + p_2 g_2 + \cdots + p_m g_m, \qquad p_i = \frac{n_i}{n_1 + n_2 + \cdots + n_m} \quad (1 \le i \le m)$$

where the gray values occurring in a unit image are sorted from small to large and numbered 1 to m, $g_i$ is the gray value numbered $i$, $p_i$ is the weight corresponding to gray value $g_i$, and $n_i$ is the number of pixels whose gray value is $g_i$.
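As an illustration only, a minimal Python sketch of this frequency-weighted averaging per unit image might look as follows (the function names and the square unit-image layout are assumptions, not part of the patent):

import numpy as np

def weighted_unit_gray(unit_image):
    # Frequency-weighted mean gray value of one microlens unit image:
    # each distinct gray value g_i is weighted by p_i = n_i / (n_1 + ... + n_m),
    # where n_i is the number of pixels having gray value g_i.
    values, counts = np.unique(unit_image, return_counts=True)  # g_i, n_i
    weights = counts / counts.sum()                             # p_i
    return float(np.sum(weights * values))                      # sum_i p_i * g_i

def low_resolution_image(array_image, unit):
    # One low-resolution pixel per unit x unit sub-image of the array image.
    rows, cols = array_image.shape[0] // unit, array_image.shape[1] // unit
    low = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            low[r, c] = weighted_unit_gray(
                array_image[r * unit:(r + 1) * unit, c * unit:(c + 1) * unit])
    return low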
Step 2) specifically comprises the following: ① extracting and matching feature points on the images; ② screening some corresponding points among the matches and performing relative orientation, i.e. determining the relative attitude information between the microlens devices; ③ according to the relative attitude information between the microlens devices, obtaining the relative three-dimensional coordinates of the matched points by the forward intersection measuring method, i.e. obtaining the target's three-dimensional profile.
The matching algorithm uses the local image gradient features of each feature point to determine its principal direction, with the following formulas:

$$m(x,y) = \sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2 + \big(L(x,y+1)-L(x,y-1)\big)^2}$$

$$\theta(x,y) = \arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}$$

where (x, y) are the coordinates of the feature point, m(x, y) and θ(x, y) are the gradient magnitude and direction of the Gaussian pyramid image at the current scale at (x, y), and L(x, y) is the gray value of the Gaussian pyramid image at the current scale at (x, y).
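For illustration, a small Python sketch of this central-difference gradient computation on one Gaussian-pyramid level (the function name is hypothetical):

import numpy as np

def gradient_mag_ori(L):
    # Central-difference gradient of one Gaussian-pyramid level L:
    # m = sqrt((L(x+1,y)-L(x-1,y))^2 + (L(x,y+1)-L(x,y-1))^2),
    # theta = arctan of the ratio of those two differences.
    L = np.asarray(L, dtype=float)
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x+1, y) - L(x-1, y)
    dy[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)           # four-quadrant form of the arctan
    return m, theta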
Determining the relative attitude information between the two microlens devices means computing the elements of relative orientation of the two photos. The condition equation is:

$$Q = N_1 Y_1 - N_2 Y_2 - B_Y, \qquad N_1 = \frac{B_X Z_2 - B_Z X_2}{X_1 Z_2 - X_2 Z_1}, \qquad N_2 = \frac{B_X Z_1 - B_Z X_1}{X_1 Z_2 - X_2 Z_1}$$

where Q is the vertical parallax, N_1 and N_2 are projection coefficients, (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) are the coordinates of the image points in the image-space auxiliary coordinate system, B_X, B_Y and B_Z are the projections of the photographic baseline on the X, Y and Z axes, and d denotes differentiation. Applying the principle of least squares, an error equation can be set up for each corresponding point n:

$$v_n = a_n\,d\varphi + b_n\,d\omega + c_n\,d\kappa + d_n\,d\mu + e_n\,d\nu - l_n$$

where l is the free term of the indirect adjustment, a_n, b_n, c_n, d_n, e_n are the coefficients of the error equation, and v_n is its residual. The elements of relative orientation, namely the angular elements (φ, ω, κ) of the second photo relative to the first and the yaw angle and pitch angle (μ, ν) of the baseline, are then obtained by indirect adjustment.
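A minimal sketch of the indirect-adjustment solve, assuming the error-equation coefficients and free terms have already been formed for each matched point (shapes and names are illustrative, not the patent's implementation):

import numpy as np

def relative_orientation_step(A, l):
    # One least-squares step of the indirect adjustment.
    # A: (n, 5) coefficients [a_n, b_n, c_n, d_n, e_n] of the error equations,
    # l: (n,) free terms. Returns the corrections
    # (d_phi, d_omega, d_kappa, d_mu, d_nu); in practice the corrections are
    # applied to the current orientation elements and the step is repeated
    # until they become negligible.
    x, *_ = np.linalg.lstsq(A, l, rcond=None)  # least-squares solution of A x = l
    return x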
The concrete steps of obtaining the relative three-dimensional coordinates of the matched points by the forward intersection measuring method are as follows: first compute the angular orientation elements and the baseline components (B_X, B_Y, B_Z); compute the rotation matrices of the left and right photos in the photogrammetric coordinate system; compute the coordinates (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) of the image points in the image-space auxiliary coordinate system; compute the projection coefficients N_1 and N_2; and compute the three-dimensional coordinates (X, Y, Z) of the model point. The formulas for the model point's three-dimensional coordinates in the image-space auxiliary coordinate system are:

$$X = B_X + N_2 X_2, \qquad Y = \tfrac{1}{2}\big(N_1 Y_1 + N_2 Y_2 + B_Y\big), \qquad Z = B_Z + N_2 Z_2$$

Obtaining the three-dimensional coordinates of a large number of model points in this way yields the target's three-dimensional profile.
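For illustration, the point-by-point intersection could be coded as follows, assuming the image-point coordinates have already been rotated into the image-space auxiliary coordinate system (a hypothetical function using the N_1, N_2 and coordinate formulas above):

import numpy as np

def forward_intersection(B, p1, p2):
    # B = (B_X, B_Y, B_Z): baseline components;
    # p1 = (X1, Y1, Z1), p2 = (X2, Y2, Z2): image-point coordinates in the
    # image-space auxiliary coordinate system. Returns the model point.
    BX, BY, BZ = B
    X1, Y1, Z1 = p1
    X2, Y2, Z2 = p2
    den = X1 * Z2 - X2 * Z1
    N1 = (BX * Z2 - BZ * X2) / den            # projection coefficient N_1
    N2 = (BX * Z1 - BZ * X1) / den            # projection coefficient N_2
    X = BX + N2 * X2
    Y = 0.5 * (N1 * Y1 + N2 * Y2 + BY)
    Z = BZ + N2 * Z2
    return np.array([X, Y, Z])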
In step 3), the method of reconstructing the high-resolution image by regularization is as follows:

$$f^{*} = \arg\min_{f}\; \lVert g - A f \rVert^{2} + \lambda\,\Omega(f)$$

where Ω(f) is the regularization term, Ω is called the regularizing operator, f is the high-resolution image to be reconstructed, A is the degradation operator, g is the image observed by the microlens array, and λ is called the regularization parameter; solving this problem reconstructs the high-resolution image.
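A minimal sketch of one way to solve this problem, using Tikhonov regularization Ω(f) = ||f||² and plain gradient descent as stand-in choices; the patent does not fix a particular regularizer or solver:

import numpy as np

def reconstruct_hr(A, g, lam=0.01, step=1e-3, iters=500):
    # Minimize ||g - A f||^2 + lam * ||f||^2 by gradient descent.
    # A: degradation operator (a dense matrix here for clarity),
    # g: stacked observed microlens-array image, f: reconstructed image.
    # Gradient of the objective: 2 A^T (A f - g) + 2 lam f.
    f = A.T @ g                                  # simple initialization
    for _ in range(iters):
        grad = 2 * A.T @ (A @ f - g) + 2 * lam * f
        f = f - step * grad
    return f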
In step 4), since the GAC model relies mainly on the edge-stopping function g to segment the image, the construction of g directly affects the segmentation result. The present invention therefore proposes a GAC gradient-flow equation based on the texture gradient, as follows:

$$\frac{\partial u}{\partial t} = \mu\left[\Delta u - \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right)\right] + \delta(u)\left[\operatorname{div}\!\left(g(TG)\,\frac{\nabla u}{|\nabla u|}\right) + c\,g\right]$$

where g is any monotonically decreasing non-negative function, δ(x) can be expressed as the derivative of H(x), μ and c are constants, and div is the divergence operator.
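For illustration, a compact Python sketch of evolving this level-set equation on a grid, assuming the texture-gradient map TG is precomputed and taking g(TG) = 1/(1 + TG) as one admissible monotonically decreasing non-negative edge-stopping function:

import numpy as np

def evolve_gac(u, TG, mu=0.2, c=1.5, dt=0.1, iters=200, eps=1.5):
    # Texture-gradient GAC level-set evolution (sketch).
    # u: level-set function; TG: texture-gradient magnitude map;
    # delta is a smoothed Dirac delta of width eps.
    g = 1.0 / (1.0 + TG)
    gy, gx = np.gradient(g)
    for _ in range(iters):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2) + 1e-8
        nx, ny = ux / mag, uy / mag                  # unit normal grad(u)/|grad(u)|
        curvature = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)  # Laplacian, periodic borders
        delta = (eps / np.pi) / (eps ** 2 + u ** 2)  # smoothed Dirac delta
        div_gn = gx * nx + gy * ny + g * curvature   # div(g * grad(u)/|grad(u)|)
        u = u + dt * (mu * (lap - curvature) + delta * (div_gn + c * g))
    return u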
The present invention also provides a 3-3-2 dimension object detection system based on bionic compound eye microlens technology, comprising a microlens system, a control system and a target detection output system.
The microlens system comprises two symmetrical microlens devices; each microlens device obtains microlens array images through its microlens array, from which the low-resolution and high-resolution images are then reconstructed.
The control system comprises a DSP master-control core unit, an FPGA logic control unit and an image processing unit.
The DSP master-control core unit performs image information processing and storage.
The FPGA logic control unit controls signal acquisition of the microlens system and supplies data to the DSP master-control core unit for processing.
The image processing unit performs, on the above microlens array images, the low-resolution capture of the target's three-dimensional profile and the high-resolution two-dimensional staring imaging, and accurately detects the target in the high-resolution image.
The target detection output system outputs the object detection results.
Further, the image processing unit comprises a feature point extraction and matching module, a relative orientation module, a forward intersection module and a GAC model segmentation module:
the feature point extraction and matching module extracts and matches feature points on the obtained low-resolution images;
the relative orientation module screens some corresponding points among the matches and performs relative orientation;
the forward intersection module obtains the relative three-dimensional coordinates of the matched points by the forward intersection measuring method;
the GAC model segmentation module performs segmentation and recognition of the target with the texture-gradient-based GAC model.
By adopting the above technical solutions, the present invention has the following advantages:
1. The invention proposes a new 3-3-2 dimension data acquisition and processing model based on bionic compound eye microlens technology. The two-level-resolution method, which imitates the insect compound eye with microlens technology, performs low-resolution 3D capture and high-resolution 2D staring imaging of the objective target. The microlens system first images the target region, and the corresponding image processing algorithms roughly detect the target's three-dimensional profile; after the moving target is preliminarily captured, the regularization method reconstructs a high-resolution image of the target region, from which the target is then accurately recognized. By adding the step of capturing the target's three-dimensional profile from a low-resolution image, the system can judge more efficiently whether a target is present in the target region, effectively avoiding meaningless processing of redundant images and improving the system's real-time processing efficiency.
2. The invention reconstructs the low-resolution image from the microlens array images by the linear weighted average method; reconstructs the target's three-dimensional profile from the low-resolution images by feature extraction and matching, relative orientation and forward intersection; reconstructs the high-resolution image of the target region from the microlens array images by the regularization method; and performs target segmentation and recognition with the texture-gradient-based GAC model, so the target can be identified effectively.
3. The invention is the first to combine microlens technology, photogrammetry, computer vision and the theory of insect bionic compound eyes, and proposes a set of object detection methods based on insect bionic compound eye images; the invention is pioneering and practical, and can be widely applied to target detection.
Brief description of the drawings
Fig. 1 is a structural diagram of the microlens system of the present invention, wherein 1 and 2 are the microlens devices.
Fig. 2 is an imaging schematic of a single microlens device of the present invention, wherein 3 is the head prism and 4 is the microlens array.
Fig. 3 is a flow chart of the object detection method of the present invention.
Fig. 4 is a schematic diagram of relative orientation in the present invention.
Fig. 5 is a schematic diagram of forward intersection in the present invention.
Fig. 6 is a schematic diagram of the target's three-dimensional profile in the present invention, where (a) shows the raw data and (b) the point cloud.
Detailed description
The present invention is described in detail below with reference to the drawings and embodiments.
Current object detection methods use an imaging sensor, such as a camera, to obtain two-dimensional images of the objective target in space, and then process the obtained images with a computer. The detection field of view of traditional target detection and recognition methods is small, and the image data acquired for all regions are at the same resolution. The more images the camera takes and the higher the imaging resolution, the larger the acquired data volume and the more information it contains; the computer therefore needs longer for automatic processing and efficiency drops. Moreover, because the target's position is uncertain, the target is not always effectively captured, so the high-resolution imagery contains a large number of redundant images. In short, traditional object detection methods cannot go from reconnaissance to scrutiny; they process a large amount of redundant data meaninglessly, so real-time detection efficiency is low.
As shown in Fig. 1, the microlens system designed in the present invention comprises left and right microlens devices 1 and 2. The internal structure and imaging principle of microlens device 1 are shown in Fig. 2. A single microlens device comprises a head prism 3, a microlens array 4 and a photoelectric sensor (not shown). The head prism mainly controls the field of view and the light flux over the target recognition region; light entering the head prism's field of view is re-imaged by the microlens array, and the photoelectric sensor converts the imaging signal into an electrical signal. When a microlens device images the target, the microlens array obtains a microlens array image of the target region. On this basis, the invention obtains the low-resolution image of the target from the array image by the linear weighted average method, and reconstructs the high-resolution image by the regularization method.
Based on the above understanding, the present invention proposes a 3-3-2 dimension object detection method based on bionic compound eye microlens technology, which can effectively avoid meaningless processing of redundant images and improve the system's real-time processing efficiency.
Fig. 3 illustrates the implementation process of the method of the present invention, which comprises the following steps:
1) Imaging. The microlens system images the target region and obtains the microlens array images of the target region. Each unit image in a microlens array is processed by the linear weighted average method to generate the low-resolution image of the target region. Because the two microlens devices in the apparatus are fixed, the measured distance between the two microlens devices can serve as the initial value for the relative orientation in step 3).
2) Feature point extraction and matching. Feature points are extracted and matched on the obtained low-resolution images. The feature matching algorithm can handle matching between two images under translation, rotation and affine transformation, and has a strong matching capability. The features it extracts are local features of the image, which remain stable under translation, rotation, scaling, brightness change, viewpoint change, affine transformation and so on. The algorithm uses the local image gradient features of each feature point to determine its principal direction, with the following formulas:

$$m(x,y) = \sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2 + \big(L(x,y+1)-L(x,y-1)\big)^2}, \qquad \theta(x,y) = \arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}$$

where m(x, y) and θ(x, y) are the gradient magnitude and direction of the Gaussian pyramid image at the current scale at (x, y), and L(x, y) is the gray value of the Gaussian pyramid image at the current scale at (x, y).
3) Relative orientation. As shown in Fig. 4, when a stereo pair of the same ground area is taken from two camera stations S_1 and S_2, the two corresponding image rays of any object point in the stereo pair intersect at that object point; that is, corresponding rays intersect in pairs. If the relative position and attitude between the two photos are kept unchanged while the photos are translated as a whole, rotated, or the baseline length is changed, this pairwise intersection property of corresponding rays does not change. According to this geometric relationship inherent in the stereo pair, the elements of relative orientation are obtained by analytical computation from the measured coordinates of the image points m_1 and m_2: a number of corresponding points chosen in the overlap region of the two images are used for relative orientation, which determines the relative spatial position of the two microlens devices, i.e. computes the elements of relative orientation of the two photos, where (φ, ω, κ) are the angular elements of the second photo relative to the first and (μ, ν) are the yaw angle and pitch angle of the baseline. The equation for solving the elements of relative orientation is:

$$Q = N_1 Y_1 - N_2 Y_2 - B_Y, \qquad N_1 = \frac{B_X Z_2 - B_Z X_2}{X_1 Z_2 - X_2 Z_1}, \qquad N_2 = \frac{B_X Z_1 - B_Z X_1}{X_1 Z_2 - X_2 Z_1}$$

where Q is the vertical parallax, N_1 and N_2 are projection coefficients, and (X_1, Y_1, Z_1), (X_2, Y_2, Z_2) are the coordinates of the image points in the image-space auxiliary coordinate system. Applying the principle of least squares, an error equation is set up for each corresponding point,

$$v_n = a_n\,d\varphi + b_n\,d\omega + c_n\,d\kappa + d_n\,d\mu + e_n\,d\nu - l_n$$

and the elements of relative orientation are obtained by indirect adjustment.
4) Forward intersection. As shown in Fig. 5, the forward intersection method of photogrammetry is adopted: the interior orientation elements of the photos, the relative orientation elements of the stereo pair and the coordinates of corresponding image points are used to compute the relative three-dimensional coordinates of the model; the three-dimensional coordinates of a large number of corresponding points determine the target's three-dimensional profile. The concrete steps are as follows: first compute the angular orientation elements and the baseline components (B_X, B_Y, B_Z); compute the direction cosines of the rotation matrices of the left and right photos in the photogrammetric coordinate system; compute the coordinates (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) of the image points in the image-space auxiliary coordinate system; compute the projection coefficients N_1 and N_2; and compute the three-dimensional coordinates (X, Y, Z) of the model point. The formulas for the model point's three-dimensional coordinates in the image-space auxiliary coordinate system are:

$$X = B_X + N_2 X_2, \qquad Y = \tfrac{1}{2}\big(N_1 Y_1 + N_2 Y_2 + B_Y\big), \qquad Z = B_Z + N_2 Z_2$$

Obtaining the three-dimensional coordinates of a large number of model points in this way yields the target's three-dimensional profile. Fig. 6 shows the three-dimensional profile of a target obtained by this method.
5) High-resolution image reconstruction. From the three-dimensional profile generated in step 4), judge whether the microlens system has effectively captured the target. If it has, reconstruct a high-resolution image from the microlens array images by the regularization method, as follows. Suppose the degradation process of the image is

$$A f = g$$

where f is the high-resolution image to be reconstructed, A is the degradation operator, and g is the image observed by the microlens array. Solving this problem by least squares gives

$$f^{*} = \arg\min_{f}\; \lVert g - A f \rVert^{2}$$

Because the solution of this inverse problem is not unique, extra prior information must be added to obtain a solution that approaches the original image. The regularization method converts the original optimization problem into

$$f^{*} = \arg\min_{f}\; \lVert g - A f \rVert^{2} + \lambda\,\Omega(f)$$

where Ω(f) is the regularization term, Ω is called the regularizing operator, and λ is called the regularization parameter; the solution of the original problem is approached through a well-posed problem adjacent to it. In this way the high-resolution image can be reconstructed.
6) Accurate detection of the target in the high-resolution image with the texture gradient GAC model. First, compute the minimum region containing the three-dimensional profile. Second, solve the texture gradient on the i-th scale of the image by the Gaussian derivative approximation method; the gradient magnitude at each scale and direction is

$$TG_{i,\theta}(x,y) = \sqrt{\big(M_{i,\theta}(x,y) * G'_x(x,y)\big)^2 + \big(M_{i,\theta}(x,y) * G'_y(x,y)\big)^2}$$

where G'_x and G'_y are the partial derivatives of the Gaussian function in the x and y directions, and M_{i,θ}(x, y) is obtained by a two-input, two-output system transformation. Third, set an initial contour in the minimum region determined in the first step and set the iteration conditions; the contour shrinks step by step until the moving target is finally detected. The initial setting of the contour is

$$E(u) = \mu \iint_{\Omega} \tfrac{1}{2}\big(|\nabla u| - 1\big)^2\, dx\, dy + \iint_{\Omega} g(x,y)\,\delta(u)\,|\nabla u|\, dx\, dy + c \iint_{\Omega} \big[1 - H(u)\big]\, g\, dx\, dy$$

where E is the energy functional to be minimized over the closed curve C(p); when the energy functional reaches its minimum, the corresponding curve is exactly the segmentation boundary. Since the GAC model relies mainly on the edge-stopping function g to segment the image, and the construction of g directly affects the segmentation result, a GAC gradient-flow equation based on the texture gradient is proposed here:

$$\frac{\partial u}{\partial t} = \mu\left[\Delta u - \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right)\right] + \delta(u)\left[\operatorname{div}\!\left(g(TG)\,\frac{\nabla u}{|\nabla u|}\right) + c\,g\right]$$

where g is any monotonically decreasing non-negative function, δ(x) can be expressed as the derivative of H(x), μ and c are constants, and div is the divergence operator.
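For illustration, the per-scale, per-orientation texture gradient could be computed as below, assuming the orientation-energy map M_{i,θ} has already been produced (for example by a quadrature filter pair); a mixed-order Gaussian filter implements the convolution with the Gaussian partial derivatives:

import numpy as np
from scipy.ndimage import gaussian_filter

def texture_gradient(M, sigma=2.0):
    # TG = sqrt((M * G'_x)^2 + (M * G'_y)^2) for one scale i and direction
    # theta, where M is the orientation-energy map M_{i,theta}(x, y) and the
    # convolutions with G'_x, G'_y are done with first-order
    # Gaussian-derivative filtering.
    gx = gaussian_filter(M, sigma, order=(0, 1))  # M convolved with dG/dx
    gy = gaussian_filter(M, sigma, order=(1, 0))  # M convolved with dG/dy
    return np.sqrt(gx ** 2 + gy ** 2)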
The above embodiments are only for illustrating the present invention; any equivalent substitutions and improvements made on the basis of the technical solutions of the present invention shall not be excluded from the protection scope of the present invention.

Claims (10)

1. A 3-3-2 dimension object detection method based on bionic compound eye microlens technology, comprising the following steps:
1) using the microlens system based on the bionic compound eye structure as the imaging system, capturing and imaging the target region, and reconstructing a low-resolution image from the captured microlens array images by the linear weighted average method;
2) based on the low-resolution image reconstructed in step 1), computing the three-dimensional coordinates of target points by the forward intersection measuring method, thereby performing low-resolution capture of the target's three-dimensional profile;
3) if the target has been effectively captured, reconstructing a high-resolution image by a regularization method with the microlens array images from step 1) as base data, and performing high-resolution two-dimensional staring imaging of the target region; otherwise moving the microlens system and returning to step 1);
4) after the high-resolution two-dimensional image of the target region is obtained, accurately recognizing the target with the texture-gradient-based GAC model to complete target detection.
2. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 1, characterized in that, in step 1), the linear weighted average method is as follows:

$$\bar{g} = p_1 g_1 + p_2 g_2 + \cdots + p_m g_m, \qquad p_i = \frac{n_i}{n_1 + n_2 + \cdots + n_m} \quad (1 \le i \le m)$$

where the gray values occurring in a unit image are sorted from small to large and numbered 1 to m, g_i is the gray value numbered i, p_i is the weight corresponding to gray value g_i, and n_i is the number of pixels whose gray value is g_i.
3. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 1, characterized in that step 2) specifically comprises: ① extracting and matching feature points on the images; ② screening some corresponding points among the matches and performing relative orientation to determine the relative attitude information between the microlens devices; ③ obtaining, according to the relative attitude information between the microlens devices, the relative three-dimensional coordinates of the matched points by the forward intersection measuring method, thereby obtaining the target's three-dimensional profile.
4. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 3, characterized in that the local image gradient features of each feature point are used to determine its principal direction, with the following formulas:

$$m(x,y) = \sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2 + \big(L(x,y+1)-L(x,y-1)\big)^2}, \qquad \theta(x,y) = \arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}$$

where (x, y) are the coordinates of the feature point, m(x, y) and θ(x, y) are the gradient magnitude and direction of the Gaussian pyramid image at the current scale at (x, y), and L(x, y) is the gray value of the Gaussian pyramid image at the current scale at (x, y).
5. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 3, characterized in that the equation for determining the relative attitude information between the two microlens devices is

$$Q = N_1 Y_1 - N_2 Y_2 - B_Y, \qquad N_1 = \frac{B_X Z_2 - B_Z X_2}{X_1 Z_2 - X_2 Z_1}, \qquad N_2 = \frac{B_X Z_1 - B_Z X_1}{X_1 Z_2 - X_2 Z_1}$$

where Q is the vertical parallax, N_1 and N_2 are projection coefficients, (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) are the coordinates of the image points in the image-space auxiliary coordinate system, B_X, B_Y and B_Z are the projections of the photographic baseline on the X, Y and Z axes, and d denotes differentiation; applying the principle of least squares, an error equation is set up for each corresponding point n:

$$v_n = a_n\,d\varphi + b_n\,d\omega + c_n\,d\kappa + d_n\,d\mu + e_n\,d\nu - l_n$$

where l is the free term of the indirect adjustment, a_n, b_n, c_n, d_n, e_n are the coefficients of the error equation, and v_n is its residual; the elements of relative orientation, namely the angular elements (φ, ω, κ) of the second photo relative to the first and the yaw angle and pitch angle (μ, ν) of the baseline, are obtained by indirect adjustment.
6. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 3, characterized in that the concrete steps of obtaining the relative three-dimensional coordinates of the matched points by the forward intersection measuring method are as follows: first compute the angular orientation elements and the baseline components (B_X, B_Y, B_Z); compute the rotation matrices of the left and right photos in the photogrammetric coordinate system; compute the coordinates (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) of the image points in the image-space auxiliary coordinate system; compute the projection coefficients N_1 and N_2; and compute the three-dimensional coordinates (X, Y, Z) of the model point by

$$X = B_X + N_2 X_2, \qquad Y = \tfrac{1}{2}\big(N_1 Y_1 + N_2 Y_2 + B_Y\big), \qquad Z = B_Z + N_2 Z_2$$

the three-dimensional coordinates of a large number of model points obtained in this way constituting the target's three-dimensional profile.
7. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 1, characterized in that, in step 3), the high-resolution image is reconstructed by the regularization method

$$f^{*} = \arg\min_{f}\; \lVert g - A f \rVert^{2} + \lambda\,\Omega(f)$$

where Ω(f) is the regularization term, Ω is called the regularizing operator, f is the high-resolution image to be reconstructed, A is the degradation operator, g is the image observed by the microlens array, and λ is called the regularization parameter.
8. The 3-3-2 dimension object detection method based on bionic compound eye microlens technology of claim 1, characterized in that, in step 4), the target is accurately recognized with the texture-gradient-based GAC model, whose gradient-flow equation is

$$\frac{\partial u}{\partial t} = \mu\left[\Delta u - \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right)\right] + \delta(u)\left[\operatorname{div}\!\left(g(TG)\,\frac{\nabla u}{|\nabla u|}\right) + c\,g\right]$$

where g is any monotonically decreasing non-negative function, δ(x) can be expressed as the derivative of H(x), μ and c are constants, and div is the divergence operator.
9. A 3-3-2 dimension object detection system based on bionic compound eye microlens technology, comprising a microlens system, a control system and a target detection output system, wherein:
the microlens system comprises two symmetrical microlens devices, each of which obtains microlens array images through its microlens array, from which the low-resolution and high-resolution images are then reconstructed;
the control system comprises a DSP master-control core unit, an FPGA logic control unit and an image processing unit;
the DSP master-control core unit performs image information processing and storage;
the FPGA logic control unit controls signal acquisition of the microlens system and supplies data to the DSP master-control core unit for processing;
the image processing unit performs, on the above microlens array images, the low-resolution capture of the target's three-dimensional profile and the high-resolution two-dimensional staring imaging, and accurately detects the target in the high-resolution image;
the target detection output system outputs the object detection results.
10. The 3-3-2 dimension object detection system based on bionic compound eye microlens technology of claim 9, characterized in that the image processing unit comprises a feature point extraction and matching module, a relative orientation module, a forward intersection module and a GAC model segmentation module:
the feature point extraction and matching module extracts and matches feature points on the obtained low-resolution images;
the relative orientation module screens some corresponding points among the matches and performs relative orientation;
the forward intersection module obtains the relative three-dimensional coordinates of the matched points by the forward intersection measuring method;
the GAC model segmentation module performs segmentation and recognition of the target with the texture-gradient-based GAC model.

Priority Applications (1)

CN201510732346.5A (priority date 2015-11-02, filing date 2015-11-02): 3-3-2 dimension object detection method and system based on bionic compound eye microlens technology


Publications (2)

CN105352482A (application publication): 2016-02-24
CN105352482B (granted publication): 2017-12-26

Family

ID=55328492

Family Applications (1)

CN201510732346.5A (Active, filed 2015-11-02): 3-3-2 dimension object detection method and system based on bionic compound eye microlens technology, granted as CN105352482B

Country Status (1)

CN: CN105352482B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002013295A (en) * 2000-06-30 2002-01-18 Nikken Atorasu Kobe:Kk Method for creating investigation data on deformed part in building
CN1932841A (en) * 2005-10-28 2007-03-21 南京航空航天大学 Petoscope based on bionic oculus and method thereof
CN102572220A (en) * 2012-02-28 2012-07-11 北京大学 Bionic compound eye moving object detection method adopting new 3-2-2 spatial information conversion model
CN103295221A (en) * 2013-01-31 2013-09-11 河海大学 Water surface target motion detecting method simulating compound eye visual mechanism and polarization imaging
CN103325088A (en) * 2013-07-04 2013-09-25 中国科学院光电技术研究所 Multichannel image processing method for curved compound eye imaging system


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BO-REN L,LEI Y,HUI-LI L: "A Method For Moving Target Capture Using 3D Profile Information Based On Bionic Compound Eye", 《2012 INTERNATIONAL CONFERENCE ON IEEE》 *
SUN H,TONG H, LIANG R: "Imaging Mechanism of Moving Object Detection", 《2010 18TH INTERNATIONAL CONFERENCE ON.IEEE》 *
SUN H,ZHAO H,MOONEY P: "A Novel System for Moving Object Detection Using Bionic Compound Eyes", 《JOURNAL OF BIONIC ENGINEERING》 *
ZHANG Z,YAN L,SUN H: "A Synchronous Imaging System for Moving-target Detection With Bionic Compound eyes", 《IMAGING AND SIGNAL PROCESSING》 *
BAIDU WENKU: "Bionic compound eye moving target detection based on the 3-3-2 remote sensing information processing mode", 《DOC88, HTTP://WWW.DOC88.COM/P-0347153926975.HTML》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107687818A (en) * 2016-08-04 2018-02-13 纬创资通股份有限公司 Three-dimensional measurement method and three-dimensional measurement device
CN107687818B (en) * 2016-08-04 2020-07-10 纬创资通股份有限公司 Three-dimensional measurement method and three-dimensional measurement device
CN106339603A (en) * 2016-09-08 2017-01-18 武汉大学 Relative orientation method based on axial angle vector
CN106339603B (en) * 2016-09-08 2019-03-19 武汉大学 A kind of relative orientation method based on shaft angle vector
CN110645956A (en) * 2019-09-24 2020-01-03 南通大学 Multi-channel visual ranging method imitating stereoscopic vision of insect compound eye
CN110645956B (en) * 2019-09-24 2021-07-02 南通大学 Multi-channel visual ranging method imitating stereoscopic vision of insect compound eye
CN110989646A (en) * 2019-12-02 2020-04-10 西安欧意特科技有限责任公司 Compound eye imaging principle-based target space attitude processing system

Also Published As

CN105352482B (en): 2017-12-26


Legal Events

C06: Publication
PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
GR01: Patent grant