CN105488780A - Monocular vision ranging tracking device used for industrial production line, and tracking method thereof - Google Patents


Info

Publication number
CN105488780A
Authority
CN
China
Prior art keywords
image
information
module
degenerate
monocular vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510132219.1A
Other languages
Chinese (zh)
Inventor
魏洪兴
黄真
董芹鹏
邵宇秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ao Bo (beijing) Technology Co Ltd
Original Assignee
Ao Bo (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ao Bo (beijing) Technology Co Ltd filed Critical Ao Bo (beijing) Technology Co Ltd
Priority application: CN201510132219.1A
Publication: CN105488780A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a monocular vision ranging and tracking device for an industrial production line, and a tracking method thereof. The device comprises an image sensing unit, a control and correction unit, a graphical user interface, and a data storage module. The method comprises the following steps: 1) acquire a defocus-blurred image with the image acquisition module and establish a mathematical model; 2) apply a cepstrum algorithm to the mathematical model to separate the true input image information from the information of the defocus degradation kernel function; 3) analyse the defocus degradation kernel information to quantitatively obtain the point spread radius; and 4) from the point spread radius, obtain the distance information of the target object and combine it with the target object's 2D (two-dimensional) positioning information to finally obtain the 3D (three-dimensional) positioning coordinates of the target object. The device has the advantage that only one mid-range industrial camera is used and no initial calibration or marker points are required, so system cost is greatly reduced.

Description

Monocular vision ranging and tracking device for an industrial production line, and tracking method thereof
Technical field
The invention belongs to the field of industrial robot technology, and specifically relates to a monocular vision ranging and tracking device for an industrial production line and a tracking method thereof.
Background art
In recent years, as the functional structure of the brain's visual cortex has become better understood, computer vision, as an emerging discipline, has received increasing attention and has developed considerably. Vision measurement and 3D motion positioning and tracking technology are widely used in many fields, for example: motion tracking based on positioning control in the entertainment and gaming industry; identification and capture of target objects in the defence industry, which benefits from real-time visual positioning technology; simulation of human-machine interactive operation in the medical industry; and the use of vision to control robot systems in artificial intelligence.
At present, most ranging and positioning systems adopt binocular vision, based on binocular stereo vision theory. This theory is built on research into the human visual system and obtains scene information through binocular parallax. Marr, Poggio, and Grimson were the first to propose and implement a computational vision model and algorithm based on the human visual system. Although binocular vision is the three-dimensional reconstruction method closest to human vision, traditional binocular positioning systems in fact suffer from several problems, such as occlusion blind spots, a complex feature point matching process, and the need for high-precision cooperation between the two cameras.
Summary of the invention
The object of the present invention is to overcome the problems of existing binocular ranging and tracking systems, namely occlusion blind spots, the complex feature point matching required when executing the positioning algorithm, and the high-precision binocular cooperation required in control operations, by proposing a monocular vision ranging and tracking device for an industrial production line and a tracking method thereof.
A monocular vision ranging and tracking device for an industrial production line comprises an image sensing unit, a control and correction unit, a graphical user interface, and a data storage module.
The image sensing unit comprises an image capture module and an image preprocessing module and is responsible for image acquisition.
The image capture module is the hardware acquisition part of the image sensing unit; it acquires images of the target object and outputs the collected original image information to the image preprocessing module. The image preprocessing module is the software algorithm part; it applies image preprocessing algorithms to the original image information, including illumination normalization, white balance, exposure control, and image convolution, and outputs the processed image information to the control and correction unit.
The control and correction unit comprises a computer controller and an execution terminal module; it receives the image information output by the image sensing unit and, according to the corresponding user instructions, controls the execution terminal module to perform the corresponding operations.
The computer controller receives the image information output by the image preprocessing module and the user instructions obtained from the graphical user interface; according to the instructions, it performs image processing and data extraction and mining operations, generates control signals from the image processing data, and outputs them to the execution terminal module in the form of a control stream. In addition, the computer controller stores the image processing data in the corresponding data storage module according to the user instructions.
The execution terminal module is connected to the image capture module and receives the image processing data output by the computer controller for correction and adjustment.
The graphical user interface passes user instructions to the computer controller and at the same time receives from the computer controller the image processing commands and operation commands that the user needs to carry out.
The data storage module stores the image processing data output by the computer controller.
A monocular vision ranging and tracking method for an industrial production line comprises the following steps:
Step one: the graphical user interface outputs user instructions to the computer controller, which controls the image capture module on the execution terminal module to acquire images of the target object;
Step two: the image preprocessing module applies image preprocessing algorithms to the acquired images and outputs the image information to the computer controller;
Step three: the computer controller calculates the distance between the target object and the execution terminal module through image processing and data extraction and mining operations, thereby realizing monocular vision ranging and tracking.
The image processing and data extraction and mining method performed by the computer controller comprises the following steps:
Step 1: obtain a defocus-blurred image from the image capture module and establish a mathematical model.
The mathematical model of the formation of the defocus-blurred image is:
f(ε,η)*h(x,y)+n(x,y)=g(x,y)
where f(ε,η) is the true input image obtained on the focusing plane, h(x,y) is the defocus degradation kernel function, n(x,y) is environmental noise, g(x,y) is the actually output defocus-blurred image, and * denotes two-dimensional convolution.
The defocus degradation kernel adopts the PSF (point spread function) model, i.e. a circular point spread degradation function, where R is the point spread radius of the degradation kernel, which quantitatively describes the blur level of the defocused image, and x and y are the two-dimensional Cartesian coordinates of the corresponding diffusion point in the image.
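The formula of the circular point spread function is not reproduced in this text. A standard pillbox form consistent with the description above (uniform response inside a disc of radius R, zero outside, normalized to unit volume) would be, as an assumption:
h(x,y) = 1/(πR²) for x² + y² ≤ R², and h(x,y) = 0 otherwise.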
Step 2: apply the cepstrum algorithm to the mathematical model to separate the true input image information from the information of the defocus degradation kernel function.
Step 201: represent the actually output defocus-blurred image data as the convolution of the true input image data with the defocus degradation kernel function:
g(x,y) = f(x,y) * h(x,y)
Step 202: using the Fourier transform, convert the convolution of step 201 into a multiplication in the frequency domain:
DFT(f(x,y) * h(x,y)) = DFT(f(x,y)) · DFT(h(x,y))
Step 203: using the properties of the logarithm, convert the frequency-domain product of step 202 into a sum:
log(DFT(f(x,y))) + log(DFT(h(x,y)))
Step 204: in this additive form, the true input image information and the information of the defocus degradation kernel function are separated.
Step 205: apply the inverse Fourier transform to the result of step 204, completing the cepstrum algorithm for the image.
Applying the inverse Fourier transform to the log-spectrum result of step 204 completes the cepstrum algorithm and converts the data into the cepstrum domain, where its characteristic information is analysed.
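The separation described in steps 201 to 205 can be summarized, ignoring the noise term as the steps above do, by the standard cepstral identity:
IDFT(log(DFT(g(x,y)))) = IDFT(log(DFT(f(x,y)))) + IDFT(log(DFT(h(x,y))))
That is, the cepstrum of the blurred image is the sum of the cepstrum of the sharp image and the cepstrum of the defocus kernel, so the structure contributed by h(x,y) appears additively and can be inspected on its own.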
Step 3: from the defocus degradation kernel information separated in step 2, quantitatively obtain the point spread radius, which describes the blur level of the defocused image.
Step 301: display the data obtained by the cepstrum algorithm in step 2 in three dimensions.
Step 302: from the envelope of the 3D view projected onto a 2D plane, obtain the radius of the annular trough nearest to the centre point.
Step 303: the obtained annular trough radius is proportional to the point spread radius r.
Step 4: combine the distance information of the target object obtained in step 3 with the 2D positioning of the target object to finally obtain the 3D positioning coordinates of the target object.
The invention has the following advantages:
(1) The monocular vision ranging and tracking method for an industrial production line requires no initial calibration and no marker points; it only requires the target object to be an opaque body of uniform colour.
(2) The monocular vision ranging and tracking device for an industrial production line uses only a single mid-range industrial camera, which greatly reduces hardware cost and eliminates the shortcomings of the traditional binocular vision ranging and positioning devices described in the background section, so ranging and tracking can be carried out simply and conveniently.
(3) The monocular vision ranging and tracking method for an industrial production line effectively removes the influence of environmental factors such as illumination and micro-vibration on the system.
Description of the drawings
Fig. 1 is a schematic diagram of the monocular vision ranging and tracking device for an industrial production line according to the present invention;
Fig. 2 is a schematic diagram of the image sensing unit in the monocular vision ranging and tracking device of the present invention;
Fig. 3 is a schematic diagram of the mathematical model of the defocus-blurred image of the present invention;
Fig. 4 is a schematic diagram of the quantitative calculation of the distance information of the captured object in the present invention;
Fig. 5 is a schematic diagram of the monocular vision ranging and tracking of the present invention;
Fig. 6 is a flow chart of the monocular vision ranging and tracking method for an industrial production line according to the present invention;
Fig. 7 is a flow chart of the image processing and data extraction and mining method performed by the computer controller of the present invention;
Fig. 8 is a flow chart of the 2D positioning algorithm in the monocular vision ranging and tracking method of the present invention.
Detailed description of embodiments
The specific embodiment of the present invention is further described below with reference to the accompanying drawings.
A monocular vision ranging and tracking device for an industrial production line is applied to ranging and tracking of industrial robot motion operations on an industrial production line. As shown in Fig. 1, it comprises an image sensing unit, a control and correction unit, a graphical user interface, and a data storage module.
The image sensing unit comprises an image capture module and an image preprocessing module and is responsible for image acquisition.
The image capture module is the hardware acquisition part of the image sensing unit; it uses an image capture device to acquire images of the target object and outputs the collected original image information through the image data bus to the image preprocessing module for the next image preprocessing step.
The image capture device is preferably a camera, video camera, or network (IP) camera. In this embodiment, a megapixel-class industrial fixed-focus camera with image distortion calibration is adopted. The industrial fixed-focus camera uses a fixed-focus lens whose parameters are listed in Table 1; the processor runs a Linux 3.0 kernel or later, the control operating system is Ubuntu 11.04 or later, and the camera provides a GigE gigabit Ethernet interface.
Table 1. Fixed-focus lens parameters of the industrial fixed-focus camera
Parameter            Value
Focal length         12.00 mm
Aperture             F1.4-16C
Hardware interface   GigE gigabit Ethernet
Image resolution     1024 × 1280
The image preprocessing module is the software algorithm part of the image sensing unit.
Any existing image capture device needs a supporting software development kit (SDK). The vision software package supplied with the capture device generally includes the following modules: image preprocessing; image processing; character recognition; data extraction; image resource management; display functions; and other functions.
As shown in Fig. 2, the image capture module uses the fixed-focus lens and GigE gigabit Ethernet interface of the industrial fixed-focus camera to acquire images of the target object and transfer them to the computer controller. The image preprocessing module uses the image preprocessing algorithms in the SDK (software development kit) supplied with the industrial fixed-focus camera to process the original image information output by the image capture module, including setting the image capture mode, initializing image quality, and performing preliminary simple image preprocessing. The simple image preprocessing comprises illumination normalization, white balance, exposure control, and image convolution.
Illumination normalization: removes the influence of different illumination conditions on subsequent image processing.
White balance: senses the lighting conditions around the image acquisition environment, adjusts the colour balance, and corrects the image signal output by the image capture module.
Exposure control: adjusts the overall image brightness.
Image convolution: performs edge processing on the image.
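The text does not specify which convolution kernel is used for the edge processing. Purely as an illustration, a minimal OpenCV sketch using a Laplacian edge kernel (the kernel choice is an assumption) could look like this:

    import cv2
    import numpy as np

    def edge_convolve(gray_image: np.ndarray) -> np.ndarray:
        # 3x3 Laplacian kernel, a common choice for edge enhancement
        # (illustrative only; the patent does not name a specific kernel).
        kernel = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float32)
        # filter2D convolves the image with the kernel.
        return cv2.filter2D(gray_image, cv2.CV_32F, kernel)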
The specific processing steps of the image preprocessing module are as follows.
First, set Gamma correction to reduce the influence of lighting on the quality of the acquired image.
Using the Gamma correction feature in the AnalogControls feature group of the industrial fixed-focus camera, GammaEnable is set so that the camera automatically applies Gamma correction to the image during acquisition and preprocessing.
Second, set automatic white balance, which adjusts the white balance according to the lighting conditions under different environmental conditions.
The white balance feature is set in the AnalogControls feature group of the industrial fixed-focus camera and has two modes, manual and automatic. In manual mode, the desired white balance value is chosen by setting BalanceRatio; in automatic mode, the BalanceWhiteAuto option is selected so that the camera automatically adjusts the white balance while capturing images and performing image preprocessing.
Third, set automatic exposure, which adjusts the optimal exposure time according to the illumination intensity under different environmental conditions.
The AcquisitionControls feature group provided by the industrial fixed-focus camera encapsulates the exposure mode/time/timebase settings and has two modes, manual and automatic. In manual mode, the exposure parameters required by the user are set through options such as ExposureMode/Time/Timebase; in automatic mode, the ExposureAuto option is selected so that the camera automatically sets the optimal exposure parameters according to the environment while capturing images.
Finally, the preprocessed image, with reduced error and noise after the above preprocessing, is output to the next layer, the control and correction unit.
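As an illustration only, since the camera vendor's actual SDK calls are not given in the text, the three settings above could be applied through a GenICam-style feature interface roughly as follows. The CameraFeatures wrapper and its set() method are hypothetical placeholders; only the feature names (GammaEnable, BalanceWhiteAuto, ExposureAuto) come from the description above.

    # Hypothetical sketch: applying the preprocessing settings named in the text.
    # "CameraFeatures" and set() stand in for whatever node-map API the real SDK exposes.

    class CameraFeatures:
        """Toy stand-in for a GenICam-style feature node map (hypothetical)."""
        def __init__(self):
            self._features = {}

        def set(self, name: str, value):
            # A real SDK would validate the feature name and value range here.
            self._features[name] = value

    def configure_preprocessing(cam: CameraFeatures) -> None:
        # Step 1: enable automatic Gamma correction (AnalogControls group).
        cam.set("GammaEnable", True)
        # Step 2: automatic white balance (AnalogControls group);
        # a manual alternative would set "BalanceRatio" instead.
        cam.set("BalanceWhiteAuto", "Continuous")
        # Step 3: automatic exposure (AcquisitionControls group);
        # a manual alternative would set ExposureMode/Time/Timebase.
        cam.set("ExposureAuto", "Continuous")

    configure_preprocessing(CameraFeatures())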
The control and correction unit comprises a computer controller and an execution terminal module; it receives the image information output by the image sensing unit and, according to the corresponding user instructions, controls the execution terminal module to perform the corresponding operations.
The computer controller obtains the image information packets output by the image preprocessing module and receives the user instructions from the graphical user interface. According to the instructions, it performs specific subsequent image processing and data extraction and mining operations on the image information packets, generates the corresponding control signals, and outputs them as a control stream to the execution terminal module over the data bus.
In addition, the computer controller stores the data of interest to the user according to the user instructions and saves them to the corresponding data storage module, so that when needed the user can retrieve the data from the data storage module for inspection and further processing.
The execution terminal module receives the correction and control information output by the computer controller, corrects and adjusts the pose information of the terminal, such as attitude and position, and finally achieves the operation objective specified by the user.
The graphical user interface is an auxiliary component that provides a friendly human-computer interaction interface; it passes user instructions to the computer controller and at the same time receives from the computer controller the image processing commands and operation commands that the user needs to carry out.
The data storage module is an auxiliary component that stores the real-time data output by the computer controller.
A monocular vision ranging and tracking method for an industrial production line is applied to a production line in the industrial control field. The principle of monocular vision ranging and tracking, shown in Fig. 5, is based on the physical optics of a defocused lens imaging system, in which the image capture device is simplified to a fixed-focus lens. Because the distance varies, the target object cannot be focused exactly on the clear focusing plane, so a certain degree of defocus degradation is produced, and the actually acquired output image exhibits a point spread effect whose quantitative expression is the point spread radius. By the thin-lens imaging law:
1/u + 1/v = 1/f
where u is the object distance, i.e. the distance between the target object and the lens; f is the focal length of the lens; and v is the image distance for in-focus imaging.
There is a quantitative relationship between the point spread radius and the object distance u. Taking the case in which the target object images sharply on the focusing plane as the reference, if the actual imaging image distance in front of the clear focusing plane is v1 and the point spread radius is R1, then by the ratio of similar triangles:
R1 / (d/2) = (v - v1) / v
If the actual imaging image distance behind the clear focusing plane is v2 and the point spread radius is R2, then:
R2 / (d/2) = (v2 - v) / v
where d is the lens diameter.
From the point spread radius, the distance u between the target object and the lens is obtained, and ranging and tracking of the captured object is carried out.
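Following the relations above, and assuming for illustration that the sensor sits at the reference image distance v and the object images in front of it (the R1 case), the object distance u can be recovered as sketched below. The numerical values are invented for illustration, except the 12 mm focal length taken from Table 1.

    def object_distance_from_blur(R: float, d: float, v: float, f: float) -> float:
        """Recover the object distance u (same length unit as the inputs).

        Assumes the R1 case above: R/(d/2) = (v - v1)/v, where v is the
        reference (in-focus) image distance and v1 the actual image distance.
        """
        v1 = v * (1.0 - 2.0 * R / d)      # invert R/(d/2) = (v - v1)/v
        return f * v1 / (v1 - f)          # thin-lens law: 1/u + 1/v1 = 1/f

    # Illustrative numbers only:
    f = 12.0    # focal length, mm (Table 1)
    d = 8.6     # aperture diameter, mm (assumed, roughly f/1.4)
    v = 12.5    # reference image distance, mm (assumed)
    R = 0.05    # measured point spread radius on the sensor, mm (assumed)
    print(object_distance_from_blur(R, d, v, f))  # object distance u in mm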
This software algorithm is based on a system-level view drawn from modern control theory and information science: the acquired blurred image is treated as the sharp image on the focusing plane acted upon by the blurring effect of the corresponding defocus degradation kernel function, with the environmental noise incurred during imaging also taken into account.
As shown in Fig. 6, the monocular vision ranging and tracking method comprises the following steps:
Step one: the graphical user interface outputs user instructions to the computer controller, which controls the image capture module on the execution terminal module to acquire images of the target object;
Step two: the image preprocessing module applies image preprocessing algorithms to the acquired images and outputs the image information to the computer controller;
Step three: the computer controller calculates the distance between the target object and the execution terminal module through image processing and data extraction and mining operations, thereby realizing monocular vision ranging and tracking.
The image processing and data extraction and mining method performed by the computer controller, shown in Fig. 7, comprises the following steps:
Step 1: obtain a defocus-blurred image from the image capture module and establish a mathematical model.
The mathematical model is based on the idea of image degradation: the true input image passes through the image capture module to form the actual output image, and because of the physical properties of the image capture module itself and the influence of environmental noise, the true input image and the actual output image are not identical.
As shown in Fig. 3, the mathematical model of the formation of the defocus-blurred image is:
f(ε,η)*h(x,y)+n(x,y)=g(x,y)
where f(ε,η) is the true input image obtained on the focusing plane, h(x,y) is the defocus degradation kernel function, n(x,y) is environmental noise, g(x,y) is the actually output defocus-blurred image, and * denotes two-dimensional convolution.
The defocus degradation kernel adopts the PSF (point spread function) model, i.e. a circular point spread degradation function, where R is the point spread radius of the degradation kernel, which quantitatively describes the blur level of the defocused image, and x and y are the two-dimensional Cartesian coordinates of the corresponding diffusion point in the image.
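A minimal NumPy/SciPy sketch of this degradation model (pillbox PSF, convolution, additive noise) is shown below; the pillbox normalization and the noise level are illustrative assumptions, not values taken from the patent.

    import numpy as np
    from scipy.signal import fftconvolve

    def pillbox_psf(R: float, size: int = 31) -> np.ndarray:
        """Circular (pillbox) PSF of radius R pixels, normalized to unit sum."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        h = (xx**2 + yy**2 <= R**2).astype(np.float64)
        return h / h.sum()

    def defocus_degrade(f_img: np.ndarray, R: float, noise_std: float = 1.0) -> np.ndarray:
        """g = f * h + n: blur the sharp image and add Gaussian noise (illustrative)."""
        h = pillbox_psf(R)
        g = fftconvolve(f_img, h, mode="same")               # f(x,y) * h(x,y)
        g += np.random.normal(0.0, noise_std, f_img.shape)   # n(x,y)
        return g

    # Example: degrade a synthetic sharp image with a point spread radius of 5 pixels.
    sharp = np.zeros((128, 128))
    sharp[40:90, 40:90] = 255.0
    blurred = defocus_degrade(sharp, R=5.0)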
The control and correction unit receives the defocus-blurred image acquired by the image capture module and, together with the user instructions obtained from the graphical user interface, produces correction and control information; it corrects and adjusts the pose information of the terminal, such as attitude and position, and finally achieves the operation objective specified by the user.
Step 2: apply the cepstrum algorithm to the mathematical model to separate the true input image information from the information of the defocus degradation kernel function.
The specific steps for obtaining the point spread radius r based on the cepstrum algorithm are as follows:
Step 201: represent the actually output defocus-blurred image data as the convolution of the true input image data with the defocus degradation kernel function:
g(x,y) = f(x,y) * h(x,y)
Step 202: using the Fourier transform, convert the convolution of step 201 into a multiplication in the frequency domain:
DFT(f(x,y) * h(x,y)) = DFT(f(x,y)) · DFT(h(x,y))
Step 203: using the properties of the logarithm, convert the frequency-domain product of step 202 into a sum:
log(DFT(f(x,y))) + log(DFT(h(x,y)))
Step 204: in this additive form, the true input image information and the information of the defocus degradation kernel function are separated.
Step 205: apply the inverse Fourier transform to the result of step 204, completing the cepstrum algorithm for the image.
Applying the inverse Fourier transform to the log-spectrum result of step 204 completes the cepstrum algorithm and converts the data into the cepstrum domain, where its characteristic information is analysed.
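A minimal NumPy sketch of steps 201 to 205 (the 2D cepstrum of the blurred image) might look like the following; taking the logarithm of the magnitude spectrum plus a small epsilon, rather than of the complex spectrum, is an implementation assumption.

    import numpy as np

    def cepstrum_2d(g_img: np.ndarray, eps: float = 1e-8) -> np.ndarray:
        """Steps 202-205: DFT -> log -> inverse DFT, giving the image cepstrum.

        Because g = f * h (convolution), the cepstrum of g is the sum of the
        cepstra of f and h, so the kernel's ring structure shows up additively.
        """
        G = np.fft.fft2(g_img)                  # step 202: frequency domain
        log_spec = np.log(np.abs(G) + eps)      # step 203: log of the spectrum
        cep = np.real(np.fft.ifft2(log_spec))   # step 205: back to the cepstrum domain
        return np.fft.fftshift(cep)             # centre the cepstrum for inspection

    # Example: compute the cepstrum of the synthetic blurred image from the sketch above.
    # cep = cepstrum_2d(blurred)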
The data storage module stores the true input image information and the information of the defocus degradation kernel function and interacts with the graphical user interface.
Step 3: by analysing the defocus degradation kernel information separated in step 2, obtain the point spread radius that describes the blur level of the defocused image.
Step 301: display the data obtained by the cepstrum algorithm in step 2 in three dimensions.
This data separation and display is processed with the OpenCV software library.
Step 302: from the envelope of the 3D view projected onto a 2D plane, obtain the radius of the annular trough nearest to the centre point.
The 2D plane here refers to the X-Z or Y-Z plane.
Step 303: the obtained annular trough radius is proportional to the point spread radius r.
As shown in Fig. 4, the ordinate y = 0 is taken as the zero plane of the cepstrum domain (the zero line for short), and x ≈ 641 is the central peak of the cepstrum-domain result image. The different curves are the 2D images of the defocus degradation kernel results, obtained with the cepstrum algorithm, for different values of r; the intersections of each curve with the zero line are its zeros, and the horizontal distance from the zero crossing nearest the central peak at x ≈ 641 to the central peak gives the annular trough radius, which is proportional to the point spread radius r.
In the present invention, this annular trough radius is 2 times the point spread radius r.
Thus the annular trough radius quantitatively represents the point spread radius r, from which the distance information of the captured object is further calculated quantitatively.
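As an illustration of steps 302 and 303, a rough sketch of measuring the trough radius along one axis of the centred cepstrum and converting it to r (using the 2:1 ratio stated above) could look as follows; the search window and the use of the first negative local minimum as the "trough" are assumptions.

    import numpy as np

    def point_spread_radius(cep: np.ndarray, max_search: int = 200) -> float:
        """Estimate r from a centred 2D cepstrum (see cepstrum_2d above).

        Takes the profile along the X axis through the centre, finds the first
        local minimum below the zero line to the right of the central peak, and
        divides that distance by 2 (annular trough radius = 2 * r in the text).
        """
        cy, cx = np.array(cep.shape) // 2
        profile = cep[cy, cx:cx + max_search]    # X-Z profile to the right of the peak
        for i in range(1, len(profile) - 1):
            if profile[i] < 0 and profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]:
                return i / 2.0                   # trough distance -> point spread radius r
        raise ValueError("no trough found within the search window")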
The data storage module stores the distance information of the captured object and interacts with the graphical user interface.
Step 4: combine the distance information of the target object obtained in step 3 with the 2D positioning of the target object to finally obtain the 3D positioning coordinates of the target object.
The 2D positioning of the target object adopts an automatic calibration extraction algorithm. The 2D positioning algorithm performs an automatic relocation function; its steps, shown in Fig. 8, are as follows (a sketch of this loop is given after the list):
Step 401: set a Poisson kernel function to preprocess the originally acquired image.
Step 402: track the target object set at initialization and obtain the point spread radius for this target object.
Step 403: obtain the correlation and difference information of consecutive images in the time domain.
Step 404: from the image correlation and difference information, judge whether the consecutive images are stable; if stable, go to step 405, otherwise go to step 406.
Step 405: from the stable consecutive images, confirm the point spread radius and obtain the 3D positioning coordinates of the target object.
From the stable consecutive images, the point spread radius is confirmed and output to the graphical user interface, the target object is captured, and the 3D positioning coordinates of this target object are obtained.
Step 406: if the acquired images are unstable, the point spread radius of this target object is erroneous; return to step 402.
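The text does not specify how image "stability" is scored in step 404. The sketch below is an illustrative version of the loop in which stability is taken to mean that the mean absolute difference between consecutive frames stays under a threshold; that criterion, the threshold, and the helper names are assumptions (point_spread_radius and cepstrum_2d refer to the earlier sketches).

    import numpy as np

    def track_until_stable(frames, diff_threshold: float = 2.0, window: int = 3):
        """Illustrative version of steps 402-406: re-track until consecutive frames agree.

        `frames` is an iterable of 2D numpy arrays.
        """
        recent, prev = [], None
        for frame in frames:
            if prev is not None:
                recent.append(np.mean(np.abs(frame.astype(float) - prev.astype(float))))
                recent = recent[-window:]
            prev = frame
            stable = len(recent) == window and max(recent) < diff_threshold   # step 404
            if stable:
                return point_spread_radius(cepstrum_2d(frame))                # step 405
        return None  # never stabilized: caller restarts tracking (step 406 -> 402)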
The software algorithm part uses the idea of monocular close-range ranging: a positioning algorithm based on motion defocus blur, which reflects the distance of the target object through the blur level of a series of acquired defocus-blurred images. At the same time, applying information theory, the point spread radius of the point spread kernel of the defocus-blurred image is obtained with the frequency-domain cepstrum algorithm, giving the degradation system function of the image blur degradation model; the distance of the target object is thus calculated accurately and quantitatively and, together with the corresponding 2D automatic calibration positioning algorithm, the ranging and positioning coordinate information of the target object is obtained.
Because this system integrates a new algorithmic processing approach and is mainly used in the industrial robot field, its special requirements and reliability issues in industrial robot use must be considered. An industrial robot operating system needs low fluctuation and high stability; for this reason, a smoothing filter is added to the algorithm of the present invention so that the positioning data provided to the controller are relatively smooth and stable.
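The text does not state which smoothing filter is used. A simple exponential moving average over the reported object distance, shown below purely as an illustration, is one common choice for damping fluctuation in the positioning data.

    class DistanceSmoother:
        """Illustrative exponential moving average for the positioning data stream."""
        def __init__(self, alpha: float = 0.2):
            self.alpha = alpha      # smaller alpha -> smoother, slower to react
            self.value = None

        def update(self, distance: float) -> float:
            if self.value is None:
                self.value = distance
            else:
                self.value = self.alpha * distance + (1.0 - self.alpha) * self.value
            return self.value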

Claims (5)

1. A monocular vision ranging and tracking device for an industrial production line, characterized in that it comprises an image sensing unit, a control and correction unit, a graphical user interface, and a data storage module;
the image sensing unit comprises an image capture module and an image preprocessing module; the image capture module acquires images of the target object and outputs them to the image preprocessing module; the image preprocessing module applies image preprocessing algorithms to the image information and outputs the result to the control and correction unit;
the control and correction unit comprises a computer controller and an execution terminal module; the computer controller receives the image information output by the image preprocessing module and the user instructions output by the graphical user interface; according to the instructions, it performs image processing and data extraction and mining operations, generates control signals from the image processing data, and outputs them to the execution terminal module, controlling the execution terminal module to perform the corresponding operations; at the same time, the computer controller stores the image processing data in the data storage module according to the user instructions;
the execution terminal module is connected to the image capture module and receives the image processing data output by the computer controller for correction and adjustment;
the graphical user interface passes user instructions to the computer controller and at the same time receives the image processing commands and operation commands that the user needs;
the data storage module stores the image processing data output by the computer controller.
2. The monocular vision ranging and tracking device for an industrial production line according to claim 1, characterized in that the image capture device is a megapixel-class industrial fixed-focus camera.
3. The monocular vision ranging and tracking device for an industrial production line according to claim 1, characterized in that the image preprocessing module performs the following operations on the image information: setting the image capture mode, initializing image quality, and image preprocessing; the image preprocessing comprises illumination normalization, white balance, exposure control, and image convolution.
4. A monocular vision ranging and tracking method for an industrial production line, using the monocular vision ranging and tracking device according to claim 1, characterized in that it comprises the following steps:
step one: the graphical user interface outputs user instructions to the computer controller, which controls the image capture module on the execution terminal module to acquire images of the target object;
step two: the image preprocessing module applies image preprocessing algorithms to the acquired images and outputs the image information to the computer controller;
step three: the computer controller calculates the distance between the target object and the execution terminal module through image processing and data extraction and mining operations, thereby realizing monocular vision ranging and tracking;
the image processing and data extraction and mining method performed by the computer controller comprises the following steps:
step 1: obtain a defocus-blurred image from the image capture module and establish a mathematical model;
the mathematical model of the formation of the defocus-blurred image is:
f(ε,η)*h(x,y)+n(x,y)=g(x,y)
where f(ε,η) is the true input image obtained on the focusing plane, h(x,y) is the defocus degradation kernel function, n(x,y) is environmental noise, and g(x,y) is the actually output defocus-blurred image;
the defocus degradation kernel adopts the PSF (point spread function) model, i.e. a circular point spread degradation function, where R is the point spread radius of the degradation kernel, and x and y are the two-dimensional Cartesian coordinates of the corresponding diffusion point in the image;
step 2: apply the cepstrum algorithm to the mathematical model to separate the true input image information from the information of the defocus degradation kernel function;
specifically:
step 201: represent the actually output defocus-blurred image data as the convolution of the true input image data with the defocus degradation kernel function:
g(x,y) = f(x,y) * h(x,y)
step 202: using the Fourier transform, convert the convolution of step 201 into a multiplication in the frequency domain:
DFT(f(x,y) * h(x,y)) = DFT(f(x,y)) · DFT(h(x,y))
step 203: using the properties of the logarithm, convert the frequency-domain product of step 202 into a sum:
log(DFT(f(x,y))) + log(DFT(h(x,y)))
step 204: in this additive form, the true input image information and the information of the defocus degradation kernel function are separated;
step 205: apply the inverse Fourier transform to the result of step 204, completing the cepstrum algorithm for the image;
applying the inverse Fourier transform to the log-spectrum result of step 204 completes the cepstrum algorithm and converts the data into the cepstrum domain, where its characteristic information is analysed;
step 3: from the defocus degradation kernel information separated in step 2, obtain the point spread radius that describes the blur level of the defocused image;
the separated information obtained by the cepstrum algorithm in step 2 is displayed in three dimensions; from the envelope of the 3D view projected onto a 2D plane, the radius of the annular trough nearest to the centre point is obtained; this annular trough radius is proportional to the point spread radius r, and the point spread radius r represents the distance information of the target object;
step 4: combine the distance information of the target object obtained in step 3 with the 2D positioning of the target object to finally obtain the 3D positioning coordinates of the target object.
5. The monocular vision ranging and tracking method for an industrial production line according to claim 4, characterized in that the 2D positioning described in step 3 comprises the following steps:
step 401: set a Poisson kernel function to preprocess the originally acquired image;
step 402: track the target object set at initialization and obtain the point spread radius for this target object;
step 403: obtain the correlation and difference information of consecutive images in the time domain;
step 404: from the image correlation and difference information, judge whether the consecutive images are stable; if stable, go to step 405, otherwise go to step 406;
step 405: from the stable consecutive images, confirm the point spread radius and obtain the 3D positioning coordinates of the target object;
step 406: if the acquired images are unstable, the point spread radius of this target object is erroneous; return to step 402.
CN201510132219.1A 2015-03-25 2015-03-25 Monocular vision ranging tracking device used for industrial production line, and tracking method thereof Pending CN105488780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510132219.1A CN105488780A (en) 2015-03-25 2015-03-25 Monocular vision ranging tracking device used for industrial production line, and tracking method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510132219.1A CN105488780A (en) 2015-03-25 2015-03-25 Monocular vision ranging tracking device used for industrial production line, and tracking method thereof

Publications (1)

Publication Number Publication Date
CN105488780A true CN105488780A (en) 2016-04-13

Family

ID=55675746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510132219.1A Pending CN105488780A (en) 2015-03-25 2015-03-25 Monocular vision ranging tracking device used for industrial production line, and tracking method thereof

Country Status (1)

Country Link
CN (1) CN105488780A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369135A (en) * 2017-06-22 2017-11-21 广西大学 A kind of micro imaging system three-dimensional point spread function space size choosing method based on Scale invariant features transform algorithm
CN109060819A (en) * 2018-07-06 2018-12-21 中国飞机强度研究所 Error correcting method in visual field in a kind of measurement of vibration component crackle
CN109102525A (en) * 2018-07-19 2018-12-28 浙江工业大学 A kind of mobile robot follow-up control method based on the estimation of adaptive pose
CN111242861A (en) * 2020-01-09 2020-06-05 浙江光珀智能科技有限公司 Method and device for removing stray light of TOF camera, electronic equipment and storage medium
CN117876429A (en) * 2024-03-12 2024-04-12 潍坊海之晨人工智能有限公司 Real standard platform of sports type industry vision

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662566A (en) * 2008-08-27 2010-03-03 索尼株式会社 Information processing apparatus, information processing method, and program
CN102997891A (en) * 2012-11-16 2013-03-27 上海光亮光电科技有限公司 Device and method for measuring scene depth
CN104299246A (en) * 2014-10-14 2015-01-21 江苏湃锐自动化科技有限公司 Production line object part motion detection and tracking method based on videos

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662566A (en) * 2008-08-27 2010-03-03 索尼株式会社 Information processing apparatus, information processing method, and program
CN102997891A (en) * 2012-11-16 2013-03-27 上海光亮光电科技有限公司 Device and method for measuring scene depth
CN104299246A (en) * 2014-10-14 2015-01-21 江苏湃锐自动化科技有限公司 Production line object part motion detection and tracking method based on videos

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUADONG SUN et al.: "Depth From Defocus and Blur for Signal Image", Visual Communication and Image Processing *
LI Xiuyi: "A defocus-blurred image restoration method based on cepstral correlation characteristics", Computer Technology and Development (in Chinese) *
DONG Jie: "Research on a defocus ranging algorithm based on monocular vision", China Doctoral Dissertations Full-text Database (in Chinese) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369135A (en) * 2017-06-22 2017-11-21 广西大学 A kind of micro imaging system three-dimensional point spread function space size choosing method based on Scale invariant features transform algorithm
CN109060819A (en) * 2018-07-06 2018-12-21 中国飞机强度研究所 Error correcting method in visual field in a kind of measurement of vibration component crackle
CN109060819B (en) * 2018-07-06 2021-03-30 中国飞机强度研究所 Method for correcting errors in field of view in measurement of cracks of vibration component
CN109102525A (en) * 2018-07-19 2018-12-28 浙江工业大学 A kind of mobile robot follow-up control method based on the estimation of adaptive pose
CN109102525B (en) * 2018-07-19 2021-06-18 浙江工业大学 Mobile robot following control method based on self-adaptive posture estimation
CN111242861A (en) * 2020-01-09 2020-06-05 浙江光珀智能科技有限公司 Method and device for removing stray light of TOF camera, electronic equipment and storage medium
CN111242861B (en) * 2020-01-09 2023-09-12 浙江光珀智能科技有限公司 Method and device for removing stray light of TOF camera, electronic equipment and storage medium
CN117876429A (en) * 2024-03-12 2024-04-12 潍坊海之晨人工智能有限公司 Real standard platform of sports type industry vision
CN117876429B (en) * 2024-03-12 2024-06-07 潍坊海之晨人工智能有限公司 Real standard system of sports type industry vision

Similar Documents

Publication Publication Date Title
CN106251399B (en) A kind of outdoor scene three-dimensional rebuilding method and implementing device based on lsd-slam
JP5871862B2 (en) Image blur based on 3D depth information
KR101893047B1 (en) Image processing method and image processing device
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
US8754963B2 (en) Processing images having different focus
CN106529495A (en) Obstacle detection method of aircraft and device
JP6862569B2 (en) Virtual ray tracing method and dynamic refocus display system for light field
WO2015117905A1 (en) 3-d image analyzer for determining viewing direction
CN105488780A (en) Monocular vision ranging tracking device used for industrial production line, and tracking method thereof
CN108121931A (en) two-dimensional code data processing method, device and mobile terminal
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN109118544A (en) Synthetic aperture imaging method based on perspective transform
CN110505398A (en) A kind of image processing method, device, electronic equipment and storage medium
CN111899345B (en) Three-dimensional reconstruction method based on 2D visual image
CN110827375B (en) Infrared image true color coloring method and system based on low-light-level image
CN116958419A (en) Binocular stereoscopic vision three-dimensional reconstruction system and method based on wavefront coding
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN113225484B (en) Method and device for rapidly acquiring high-definition picture shielding non-target foreground
WO2023086398A1 (en) 3d rendering networks based on refractive neural radiance fields
CN111489384A (en) Occlusion assessment method, device, equipment, system and medium based on mutual view
US11578968B1 (en) Compact metalens depth sensors
CN111161399B (en) Data processing method and assembly for generating three-dimensional model based on two-dimensional image
Salfelder et al. Markerless 3D spatio-temporal reconstruction of microscopic swimmers from video
JP7326965B2 (en) Image processing device, image processing program, and image processing method
CN112419361B (en) Target tracking method and bionic vision device

Legal Events

Date Code Title Description
DD01 Delivery of document by public notice

Addressee: Ao Bo (Beijing) Technology Co. Ltd.

Document name: Notice of non patent agency

DD01 Delivery of document by public notice

Addressee: Ao Bo (Beijing) Technology Co. Ltd.

Document name: Notification to Make Rectification

C06 Publication
PB01 Publication
DD01 Delivery of document by public notice

Addressee: Ao Bo (Beijing) Technology Co. Ltd.

Document name: Notification of Passing Examination on Formalities

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160413

RJ01 Rejection of invention patent application after publication