CN103559703B - Binocular-vision-based crane obstacle monitoring and early-warning method and system - Google Patents

Binocular-vision-based crane obstacle monitoring and early-warning method and system Download PDF

Info

Publication number
CN103559703B
CN103559703B (application CN201310462213.1A)
Authority
CN
China
Prior art keywords
crane
binocular vision
image
early warning
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310462213.1A
Other languages
Chinese (zh)
Other versions
CN103559703A (en)
Inventor
Li Zhiyong (李志勇)
Zhao Yong (赵勇)
Chen Xianghong (陈祥红)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201310462213.1A priority Critical patent/CN103559703B/en
Publication of CN103559703A publication Critical patent/CN103559703A/en
Application granted granted Critical
Publication of CN103559703B publication Critical patent/CN103559703B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a binocular-vision-based crane obstacle monitoring and early-warning method and system. The method comprises: step 1, building a binocular vision model and initializing the relevant parameters; step 2, acquiring images in real time and preprocessing them; step 3, detecting object edges and extracting image features; step 4, completing left-right image registration and segmenting the common region; step 5, iterating to separate foreground from background; step 6, reconstructing three-dimensional information and identifying the foreground and obstacles; step 7, computing motion parameters, analyzing them, and outputting early-warning information. Using a binocular vision system, the invention monitors the working environment of the crane in real time and effectively solves the problem of crane collision avoidance. It offers high efficiency, suitable accuracy, a simple system structure, and low cost, and is well suited to on-line, non-contact product inspection and quality control at the manufacturing site.

Description

Binocular-vision-based crane obstacle monitoring and early-warning method and system
Technical field
The present invention relates to a binocular-vision-based crane obstacle monitoring and early-warning method and system.
Background technology
The hoisting machinery industry supplies hoisting equipment to industries such as metallurgy, mining, railways, shipbuilding, transportation, and hydraulic and electric engineering in China, and is indispensable to the development of the national economy. A crane is one type of hoisting machinery: a machine that operates in a cyclic, intermittent motion. It is now widely used across industries and in daily life, bringing great convenience.
At present, the real-time performance and safety of cranes are among the most widely discussed issues. During operation, the load and the suspended object move with a certain speed under the traction of the crane arm, while moving obstacles such as pedestrians and flying objects may also appear at the work site. If the load or the suspended object collides with an obstacle, the crane and the obstacle may be severely damaged, and the crane may even tip over. In addition, external conditions such as wind interfere with crane operation and further threaten its safety.
To address crane safety, various protective devices have been developed and installed on cranes, such as load limiters that prevent overloading, moment limiters that prevent excessive load moment, and limiters for the upper boom position, the lower limit position, and the travel limit position. None of these devices solves the problem described above. In practice, the operator relies on his or her own vision, perception, or communication with assisting personnel to understand the surroundings of the crane; the crane and obstacles cannot be analyzed accurately, and obstacles themselves are uncertain and random, so collisions are difficult to avoid with existing devices and manpower alone.
Therefore, a completely new crane obstacle monitoring and early-warning method and system is needed.
Summary of the invention
The technical problem to be solved by the present invention is to provide a binocular-vision-based crane obstacle monitoring and early-warning method and system. The method and system use a binocular vision system and image processing techniques to detect and identify obstacles and to raise alarms based on the detection result, giving a high level of safety.
The technical solution of the invention is as follows:
A binocular-vision-based crane obstacle monitoring and early-warning method, wherein a parallel binocular vision system is mounted on the crane, its cameras point vertically downward and are fixed at the top of the crane, and they collect top-view images in real time; the method comprises the following steps:
Step 1: build the binocular vision model;
Step 2: acquire images in real time and preprocess the acquired images;
Step 3: detect object edges in the preprocessed images and extract image features;
Select an operator for edge detection and extract the contour information of the targets; approximate the curve formed by each edge point set with a polygon and take the polygon vertices as feature points;
Step 4: complete left-right image registration;
Exploiting the ordered arrangement of the edge feature points, connect the feature points in sequence into feature line segments, compute the total curvature of each segment, match the feature segments, and expand the matched segments back into feature points, thereby matching the feature points;
Step 5: separate foreground from background in the images by iteration;
Foreground and background have different motion characteristics, so they show different positional relationships in consecutive frames; this property is used to separate foreground from background by iteration;
Step 6: reconstruct three-dimensional information based on the binocular vision model and identify the foreground and obstacles;
Step 7: compute the motion parameters, analyze them, output early-warning information, and return to step 2.
The binocular vision model is characterized by the following formulas:
For any point A in space, let its world coordinates be (X, Y, Z), its coordinates in the left and right camera frames be (x_L, y_L, z_L) and (x_R, y_R, z_R), its coordinates in the left and right imaging planes be (u_L, v_L) and (u_R, v_R), and its coordinates in the left and right pixel planes be (U_L, V_L) and (U_R, V_R). The parameters of the two cameras being identical, we have:
$$\mu \begin{bmatrix} U_L \\ V_L \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix};$$
$$\mu' \begin{bmatrix} U_R \\ V_R \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R' & t' \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix};$$
where μ and μ' are scale factors with μ = z_L and μ' = z_R; d_x and d_y are the physical dimensions of a pixel along the u axis and v axis of the imaging-plane coordinate system; (U_0, V_0) are the pixel-plane coordinates of the origin (center) of the imaging plane; s' is the skew factor that accounts for the imaging-plane coordinate axes not being exactly orthogonal; [R t] is the camera's extrinsic parameter matrix; f is the focal length of the cameras; and 0^T = (0, 0, 0). (These parameters are obtained by camera calibration, for which many methods exist.)
The preprocessing comprises grayscale conversion and filtering (prior art); it reduces the brightness difference between the left and right images and enhances the edges and details of the images.
Grayscale conversion uses the formula I(i, j) = [ω_r R(i, j) + ω_g G(i, j) + ω_b B(i, j)]/3,
where R(i, j), G(i, j), and B(i, j) are the red, green, and blue components of pixel (i, j), and the weights are ω_r = 0.30, ω_g = 0.59, ω_b = 0.11;
The filtering is median filtering: first define a window W of 3×3 pixels; after sorting the pixels in the window by gray value, replace the gray value of the original pixel f(i, j) with the value at the middle position, which becomes the window-center gray value g(i, j), namely
g(i, j) = Med{ f(i−k, j−l), (k, l) ∈ W },
where f(i−k, j−l) are the gray values of the pixels in window W.
In step 4, registration is performed based on the identified load.
In step 5, the foreground comprises the crane's own key working parts and the suspended object, the crane's own key working parts including the hook, the load, and the hoisting rope; the background refers to all objects in the field of view other than the foreground;
The foreground and the background are separated as follows:
Compute difference images between the left images, and between the right images, of several adjacent frames after the rotation transform; if the difference for a given scene element is below the set threshold, it is considered foreground, otherwise background, and the positions of the foreground and the background are recorded respectively.
In step 6, the three-dimensional world coordinates of the feature points are computed from the binocular vision model, and the target surfaces are approximated with a three-dimensional mesh, thereby determining the three-dimensional information of each object; the three-dimensional information includes position and shape information;
Obstacles are identified as follows:
The horizontal plane through the lowest point B of the load at the current moment is the interference surface;
The part of the common field of view of the binocular camera system above the interference surface is the interference region;
The part of the common field of view of the binocular camera system below the interference surface is the non-interference region;
If a suspect object lies partially or entirely within the interference region, the suspect object is an obstacle.
In step 7,
The motion state of the foreground is obtained from the crane itself; the motion parameters of an obstacle are obtained from the change of the obstacle's position;
According to the set thresholds, the warning level is determined, the safety state of the crane is analyzed, collisions are predicted, the final early-warning information is output, and an alarm is started in a dangerous situation.
A binocular-vision-based crane obstacle monitoring and early-warning system, wherein a parallel binocular vision system is mounted on the crane, its cameras point vertically downward and are fixed at the top of the crane, and they collect top-view images in real time; obstacles are monitored and early warnings are issued using the aforementioned binocular-vision-based crane obstacle monitoring and early-warning method.
Beneficial effect:
The binocular-vision-based crane obstacle monitoring and early-warning method and system of the present invention use a binocular vision system to monitor the working environment of the crane in real time, reconstruct the scene according to the parallax principle, accurately identify and locate dynamic obstacles against a dynamic background, obtain the motion state of the obstacles, analyze crane safety, and provide early-warning information to the operator in real time, improving the level of automation and the operational safety of the crane system. The method effectively solves the problem of crane collision avoidance; it offers high efficiency, suitable accuracy, a simple system structure, and low cost, and is well suited to on-line, non-contact product inspection and quality control at the manufacturing site.
The present invention uses a parallel binocular vision system to look down from two parallel viewpoints in real time and collect images of the crane work site for the operator to monitor. By processing images of the same moment taken from different positions and applying the parallax principle, the three-dimensional scene is recovered from the two two-dimensional images; obstacles around the crane are accurately identified and located, and their motion information such as velocity and acceleration is determined and fed back to the operator in time, preventing collisions.
The method is built on computer binocular vision. The acquired image information is intuitive and accurate, and potential safety hazards at the site can be analyzed qualitatively. By processing and analyzing the image information from different viewpoints, the positions, shapes, and motion characteristics of the suspended object, the load, and obstacles are accurately determined, the situation at the crane work site is understood precisely, and effective preventive measures can be taken in time to avoid accidents. The method and system offer high efficiency, suitable accuracy, a simple system structure, and low cost, and are well suited to on-line, non-contact product inspection and quality control at the manufacturing site.
Brief description of the drawings
Fig. 1 is a flow chart of the crane obstacle monitoring and early-warning method provided by the invention.
Fig. 2 is the coordinate model of the binocular vision system provided by the invention.
Fig. 3 is a schematic diagram of load identification provided by the invention.
Fig. 4 is the computation model of total curvature provided by the invention.
Fig. 5 is the model for obstacle identification provided by the invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the drawings and specific embodiments:
Embodiment 1:
Fig. 1 is the flow chart of the crane obstacle monitoring and early-warning method provided by the invention.
The binocular-vision-based crane obstacle monitoring and early-warning method provided by the invention comprises the following steps:
Step 101: build the binocular vision model and initialize the relevant parameters. To describe and analyze the state of objects in space accurately, a binocular vision coordinate model must be set up, comprising the world coordinate system, the camera coordinate systems, the imaging coordinate systems, and the pixel coordinate systems. At the same time, the related data are initialized: the intrinsic camera parameters are obtained by camera calibration, the baseline distance is measured or preset, and the world coordinate system is initialized from other devices (such as the positioning system of a bridge crane), so that the transformations between the coordinate systems are determined.
Step 102: acquire images in real time and preprocess them. Directly above the crane arm, current images of the crane's working field of view are collected along two parallel, vertically downward directions; they are recorded as the left image and the right image, and the raw images are stored. The acquired images are then preprocessed, mainly by grayscale conversion and filtering, to reduce the effects of noise and image distortion, reduce the brightness difference between the left and right images, and enhance edges and details.
Step 103: detect object edges and extract image features. A suitable operator is selected for edge detection and the contour information of the targets is extracted; the curve formed by each edge point set is approximated by a polygon, and the polygon vertices are taken as feature points.
Step 104: complete left-right image registration and segment the common region. Exploiting the ordered arrangement of the edge feature points, the feature points are connected in sequence into feature line segments, the total curvature of each segment is computed, the segments are matched, and the matched segments are expanded back into feature points according to a fixed rule, thereby matching the feature points. During matching, the load is used as an important reference, and the coordinate relation between matched points follows the parallax principle.
Step 105: iterate to separate foreground from background. Foreground and background have different motion characteristics, so they show different positional relationships in consecutive frames; an iterative method is chosen to separate them.
Step 106: reconstruct the three-dimensional information and identify the foreground and obstacles. Using the established camera coordinate model, the world coordinates of the points corresponding to the feature points are obtained and the target surfaces are approximated with a three-dimensional mesh. The position and shape of the foreground are obtained from the foreground separated in step 105. Obstacles in the background are identified from the position of each suspect object relative to the interference surface, and their shape, position, and other information are recorded.
Step 107: compute motion parameters, analyze them, and output early-warning information. The relative distance between each obstacle and the foreground is computed; by processing consecutive frames, the motion parameters of each obstacle are derived from the change of its position; collisions are predicted from the relative distance between the load and the obstacles and from their velocity, acceleration, and so on; the relevant early-warning information is output and an alarm is raised.
Fig. 2 shows a specific embodiment of the present invention. In this embodiment, a parallel binocular vision system is mounted on the crane; its cameras point vertically downward and are fixed at the top of the crane, collecting top-view images in real time. Fig. 2 illustrates the coordinate model of the parallel binocular vision system of the invention.
Assume a point A in space has world coordinates (X, Y, Z); its coordinates in the left and right camera frames are (x_L, y_L, z_L) and (x_R, y_R, z_R); its coordinates in the left and right imaging planes are (u_L, v_L) and (u_R, v_R); its coordinates in the left and right pixel planes are (U_L, V_L) and (U_R, V_R); and the parameters of the two cameras are assumed identical.
The transformation between the left pixel coordinate system (208) and the left imaging-plane coordinate system (204) is:
$$\begin{bmatrix} U_L \\ V_L \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} u_L \\ v_L \\ 1 \end{bmatrix} = M \begin{bmatrix} u_L \\ v_L \\ 1 \end{bmatrix} \qquad (1)$$
The transformation between the right pixel coordinate system (209) and the right imaging-plane coordinate system (205) is:
$$\begin{bmatrix} U_R \\ V_R \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} u_R \\ v_R \\ 1 \end{bmatrix} = M \begin{bmatrix} u_R \\ v_R \\ 1 \end{bmatrix} \qquad (2)$$
In formulas (1) and (2), d_x and d_y are the physical dimensions of a pixel along the u axis and v axis of the imaging-plane coordinate system; (U_0, V_0) are the pixel-plane coordinates of the origin (center) of the imaging plane; and s' is the skew factor that accounts for the imaging-plane coordinate axes not being exactly orthogonal.
The transformation between the left camera coordinate system (206) and the left imaging-plane coordinate system (204) is:
$$\mu \begin{bmatrix} u_L \\ v_L \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_L \\ y_L \\ z_L \\ 1 \end{bmatrix} = N \begin{bmatrix} x_L \\ y_L \\ z_L \\ 1 \end{bmatrix} \qquad (3)$$
The transformation between the right camera coordinate system (207) and the right imaging-plane coordinate system (205) is:
$$\mu' \begin{bmatrix} u_R \\ v_R \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_R \\ y_R \\ z_R \\ 1 \end{bmatrix} = N \begin{bmatrix} x_R \\ y_R \\ z_R \\ 1 \end{bmatrix} \qquad (4)$$
In formulas (3) and (4), f is the focal length of the left and right cameras, and μ and μ' are scale factors. The transformation between the left camera coordinate system (206) and the world coordinate system (201) is:
$$\begin{bmatrix} x_L \\ y_L \\ z_L \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (5)$$
In formula (5), the rotation matrix R and the translation vector t are obtained by camera calibration, and 0^T = (0, 0, 0).
The transformation between the right camera coordinate system (207) and the world coordinate system (201) is:
$$\begin{bmatrix} x_R \\ y_R \\ z_R \\ 1 \end{bmatrix} = \begin{bmatrix} R' & t' \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (6)$$
In 6. formula, spin matrix R ' and translation vector t ' can be obtained by camera calibration, and 0T=(0,0,0).
Combining formulas (1), (3), and (5) gives:
$$\mu \begin{bmatrix} U_L \\ V_L \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & s & U_0 \\ 0 & \beta & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = K\cdot[R\ t]\cdot T = S\cdot T \qquad (7)$$
where α = f/d_x, β = f/d_y, s = s'f,
$$K = \begin{bmatrix} \alpha & s & U_0 \\ 0 & \beta & V_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad S = K[R\ t], \qquad T = (X, Y, Z, 1)^T.$$
K is the camera intrinsic parameter matrix and depends only on the camera's internal structure; α and β are the scale factors of the left camera along the U_L and V_L axes, and s describes the skew between the image coordinate axes. [R t] is the camera's extrinsic parameter matrix, determined entirely by the pose of the left camera relative to the world coordinate system. S is called the projection matrix, i.e. the transformation from the world coordinate system to the image coordinate system. The intrinsic matrix K is obtained by camera calibration, and the extrinsic matrix [R t] is determined from the length, inclination, and motion of the crane arm.
Similarly:
$$\mu' \begin{bmatrix} U_R \\ V_R \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R' & t' \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & s & U_0 \\ 0 & \beta & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R' & t' \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = K\cdot[R'\ t']\cdot T = S'\cdot T \qquad (8)$$
This part describes real-time image acquisition and preprocessing.
In the present invention, images are acquired in real time with the parallel binocular vision system, and the raw images collected are saved.
The acquired current frame is first converted to grayscale. The present invention prefers the weighted-mean method, namely
I(i, j) = [ω_r R(i, j) + ω_g G(i, j) + ω_b B(i, j)]/3,
where the weights ω_r = 0.30, ω_g = 0.59, ω_b = 0.11 give the most reasonable grayscale image.
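By way of illustration, the weighted-mean grayscale conversion can be sketched as follows in Python with NumPy (a minimal sketch, not part of the original disclosure; the R, G, B channel layout is an assumption, and the plain weighted sum is used since the weights already sum to 1, whereas the text also divides by 3):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted-mean grayscale conversion I = wr*R + wg*G + wb*B.

    rgb: H x W x 3 array with channels in R, G, B order (assumed layout).
    The text divides the weighted sum by 3 as well; with weights summing to 1
    the plain weighted sum is used here -- adjust if the divided form is wanted.
    """
    wr, wg, wb = 0.30, 0.59, 0.11
    gray = wr * rgb[..., 0] + wg * rgb[..., 1] + wb * rgb[..., 2]
    return gray.astype(np.uint8)
```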
Because the environment at the crane work site has many uncertainties, noise and lighting strongly affect image quality. A filter must therefore be selected that suppresses noise effectively while preserving, as far as possible, the positional information of every scene element in the image; this is the key problem that the preprocessing must solve.
Based on the theory of various filtering methods and on experiments, the present invention prefers a median filter to reduce noise. Median filtering is one of the most common nonlinear smoothing methods for noise removal; it removes impulse noise and salt-and-pepper noise while preserving the edge details of the image.
The principle of median filtering is as follows: first choose a window W of an odd number of pixels; after sorting the pixels in the window by gray value, replace the gray value of the original pixel f(i, j) with the value at the middle position, which becomes the window-center gray value g(i, j), that is
g(i, j) = Med{ f(i−k, j−l), (k, l) ∈ W },
where W is the chosen window (for instance 3×3) and f(i−k, j−l) are the pixel gray values within W. The main steps of median filtering are as follows (a minimal code sketch follows the steps):
1) Slide the template over the image and align it with a pixel position;
2) Read the gray values of the pixels under the template;
3) Sort these gray values in ascending order;
4) Take the value in the middle of the sorted sequence;
5) Assign this median to the pixel at the template center.
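A minimal sketch of the 3×3 median filter described in the steps above (illustrative only; edge padding at the image borders is an assumption):

```python
import numpy as np

def median_filter_3x3(img: np.ndarray) -> np.ndarray:
    """Replace each pixel with the median of its 3x3 neighbourhood (window W)."""
    padded = np.pad(img, 1, mode="edge")       # border handling by edge replication (assumption)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]  # the 3x3 window centred on (i, j)
            out[i, j] = np.median(window)      # middle value of the sorted window
    return out
```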
This part describes object edge detection and image feature extraction:
(1) Object edge detection
In the present invention, the Canny operator, regarded as an optimal edge-detection operator, is preferably used to detect the edges of targets. The implementation is as follows:
Let the two-dimensional signal f^(k)(x, y) denote the gray value of the image, and write
$$(G_x, G_y) = \left(\frac{\partial f^{(k)}(x, y)}{\partial x}, \frac{\partial f^{(k)}(x, y)}{\partial y}\right).$$
Define
$$d^{(k)}(x, y) = \sqrt{G_x^2 + G_y^2}$$
as the gradient of f^(k)(x, y) (the filter window size is 3×3), and
$$w^{(k)}(x, y) = \exp\left(-\frac{\bigl(d^{(k)}(x, y)\bigr)^2}{2h^2}\right)$$
as the weight coefficient reflecting the continuity of the pixel gray values, where h is a constant parameter called the edge-preserving range coefficient.
The adaptive smoothing filter proceeds as follows:
1) Set k = 1, set the number of iterations N, and set the edge-preserving range coefficient h;
2) Compute the gradients G_x^(k)(x, y) and G_y^(k)(x, y) by differences:
$$G_x^{(k)}(x, y) = \tfrac{1}{2}\bigl[f^{(k)}(x+1, y) - f^{(k)}(x-1, y)\bigr],$$
$$G_y^{(k)}(x, y) = \tfrac{1}{2}\bigl[f^{(k)}(x, y+1) - f^{(k)}(x, y-1)\bigr];$$
3) Compute the filter weight coefficient w^(k)(x, y):
$$w^{(k)}(x, y) = \exp\left\{-\frac{\bigl[G_x^{(k)}(x, y)\bigr]^2 + \bigl[G_y^{(k)}(x, y)\bigr]^2}{2h^2}\right\};$$
4) Filter by taking a weighted average of f^(k)(x, y):
$$f^{(k+1)}(x, y) = \frac{1}{N'}\sum_{i=-1}^{1}\sum_{j=-1}^{1} f^{(k)}(x+i, y+j)\, w^{(k)}(x+i, y+j),$$
where
$$N' = \sum_{i=-1}^{1}\sum_{j=-1}^{1} w^{(k)}(x+i, y+j);$$
5) If k = N, stop filtering; otherwise set k = k + 1 and return to step 2).
After several iterations, the output of the filter consists of a number of regions of uniform intensity separated by well-defined edges.
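The iterative adaptive smoothing filter of steps 1)–5) can be sketched as follows (illustrative; the float image type, edge padding, and default parameter values are assumptions):

```python
import numpy as np

def adaptive_smooth(f: np.ndarray, n_iter: int = 5, h: float = 10.0) -> np.ndarray:
    """Edge-preserving smoothing: averaging weights fall off with the local gradient."""
    f = f.astype(np.float64)
    rows, cols = f.shape
    for _ in range(n_iter):
        p = np.pad(f, 1, mode="edge")
        gx = 0.5 * (p[1:-1, 2:] - p[1:-1, :-2])             # central difference G_x
        gy = 0.5 * (p[2:, 1:-1] - p[:-2, 1:-1])             # central difference G_y
        w = np.exp(-(gx ** 2 + gy ** 2) / (2.0 * h ** 2))   # continuity weight w(x, y)
        wp = np.pad(w, 1, mode="edge")
        num = np.zeros_like(f)
        den = np.zeros_like(f)
        for di in (-1, 0, 1):                                # weighted average over the 3x3 window
            for dj in (-1, 0, 1):
                num += p[1 + di:1 + di + rows, 1 + dj:1 + dj + cols] * \
                       wp[1 + di:1 + di + rows, 1 + dj:1 + dj + cols]
                den += wp[1 + di:1 + di + rows, 1 + dj:1 + dj + cols]
        f = num / den
    return f
```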
(2) Extraction of target feature points
After edge extraction, an ordered set of edge points representing the complete contour of each target is obtained; these point sets describe the shape of the object edges. If this full point set were used directly as the target's feature points, the computational load of the recognition algorithm would be enormous. To compress the point set, the present invention approximates the curve formed by each edge point set with a polygon and takes the polygon vertices as feature points.
For each edge point set P = {p_i(x_i, y_i) | i = 1, 2, …, N}, take the first point S(x_s, y_s) and the last point E(x_e, y_e) as the two endpoints of a virtual line segment, and compute the distance from every other point p_j of the set to this segment:
$$d_j = \frac{\lvert x_j(y_s - y_e) + y_j(x_s - x_e) + y_e x_s - y_s x_e \rvert}{\sqrt{(x_s - x_e)^2 + (y_s - y_e)^2}}, \qquad j = 2, 3, \ldots, N-1 \qquad (a)$$
where N is the number of edge points between the endpoints of the virtual segment. Write
$$d_{\max} = \max(d_j), \qquad j = 2, 3, \ldots, N-1 \qquad (b)$$
and denote the corresponding point by M(x_max, y_max). If d_max < d_th (a threshold), the virtual segment is taken as one side of the polygon. Otherwise, point M is made an endpoint of the virtual segment, i.e. (x_e, y_e) ← (x_max, y_max) while the other endpoint remains unchanged; formula (a) is used again to compute the distances from the edge points between point S and point E to the new virtual segment, and d_max is again compared with d_th. If d_max < d_th, the segment between the first point S and the newly generated point M becomes one side of the polygon; M then becomes the starting point of the virtual segment, i.e. (x_s, y_s) ← (x_max, y_max), the last point of the edge set becomes the terminal point of the virtual segment, and the process continues. Otherwise, the newly generated point M becomes the terminal point of the virtual segment, the starting point remains unchanged, d_max is recomputed with formula (b), and this continues until d_max < d_th.
After this process, each edge point set yields an ordered set Q of fitted polygon vertices, which serve as the target's feature points. A minimal code sketch of this approximation is given below.
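The sketch below writes the splitting procedure recursively (the recursive formulation, the function name, and the threshold name d_th are presentation choices, not taken from the original text):

```python
import numpy as np

def approx_polygon(points: np.ndarray, d_th: float) -> np.ndarray:
    """Approximate an ordered edge point set (N x 2) by polygon vertices."""
    def split(lo: int, hi: int) -> list:
        s, e = points[lo], points[hi]
        seg = e - s
        norm = float(np.hypot(seg[0], seg[1]))
        if hi - lo < 2 or norm == 0.0:
            return [lo, hi]
        rel = points[lo + 1:hi] - s
        # perpendicular distance of the interior points to the virtual segment S-E (formula (a))
        d = np.abs(seg[0] * rel[:, 1] - seg[1] * rel[:, 0]) / norm
        k = int(np.argmax(d))
        if d[k] < d_th:                        # segment accepted as one side of the polygon
            return [lo, hi]
        m = lo + 1 + k                         # farthest point M becomes a new endpoint
        return split(lo, m)[:-1] + split(m, hi)
    idx = split(0, len(points) - 1)
    return points[idx]
```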
This part describes the left-right image registration method:
(1) Characteristics of the images collected by this binocular system
In the present invention, the left and right images collected by the binocular vision system have the following characteristics:
1) By the parallax principle, a point in space has equal x and z coordinates in the two camera frames and differs only along the y axis. Any matched pixel pair in the left and right pixel planes should therefore satisfy U_L = U_R and V_L − V_R = l, where l depends on the depth of that spatial point. Taking this constraint into account effectively improves the accuracy of feature-point matching.
2) The shape of the crane load and similar information is known a priori. Viewed from above, the load is circular, so its projection in the left and right images should be a circle (or an ellipse close to a circle), and the load should occupy a region near the image center.
Therefore, if the load is first identified from its shape features in each image and then used as an important matching reference between the left and right images, in combination with characteristic 1), the accuracy of matching can be improved effectively.
(2) Identifying the load
As shown in Fig. 3, to judge whether a given closed boundary line (301) is the contour of the load, the geometric features of the corresponding polygon must be examined, namely its perimeter, mean polar distance, and area.
First the polygon is normalized (302). Let the feature point set of the polygon be Q = {q_i(x_i, y_i) | i = 1, 2, …, n} and let the centroid of Q be C(x̄, ȳ); then
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \qquad (c)$$
Define the normalization factor
$$D = \max \lvert q_i - C \rvert = \max\left(\sqrt{(x_i - \bar{x})^2 + (y_i - \bar{y})^2}\right) \qquad (d)$$
The normalized feature points q_i'(x_i', y_i') are related to the original feature points q_i(x_i, y_i) by
$$x_i' = (x_i - \bar{x})/D, \qquad y_i' = (y_i - \bar{y})/D, \qquad i = 1, 2, \ldots, n \qquad (e)$$
After normalization, the perimeter, mean polar distance, and area of the normalized polygon are computed:
The perimeter of the normalized polygon is defined as
$$L = \sum_{i=1}^{n} l_i = \sum_{i=1}^{n}\sqrt{(x_i' - x_{i+1}')^2 + (y_i' - y_{i+1}')^2} \qquad (f)$$
where l_i is the Euclidean distance between feature points q_i'(x_i', y_i') and q_{i+1}'(x_{i+1}', y_{i+1}'), with i + 1 = 1 when i = n.
The mean polar distance of the normalized polygon is
$$\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i = \frac{1}{n}\sum_{i=1}^{n}\sqrt{x_i'^2 + y_i'^2} \qquad (g)$$
where d_i is the distance from feature point q_i'(x_i', y_i') to the centroid C'(0, 0).
The area of the normalized polygon is
$$S = \sum_{i=1}^{n} \Delta S_i = \sum_{i=1}^{n}\sqrt{s(s - d_i)(s - d_{i+1})(s - l_i)} \qquad (h)$$
where s = (d_i + d_{i+1} + l_i)/2 and ΔS_i is the area of the triangle formed by feature points q_i', q_{i+1}' and the centroid C', with i + 1 = 1 when i = n.
If the target contour is a standard circle, it becomes a unit circle after normalization. The ratios of the normalized target's geometric features to those of the unit circle (radius r = 1) are used as characteristic parameters, namely:
Perimeter ratio: c_1 = L/(2πr) = L/(2π),
Radius ratio: c_2 = d̄/r = d̄,
Area ratio: c_3 = S/(πr²) = S/π.
If the values c_1, c_2, and c_3 obtained from the formulas above simultaneously satisfy c_1 > T_1, c_2 > T_2, and c_3 > T_3 (where T_1, T_2, and T_3 are the thresholds for c_1, c_2, and c_3), the target is circular and is taken to be the load.
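A minimal sketch of this load test based on the ratios c_1, c_2, and c_3 (the numeric thresholds below are illustrative assumptions, not values from the text):

```python
import numpy as np

def is_load(vertices: np.ndarray, t1: float = 0.9, t2: float = 0.9, t3: float = 0.9) -> bool:
    """Decide whether a closed contour polygon (n x 2 vertices) is the roughly circular load."""
    c = vertices.mean(axis=0)                              # centroid, formula (c)
    D = np.linalg.norm(vertices - c, axis=1).max()         # normalization factor, formula (d)
    q = (vertices - c) / D                                 # normalized vertices, formula (e)
    nxt = np.roll(q, -1, axis=0)                           # q_{i+1}, wrapping back to q_1
    l = np.linalg.norm(q - nxt, axis=1)                    # side lengths l_i
    L = l.sum()                                            # perimeter, formula (f)
    d = np.linalg.norm(q, axis=1)                          # polar distances d_i
    d_next = np.roll(d, -1)
    s = (d + d_next + l) / 2.0
    area = np.sqrt(np.clip(s * (s - d) * (s - d_next) * (s - l), 0.0, None)).sum()  # formula (h)
    c1 = L / (2.0 * np.pi)                                 # perimeter ratio
    c2 = d.mean()                                          # radius ratio (mean polar distance, (g))
    c3 = area / np.pi                                      # area ratio
    return c1 > t1 and c2 > t2 and c3 > t3
```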
(3) Feature matching
In the present invention, exploiting the ordered arrangement of the edge feature points, the feature points are connected in sequence into feature line segments, the total curvature of each segment is computed, the segments are matched, and the matched segments are expanded back into feature points according to a fixed rule, thereby matching the feature points.
Because the left and right images are taken from different viewpoints, the same target may differ considerably in its geometric appearance; however, a geometric transformation relates the target's images at different viewpoints, and the total curvature of a line segment is approximately invariant under it. Fig. 4 gives the computation model of total curvature.
Here l_1 is the distance between points p_1 and p_2, l_2 the distance between p_2 and p_3, l_3 the distance between p_3 and p_4, l_13 the distance between p_1 and p_3, and l_24 the distance between p_2 and p_4; θ_1 is the angle between segments p_1p_2 and p_2p_3, and θ_2 is the angle between segments p_2p_3 and p_3p_4.
By the law of cosines,
$$\cos\theta_1 = \frac{l_1^2 + l_2^2 - l_{13}^2}{2 l_1 l_2}, \qquad \theta_1 = \arccos\frac{l_1^2 + l_2^2 - l_{13}^2}{2 l_1 l_2},$$
so the curvature at point p_1 is
$$c_1 = \frac{\theta_1}{l_1 + l_2}.$$
The curvature c_2 at point p_2 is obtained in the same way.
Therefore the total curvature between points p_1 and p_2 is
$$C_{12} = l_2 \cdot \lvert c_1 - c_2 \rvert.$$
The specific algorithm for matching feature line segments is as follows:
1) For the feature points to be matched in the left and right images, connect them in sequence into feature line segments and compute the total curvature of each segment.
2) Compare each value in the set of total curvatures of line segments from the left image, in turn, with all total curvatures from the right image; if the absolute value of the difference lies within a given range, the two feature segments are considered matched.
3) Repeat step 2) until all matching results are obtained.
4) Determine the dominant cyclic order exhibited by the feature segments in the matching results and remove matches that do not follow this cycle.
5) Expand the results remaining after mismatch removal into feature points according to the fixed rule, which gives the feature-point matching results.
A code sketch of steps 1) and 2) follows.
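The sketch computes the total curvatures along each ordered feature chain and pairs segments greedily (the tolerance value and the one-to-one greedy policy are assumptions, not specified in the text):

```python
import numpy as np

def total_curvatures(pts: np.ndarray) -> np.ndarray:
    """Total curvature C between consecutive points of an ordered feature chain (n x 2)."""
    def curvature(i: int) -> float:
        l1 = np.linalg.norm(pts[i] - pts[i - 1])
        l2 = np.linalg.norm(pts[i + 1] - pts[i])
        l13 = np.linalg.norm(pts[i + 1] - pts[i - 1])
        cos_t = np.clip((l1 ** 2 + l2 ** 2 - l13 ** 2) / (2.0 * l1 * l2), -1.0, 1.0)
        return float(np.arccos(cos_t)) / (l1 + l2)          # curvature at point p_i
    out = []
    for i in range(1, len(pts) - 2):
        l2 = np.linalg.norm(pts[i + 1] - pts[i])
        out.append(l2 * abs(curvature(i) - curvature(i + 1)))
    return np.asarray(out)

def match_segments(left_pts: np.ndarray, right_pts: np.ndarray, tol: float = 0.05) -> list:
    """Pair left/right feature segments whose total curvatures differ by less than tol."""
    cl, cr = total_curvatures(left_pts), total_curvatures(right_pts)
    used, pairs = set(), []
    for i, c in enumerate(cl):
        j = int(np.argmin(np.abs(cr - c)))
        if abs(cr[j] - c) < tol and j not in used:          # greedy one-to-one assignment
            used.add(j)
            pairs.append((i, j))
    return pairs
```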
This part describes the method of separating foreground from background by iteration.
In the present invention, the foreground mainly refers to the crane's own key working parts (hook, load, hoisting rope, and so on) and the suspended object, and the background mainly refers to the other objects in the field of view.
In the present invention, the foreground and the background have the following characteristics:
Because the image acquisition period is short, the motion of the foreground relative to the camera coordinate system during crane operation is very slow, so its position in the image is essentially unchanged; the motion of the background is comparatively obvious, and its position changes noticeably between consecutively acquired frames.
Based on these characteristics, the present invention prefers an iterative method to separate the foreground from the background in the field of view. Because the target positions differ from moment to moment, the earlier frames are first rotated, with the current frame as the reference, according to parameters such as the crane arm length and inclination. The iteration itself is: compute difference images between the left images, and between the right images, of several adjacent transformed frames; if the difference for a given scene element is below the set threshold, it is considered foreground, otherwise background, and the positions of both are recorded.
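A minimal sketch of the difference-image test (rotation compensation of the earlier frames is assumed to have been applied already; averaging the difference over the frame pairs is a presentation choice):

```python
import numpy as np

def foreground_mask(frames: list, thresh: float) -> np.ndarray:
    """True where adjacent compensated frames barely differ, i.e. foreground.

    frames: list of rotation-compensated grayscale frames of the same view.
    thresh: the difference threshold used in the iteration.
    """
    diffs = [np.abs(frames[k + 1].astype(float) - frames[k].astype(float))
             for k in range(len(frames) - 1)]
    mean_diff = np.mean(diffs, axis=0)      # accumulated difference image over the frame pairs
    return mean_diff < thresh               # small change -> foreground, large change -> background
```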
This part describes how the three-dimensional information of the crane's field of view is reconstructed and obstacles are identified.
(1) Computing the three-dimensional coordinates of each object
Using the established camera coordinate model and the known left and right pixel images, the world coordinates of the point corresponding to each matched feature-point pair are obtained. The procedure is as follows:
Let A(X, Y, Z) be an arbitrary feature point in space, with coordinates in the left and right imaging planes and pixel planes, in order, (u_L, v_L), (u_R, v_R), (U_L, V_L), and (U_R, V_R).
From formulas (7) and (8):
$$\mu (U_L, V_L, 1)^T = K[R\ t](X, Y, Z, 1)^T \qquad (9)$$
$$\mu' (U_R, V_R, 1)^T = K[R'\ t'](X, Y, Z, 1)^T \qquad (10)$$
Solving formulas (9) and (10) simultaneously uniquely determines A(X, Y, Z).
With the method above, the three-dimensional world coordinates of any feature point can be obtained; the target surfaces are then approximated with a three-dimensional mesh, so the three-dimensional information of each object, such as position and shape, is also uniquely determined.
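A minimal sketch of solving formulas (9) and (10) for A(X, Y, Z): the two projections are stacked into a homogeneous linear system and solved by least squares (the least-squares treatment of the over-determined system is an assumption, not stated in the text):

```python
import numpy as np

def triangulate(S_left: np.ndarray, S_right: np.ndarray, uv_l, uv_r) -> np.ndarray:
    """Recover the world point (X, Y, Z) of a matched pixel pair.

    S_left, S_right: 3x4 projection matrices S = K[R t] and S' = K[R' t'] (formulas (7), (8)).
    uv_l, uv_r: pixel coordinates (U_L, V_L) and (U_R, V_R) of the matched point.
    """
    rows = []
    for P, (u, v) in ((S_left, uv_l), (S_right, uv_r)):
        rows.append(u * P[2] - P[0])        # eliminate the scale factor mu from mu*(u, v, 1)^T = P*T
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)                     # 4x4 homogeneous system A * (X, Y, Z, 1)^T = 0
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                     # dehomogenize to (X, Y, Z)
```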
(2) Obstacle identification
For the foreground, whose identification has been described above, the three-dimensional coordinates are obtained directly with the method above. For the background, whether each object is an obstacle must be judged from its position.
To identify obstacles, the interference surface must first be determined, which delimits the interference region and the non-interference region; whether a suspect object is an obstacle is then judged from its position relative to the interference surface.
The obstacle identification method is described below with reference to Fig. 5.
Fig. 5 is the region model for obstacle identification provided by the invention. In Fig. 5, point B is the lowest point of the load at the current moment, H_0 is the vertical distance from point B to the camera's optical center, region 501 is the common field of view of the binocular camera system (region 218 in Fig. 2), plane 502 is the horizontal plane through point B, i.e. the interference surface, region 503 is the part of the common field of view (501) above the interference surface (502), i.e. the interference region, region 504 is the part of the common field of view (501) below the interference surface (502), i.e. the non-interference region, and suspect objects 505 and 506 are objects of unknown shape, color, speed, and so on.
Suspect object 505 lies partially within the interference region (503) and may threaten the safe operation of the crane, so it is regarded as an obstacle; suspect object 506 lies entirely within the non-interference region (504) and does not, for the time being, affect the safe operation of the crane, so it is regarded as a non-obstacle.
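A minimal sketch of the interference-region test (the sign convention, with depth measured downward from the camera optical centre, is an assumption):

```python
import numpy as np

def is_obstacle(point_depths: np.ndarray, h0: float) -> bool:
    """True if any reconstructed point of the suspect object lies in the interference region.

    point_depths: depths of the object's surface points below the camera optical centre
                  (assumed convention).
    h0: depth H0 of the lowest point B of the load, i.e. of the interference surface (502).
    """
    return bool(np.any(point_depths <= h0))   # at or above the interference surface -> region 503
```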
This part briefly describes the computation of motion parameters, the analysis, and the output of early-warning information.
On the one hand, the motion state of the foreground can be obtained from the crane itself; on the other hand, the motion parameters of an obstacle can be obtained from the change of its position. In addition, the current distance between any obstacle and the foreground is observable. These parameters are fed into an expert system which, according to the set thresholds, determines the warning level, analyzes the safety state of the crane, predicts collisions, outputs the final early-warning information, and raises an alarm in real time in dangerous situations.
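A minimal sketch of deriving velocity and acceleration from successive obstacle positions and of a simple distance-based warning rule, standing in for the expert system mentioned above (the constant-velocity assumption and the numeric thresholds are illustrative only):

```python
import numpy as np

def motion_parameters(positions: np.ndarray, dt: float):
    """Velocity and acceleration of an obstacle from its last three 3-D positions."""
    v = (positions[-1] - positions[-2]) / dt
    v_prev = (positions[-2] - positions[-3]) / dt
    return v, (v - v_prev) / dt

def warning_level(load_pos, obstacle_pos, v_rel, d_alarm: float = 2.0, d_warn: float = 5.0) -> str:
    """Grade the danger from the relative distance and the closing speed.

    v_rel: velocity of the obstacle relative to the load; d_alarm and d_warn are illustrative thresholds.
    """
    gap_vec = np.asarray(load_pos, dtype=float) - np.asarray(obstacle_pos, dtype=float)
    gap = float(np.linalg.norm(gap_vec))
    closing = float(np.dot(np.asarray(v_rel, dtype=float), gap_vec)) > 0.0   # obstacle approaching the load
    if gap < d_alarm and closing:
        return "alarm"        # collision predicted -> start the alarm
    if gap < d_warn:
        return "warning"      # obstacle inside the warning distance
    return "safe"
```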
The above are only preferred embodiments of the present invention and do not limit the invention in any form. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the present invention, use the methods and technical content disclosed above to make reasonable improvements to the technical solution of the present invention, or modify it into equivalent embodiments of equivalent changes. Therefore, any simple modification, equivalent change, or adaptation made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution, still falls within the scope of protection of the present invention.

Claims (6)

1. A binocular-vision-based crane obstacle monitoring and early-warning method, characterized in that a parallel binocular vision system is mounted on the crane, the cameras of the parallel binocular vision system point vertically downward and are fixed at the top of the crane, and they collect top-view images in real time; the method comprises the following steps:
Step 1: build the binocular vision model;
Step 2: acquire images in real time and preprocess the acquired images;
Step 3: detect object edges in the preprocessed images and extract image features;
select an operator for edge detection and extract the contour information of the targets; approximate the curve formed by each edge point set with a polygon and take the polygon vertices as feature points;
Step 4: complete left-right image registration;
exploiting the ordered arrangement of the edge feature points, connect the feature points in sequence into feature line segments, compute the total curvature of each segment, match the feature segments, and expand the matched segments back into feature points, thereby matching the feature points;
Step 5: separate foreground from background in the images by iteration;
the foreground and the background have different motion characteristics, so they show different positional relationships in consecutive frames; this property is used to separate foreground from background by iteration;
Step 6: reconstruct three-dimensional information based on the binocular vision model and identify the foreground and obstacles;
Step 7: compute the motion parameters, analyze them, output early-warning information, and return to step 2;
in step 6, the three-dimensional world coordinates of the feature points are computed from the binocular vision model, and the target surfaces are approximated with a three-dimensional mesh, thereby determining the three-dimensional information of each object; the three-dimensional information includes position and shape information;
obstacles are identified as follows:
the horizontal plane through the lowest point B of the load at the current moment is the interference surface;
the part of the common field of view of the binocular camera system above the interference surface is the interference region;
the part of the common field of view of the binocular camera system below the interference surface is the non-interference region;
if a suspect object lies partially or entirely within the interference region, the suspect object is an obstacle;
in step 7,
the motion state of the foreground is obtained from the crane itself; the motion parameters of an obstacle are obtained from the change of the obstacle's position;
according to the set thresholds, the warning level is determined, the safety state of the crane is analyzed, collisions are predicted, the final early-warning information is output, and an alarm is started in a dangerous situation.
2. The binocular-vision-based crane obstacle monitoring and early-warning method according to claim 1, characterized in that the binocular vision model is characterized by the following formulas:
for any point A in space, let its world coordinates be (X, Y, Z), its coordinates in the left and right camera frames be (x_L, y_L, z_L) and (x_R, y_R, z_R), its coordinates in the left and right imaging planes be (u_L, v_L) and (u_R, v_R), and its coordinates in the left and right pixel planes be (U_L, V_L) and (U_R, V_R); the parameters of the two cameras being identical, we have:
$$\mu \begin{bmatrix} U_L \\ V_L \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix};$$
$$\mu' \begin{bmatrix} U_R \\ V_R \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s' & U_0 \\ 0 & 1/d_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R' & t' \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix};$$
where μ and μ' are scale factors with μ = z_L and μ' = z_R; d_x and d_y are the physical dimensions of a pixel along the u axis and v axis of the imaging-plane coordinate system; (U_0, V_0) are the pixel-plane coordinates of the origin of the imaging plane; s' is the skew factor that accounts for the imaging-plane coordinate axes not being exactly orthogonal; R and R' are the rotation matrices of the left and right cameras, t and t' are the translation vectors of the left and right cameras, f is the focal length of the cameras, and 0^T = (0, 0, 0).
3. The binocular-vision-based crane obstacle monitoring and early-warning method according to claim 2, characterized in that the preprocessing comprises grayscale conversion and filtering, reduces the brightness difference between the left and right images, and enhances the edges and details of the images;
grayscale conversion uses the formula I(i, j) = [ω_r R(i, j) + ω_g G(i, j) + ω_b B(i, j)]/3, where R(i, j), G(i, j), and B(i, j) are the red, green, and blue components of pixel (i, j);
the weights are ω_r = 0.30, ω_g = 0.59, ω_b = 0.11;
the filtering is median filtering: first define a window W of 3×3 pixels; after sorting the pixels in the window by gray value, replace the gray value of the original pixel f(i, j) with the value at the middle position, which becomes the window-center gray value g(i, j), namely
g(i, j) = Med{ f(i−k, j−l), (k, l) ∈ W },
where f(i−k, j−l) are the gray values of the pixels in window W.
4. The binocular-vision-based crane obstacle monitoring and early-warning method according to claim 3, characterized in that, in step 4, registration is performed based on the identified load.
5. The binocular-vision-based crane obstacle monitoring and early-warning method according to claim 4, characterized in that, in step 5, the foreground comprises the crane's own key working parts and the suspended object, the crane's own key working parts including the hook, the load, and the hoisting rope; the background refers to all objects in the field of view other than the foreground;
the foreground and the background are separated as follows:
compute difference images between the left images, and between the right images, of several adjacent frames after the rotation transform; if the difference for a given scene element is below the set threshold, it is considered foreground, otherwise background, and the positions of the foreground and the background are recorded respectively.
6. A binocular-vision-based crane obstacle monitoring and early-warning system, characterized in that a parallel binocular vision system is mounted on the crane, the cameras of the parallel binocular vision system point vertically downward and are fixed at the top of the crane, and they collect top-view images in real time; obstacles are monitored and early warnings are issued using the binocular-vision-based crane obstacle monitoring and early-warning method according to claim 1.
CN201310462213.1A 2013-10-08 2013-10-08 Binocular-vision-based crane obstacle monitoring and early-warning method and system Expired - Fee Related CN103559703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310462213.1A CN103559703B (en) 2013-10-08 2013-10-08 Binocular-vision-based crane obstacle monitoring and early-warning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310462213.1A CN103559703B (en) 2013-10-08 2013-10-08 Binocular-vision-based crane obstacle monitoring and early-warning method and system

Publications (2)

Publication Number Publication Date
CN103559703A CN103559703A (en) 2014-02-05
CN103559703B true CN103559703B (en) 2016-07-06

Family

ID=50013942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310462213.1A Expired - Fee Related CN103559703B (en) 2013-10-08 2013-10-08 Binocular-vision-based crane obstacle monitoring and early-warning method and system

Country Status (1)

Country Link
CN (1) CN103559703B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484033B (en) * 2014-11-21 2017-10-03 上海同筑信息科技有限公司 Virtual reality display method and system based on BIM
CN105217324A (en) * 2015-10-20 2016-01-06 上海影火智能科技有限公司 A kind of novel de-stacking method and system
CN105678690A (en) * 2016-01-06 2016-06-15 中国航空无线电电子研究所 Image registration method on the basis of optical imaging sensor internal and external parameters
DE102016001684A1 (en) * 2016-02-12 2017-08-17 Liebherr-Werk Biberach Gmbh Method for monitoring at least one crane
CN109478320B (en) 2016-07-12 2022-03-18 深圳市大疆创新科技有限公司 Processing images to obtain environmental information
CN106534693B (en) * 2016-11-25 2019-10-25 努比亚技术有限公司 A kind of photo processing method, device and terminal
CN106709905B (en) * 2016-12-07 2020-03-31 成都通甲优博科技有限责任公司 Vibration damper fault online detection and identification method based on binocular vision image
CN107391631A (en) * 2017-07-10 2017-11-24 国家电网公司 A kind of electric transmission line channel solid space monitoring and fast ranging method
CN107473109B (en) * 2017-08-23 2019-02-01 廊坊中建机械有限公司 Tower crane collision-proof method and system
CN107590444B (en) * 2017-08-23 2020-05-22 深圳市易成自动驾驶技术有限公司 Method and device for detecting static obstacle and storage medium
CN109344677B (en) 2017-11-07 2021-01-15 长城汽车股份有限公司 Method, device, vehicle and storage medium for recognizing three-dimensional object
CN110667474B (en) * 2018-07-02 2021-02-26 北京四维图新科技股份有限公司 General obstacle detection method and device and automatic driving system
CN109035214A (en) * 2018-07-05 2018-12-18 陕西大中科技发展有限公司 A kind of industrial robot material shapes recognition methods
CN110163232B (en) * 2018-08-26 2020-06-23 国网江苏省电力有限公司南京供电分公司 Intelligent vision recognition vehicle board transformer coordinate system
CN110874544B (en) * 2018-08-29 2023-11-21 宝钢工程技术集团有限公司 Metallurgical driving safety monitoring and identifying method
CN109095356B (en) * 2018-11-07 2024-03-01 江苏徐工国重实验室科技有限公司 Engineering machinery and operation space dynamic anti-collision method, device and system thereof
CN110069990B (en) * 2019-03-18 2021-09-17 北京中科慧眼科技有限公司 Height limiting rod detection method and device and automatic driving system
CN110499802A (en) * 2019-07-17 2019-11-26 爱克斯维智能科技(苏州)有限公司 A kind of image-recognizing method and equipment for excavator
CN111047579B (en) * 2019-12-13 2023-09-05 中南大学 Feature quality assessment method and image feature uniform extraction method
CN110733983B (en) * 2019-12-20 2021-08-24 广东博智林机器人有限公司 Tower crane safety control system and control method thereof
CN112051853B (en) * 2020-09-18 2023-04-07 哈尔滨理工大学 Intelligent obstacle avoidance system and method based on machine vision
CN112287824A (en) * 2020-10-28 2021-01-29 杭州海康威视数字技术股份有限公司 Binocular vision-based three-dimensional target detection method, device and system
CN113537159B (en) * 2021-09-09 2021-12-03 丹华海洋工程装备(南通)有限公司 Crane risk data identification method based on artificial intelligence
CN114089364A (en) * 2021-11-18 2022-02-25 智能移动机器人(中山)研究院 Integrated sensing system device and implementation method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202220056U (en) * 2011-09-15 2012-05-16 上海科轻起重机有限公司 Protection system capable of preventing crane from colliding with obstacle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008312004A (en) * 2007-06-15 2008-12-25 Sanyo Electric Co Ltd Camera system and mechanical apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202220056U (en) * 2011-09-15 2012-05-16 上海科轻起重机有限公司 Protection system capable of preventing crane from colliding with obstacle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
The Crane Control Systems: A Survey;Pawel Hyla;《17th International Conference on Methods and Models in Automation and Robotics》;20120827;第505-509页 *
Research on a digital camera positioning model based on geometric mapping relations; Wang Wenfa; Computer & Digital Engineering; 2012-03-31 (No. 3); pp. 106-108 *

Also Published As

Publication number Publication date
CN103559703A (en) 2014-02-05

Similar Documents

Publication Publication Date Title
CN103559703B (en) Binocular-vision-based crane obstacle monitoring and early-warning method and system
WO2018028103A1 (en) Unmanned aerial vehicle power line inspection method based on characteristics of human vision
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN101957325B (en) Substation equipment appearance abnormality recognition method based on substation inspection robot
CN105809679B (en) Mountain railway side slope rockfall detection method based on visual analysis
CN110188724A (en) The method and system of safety cap positioning and color identification based on deep learning
CN112418103B (en) Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN103279765B (en) Steel wire rope surface damage detection method based on images match
CN102930334B (en) Video recognition counter for body silhouette
CN105404857A (en) Infrared-based night intelligent vehicle front pedestrian detection method
CN104063702A (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN103984961A (en) Image detection method for detecting foreign matter at bottom of vehicle
CN112329747B (en) Vehicle parameter detection method based on video identification and deep learning and related device
CN104537651B (en) Proportion detecting method and system for cracks in road surface image
CN104061907A (en) Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN108363953B (en) Pedestrian detection method and binocular monitoring equipment
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN103077526A (en) Train abnormality detection method and system with deep detection function
CN103632427A (en) Gate cracking protection method and gate control system
CN106548131A (en) A kind of workman's safety helmet real-time detection method based on pedestrian detection
CN112883948B (en) Semantic segmentation and edge detection model building and guardrail abnormity monitoring method
CN105844227A (en) Driver identity authentication method for school bus safety
CN107256413A (en) A kind of article monitoring method and device
CN112488995A (en) Intelligent injury judging method and system for automatic train maintenance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160706

Termination date: 20171008

CF01 Termination of patent right due to non-payment of annual fee