Detailed Description
The following detailed description of embodiments of the present invention is provided in conjunction with the accompanying drawings. The disclosure below presents specific embodiments of the apparatus and method for implementing the invention so that those skilled in the art can understand more clearly how to practise it. To simplify the disclosure, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples; this repetition is for simplicity and clarity and does not in itself dictate a relationship between the embodiments or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components, processing techniques and procedures are omitted so as not to unnecessarily obscure the invention. While the invention is described in conjunction with preferred specific embodiments, these are set forth merely for purposes of illustration and are not intended to limit its scope.
The embodiment of the invention provides a high-voltage line obstacle identification method that determines the type and position of an obstacle from its geometric primitive features combined with line structure information.
The method targets line fittings mounted on the ground wire, such as dampers, suspension clamps and strain towers. To facilitate span-to-span autonomous operation of the robot and identification by its sensors, the fittings on the ground wire are appropriately selected and modified, on the premise of preserving their electrical performance and function.
The damper consists of a weight of a given mass, a galvanized steel strand of high elasticity and strength, and a clamp group. Damper selection favours models whose protrusion above the wire is small and whose shape complexity is low, so that the sensor can easily detect and process them.
Suspension clamps come in top-loading, anti-corona and pendant types. As shown in figure 1, the suspension clamp adopts a lever-up type whose hanging point sits at the lower part of the clamp, and the hanging plate is reshaped into a C, so that the robot's travelling wheel can pass the clamp smoothly and the clamp obstacle is easy to detect and identify.
The ground wire of a high-voltage transmission line is not continuous at a tension tower: the two ends of the broken wire are each anchored to the tower's cross arm. This blocks the path of a robot travelling along the ground wire. To let the robot pass the tension tower smoothly and complete whole-line, span-to-span autonomous inspection, a strain bridge for the robot to cross must be added at the tension tower. Its structure is shown in figure 2.
To achieve a better identification effect, the method adopts the following identification strategy:
cameras are mounted at the front and rear ends of the inspection robot body, with the optical axis in the same plane as the robot's direction of travel. The background of the captured image is therefore the sky, which reduces background complexity. Moreover, the projections of the damper end face, the suspension clamp and the strain bridge onto the camera's imaging plane are typical geometric shapes. Characteristic geometric shapes in the image, together with their positional relationship to the conductors, can therefore be selected to describe the obstacles to be identified.
The damper, the suspension clamp and the strain bridge have no easily distinguished surface texture and no obvious colour features, are hinged to one another, and are difficult to segment into separate regions. Analysing the appearance, shape and structure of these obstacles, relatively simple geometric primitives such as straight lines, rectangles, circles, arcs and ellipses are chosen as cues for judging the obstacle.
The overall implementation and principle of the method are shown in fig. 3 and comprise two major parts: the first performs ground-wire identification and positioning, and the second performs obstacle identification.
For the first part, the principle employed is as follows. The damper, the suspension clamp and the strain bridge are fixedly connected to the ground wire, so when identifying obstacles the ground wire is first identified, positioned and tracked. First, a camera collects images and a Gaussian filter removes high-frequency noise; the slight blurring suppresses fine texture edges and favours straight-line detection of the ground-wire edges. The image is then reduced to 320 × 240 pixels to speed up subsequent processing, and its edge information is extracted with the Canny edge detector. Finally, all straight line segments longer than a given threshold are extracted from the edge image with the progressive probabilistic Hough transform (PPHT).
The specific operations for implementing the first part are as follows:
when the inspection robot moves along the ground line, the projection of the ground line in the image plane takes the shape of a rod with a thick upper part and a thin lower part, and stands in the image plane, as shown in fig. 4.
Take the upper-left corner of the image as the origin, the horizontal direction as the x axis and the vertical direction as the y axis. Shaking of the robot causes the position and inclination of the ground line's projection to vary along the x-axis direction of the image plane, but experiments show that this variation stays within a certain range. The detected line segments can therefore be classified by a rule-based classifier to find the left edge line L_L and the right edge line L_R of the ground wire, thereby locating the ground line.
Let E = {L_i | i = 1, 2, …, n} be the set of all line segments detected by the PPHT and C the candidate set of wire edge lines. The classifier rule is:
if any straight line segment L_i ∈ E satisfies the conditions:
(1) its projection length onto the y axis satisfies Δy ≥ L_y, with L_y = 60;
(2) its inclination angle θ satisfies 75° < θ < 105°;
(3) the y component y_1 of its upper endpoint satisfies y_1 < Y_0, where Y_0 is a preset threshold, determined experimentally to be 30;
then L_i ∈ C.
The lines in the candidate set C are then sorted in increasing order of the x component of their upper endpoints, and for each pair of adjacent lines L_i, L_j the difference between the x components of their upper endpoints is computed. If
T_1 < (x_j − x_i) < T_2, (x_j > x_i, i ≠ j),
where T_1 and T_2 are determined from the ground-wire model and its projection width, then L_i and L_j are taken as the left and right edge lines L_L and L_R of the ground wire, respectively.
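The three candidate rules and the adjacent-pair test above can be sketched in plain Python (the helper names and the T_1/T_2 values are illustrative assumptions; L_y and Y_0 follow the text):

```python
import math

LY, Y0 = 60, 30       # y-projection threshold and upper-endpoint threshold (from the text)
T1, T2 = 5, 40        # assumed gap bounds; the real values come from the wire model

def is_candidate(seg):
    """seg = (x1, y1, x2, y2), with (x1, y1) the upper endpoint (smaller y)."""
    x1, y1, x2, y2 = seg
    dy = abs(y2 - y1)                                   # projection length onto the y axis
    theta = math.degrees(math.atan2(y2 - y1, x2 - x1))  # inclination angle vs the x axis
    return dy >= LY and 75 < theta < 105 and y1 < Y0

def find_wire_edges(segments):
    cand = sorted((s for s in segments if is_candidate(s)), key=lambda s: s[0])
    for li, lj in zip(cand, cand[1:]):                  # adjacent pairs after sorting by x
        if T1 < (lj[0] - li[0]) < T2:
            return li, lj                               # left/right edge lines L_L, L_R
    return None

# Two near-vertical wire edges plus one short noise segment
segs = [(100, 5, 102, 90), (115, 4, 117, 91), (10, 10, 20, 12)]
edges = find_wire_edges(segs)
```

The short noise segment fails rule (1) and is discarded; the two near-vertical segments survive and pair up as the wire edges.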
Therefore, the identification and the positioning of the ground wire are completed, the general range of the area where the obstacle is located is determined, and a foundation is laid for the identification of the subsequent obstacle.
The second part is realized by the following specific steps:
first, straight line, rectangle, circle, arc, and ellipse geometric primitive features are detected from the edge image.
The Hough transform is widely used for geometric shape detection in images. It defines a mapping from image space to parameter space that sends each point of the image plane to a curve or surface in parameter space, recording every geometric configuration that could pass through the point (for example, every straight line, circle or ellipse through it); the voting result then reveals which configurations actually exist. Configurations that are really present stand out because they pass through many points and so receive many votes; the Hough transform thus clusters edge points by a voting mechanism.
The basic idea of Hough-transform line detection is point-line duality. Suppose a straight line lies on the image plane and its position is to be determined.
On the x-y plane of image space, the line can be written as y = ax + b,
where a is the slope and b is the intercept. Let (x_i, y_i) be any point on this line. If a and b are regarded as variables, the equation b = −a x_i + y_i describes a straight line in the a-b plane of parameter space. It follows that collinear points in the x-y plane of image space correspond to a family of lines through a common point (a_0, b_0) in the a-b plane of parameter space; conversely, all lines intersecting at one point of the a-b plane have corresponding collinear points in image space. Based on these relationships, the Hough transform converts the problem of detecting a set of collinear points in image space into the equivalent problem of detecting concurrent lines in parameter space.
The parameter space used above is the two-dimensional slope-intercept plane. Since both the slope a and the intercept b are unbounded real numbers, discretely quantising the a-b plane and allocating the memory of the two-dimensional accumulator array become very difficult. To solve this, the normal parameterisation x cos θ + y sin θ = ρ is used to represent a line on the x-y plane of image space, where ρ is the perpendicular distance from the origin to the line and θ is the angle between the line's normal and the positive x axis. If θ ∈ [0, π), every line in the x-y plane corresponds one-to-one to a point in the θ-ρ plane.
Let {(x_1, y_1), …, (x_n, y_n)} be a set of points in the image plane x-y. For any point (x_i, y_i), if θ and ρ are regarded as variables, the equation ρ = x_i cos θ + y_i sin θ defines a sinusoid in the θ-ρ plane of parameter space. It is easy to verify that collinear points in the x-y plane correspond to a family of sinusoids through a common point (θ_0, ρ_0) in the θ-ρ plane. With this parameterisation, the problem of detecting collinear point sets in image space is likewise transformed into the equivalent problem of detecting concurrent sinusoids in parameter space.
Because the size of the image under examination is known, there is some real number R such that every possible line in the image satisfies ρ ∈ [−R, +R]. The line parameters of interest therefore form a bounded subset of the θ-ρ plane, namely the region defined by θ ∈ [0, π), ρ ∈ [−R, +R]. This bounded region is discretised with a suitable grid; each grid node is treated as an accumulator, and the whole discretised grid as a two-dimensional accumulator array. For each edge point (x_i, y_i) in the image plane, the corresponding sinusoid in the θ-ρ plane is computed, and the accumulator at every grid node the curve passes is incremented by 1, so each cell of the array records the number of sinusoids through it. After all edge points in the image have been processed, the cells with high counts are examined: if an accumulator cell (θ_i, ρ_j) has accumulated value K, then K edge points lie on the line in the x-y plane of image space defined by the parameters (θ_i, ρ_j). The problem of detecting collinear points in the image plane is thereby converted into an easily implemented peak-detection problem over the two-dimensional accumulator array.
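The sinusoid-voting scheme just described can be sketched with a small NumPy accumulator (bin sizes and the test points are illustrative):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=400):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * rho_max), dtype=np.int32)   # theta x rho bins
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)        # one sinusoid per point
        bins = np.round(rho).astype(int) + rho_max           # shift rho into [0, 2R)
        acc[np.arange(n_theta), bins] += 1                   # vote along the sinusoid
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], ri - rho_max, acc.max()               # peak (theta, rho, votes)

# Collinear points on the vertical line x = 10 should peak at theta = 0, rho = 10
pts = [(10, y) for y in range(0, 100, 5)]
theta, rho, votes = hough_lines(pts)
```

Every point's sinusoid passes the cell (θ = 0, ρ = 10), so that accumulator cell collects one vote per point and becomes the peak.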
For a rectangular contour curve, the geometric features are the side lengths (length and width). After voting through the straight-line Hough transform, four peaks should appear; the four peaks H_1 = (ρ_1, θ_1), H_2 = (ρ_2, θ_2), H_3 = (ρ_3, θ_3) and H_4 = (ρ_4, θ_4) in the voting space correspond to the four sides of the rectangle, i.e. the edges P_1P_2, P_2P_3, P_3P_4 and P_4P_1.
As can be seen from the geometric features of the rectangle, the following relationship should be satisfied in the voting space:
Δθ = |θ_i − θ_j| < T_θ, Δρ = |ρ_i + ρ_j| < T_ρ;
|C(ρ_i, θ_i) − C(ρ_j, θ_j)| < T_L · (C(ρ_i, θ_i) + C(ρ_j, θ_j))/2;
where T_θ is a threshold on the angle θ, used to judge whether a pair of peaks (H_i, H_j) represents a pair of parallel edges: if θ_i ≈ θ_j, a pair of parallel sides of the rectangle is found. T_ρ is a distance threshold, used to judge whether the corresponding peak pair (H_i, H_j) is symmetric about the θ axis: if ρ_i ≈ −ρ_j, it is. T_L is a normalised threshold, used to judge whether the opposite sides corresponding to H_i and H_j have approximately the same length, C(ρ_j, θ_j) being the number of votes at point H_j: if C(ρ_i, θ_i) ≈ C(ρ_j, θ_j), the two sides corresponding to H_i and H_j are considered equal in length. A detected peak pair, corresponding to a set of opposite sides of the rectangle, can be expressed as follows:
When k = 1, i = 1 and j = 2, representing the pair of opposite sides E_1 and E_2; when k = 2, i = 3 and j = 4, representing the other pair E_3 and E_4.
Finally, whether the shape is a rectangle is judged by comparing the included angle between the two pairs of opposite sides with the following formula:
Δα = ||α_1 − α_2| − 90°| < T_α;
the threshold TA is used to determine whether two opposite sides E1 and E2 are perpendicular. Similarly, for other contour curves roughly divided according to the shape angles, the specific shapes of the contour curves can be further judged and identified according to the respective geometrical characteristics after the geometrical characteristics are extracted through linear voting. On the rough basis, geometric features can be fully utilized, and straight line voting is adopted to improve the execution speed and accuracy of the algorithm.
Any circle on the image plane x-y, made up of edge points, can be expressed as:
(x − a)² + (y − b)² = r², where (a, b) are the coordinates of the centre and r is the radius. Let (x_i, y_i) be any edge point on the image plane. When a, b and r are regarded as variables, the equation (a − x_i)² + (b − y_i)² = r² defines a cone in the three-dimensional parameter space a-b-r. Accordingly, concyclic edge points on the image plane x-y map to a family of cones intersecting at one point in a-b-r space, and the problem of detecting a concyclic point set on the two-dimensional image plane becomes detecting a family of concurrent cones in three-dimensional parameter space. As with line detection, the parameter space is discretised along the a, b and r axes into a three-dimensional grid, represented by a three-dimensional accumulator array whose index subscripts correspond one-to-one to grid coordinates. For each edge point (x_i, y_i) on the image plane, the corresponding cone in a-b-r space is computed, the accumulator at every grid node the cone passes is incremented by 1, and each cell of the array records the number of cone surfaces through it. After all edge points in the image have been processed, the cells with high counts are examined: if an accumulator cell (a_i, b_j, r_k) has accumulated value K, then K edge points lie on the circle in the x-y plane of image space defined by (a_i, b_j, r_k).
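The three-dimensional (a, b, r) voting can be sketched in NumPy on a deliberately tiny grid (grid sizes and sample counts are illustrative):

```python
import numpy as np

def hough_circle(points, size=40, r_max=20, n_theta=72):
    acc = np.zeros((size, size, r_max), dtype=np.int32)      # a x b x r bins
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    for x, y in points:
        for r in range(1, r_max):
            a = np.round(x - r * np.cos(thetas)).astype(int) # candidate centres
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (0 <= a) & (a < size) & (0 <= b) & (b < size)
            np.add.at(acc, (a[ok], b[ok], r), 1)             # cone-surface voting
    return np.unravel_index(np.argmax(acc), acc.shape)       # peak cell (a, b, r)

# Edge points sampled from the circle with centre (20, 20) and radius 10
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(20 + 10 * np.cos(t), 20 + 10 * np.sin(t)) for t in angles]
a, b, r = hough_circle(pts)
```

Every sampled edge point votes for the true centre/radius cell, so the accumulator peak lands at (or immediately next to) (20, 20, 10).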
Describing an arbitrary ellipse on the image plane requires five unknown parameters: the centre coordinates (x_0, y_0), the semi-major axis a, the semi-minor axis b and the rotation angle α. The corresponding parametric equations are:
x = x_0 + a cos t cos α − b sin t sin α, y = y_0 + a cos t sin α + b sin t cos α;
or, expressed as an implicit equation:
x² + Bxy + Cy² + Dx + Ey + F = 0.
To avoid excessive computation time and storage overhead, the five-dimensional parameter space is usually decomposed into several low-dimensional subspaces using the gradient directions of the edge points and the geometric properties of the ellipse, and the ellipse is then detected by Hough transforms in those sub-parameter spaces.
And then, judging the type and the position of the obstacle according to the detection result of the geometric primitive feature and the line structure information.
The left and right weights of the damper are rod-shaped with circular or elliptical end faces and are joined by a steel strand, and the greater part of the damper hangs below the ground wire. A damper can therefore be distinguished from other obstacles by using the circles and ellipses of the end faces as cues, combined with the left-right distribution of the weights and their position below the ground wire.
The suspension clamp is reshaped into a solid C: its upper part hangs from the tower cross arm through a hanging ring, and its lower part grips the ground wire through the lever-up clamp. Identification of the suspension clamp takes the solid C shape as the cue; combined with the positions of the upper and lower structures connecting the C shape to the ground wire, the clamp is easily distinguished.
Seen from the camera, the strain bridge presents two large circular arcs connected by several solid blocks, with several hollowed-out sections between them. The middle of the arcs is connected to the tower by a solid pole, and a triangular solid sits at each end of the arcs, with the ground wire passing through it. The strain bridge can be identified from these shape and structure features alone.
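The three sets of cues above might be combined into a simple decision rule like the following sketch (the boolean feature names are hypothetical summaries of the primitive-detector outputs, not terms from the disclosure):

```python
def classify_obstacle(f):
    """f: dict of boolean cues extracted from the primitive detectors."""
    if f.get("paired_circles_or_ellipses") and f.get("below_ground_wire"):
        return "damper"                # left/right weights with circular end faces
    if f.get("solid_c_shape") and f.get("connected_above_and_below_wire"):
        return "suspension clamp"      # solid C hung from the cross arm
    if f.get("two_large_arcs") and f.get("triangular_ends"):
        return "strain bridge"         # arc bridge with triangular end blocks
    return "unknown"
```

In practice each flag would come from the corresponding primitive detector plus the ground-wire position found in the first part.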
And finally, tracking the obstacle by adopting a Kalman filter, and further verifying the obstacle identification result by utilizing the motion information.
Kalman filtering is a linear filtering and prediction method divided into two steps, prediction and correction. Prediction estimates the current state from the previous state; correction fuses the predicted state with the observed state to estimate the optimal state. For a target detected in the video sequence, the Kalman filter first predicts its current state; the detection result in the current frame is then fed to the filter as the observation, and the corrected result is taken as the target's true state in the current frame.
When the inspection robot travels along the line, the targets detected in the video sequence also undergo relative motion. The information describing a moving target can be represented by a state sequence {x_k}, k = 0, 1, …, which evolves over time as x_k = φ_k(x_{k−1}, w_k). The observed measurement sequence {z_k}, k = 1, …, satisfies z_k = H_k(x_k, v_k). φ_k and H_k are time-varying vector functions, and the noise sequences w_k and v_k follow independent Gaussian distributions. In the linear case:
x_k = φ_{k,k−1} x_{k−1} + λ_k w_k;
z_k = H_k x_k + v_k;
where φ_{k,k−1} is the transition matrix from x_{k−1} to x_k, λ_k is a parameter of the process noise, and H_k is the observation matrix. Target tracking amounts to estimating the current state x_k by the minimum mean-square-error method from all observation vectors z_{1:k} from the start to the present.
In addition to the recognition process described above, the method can also determine the positional relationship between the inspection robot and the obstacle based on visual recognition or photoelectric ranging.
According to the characteristics of the fitting obstacles on the ground wire of the high-voltage transmission line, the vision-based ranging method adopts monocular ranging using the pinhole camera model: the image formed on the camera's imaging plane and the projection of the external object onto a world plane parallel to that imaging plane are in proportion, and the angle between the optical axis and the ray joining a scene point to its image projection appears as a pair of equal vertical angles. A geometric model relating the camera to the feature points of the fitting obstacle is built on this principle; then, from the angular extent of each pixel calibrated at each focal length, a relation between the camera-obstacle distance and the focal length is established. Substituting the calibrated per-pixel angle at the current focal length and the vertical distance between the robot and the obstacle into the formula yields the horizontal distance between the camera and the obstacle.
Take the ground wire of the transmission line as a line in a plane: extend that plane downwards through the earth's centre of gravity and call it the ground-wire gravity plane; the ground wire lies in this plane. Since both the fitting obstacles and the robot hang on the ground wire, and the fitting obstacles are bilaterally symmetric, the plane naturally passes through their centre of symmetry, so the centre point of a fitting can be chosen as the feature point for ranging. Because the robot hanging on the ground wire is acted on by gravity, its centre of gravity also lies in this plane; by adjusting the camera's position on the robot so that its optical axis lies in the plane as well, the camera's vertical axis always stays in the plane. The problem thus reduces to geometry within a single plane.
Due to yaw of the robot, the slope of the line and so on, there may be some error in practice; this error can be corrected using the angle measured by the dynamic tilt sensor mounted on the robot.
For a better understanding of the principle of monocular visual ranging, reference is made to the following description in conjunction with fig. 5.
According to the camera imaging principle, the feature point P produces a corresponding image point p on the image plane; this image point also lies in the image captured by the camera.
An image coordinate system is established with the pixel corresponding to the optical axis as the origin; its y axis is the intersection line of the image plane with the ground-wire gravity plane, its x axis is perpendicular to the y axis, and its unit is the pixel.
The coordinates (x_p, y_p) of point p can be obtained by computer digital image processing. From the camera's focal length and the per-pixel angle P_rad of the camera calibrated in advance at that focal length, the angle α between the camera's optical axis and the ray to the point can be computed as
α = Σ P_rad,
the per-pixel angles being summed over the pixels between the image origin and p.
Since α and β form a pair of vertical angles, α = β;
It can be seen from the figure that the ray from the camera to the feature point makes an angle of (γ + α) with the horizontal, so that tan(γ + α) = h / D. Here P_rad, the angle subtended by each pixel of the camera, is calibrated in advance; γ, the angle between the camera's optical axis and the horizontal plane, is available from the camera pan-tilt control system; and h, the vertical distance between the camera and the obstacle, can also be measured in advance. It follows that D = h / tan(γ + α), and the computed D is the horizontal distance from the robot to the obstacle.
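Under the assumption that the in-plane geometry reduces to tan(γ + α) = h / D, the range computation might be sketched as follows (the function and parameter names are illustrative):

```python
import math

def horizontal_distance(y_p, p_rad, gamma_deg, h):
    alpha = y_p * p_rad                  # angle off the optical axis: y_p pixels x P_rad each
    gamma = math.radians(gamma_deg)      # optical-axis tilt relative to the horizontal
    return h / math.tan(gamma + alpha)   # horizontal distance D (assumed relation)

# Feature point on the optical axis (y_p = 0), gamma = 45 degrees, h = 2 m:
# the ray descends at 45 degrees, so D should equal h
d = horizontal_distance(0, 0.001, 45, 2.0)
```

A point imaged further below the optical axis gives a larger α and hence a smaller D, matching the intuition that the obstacle is closer.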
The visual identification and ranging mode is suitable when the robot is some distance from the obstacle. When the robot is very close to, or passing through, an obstacle, the camera cannot capture a complete image of it because of occlusion by the camera mounting position and the robot body, so the distance to the obstacle, or its exact position, cannot be measured in this way. In that case the photoelectric sensors on the robot's two arms are used for ranging and positioning.
In weather such as fog, the obstacle type may not be visually identifiable; only the presence of an obstacle ahead is detected, and its type and position must likewise be determined with the photoelectric sensor.
The photoelectric sensors are mounted symmetrically on the robot's two arms so that obstacles can be detected whether the robot moves forward or backward. As the robot approaches an obstacle, the metal parts of the obstacle intercept the sensor's beam; the receiving end picks up the diffusely reflected light, and the resulting continuous signal indicates the obstacle's presence. Where a hollow part of the obstacle lets the beam pass, the receiver gets no reflected light and no obstacle signal is produced.
When the robot passes an obstacle, the alternation of metal and hollow parts produces an intermittent signal. Combining the robot's travel speed with the on-off switching times of the signal, the lengths of the obstacle's metal and hollow parts can be computed; since the obstacle dimensions are known in advance, the type and position of the obstacle can be judged within suitably chosen error bounds. The switching points of the intermittent signal also serve as key points of obstacle detection.
The length of a metal or hollow section is computed as
L = ∫ v dt,
where v is the robot's current travel speed, dt is a short time interval, and L is the distance travelled by the robot.
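Combining the on-off sensor signal with the speed integral L = ∫ v dt, the metal and hollow section lengths might be computed as in this sketch (the sampling scheme is an illustrative assumption):

```python
def segment_lengths(signal, speeds, dt):
    """signal: per-sample obstacle-detected flags; speeds: per-sample v (m/s)."""
    lengths, current, length = [], signal[0], 0.0
    for s, v in zip(signal, speeds):
        if s != current:                   # signal edge: close the current segment
            lengths.append((current, length))
            current, length = s, 0.0
        length += v * dt                   # accumulate the integral of v dt
    lengths.append((current, length))
    return lengths                         # [(detected?, length in metres), ...]

# Constant 0.2 m/s, 10 ms samples: 50 "metal" samples, 30 "hollow", 50 "metal"
sig = [1] * 50 + [0] * 30 + [1] * 50
spd = [0.2] * 130
segs = segment_lengths(sig, spd, 0.01)
```

The three segments come out as 0.10 m of metal, 0.06 m of hollow, 0.10 m of metal, which would then be compared against the known fitting dimensions.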
The detection trace lines in figs. 6 and 7 represent the traces of the photoelectric sensors. The lengths L1, S1 and L2 in the figures can be computed with the formula above: L1 and L2 are the lengths of the damper's metal weights, and S1 is the length of its hollow part. Comparing these values with the damper's true dimensions identifies the damper, and also gives the robot's current position and the key points of the obstacle's location. Because the damper, the clamp and the strain-tower bridge differ greatly in size and shape, the obstacles are easy to tell apart.
The tilt sensor is mounted in the robot box and is a high-precision two-axis dynamic inclinometer that simultaneously measures the robot's fore-aft slope angle and its left-right swing angle. Which obstacle the robot is passing is determined from the amplitude and duration of the tilt change along the direction of travel; integrating the robot's speed over time gives the corresponding length information.
As shown in figs. 6 and 7, when the robot passes the damper the tilt change is first a small rise, then a short level section, then a small descent. Passing the suspension clamp gives a large rise, a somewhat longer level section, and finally a large descent. Passing the bridge, the inclinometer registers a large rise followed by a long level region; after the robot has run on the bridge for some time, a large descent indicates that it is leaving the bridge.
By combining the metal and hollow lengths detected by the photoelectric sensor with the inclinometer's slope-angle changes and section lengths, and choosing a suitable error range, the type of obstacle the robot passes, its relative position, and the precise location of the sensor-change key points can all be determined.
In order to realize the identification method, the embodiment of the invention also provides a high-voltage line obstacle identification device, which comprises an identification module and a distance measurement module.
The identification module comprises a camera, used to capture inspection images against the sky in real time, and an identification unit, used to judge the type and position of an obstacle from its geometric primitive features combined with line structure information.
The distance measurement module identifies the position relation between the inspection robot and the barrier based on visual identification or photoelectric distance measurement.
Visual identification is implemented by a visual identification unit, which corresponds to monocular ranging and carries out its analysis and computation. Photoelectric ranging is implemented by a photoelectric ranging unit, which carries out the analysis and computation of photoelectric ranging.
In addition, as shown in fig. 8, the inspection robot comprises a travelling device and an obstacle identification device; the obstacle identification device identifies the type and distance of obstacles, helping the inspection robot inspect and cross obstacles on an overhead high-voltage line.
The robot runs on the ground wire of a high-voltage transmission line; its front and rear travelling wheels hang on the ground wire and can move forward and backward along it. Below the travelling wheels are two arms that can rotate left and right so the robot can negotiate curved sections. Each arm carries a pinch roller and a photoelectric sensor for obstacle detection. The robot control box is mounted below the arms and houses the two monocular cameras, the dynamic two-axis inclinometer, the battery, the control system and so on.
The robot body is provided with a front monocular camera, a rear monocular camera, a front photoelectric sensor and a rear photoelectric sensor, and the monocular camera, the front photoelectric sensor and the rear photoelectric sensor are used for identifying and positioning obstacles.
The front monocular camera and the rear monocular camera are symmetrically arranged with the photoelectric sensor, so that the obstacle identification and positioning of the robot in the advancing direction and the retreating direction are facilitated.
The monocular camera is a Hikvision DS-2DE4220W-AE3, a 2-megapixel 4-inch network HD dome camera. It supports output up to 1920 × 1080 @ 30 fps, plus high-frame-rate 960p @ 60 fps and 720p @ 60 fps; 20× optical and 16× digital zoom; triple streams with independently configurable resolution and frame rate; resumable transmission after network interruption so that video is not lost; and 360° horizontal rotation with a 0-90° vertical range.
The photoelectric sensor is a Pepperl+Fuchs GLV18-8-H-120/25/102/115, with a detection range of 10-120 mm (adjustable over 40-120 mm), an output switching frequency of 500 Hz and a response time ≤ 1 ms; it works under illumination below 30000 lux and has strong anti-interference capability.
The tilt sensor is a POSITAL AKS-090-2-CA01-HK2-PV dynamic two-axis inclinometer, which measures the robot's fore-aft and left-right axis angles accurately and rapidly whether the robot is moving or stationary. Both axes have a range of ±90°, a resolution of 0.01°, an absolute accuracy of 0.30° and a dynamic accuracy of 0.50°; the interface protocol is CANopen.
Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.