CN114266326A - Object identification method based on robot binocular three-dimensional vision - Google Patents

Object identification method based on robot binocular three-dimensional vision

Info

Publication number
CN114266326A
CN114266326A
Authority
CN
China
Prior art keywords
robot
patrol
image information
point
inspection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210069712.3A
Other languages
Chinese (zh)
Other versions
CN114266326B (en)
Inventor
冉祥
陈小川
邓志伟
刘欣冉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Micro Chain Daoi Technology Co ltd
Original Assignee
Beijing Micro Chain Daoi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Micro Chain Daoi Technology Co ltd filed Critical Beijing Micro Chain Daoi Technology Co ltd
Priority to CN202210069712.3A priority Critical patent/CN114266326B/en
Publication of CN114266326A publication Critical patent/CN114266326A/en
Application granted granted Critical
Publication of CN114266326B publication Critical patent/CN114266326B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an object identification method based on robot binocular three-dimensional vision. An inspection robot with double camera modules is provided, and the poses of the double camera modules can be rotationally adjusted along the normal direction of the inspection robot. The inspection robot is also at least provided with a base; the base is provided with guide wheels each driven by an independent motor; the double cameras are electrically connected with a control communication module arranged on the base. There is also an acquisition terminal wirelessly connected with the control communication module of the inspection robot: the acquisition terminal continuously receives the image information sent by the inspection robot, temporarily stores the received image information and forwards it to a server, and the acquisition terminal can send pose optimization instructions to the inspection robot according to the image information. The invention therefore has the advantages of improving the efficiency of the monitoring algorithm, saving labor and reducing cost.

Description

Object identification method based on robot binocular three-dimensional vision
Technical Field
The invention relates to the technical field of machine vision monitoring, in particular to an object identification method based on binocular three-dimensional vision of a robot and a corresponding device.
Background
With the development of information and computer technology, more and more industrial robot devices are used in industrial production. Machine vision systems provide production flexibility and a degree of automation. In dangerous environments unsuitable for manual operation, or in settings where human vision cannot meet the requirements, machine vision is commonly used in place of human vision, greatly improving production efficiency and the degree of automation. Machine vision is therefore increasingly used in conjunction with robots.
In the prior art, for example, the Chinese patent with issued publication number CN105643624B discloses a machine vision control method, a robot controller, and a robot control system. The machine vision control method performs image acquisition through a network camera and performs control processing through an embedded dual-core microprocessor comprising a motion core and an application core: the motion core processes PLC commands and motion commands, while the application core performs image processing and sends the processed images to the motion core. The robot controller is highly integrated, combining machine vision and robot control on the same controller; it is small, low-power, and easy to move, and a user only needs to purchase a general network camera rather than dedicated machine vision equipment. However, the hardware requirements of each single robot are high, so the implementation cost is high where multiple robots must cooperate.
Disclosure of Invention
The invention aims to provide an object identification method based on robot binocular three-dimensional vision, which improves the efficiency of the monitoring algorithm and saves labor through pose adaptation. The method comprises the following steps:
configuring an inspection robot with double camera modules, the poses of the double camera modules being rotationally adjustable along the normal direction of the inspection robot; the inspection robot is also at least provided with a base; the base is provided with guide wheels each driven by an independent motor; the double cameras are electrically connected with a control communication module arranged on the base;
configuring an acquisition terminal wirelessly connected with the control communication module of the inspection robot: the acquisition terminal continuously receives the image information sent by the inspection robot, temporarily stores the received image information and forwards it to a server;
configuring the server to compute, from the image information forwarded by the acquisition terminal, the 3D position of each object in the image information and to mark the corresponding appearance position on a preset map; the moving objects are then clustered and time-series 3D position information is plotted according to the appearance positions, where clustering includes manually assisted clustering or automatic clustering; prompt information is sent to the administrator for 3D position information that meets a preset alarm condition. By this method the intelligence of line patrol monitoring can be extended, fewer inspection robots can monitor large factory areas, residential communities, and warehouses, and both the degree of monitoring intelligence and the utilization of image information are improved.
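For ease of understanding, a minimal Python sketch of the server-side bookkeeping described above follows; all names (ObjectTrack, record_position, alarm_zone, notify) are illustrative assumptions rather than part of the method itself:

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ObjectTrack:
    # time-series 3D positions of one clustered moving object
    positions: list = field(default_factory=list)   # [(t, x, y, z), ...]

tracks = defaultdict(ObjectTrack)

def record_position(obj_id, t, xyz, alarm_zone, notify):
    # mark the appearance position and alert the administrator if it
    # falls inside a preset rectangular alarm region on the map
    tracks[obj_id].positions.append((t, *xyz))
    (xmin, xmax), (ymin, ymax) = alarm_zone
    x, y, _ = xyz
    if xmin <= x <= xmax and ymin <= y <= ymax:
        notify(f"object {obj_id} entered alarm zone at t={t}: {xyz}")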
The double cameras are connected to a base plate; the base plate is connected with an angle-control stepping motor; the base plate is connected with the base through a machine head with controllable elevation angle and an upright post. An overlapping area capable of producing 3D vision exists between the monitoring areas of the two cameras, and the inspection robot can acquire its real-time position in the area through radio or GPS; therefore, the relative position of a pedestrian or object monitored by the stereoscopic vision can be obtained by a simple function transformation. Within the overlapping area, the corresponding 3D position can also be obtained by a simple geometric transformation from the positions of two or more double-camera units.
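As an illustration of the "simple function transformation", the following Python sketch recovers a 3D point from a rectified stereo pair and maps it into the area map frame using the robot's radio/GPS pose; the function names and the rectified-pair assumption are ours, not the patent's:

import numpy as np

def triangulate(f_px, baseline_m, u_left, u_right, v, cx, cy):
    # depth from horizontal disparity for a rectified stereo pair:
    # Z = f * B / d, with disparity d = u_left - u_right in pixels
    d = u_left - u_right
    if d <= 0:
        raise ValueError("point must have positive disparity")
    Z = f_px * baseline_m / d
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])   # point in the camera frame

def to_map_frame(p_cam, R_wc, t_wc):
    # the "simple function transformation": rotate and translate the
    # camera-frame point by the robot's pose to obtain the map position
    return R_wc @ p_cam + t_wc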
Further optimization measures adopted also include:
the clustering of the moving objects comprises automatic clustering and artificial auxiliary clustering. After the server obtains the image information, the automatic clustering of the machine is firstly carried out, and the recognition is carried out after the clustering. And identifying the moving objects after automatic clustering in the characteristic database, wherein the moving objects cannot be identified, and performing artificial assisted clustering by an administrator. Through data accumulation for a period of time, a basic database for monitoring regional line patrol can be established in a short time.
The pose optimization instruction comprises an inspection robot identification code, horizontal front-back elevation angle adjustment information, horizontal left-right deflection angle adjustment information, and horizontal rotation angle adjustment information. After receiving the image information, the system automatically adjusts the horizontal elevation angle of the double cameras of the inspection robot, drives the stepping motor to adjust the left-right deflection angle, and adjusts the horizontal rotation angle through the guide wheels, so that the matching quality between the images acquired by the inspection robots is improved. Uniformly leveling the vanishing points of the images improves the accuracy of 3D distance recognition, and because the raw image quality is improved, the server computation otherwise spent adjusting image angles before the clustering operation is reduced.
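A minimal sketch of how such a pose optimization instruction could be represented as a message; the field names are our own illustrative choices:

from dataclasses import dataclass

@dataclass
class PoseOptimizationInstruction:
    robot_id: str             # inspection robot identification code
    pitch_adjust_deg: float   # horizontal front-back elevation angle adjustment
    roll_adjust_deg: float    # horizontal left-right deflection angle adjustment
    yaw_adjust_deg: float     # horizontal rotation angle adjustment (guide wheels)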
The administrator presets the inspection points of the inspection robot through coordinate input, and the inspection robot passes through the inspection points in a preset sequence to perform patrol monitoring.
A three-dimensional-vision-based robot recognition and control method comprises: decomposing the image information in the form of a two-dimensional window into two one-dimensional windows (W_i, W_j), and accelerating aggregation by pre-computing a horizontal integral image and a vertical integral image. The filter function appears only as an equation image in the source and is not reproduced here.
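The separable-window idea can be illustrated as follows: a (2r+1)×(2r+1) box sum is computed as a vertical 1D pass followed by a horizontal 1D pass over prefix-sum (integral) images. This is a sketch of the general technique under our own naming, not the patent's exact filter function:

import numpy as np

def box_sum_1d(a, r, axis):
    # sliding-window sum of width 2r+1 along `axis`, using a 1D prefix
    # sum (integral image along that axis) with edge clamping
    c = np.cumsum(a, axis=axis)
    n = a.shape[axis]
    hi = np.take(c, np.minimum(np.arange(n) + r, n - 1), axis=axis)
    lo_idx = np.arange(n) - r - 1
    lo = np.take(c, np.maximum(lo_idx, 0), axis=axis)
    shape = [1] * a.ndim
    shape[axis] = n
    mask = (lo_idx >= 0).reshape(shape)   # windows that start past the edge
    return hi - np.where(mask, lo, 0.0)

def box_sum_2d(img, r):
    # the 2D window decomposed into two 1D windows: a vertical pass
    # followed by a horizontal pass
    return box_sum_1d(box_sum_1d(img, r, axis=0), r, axis=1)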
In summary, an inspection robot with double camera modules is adopted, the poses of the double camera modules being rotationally adjustable along the normal direction of the inspection robot; the inspection robot is also at least provided with a base; the base is provided with guide wheels each driven by an independent motor; the double cameras are electrically connected with a control communication module arranged on the base. An acquisition terminal is wirelessly connected with the control communication module of the inspection robot: it continuously receives the image information sent by the inspection robot, temporarily stores it, and forwards it to the server. The server computes, from the forwarded image information, the 3D position of each object in the image information and marks the corresponding appearance position on a preset map; the moving objects are then clustered and time-series 3D position information is plotted according to the appearance positions. The invention therefore has the advantages of improving the efficiency of the monitoring algorithm, saving labor and reducing cost.
Drawings
FIG. 1 is a schematic diagram of a system usage scenario according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a line patrol robot according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a process of acquiring a plane in an image captured by two cameras according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating comparison of time consumption for capturing line segments by a camera according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples.
The reference numbers illustrate: the device comprises a base 1, a guide wheel 11, a power motor 12, a control communication module 2, an upright post 3, a machine head 4, an angle control stepping motor 41, a base plate 42 and double cameras 43.
Example 1:
referring to fig. 1 to 3, in the present embodiment an object identification method based on robot binocular three-dimensional vision uses an inspection robot with double camera modules, the poses of the double camera modules being rotationally adjustable along the normal direction of the inspection robot; the inspection robot is also at least provided with a base; the base is provided with guide wheels each driven by an independent motor; the double cameras are electrically connected with a control communication module arranged on the base;
an acquisition terminal is wirelessly connected with the control communication module of the inspection robot: the acquisition terminal continuously receives the image information sent by the inspection robot, temporarily stores the received image information and forwards it to the server;
the server is used for calculating the image information forwarded by the acquisition terminal to obtain the 3D position of the object in the image information and marking the corresponding appearance position on a preset map; then clustering the moving objects, and drawing time sequence 3D position information according to the appearance positions; clustering includes either manually assisted clustering or automatic clustering. And sending prompt information to the administrator for the 3D position information meeting the preset alarm condition. By the method, the intelligent attribute of line patrol monitoring can be enlarged, fewer line patrol robots can be used for monitoring large-area factory areas, cells and warehouses, and the monitoring intelligence degree and the utilization of image information are improved.
The double cameras are connected to a base plate; the base plate is connected with an angle-control stepping motor; the base plate is connected with the base through a machine head with controllable elevation angle and an upright post. An overlapping area capable of producing 3D vision exists between the monitoring areas of the two cameras, and the inspection robot can acquire its real-time position in the area through radio or GPS; therefore, the relative position of a pedestrian or object monitored by the stereoscopic vision can be obtained by a simple function transformation. Within the overlapping area, the corresponding 3D position can also be obtained by a simple geometric transformation from the positions of two or more double-camera units.
The clustering of moving objects comprises automatic clustering and manually assisted clustering. After the server obtains the image information, automatic machine clustering is performed first, and recognition follows the clustering: automatically clustered moving objects are identified against a feature database, and objects that cannot be identified are manually clustered with the assistance of an administrator. Through data accumulation over a period of time, a basic database for monitoring the patrol area can be established in a short time.
The pose optimization instruction comprises an inspection robot identification code, horizontal front-back elevation angle adjustment information, horizontal left-right deflection angle adjustment information, and horizontal rotation angle adjustment information. After receiving the image information, the system automatically adjusts the horizontal elevation angle of the double cameras of the inspection robot, drives the stepping motor to adjust the left-right deflection angle, and adjusts the horizontal rotation angle through the guide wheels, so that the matching quality between the images acquired by the inspection robots is improved. Uniformly leveling the vanishing points of the images improves the accuracy of 3D distance recognition, and because the raw image quality is improved, the server computation otherwise spent adjusting image angles before the clustering operation is reduced.
The administrator presets the inspection points of the inspection robot through coordinate input, and the inspection robot passes through the inspection points in a preset sequence to perform patrol monitoring.
The 2D rotation matrix estimation algorithm of this embodiment follows classical textbook theory, but some explanation is provided for ease of understanding. The planar projection of the image contains a series of straight lines, and these lines eventually intersect at a point referred to as the vanishing point. The ray from the camera center through the vanishing point intersects the image plane at the vanishing point, and at 3D scale the direction of this ray, in the camera coordinate system, is parallel to the original 3D parallel lines. To solve for the intersection point, the normal vector of the plane formed by each 2D line and the camera center is computed; these normal vectors are coplanar, and the cross product of any two of them gives the intersection direction. The transformation between the 3D parallel-line directions l in the point-cloud coordinate system and the vanishing-point rays d in the camera coordinate system is therefore a 3D rigid transformation, in the standard form x_c = R·x_p + t (the equation itself appears only as an image in the source), where R and t are the rotation and translation parameters of the transformation from the point-cloud coordinate system to the image coordinate system of the double cameras 43, and the rotation matrix R must satisfy special orthogonality (R ∈ SO(3)). Here d and l denote the inhomogeneous vanishing-point direction and the 3D parallel-line direction.

Since a three-dimensional rotation has 3 degrees of freedom, at least 2 matched 2D-3D direction pairs are needed to estimate the rotation matrix (the estimation formula, equation 1, appears only as an image in the source).
To facilitate understanding, pseudo-code implementing the vanishing-point-based rotation matrix estimation of the above classical theory is as follows:

Input: 2D line set {l_2d}, 3D line set {l_3d}. Output: 8 candidate rotation matrices R = {R1, R2, ..., R8}.
1. Compute M_2d (M_2d > 3) vanishing-point directions and cluster the 2D straight lines by vanishing point;
2. Merge the M_2d vanishing-point directions and extract the first 2 principal vanishing-point directions supported by the largest number of 2D lines;
3. Cluster the 3D line directions by parallelism into M_3d principal 3D line directions;
4. Merge the 3D principal line directions to reject noise and extract the two principal directions with the largest number of lines;
5. Compute the candidate rotation matrices using (equation 1);
6. Return R = {R1, R2, ..., R8}.
the above pseudo code content belongs to the idea of the classical scheme, and is only for facilitating the understanding of the technical scheme. The clustering detection algorithm uses a classical particle swarm optimization fuzzy clustering method (PSO), belongs to a classical method and is not described any more.
The server processes the image information, including filtering. The filtering adopts an improved adaptive algorithm, as follows. The linear coefficients a_k, b_k are defined as (reconstructed here in the standard guided-filter form that the surrounding definitions describe, since the equations appear only as images in the source):

a_k = ( (1/|ω_k|) Σ_{i∈ω_k} I_i·P_i − μ_k·P̄_k ) / (σ_k² + ε),    b_k = P̄_k − a_k·μ_k

where I is the input image, P is the image to be filtered, and Q is the output image; i and k are pixel indices; ω_k is the adaptive support window of k; |ω_k| denotes the total number of pixels in ω_k; μ_k and σ_k² are the mean and variance of I within ω_k; P̄_k is the mean of P within ω_k; and ε is a penalty factor. The filter function is:

Q_i = (1/|ω_i|) Σ_{k∈ω_i} (a_k·I_i + b_k)

where |ω_i| and |ω_k| respectively denote the total number of pixels in ω_i and ω_k.
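A runnable sketch of this filtering step, under the assumption that it matches the standard guided-filter form reconstructed above; plain box means stand in for the adaptive support window:

import numpy as np
from scipy.signal import convolve2d

def guided_filter(I, P, r, eps):
    # I: input (guidance) image, P: image to be filtered,
    # r: window radius, eps: penalty factor; returns output image Q
    k = 2 * r + 1
    kern = np.ones((k, k)) / (k * k)
    mean = lambda x: convolve2d(x, kern, mode="same", boundary="symm")
    mu_I, mu_P = mean(I), mean(P)
    var_I = mean(I * I) - mu_I * mu_I          # sigma_k^2
    cov_IP = mean(I * P) - mu_I * mu_P
    a = cov_IP / (var_I + eps)                 # a_k
    b = mu_P - a * mu_I                        # b_k
    return mean(a) * I + mean(b)               # Q_i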
The pose optimization method comprises the following steps: read or input the collected image and filter it; read the 2D line set {l_2d}, the 3D line set {l_3d} and the aforementioned 8 candidate rotation matrices {R1, ..., R8}; merge collinear 2D and 3D line segments, then remove shorter line-segment noise; initialize the 2D-3D line match count N_i; randomly combine 3 pairs of 2D-3D line matches from the matching set and compute the translation vector t (the closed-form formula appears only as an image in the source). If N > N_i, iterate N_i = N, iterate the 2D-3D line matching set M_i, and iterate the translation vector t_i = t; repeat until N_i > e·N(l_2d)/4 or the maximum number of iterations is reached; then extract the best candidate and the corresponding pose and matching set R*, t*, M*. From R*, t*, M*, a simple parameter conversion yields the horizontal elevation angle, left-right deflection angle, and horizontal rotation angle of the corresponding camera, which can then be adjusted automatically by the difference from its current state.
Theoretically, for a pair of matched 2D-3D lines, the registration error contains the overlap distance and the angular difference. Since the overlap length is already constrained in the matching estimation, the pose is optimized here using the matching projection overlap angle as the error. If a 3D line matches a corresponding 2D line, then the projection of the two 3D end points onto the image plane should lie on the 2D straight line.
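The sampling loop above resembles a RANSAC-style consensus search; a hedged sketch follows. Because the closed-form translation formula is only an image in the source, it is passed in as solve_t, and the inlier test (the overlap-angle error) as count_inliers; both, like the parameter e, are caller-supplied assumptions:

import random

def estimate_translation_ransac(matches, R, solve_t, count_inliers,
                                e=0.9, max_iters=500):
    # matches: list of (2D line, 3D line) pairs; R: one candidate rotation
    best_t, best_inliers, n_i = None, [], 0
    for _ in range(max_iters):
        sample = random.sample(matches, 3)    # 3 random 2D-3D line matches
        t = solve_t(sample, R)
        inliers = count_inliers(matches, R, t)
        if len(inliers) > n_i:
            n_i, best_t, best_inliers = len(inliers), t, inliers
        if n_i > e * len(matches) / 4:        # the stopping rule in the text
            break
    return best_t, best_inliers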
Example 2:
further, referring to fig. 4, a two-dimensional window is decomposed into two one-dimensional windows by introducing a quadrature integration method into the filtering method of the present invention (
Figure 366021DEST_PATH_IMAGE059
) And the horizontal integral image and the vertical integral image are pre-calculated to accelerate the aggregation, and the improved filter function is as follows:
Figure DEST_PATH_IMAGE060
in the action process of the line patrol robot, a fixed-point sequential line patrol mode is adopted for monitoring, images of the double cameras 43 are continuously shot, so that one camera of the double cameras 43 is a main camera, the obtained image information of line patrol monitoring is used for clustering objects or people, and the other camera is used for matching with the main camera to perform phase difference positioning to convert a relative position.
Position information of the patrol points in the area is acquired, including line-type information of the patrol point trajectory marked in advance in a three-dimensional map model of the area. When the patrol point trajectory is curved, at least the position information data set of the starting and ending patrol points sent by the GPS sensor is acquired, and the position information data of each patrol point sent by the three-dimensional point cloud sensor is also acquired. When the trajectory between patrol points is straight, acquiring at least the position information data set of the starting and ending patrol points sent by the GPS sensor comprises: acquiring the position information data set of the starting patrol point, the position of the ending patrol point, and the patrol point information data set between them. The traversal patrol route through the patrol points preset by the administrator can be established using the classic ant colony algorithm.
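A compact sketch of the classic ant colony algorithm over a patrol-point distance matrix; the hyperparameters and function name are illustrative defaults, not values from the patent, and all pairwise distances are assumed positive:

import numpy as np

def aco_patrol_route(dist, n_ants=20, n_iters=100,
                     alpha=1.0, beta=3.0, rho=0.5, q=1.0, seed=0):
    # dist: symmetric matrix of positive distances between patrol points
    n = len(dist)
    tau = np.ones((n, n))                    # pheromone levels
    eta = 1.0 / (dist + np.eye(n))           # heuristic visibility
    rng = np.random.default_rng(seed)
    best_route, best_len = None, np.inf
    for _ in range(n_iters):
        routes = []
        for _ in range(n_ants):
            route = [int(rng.integers(n))]
            while len(route) < n:
                i = route[-1]
                w = (tau[i] ** alpha) * (eta[i] ** beta)
                w[route] = 0.0               # exclude visited points
                route.append(int(rng.choice(n, p=w / w.sum())))
            length = sum(dist[route[k]][route[k + 1]] for k in range(n - 1))
            length += dist[route[-1]][route[0]]   # close the tour
            routes.append((route, length))
            if length < best_len:
                best_route, best_len = route, length
        tau *= 1.0 - rho                     # pheromone evaporation
        for route, length in routes:
            for k in range(n):
                tau[route[k], route[(k + 1) % n]] += q / length
    return best_route, best_len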
The routing logic between patrol points in the area includes, but is not limited to, the following parameters: range, congestion degree, intersections, and inflection points. On the route, i.e. in the map of the area, assuming each grid cell is one step, the path length problem becomes a step-count problem, simplifying to finding the minimum number of grid steps from the departure point to the target point while avoiding obstacles. The optimal path problem can thus be understood as a minimum-distance (step-count) problem and implemented with a counter during the run of the algorithm; the recorded output path with the fewest steps is the optimal path. The congestion degree is the number of other traffic participants along the robot's position and planned route, which can also be understood as traffic flow. Paths and patrol points with heavier traffic flow are assigned larger weights based on historical experience or a learning algorithm, and the weights are adjusted through a feedback mechanism in subsequent operation. Intersections are determined from repeatedly selected patrol points: the more often a patrol point recurs across planned paths, the greater the weight of that point and its connected edges; such points may be traffic hubs and junctions on the actual map. Inflection points are selected according to the number of turns and crossings in the planned path. On the area map, a patrol point with only one connected edge and one adjacent point can be understood as the simplest of the selectable patrol points, whereas a node connected to many other patrol points implies that paths through it traverse the largest number of crossings. When two adjacent patrol points in a planned path lie on two intersecting edges, they can be called inflection points. For the planning method, reference can be made to the path planning mechanisms introduced in existing classical research on optimal path selection for intelligent vehicle driving.
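The "minimum number of steps" search described above is, in effect, a breadth-first search over the grid; a minimal sketch follows (the grid encoding and function name are our assumptions):

from collections import deque

def min_steps(grid, start, goal):
    # minimum step count across an obstacle grid (0 = free, 1 = obstacle)
    H, W = len(grid), len(grid[0])
    q, seen = deque([(start, 0)]), {start}
    while q:
        (r, c), steps = q.popleft()
        if (r, c) == goal:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append(((nr, nc), steps + 1))
    return -1   # unreachable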
The processing of the routing logic between patrol points in the area comprises image acquisition, graying, filtering, edge detection, point set generation, key point confirmation and positioning, path calculation, optimal path selection, smoothing, output, and record archiving. Building on optimal driving-path selection mechanisms for intelligent vehicles, researchers have explored optimal path selection through different designs and algorithms, optimizing with different strategies to achieve better results. For reasons of space, the many alternative prior-art path planning methods are not expanded upon here.
Visual auxiliary markers are arranged near the patrol points to calibrate the position of the robot. The visual auxiliary markers include: bar code labels, two-dimensional code labels, character labels, and trademark marks. Because GPS signals are subject to errors under different weather, electromagnetic, and other conditions, and GPS position information is often unavailable in environments such as underground facilities, arranging visual auxiliary labels near each patrol point for GPS position calibration improves the adaptability and precision of the system.
While the invention has been described in connection with a preferred embodiment, it is not intended to limit the invention, and it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the invention.

Claims (9)

1. An object identification method based on robot binocular three-dimensional vision is characterized in that:
an inspection robot with double camera (43) modules is configured, the poses of the double camera (43) modules being rotationally adjustable along the normal direction of the inspection robot; the inspection robot at least comprises a base (1); the base (1) is provided with guide wheels (11) each driven by an independent motor; the double cameras (43) are electrically connected with a control communication module (2) arranged on the base (1);
an acquisition terminal wirelessly connected with the control communication module (2) of the inspection robot is configured: the acquisition terminal continuously receives the image information sent by the inspection robot, temporarily stores the received image information and forwards it to a server;
the server is configured to compute, from the image information forwarded by the acquisition terminal, the 3D position of each object in the image information and to mark the corresponding appearance position on a preset map; the moving objects are then clustered and time-series 3D position information is plotted according to the appearance positions;
an overlapping area capable of producing 3D vision exists between the monitoring areas of the two cameras (43); the inspection robot can acquire its real-time position in the area through radio or GPS;
and a management terminal is connected to the server: the management terminal is used for manually assisted clustering input operations.
2. The object identification method based on robot binocular three-dimensional vision as claimed in claim 1, wherein: the clustering of the moving objects comprises automatic clustering and manually assisted clustering.
3. The object identification method based on robot binocular three-dimensional vision as claimed in claim 2, wherein: the pose optimization instruction comprises an inspection robot identification code, horizontal front-back elevation angle adjustment information, horizontal left-right deflection angle adjustment information, and horizontal rotation angle adjustment information.
4. The object identification method based on robot binocular three-dimensional vision as claimed in claim 2, wherein: the administrator presets the inspection points of the inspection robot through coordinate input, and the inspection robot passes through the inspection points in a preset sequence to perform patrol monitoring.
5. The object identification method based on robot binocular three-dimensional vision as claimed in claim 2, wherein a three-dimensional-vision-based robot recognition and control method comprises: decomposing the image information in the form of a two-dimensional window into two one-dimensional windows (W_i, W_j), and accelerating aggregation by pre-computing a horizontal integral image and a vertical integral image; the filter function appears only as an equation image in the source.
The object identification method based on robot binocular three-dimensional vision as claimed in claim 2, wherein: position information of the patrol points in the area is acquired, including line-type information of the patrol point trajectory marked in advance in a three-dimensional map model of the area; when the patrol point trajectory is curved, at least the position information data set of the starting and ending patrol points sent by the GPS sensor is acquired, and the position information data of each patrol point sent by the three-dimensional point cloud sensor is also acquired; when the trajectory between patrol points is straight, acquiring at least the position information data set of the starting and ending patrol points sent by the GPS sensor comprises: acquiring the position information data set of the starting patrol point, the position of the ending patrol point, and the patrol point information data set between them.
6. The object identification method based on robot binocular three-dimensional vision as claimed in claim 2, wherein: the routing logic between patrol points in the area includes, but is not limited to, the following parameters: range, congestion degree, intersections, and inflection points.
7. The object identification method based on robot binocular three-dimensional vision as claimed in claim 6, wherein: the processing of the routing logic between patrol points in the area comprises image acquisition, graying, filtering, edge detection, point set generation, key point confirmation and positioning, path calculation, optimal path selection, smoothing, output, and record archiving.
8. The object identification method based on robot binocular three-dimensional vision as claimed in claim 2, wherein: visual auxiliary markers are arranged near the patrol points to calibrate the position of the robot, the visual auxiliary markers including: bar code labels, two-dimensional code labels, character labels, and trademark marks.
9. A device for object identification based on robot binocular three-dimensional vision, characterized in that: it comprises an inspection robot with double camera (43) modules, the poses of the double camera (43) modules being rotationally adjustable along the normal direction of the inspection robot; the inspection robot at least comprises a base (1); the base (1) is provided with guide wheels (11) each driven by an independent motor; the double cameras (43) are electrically connected with a control communication module (2) arranged on the base (1);
it also comprises an acquisition terminal wirelessly connected with the control communication module (2) of the inspection robot: the acquisition terminal continuously receives the image information sent by the inspection robot, temporarily stores the received image information and forwards it to a server;
the server is used to compute, from the image information forwarded by the acquisition terminal, the 3D position of each object in the image information and to mark the corresponding appearance position on a preset map; the moving objects are then clustered and time-series 3D position information is plotted according to the appearance positions;
the double cameras (43) are connected to a base plate (42); the base plate (42) is connected with an angle-control stepping motor (41); the base plate (42) is connected with the base (1) through a machine head (4) with controllable elevation angle and an upright post (3); an overlapping area capable of producing 3D vision exists between the monitoring areas of the two cameras (43); the inspection robot can acquire its real-time position in the area through radio or GPS;
and a management terminal is connected to the server: the management terminal is used for manually assisted clustering input operations.
CN202210069712.3A 2022-01-21 2022-01-21 Object identification method based on robot binocular three-dimensional vision Active CN114266326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210069712.3A CN114266326B (en) 2022-01-21 2022-01-21 Object identification method based on robot binocular three-dimensional vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210069712.3A CN114266326B (en) 2022-01-21 2022-01-21 Object identification method based on robot binocular three-dimensional vision

Publications (2)

Publication Number Publication Date
CN114266326A true CN114266326A (en) 2022-04-01
CN114266326B CN114266326B (en) 2022-09-02

Family

ID=80833251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210069712.3A Active CN114266326B (en) 2022-01-21 2022-01-21 Object identification method based on robot binocular three-dimensional vision

Country Status (1)

Country Link
CN (1) CN114266326B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847254A (en) * 2010-01-22 2010-09-29 上海步朗电子科技有限公司 Pre-processing method of infrared small point target detection based on optimal design of matched filter
CN103413313A (en) * 2013-08-19 2013-11-27 国家电网公司 Binocular vision navigation system and method based on power robot
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Binocular vision navigation system and method based on power robot
CN107121983A (en) * 2017-05-24 2017-09-01 国家电网公司 Crusing robot path optimization controller based on 3D live-action maps and electronic compass
CN108171796A (en) * 2017-12-25 2018-06-15 燕山大学 A kind of inspection machine human visual system and control method based on three-dimensional point cloud
CN109828572A (en) * 2019-02-20 2019-05-31 广州供电局有限公司 Cable tunnel inspection robot and its positioning system based on mark location
CN110362090A (en) * 2019-08-05 2019-10-22 北京深醒科技有限公司 A kind of crusing robot control system
CN110378176A (en) * 2018-08-23 2019-10-25 北京京东尚科信息技术有限公司 Object identification method, system, equipment and storage medium based on binocular camera

Also Published As

Publication number Publication date
CN114266326B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN110363816B (en) Mobile robot environment semantic mapping method based on deep learning
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
CN106403942B (en) Personnel indoor inertial positioning method based on substation field depth image identification
CN111037552B (en) Inspection configuration and implementation method of wheel type inspection robot for power distribution room
CN114474061B (en) Cloud service-based multi-sensor fusion positioning navigation system and method for robot
CN108217045B (en) A kind of intelligent robot of the undercarriage on data center's physical equipment
Chen et al. Pole-curb fusion based robust and efficient autonomous vehicle localization system with branch-and-bound global optimization and local grid map method
Ma et al. Crlf: Automatic calibration and refinement based on line feature for lidar and camera in road scenes
Yin et al. Radar-on-lidar: metric radar localization on prior lidar maps
CN114841944B (en) Tailing dam surface deformation inspection method based on rail-mounted robot
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
WO2022228391A1 (en) Terminal device positioning method and related device therefor
US20220164595A1 (en) Method, electronic device and storage medium for vehicle localization
Golovnin et al. Video processing method for high-definition maps generation
CN114266326B (en) Object identification method based on robot binocular three-dimensional vision
Sheng et al. Mobile robot localization and map building based on laser ranging and PTAM
Li et al. Feature point extraction and tracking based on a local adaptive threshold
CN114972945A (en) Multi-machine-position information fusion vehicle identification method, system, equipment and storage medium
He et al. AutoMatch: Leveraging Traffic Camera to Improve Perception and Localization of Autonomous Vehicles
CN114155485A (en) Intelligent community intelligent security monitoring management system based on 3D vision
CN115446846A (en) Robot is checked to books based on bar code identification
Shi et al. Cobev: Elevating roadside 3d object detection with depth and height complementarity
Li et al. BSP-MonoLoc: Basic Semantic Primitives based Monocular Localization on Roads
Zhang et al. Cross‐entropy‐based adaptive fuzzy control for visual tracking of road cracks with unmanned mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220401

Assignee: Zhongguancun Technology Leasing Co.,Ltd.

Assignor: Beijing Micro-Chain Daoi Technology Co.,Ltd.

Contract record no.: X2023980044000

Denomination of invention: An Object Recognition Method Based on Robot Binocular 3D Vision

Granted publication date: 20220902

License type: Exclusive License

Record date: 20231020

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An Object Recognition Method Based on Robot Binocular 3D Vision

Effective date of registration: 20231023

Granted publication date: 20220902

Pledgee: Zhongguancun Technology Leasing Co.,Ltd.

Pledgor: Beijing Micro-Chain Daoi Technology Co.,Ltd.

Registration number: Y2023980062312