CN117718985A - Search and explosion venting robot based on intelligent three-dimensional vision - Google Patents

Search and explosion venting robot based on intelligent three-dimensional vision

Info

Publication number
CN117718985A
CN117718985A (application number CN202410173718.4A)
Authority
CN
China
Prior art keywords
dimensional
coordinate system
coordinates
camera
intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410173718.4A
Other languages
Chinese (zh)
Inventor
王德飞
姚震
樊鹏格
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Zhongke Photoelectric Precision Engineering Co ltd
Original Assignee
Xi'an Zhongke Photoelectric Precision Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Zhongke Photoelectric Precision Engineering Co ltd filed Critical Xi'an Zhongke Photoelectric Precision Engineering Co ltd
Priority to CN202410173718.4A priority Critical patent/CN117718985A/en
Publication of CN117718985A publication Critical patent/CN117718985A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

A search and explosion venting robot based on intelligent three-dimensional vision comprises a three-dimensional vision unit, a multifunctional operation unit and a master control system, all carried on an autonomous mobile platform. The autonomous mobile platform serves as the load-bearing body and traveling mechanism and is provided with a power supply system and a hydraulic system. The three-dimensional vision unit carries a three-dimensional vision camera on a multi-degree-of-freedom mechanical arm to obtain global coordinates, identifies and measures specific areas, reconstructs the three-dimensional scene, and obtains the pose, three-dimensional outline and key-feature digital information of suspected explosives. The multifunctional operation unit uses two multi-degree-of-freedom mechanical arms, each fitted with a multifunctional manipulator, to grab detection instruments or tools from the tool detection module library, so as to detect or dismantle suspected explosives. The master control system constructs a walking map, plans a walking path, calculates the pose of the suspected explosive and sends action control instructions. The invention can improve the unmanned precise operation and intelligent analysis and decision-making level of search and explosion venting robots.

Description

Search and explosion venting robot based on intelligent three-dimensional vision
Technical Field
The invention belongs to the technical field of search and explosion venting operation equipment, and particularly relates to an intelligent three-dimensional vision-based search and explosion venting robot.
Background
The dangers faced by explosive disposal operators while searching for and handling suspicious explosives are immediate, lethal and irreversible, and the enormous psychological and physiological pressures borne by these operators persist throughout the entire task. Research on intelligent, unmanned search and explosion venting technology is therefore of great significance for reducing casualties, limiting losses and promoting the systematic development of intelligent unmanned search and explosion venting equipment.
As industrialization has advanced, various intelligent robots that replace manual work have gradually been developed. An explosive disposal robot can replace disposal personnel to reconnoiter, transfer, dismantle and destroy explosive devices, and can also handle other dangerous goods or serve as a monitoring and strike platform. A complete explosive disposal robot system consists of a main body, an operating platform, cables, accessories and the like. The main body adopts a tracked or wheeled structure, is remotely controlled by radio or optical fiber, and is equipped with several color CCD (Charge-Coupled Device) cameras and a multi-degree-of-freedom manipulator. Explosive detachers, manipulators, explosive detectors, X-ray detectors, thermal imaging systems and the like may also be carried or installed as needed. However, current mainstream search and explosion venting robots can only perform simple operations, have a single function, depend on human operators, and lack independent, autonomous operation capability.
Disclosure of Invention
To address the problems in the prior art, the invention provides a search and explosion venting robot based on intelligent three-dimensional vision that can autonomously identify objects, autonomously plan operation trajectories and carry out the full workflow, greatly improving the unmanned precise operation and intelligent analysis and decision-making level of search and explosion venting robots.
To achieve the above purpose, the present invention adopts the following technical scheme:
the intelligent three-dimensional vision based search and explosion venting robot comprises an autonomous mobile platform, and a three-dimensional vision unit, a multifunctional operation unit and a master control system which are carried on the autonomous mobile platform;
the autonomous mobile platform is used as a bearing main body and a traveling mechanism, and a power supply system and a hydraulic system are arranged in the main body of the autonomous mobile platform;
the three-dimensional visual unit comprises a three-dimensional visual camera and a first multi-degree-of-freedom mechanical arm, the first multi-degree-of-freedom mechanical arm carries the three-dimensional visual camera to obtain global coordinates, identifies and measures a specific area, realizes reconstruction of a three-dimensional scene, provides a mobile navigation reference for the autonomous mobile platform, and simultaneously identifies and positions suspected explosives to obtain pose, three-dimensional outline and key characteristic digital information of the suspected explosives;
the multifunctional operation unit comprises a second multi-degree-of-freedom mechanical arm, a third multi-degree-of-freedom mechanical arm, multifunctional manipulators and a tool detection module library, wherein the second multi-degree-of-freedom mechanical arm and the third multi-degree-of-freedom mechanical arm each carry a multifunctional manipulator that grabs a detection instrument or tool from the tool detection module library, so as to detect or dismantle the suspected explosive;
and the master control system is in data interaction with the autonomous mobile platform, the three-dimensional visual unit and the multifunctional operation unit and is used for constructing a walking map, planning a walking path, calculating the pose of suspected explosive and sending an action control instruction.
As a preferable scheme, the autonomous mobile platform comprises a crawler-type vehicle-mounted chassis and a driving system for driving the crawler-type vehicle-mounted chassis to move forwards, backwards and turn.
As a preferable scheme, the power supply system is an explosion-proof battery module fixed on the crawler-type vehicle-mounted chassis.
As a preferred scheme, the autonomous mobile platform further comprises telescopic mechanisms connected to the periphery of the crawler-type vehicle-mounted chassis and supporting mechanisms arranged at the ends of the telescopic mechanisms, when the crawler-type vehicle-mounted chassis moves to a designated position, the telescopic mechanisms set telescopic lengths according to the size of a site space, and after the telescopic mechanisms complete telescopic actions, the supporting mechanisms lift the crawler-type vehicle-mounted chassis to enable the crawler to leave the ground.
As a preferable scheme, the three-dimensional vision camera is arranged at the tail end of the first multi-degree-of-freedom mechanical arm.
As a preferable scheme, the three-dimensional visual unit obtains global coordinates through the first multi-degree-of-freedom mechanical arm carrying the three-dimensional visual camera, identifies and measures a specific area, realizes reconstruction of a three-dimensional scene, provides a mobile navigation reference for an autonomous mobile platform, identifies and positions suspected explosives, and solves three-dimensional coordinate points according to the following mode when pose, three-dimensional outline and key feature digital information of the suspected explosives are obtained:
converting a point in space from a world coordinate system to a camera coordinate system, projecting the point under the camera coordinate system to an imaging plane to obtain a coordinate under an image physical coordinate system, and converting coordinate data on the imaging plane to an image plane to obtain a coordinate under an image pixel coordinate system; the relation between the coordinates of the image physical coordinate system and the coordinates of the image pixel coordinate system is as follows:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
where (u, v) are the coordinates in the image pixel coordinate system, (x, y) are the coordinates in the image physical coordinate system, d_x and d_y are the pixel sizes in the x and y directions respectively, and (u_0, v_0) is the principal point of the image; let
\[
f_x = f/d_x, \qquad f_y = f/d_y
\]
where f is the focal length of the lens and f_x and f_y are defined as the equivalent focal lengths in the x and y directions;
then, the relationship between the image pixel coordinate system and the camera coordinate system coordinates is:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
\]
where (X_c, Y_c, Z_c) are the three-dimensional coordinates of the object point in the camera coordinate system and K is the internal reference (intrinsic) matrix;
the relationship between the camera coordinate system and the world coordinate system coordinates is:
\[
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T
\]
where R is the rotation matrix, T is the translation vector and (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object point in the world coordinate system; then, the relationship between the image pixel coordinate system and the world coordinate system coordinates is:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]
where \([R \; T]\) is the external parameter (extrinsic) matrix of the camera;
for a binocular camera, define (u_1, v_1) as the image pixel coordinates of the point in the left camera and (X_{c1}, Y_{c1}, Z_{c1}) as the coordinates of the object point in the left camera coordinate system; (u_2, v_2) as the image pixel coordinates of the point in the right camera and (X_{c2}, Y_{c2}, Z_{c2}) as the coordinates of the object point in the right camera coordinate system; and (X_w, Y_w, Z_w) as the coordinates of the object point in the world coordinate system; the relationship between the image pixel coordinates of the left and right cameras and the world coordinates is:
\[
Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix}
= K_1 \begin{bmatrix} R_1 & T_1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}
= K_2 \begin{bmatrix} R_2 & T_2 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]
where K_1 is the internal reference matrix of the left camera, \([R_1 \; T_1]\) is the external reference matrix of the left camera, M_1 is the product of the internal and external reference matrices of the left camera, K_2 is the internal reference matrix of the right camera, \([R_2 \; T_2]\) is the external reference matrix of the right camera and M_2 is the product of the internal and external reference matrices of the right camera; the internal and external reference matrices of the left and right cameras can be obtained through camera calibration;
the matrix equation for calculating the spatial three-dimensional coordinates, obtained from the pixel coordinates of the corresponding images of the left and right cameras together with the internal and external reference matrices (writing m^{k}_{ij} for the element in row i, column j of M_k), is:
\[
\begin{bmatrix}
u_1 m^{1}_{31} - m^{1}_{11} & u_1 m^{1}_{32} - m^{1}_{12} & u_1 m^{1}_{33} - m^{1}_{13} \\
v_1 m^{1}_{31} - m^{1}_{21} & v_1 m^{1}_{32} - m^{1}_{22} & v_1 m^{1}_{33} - m^{1}_{23} \\
u_2 m^{2}_{31} - m^{2}_{11} & u_2 m^{2}_{32} - m^{2}_{12} & u_2 m^{2}_{33} - m^{2}_{13} \\
v_2 m^{2}_{31} - m^{2}_{21} & v_2 m^{2}_{32} - m^{2}_{22} & v_2 m^{2}_{33} - m^{2}_{23}
\end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
=
\begin{bmatrix}
m^{1}_{14} - u_1 m^{1}_{34} \\
m^{1}_{24} - v_1 m^{1}_{34} \\
m^{2}_{14} - u_2 m^{2}_{34} \\
m^{2}_{24} - v_2 m^{2}_{34}
\end{bmatrix}
\]
according to the above formula, the spatial three-dimensional coordinates corresponding to each pair of matched image pixel coordinates are obtained by solving this overdetermined system, for example in the least-squares sense.
As a preferable scheme, the detection instruments and tools in the tool detection module library include wire cutters, disassembly tools, a backscatter X-ray detector, a fluorescence quenching trace explosive detector, an ion mobility spectrometry trace explosive detector and a hand-held surface-scanning Raman detector.
As a preferable scheme, the multifunctional operation unit further comprises a mounting base, and the second multi-degree-of-freedom mechanical arm and the third multi-degree-of-freedom mechanical arm are fixed on the mounting base.
As a preferable scheme, the master control system comprises a remote communication module, wherein the remote communication module has two communication modes of wireless and wired, and the corresponding communication mode is selected according to the field environment; the remote communication module performs real-time data interaction with the autonomous mobile platform, the three-dimensional visual unit and the multifunctional operation unit.
As a preferred solution, the master control system comprises a hand-held control module, which provides manual control options via a hand-held remote control device.
Compared with the prior art, the invention has at least the following beneficial effects:
according to the invention, a three-dimensional visual target intelligent identification and positioning technology is adopted, so that intelligent identification and positioning of suspected explosives are realized, global coordinates are obtained through a three-dimensional visual unit, and specific areas are accurately identified and measured, intelligent and full-automatic operation of the search and explosion-removal robot is realized, a traditional mode of completing an operation task through manual operation is eliminated, and high flexibility, accuracy and high efficiency of the operation task are realized. The intelligent three-dimensional vision based search and explosion-proof robot has the functions of detecting, identifying, arranging and the like. Wherein, the 'exploring' stage is to confirm whether the explosive exists or not in the determined task range; the identification stage is used for determining the type, the physicochemical property and the detonation mode of the explosive through accurate identification; the 'discharging' stage realizes disposal capability such as demolition of explosives. The intelligent three-dimensional vision based search and explosion venting robot disclosed by the invention adopts a multi-mechanical arm cooperation and hand-eye cooperation mode to work, so that detection, identification and disposal of explosives are completed. Through controlling the multi-angle collaborative operation of the mechanical arm, the system has the capability of adapting to various environments to carry out fine operation, and the purposes of collaborative detection and collaborative treatment of explosives are achieved. The invention can improve the autonomous control level and decision capability of the search and explosion venting robot and provide core technical achievements and application systems for subsequent equipment development.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention, and that other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the external structure of a binocular camera according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the outline structure of a three-dimensional structured light camera according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a search and explosion venting robot based on intelligent three-dimensional vision in the embodiment of the invention;
FIG. 4 is a composition hierarchy chart of the search and explosion venting robot based on intelligent three-dimensional vision in the embodiment of the invention;
FIG. 5 is a flow chart of data processing based on a three-dimensional visual intelligent recognition and measurement algorithm according to an embodiment of the invention;
FIG. 6 is a schematic structural diagram of an autonomous mobile platform according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a three-dimensional visual unit according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a multifunctional job unit according to an embodiment of the present invention;
in the accompanying drawings: 1-an autonomous mobile platform; 2-a three-dimensional visual unit; 3-a multifunctional operation unit; 4-a master control system; 111-tracks; 112-a drive system; 113-an explosion-proof battery module; 114-a telescoping mechanism; 115-a support mechanism; 211-a three-dimensional vision camera; 212-a first multi-degree-of-freedom mechanical arm; 311-a multifunctional manipulator; 312-a second multi-degree of freedom mechanical arm; 313-a third multi-degree of freedom mechanical arm; 314—a tool detection module library; 315-mounting base.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, one of ordinary skill in the art may also obtain other embodiments without undue burden.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, in the description of the present invention, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
In the present invention, unless explicitly specified and limited otherwise, terms such as "mounted," "connected," "secured," and the like are to be construed broadly and may be, for example, fixedly attached, detachably attached, or integrally formed; the device can be mechanically connected, electrically connected and communicated; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature.
Referring to fig. 3, the search and explosion venting robot based on intelligent three-dimensional vision in the embodiment of the invention mainly comprises an autonomous mobile platform 1, a three-dimensional vision unit 2, a multifunctional operation unit 3 and a master control system 4, wherein the three-dimensional vision unit 2, the multifunctional operation unit 3 and the master control system 4 are carried on the autonomous mobile platform 1.
Referring to fig. 4, the system composition level of the search and explosion venting robot based on intelligent three-dimensional vision according to the embodiment of the invention can be divided into a device level, a unit level and a module level. The device level is an intelligent three-dimensional vision based search and explosion venting robot, and the unit level is an autonomous mobile platform, a three-dimensional vision unit, a multifunctional operation unit and a master control system. Further, the module-level equipment corresponding to the autonomous mobile platform comprises a crawler 111, a crawler-type vehicle-mounted chassis and driving system 112, an explosion-proof battery module 113, a telescopic mechanism 114 and a supporting mechanism 115. The module-level equipment corresponding to the three-dimensional vision unit comprises a first multi-degree-of-freedom mechanical arm 212 and a three-dimensional vision camera 211. The module-level equipment corresponding to the multifunctional operation unit comprises a second multi-degree-of-freedom mechanical arm 312, a tool detection module library 314, a third multi-degree-of-freedom mechanical arm 313, two multifunctional mechanical arms 311 and a mounting base 315; the tool detection module library 314 further comprises a wire clipper, a disassembling tool, a back-scattered X-ray detector, a fluorescence quenching trace explosive detector, an ion mobility spectrometry trace explosive detector and a hand-held surface scanning Raman detector. The module-level equipment corresponding to the master control system comprises a remote communication module, data processing software, a measurement control assembly and a handheld control module.
As shown in fig. 6, in one possible embodiment, the autonomous mobile platform 1 adopts a crawler-type vehicle-mounted chassis structure and carries the three-dimensional vision unit 2, the multifunctional operation unit 3, the master control system 4, accessories and the like; forward, backward and turning motions of the autonomous mobile platform 1 are realized through the driving system 112. An explosion-proof battery module 113 is arranged inside the autonomous mobile platform 1 and supplies power to the whole platform, and the master control system can control the power supplied to the drive, the three-dimensional vision unit and the multifunctional operation unit. The autonomous mobile platform 1 is provided with telescopic mechanisms 114 and supporting mechanisms 115: when the vehicle body moves to the designated position, each telescopic mechanism 114 automatically sets its telescopic length according to the size of the site space, and after the telescopic mechanisms 114 complete the telescopic action, the supporting mechanisms 115 lift the whole vehicle so that the crawler 111 leaves the ground, improving the operating stability of the whole vehicle.
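To make the deploy-and-stabilize sequence above concrete, the following minimal Python sketch models it under assumed, hypothetical interfaces; the Outrigger class, the 5 cm clearance margin and the four-leg layout are illustrative placeholders and are not taken from the patent.

```python
# Minimal sketch (not the patent's control software) of the deploy-and-stabilize sequence.
from dataclasses import dataclass

@dataclass
class Outrigger:
    extension_m: float = 0.0   # telescopic mechanism 114
    lifted: bool = False       # supporting mechanism 115

def stabilize(outriggers: list, site_clearance_m: float) -> None:
    """Extend each telescopic mechanism to fit the site, then lift the tracks off the ground."""
    target = max(0.0, site_clearance_m - 0.05)   # keep a 5 cm margin (assumed value)
    for leg in outriggers:
        leg.extension_m = target                 # 1) set length from the site space size
    for leg in outriggers:
        leg.lifted = True                        # 2) support mechanisms lift the chassis

legs = [Outrigger() for _ in range(4)]
stabilize(legs, site_clearance_m=0.60)
```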
In one possible embodiment, as shown in fig. 7, the three-dimensional vision unit 2 may use either a binocular camera or a three-dimensional structured light camera, as shown in fig. 1 and fig. 2 respectively. The three-dimensional vision unit 2 adopts a three-dimensional vision camera 211 mounted at the end of the first multi-degree-of-freedom mechanical arm 212. Using three-dimensional vision technology, it reconstructs three-dimensional scene information based on artificial intelligence methods, provides a planned path and obstacle-avoidance judgment for the autonomous mobile platform 1, and at the same time identifies and locates the specific position of a suspected explosive. The first multi-degree-of-freedom mechanical arm 212 then drives the three-dimensional vision camera 211 to scan and measure the explosive, obtaining its three-dimensional shape and pose information. With the obtained three-dimensional information, the master control system 4 plans an operation path for the multifunctional operation unit 3 and controls it to identify the explosive with professional detection instruments and determine its type; the master control system 4 then issues a disposal method and guides the dismantling operation.
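Because the three-dimensional vision camera 211 rides on the end of the first multi-degree-of-freedom mechanical arm 212, any target position it measures must be transformed from the camera frame into the robot base (or world) frame before arm motions can be planned. The sketch below shows one conventional way to do this with homogeneous transforms; the function name, the example offsets and the assumption that a hand-eye calibration matrix T_ee_cam is available are illustrative, not taken from the patent.

```python
import numpy as np

def camera_to_base(p_cam, T_base_ee, T_ee_cam):
    """Map a 3D point measured in the camera frame into the robot base frame.

    T_base_ee: 4x4 pose of the arm end-effector in the base frame (forward kinematics).
    T_ee_cam:  4x4 pose of the camera in the end-effector frame (hand-eye calibration).
    """
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous coordinates
    return (T_base_ee @ T_ee_cam @ p_h)[:3]

# Example: camera offset 10 cm along the tool z-axis, end-effector 0.5 m above the base.
T_ee_cam = np.eye(4); T_ee_cam[2, 3] = 0.10
T_base_ee = np.eye(4); T_base_ee[2, 3] = 0.50
print(camera_to_base([0.0, 0.0, 1.2], T_base_ee, T_ee_cam))  # -> [0. 0. 1.8]
```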
In order to obtain specific pose information and three-dimensional outline of the suspected explosive, a three-dimensional camera image coordinate system is required to be converted into a world coordinate system, so that the unification of the coordinates of the detected targets is ensured, and the coordinates are solved as follows:
to illustrate the principle of binocular camera reconstruction, first, a camera shooting imaging process is described, which is an optical imaging process, where a point in space is first converted from a world coordinate system to a camera coordinate system, then projected onto an imaging plane (physical coordinate system of an image), and finally coordinate data on the imaging plane is converted to an image plane (pixel coordinate system of an image).
The relation between the coordinates of the image physical coordinate system and the coordinates of the image pixel coordinate system is as follows:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
where (u, v) are the coordinates in the image pixel coordinate system, (x, y) are the coordinates in the image physical coordinate system, d_x and d_y are the pixel sizes in the x and y directions respectively, and (u_0, v_0) is the principal point of the image; let
\[
f_x = f/d_x, \qquad f_y = f/d_y
\]
where f is the focal length of the lens and f_x and f_y are defined as the equivalent focal lengths in the x and y directions;
then, the relationship between the image pixel coordinate system and the camera coordinate system coordinates is:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
\]
where (X_c, Y_c, Z_c) are the three-dimensional coordinates of the object point in the camera coordinate system and K is the internal reference (intrinsic) matrix;
the relationship between the camera coordinate system and the world coordinate system coordinates is:
\[
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T
\]
where R is the rotation matrix, T is the translation vector and (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object point in the world coordinate system; then, the relationship between the image pixel coordinate system and the world coordinate system coordinates is:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]
where \([R \; T]\) is the external parameter (extrinsic) matrix of the camera;
for a binocular camera, define (u_1, v_1) as the image pixel coordinates of the point in the left camera and (X_{c1}, Y_{c1}, Z_{c1}) as the coordinates of the object point in the left camera coordinate system; (u_2, v_2) as the image pixel coordinates of the point in the right camera and (X_{c2}, Y_{c2}, Z_{c2}) as the coordinates of the object point in the right camera coordinate system; and (X_w, Y_w, Z_w) as the coordinates of the object point in the world coordinate system; the relationship between the image pixel coordinates of the left and right cameras and the world coordinates is:
\[
Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix}
= K_1 \begin{bmatrix} R_1 & T_1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}
= K_2 \begin{bmatrix} R_2 & T_2 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]
where K_1 is the internal reference matrix of the left camera, \([R_1 \; T_1]\) is the external reference matrix of the left camera, M_1 is the product of the internal and external reference matrices of the left camera, K_2 is the internal reference matrix of the right camera, \([R_2 \; T_2]\) is the external reference matrix of the right camera and M_2 is the product of the internal and external reference matrices of the right camera; the internal and external reference matrices of the left and right cameras can be obtained through camera calibration.
The matrix equation for calculating the spatial three-dimensional coordinates, obtained from the pixel coordinates of the corresponding images of the left and right cameras together with the internal and external reference matrices (writing m^{k}_{ij} for the element in row i, column j of M_k), is:
\[
\begin{bmatrix}
u_1 m^{1}_{31} - m^{1}_{11} & u_1 m^{1}_{32} - m^{1}_{12} & u_1 m^{1}_{33} - m^{1}_{13} \\
v_1 m^{1}_{31} - m^{1}_{21} & v_1 m^{1}_{32} - m^{1}_{22} & v_1 m^{1}_{33} - m^{1}_{23} \\
u_2 m^{2}_{31} - m^{2}_{11} & u_2 m^{2}_{32} - m^{2}_{12} & u_2 m^{2}_{33} - m^{2}_{13} \\
v_2 m^{2}_{31} - m^{2}_{21} & v_2 m^{2}_{32} - m^{2}_{22} & v_2 m^{2}_{33} - m^{2}_{23}
\end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
=
\begin{bmatrix}
m^{1}_{14} - u_1 m^{1}_{34} \\
m^{1}_{24} - v_1 m^{1}_{34} \\
m^{2}_{14} - u_2 m^{2}_{34} \\
m^{2}_{24} - v_2 m^{2}_{34}
\end{bmatrix}
\]
According to the above formula, the spatial three-dimensional coordinates corresponding to each pair of matched image pixel coordinates are obtained by solving this overdetermined system, for example in the least-squares sense.
As shown in fig. 5, the data processing flow based on the three-dimensional visual intelligent recognition and measurement algorithm in the embodiment of the invention comprises the following steps:
image filtering and denoising are carried out on the collected binocular image data, and then image correction is carried out, so that corrected image data are obtained;
performing binocular image registration on the corrected image data and calculating parallax to obtain a parallax map;
performing parallax optimization on the parallax map to obtain a depth map;
calculating a three-dimensional point cloud by using the depth map, and performing point cloud filtering, denoising and error point deletion on the obtained three-dimensional point cloud;
and outputting the point cloud data.
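The steps listed above correspond closely to a standard stereo-vision pipeline. The sketch below implements them with OpenCV as one possible realization; the Gaussian pre-filter, the SGBM matcher settings and the 10 m range cut-off are assumed values for illustration rather than parameters specified by the patent, grayscale input images are assumed, and further disparity optimization and point-cloud denoising would follow in practice.

```python
import cv2
import numpy as np

def stereo_to_pointcloud(img_l, img_r, K1, D1, K2, D2, R, T, size):
    """Filter/rectify a grayscale stereo pair, compute disparity, and back-project to 3D points."""
    # Image correction: rectification maps from the stereo calibration parameters.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(cv2.GaussianBlur(img_l, (3, 3), 0), map1l, map2l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(cv2.GaussianBlur(img_r, (3, 3), 0), map1r, map2r, cv2.INTER_LINEAR)

    # Binocular matching -> disparity map (SGBM returns fixed-point values scaled by 16).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0

    # Disparity -> depth / 3D points, then drop invalid and out-of-range points.
    points = cv2.reprojectImageTo3D(disparity, Q)
    mask = (disparity > 0) & np.isfinite(points).all(axis=2) & (points[..., 2] < 10.0)
    return points[mask]          # N x 3 point cloud (further filtering/denoising can follow)
```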
Referring to fig. 8, in one possible embodiment, the multifunctional operation unit 3 is composed of a second multi-degree-of-freedom mechanical arm 312, a third multi-degree-of-freedom mechanical arm 313, multifunctional manipulators 311, a tool detection module library 314 and a mounting base 315. The second multi-degree-of-freedom mechanical arm 312 and the third multi-degree-of-freedom mechanical arm 313 are fixed on the mounting base 315, which keeps the relative positions of the two mechanical arms stable. The tool detection module library 314 includes wire cutters, disassembly tools, a backscatter X-ray detector, a fluorescence quenching trace explosive detector, an ion mobility spectrometry trace explosive detector, a hand-held surface-scanning Raman detector and the like. The two multifunctional manipulators 311 are mounted on the second multi-degree-of-freedom mechanical arm 312 and the third multi-degree-of-freedom mechanical arm 313 respectively; they can grasp different devices and exchange the various explosive-disposal tools in the tool detection module library 314. Specifically, according to the requirements of each stage of the operation, a multifunctional manipulator 311 grabs a detection instrument or tool from the tool detection module library 314 to detect or dismantle the target object.
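One simple way to encode "grab the right instrument for each stage of the operation" is a lookup table from operation stage to tools in the tool detection module library 314. The mapping below is purely illustrative; the stage names and tool groupings are assumptions rather than a specification from the patent.

```python
# Illustrative stage-to-tool mapping for the tool detection module library 314 (assumed grouping).
TOOL_LIBRARY = {
    "detect":   ["backscatter X-ray detector"],
    "identify": ["fluorescence quenching trace explosive detector",
                 "ion mobility spectrometry trace explosive detector",
                 "hand-held surface-scanning Raman detector"],
    "dispose":  ["wire cutters", "disassembly tools"],
}

def tools_for_stage(stage: str):
    """Return the instruments a multifunctional manipulator 311 should grab for a given stage."""
    return TOOL_LIBRARY.get(stage, [])

print(tools_for_stage("identify"))
```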
In one possible implementation, the master control system 4 is composed of a remote communication module, data processing software, a measurement control component and a handheld control module. The remote communication module supports both wireless and wired communication, selected according to the field environment, and performs real-time data interaction with the autonomous mobile platform 1, the three-dimensional vision unit 2 and the multifunctional operation unit 3. The data processing software handles the internal communication, control, processing and analysis of the intelligent search and explosion venting robot: it analyzes information from the inertial navigation module, the laser SLAM (Simultaneous Localization And Mapping) module, the vision module, the GPS (Global Positioning System) module and the odometry to obtain the vehicle body attitude and its relative position, and then combines this information with the corresponding algorithms to construct a walking map, plan a walking path and calculate the pose of the target object. The measurement control component controls the attitude of the vehicle body and of the multi-degree-of-freedom mechanical arms and collects sensor data. The handheld control module provides a manual control option through a handheld remote control device and also facilitates debugging and maintenance of the equipment.
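To illustrate the "construct a walking map, plan a walking path" function, the following sketch runs A* over a 2D occupancy grid such as one rasterized from the reconstructed point cloud. The grid representation, unit step costs and Manhattan heuristic are assumptions for illustration and do not reflect the actual algorithms in the data processing software.

```python
import heapq, itertools

def plan_path(grid, start, goal):
    """A* search over a 2D occupancy grid; grid[r][c] == 1 marks an obstacle cell."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible for 4-connectivity
    tie = itertools.count()                   # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, node, parent = heapq.heappop(open_set)
        if node in came_from:                 # already expanded with an optimal cost
            continue
        came_from[node] = parent
        if node == goal:                      # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, node))
    return None                               # no feasible path found

# Example: a 5x5 map with walls, start at top-left, goal at bottom-right.
demo_grid = [[0, 0, 0, 0, 0],
             [1, 1, 1, 1, 0],
             [0, 0, 0, 0, 0],
             [0, 1, 1, 1, 1],
             [0, 0, 0, 0, 0]]
print(plan_path(demo_grid, (0, 0), (4, 4)))
```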
According to the embodiment of the invention, after the intelligent three-dimensional vision based search and explosion venting robot receives a remote task instruction, a series of actions such as detection, recognition and explosion venting are automatically executed. Wherein, the 'exploring' stage is to confirm whether the explosive exists or not in the determined task range; the identification stage is used for determining the type, the physicochemical property and the detonation mode of the explosive through accurate identification; the 'discharging' stage realizes disposal capability such as demolition of explosives. The intelligent three-dimensional vision based search and explosion venting robot provided by the embodiment of the invention realizes intelligent operation by utilizing the combination of the autonomous mobile platform 1, the three-dimensional vision unit 2, the multifunctional operation unit 3 and the master control system 4.
The autonomous mobile platform 1 serves as the load-bearing body and traveling mechanism of the search and explosion venting robot and as the supply station for the robot's power supply system and hydraulic system, enabling the robot to move omnidirectionally in a stable, reliable and safe manner. It mainly comprises a crawler-type vehicle-mounted chassis, a driving system 112, an explosion-proof battery module 113, telescopic mechanisms 114 and supporting mechanisms 115.
The three-dimensional vision unit 2 is implemented by a multi-degree-of-freedom mechanical arm combined with a vision sensor and serves as the basic equipment enabling the other modules to operate effectively. It carries the three-dimensional vision camera 211 on the mechanical arm to obtain global coordinates and to accurately identify and measure specific areas, reconstructing the three-dimensional scene to provide a mobile navigation reference for the mobile platform. At the same time, it acts as a detection sensor that identifies and locates suspected explosives and obtains their pose, three-dimensional outline and key-feature digital information, thereby accurately locating and guiding targets and equipment during detection and explosive disposal and providing a basis for intelligent decision-making.
The multifunctional operation unit 3 includes a second multi-degree-of-freedom mechanical arm 312, a tool detection module library 314, a third multi-degree-of-freedom mechanical arm 313, a multifunctional mechanical arm 311, a mounting base 315, and the like, so as to achieve specific implementation of actions such as explosive detection, identification, and arrangement.
The master control system 4 comprises a remote communication module, data processing software, a measurement control assembly, a handheld control module and the like. The remote communication module is mainly responsible for data interaction with the outside; the data processing software consists of various data analysis and data processing algorithm software, and intelligent decision is made through algorithm analysis; the measurement control component is mainly used for acquiring three-dimensional scenes of the current environment and three-dimensional information of a measured target, the handheld control module provides manual control mode selection through the handheld remote control device, and meanwhile debugging and maintenance of equipment are facilitated.
By studying and planning the whole 'exploring, identification and discharging' process and using human-machine interaction and information fusion as means, the search and explosion venting robot based on intelligent three-dimensional vision ultimately forms an intelligent, unmanned, systematic and autonomous 'exploring, identification and discharging' solution. With the three-dimensional intelligent vision algorithm as the core, and by integrating advanced technologies such as three-dimensional intelligent visual identification, positioning and measurement, multi-arm cooperative operation, artificial intelligence algorithms and intelligent decision-making, the robot can autonomously identify objects, autonomously plan operation trajectories and carry out the full workflow, greatly improving the unmanned precise operation and intelligent analysis and decision-making level of search and explosion venting robots. The control technology of the real-time positioning and navigation mobile carrying platform realizes sensor information acquisition and processing, map construction and positioning navigation, improving the positioning and navigation accuracy of the whole system as well as its reliability and stability. The integrated explosive detection information fusion technology has good expandability, enriches the intelligent unmanned detection and identification capabilities, greatly reduces the missed-detection rate and improves detection accuracy. The networked application of the intelligent decision expert system technology and the command and control platform realizes autonomous decision-making, assisted decision-making and human-machine cooperative control during operation, improving the scientific soundness of decision conclusions and the intelligence of the control process.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The search and explosion venting robot based on intelligent three-dimensional vision is characterized by comprising an autonomous mobile platform (1), a three-dimensional vision unit (2), a multifunctional operation unit (3) and a master control system (4), wherein the three-dimensional vision unit (2), the multifunctional operation unit (3) and the master control system (4) are carried on the autonomous mobile platform (1);
the autonomous mobile platform (1) is used as a bearing main body and a traveling mechanism, and a power supply system and a hydraulic system are arranged in the main body of the autonomous mobile platform (1);
the three-dimensional visual unit (2) comprises a three-dimensional visual camera (211) and a first multi-degree-of-freedom mechanical arm (212), global coordinates are obtained by carrying the three-dimensional visual camera (211) through the first multi-degree-of-freedom mechanical arm (212), and specific areas are identified and measured, so that a three-dimensional scene is reconstructed, a mobile navigation reference is provided for the autonomous mobile platform (1), and meanwhile suspected explosives are identified and positioned, so that pose, three-dimensional outline and key characteristic digital information of the suspected explosives are obtained;
the multifunctional operation unit (3) comprises a second multi-degree-of-freedom mechanical arm (312), a third multi-degree-of-freedom mechanical arm (313), a multifunctional mechanical arm (311) and a tool detection module library (314), wherein the second multi-degree-of-freedom mechanical arm (312) and the third multi-degree-of-freedom mechanical arm (313) respectively carry a detection instrument or tool in the tool detection module library (314) to realize detection or dismantling operation of suspected explosives;
and the master control system (4) is used for carrying out data interaction with the autonomous mobile platform (1), the three-dimensional visual unit (2) and the multifunctional operation unit (3) and is used for constructing a walking map, planning a walking path, calculating the pose of suspected explosive and sending an action control instruction.
2. The intelligent three-dimensional vision-based search and explosion venting robot according to claim 1, wherein the autonomous mobile platform (1) comprises a crawler-type vehicle-mounted chassis and a driving system (112) for driving the crawler-type vehicle-mounted chassis to move forwards, backwards and turn.
3. The search and explosion venting robot based on intelligent three-dimensional vision according to claim 2, wherein the power supply system is an explosion-proof battery module (113) fixed on a crawler-type vehicle-mounted chassis.
4. The intelligent three-dimensional vision-based search and explosion-elimination robot according to claim 2, wherein the autonomous mobile platform (1) further comprises a telescopic mechanism (114) connected to the periphery of the crawler-type vehicle-mounted chassis and a supporting mechanism (115) installed at the end part of the telescopic mechanism (114), when the crawler-type vehicle-mounted chassis moves to a designated position, the telescopic mechanism (114) sets a telescopic length according to the size of a field space, and after the telescopic mechanism (114) completes telescopic action, the supporting mechanism (115) lifts the crawler-type vehicle-mounted chassis to enable the crawler (111) to be lifted off the ground.
5. The intelligent three-dimensional vision-based search and explosion venting robot of claim 1, wherein the three-dimensional vision camera (211) is mounted at the end of a first multi-degree-of-freedom mechanical arm (212).
6. The intelligent three-dimensional vision based search and explosion venting robot according to claim 1, wherein the three-dimensional vision unit (2) obtains global coordinates and identifies and measures a specific area through carrying a three-dimensional vision camera (211) by a first multi-degree-of-freedom mechanical arm (212), so as to reconstruct a three-dimensional scene, provide a mobile navigation reference for an autonomous mobile platform (1), and simultaneously identify and position suspected explosives, and solve three-dimensional coordinate points when acquiring pose, three-dimensional outline and key feature digital information of the suspected explosives, according to the following modes:
converting a point in space from a world coordinate system to a camera coordinate system, projecting the point under the camera coordinate system to an imaging plane to obtain a coordinate under an image physical coordinate system, and converting coordinate data on the imaging plane to an image plane to obtain a coordinate under an image pixel coordinate system; the relation between the coordinates of the image physical coordinate system and the coordinates of the image pixel coordinate system is as follows:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
where (u, v) are the coordinates in the image pixel coordinate system, (x, y) are the coordinates in the image physical coordinate system, d_x and d_y are the pixel sizes in the x and y directions respectively, and (u_0, v_0) is the principal point of the image; let
\[
f_x = f/d_x, \qquad f_y = f/d_y
\]
where f is the focal length of the lens and f_x and f_y are defined as the equivalent focal lengths in the x and y directions;
then, the relationship between the image pixel coordinate system and the camera coordinate system coordinates is:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
\]
where (X_c, Y_c, Z_c) are the three-dimensional coordinates of the object point in the camera coordinate system and K is the internal reference (intrinsic) matrix;
the relationship between the camera coordinate system and the world coordinate system coordinates is:
\[
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T
\]
where R is the rotation matrix, T is the translation vector and (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object point in the world coordinate system; then, the relationship between the image pixel coordinate system and the world coordinate system coordinates is:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]
where \([R \; T]\) is the external parameter (extrinsic) matrix of the camera;
for a binocular camera, define (u_1, v_1) as the image pixel coordinates of the point in the left camera and (X_{c1}, Y_{c1}, Z_{c1}) as the coordinates of the object point in the left camera coordinate system; (u_2, v_2) as the image pixel coordinates of the point in the right camera and (X_{c2}, Y_{c2}, Z_{c2}) as the coordinates of the object point in the right camera coordinate system; and (X_w, Y_w, Z_w) as the coordinates of the object point in the world coordinate system; the relationship between the image pixel coordinates of the left and right cameras and the world coordinates is:
\[
Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix}
= K_1 \begin{bmatrix} R_1 & T_1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}
= K_2 \begin{bmatrix} R_2 & T_2 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]
where K_1 is the internal reference matrix of the left camera, \([R_1 \; T_1]\) is the external reference matrix of the left camera, M_1 is the product of the internal and external reference matrices of the left camera, K_2 is the internal reference matrix of the right camera, \([R_2 \; T_2]\) is the external reference matrix of the right camera and M_2 is the product of the internal and external reference matrices of the right camera; the internal and external reference matrices of the left and right cameras can be obtained through camera calibration;
the matrix equation for calculating the spatial three-dimensional coordinates, obtained from the pixel coordinates of the corresponding images of the left and right cameras together with the internal and external reference matrices (writing m^{k}_{ij} for the element in row i, column j of M_k), is:
\[
\begin{bmatrix}
u_1 m^{1}_{31} - m^{1}_{11} & u_1 m^{1}_{32} - m^{1}_{12} & u_1 m^{1}_{33} - m^{1}_{13} \\
v_1 m^{1}_{31} - m^{1}_{21} & v_1 m^{1}_{32} - m^{1}_{22} & v_1 m^{1}_{33} - m^{1}_{23} \\
u_2 m^{2}_{31} - m^{2}_{11} & u_2 m^{2}_{32} - m^{2}_{12} & u_2 m^{2}_{33} - m^{2}_{13} \\
v_2 m^{2}_{31} - m^{2}_{21} & v_2 m^{2}_{32} - m^{2}_{22} & v_2 m^{2}_{33} - m^{2}_{23}
\end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
=
\begin{bmatrix}
m^{1}_{14} - u_1 m^{1}_{34} \\
m^{1}_{24} - v_1 m^{1}_{34} \\
m^{2}_{14} - u_2 m^{2}_{34} \\
m^{2}_{24} - v_2 m^{2}_{34}
\end{bmatrix}
\]
according to the above formula, the spatial three-dimensional coordinates corresponding to each pair of matched image pixel coordinates are obtained by solving this overdetermined system, for example in the least-squares sense.
7. The intelligent three-dimensional vision based search and explosion venting robot of claim 1, wherein the detection instruments and tools in the tool detection module library (314) comprise a wire cutting pliers, a detaching tool, a back-scattered X-ray detector, a fluorescence quenching trace explosive detector, an ion mobility spectrometry trace explosive detector, and a hand-held surface scanning raman detector.
8. The search and explosion venting robot based on intelligent three-dimensional vision according to claim 1, wherein the multifunctional operation unit (3) further comprises a mounting base (315), and the second multi-degree-of-freedom mechanical arm (312) and the third multi-degree-of-freedom mechanical arm (313) are fixed on the mounting base (315).
9. The intelligent three-dimensional vision-based search and explosion-elimination robot according to claim 1, wherein the master control system (4) comprises a remote communication module, the remote communication module has two communication modes of wireless and wired, and the corresponding communication mode is selected according to the field environment; the remote communication module performs real-time data interaction with the autonomous mobile platform (1), the three-dimensional visual unit (2) and the multifunctional operation unit (3).
10. The intelligent three-dimensional vision-based search and explosion venting robot of claim 1, wherein the master control system (4) comprises a handheld control module, and the handheld control module provides manual control selection through a handheld remote control device.
CN202410173718.4A 2024-02-07 2024-02-07 Search and explosion venting robot based on intelligent three-dimensional vision Pending CN117718985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410173718.4A CN117718985A (en) 2024-02-07 2024-02-07 Search and explosion venting robot based on intelligent three-dimensional vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410173718.4A CN117718985A (en) 2024-02-07 2024-02-07 Search and explosion venting robot based on intelligent three-dimensional vision

Publications (1)

Publication Number Publication Date
CN117718985A true CN117718985A (en) 2024-03-19

Family

ID=90211022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410173718.4A Pending CN117718985A (en) 2024-02-07 2024-02-07 Search and explosion venting robot based on intelligent three-dimensional vision

Country Status (1)

Country Link
CN (1) CN117718985A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102980454A (en) * 2012-11-09 2013-03-20 河海大学常州校区 Explosive ordnance disposal (EOD) method of robot EOD system based on brain and machine combination
JP2014128840A (en) * 2012-12-28 2014-07-10 Kanto Seiki Kk Robot control system
CN104036541A (en) * 2014-04-01 2014-09-10 西北工业大学 Fast three-dimensional reconstruction method in vision measurement
US20140286536A1 (en) * 2011-12-06 2014-09-25 Hexagon Technology Center Gmbh Position and orientation determination in 6-dof
CN110136068A (en) * 2019-03-19 2019-08-16 浙江大学山东工业技术研究院 Sound film top dome assembly system based on location position between bilateral telecentric lens camera
CN110751691A (en) * 2019-09-24 2020-02-04 同济大学 Automatic pipe fitting grabbing method based on binocular vision
CN112276951A (en) * 2020-10-22 2021-01-29 中国人民武装警察部队工程大学 Unmanned search and explosion-removal robot system and working method thereof
CN112959329A (en) * 2021-04-06 2021-06-15 南京航空航天大学 Intelligent control welding system based on vision measurement
CN113751981A (en) * 2021-08-19 2021-12-07 哈尔滨工业大学(深圳) Space high-precision assembling method and system based on binocular vision servo
CN114260923A (en) * 2022-02-09 2022-04-01 航天科工智能机器人有限责任公司 Explosive-handling robot and explosive-handling method thereof
WO2023201578A1 (en) * 2022-04-20 2023-10-26 深圳大学 Extrinsic parameter calibration method and device for monocular laser speckle projection system



Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination