CN112130555A - Self-walking robot and system based on laser navigation radar and computer vision perception fusion - Google Patents
Info
- Publication number: CN112130555A (application CN202010519604.2A)
- Authority: CN (China)
- Prior art keywords: module, panoramic camera, robot, camera module, radar
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
All four codes fall under G (Physics) > G05 (Controlling; Regulating) > G05D (Systems for controlling or regulating non-electric variables) > G05D1/00 (Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots) > G05D1/02 (Control of position or course in two dimensions) > G05D1/021 (specially adapted to land vehicles):
- G05D1/0214: with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0231: using optical position detecting means
- G05D1/0255: using acoustic signals, e.g. ultrasonic signals
- G05D1/0257: using a radar
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Electromagnetism (AREA)
- Acoustics & Sound (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention provides a self-walking robot and system based on the fusion of laser navigation radar and computer vision perception. The robot comprises a robot body, a laser radar, a vertically arranged telescopic bracket, and a panoramic camera module mounted at the top end of the telescopic bracket. The telescopic bracket is connected to the panoramic camera module by a revolute pair, and a vector angle sensor arranged between them acquires the rotation angle and rotation direction of the panoramic camera module when it rotates relative to the telescopic bracket. The panoramic camera module is mounted higher above the ground than the laser radar. The telescopic bracket automatically adjusts the height of the panoramic camera module above the ground and provides a fixed reference for the vector angle sensor. The laser radar acquires the distance between the robot body and surrounding obstacles, and the panoramic camera module acquires a first color image of the obstacles around the robot body.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a self-walking robot and a system based on fusion of laser navigation radar and computer vision perception.
Background
At present, self-walking robots generally adopt magnetic stripe or laser radar navigation. Magnetic stripe navigation is costly, constrained by the environment, and subject to many restrictions. Laser radar cannot acquire object information in the environment in additional dimensions and is only suitable for measuring the distance to specific targets, so a self-walking robot that navigates by laser radar alone can only move in open areas with simple terrain, which greatly limits its application scenarios.
For example, as shown in fig. 1, a circular enclosure is formed by a plurality of fixed shelters 2 and a movable shelter 3, where the color of the movable shelter 3 differs from that of the fixed shelters 2, or the word "exit" is written directly on the movable shelter 3. If a lidar-navigated self-walking robot 1 is placed inside this enclosure, the radar data it acquires show only a circular fence; it cannot acquire the information that the movable shelter 3 is an exit, and therefore cannot determine a correct route to walk out of the enclosure.
As shown in fig. 2, a plurality of fixed shelters 2 are placed in different arrangements on two paths (path one and path two) separated by an intermediate isolation belt 4, forming routes of different difficulty. The arrangement of the fixed shelters 2 at the starting intersection of the two paths is the same. If the lidar-navigated self-walking robot 1 is placed at that intersection and made to pass through, the obstacles and the difficulty it may encounter ahead appear identical on either path, so it cannot identify the optimal path, even though path two is clearly better than path one.
Disclosure of Invention
In view of the above, a self-walking robot that can be applied to a more complicated environment and a system that is applicable to the robot are provided.
In addition, a control method of the self-walking robot is also provided.
A self-walking robot based on the fusion of laser navigation radar and computer vision perception comprises a robot body for walking movement, a laser radar arranged on the robot body, a first telescopic bracket vertically and fixedly arranged on the robot body, and a first panoramic camera module arranged at the top end of the first telescopic bracket; the first telescopic bracket is connected to the first panoramic camera module by a revolute pair; a first vector angle sensor is arranged between the first telescopic bracket and the first panoramic camera module; the first vector angle sensor is used to acquire the rotation angle and rotation direction of the first panoramic camera module when it rotates relative to the first telescopic bracket, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the first panoramic camera module and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent; the height of the first panoramic camera module above the ground is greater than that of the laser radar; the first telescopic bracket is used to automatically adjust the height of the first panoramic camera module above the ground and to provide a first fixed reference for the first vector angle sensor; the laser radar is used to acquire the distance between the robot body and surrounding obstacles; the first panoramic camera module is used to acquire a first color image of the obstacles around the robot body so as to assist in judging their physical and/or chemical properties.
In one embodiment, the first panoramic camera module is connected to the first telescopic bracket through a first camera stabilizer; the first camera stabilizer is connected to the first telescopic bracket through a revolute pair; the first panoramic camera module comprises a first lens module, a first lens support rod and a first gravity block; the first lens support rod is vertically and fixedly mounted on the camera mounting seat of the first camera stabilizer; the first lens module is arranged at the top end of the first lens support rod; the first gravity block is arranged on the back of the camera mounting seat to lower the center of gravity of the first panoramic camera module and increase the torque of the camera mounting seat relative to the first camera stabilizer, so that the first lens support rod stays vertical; the first vector angle sensor is arranged between the first telescopic bracket and the first camera stabilizer and is used to acquire the rotation angle and rotation direction of the first camera stabilizer when it rotates relative to the first telescopic bracket, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the first panoramic camera module and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent.
In one embodiment, the first lens module includes a first wide-angle lens, a second wide-angle lens and a third wide-angle lens; the shooting angle of each of the three lenses is 120 degrees, so that a 360-degree panoramic camera is obtained by stitching and synthesis.
In one embodiment, the robot further comprises a second telescopic bracket and a second panoramic camera module arranged at the top end of the second telescopic bracket; the second telescopic bracket is connected to the second panoramic camera module by a revolute pair; a second vector angle sensor is arranged between the second telescopic bracket and the second panoramic camera module; the second vector angle sensor is used to acquire the rotation angle and rotation direction of the second panoramic camera module when it rotates relative to the second telescopic bracket, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the second panoramic camera module and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent. The second telescopic bracket is vertically arranged and fixedly connected to the robot body; the height of the second panoramic camera module above the ground is greater than that of the laser radar and less than that of the first panoramic camera module; the second telescopic bracket is used to automatically adjust the height of the second panoramic camera module above the ground and to provide a second fixed reference for the second vector angle sensor; the second fixed reference is the same as the first fixed reference; the horizontal distance between the second panoramic camera module and the first panoramic camera module is greater than 0; the distance between the second telescopic bracket and the first telescopic bracket is greater than 0; the second panoramic camera module is used to acquire a second color image of the obstacles around the robot body, which is synthesized with the first color image into a three-dimensional image. In this way up to 6 three-dimensional images can be synthesized, and up to 5 reference faces of an object can be seen.
In one embodiment, the second panoramic camera module is connected to the second telescopic bracket through a second camera stabilizer; the second camera stabilizer is connected to the second telescopic bracket through a revolute pair; the second panoramic camera module comprises a second lens module, a second lens support rod and a second gravity block; the second lens support rod is vertically and fixedly mounted on the camera mounting seat of the second camera stabilizer; the second lens module is arranged at the top end of the second lens support rod; the second gravity block is arranged on the back of the camera mounting seat to lower the center of gravity of the second panoramic camera module and increase the torque of the camera mounting seat relative to the second camera stabilizer, so that the second lens support rod stays vertical; the second vector angle sensor is arranged between the second telescopic bracket and the second camera stabilizer and is used to acquire the rotation angle and rotation direction of the second camera stabilizer when it rotates relative to the second telescopic bracket, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the second panoramic camera module and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent.
In one embodiment, the second lens module includes a fourth wide-angle lens, a fifth wide-angle lens and a sixth wide-angle lens; the shooting angle of each of the three lenses is 120 degrees, so that a 360-degree panoramic camera is obtained by stitching and synthesis.
In one embodiment, the robot further comprises an audio receiver, a mechanical arm arranged on the robot body, and a hammer arranged at the tail end of the mechanical arm for knocking; the audio receiver is used for receiving sound of the hammer hitting an object.
In one embodiment, the robot further comprises an audio receiver, an end motor, and a mechanical arm, a mechanical arm mounting seat and a third telescopic bracket arranged on the robot body; the mechanical arm is connected to the robot body through the mechanical arm mounting seat; the mechanical arm is a telescopic straight rod; the mechanical arm is connected to the mechanical arm mounting seat by a revolute pair; the mechanical arm is connected to the third telescopic bracket by a revolute pair, so that extension and retraction of the third telescopic bracket drive the mechanical arm to rotate around the mechanical arm mounting seat. An electrically controlled ejection hammer module is arranged at the end of the mechanical arm; the electrically controlled ejection hammer module is connected to the end of the mechanical arm by a revolute pair and is rotated relative to the mechanical arm by the end motor; the audio receiver is arranged on the electrically controlled ejection hammer module, with its receiving end facing the direction in which the module strikes an object.
In one embodiment, the robot further comprises an audio receiver and an ultrasonic transmitter; the ultrasonic transmitter is used to transmit ultrasonic waves; the audio receiver is used to receive the ultrasonic waves emitted by the ultrasonic transmitter and reflected back by obstacles, so as to realize the navigation function in rainy, foggy and similar weather conditions.
In one embodiment, the robot further comprises a first communication module and a second communication module; the first communication module is used to connect to the Internet for remote control and data maintenance; the second communication module is used for real-time communication after pairing with other robots in the same area, so that the robots can learn from one another.
In one embodiment, the robot further comprises an anemometer module arranged on the robot body; the anemometer module is used to measure the wind speed at the position of the robot body. Because the panoramic camera module in this application is erected on a telescopic bracket and normally sits relatively high, an excessive wind speed would make the moment arm too large. The anemometer module is therefore provided so that, when the measured wind speed exceeds a set value, the telescopic bracket carrying the panoramic camera module is controlled to shorten, reducing the moment arm.
In one embodiment, the robot further comprises a wind direction indicator module arranged on the robot body; the wind direction indicator module is used to measure the wind direction at the position of the robot body. With this arrangement, the posture of the robot body can be adjusted according to the wind direction so that it operates with minimum wind resistance.
In one embodiment, a first laser radar, a second laser radar, a third laser radar and a fourth laser radar are sequentially arranged around the robot body; the first laser radar, the second laser radar, the third laser radar and the fourth laser radar are used for respectively acquiring distances between the robot and surrounding obstacles.
In the self-walking robot based on the fusion of laser navigation radar and computer vision perception provided above, mounting the panoramic camera module on a vertically arranged telescopic bracket makes it possible to acquire color or text information that the laser radar cannot acquire. This assists in judging the physical and/or chemical properties of obstacles around the robot, improves the robot's cognition of its surroundings, and thereby widens the scenarios in which the self-walking robot can be applied.
Based on the above, the application also provides a self-walking robot system based on the fusion of laser navigation radar and computer vision perception.
A self-walking robot system based on laser navigation radar and computer vision perception fusion comprises a central processing unit, an image data processing module, a laser navigation radar data processing module, a first panoramic camera module, a first vector angle sensor and a laser navigation radar module; the image data processing module and the laser navigation radar data processing module are connected with the central processing unit; the first panoramic camera module is connected with the image data processing module; the laser navigation radar module is connected with the laser navigation radar data processing module; the first vector angle sensor is respectively connected with the first panoramic camera module and the central processing unit; the laser navigation radar module is used for acquiring the distance between the robot body and the surrounding obstacles; the first panoramic camera module is used for acquiring a first color image of an obstacle around the robot body so as to assist in judging the physical and/or chemical properties of the obstacle around the robot body; the image data processing module is used for controlling the first panoramic camera module and performing first-stage data processing on the first color image; the laser navigation radar data processing module is used for controlling the laser navigation radar module and performing first-stage processing on data acquired by the laser navigation radar module; the first vector angle sensor is used for acquiring the angle and the direction of the reference deviation of the first panoramic camera module relative to the laser navigation radar module; and the central processing unit performs second-level data processing on the data output by the image data processing module and the laser navigation radar data processing module through the data acquired by the first vector angle sensor, and corrects the reference deviation of the first panoramic camera module relative to the laser navigation radar module caused by rotation, so that the object azimuth in the first color image is the same as the object azimuth acquired by the laser navigation radar.
In one embodiment, the system further comprises a second panoramic camera module and a second vector angle sensor; the second panoramic camera module is connected with the image data processing module; the second vector angle sensor is respectively connected with the second panoramic camera module and the central processing unit; the second panoramic camera shooting module is used for acquiring a second color image of obstacles around the robot body and synthesizing the second color image with the first color image into a three-dimensional image; the second vector angle sensor is used for acquiring the angle and the direction of the reference deviation of the second panoramic camera module relative to the laser navigation radar module; the central processing unit performs second-level data processing on data output by the image data processing module and the laser navigation radar data processing module through data acquired by the second vector angle sensor, and corrects a reference deviation of the second panoramic camera module relative to the laser navigation radar module caused by rotation, so that the object azimuth in the second color image is the same as the object azimuth acquired by the laser navigation radar; the central processing unit or the second panoramic camera module is further used for synthesizing the first color image and the second color image into a three-dimensional image.
In one embodiment, the first panoramic camera module comprises a first wide-angle lens, a second wide-angle lens and a third wide-angle lens; the second panoramic camera module comprises a fourth wide-angle lens, a fifth wide-angle lens and a sixth wide-angle lens; the first wide-angle lens and the sixth wide-angle lens are connected with the image data processing module after being matched and connected; the second wide-angle lens is connected with the fifth wide-angle lens in a matching way and then is connected with the image data processing module; the third wide-angle lens and the fourth wide-angle lens are connected in a matched mode and then connected with the image data processing module; the central processing unit is also used for correcting the relative rotation angle of the first panoramic camera module and the second panoramic camera module through the data of the first vector angle sensor and the second vector angle sensor.
In one embodiment, the laser navigation radar module comprises a first laser radar, a second laser radar, a third laser radar and a fourth laser radar; the first laser radar, the second laser radar, the third laser radar and the fourth laser radar are sequentially arranged around the robot to respectively obtain the distance between the robot and surrounding obstacles; and the laser navigation radar data processing module is also used for splicing and synthesizing data acquired by the first laser radar, the second laser radar, the third laser radar and the fourth laser radar into a complete radar map and outputting the radar map to the central processing unit for further processing.
In one embodiment, the system further comprises an electrically controlled ejection hammer module and an audio receiver, each connected to the central processing unit; the electrically controlled ejection hammer module is used to knock an object so that it makes a sound; the audio receiver is used to receive the sound produced when the electrically controlled ejection hammer module strikes an object; the central processing unit is further used to perform spectrum analysis on that sound and to output the type and basic physical and/or chemical properties of the knocked object.
In one embodiment, the system further comprises an ultrasonic transmitter connected to the central processing unit; the ultrasonic transmitter is used to transmit ultrasonic waves; the audio receiver is also used to receive the ultrasonic waves emitted by the ultrasonic transmitter and reflected back by obstacles, so as to realize the robot's navigation function in rainy, foggy and similar weather conditions.
In one embodiment, the ultrasonic transmitter is further connected to the audio receiver.
In one embodiment, the system further comprises a first communication module and a second communication module which are connected with the central processing unit; the first communication module is used for being connected with the Internet so as to realize remote control and data maintenance; the second communication module is used for real-time communication after being paired with other robots in the same area, so that the function of mutual learning among the robots is realized.
In one embodiment, the system further comprises a wind direction indicator module connected to the central processing unit; the wind direction indicator module is used to measure the wind direction at the robot's position.
In one embodiment, the system further comprises an anemometer module connected to the central processing unit; the anemometer module is used to measure the wind speed at the robot's position.
In one embodiment, the system further comprises a storage module connected with the central processor; the storage module stores a control program which can be operated by the central processing unit and attribute information of common objects, and is used for storing information generated when each functional module operates.
In the self-walking robot system based on the fusion of laser navigation radar and computer vision perception provided above, the panoramic camera module makes it possible to acquire color or text information that the laser radar cannot obtain. This assists in judging the physical and/or chemical properties of obstacles around the robot, improves the robot's cognition of its surroundings, and thereby widens the scenarios in which the self-walking robot can be applied.
In addition, the application also provides a control method of the self-walking robot based on the fusion of the laser navigation radar and the computer vision perception.
A control method of a self-walking robot based on laser navigation radar and computer vision perception fusion is disclosed, wherein the self-walking robot is the self-walking robot in any embodiment, and the method comprises the steps that the laser navigation radar acquires a reflecting surface of an obstacle around the robot; the central processing unit judges whether the maximum distance between the obstacles is smaller than the minimum passing distance of the robot or not, and if yes, the first panoramic camera module is started.
In one embodiment, the method further comprises testing the maximum distance at which the laser navigation radar can detect obstacles around the robot; and judging whether that maximum distance is smaller than or equal to a first set value, and if so, starting the ultrasonic transmitter and the audio receiver.
With this method, the laser navigation radar is used to make a preliminary judgment of the surrounding obstacle environment, and when laser radar navigation alone cannot meet the requirements, computer vision perception is activated to recognize the surrounding environment in greater depth, so as to find the optimal action path.
Drawings
FIG. 1 is a schematic diagram of a prior-art barrier fence constructed to illustrate the shortcomings of existing robots;
FIG. 2 is a schematic diagram of a prior-art barrier arrangement constructed to illustrate the shortcomings of existing robots;
fig. 3 is a schematic structural view of a self-walking robot provided with a panoramic camera module according to an embodiment;
fig. 4 is a schematic structural view of a self-walking robot provided with two panoramic camera modules according to an embodiment;
fig. 5 is a schematic structural diagram of a self-walking robot system based on laser navigation radar and computer vision perception fusion according to an embodiment.
Description of reference numerals:
1. a laser-radar-navigated self-walking robot; 2. a fixed shelter; 3. a movable shelter; 4. an intermediate isolation belt;
10. a robot body; 21. a first telescopic bracket; 22. a first camera stabilizer; 23. a second telescopic bracket; 24. a second camera stabilizer; 31. a mechanical arm; 32. a mechanical arm mounting seat; 33. a third telescopic bracket; 34. an end motor;
110. a first laser radar; 120. a second laser radar; 130. a third laser radar; 140. a fourth laser radar; 210. a first panoramic camera module; 211. a first wide-angle lens; 212. a second wide-angle lens; 213. a third wide-angle lens; 214. a first lens support rod; 215. a first gravity block; 220. a second panoramic camera module; 221. a fourth wide-angle lens; 222. a fifth wide-angle lens; 223. a sixth wide-angle lens; 224. a second lens support rod; 225. a second gravity block; 300. an electrically controlled ejection hammer module; 410. an audio receiver; 420. an ultrasonic transmitter; 510. a first vector angle sensor; 520. a second vector angle sensor; 610. a first communication module; 620. a second communication module; 710. an anemometer module; 720. a wind direction indicator module; 810. a central processing unit; 820. an image data processing module; 830. a laser navigation radar data processing module; 840. a storage module.
Detailed Description
FIGS. 1-5, discussed below, and the various embodiments used to describe the principles or methods of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. It will be appreciated by those skilled in the art that the principles or methods of the present disclosure may be implemented in any suitably arranged robot. Preferred embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings. In the following description, detailed descriptions of well-known functions or configurations are omitted so as not to obscure the subject matter of the present disclosure with unnecessary detail. The terms used herein are defined according to the functions of the present invention and may differ according to the intention or usage of a user or operator; they must therefore be understood based on the description made herein.
A self-walking robot based on the fusion of laser navigation radar and computer vision perception is provided. As shown in fig. 3, it includes a robot body 10 for walking movement, a laser radar arranged on the robot body 10, a first telescopic bracket 21 vertically and fixedly arranged on the robot body 10, and a first panoramic camera module 210 arranged at the top end of the first telescopic bracket 21. The first telescopic bracket 21 is connected to the first panoramic camera module 210 by a revolute pair. A first vector angle sensor 510 is arranged between the first telescopic bracket 21 and the first panoramic camera module 210. The first vector angle sensor 510 is used to acquire the rotation angle and rotation direction of the first panoramic camera module 210 when it rotates relative to the first telescopic bracket 21, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the first panoramic camera module 210 and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent. The height of the first panoramic camera module 210 above the ground is greater than that of the laser radar. The first telescopic bracket 21 is used to automatically adjust the height of the first panoramic camera module 210 above the ground and to provide a first fixed reference for the first vector angle sensor 510. The laser radar is used to acquire the distance between the robot body 10 and surrounding obstacles. The first panoramic camera module 210 is used to acquire a first color image of the obstacles around the robot body 10 so as to assist in judging their physical and/or chemical properties.
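By way of illustration only, the correction performed with the vector angle sensor data can be sketched as re-referencing a bearing measured in the camera frame back to the fixed reference shared with the laser radar. The function name, units and sign convention in the following sketch are assumptions and not part of the disclosed design:

```python
def correct_bearing(camera_bearing_deg: float,
                    rotation_deg: float,
                    rotation_ccw: bool) -> float:
    """Re-reference an object's bearing from the panoramic-camera frame to the
    fixed reference of the telescopic bracket / laser radar, using the rotation
    magnitude and direction reported by the vector angle sensor.
    (Sign convention and units are illustrative assumptions.)"""
    offset = rotation_deg if rotation_ccw else -rotation_deg
    # Undo the camera module's rotation relative to the bracket, then wrap
    # the result into [0, 360) degrees.
    return (camera_bearing_deg - offset) % 360.0


# Example: if the camera module has rotated 15 degrees clockwise, an object
# seen at 90 degrees in the image lies at 105 degrees in the lidar frame.
assert correct_bearing(90.0, 15.0, rotation_ccw=False) == 105.0
```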
In one embodiment, as shown in fig. 3, the first panoramic camera module 210 is connected to the first telescopic bracket 21 through the first camera stabilizer 22. The first camera stabilizer 22 is connected to the first telescopic bracket 21 through a revolute pair. The first panoramic camera module 210 includes a first lens module, a first lens support rod 214, and a first gravity block 215. The first lens support rod 214 is vertically and fixedly mounted on the camera mounting seat of the first camera stabilizer 22. The first lens module is arranged at the top end of the first lens support rod 214. The first gravity block 215 is arranged on the back of the camera mounting seat to lower the center of gravity of the first panoramic camera module 210 and increase the torque of the camera mounting seat relative to the first camera stabilizer 22, so that the first lens support rod 214 stays vertical. The first vector angle sensor 510 is arranged between the first telescopic bracket 21 and the first camera stabilizer 22 and is used to acquire the rotation angle and rotation direction of the first camera stabilizer 22 when it rotates relative to the first telescopic bracket 21, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the first panoramic camera module 210 and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent.
In one embodiment, as shown in fig. 3, the first lens module includes a first wide-angle lens 211, a second wide-angle lens 212, and a third wide-angle lens 213. The shooting angle of each of the three lenses is 120 degrees, so that a 360-degree panoramic camera is obtained by stitching and synthesis.
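For illustration only, a simplified mapping from a pixel column of one of the three 120-degree lenses to an azimuth in the stitched panorama might look like the sketch below; the lens ordering, the linear column-to-angle relation, and the omission of distortion correction and blending are all assumptions rather than part of the disclosure:

```python
def column_to_azimuth(lens_index: int, column: int, width: int) -> float:
    """Map a pixel column of one 120-degree wide-angle lens to an azimuth in the
    stitched 360-degree panorama (linear mapping assumed for illustration)."""
    if lens_index not in (0, 1, 2):
        raise ValueError("three lenses cover 3 x 120 = 360 degrees")
    sector_start = lens_index * 120.0              # each lens owns one 120-degree sector
    within_sector = 120.0 * column / max(width - 1, 1)
    return (sector_start + within_sector) % 360.0


# Example: the middle column of the second lens maps to the 180-degree direction.
assert column_to_azimuth(1, 500, 1001) == 180.0
```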
In one embodiment, as shown in fig. 4, the robot further includes a second telescopic bracket 23 and a second panoramic camera module 220 arranged at the top end of the second telescopic bracket 23. The second telescopic bracket 23 is connected to the second panoramic camera module 220 by a revolute pair. A second vector angle sensor 520 is arranged between the second telescopic bracket 23 and the second panoramic camera module 220. The second vector angle sensor 520 is used to acquire the rotation angle and rotation direction of the second panoramic camera module 220 when it rotates relative to the second telescopic bracket 23, so as to correct the deviation, caused by that rotation, between the orientation of an object in the picture acquired by the second panoramic camera module 220 and the orientation of the same object acquired by the laser radar, making the orientations reported by the two consistent. The second telescopic bracket 23 is vertically arranged and fixedly connected to the robot body 10. The height of the second panoramic camera module 220 above the ground is greater than that of the laser radar and less than that of the first panoramic camera module 210. The second telescopic bracket 23 is used to automatically adjust the height of the second panoramic camera module 220 above the ground and to provide a second fixed reference for the second vector angle sensor 520. The second fixed reference is the same as the first fixed reference. The horizontal distance between the second panoramic camera module 220 and the first panoramic camera module 210 is greater than 0. The distance between the second telescopic bracket 23 and the first telescopic bracket 21 is greater than 0. The second panoramic camera module 220 is used to acquire a second color image of the obstacles around the robot body 10, which is synthesized with the first color image into a three-dimensional image. In this way up to 6 three-dimensional images can be synthesized, and up to 5 reference faces of an object can be seen.
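One simple way to picture the three-dimensional synthesis is classical stereo triangulation over the baseline between the two camera modules; the pinhole approximation below ignores the panoramic optics and uses assumed parameter names, so it is a sketch rather than the patent's method:

```python
def depth_from_vertical_disparity(baseline_m: float,
                                  focal_px: float,
                                  disparity_px: float) -> float:
    """Estimate range to a feature seen by both panoramic camera modules.

    baseline_m:   vertical distance between the two modules (metres).
    focal_px:     focal length in pixels (identical lenses assumed).
    disparity_px: vertical pixel offset of the same feature between the two images.
    """
    if disparity_px <= 0.0:
        raise ValueError("a visible feature must have positive disparity")
    return baseline_m * focal_px / disparity_px


# Example: 0.3 m baseline, 800 px focal length, 12 px disparity -> 20 m range.
print(depth_from_vertical_disparity(0.3, 800.0, 12.0))
```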
In one embodiment, as shown in fig. 4, the second panoramic camera module 220 is connected to the second telescopic bracket 23 through a second camera stabilizer 24. The second camera stabilizer 24 is connected with the second telescopic bracket 23 through a revolute pair. The second panoramic photographing module 220 includes a second lens module, a second lens supporting rod 224 and a second gravity block 225. The second lens support rod 224 is vertically fixedly mounted on the camera mount of the second camera stabilizer 24. The second lens module is disposed at the top end of the second lens support rod 224. The second weight block 225 is disposed at the rear of the camera mount to lower the center of gravity of the second panorama camera module 220 and to increase the torque of the camera mount with respect to the second camera stabilizer 24, thereby allowing the second lens support rod 224 to maintain a vertical state. The second vector angle sensor 520 is disposed between the second telescopic bracket 23 and the second camera stabilizer 24, and is configured to obtain a rotation angle and a rotation direction when the second camera stabilizer 24 rotates relative to the second telescopic bracket 23, so as to correct a deviation between an orientation of an object in a picture obtained by the second panoramic image capturing module 220 and an orientation of the object obtained by the laser radar due to the rotation of the second panoramic image capturing module 220 relative to the second telescopic bracket 23, and make the orientations of the same object obtained by the two identical.
In one embodiment, as shown in fig. 4, the second lens module includes a fourth wide-angle lens 221, a fifth wide-angle lens 222, and a sixth wide-angle lens 223. The shooting angle of each of the three lenses is 120 degrees, so that a 360-degree panoramic camera is obtained by stitching and synthesis.
In one embodiment, as shown in fig. 3 or 4, the robot further includes an audio receiver 410, a mechanical arm 31 arranged on the robot body 10, and a hammer (which may be regarded as reference 300) arranged at the end of the mechanical arm 31 for knocking. The audio receiver 410 is used to receive the sound of the hammer striking an object.
In one embodiment, as shown in fig. 3 or 4, the robot further includes an audio receiver 410, an end motor 34, and a mechanical arm 31, a mechanical arm mounting seat 32, and a third telescopic bracket 33 arranged on the robot body 10. The mechanical arm 31 is connected to the robot body 10 via the mechanical arm mounting seat 32. The mechanical arm 31 is a telescopic straight rod. The mechanical arm 31 is connected to the mechanical arm mounting seat 32 by a revolute pair. The mechanical arm 31 is connected to the third telescopic bracket 33 by a revolute pair, so that extension and retraction of the third telescopic bracket 33 rotate the mechanical arm 31 around the mechanical arm mounting seat 32. An electrically controlled ejection hammer module 300 is arranged at the end of the mechanical arm 31. The electrically controlled ejection hammer module 300 is connected to the end of the mechanical arm 31 by a revolute pair, and the end motor 34 drives the relative rotation between the electrically controlled ejection hammer module 300 and the mechanical arm 31. The audio receiver 410 is arranged on the electrically controlled ejection hammer module 300, with its receiving end facing the direction in which the electrically controlled ejection hammer module 300 strikes an object.
In one embodiment, as shown in fig. 3 or fig. 4, the robot further includes an audio receiver 410 and an ultrasonic transmitter 420. The ultrasonic transmitter 420 is used to transmit ultrasonic waves. The audio receiver 410 is used to receive the ultrasonic waves emitted by the ultrasonic transmitter 420 and reflected back by obstacles, so as to realize the navigation function in rainy, foggy and similar weather conditions.
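The ultrasonic channel amounts to time-of-flight ranging; a minimal sketch follows, where the speed of sound, the variable names and the absence of temperature compensation are assumptions:

```python
SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 degrees C; varies with temperature


def ultrasonic_range(echo_delay_s: float) -> float:
    """Distance to an obstacle from the round-trip time between the ultrasonic
    transmitter firing and the audio receiver picking up the reflected pulse."""
    return echo_delay_s * SPEED_OF_SOUND_M_S / 2.0


# Example: an echo received 0.02 s after transmission implies about 3.43 m.
print(ultrasonic_range(0.02))
```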
In one embodiment, as shown in fig. 3 or fig. 4, the robot further includes a first communication module 610 and a second communication module 620. The first communication module 610 is used to connect to the Internet for remote control and data maintenance. The second communication module 620 is used for real-time communication after pairing with other robots in the same area, so that the robots can learn from one another.
In one embodiment, as shown in fig. 3 or fig. 4, the robot further includes an anemometer module 710 arranged on the robot body 10. The anemometer module 710 is used to measure the wind speed at the position of the robot body 10. Because the panoramic camera module in this application is erected on a telescopic bracket and normally sits relatively high, an excessive wind speed would make the moment arm too large. The anemometer module 710 is therefore provided so that, when the measured wind speed exceeds a set value, the telescopic bracket carrying the panoramic camera module is controlled to shorten, reducing the moment arm.
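A minimal sketch of this wind-speed safeguard, with the threshold and the two bracket heights as purely illustrative values:

```python
def bracket_height_command(wind_speed_m_s: float,
                           wind_limit_m_s: float = 10.0,
                           normal_height_m: float = 1.8,
                           retracted_height_m: float = 0.8) -> float:
    """Shorten the telescopic bracket carrying the panoramic camera module when
    the measured wind speed exceeds the set value, reducing the moment arm.
    All numeric values are illustrative assumptions."""
    return retracted_height_m if wind_speed_m_s > wind_limit_m_s else normal_height_m
```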
In one embodiment, as shown in fig. 3 or 4, the robot further includes a wind direction indicator module 720 arranged on the robot body 10. The wind direction indicator module 720 is used to measure the wind direction at the position of the robot body 10. With this arrangement, the posture of the robot body 10 can be adjusted according to the wind direction so that it operates with minimum wind resistance.
In one embodiment, as shown in fig. 3 or 4, a first laser radar 110, a second laser radar 120, a third laser radar 130, and a fourth laser radar 140 (not shown) are sequentially disposed around the robot body 10. The first laser radar 110, the second laser radar 120, the third laser radar 130 and the fourth laser radar 140 are used for acquiring distances between the robot and surrounding obstacles respectively.
In the self-walking robot based on the fusion of laser navigation radar and computer vision perception provided above, mounting the panoramic camera module on a vertically arranged telescopic bracket makes it possible to acquire color or text information that the laser radar cannot acquire. This assists in judging the physical and/or chemical properties of obstacles around the robot, improves the robot's cognition of its surroundings, and thereby widens the scenarios in which the self-walking robot can be applied.
Based on the above, the application also provides a self-walking robot system based on the fusion of laser navigation radar and computer vision perception.
A self-walking robot system based on the fusion of laser navigation radar and computer vision perception is provided. As shown in fig. 5, it includes a central processing unit 810, an image data processing module 820, a laser navigation radar data processing module 830, a first panoramic camera module 210, a first vector angle sensor 510 and a laser navigation radar module. The image data processing module 820 and the laser navigation radar data processing module 830 are connected to the central processing unit 810. The first panoramic camera module 210 is connected to the image data processing module 820. The laser navigation radar module is connected to the laser navigation radar data processing module 830. The first vector angle sensor 510 is connected to the first panoramic camera module 210 and the central processing unit 810, respectively. The laser navigation radar module is used to acquire the distance between the robot body 10 and surrounding obstacles. The first panoramic camera module 210 is used to acquire a first color image of the obstacles around the robot body 10 so as to assist in judging their physical and/or chemical properties. The image data processing module 820 is used to control the first panoramic camera module 210 and to perform first-stage data processing on the first color image. The laser navigation radar data processing module 830 is used to control the laser navigation radar module and to perform first-stage processing on the data it acquires. The first vector angle sensor 510 is used to acquire the angle and direction of the reference deviation of the first panoramic camera module 210 relative to the laser navigation radar module. Using the data acquired by the first vector angle sensor 510, the central processing unit 810 performs second-stage data processing on the data output by the image data processing module 820 and the laser navigation radar data processing module 830 and corrects the reference deviation of the first panoramic camera module 210 relative to the laser navigation radar module caused by rotation, so that the object azimuths in the first color image are the same as those acquired by the laser navigation radar.
In one embodiment, as shown in fig. 5, the system further includes a second panoramic camera module 220 and a second vector angle sensor 520. The second panoramic camera module 220 is connected to the image data processing module 820. The second vector angle sensor 520 is connected to the second panoramic camera module 220 and the central processing unit 810, respectively. The second panoramic camera module 220 is used to acquire a second color image of the obstacles around the robot body 10, which is synthesized with the first color image into a three-dimensional image. The second vector angle sensor 520 is used to acquire the angle and direction of the reference deviation of the second panoramic camera module 220 relative to the laser navigation radar module. Using the data acquired by the second vector angle sensor 520, the central processing unit 810 performs second-stage data processing on the data output by the image data processing module 820 and the laser navigation radar data processing module 830 and corrects the reference deviation of the second panoramic camera module 220 relative to the laser navigation radar module caused by rotation, so that the object azimuths in the second color image are the same as those acquired by the laser navigation radar. The central processing unit 810 or the second panoramic camera module 220 is further used to synthesize the first color image and the second color image into a three-dimensional image.
In one embodiment, as shown in fig. 5, the first panorama camera module 210 includes a first wide-angle lens 211, a second wide-angle lens 212, and a third wide-angle lens 213. The second panoramic camera module 220 includes a fourth wide-angle lens 221, a fifth wide-angle lens 222, and a sixth wide-angle lens 223. The first wide-angle lens 211 and the sixth wide-angle lens 223 are connected in a paired manner and then connected to the image data processing module 820. The second wide-angle lens 212 is connected to the image data processing module 820 after being connected to the fifth wide-angle lens 222 in a paired manner. The third wide-angle lens 213 and the fourth wide-angle lens 221 are connected in a paired manner and then connected to the image data processing module 820. The central processor 810 is further configured to correct the relative rotation angle of the first panoramic camera module 210 and the second panoramic camera module 220 through the data of the first vector angle sensor 510 and the second vector angle sensor 520.
In one embodiment, as shown in fig. 5, the laser navigation radar module includes a first laser radar 110, a second laser radar 120, a third laser radar 130, and a fourth laser radar 140. The first laser radar 110, the second laser radar 120, the third laser radar 130 and the fourth laser radar 140 are sequentially arranged around the robot to respectively obtain distances between the robot and surrounding obstacles. Laser navigation radar data processing module 830 is further configured to splice data acquired by first laser radar 110, second laser radar 120, third laser radar 130, and fourth laser radar 140 into a complete radar map, and output the radar map to central processing unit 810 for further processing.
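A sketch of how the four lidar scans could be fused into one radar map in the robot body frame, assuming the mounting pose of each lidar on the body is known (the patent does not specify the mounting geometry or data format, so the structure below is illustrative):

```python
import math


def merge_scans(scans):
    """Fuse several body-mounted lidars into a single map of x/y points in the
    robot body frame.

    scans: iterable of (mount_x_m, mount_y_m, mount_yaw_deg, points) tuples,
           one per lidar, where points is a list of (bearing_deg, range_m)
           pairs expressed in that lidar's own frame.
    """
    merged = []
    for mount_x, mount_y, mount_yaw, points in scans:
        for bearing_deg, range_m in points:
            angle = math.radians(mount_yaw + bearing_deg)
            merged.append((mount_x + range_m * math.cos(angle),
                           mount_y + range_m * math.sin(angle)))
    return merged
```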
In one embodiment, as shown in fig. 5, the system further includes an electrically controlled ejection hammer module 300 and an audio receiver 410, each connected to the central processing unit 810. The electrically controlled ejection hammer module 300 is used to knock an object so that it makes a sound. The audio receiver 410 is used to receive the sound produced when the electrically controlled ejection hammer module 300 strikes the object. The central processing unit 810 is further used to perform spectrum analysis on that sound and to output the type and basic physical and/or chemical properties of the knocked object.
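One plausible (assumed) form of the spectrum analysis is to compare the dominant frequency and overall spectral shape of the tap sound with stored reference spectra of common materials; the snippet below only extracts the dominant frequency and is a sketch, not the patented algorithm:

```python
import numpy as np


def dominant_tap_frequency(samples: np.ndarray, sample_rate_hz: float) -> float:
    """Dominant frequency of a recorded tap sound. Matching this value (and the
    wider spectrum) against stored reference spectra of common materials is one
    assumed way to estimate what kind of object was struck."""
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return float(freqs[np.argmax(spectrum)])
```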
In one embodiment, as shown in FIG. 5, an ultrasonic transmitter 420 is also included, which is connected to the central processor 810. The ultrasonic transmitter 420 is used to transmit ultrasonic waves. The audio receiver 410 is further configured to receive the ultrasonic waves emitted by the ultrasonic emitter 420 and reflected back through the obstacle, so as to implement a navigation function of the robot in weather conditions such as rainy days, heavy fog days, etc.
In one embodiment, as shown in FIG. 5, the ultrasonic transmitter 420 is also connected to the audio receiver 410.
In one embodiment, as shown in fig. 5, the system further includes a first communication module 610 and a second communication module 620 connected to the cpu 810. The first communication module 610 is used for connecting with the internet to realize remote control and data maintenance. The second communication module 620 is used for real-time communication after being paired with other robots in the same area, so as to realize the function of mutual learning among the robots.
In one embodiment, as shown in fig. 5, the system further includes a wind direction indicator module 720 connected to the central processing unit 810. The wind direction indicator module 720 is used to measure the wind direction at the robot's position.
In one embodiment, as shown in FIG. 5, an anemometer module 710 is also included, which is connected to the central processor 810. The anemometer module 710 is used for measuring the wind speed at the position of the robot.
In one embodiment, as shown in fig. 5, the system further comprises a memory module 840 connected to the central processor 810. The storage module 840 stores a control program executable by the cpu 810 and attribute information of a common object, and stores information generated when each function module operates.
In the self-walking robot system based on the fusion of laser navigation radar and computer vision perception provided above, the panoramic camera module makes it possible to acquire color or text information that the laser radar cannot obtain. This assists in judging the physical and/or chemical properties of obstacles around the robot, improves the robot's cognition of its surroundings, and thereby widens the scenarios in which the self-walking robot can be applied.
In addition, the application also provides a control method of the self-walking robot based on the fusion of the laser navigation radar and the computer vision perception.
A control method of a self-walking robot based on laser navigation radar and computer vision perception fusion, the self-walking robot is the self-walking robot in any one of the above embodiments, the method includes steps S110-S140:
and S110, acquiring the reflecting surface of the obstacle around the robot by the laser navigation radar.
S120, the central processing unit 810 judges whether the maximum distance between the obstacles is smaller than the minimum passing distance of the robot, if so, the step S130 is executed, otherwise, the step S140 is executed.
And S130, starting the first panoramic camera module 210.
S140: and only starting the laser navigation radar to continue running.
In one embodiment, the method further comprises steps S210-S240:
S210: test the maximum distance at which the laser navigation radar can detect obstacles around the robot.
S220: judge whether that maximum distance is smaller than or equal to a first set value; if so, execute step S230, otherwise execute step S240.
S230: start the ultrasonic transmitter 420 and the audio receiver 410.
S240: only the laser navigation radar is kept running.
With this method, the laser navigation radar is used to make a preliminary judgment of the surrounding obstacle environment, and when laser radar navigation alone cannot meet the requirements, computer vision perception is activated to recognize the surrounding environment in greater depth, so as to find the optimal action path.
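The two decision flows above reduce to two threshold checks; a compact sketch follows, in which the parameter names and the returned structure are illustrative and not part of the patent:

```python
def choose_sensors(max_obstacle_gap_m: float, min_passing_m: float,
                   max_lidar_range_m: float, first_set_value_m: float) -> dict:
    """Steps S110-S140 and S210-S240 as threshold checks: the lidar always runs;
    the panoramic camera module is switched on when obstacle gaps are too small
    to resolve a passable route from range data alone, and the ultrasonic
    transmitter/receiver pair is switched on when the usable lidar range
    collapses (for example in rain or fog)."""
    return {
        "lidar": True,                                           # always running
        "panoramic_camera": max_obstacle_gap_m < min_passing_m,  # S120 -> S130
        "ultrasonic": max_lidar_range_m <= first_set_value_m,    # S220 -> S230
    }
```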
The technical features of the above embodiments may be combined arbitrarily. For the sake of brevity, not every possible combination of these technical features is described, but as long as a combination contains no contradiction it should be regarded as falling within the scope of this specification.
The above examples show only some embodiments of the present invention and are described in relative detail, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
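To make the second-level data processing described above more concrete, the following minimal sketch shows how a rotation angle reported by the first vector angle sensor might be used to bring the azimuth of an object observed by the first panoramic camera module back into the reference frame of the laser navigation radar. The angle convention, sign handling and function names are assumptions made for illustration only and do not represent the implementation of this application.

```python
# Minimal sketch of the reference-deviation correction, assuming the first
# vector angle sensor reports how far the first panoramic camera module has
# rotated (degrees, counter-clockwise positive) relative to the fixed
# reference shared with the laser navigation radar.

def correct_azimuth(camera_azimuth_deg: float, sensor_rotation_deg: float) -> float:
    """Map an object azimuth measured in the rotated camera frame back into
    the laser navigation radar frame."""
    return (camera_azimuth_deg + sensor_rotation_deg) % 360.0

# Example: if the camera module has rotated 15 degrees and sees an object at
# an azimuth of 40 degrees, that object lies at 55 degrees in the radar frame.
print(correct_azimuth(40.0, 15.0))  # 55.0
```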
Claims (10)
1. A self-walking robot based on laser navigation radar and computer vision perception fusion is characterized by comprising a robot body for walking movement, a laser radar arranged on the robot body, a first telescopic bracket vertically and fixedly arranged on the robot body, and a first panoramic camera module arranged at the top end of the first telescopic bracket;
the first telescopic bracket is connected with the first panoramic camera module through a revolute pair;
a first vector angle sensor is arranged between the first telescopic bracket and the first panoramic camera module;
the first vector angle sensor is used for acquiring the rotation angle and the rotation direction of the first panoramic camera module when rotating relative to the first telescopic support;
the height of the first panoramic camera module from the ground is greater than that of the laser radar from the ground;
the first telescopic support is used for automatically adjusting the height of the first panoramic camera module from the ground and providing a first fixed reference for the first vector angle sensor;
the laser radar is used for acquiring the distance between the robot body and the surrounding obstacles;
the first panoramic camera module is used for acquiring a first color image of the obstacles around the robot body so as to assist in judging the physical and/or chemical properties of the obstacles around the robot body.
2. The self-walking robot of claim 1,
the first panoramic camera module is connected with the first telescopic bracket through a first camera stabilizer;
the first camera stabilizer is connected with the first telescopic bracket through a revolute pair;
the first panoramic camera module comprises a first lens module, a first lens supporting rod and a first gravity block;
the first lens supporting rod is vertically and fixedly arranged on the camera mounting seat of the first camera stabilizer;
the first lens module is arranged at the top end of the first lens supporting rod;
the first gravity block is arranged on the back of the camera mounting seat so as to lower the center of gravity of the first panoramic camera module and increase the torque of the camera mounting seat relative to the first camera stabilizer, so that the first lens supporting rod is kept vertical;
the first vector angle sensor is arranged between the first telescopic support and the first camera stabilizer and is used for acquiring the rotation angle and the rotation direction of the first camera stabilizer when it rotates relative to the first telescopic support.
3. The self-walking robot of claim 1,
the robot further comprises a second telescopic bracket and a second panoramic camera module arranged at the top end of the second telescopic bracket;
the second telescopic bracket is connected with the second panoramic camera module through a revolute pair;
a second vector angle sensor is arranged between the second telescopic bracket and the second panoramic camera module;
the second vector angle sensor is used for acquiring the rotation angle and the rotation direction of the second panoramic camera module when rotating relative to the second telescopic support;
the second telescopic bracket is vertically arranged and is fixedly connected with the robot body;
the height of the second panoramic camera module from the ground is greater than that of the laser radar and less than that of the first panoramic camera module from the ground;
the second telescopic support is used for automatically adjusting the height of the second panoramic camera module from the ground and providing a second fixed reference for the second vector angle sensor;
the second fixed reference is the same as the first fixed reference;
the horizontal distance between the second panoramic camera module and the first panoramic camera module is greater than 0;
the distance between the second telescopic bracket and the first telescopic bracket is greater than 0;
the second panoramic camera module is used for acquiring a second color image of the obstacles around the robot body and for synthesizing the second color image with the first color image into a three-dimensional image.
4. The self-walking robot of claim 1,
the robot further comprises an audio receiver, a mechanical arm arranged on the robot body and a hammer arranged at the tail end of the mechanical arm and used for knocking;
the audio receiver is used for receiving sound of the hammer hitting an object.
5. The self-walking robot of claim 1,
the robot further comprises an audio receiver, a tail end motor, a mechanical arm arranged on the robot body, a mechanical arm mounting seat and a third telescopic support;
the mechanical arm is connected with the robot body through the mechanical arm mounting seat;
the mechanical arm is a telescopic straight rod;
the mechanical arm is connected with the mechanical arm mounting seat through a revolute pair;
the mechanical arm is connected with the third telescopic bracket through a revolute pair;
an electrically controlled ejection hammer module is arranged at the tail end of the mechanical arm;
the electrically controlled ejection hammer module is connected with the tail end of the mechanical arm through a revolute pair and is driven by the tail end motor to rotate relative to the mechanical arm;
the audio receiver is arranged on the electrically controlled ejection hammer module, with its audio receiving end facing the direction in which the electrically controlled ejection hammer module strikes an object.
6. The self-walking robot of claim 1,
the robot further comprises a first communication module and a second communication module;
the first communication module is used for being connected with the Internet so as to realize remote control and data maintenance;
the second communication module is used for real-time communication after being paired with other robots in the same area, so that the function of mutual learning among the robots is realized.
7. A self-walking robot system based on laser navigation radar and computer vision perception fusion is characterized in that,
the system comprises a central processing unit, an image data processing module, a laser navigation radar data processing module, a first panoramic camera module, a first vector angle sensor and a laser navigation radar module;
the image data processing module and the laser navigation radar data processing module are connected with the central processing unit;
the first panoramic camera module is connected with the image data processing module;
the laser navigation radar module is connected with the laser navigation radar data processing module;
the first vector angle sensor is respectively connected with the first panoramic camera module and the central processing unit;
the laser navigation radar module is used for acquiring the distance between the robot body and the surrounding obstacles;
the first panoramic camera module is used for acquiring a first color image of an obstacle around the robot body so as to assist in judging the physical and/or chemical properties of the obstacle around the robot body;
the image data processing module is used for controlling the first panoramic camera module and performing first-stage data processing on the first color image;
the laser navigation radar data processing module is used for controlling the laser navigation radar module and performing first-stage processing on data acquired by the laser navigation radar module;
the first vector angle sensor is used for acquiring the angle and the direction of the reference deviation of the first panoramic camera module relative to the laser navigation radar module;
the central processing unit performs second-level data processing on the data output by the image data processing module and the laser navigation radar data processing module, using the data acquired by the first vector angle sensor to correct the reference deviation of the first panoramic camera module relative to the laser navigation radar module caused by rotation, so that the object azimuth in the first color image is the same as the object azimuth acquired by the laser navigation radar.
8. The system of claim 7,
the system further comprises an electrically controlled ejection hammer module and an audio receiver, which are respectively connected with the central processing unit;
the electrically controlled ejection hammer module is used for knocking an object to produce a sound;
the audio receiver is used for receiving the sound generated when the electrically controlled ejection hammer module knocks an object;
the central processing unit is further used for carrying out spectrum analysis on the sound emitted when the electrically controlled ejection hammer module knocks the object, and for outputting the type and the basic physical and/or chemical properties of the knocked object.
9. The system of claim 8,
the system further comprises an ultrasonic transmitter connected with the central processing unit;
the ultrasonic transmitter is used for transmitting ultrasonic waves;
the audio receiver is also used for receiving the ultrasonic waves emitted by the ultrasonic transmitter and reflected back by obstacles, so as to realize the navigation function of the robot in weather conditions such as rain and heavy fog.
10. The system of any one of claims 7 to 9,
further comprising a first communication module and a second communication module connected with the central processing unit;
The first communication module is used for being connected with the Internet so as to realize remote control and data maintenance;
the second communication module is used for real-time communication after being paired with other robots in the same area, so that the function of mutual learning among the robots is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010519604.2A CN112130555B (en) | 2020-06-09 | 2020-06-09 | Self-walking robot and system based on laser navigation radar and computer vision perception fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010519604.2A CN112130555B (en) | 2020-06-09 | 2020-06-09 | Self-walking robot and system based on laser navigation radar and computer vision perception fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112130555A true CN112130555A (en) | 2020-12-25 |
CN112130555B CN112130555B (en) | 2023-09-15 |
Family
ID=73850506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010519604.2A Active CN112130555B (en) | 2020-06-09 | 2020-06-09 | Self-walking robot and system based on laser navigation radar and computer vision perception fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112130555B (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002065721A (en) * | 2000-08-29 | 2002-03-05 | Komatsu Ltd | Device and method for supporting environmental recognition for visually handicapped |
CN101008575A (en) * | 2006-01-25 | 2007-08-01 | 刘海英 | Over-limit measuring instrument and method of railway transportation equipment |
JP2012051042A (en) * | 2010-08-31 | 2012-03-15 | Yaskawa Electric Corp | Robot system and robot control device |
CN205961294U (en) * | 2016-08-03 | 2017-02-15 | 北京威佳视科技有限公司 | Portable multimachine virtual studio in position is shot with video -corder and is broadcast all -in -one |
CN107036706A (en) * | 2017-05-27 | 2017-08-11 | 中国石油大学(华东) | One kind set tube vibration well head Monitor detection equipment |
JP2017156153A (en) * | 2016-02-29 | 2017-09-07 | 株式会社 ミックウェア | Navigation device, method for outputting obstacle information at navigation device, and program |
CA3112760A1 (en) * | 2016-06-30 | 2017-12-30 | Spin Master Ltd. | Assembly with object in housing and mechanism to open housing |
CN107730652A (en) * | 2017-10-30 | 2018-02-23 | 国家电网公司 | A kind of cruising inspection system, method and device |
CN107966989A (en) * | 2017-12-25 | 2018-04-27 | 北京工业大学 | A kind of robot autonomous navigation system |
CN107991662A (en) * | 2017-12-06 | 2018-05-04 | 江苏中天引控智能系统有限公司 | A kind of 3D laser and 2D imaging synchronous scanning device and its scan method |
WO2018097574A1 (en) * | 2016-11-24 | 2018-05-31 | 엘지전자 주식회사 | Mobile robot and control method thereof |
CN208760540U (en) * | 2018-09-25 | 2019-04-19 | 成都铂贝科技有限公司 | A kind of unmanned vehicle of more applications |
CN109855624A (en) * | 2019-01-17 | 2019-06-07 | 宁波舜宇智能科技有限公司 | Navigation device and air navigation aid for AGV vehicle |
CN209356928U (en) * | 2019-03-15 | 2019-09-06 | 上海海鸥数码照相机有限公司 | From walking robot formula 3D modeling data acquisition equipment |
CN110968081A (en) * | 2018-09-27 | 2020-04-07 | 广东美的生活电器制造有限公司 | Control method and control device of sweeping robot with telescopic camera |
WO2020077025A1 (en) * | 2018-10-12 | 2020-04-16 | Toyota Research Institute, Inc. | Systems and methods for conditional robotic teleoperation |
CN111203848A (en) * | 2019-12-17 | 2020-05-29 | 苏州商信宝信息科技有限公司 | Intelligent floor tile processing method and system based on big data processing and analysis |
Also Published As
Publication number | Publication date |
---|---|
CN112130555B (en) | 2023-09-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||