CN211517547U - Concave robot head with rotary disc - Google Patents

Concave robot head with rotary disc

Info

Publication number
CN211517547U
Authority
CN
China
Prior art keywords
turntable
shaft
robot
laser radar
auxiliary camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202020146582.5U
Other languages
Chinese (zh)
Inventor
史超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guoxin Taifu Technology Co ltd
Original Assignee
Shenzhen Guoxin Taifu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guoxin Taifu Technology Co ltd filed Critical Shenzhen Guoxin Taifu Technology Co ltd
Priority to CN202020146582.5U priority Critical patent/CN211517547U/en
Application granted granted Critical
Publication of CN211517547U publication Critical patent/CN211517547U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The utility model discloses a concave robot head provided with a turntable, comprising a head body, an embedded processor and a sensor group. The head body comprises a base, a neck connecting piece, two supports, a turntable and a rotating shaft; the neck connecting piece is rotatably connected to the bottom of the base; the two supports are respectively arranged on the left and right sides of the top of the base; the rotating shaft is rotatably connected to the top of the turntable. The sensor group comprises a laser radar and an auxiliary camera; the laser radar is arranged on the rotating shaft, and the auxiliary camera is arranged on the front side of the head body. The utility model obtains geometric data within a 360° range around the robot through the laser radar and obtains specific environmental data in front of the robot through the auxiliary camera; an operator can use a user interface to view the scene from any vantage point according to the three-dimensional map, and can also accurately control the actions of the robot according to the specific environment in front of the robot.

Description

Concave robot head with rotary disc
Technical Field
The utility model relates to the technical field of robots, and in particular to a concave robot head provided with a turntable.
Background
The laser radar is a radar system that detects characteristic quantities such as the position and velocity of a target by emitting a laser beam. Its working principle is to emit a detection signal (laser beam) toward the target, compare the received signal (target echo) reflected from the target with the emitted signal and, after appropriate processing, obtain relevant information about the target, such as distance, azimuth, height, speed, attitude and even shape, so that targets such as aircraft and missiles can be detected, tracked and identified.
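The ranging principle described above can be illustrated with a short sketch (purely illustrative and not part of the disclosed device; the function name and the example pulse timing are assumptions): the distance follows from half the round-trip time of the emitted pulse.

```python
# Illustrative sketch of laser time-of-flight ranging; not the firmware of the patented device.
# Assumes the round-trip travel time of one laser pulse has already been measured.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_round_trip(round_trip_time_s: float) -> float:
    """Return target distance in metres from the pulse's round-trip time."""
    # The pulse travels to the target and back, so divide the path length by two.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: an echo received 2 microseconds after emission corresponds to a target ~300 m away.
print(range_from_round_trip(2e-6))  # ≈ 299.79 m
```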
A binocular camera positions an object using two cameras. For a characteristic point on the object, two cameras fixed at different positions capture images of the object, and the coordinates of the point on the image planes of the two cameras are obtained. As long as the precise relative positions of the two cameras are known, the coordinates of the feature point in a coordinate system fixed to one of the cameras can be obtained geometrically, i.e., the position of the feature point is determined.
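A minimal sketch of this binocular positioning principle, assuming an idealized rectified stereo pair with known baseline and focal length (the parameter values and function name are illustrative, not taken from the utility model):

```python
# Minimal rectified-stereo triangulation sketch; assumes two identical, horizontally
# aligned cameras with known baseline (metres) and focal length (pixels).

def triangulate_point(u_left, v_left, u_right, f, cx, cy, baseline):
    """Recover (X, Y, Z) of a feature point in the left-camera coordinate frame.

    u/v are pixel coordinates of the same feature in each image;
    (cx, cy) is the principal point shared by both rectified cameras.
    """
    disparity = u_left - u_right          # horizontal shift of the feature between the two views
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or mismatched feature")
    Z = f * baseline / disparity          # depth grows as disparity shrinks
    X = (u_left - cx) * Z / f
    Y = (v_left - cy) * Z / f
    return X, Y, Z

# Example: a 20-pixel disparity with f = 800 px and a 0.12 m baseline gives a depth of 4.8 m.
print(triangulate_point(420, 260, 400, f=800, cx=320, cy=240, baseline=0.12))
```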
In the prior art, most robots achieve large-range positioning through laser radars, but the accuracy of laser radar positioning is not high; there are also robots that achieve small-range positioning with binocular cameras, but positioning with binocular cameras is computationally expensive and cannot be used directly for large-range accurate positioning. Because often only a single sensor structure is installed on the head of the robot, the acquired information is limited and cannot support the environment acquisition and feedback requirements of increasingly complex robot actions.
SUMMARY OF THE UTILITY MODEL
In view of the above problems, an object of the utility model is to provide a concave robot head provided with a turntable, which obtains geometric data within a 360° range around the robot through a laser radar, establishes a three-dimensional map of the robot's surroundings using the geometric data, and obtains specific environmental data in front of the robot through an auxiliary camera; an operator can use a user interface to view the scene from any vantage point according to the three-dimensional map, and can also accurately control the actions of the robot according to the specific environment in front of the robot.
In order to achieve the above object, the utility model adopts the following technical solution:
a concave robot head provided with a turntable comprises a head body, an embedded processor and a sensor group arranged on the head body;
the head body comprises a base, a neck connecting piece, two supports, a turntable and a rotating shaft; the neck connecting piece is rotatably connected to the bottom of the base; the two supports are respectively arranged on the left side and the right side of the top of the base; the turntable is arranged between the two supports, and two sides of the turntable are respectively connected to the two supports; the rotating shaft is vertically arranged on the turntable, and the bottom of the rotating shaft is rotatably connected with the turntable;
the sensor group comprises a laser radar and an auxiliary camera, the laser radar is arranged on the rotating shaft, and the auxiliary camera is arranged on the front side of the head body;
the embedded processor is disposed within the base and is connected to the lidar and the auxiliary camera.
In the above concave robot head provided with a turntable, the rotating shaft comprises a middle shaft and an outer shaft; the outer shaft is rotatably connected to the turntable; the middle shaft is coaxially inserted into the outer shaft, the top end of the middle shaft extends out of the top end of the outer shaft, and the middle shaft is also rotatably connected to the turntable;
the two laser radars are arranged one above the other, the upper laser radar is arranged on the middle shaft, and the lower laser radar is arranged on the outer shaft.
In the above concave robot head provided with a turntable, the rotating shaft further comprises an inner shaft, which is coaxially inserted into the middle shaft and fixedly connected to the turntable; and a GPS locator is arranged at the top of the inner shaft.
In the above concave robot head provided with a turntable, the turntable is hollow inside; the inner shaft is inserted into the turntable and connected to the inner bottom of the turntable; the middle shaft is inserted into the turntable, and a first belt pulley is arranged at the bottom of the middle shaft; the outer shaft is inserted into the turntable, and a second belt pulley is arranged at the bottom of the outer shaft.
In the above concave robot head provided with a turntable, the auxiliary camera is a dual stereo camera comprising two wide-angle lenses, two telephoto lenses and a plurality of fill-light LED lamps, and the two telephoto lenses are directed obliquely downward.
In the above concave robot head provided with a turntable, the sensor group further comprises a dual fisheye camera having two lenses, one lens facing forward and the other lens facing rearward.
In the above concave robot head provided with a turntable, the embedded processor is connected to the Ethernet by wired or wireless means.
Owing to the adoption of the above technique, the utility model has the following positive effects compared with the prior art:
1. The utility model obtains geometric data within a 360° range around the robot through the laser radar, establishes a three-dimensional map of the robot's surroundings using the geometric data, and obtains specific environmental data in front of the robot through the auxiliary camera; the operator can use the user interface to view the scene from any vantage point according to the three-dimensional map, and can also accurately control the actions of the robot according to the specific environment in front of the robot.
2. By providing a dual fisheye camera, the utility model acquires color data and texture data around the robot, colors the three-dimensional map of the robot's surroundings obtained by modeling, and generates a colored three-dimensional map, making it easier for a remote operator to understand the environment around the robot.
3. By providing a GPS locator, the utility model acquires the geocentric coordinate data of the robot and compares it with the position of the robot in the three-dimensional map; when the position of the robot in the established three-dimensional map deviates and the three-dimensional map is built incorrectly, the three-dimensional map is updated.
Drawings
Fig. 1 is a schematic structural diagram of a concave robot head provided with a turntable according to the present invention;
fig. 2 is a rear view of a concave robot head provided with a turntable according to the present invention;
fig. 3 is a schematic view of a connection mode between a turntable and a rotation axis of a concave robot head provided with the turntable according to the present invention.
In the drawings:
1. a head body; 11. a base; 12. a neck connector; 13. a support; 14. a turntable; 15. a rotating shaft; 151. an inner shaft; 152. a middle shaft; 153. an outer shaft; 154. a first pulley; 155. a second pulley; 16. heat dissipation holes; 3. a sensor group; 31. a laser radar; 32. an auxiliary camera; 321. a wide-angle lens; 322. a telephoto lens; 33. a double fisheye camera; 34. a GPS locator.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments, but the present invention is not limited thereto.
Fig. 1 is a schematic structural diagram of a concave robot head provided with a turntable according to the utility model; Fig. 2 is a rear view of the concave robot head provided with a turntable; Fig. 3 is a schematic view of the connection between the turntable and the rotating shaft of the concave robot head provided with a turntable. Referring to Figs. 1 to 3, a preferred embodiment of the concave robot head provided with a turntable is shown, comprising a head body 1, an embedded processor (not shown in the figures) and a sensor group 3 arranged on the head body 1.
The head body 1 comprises a base 11, a neck connecting piece 12, two supports 13, a turntable 14 and a rotating shaft 15. The neck connecting piece 12 is rotatably connected to the bottom of the base 11, and the head body 1 is connected to the main body of the robot through the neck connecting piece 12, so that the head body 1 can rotate about a vertical axis relative to the main body of the robot. The two supports 13 are respectively arranged on the left and right sides of the top of the base 11; the supports 13 and the base 11 are of an integral structure and together form the basic outline frame of the head body 1. The turntable 14 is disc-shaped and is horizontally arranged between the two supports 13, and both sides of the turntable 14 are respectively fixedly connected to the two supports 13. The rotating shaft 15 is vertically arranged on the turntable 14, and the bottom end of the rotating shaft 15 is rotatably connected to the turntable 14 so that the rotating shaft 15 can rotate laterally through 360°.
The sensor group 3 includes laser radars 31 and an auxiliary camera 32; the laser radars 31 are provided on the rotating shaft 15 and rotate laterally together with the rotating shaft 15. In this embodiment, two laser radars 31 are provided on the rotating shaft 15, one above the other; the rotating shaft 15 can be driven by a drive motor according to control commands, thereby driving the two laser radars 31 to rotate together, so that the two laser radars 31 can acquire complete geometric data within a 360° range around the robot and provide data support for the real-time 3D modeling subsequently required by the control system. The auxiliary camera 32 is provided on the front side of the head body 1; the accuracy of the auxiliary camera 32 is higher than that of the laser radars 31, so the auxiliary camera 32 is used to acquire specific environmental data in front of the head body 1, and by rotating the head body 1 in the horizontal direction relative to the main body of the robot, the auxiliary camera 32 can be pointed in a direction that needs attention.
Also, with the auxiliary camera 32, a visual odometry system may be integrated to provide a pose estimate over time. The system generates a solution based on incremental structure-from-motion estimation, while refining the results using keyframe selection and sparse local bundle adjustment. The auxiliary camera 32 can thus determine the moving path and head-pose changes of the robot, update the position and head pose of the robot in real time in the established three-dimensional map, and update the three-dimensional map in real time as the robot moves.
An embedded processor is provided within the head body 1, and the embedded processor is connected to the laser radars 31 and the auxiliary camera 32. In this embodiment, the embedded processor contains all the processors necessary to perform the head's processing tasks for the laser radars 31 and the auxiliary camera 32, i.e., the embedded processor includes a quad-core Intel i7-3820QM unit, two customized Xilinx Spartan-6 FPGA units and an ARM Cortex-M4 unit.
The above is merely an example of the preferred embodiments of the present invention, and the embodiments and the protection scope of the present invention are not limited thereby.
Further, in a preferred embodiment, the rotating shaft 15 includes a middle shaft 152 and an outer shaft 153. The outer shaft 153 is rotatably connected to the turntable 14. The middle shaft 152 is coaxially inserted into the outer shaft 153, with the top end of the middle shaft 152 extending out of the top end of the outer shaft 153, and the middle shaft 152 is also rotatably connected to the turntable 14. The two laser radars 31 are arranged one above the other; the upper laser radar 31 is arranged on the middle shaft 152 and the lower laser radar 31 is arranged on the outer shaft 153, so that the rotations of the two laser radars 31 are mutually independent.
Further, in a preferred embodiment, the rotating shaft 15 further includes an inner shaft 151, the inner shaft 151 is coaxially inserted into the middle shaft 152, and the inner shaft 151 is fixedly connected to the turntable 14. The top of the inner shaft 151 is provided with a GPS locator 34.
Over time, the position of the robot's voxel model in the three-dimensional map may drift, so that the fusion between newly generated voxels and previous voxels of the three-dimensional map is offset and the three-dimensional map is built incorrectly. Moreover, precise positioning is required for the robot to engage with objects (e.g., pick up an object at a particular location) and to avoid collisions while performing a task.
The GPS locator 34 is used to acquire geocentric coordinate data of the robot and compare the geocentric coordinate data with the position of the robot in the three-dimensional map, and when the position of the robot in the established three-dimensional map is deviated and the three-dimensional map is established incorrectly, the three-dimensional map is updated.
Further, in a preferred embodiment, the turntable 14 is hollow inside, and the inner shaft 151 is inserted into the turntable 14 and connected to the inner bottom of the turntable 14. The middle shaft 152 is inserted into the turntable 14, and a first belt pulley 154 is provided at the bottom of the middle shaft 152. The outer shaft 153 is inserted into the turntable 14, and a second belt pulley 155 is provided at the bottom of the outer shaft 153. Two drive motors drive the middle shaft 152 and the outer shaft 153 to rotate through the first belt pulley 154 and the second belt pulley 155 respectively, thereby driving the two laser radars 31 to rotate respectively, so that the two laser radars 31 can rotate independently.
Further, in a preferred embodiment, the auxiliary camera 32 is a dual stereo camera including two wide-angle lenses 321, two telephoto lenses 322 and a plurality of fill-light LED lamps 323, and the two telephoto lenses are directed obliquely downward. Through the combination of the wide-angle lenses 321 and the telephoto lenses 322, and by means of optical zooming, the auxiliary camera 32 achieves a better zooming experience: the wide-angle lens 321, with its wide angle of view, can "see" a wide range but cannot "see" distant objects clearly, while the telephoto lens 322, with its narrow angle of view, "sees" a smaller range but "sees" farther and more clearly. The wide-angle lens 321 and the telephoto lens 322 are combined and matched, and relatively smooth zooming can be achieved through lens switching and a fusion algorithm during shooting. The high-pixel telephoto lens 322 ensures that the image information lost by the wide-angle lens 321 when zooming is far less than with the pseudo zoom of a single camera, greatly improving the zooming performance of the auxiliary camera 32.
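The lens-switching zoom described above can be illustrated with a simplified, hypothetical selection rule (the field-of-view values and the crossover criterion are assumptions; the actual fusion algorithm is not disclosed in the utility model):

```python
# Hypothetical lens-selection sketch for the wide/tele pair; the field-of-view values
# and the crossover zoom factor are assumptions, not taken from the utility model.

WIDE_FOV_DEG = 80.0    # assumed horizontal field of view of the wide-angle lens
TELE_FOV_DEG = 30.0    # assumed horizontal field of view of the telephoto lens
CROSSOVER = WIDE_FOV_DEG / TELE_FOV_DEG   # zoom factor at which the telephoto lens takes over

def select_lens(zoom_factor: float) -> str:
    """Pick which physical lens supplies the frame for a requested zoom factor.

    Below the crossover the wide lens is digitally cropped; at or above it the
    telephoto lens is used, preserving detail that a pure digital zoom would lose.
    """
    return "wide (digital crop)" if zoom_factor < CROSSOVER else "telephoto"

for z in (1.0, 2.0, CROSSOVER, 4.0):
    print(f"{z:.2f}x -> {select_lens(z)}")
```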
In the present embodiment, the two wide-angle lenses 321 are both disposed on the front side of the base 11 and located on the same level. Both telephoto lenses 322 are also disposed on the front side of the base 11 and located on the same level. The two telephoto lenses 322 are disposed between the two wide-angle lenses 321.
The two wide-angle lenses 321 are connected to one of the customized Xilinx Spartan-6 FPGA units, and the two telephoto lenses 322 are connected to the other customized Xilinx Spartan-6 FPGA unit. The two customized Xilinx Spartan-6 FPGA units are connected to the quad-core Intel i7-3820QM unit through a customized Ethernet PCIe adapter. Both laser radars 31 are connected to the ARM Cortex-M4 unit, and the ARM Cortex-M4 unit is connected to the quad-core Intel i7-3820QM unit through an 8-port managed gigabit Ethernet switch.
The surface of the base 11 on which the telephoto lenses 322 are provided is slightly inclined downward, so that the two telephoto lenses 322 are directed obliquely downward. When the robot grasps an object or carries out other work, the motion of the robot's manipulator can be observed through the two telephoto lenses 322, preventing the manipulator from colliding during work and improving the safety and stability of the manipulator's operation.
The front side of the head body 1 is provided with four fill-light LEDs 323. Two of the fill-light LEDs 323 are respectively located below the two wide-angle lenses 321, and the other two are located between the two telephoto lenses 322. In dark environments, the fill-light LEDs 323 emit light and provide illumination for the wide-angle lenses 321 and the telephoto lenses 322, so that their images are clearer.
Heat dissipation holes 16 are formed below the four fill-light LEDs 323; the heat dissipation holes 16 dissipate the heat generated by operation inside the head body 1.
Further, in a preferred embodiment, the sensor group 3 further includes a dual fisheye camera 33. The dual fisheye camera 33 has two lenses, both of which protrude outward, with one lens disposed on the front side of the head body 1 and the other lens disposed on the rear side of the head body 1.
A fisheye lens is a lens with a very short focal length and an angle of view close to or equal to 180°, whose front element bulges parabolically toward the front of the lens. By providing the dual fisheye camera 33 with one lens on each of the front and rear sides of the head body 1, the dual fisheye camera 33 can photograph all directions over a full 360° range.
The dual fisheye camera 33 is used to acquire color data and texture data around the robot, and a fusion algorithm colors the three-dimensional map of the robot's surroundings obtained by modeling, so that an operator remotely observing the robot's surroundings through the user interface can more easily understand the environment around the robot.
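As a hedged sketch of how such a fusion step might paint color onto the modeled map, the following assumes an equidistant fisheye projection model and a voxel center expressed in the camera frame (all parameter values and function names are illustrative; the utility model does not specify the fusion algorithm):

```python
import math

# Sketch: sample a color for a voxel centre from a fisheye image using the
# equidistant projection model r = f * theta. All parameters are assumptions.

def project_to_fisheye(point_cam, f_px, cx, cy):
    """Project a 3D point (camera frame, z forward) into fisheye pixel coordinates."""
    x, y, z = point_cam
    theta = math.atan2(math.hypot(x, y), z)      # angle from the optical axis
    phi = math.atan2(y, x)                       # azimuth around the optical axis
    r = f_px * theta                             # equidistant fisheye mapping
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

def color_voxel(voxel_center_cam, image, f_px, cx, cy):
    """Return the image color under the voxel's projection, or None if it falls outside."""
    u, v = project_to_fisheye(voxel_center_cam, f_px, cx, cy)
    h, w = len(image), len(image[0])
    if 0 <= int(v) < h and 0 <= int(u) < w:
        return image[int(v)][int(u)]
    return None

# Usage with a dummy 4x4 gray image and a voxel 2 m in front of the camera:
dummy = [[(90, 90, 90)] * 4 for _ in range(4)]
print(color_voxel((0.0, 0.0, 2.0), dummy, f_px=1.0, cx=2.0, cy=2.0))  # (90, 90, 90)
```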
Further, in a preferred embodiment, the sensor group 3 further comprises a recording microphone (not shown) for collecting sound signals in the environment. The recording microphone may be disposed at any position on the head body 1 and is not limited thereto. To prevent the recording microphone from being damaged, a flexible recording microphone may be selected.
Further, in a preferred embodiment, the embedded processor is connected to the ethernet network in a wired or wireless manner, so that an operator can observe the posture and position of the robot and control the robot through the user interface at a remote end.
Further, the concave robot head provided with a turntable of the utility model samples the environment; the environment sampling method adopted includes:
geometric data in a 360 ° range around the robot is acquired by two laser radars 31, and modeling is performed based on the geometric data to construct a three-dimensional map around the robot.
The specific environmental data in front of the robot is acquired by the auxiliary camera 32. The operator can use the user interface to view scenes from any vantage point according to the three-dimensional map, and can also accurately control the actions of the robot according to the specific environment in front of the robot.
In this embodiment, a three-dimensional map of the robot's surroundings is constructed using a set of voxel grids. These grids contain sets of 3D voxels, each voxel storing occupancy and color marker information. The grids are created at different ranges and resolutions, as required by individual tasks, to balance high-resolution world modeling against bandwidth and computational constraints. The user can view the robot's surroundings within a coarse grid (0.5 m resolution) extending 30 m from the sensors to assess the situation. A high-resolution model captures the local environment around the robot (0.05 m resolution). An on-demand region of interest, typically placed by the user at a particular location, provides information on the order of 1 cm and is used when planning how to engage with objects in the environment. With the plug-in panel available on the OCU, the robot operator can actively modify the settings of each voxel grid, determine which grids are displayed at a given time, and view the scene from any vantage point using the 3D user interface.
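A minimal sketch of a voxel grid that stores occupancy and a color marker per cell at a chosen resolution is given below (the class and field names are illustrative; the utility model does not give an implementation):

```python
# Illustrative multi-resolution voxel grid: each grid hashes world coordinates to a
# cell index at its own resolution and stores occupancy plus an optional RGB marker.

class VoxelGrid:
    def __init__(self, resolution_m: float):
        self.resolution = resolution_m
        self.cells = {}   # (i, j, k) -> {"occupied": bool, "color": tuple or None}

    def _index(self, x, y, z):
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def mark(self, x, y, z, occupied=True, color=None):
        self.cells[self._index(x, y, z)] = {"occupied": occupied, "color": color}

    def is_occupied(self, x, y, z) -> bool:
        cell = self.cells.get(self._index(x, y, z))
        return bool(cell and cell["occupied"])

# Coarse situational-awareness grid, finer local grid and a 1 cm region of interest,
# mirroring the resolutions mentioned above.
coarse = VoxelGrid(0.5)
local = VoxelGrid(0.05)
roi = VoxelGrid(0.01)
coarse.mark(12.3, -4.1, 0.8, color=(128, 96, 64))
print(coarse.is_occupied(12.3, -4.1, 0.8))   # True
```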
In addition to providing situational awareness that enables an operator to perceive the environment, the voxel models are also used when performing motion planning. Robot motions are collision-tested against the voxel grid representation, ensuring that the motions generated by the planning routine are collision-free and do not attempt to move a robot limb through an obstacle.
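A hedged sketch of how a planned motion could be collision-checked against such a grid follows, reusing the VoxelGrid sketch above (sampling the limb's swept path into points is an assumption; the planner itself is not disclosed):

```python
# Sketch: reject a planned motion if any sampled point along the limb's swept path
# lands in an occupied voxel. Reuses the VoxelGrid instance `coarse` defined above.

def motion_is_collision_free(grid, swept_points) -> bool:
    """swept_points: iterable of (x, y, z) samples covering the limb along the motion."""
    return not any(grid.is_occupied(x, y, z) for x, y, z in swept_points)

# Example: two waypoints of a hypothetical arm trajectory.
path_samples = [(12.3, -4.1, 0.8), (12.8, -4.0, 0.9)]
print(motion_is_collision_free(coarse, path_samples))   # False: the first sample is occupied
```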
Also, with the auxiliary camera 32, a visual odometry system may be integrated to provide a pose estimate over time. The system generates a solution based on incremental structure-from-motion estimation, while refining the results using keyframe selection and sparse local bundle adjustment. The auxiliary camera 32 can thus determine the moving path and head-pose changes of the robot, update the position and head pose of the robot in real time in the established three-dimensional map, and update the three-dimensional map in real time as the robot moves.
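As a hedged illustration of accumulating incremental pose estimates from such a visual odometry system (the planar pose composition below is a simplification; keyframe selection and bundle adjustment are omitted and all values are illustrative):

```python
import math

# Simplified 2D pose accumulation: compose incremental (dx, dy, dtheta) estimates from
# visual odometry into a running head pose. A full system would additionally refine
# keyframes with sparse local bundle adjustment, which this sketch omits.

def compose(pose, increment):
    x, y, theta = pose
    dx, dy, dtheta = increment
    # Rotate the body-frame increment into the world frame, then accumulate it.
    xw = x + dx * math.cos(theta) - dy * math.sin(theta)
    yw = y + dx * math.sin(theta) + dy * math.cos(theta)
    return (xw, yw, theta + dtheta)

pose = (0.0, 0.0, 0.0)
for inc in [(0.10, 0.0, 0.0), (0.10, 0.0, math.radians(5)), (0.10, 0.02, 0.0)]:
    pose = compose(pose, inc)
print(pose)   # accumulated position and heading after three increments
```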
Further, in a preferred embodiment, a dual fisheye camera 33 is provided on the head body 1. Color data and texture data around the robot are acquired by the dual fisheye camera 33, and modeling is performed based on the color data, texture data and geometric data to generate a colored three-dimensional map of the robot's surroundings, so that the operator can understand the environment around the robot more easily.
Further, in a preferred embodiment, a GPS locator 34 is provided on the head main body 1. The GPS locator 34 is used to acquire the geocentric coordinates of the robot.
Over time, the position of the robot's voxel model in the three-dimensional map may drift, so that the fusion between newly generated voxels and previous voxels of the three-dimensional map is offset and the three-dimensional map is built incorrectly. Moreover, precise positioning is required for the robot to engage with objects (e.g., pick up an object at a particular location) and to avoid collisions while performing a task.
Therefore, whether the position of the robot's voxel model in the three-dimensional map has deviated can be determined by comparing the geocentric coordinates of the robot with the position of the robot's voxel model in the three-dimensional map in real time. If the position of the voxel model has deviated, the model is updated using voxel increments to ensure the high precision of the three-dimensional map.
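A minimal sketch of this drift check: compare the GPS-derived position (assumed already converted to the map frame) against the voxel model's position in the three-dimensional map and flag an update when they diverge beyond a threshold (the threshold value and function name are assumptions, not taken from the utility model):

```python
import math

# Hypothetical drift check: the GPS geocentric fix is assumed to have already been
# converted into the three-dimensional map's frame; the 0.25 m threshold is illustrative.

def map_needs_update(gps_position_map, model_position_map, threshold_m=0.25) -> bool:
    """Return True if the voxel model's position has drifted from the GPS-derived one."""
    drift = math.dist(gps_position_map, model_position_map)
    return drift > threshold_m

print(map_needs_update((10.00, 5.00, 0.00), (10.40, 5.10, 0.00)))  # True: ~0.41 m of drift
```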
The above is only a preferred embodiment of the present invention, and not intended to limit the scope of the invention, and it should be appreciated by those skilled in the art that various equivalent substitutions and obvious changes made in the specification and drawings should be included within the scope of the present invention.

Claims (7)

1. A concave robot head provided with a turntable is characterized by comprising a head body, an embedded processor and a sensor group arranged on the head body;
the head body comprises a base, a neck connecting piece, two supports, a turntable and a rotating shaft; the neck connecting piece is rotatably connected to the bottom of the base; the two supports are respectively arranged on the left side and the right side of the top of the base; the turntable is arranged between the two supports, and two sides of the turntable are respectively connected to the two supports; the rotating shaft is vertically arranged on the turntable, and the bottom of the rotating shaft is rotatably connected with the turntable;
the sensor group comprises a laser radar and an auxiliary camera, the laser radar is arranged on the rotating shaft, and the auxiliary camera is arranged on the front side of the head body;
the embedded processor is disposed within the base and is connected to the lidar and the auxiliary camera.
2. The concave robot head provided with a turntable according to claim 1, wherein the rotating shaft comprises a middle shaft and an outer shaft; the outer shaft is rotatably connected to the turntable; the middle shaft is coaxially inserted into the outer shaft, the top end of the middle shaft extends out of the top end of the outer shaft, and the middle shaft is also rotatably connected to the turntable;
the two laser radars are arranged one above the other, the upper laser radar is arranged on the middle shaft, and the lower laser radar is arranged on the outer shaft.
3. The concave robot head provided with a turntable according to claim 2, wherein the rotating shaft further comprises an inner shaft, which is coaxially inserted into the middle shaft and fixedly connected to the turntable; and a GPS locator is arranged at the top of the inner shaft.
4. The concave robot head provided with a turntable according to claim 3, wherein the turntable is hollow inside; the inner shaft is inserted into the turntable and connected to the inner bottom of the turntable; the middle shaft is inserted into the turntable, and a first belt pulley is arranged at the bottom of the middle shaft; the outer shaft is inserted into the turntable, and a second belt pulley is arranged at the bottom of the outer shaft.
5. The concave robot head provided with a turntable according to claim 1, wherein the auxiliary camera is a dual stereo camera including two wide-angle lenses, two telephoto lenses and a plurality of fill-light LED lamps, and the two telephoto lenses are directed obliquely downward.
6. The concave robot head provided with a turntable according to claim 1, wherein the sensor group further comprises a dual fisheye camera having two lenses, one lens facing forward and the other lens facing rearward.
7. The concave robot head provided with a turntable according to claim 1, wherein the embedded processor is connected to the Ethernet by wired or wireless means.
CN202020146582.5U 2020-01-22 2020-01-22 Concave robot head with rotary disc Active CN211517547U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202020146582.5U CN211517547U (en) 2020-01-22 2020-01-22 Concave robot head with rotary disc

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202020146582.5U CN211517547U (en) 2020-01-22 2020-01-22 Concave robot head with rotary disc

Publications (1)

Publication Number Publication Date
CN211517547U true CN211517547U (en) 2020-09-18

Family

ID=72440403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202020146582.5U Active CN211517547U (en) 2020-01-22 2020-01-22 Concave robot head with rotary disc

Country Status (1)

Country Link
CN (1) CN211517547U (en)


Legal Events

Date Code Title Description
GR01 Patent grant