CN112882480A - System and method for laser and vision fusion SLAM (simultaneous localization and mapping) for crowd environments - Google Patents

System and method for laser and vision fusion SLAM (simultaneous localization and mapping) for crowd environments

Info

Publication number
CN112882480A
CN112882480A (application CN202110308864.XA; granted as CN112882480B)
Authority
CN
China
Prior art keywords
laser
vision
environment
human
sensing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110308864.XA
Other languages
Chinese (zh)
Other versions
CN112882480B (en)
Inventor
李林
曾丽娜
杨云帆
巩曰光
李再金
羊大立
杨红
李志波
谢琼涛
彭鸿雁
曲轶
刘国军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Normal University
Original Assignee
Hainan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Normal University
Priority to CN202110308864.XA
Publication of CN112882480A
Application granted
Publication of CN112882480B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a system and method for laser and vision fusion SLAM (simultaneous localization and mapping) for crowd environments. The system comprises a human-computer interaction system, a human-computer interaction touch screen, a first sensing unit, a master control system, a second sensing unit, a function fusion machine body, a laser and vision fusion SLAM navigation system, a mechanical motion system and a master control machine body. The human-computer interaction system realizes human-computer interaction; the human-computer interaction touch screen realizes information query and instruction issuing; the first sensing unit performs environment detection over the main forward viewing angle of the machine; the second sensing unit senses obstacles and realizes distance measurement and pose estimation for obstacles around the machine; the master control system realizes instruction control; the laser and vision fusion SLAM navigation system plans the map and path; and the mechanical motion system controls the machine to walk according to the corresponding instructions. The invention improves the environment-information acquisition capability of intelligent service machines and their safety in complex crowd environments.

Description

System and method for laser and vision fusion SLAM (simultaneous localization and mapping) for crowd environments
Technical Field
The invention relates to the technical field of service-oriented intelligent machines, in particular to a fusion scheme for laser and vision multi-sensor SLAM, and more particularly to the design of the multi-sensor coverage field of view.
Background
With the continuous development of robot technology, more and more intelligent machines enter people's lives and come into close contact with people. Service machines are increasingly used in complex crowd environments such as shopping malls, hospitals, high-speed rail stations and airports, where they guide pedestrians, deliver goods, and broadcast warnings, prompts and announcements.
SLAM based on lidar and SLAM based on vision can both be applied to localization for autonomous navigation in artificial-intelligence fields such as intelligent machines and unmanned aerial vehicles. Vision and laser SLAM schemes can realize information acquisition and map construction within an environment. The common environment-sensing elements of intelligent machines using SLAM technology are lidar and vision systems. Lidar is characterized by strong detection of the distance, angle, depth and similar information of objects in the environment; by actively emitting laser light into the environment for a primary survey, it obtains the relative positions of the machine and the objects around it. A vision system, such as a monocular or binocular camera, passively collects the environment and has a strong capacity for acquiring planar detail and image-and-text color information.
However, apart from logistics machines, the robots currently applied in complex crowd environments are mostly fixed upright units; mobile machines are rarely used among crowds, their ability to sense complex building environments and moving crowds is limited, and each machine performs only a single function.
Therefore, how to provide a system and method for laser and vision fusion SLAM for crowd environments is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the invention provides a system and method for laser and vision fusion SLAM for crowd environments, which improve the environment-information acquisition capability of intelligent service machines and their safety in complex crowd environments.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a system for laser and vision fusion SLAM for a crowd environment, comprising: the system comprises a human-computer interaction system, a human-computer interaction touch screen, a first sensing unit, a master control system, a second sensing unit, a function fusion machine body, a laser and vision fusion SLAM navigation system, a mechanical motion system and a master control machine body, wherein the human-computer interaction system, the human-computer interaction touch screen, the first sensing unit, the second sensing unit, the master control system, the laser and vision fusion SLAM navigation system and the mechanical motion system are all arranged on the master control machine body, and the function fusion machine body is connected with the master control machine body;
the human-computer interaction system is used for realizing human-computer interaction;
the man-machine interaction touch screen is used for realizing information query and instruction issuing;
the first sensing unit is used for environment detection over the main forward viewing angle of the machine;
the second sensing unit is used for sensing the obstacles and realizing distance measurement and pose estimation of the obstacles around the machine;
the master control system is used for realizing instruction control;
the laser and vision integrated SLAM navigation system is used for planning a map and a path;
the mechanical motion system is used for controlling the machine to walk according to corresponding instructions;
the function integration machine body is used for carrying required goods and realizing material transportation.
Preferably, the first sensing unit is arranged at the upper front of the main control machine body and comprises an eye-safe lidar and a binocular depth vision assembly, the eye-safe lidar being arranged directly above the binocular depth vision assembly.
Preferably, the second sensing unit includes 2 ranging lidars, and the 2 ranging lidars are disposed at diagonal positions at the rear end of the main control machine body.
Preferably, the binocular depth vision assembly includes a depth-vision-system left imager, a depth-vision-system right imager and an RGB module, the left imager and the right imager being symmetrically arranged on the two sides of the RGB module.
Preferably, the human-computer interaction system comprises a voice recognition unit, a voice playing unit and a WIFI communication unit;
the voice recognition unit is used for performing voice recognition;
the voice playing unit is used for realizing voice playing;
and the WIFI communication unit is used for realizing communication with the monitoring platform.
Preferably, the working wavelength of the eye-safe lidar is 1550 nm, and the working wavelength of the ranging lidar is 905 nm.
Preferably, the system further comprises an information display screen for realizing information display.
Preferably, the function fusion machine body is a box body for placing articles or an ultraviolet sterilization machine body.
Preferably, the main control machine body is an L-shaped main control machine body.
A method of laser and vision fusion SLAM for a crowd environment, comprising:
step 1: setting a walking instruction and a destination through a human-computer interaction touch screen or the human-computer interaction system;
Step 2: the master control system judges the environment complexity from the environment information over the machine's main forward viewing angle detected by the first sensing unit. If the environment is not complex, the laser and vision fusion SLAM navigation system converts the depth image acquired by the binocular depth vision assembly into a point cloud, projects it onto the laser point cloud detected by the eye-safe lidar, and, through information comparison and matching, converts the result into a grid map for path planning;
if the environment is complex, the master control system starts the second sensing unit: the ranging lidars continuously scan within their viewing-angle ranges, detect the distances between the machine and the obstacles around its rear end, and judge whether these lie within a safe distance; if they do, the machine operates normally, and if not, the mechanical motion system brakes, and pose adjustment and path re-planning are performed through the laser and vision fusion SLAM navigation system;
Step 3: the master control system controls the mechanical motion system to advance to the destination along the actually planned path.
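The safe-distance branch of step 2 reduces to a simple watchdog over the ranging lidars' returns. The following is a minimal sketch in Python; the 0.5 m threshold and the function name react_to_rear_scan are illustrative assumptions, since the patent does not fix a numeric safe distance.

```python
def react_to_rear_scan(ranges_m: list[float],
                       safe_distance_m: float = 0.5) -> str:
    """Watchdog sketched from step 2's complex-environment branch: the
    ranging lidars continuously report distances to obstacles around the
    rear of the machine; any return inside the safe distance triggers
    braking, pose adjustment and path re-planning via the laser and
    vision fusion SLAM navigation system."""
    if min(ranges_m) < safe_distance_m:
        return "brake, adjust pose, re-plan path"
    return "continue on planned path"

print(react_to_rear_scan([2.1, 1.8, 0.9]))  # all clear -> keep going
print(react_to_rear_scan([2.1, 0.3, 0.9]))  # too close -> brake and re-plan
```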
Compared with the prior art, the system and method for laser and vision fusion SLAM for crowd environments disclosed above can acquire environment information efficiently and extensively; on the premise of realizing autonomous localization, mapping and navigation, they achieve multi-functional machine fusion in cooperation with the human-computer interaction system and the like, and improve safety in complex crowd environments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system structure of a laser and vision fusion SLAM for a crowd environment.
Fig. 2 is a series of explanatory views of the visual range of the first sensing unit, wherein fig. 2(a) is a structural front view of the first sensing unit, fig. 2(b) is a top view, and fig. 2(c) is a side view.
Fig. 3 is a series of illustrations of the visual range of the second sensing unit, wherein fig. 3(a) is a front view of the second sensing unit, fig. 3(b) is a top view of the horizontal scanning ranges of both ranging lidars, fig. 3(c) is a detail top view of a single ranging lidar's viewing angle, and fig. 3(d) is a side view.
Fig. 4 is a schematic connection diagram of the system of laser and vision fusion SLAM for a crowd environment.
Fig. 5 is a working flowchart of the method of laser and vision fusion SLAM for a crowd environment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a system for laser and vision fusion SLAM for a crowd environment, comprising a human-computer interaction system 1, a human-computer interaction touch screen 2, a first sensing unit 3, a master control system 4, a second sensing unit 5, a function fusion machine body 6, a laser and vision fusion SLAM navigation system 7, a mechanical motion system 8, a master control machine body 9 and an information display screen 10. The human-computer interaction system 1, the human-computer interaction touch screen 2, the first sensing unit 3, the second sensing unit 5, the laser and vision fusion SLAM navigation system 7, the mechanical motion system 8 and the information display screen 10 are all connected with the master control system 4.
The main control machine body 9 is an L-shaped main control machine body. The first sensing unit 3 is arranged at the upper front of the main control machine body and handles environment detection over the machine's main forward viewing angle during movement and navigation. The human-computer interaction touch screen 2 is arranged on the main control machine body above the first sensing unit 3 and realizes information query and instruction issuing. The function fusion machine body 6 can be mounted on the open rear part of the L-shaped main control machine body to realize functions such as material transport; the function fusion machine body can be exchanged according to the actual application environment and needs: for example, a box body for holding articles is fitted when articles need to be transported, and an ultraviolet sterilization machine body is fitted when the environment needs ultraviolet disinfection.
As shown in fig. 2(a), the first sensing unit includes an eye-safe lidar, consisting of an eye-safe lidar transmitting unit 301 and an eye-safe lidar receiving unit 302, and a binocular depth vision assembly, consisting of a depth-vision-system left imager 303, a depth-vision-system right imager 304 and an RGB module 305. More specifically, the working wavelength of the eye-safe lidar is 1550 nm. The eye-safe laser band spans 1.4-2.1 μm, and 1.5 μm falls right on an atmospheric transmission window, so transmission loss is minimal. Replacing a common lidar with an eye-safe one guarantees safety in crowd environments and reduces the harm of the laser to the human body.
Fig. 2(b) shows, from a top-down angle, the horizontal environment-detection viewing angle of the first sensing unit. As shown, the angle scanned by the eye-safe lidar is 71°. It should be explained that this horizontal scanning angle results from the structural constraints of the machine in fig. 2; a single lidar without such constraints can achieve a 0-360° horizontal scanning range, so the machine structure would need to be redesigned if the lidar's horizontal scanning range were to be enlarged. The lidar's scanning range only needs to cover the horizontal viewing angle of the binocular depth vision assembly, which acquires a 60° horizontal environment viewing angle. Because the eye-safe lidar's scanning angle fully covers the viewing angle of the depth vision assembly, accurate depth information can be acquired throughout that viewing-angle range.
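As a quick illustration of this coverage relation (not part of the disclosure), the sketch below reports which forward sensors see a point at a given azimuth, assuming both sectors are centred on the machine's forward axis; the function name sensors_seeing is hypothetical.

```python
def sensors_seeing(azimuth_deg: float,
                   lidar_fov_deg: float = 71.0,
                   camera_fov_deg: float = 60.0) -> list[str]:
    """Return which forward sensors cover a point at the given azimuth
    (0 deg = straight ahead). With the angles stated above, the eye-safe
    lidar's 71-degree sector fully contains the depth camera's 60-degree
    field of view, leaving a lidar-only fringe between 30 and 35.5 deg."""
    seen = []
    if abs(azimuth_deg) <= lidar_fov_deg / 2:
        seen.append("eye-safe lidar")
    if abs(azimuth_deg) <= camera_fov_deg / 2:
        seen.append("binocular depth camera")
    return seen

print(sensors_seeing(25.0))  # inside both sectors -> laser-validated depth
print(sensors_seeing(33.0))  # lidar-only fringe: no camera depth here
```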
Fig. 2(c) is a side view showing the vertical extent the first sensing unit can acquire in the environment: the eye-safe lidar performs two-dimensional horizontal scanning and does not scan in height, while the vertical viewing angle of the depth vision assembly is 86°.
As shown in fig. 3(a), the second sensing unit 5 includes 2 ranging lidars 51. The two identical ranging lidars 51 are mounted symmetrically at the two opposite rear corners of the machine; the low-height diagonal design ensures the laser does not directly reach human eyes, reducing harm to the human body, and scanning over both the horizontal and vertical viewing-angle ranges can be realized. More specifically, the ranging lidar 51 operates at 905 nm, which may float within ±10 nm.
Fig. 3(b) shows the horizontal scanning viewing-angle ranges of the two ranging lidars of the second sensing unit 5. The shaded part is the small blind zone between the lidars' scanning viewing angles: the apex angle of the blind zone is 58°, and its base is the distance from the machine centre to a lidar centre. The blind zone does not affect environment-information detection, because the horizontal viewing angles of the two ranging lidars overlap, leaving no detection blind zone beyond a fixed distance from the machine. The detail view in fig. 3(c) shows the viewing angle of a single ranging lidar: one ranging lidar's horizontal scanning viewing angle is 148°.
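Treating the shaded blind spot as the isosceles triangle described above, its depth follows from the 58° apex angle by elementary trigonometry. The sketch below is illustrative only; the 0.3 m base is an assumed value, since the patent gives the base as a machine dimension rather than a number.

```python
import math

def blind_zone_depth(base_m: float, apex_deg: float = 58.0) -> float:
    """Depth of the triangular blind zone between the two rear ranging
    lidars, modelled as an isosceles triangle with the 58-degree apex
    angle given in the description; beyond this depth the two 148-degree
    scanning sectors overlap and the blind zone closes."""
    half_apex = math.radians(apex_deg / 2.0)
    return (base_m / 2.0) / math.tan(half_apex)

# With an assumed 0.3 m base, the blind zone closes about 0.27 m out.
print(f"blind zone depth: {blind_zone_depth(0.3):.2f} m")
```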
Fig. 3(d) shows, in side view, the vertical scanning angle of the ranging lidar 51, which is 104°. It should be added that this vertical scanning angle can be achieved by mechanical rotation, and the range of the mechanical rotation determines the lidar's scanning range.
The first sensing unit 3 and the second sensing unit 5 of the present invention serve different functional purposes. During movement and navigation, the information obtained by the environment scanning of the first sensing unit 3 is, after conversion processing, transmitted to the laser and vision fusion SLAM navigation system 7 for map analysis and path planning, which in turn steers the machine to its destination. The second sensing unit 5 need not run in real time and does not run in open, sparsely crowded environments. When the environment information extracted by the first sensing unit 3 indicates surging crowds or a narrow, complex building structure, the second sensing unit 5 starts operating: it scans and ranges in real time within the rear field of view, senses obstacles and warns the mechanical motion system 8 to brake, while also ranging against the surrounding walls and estimating the machine's pose, so that collisions and scrapes are avoided.
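The patent states this activation rule qualitatively but gives no numeric criteria. The following Python sketch makes the supervisory rule concrete under assumed thresholds; the field names and all limit values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentEstimate:
    crowd_density: float    # people per square metre, from the first unit
    passage_width_m: float  # narrowest passage detected ahead
    obstacle_count: int     # building obstacles detected

def second_unit_needed(env: EnvironmentEstimate,
                       density_limit: float = 0.5,
                       width_limit_m: float = 1.5,
                       obstacle_limit: int = 5) -> bool:
    """Supervisory rule sketched from the description: the rear ranging
    lidars stay off in open, sparsely crowded environments and switch on
    when the first sensing unit reports surging crowds or a narrow,
    complex building structure."""
    return (env.crowd_density > density_limit
            or env.passage_width_m < width_limit_m
            or env.obstacle_count > obstacle_limit)

print(second_unit_needed(EnvironmentEstimate(0.1, 4.0, 1)))  # open hall: False
print(second_unit_needed(EnvironmentEstimate(1.2, 1.0, 8)))  # dense corridor: True
```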
In this embodiment, a method for laser and vision fusion SLAM for a crowd environment, as shown in fig. 5, includes:
step 1: setting a walking instruction and a destination through a human-computer interaction touch screen or the human-computer interaction system;
Step 2: the first sensing unit detects the environment information over the machine's main forward viewing angle, and the master control system makes a preliminary judgment of the environment complexity from it, assessing whether the crowd is dense, whether the passages are complex, the number of building obstacles, and similar environment information. If the environment is not complex, the laser and vision fusion SLAM navigation system converts the depth image acquired by the binocular depth vision assembly into a point cloud, projects it onto the laser point cloud detected by the eye-safe lidar, and, through information comparison and matching, converts the result into a grid map for path planning;
if the environment is complex, the master control system starts the second sensing unit: the ranging lidars continuously scan within their viewing-angle ranges and detect the distances from the machine to the surrounding rear-end objects and passages; when a distance falls below the safe distance, an automatic early warning is raised inside the machine or emergency braking is performed through the mechanical motion device, after which machine pose adjustment or path re-planning is carried out;
Step 3: the master control system controls the mechanical motion system to advance to the destination along the actually planned path.
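The patent specifies the pipeline of step 2 (depth image to point cloud, projection onto the laser point cloud, conversion to a grid map) but not its implementation. The following is a minimal sketch assuming a calibrated pinhole camera whose points have already been expressed in the lidar frame; the function names, intrinsics and grid parameters are illustrative assumptions.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (metres) into camera-frame 3-D points
    with a pinhole model; fx, fy, cx, cy come from calibration of the
    binocular depth vision assembly."""
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with valid depth

def to_occupancy_grid(points_xy: np.ndarray, size: int = 200,
                      resolution: float = 0.05) -> np.ndarray:
    """Rasterise fused 2-D points (vision points projected onto the
    lidar's scan plane, plus the lidar returns themselves) into the grid
    map used for path planning: size x size cells of `resolution`
    metres, with the machine at the centre."""
    grid = np.zeros((size, size), dtype=np.uint8)
    idx = np.floor(points_xy / resolution).astype(int) + size // 2
    ok = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
    grid[idx[ok, 1], idx[ok, 0]] = 1  # mark occupied cells
    return grid

# Fusion: drop the height coordinate so the camera points land on the
# lidar's 2-D scan plane, then rasterise both point sets together.
# camera_xy = depth_to_points(depth_img, fx, fy, cx, cy)[:, [0, 2]]
# grid = to_occupancy_grid(np.vstack([camera_xy, lidar_xy]))
```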
The invention has the following advantages:
1) the design of the first sensing unit imitates the human visual mode and extracts environment information efficiently, avoiding the computational burden of extracting large amounts of information; combined with the SLAM back-end algorithm, it achieves an accurate description of the environment and improves the machine's movement accuracy;
2) the cooperative sensing of the first sensing unit and the second sensing unit counters the influence of variables during machine operation and enables obstacles to be avoided;
3) laser sensing and visual sensing complement each other's advantages: depth information is detected even in a non-full-field-of-view environment, the laser resolves the contours of environment objects strongly, and the depth camera captures environment color and planar information strongly; their fusion avoids information gaps and is the key to autonomous localization and navigation.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A system for laser and vision fusion SLAM for a crowd environment, comprising: a human-computer interaction system, a human-computer interaction touch screen, a first sensing unit, a master control system, a second sensing unit, a function fusion machine body, a laser and vision fusion SLAM navigation system, a mechanical motion system and a master control machine body, wherein the human-computer interaction system, the human-computer interaction touch screen, the first sensing unit, the second sensing unit, the master control system, the laser and vision fusion SLAM navigation system and the mechanical motion system are all arranged on the master control machine body, and the function fusion machine body is connected with the master control machine body;
the human-computer interaction system is used for realizing human-computer interaction;
the man-machine interaction touch screen is used for realizing information query and instruction issuing;
the first sensing unit is used for environment detection over the main forward viewing angle of the machine;
the second sensing unit is used for sensing the obstacles and realizing distance measurement and pose estimation of the obstacles around the machine;
the master control system is used for realizing instruction control;
the laser and vision integrated SLAM navigation system is used for planning a map and a path;
the mechanical motion system is used for controlling the machine to walk according to corresponding instructions;
the function integration machine body is used for carrying required goods and realizing material transportation.
2. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the first sensing unit is disposed at the upper front of the main control machine body and comprises an eye-safe lidar and a binocular depth vision assembly, the eye-safe lidar being disposed directly above the binocular depth vision assembly.
3. The system for laser and vision fusion SLAM for a crowd environment of claim 2, wherein the second sensing unit comprises 2 ranging lidars, the 2 ranging lidars being arranged at diagonal positions at the rear end of the main control machine body.
4. The system for laser and vision fusion SLAM for a crowd environment of claim 2, wherein the binocular depth vision assembly comprises a depth-vision-system left imager, a depth-vision-system right imager and an RGB module, the left imager and the right imager being symmetrically disposed on either side of the RGB module.
5. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the human-computer interaction system comprises a voice recognition unit, a voice playing unit and a WIFI communication unit;
the voice recognition unit is used for performing voice recognition;
the voice playing unit is used for realizing voice playing;
and the WIFI communication unit is used for realizing communication with the monitoring platform.
6. The system for laser and vision fusion SLAM for a crowd environment of claim 3, wherein the working wavelength of the eye-safe lidar is 1550 nm and the working wavelength of the ranging lidar is 905 nm.
7. The system for laser and vision fusion SLAM for a crowd environment of claim 1, further comprising an information display screen for information display.
8. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the function fusion machine body is a box body for placing articles or an ultraviolet sterilization machine body.
9. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the master control machine body is an L-shaped master control machine body.
10. A method for laser and vision fusion SLAM for a crowd environment, characterized by comprising:
step 1: setting a walking instruction and a destination through a human-computer interaction touch screen or the human-computer interaction system;
Step 2: the master control system judges the environment complexity from the environment information over the machine's main forward viewing angle detected by the first sensing unit. If the environment is not complex, the laser and vision fusion SLAM navigation system converts the depth image acquired by the binocular depth vision assembly into a point cloud, projects it onto the laser point cloud detected by the eye-safe lidar, and, through information comparison and matching, converts the result into a grid map for path planning;
if the environment is complex, the master control system starts the second sensing unit: the ranging lidars continuously scan within their viewing-angle ranges, detect the distances between the machine and the obstacles around its rear end, and judge whether these lie within a safe distance; if they do, the machine operates normally, and if not, the mechanical motion system brakes, and pose adjustment and path re-planning are performed through the laser and vision fusion SLAM navigation system;
Step 3: the master control system controls the mechanical motion system to advance to the destination along the actually planned path.
CN202110308864.XA 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM Active CN112882480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110308864.XA CN112882480B (en) 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110308864.XA CN112882480B (en) 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM

Publications (2)

Publication Number Publication Date
CN112882480A true CN112882480A (en) 2021-06-01
CN112882480B CN112882480B (en) 2023-07-21

Family

ID=76041980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110308864.XA Active CN112882480B (en) 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM

Country Status (1)

Country Link
CN (1) CN112882480B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106950985A (en) * 2017-03-20 2017-07-14 成都通甲优博科技有限责任公司 A kind of automatic delivery method and device
US20190286145A1 (en) * 2018-03-14 2019-09-19 Omron Adept Technologies, Inc. Method and Apparatus for Dynamic Obstacle Avoidance by Mobile Robots
CN109799831A (en) * 2018-03-19 2019-05-24 徐州艾奇机器人科技有限公司 A kind of quick cruiser system of two-wheel drive type and working method
CN109375629A (en) * 2018-12-05 2019-02-22 苏州博众机器人有限公司 A kind of cruiser and its barrier-avoiding method that navigates
CN111006655A (en) * 2019-10-21 2020-04-14 南京理工大学 Multi-scene autonomous navigation positioning method for airport inspection robot
CN111076726A (en) * 2019-12-31 2020-04-28 深圳供电局有限公司 Vision-assisted obstacle avoidance method and device for inspection robot, equipment and storage medium
CN111338359A (en) * 2020-04-30 2020-06-26 武汉科技大学 Mobile robot path planning method based on distance judgment and angle deflection
CN111949027A (en) * 2020-08-10 2020-11-17 珠海一维弦机器人有限公司 Self-adaptive robot navigation method and device
CN111752285A (en) * 2020-08-18 2020-10-09 广州市优普科技有限公司 Autonomous navigation method and device for quadruped robot, computer equipment and storage medium
CN112318507A (en) * 2020-10-28 2021-02-05 内蒙古工业大学 Robot intelligent control system based on SLAM technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
靳东 (Jin Dong): "Research on laser-vision fusion SLAM and navigation of mobile robots in complex indoor environments" (in Chinese)

Also Published As

Publication number Publication date
CN112882480B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
US11407116B2 (en) Robot and operation method therefor
US11754721B2 (en) Visualization and semantic monitoring using lidar data
CN110147106A (en) Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system
CN107272710B (en) Medical logistics robot system based on visual positioning and control method thereof
US9840003B2 (en) Apparatus and methods for safe navigation of robotic devices
CN110621449B (en) Mobile robot
KR102391771B1 (en) Method for operation unmanned moving vehivle based on binary 3d space map
CN112859873B (en) Semantic laser-based mobile robot multi-stage obstacle avoidance system and method
CN111079607A (en) Automatic driving system with tracking function
WO2019240664A1 (en) Convoying system based on fusion of data from vision sensors and lidar
CN110477808A (en) A kind of robot
KR20200023707A (en) Moving robot
WO2020038155A1 (en) Autonomous movement device, control method and storage medium
CN210931169U (en) Robot
CN112882480A (en) System and method for fusing SLAM (simultaneous localization and mapping) by laser and vision aiming at crowd environment
CN113109821A (en) Mapping method, device and system based on ultrasonic radar and laser radar
Bostelman et al. Towards AGV safety and navigation advancement obstacle detection using a TOF range camera
CN113081525A (en) Intelligent walking aid equipment and control method thereof
CN210514609U (en) Detection apparatus for 3D ToF module
WO2022222644A1 (en) Guide rail-based unmanned mobile device and system, and mobile control apparatus
CN116919247A (en) Welt identification method, device, computer equipment and medium
EP2919150A1 (en) Safety system for vehicles
CN215897823U (en) Structured light module and self-moving equipment
CN110916562A (en) Autonomous mobile device, control method, and storage medium
CN206714898U (en) One kind navigation avoidance wheelchair

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant