CN112882480B - System and method for laser and vision fusion SLAM for crowd environments - Google Patents

System and method for laser and vision fusion SLAM for crowd environments

Info

Publication number
CN112882480B
CN112882480B (application number CN202110308864.XA, published as CN202110308864A)
Authority
CN
China
Prior art keywords
laser
machine
vision
environment
sensing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110308864.XA
Other languages
Chinese (zh)
Other versions
CN112882480A (en)
Inventor
李林
曾丽娜
杨云帆
巩曰光
李再金
羊大立
杨红
李志波
谢琼涛
彭鸿雁
曲轶
刘国军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Normal University
Original Assignee
Hainan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Normal University filed Critical Hainan Normal University
Priority to CN202110308864.XA priority Critical patent/CN112882480B/en
Publication of CN112882480A publication Critical patent/CN112882480A/en
Application granted granted Critical
Publication of CN112882480B publication Critical patent/CN112882480B/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a system and method for laser and vision fusion SLAM for crowd environments. The system comprises a man-machine interaction system, a man-machine interaction touch screen, a first sensing unit, a main control system, a second sensing unit, a functional fusion machine body, a laser and vision fusion SLAM navigation system, a mechanical motion system and a main control machine body. The man-machine interaction system realizes man-machine interaction; the man-machine interaction touch screen realizes information query and instruction issuing; the first sensing unit detects the environment within the machine's main forward viewing angle; the second sensing unit senses obstacles and realizes distance measurement and pose estimation for obstacles around the machine; the main control system realizes instruction control; the laser and vision fusion SLAM navigation system performs map construction and path planning; the mechanical motion system controls the machine to walk according to the corresponding instructions. The invention improves the environment-information acquisition capability of intelligent service machines and their safety in complex crowd environments.

Description

System and method for laser and vision fusion SLAM for crowd environments
Technical Field
The invention relates to the technical field of service-oriented intelligent machines, in particular to a laser and vision multi-sensor SLAM (simultaneous localization and mapping) fusion scheme, and more particularly to a multi-sensor field-of-view coverage design.
Background
With the continuous development of robot technology, more and more intelligent machines are entering people's lives and coming into close contact with people. Service machines are increasingly used in complex crowd environments such as shopping malls, hospitals, high-speed rail stations and airports, where they typically guide pedestrians, deliver goods, and broadcast information alerts, prompts and announcements.
SLAM based on lidar and SLAM based on vision can both be applied to positioning for autonomous navigation in artificial-intelligence fields such as intelligent machines and unmanned aerial vehicles. Either a visual or a laser SLAM scheme can acquire information about an environment and construct a map of it. The environment-sensing elements commonly used by intelligent machines applying SLAM technology are lidar and vision systems. Lidar actively emits laser light into the environment to detect objects; it has a strong capability for measuring the distance, angle, depth and similar information of objects in the environment, and can obtain the relative position between the machine and environmental objects. Vision systems such as monocular/binocular cameras passively capture the environment and have a strong capability for acquiring planar detail and image/text color information.
However, apart from logistics machines, most robots currently deployed in complex crowd environments are fixed upright machines. Mobile machines are rarely used in crowd environments, their capability to sense complex building environments and moving crowds is limited, and their functions are single-purpose.
Therefore, how to provide a system and method for laser and vision fusion SLAM for crowd environments is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a system and method for laser and vision fusion SLAM for crowd environments, which improves the environment-information acquisition capability of intelligent service machines and their safety in complex crowd environments.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a system for laser and vision fusion SLAM for a crowd environment, comprising: the system comprises a man-machine interaction system, a man-machine interaction touch screen, a first sensing unit, a main control system, a second sensing unit, a function fusion machine body, a laser and vision fusion SLAM navigation system, a mechanical motion system and a main control machine body, wherein the man-machine interaction system, the man-machine interaction touch screen, the first sensing unit, the second sensing unit, the main control system, the laser and vision fusion SLAM navigation system and the mechanical motion system are all arranged on the main control machine body, and the function fusion machine body is connected with the main control machine body;
the man-machine interaction system is used for realizing man-machine interaction;
the man-machine interaction touch screen is used for realizing information inquiry and instruction issuing;
the first sensing unit is used for realizing environment detection of a main direction visual angle in front of the machine;
the second sensing unit is used for sensing obstacles and realizing distance measurement and pose estimation of the obstacles around the machine;
the main control system is used for realizing instruction control;
the laser and vision fusion SLAM navigation system is used for map planning and path planning;
the mechanical movement system is used for controlling the machine to walk according to the corresponding instruction;
the functional fusion machine body is used for carrying required goods and realizing material transportation.
Preferably, the first sensing unit is arranged at the upper front of the main control machine body and comprises an eye-safe laser radar and a binocular depth vision assembly, the eye-safe laser radar being arranged directly above the binocular depth vision assembly.
Preferably, the second sensing unit includes two ranging lidars, which are disposed at diagonally opposite positions at the rear end of the main control machine body.
Preferably, the binocular depth vision assembly comprises a depth vision system left imager, a depth vision system right imager and an RGB module, wherein the depth vision system left imager and the depth vision system right imager are symmetrically arranged on two sides of the RGB module.
Preferably, the man-machine interaction system comprises a voice recognition unit, a voice playing unit and a WIFI communication unit;
the voice recognition unit is used for performing voice recognition;
the voice playing unit is used for realizing voice playing;
the WIFI communication unit is used for realizing communication with the monitoring platform.
Preferably, the operating wavelength of the eye-safe laser radar is 1550 nm, and the operating wavelength of the ranging laser radar is 905 nm.
Preferably, the system further comprises an information display screen for realizing information display.
Preferably, the functional fusion machine body is a box body for placing articles or an ultraviolet sterilization machine body.
Preferably, the main control machine body is an L-shaped main control machine body.
A method of laser and vision fusion SLAM for a crowd environment, comprising:
step 1: Setting a walking instruction and a destination through the man-machine interaction touch screen or the man-machine interaction system;
step 2: The main control system judges the complexity of the environment according to the environment information, detected by the first sensing unit, within the machine's main forward viewing angle. If the environment is not complex, the laser and vision fusion SLAM navigation system converts the depth image obtained by the binocular depth vision assembly into a point cloud, projects it onto the laser point cloud detected by the eye-safe laser radar, and converts the result, through information comparison and matching, into an occupancy grid base map for path planning;
if the environment is complex, the main control system starts the second sensing unit. The ranging lidars continuously scan within their viewing-angle range, detect the distance between the machine and surrounding obstacles at the rear end, and judge whether that distance is within the safe distance. If it is, the mechanical motion system operates normally; otherwise the mechanical motion system brakes, and pose adjustment and path re-planning are carried out through the laser and vision fusion SLAM navigation system;
step 3: The main control system controls the mechanical motion system to advance to the destination along the actually planned path.
Compared with the prior art, the system and method for laser and vision fusion SLAM for crowd environments can efficiently acquire environment information over a large range and, on the premise of realizing autonomous positioning, map construction and navigation, realize the fusion of machine functions in cooperation with the man-machine interaction system and other components, improving safety in complex crowd environments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a system architecture of a laser and vision fusion SLAM for crowd environments.
Fig. 2 is a series of views illustrating the visual range of the first sensing unit, in which fig. 2 (a) is a front view of the first sensing unit structure, fig. 2 (b) is a top view of the first sensing unit, and fig. 2 (c) is a side view of the first sensing unit.
Fig. 3 is a series of views illustrating the visual range of the second sensing unit, in which fig. 3 (a) is a front view of the second sensing unit structure, fig. 3 (b) is a top view of the second sensing unit, fig. 3 (c) is a top view of a single ranging lidar of the second sensing unit, and fig. 3 (d) is a side view of the second sensing unit.
Fig. 4 is a schematic diagram of a system principle connection of laser and vision fusion SLAM for crowd environments.
FIG. 5 is a flowchart of a method of laser and vision fusion SLAM for a crowd environment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention discloses a system for laser and vision fusion SLAM for crowd environments, which, as shown in figures 1 and 4, comprises a man-machine interaction system 1, a man-machine interaction touch screen 2, a first sensing unit 3, a main control system 4, a second sensing unit 5, a functional fusion machine body 6, a laser and vision fusion SLAM navigation system 7, a mechanical motion system 8, a main control machine body 9 and an information display screen 10, wherein the man-machine interaction system 1, the man-machine interaction touch screen 2, the first sensing unit 3, the second sensing unit 5, the laser and vision fusion SLAM navigation system 7, the mechanical motion system 8 and the information display screen 10 are all connected with the main control system 4.
The main control machine body 9 is an L-shaped main control machine body. The first sensing unit 3 is arranged at its upper front and handles environment detection within the machine's main forward viewing angle during movement and navigation. The man-machine interaction touch screen 2 is arranged on the main control machine body above the first sensing unit 3 and realizes the functions of information query and instruction issuing. The functional fusion machine body 6 can be mounted on the open rear part of the L-shaped main control machine body to realize functions such as material transport; the functions it carries can be changed according to the actual application environment and needs: for example, a box for holding objects is assembled when goods are transported, and an ultraviolet sterilization machine body is assembled when the environment is to be sterilized with ultraviolet light.
As shown in fig. 2 (a), the first sensing unit includes an eye-safe laser radar, comprising an eye-safe laser radar transmitting unit 301 and an eye-safe laser radar receiving unit 302, and a binocular depth vision assembly, comprising a depth vision system left imager 303, a depth vision system right imager 304 and an RGB module 305. More specifically, the operating wavelength of the eye-safe laser radar is 1550 nm. The eye-safe laser band is 1.4-2.1 μm, and 1.5 μm lies right at an atmospheric transmission window, so loss during laser transmission is minimal; replacing a common laser radar with an eye-safe radar ensures safety in crowd environments and reduces the harm of the laser to human bodies.
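The binocular assembly recovers depth by triangulating the disparity between the left imager 303 and the right imager 304. A minimal sketch of the principle follows (Python); the focal length and baseline values are illustrative assumptions, not parameters disclosed in this embodiment.

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_px: float = 382.0,   # assumed focal length in pixels
                       baseline_m: float = 0.05   # assumed left/right imager baseline
                       ) -> np.ndarray:
    """Recover per-pixel depth Z = f * B / d from a stereo disparity map.

    Pixels with zero or negative disparity have no valid stereo match and
    are returned as NaN so later stages can skip them.
    """
    depth = np.full(disparity.shape, np.nan, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

The depth image produced this way is what step 2 of the method below converts into a point cloud.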
Fig. 2 (b) shows, in top view, the horizontal environment-detection viewing angle of the first sensing unit. As shown in the figure, the scannable angle of the eye-safe laser radar is 71°. It should be noted that this is the horizontal scanning angle achievable under the machine-structure constraints of fig. 2; without such constraints a single laser radar can achieve a horizontal scanning range of 0-360°, so enlarging the lidar's horizontal scanning range requires modifying the machine structure design, subject only to the laser radar's scan still covering the binocular depth vision assembly's horizontal viewing angle. The binocular depth vision assembly can acquire a horizontal environmental viewing angle of 60°; the eye-safe laser radar's scan fully covers the depth vision assembly's viewing angle, so accurate depth information can be obtained throughout that viewing-angle range.
Fig. 2 (c) shows, in side view, the vertical viewing angle the first sensing unit can obtain in the environment. The eye-safe laser radar performs two-dimensional horizontal scanning without height scanning, while the vertical viewing angle of depth vision is 86°.
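As a quick check of the coverage constraint above (the laser radar's 71° horizontal scan must fully contain the binocular assembly's 60° horizontal viewing angle), the sketch below computes the yaw alignment budget between the two sensors. Centring both sectors on the machine's forward axis is our assumption, not something the embodiment specifies.

```python
def sector_covers(outer_fov_deg: float, inner_fov_deg: float,
                  misalignment_deg: float = 0.0) -> bool:
    """True if the sensor with the outer field of view still sees everything
    the sensor with the inner field of view sees, given a relative yaw
    misalignment between their optical axes."""
    margin = (outer_fov_deg - inner_fov_deg) / 2.0
    return abs(misalignment_deg) <= margin

# Embodiment values: a 71 degree lidar scan over a 60 degree camera view
# leaves a +/- 5.5 degree yaw alignment budget between the two sensors.
print(sector_covers(71.0, 60.0))        # True
print(sector_covers(71.0, 60.0, 6.0))   # False: misalignment exceeds budget
```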
As shown in fig. 3 (a), the second sensing unit 5 includes two ranging lidars 51. The two identical ranging lidars 51 are symmetrically mounted at two diagonally opposite corners of the rear end of the machine; the low-height diagonal design ensures that the laser does not shine directly into people's eyes, reducing harm to the human body, and enables scanning across both the horizontal and the vertical viewing-angle ranges. More specifically, the operating wavelength of the ranging lidar 51 is 905 nm, which may float within a range of ±10 nm.
Fig. 3 (b) shows the horizontal scanning viewing-angle range of the two ranging lidars of the second sensing unit 5. The shaded part is the small blind area of the ranging lidars' scanning angle: one vertex angle of the blind area is 58°, and its base is the distance from the machine center to the lidar center. This blind area does not affect the detection of environmental information, because the horizontal viewing angles of the two ranging lidars overlap, so beyond a certain distance there is no detection blind area in front of them. The detail view in fig. 3 (c) shows the viewing angle of a single ranging lidar; the horizontal scanning angle of one ranging lidar is 148°.
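The overlap argument above can be reproduced numerically. The sketch below tests whether a point behind the machine is seen by at least one of two 148° lidars mounted at the rear corners; the mounting coordinates and yaw angles are illustrative assumptions (the patent only fixes the field-of-view and blind-zone angles), chosen so that the uncovered wedge directly behind the machine has exactly the 58° vertex angle described.

```python
import numpy as np

def in_fov(point, sensor_pos, sensor_yaw_deg, fov_deg=148.0):
    """True if a 2-D point lies inside a sensor's horizontal field of view."""
    dx, dy = point[0] - sensor_pos[0], point[1] - sensor_pos[1]
    bearing = np.degrees(np.arctan2(dy, dx)) - sensor_yaw_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return abs(bearing) <= fov_deg / 2.0

# Assumed mounting: lidars at the two rear corners of a 0.6 m x 0.6 m base,
# each yawed 45 degrees outward from straight back.
LIDAR_A = ((+0.3, -0.3), -45.0)
LIDAR_B = ((-0.3, -0.3), -135.0)

def covered(point):
    """A point is detectable if either rear lidar sees it."""
    return in_fov(point, *LIDAR_A) or in_fov(point, *LIDAR_B)

print(covered((0.0, -0.35)))  # False: inside the near-field blind wedge
print(covered((0.0, -2.0)))   # True: farther back the two FOVs overlap
```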
Fig. 3 (d) shows, from a side viewing angle, the scanning angle of the ranging lidar 51 at vertical height, which is 104°. It should be added that the lidar's vertical scanning angle is realized by mechanical rotation, and the range of the mechanical rotation determines the lidar's scanning range; the scanning range given here corresponds to the specific viewing angle of this embodiment, so in practical applications the lidar's scanning range needs to be designed according to the machine's size and structure.
The first sensing unit 3 and the second sensing unit 5 of the invention have different functional uses. During machine movement and navigation, the information obtained by the first sensing unit 3 through environmental scanning is converted and sent to the laser and vision fusion SLAM navigation system 7 for map analysis and path planning, which in turn controls the machine to walk to the destination. The second sensing unit 5 does not operate in real time, and does not operate in open, sparsely crowded environments. When the first sensing unit 3 senses from the extracted environment information that crowds are surging, or that the building structure is narrow and complex, the second sensing unit 5 starts to operate: it scans and ranges in real time within the rear field of view, senses obstacles and gives the mechanical motion system 8 early warning for braking, and realizes ranging and pose estimation with respect to the walls around the machine, avoiding collisions and scratches.
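The coordination between the two units reduces to a small decision rule: keep the rear lidars idle while the forward unit reports a simple environment, and brake when any rear return falls inside the safety margin. A minimal sketch of that rule follows; the 0.5 m threshold is an assumed value, since the patent does not quantify the safe distance.

```python
from enum import Enum, auto

class MotionCommand(Enum):
    PROCEED = auto()
    BRAKE_AND_REPLAN = auto()

SAFE_DISTANCE_M = 0.5  # assumed safety threshold; not specified in the patent

def rear_guard(env_is_complex: bool, rear_ranges_m: list[float]) -> MotionCommand:
    """Coordination rule for the second sensing unit.

    The rear ranging lidars stay idle while the first unit reports an open,
    sparsely crowded environment; once a dense crowd or a narrow structure
    is flagged they scan continuously, and any return inside the safety
    margin triggers braking followed by pose adjustment and re-planning.
    """
    if not env_is_complex:
        return MotionCommand.PROCEED              # second unit not running
    if any(r < SAFE_DISTANCE_M for r in rear_ranges_m):
        return MotionCommand.BRAKE_AND_REPLAN     # obstacle inside safe distance
    return MotionCommand.PROCEED
```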
This embodiment further provides a method for laser and vision fusion SLAM for crowd environments which, as shown in fig. 5, includes:
step 1: Setting a walking instruction and a destination through the man-machine interaction touch screen or the man-machine interaction system;
step 2: The first sensing unit detects the environment information within the machine's main forward viewing angle, and the main control system preliminarily senses the complexity of the environment from this information, judging whether crowds are dense, whether passages are complex, the number of building obstacles and other environmental conditions. If the environment is not complex, the laser and vision fusion SLAM navigation system converts the depth image obtained by the binocular depth vision assembly into a point cloud, projects it onto the laser point cloud detected by the eye-safe laser radar, and converts the result, through information comparison and matching, into an occupancy grid base map for path planning;
if the environment is complex, the main control system starts the second sensing unit. The ranging lidars continuously scan within their viewing-angle range, detecting the distance between the machine and surrounding objects at the rear end and between the machine and passages or walls. When a distance falls below the safe distance, the machine raises an internal early warning or performs emergency braking through the mechanical motion system, and then adjusts the machine's pose or re-plans the path;
step 3: The main control system controls the mechanical motion system to advance to the destination along the actually planned path.
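Step 2 above is the core of the fusion: back-project the depth image into a point cloud, drop it onto the laser's horizontal plane, and rasterize both sources into a single grid base map for the planner. The sketch below follows that flow under assumed camera intrinsics, grid parameters and pre-aligned sensor extrinsics; none of these values are specified in the patent.

```python
import numpy as np

# Assumed camera intrinsics and grid parameters; placeholders for illustration.
FX = FY = 382.0           # focal lengths (px)
CX, CY = 320.0, 240.0     # principal point (px)
RES = 0.05                # occupancy grid resolution (m/cell)
SIZE = 200                # grid is SIZE x SIZE cells, machine at the center

def depth_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project a depth image (metres) into camera-frame 3-D points."""
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[np.isfinite(pts[:, 2]) & (pts[:, 2] > 0)]  # drop invalid pixels

def scan_to_points(ranges: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """Convert a planar laser scan (ranges, bearings in radians) to 2-D points."""
    return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=-1)

def to_grid(points_xy: np.ndarray) -> np.ndarray:
    """Rasterize fused 2-D points into a binary occupancy grid base map."""
    grid = np.zeros((SIZE, SIZE), dtype=np.uint8)
    cells = np.floor(points_xy / RES).astype(int) + SIZE // 2
    ok = (cells >= 0).all(axis=1) & (cells < SIZE).all(axis=1)
    grid[cells[ok, 1], cells[ok, 0]] = 1
    return grid

def fuse(depth_img, scan_ranges, scan_angles):
    """Project the camera cloud onto the laser's horizontal plane (frames
    assumed extrinsically aligned) and rasterize both sources into one grid."""
    cam_xy = depth_to_points(depth_img)[:, [0, 2]]       # drop the height axis
    laser_xy = scan_to_points(scan_ranges, scan_angles)
    return to_grid(np.vstack([cam_xy, laser_xy]))
```

A path planner can then run directly on the returned grid.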
The invention has the following advantages:
1) The design of the first sensing unit imitates the way humans see: it extracts environment information efficiently, avoids the computational burden that extracting a large amount of information would place on the machine, and, combined with the SLAM back-end algorithm, achieves an accurate description of the environment and improves the accuracy of the machine's movement;
2) The cooperative sensing of the first sensing unit and the second sensing unit avoids the influence of variables and obstacles while the machine is running;
3) Laser sensing and vision sensing are combined to realize their complementary advantages: depth information is detected even though neither sensor alone spans the full viewing angle, the laser has high resolving power for the contours of environmental objects, and the depth camera has a strong capability for acquiring environmental color and planar information; fusing the two avoids information gaps, which is the key to autonomous positioning and navigation.
In this specification, the embodiments are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another. The device disclosed in an embodiment corresponds to the method disclosed in that embodiment, so its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A system for laser and vision fusion SLAM for a crowd environment, comprising: a man-machine interaction system, a man-machine interaction touch screen, a first sensing unit, a main control system, a second sensing unit, a functional fusion machine body, a laser and vision fusion SLAM navigation system, a mechanical motion system and a main control machine body, wherein the man-machine interaction system, the man-machine interaction touch screen, the first sensing unit, the second sensing unit, the main control system, the laser and vision fusion SLAM navigation system and the mechanical motion system are all arranged on the main control machine body, and the functional fusion machine body is connected with the main control machine body;
the man-machine interaction system is used for realizing man-machine interaction;
the man-machine interaction touch screen is used for realizing information query and instruction issuing;
the first sensing unit is used for detecting the environment within the machine's main forward viewing angle;
the second sensing unit is used for sensing obstacles and realizing distance measurement and pose estimation for obstacles around the machine; the second sensing unit does not need to run in real time and does not run in open, sparsely crowded environments; when the first sensing unit senses from the environment information that crowds are surging or that the building structure is narrow and complex, the second sensing unit starts to operate;
the main control system is used for realizing instruction control;
the laser and vision fusion SLAM navigation system is used for map construction and path planning;
the mechanical motion system is used for controlling the machine to walk according to the corresponding instruction;
the functional fusion machine body is used for carrying required goods and realizing material transportation.
2. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the first sensing unit is disposed at the upper front of the main control machine body and comprises an eye-safe laser radar and a binocular depth vision assembly, the eye-safe laser radar being disposed directly above the binocular depth vision assembly.
3. The system for laser and vision fusion SLAM for a crowd environment of claim 2, wherein the second sensing unit comprises two ranging lidars disposed at diagonally opposite positions at the rear end of the main control machine body.
4. The system for laser and vision fusion SLAM for a crowd environment of claim 2, wherein the binocular depth vision assembly comprises a depth vision system left imager, a depth vision system right imager and an RGB module, the depth vision system left imager and the depth vision system right imager being symmetrically disposed on both sides of the RGB module.
5. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the human-computer interaction system comprises a voice recognition unit, a voice playing unit and a WIFI communication unit;
the voice recognition unit is used for performing voice recognition;
the voice playing unit is used for realizing voice playing;
the WIFI communication unit is used for realizing communication with the monitoring platform.
6. The system for laser and vision fusion SLAM for a crowd environment of claim 3, wherein the eye-safe lidar has an operating wavelength of 1550 nm and the ranging lidar has an operating wavelength of 905 nm.
7. The system for laser and visual fusion SLAM for a crowd environment of claim 1, further comprising an information display screen for enabling information display.
8. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the functional fusion body is a box for placing items or an ultraviolet sterilization body.
9. The system for laser and vision fusion SLAM for a crowd environment of claim 1, wherein the main control machine body is an L-shaped main control machine body.
10. A method of laser and vision fusion SLAM for a crowd environment, the method implemented based on the system of claim 1, comprising:
step 1: setting a walking instruction and a destination through the man-machine interaction touch screen or the man-machine interaction system;
step 2: the main control system judges the complexity of the environment according to the environment information, detected by the first sensing unit, within the machine's main forward viewing angle; if the environment is not complex, the laser and vision fusion SLAM navigation system converts the depth image obtained by the binocular depth vision assembly into a point cloud, projects it onto the laser point cloud detected by the eye-safe laser radar, and converts the result, through information comparison and matching, into an occupancy grid base map for path planning;
if the environment is complex, the main control system starts the second sensing unit; the ranging lidars continuously scan within their viewing-angle range, detect the distance between the machine and the surrounding obstacles at the rear end, and judge whether that distance is within the safe distance; if it is, the mechanical motion system operates normally, otherwise the mechanical motion system brakes, and pose adjustment and path re-planning are carried out through the laser and vision fusion SLAM navigation system;
step 3: the main control system controls the mechanical motion system to advance to the destination along the actually planned path.
CN202110308864.XA 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM Active CN112882480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110308864.XA CN112882480B (en) 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110308864.XA CN112882480B (en) 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM

Publications (2)

Publication Number Publication Date
CN112882480A CN112882480A (en) 2021-06-01
CN112882480B 2023-07-21

Family

ID=76041980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110308864.XA Active CN112882480B (en) 2021-03-23 2021-03-23 System and method for fusing laser and vision for crowd environment with SLAM

Country Status (1)

Country Link
CN (1) CN112882480B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190286145A1 (en) * 2018-03-14 2019-09-19 Omron Adept Technologies, Inc. Method and Apparatus for Dynamic Obstacle Avoidance by Mobile Robots

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106950985A (en) * 2017-03-20 2017-07-14 成都通甲优博科技有限责任公司 A kind of automatic delivery method and device
CN109799831A (en) * 2018-03-19 2019-05-24 徐州艾奇机器人科技有限公司 A kind of quick cruiser system of two-wheel drive type and working method
CN109375629A (en) * 2018-12-05 2019-02-22 苏州博众机器人有限公司 A kind of cruiser and its barrier-avoiding method that navigates
CN111006655A (en) * 2019-10-21 2020-04-14 南京理工大学 Multi-scene autonomous navigation positioning method for airport inspection robot
CN111076726A (en) * 2019-12-31 2020-04-28 深圳供电局有限公司 Vision-assisted obstacle avoidance method and device for inspection robot, equipment and storage medium
CN111338359A (en) * 2020-04-30 2020-06-26 武汉科技大学 Mobile robot path planning method based on distance judgment and angle deflection
CN111949027A (en) * 2020-08-10 2020-11-17 珠海一维弦机器人有限公司 Self-adaptive robot navigation method and device
CN111752285A (en) * 2020-08-18 2020-10-09 广州市优普科技有限公司 Autonomous navigation method and device for quadruped robot, computer equipment and storage medium
CN112318507A (en) * 2020-10-28 2021-02-05 内蒙古工业大学 Robot intelligent control system based on SLAM technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin Dong. Research on laser-vision fusion SLAM and navigation for mobile robots in complex indoor environments. China Master's Theses Full-text Database, Information Science and Technology, 2021: abstract, pp. 10-11 and 21-22. *

Also Published As

Publication number Publication date
CN112882480A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
US11754721B2 (en) Visualization and semantic monitoring using lidar data
US11725956B2 (en) Apparatus for acquiring 3-dimensional maps of a scene
US10807230B2 (en) Bistatic object detection apparatus and methods
US11407116B2 (en) Robot and operation method therefor
Stiller et al. Multisensor obstacle detection and tracking
CN109160452A (en) Unmanned transhipment fork truck and air navigation aid based on laser positioning and stereoscopic vision
CN110147106A (en) Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system
KR102391771B1 (en) Method for operation unmanned moving vehivle based on binary 3d space map
CN111079607A (en) Automatic driving system with tracking function
CN111781929A (en) AGV trolley and 3D laser radar positioning and navigation method
CN112477533A (en) Dual-purpose transport robot of facility agriculture rail
EP3842885A1 (en) Autonomous movement device, control method and storage medium
US20240085916A1 (en) Systems and methods for robotic detection of escalators and moving walkways
CN112882480B (en) System and method for fusing laser and vision for crowd environment with SLAM
CN113081525B (en) Intelligent walking aid equipment and control method thereof
Bostelman et al. Towards AGV safety and navigation advancement obstacle detection using a TOF range camera
US11537137B2 (en) Marker for space recognition, method of moving and lining up robot based on space recognition and robot of implementing thereof
Rodríguez-Gómez et al. UAV human teleoperation using event-based and frame-based cameras
EP2919150A1 (en) Safety system for vehicles
Bostelman et al. Obstacle detection using a time-of-flight range camera for automated guided vehicle safety and navigation
CN114663754A (en) Detection method, detection device, multi-legged robot and storage medium
CN114815809A (en) Obstacle avoidance method and system for mobile robot, terminal device and storage medium
Wu et al. Transmission line unmanned aerial vehicle obstacle avoidance system incorporating multiple sensing technologies
US20240027581A1 (en) Device and Method for Detecting Objects in a Monitored Zone
Malinovskii Modern vizualization technologies fusion–the way to artificial intellectual systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant