CN115014320A - Method and system for map building by an indoor robot in a glass scene - Google Patents

Method and system for map building by an indoor robot in a glass scene

Info

Publication number
CN115014320A
CN115014320A
Authority
CN
China
Prior art keywords
glass
mirror
area
reflection intensity
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210552223.3A
Other languages
Chinese (zh)
Inventor
杨洪杰
郭震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jingwu Trade Technology Development Co Ltd
Original Assignee
Shanghai Jingwu Trade Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jingwu Trade Technology Development Co Ltd filed Critical Shanghai Jingwu Trade Technology Development Co Ltd
Priority to CN202210552223.3A
Publication of CN115014320A
Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3807: Creation or updating of map data characterised by the type of data
    • G01C21/3811: Point data, e.g. Point of Interest [POI]
    • G01C21/383: Indoor data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G01C21/3837: Data obtained from a single source
    • G01C21/3859: Differential updating map data
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a map building method and system for an indoor robot in a glass scene, comprising the following steps: step S1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges; step S2: identifying the positions of glass and mirrors in the visual data acquired by the robot according to the reflection intensity; step S3: transforming the detected glass and mirror position data into the grid map during the SLAM process and updating the corresponding grid cells to occupied; step S4: removing the symmetric images that appear in glass and mirror surfaces from the visual data. The scheme provides a simple and effective glass and mirror identification method which, combined with the SLAM algorithm, can effectively improve map accuracy.

Description

Method and system for map building by an indoor robot in a glass scene
Technical Field
The invention relates to the technical field of robot mapping, and in particular to a method and a system for map building by an indoor robot in a glass scene.
Background
With the rapid development of artificial intelligence technology, mobile robots are playing an increasingly prominent role in many areas of social life. Among the underlying techniques, SLAM is crucial in mobile robot research. Its defining characteristic is that localization and map construction are carried out simultaneously; combined with a path planning algorithm, it enables the robot to move autonomously. Both robot localization and environment map construction require real-time perception of changes in the surrounding environment through external sensors. Lidar is widely used on mobile robots because of its high precision, high data acquisition rate and wide scanning range, but owing to the physical characteristics of light pulses it cannot return correct data to the robot when scanning transparent objects. This causes the mobile robot to perceive this part of the environment incorrectly, which in turn affects the proper operation of the entire robot system. In recent years, many architectural interior finishes use glass or mirrors, and because of the transparent nature of glass panels the lidar cannot obtain accurate data, so SLAM becomes ineffective in these scenarios.
Chinese patent document No. CN113203409B discloses a method for constructing a navigation map for a mobile robot in a complex indoor environment, which includes: acquiring the robot's own pose information and environment information; performing initial SLAM and establishing an original map; processing the lidar intensity data and screening areas where glass is suspected to exist; selecting RGB images according to the suspected glass areas; detecting glass in the complex environment based on the selected RGB images; determining the state of the grid cells in the corresponding areas according to the glass detection result; and updating the map to provide a two-dimensional grid map containing the indoor glass. That patent document recognizes glass regions with a deep learning algorithm and therefore places high demands on the robot's computing hardware.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a map building method and system for an indoor robot in a glass scene.
The invention provides a map building method for an indoor robot in a glass scene, comprising the following steps:
step S1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges;
step S2: identifying the positions of glass and mirrors in the visual data acquired by the robot according to the reflection intensity;
step S3: transforming the detected glass and mirror position data into the grid map during the SLAM process, and updating the corresponding grid cells to occupied;
step S4: removing the symmetric images that appear in glass and mirror surfaces from the visual data.
Preferably, the materials in step S1 include glass, mirrors and common objects;
when only a very small amount of the laser light returns to the sensor, the material is glass;
when no laser light returns to the sensor, or most of it returns, the material is a mirror surface;
when a portion of the laser light returns to the sensor, the material is a common object.
Preferably, in step S2 the actual positions are determined by obtaining the edge line segments of the glass and the mirror, an edge line segment being required to satisfy the following conditions:
there is a step in the measured distance;
there is a step in the reflection intensity;
the reflection intensity within the line segment region satisfies the reflection intensity threshold;
connecting the edge line segments that simultaneously satisfy these conditions yields the actual positions of the glass and the mirror.
Preferably, the distance step means that the measured distance values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region.
Preferably, the reflection intensity step means that the measured reflection intensity values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region.
Preferably, the step S4 includes the following sub-steps:
step S4.1: detecting a mirror, and acquiring the positions of its end points to obtain the mirror edge line segment;
step S4.2: acquiring the connecting lines between the robot and the end points of the mirror edge line segment, and calculating the positions of the end points of the symmetric wall surface;
step S4.3: setting the area enclosed by the connecting lines between the end points of the mirror edge line segment and the end points of the symmetric wall surface as an unknown area in the map.
The invention provides a map building system for an indoor robot in a glass scene, comprising the following modules:
module M1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges;
module M2: identifying the positions of glass and mirrors in the visual data acquired by the robot according to the reflection intensity;
module M3: transforming the detected glass and mirror position data into the grid map during the SLAM process, and updating the corresponding grid cells to occupied;
module M4: removing the symmetric images that appear in glass and mirror surfaces from the visual data.
Preferably, the materials identified in module M1 include glass, mirrors and common objects;
when only a very small amount of the laser light returns to the sensor, the material is glass;
when no laser light returns to the sensor, or most of it returns, the material is a mirror surface;
when a portion of the laser light returns to the sensor, the material is a common object.
Preferably, in module M2 the actual positions are determined by obtaining the edge line segments of the glass and the mirror, an edge line segment being required to satisfy the following conditions:
there is a step in the measured distance;
there is a step in the reflection intensity;
the reflection intensity within the line segment region satisfies the reflection intensity threshold;
connecting the edge line segments that simultaneously satisfy these conditions yields the actual positions of the glass and the mirror.
Preferably, the distance step means that the measured distance values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region.
Compared with the prior art, the invention has the following beneficial effects:
1. The scheme provides a simple and effective glass and mirror identification method which, combined with the SLAM algorithm, can effectively improve map accuracy.
2. The recognition method provided by the invention uses a simple and efficient algorithm and has high recognition efficiency.
3. The invention identifies the positions of glass and mirrors from the distance and the reflection intensity, and has high identification accuracy.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic view of a glass material reflecting laser light;
FIG. 2 is a schematic view of the reflection from a mirror material when the laser is incident along the normal direction;
FIG. 3 is a schematic view of the reflection from a mirror material when the laser is incident away from the normal direction;
FIG. 4 is a schematic view of diffuse reflection of laser light from a common material;
FIG. 5 is a schematic view of the robot identifying glass and mirror edges;
FIG. 6 is a schematic view of removing the symmetric images appearing in glass and mirror surfaces.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will help those skilled in the art to further understand the invention, but are not intended to limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and all of these fall within the scope of the present invention.
The invention provides a map building method for an indoor robot in a glass scene, comprising the following steps:
step S1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges.
Referring to FIG. 1, when the material is glass, most of the laser light is transmitted through the glass, and only a very small amount returns to the sensor.
Referring to FIG. 2, when the material is a mirror surface and the laser is incident along the normal direction, most of the reflected energy returns to the sensor; referring to FIG. 3, when the incidence is away from the normal, the specular reflection returns no data to the sensor.
Referring to FIG. 4, when the material is a common object, the reflection is generally diffuse and part of the reflected energy is received by the sensor.
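To make this classification rule concrete, here is a minimal Python sketch (not part of the patent text; the return-ratio thresholds and the function name classify_material are assumptions chosen for illustration only) that labels the surface hit by a group of laser pulses from the fraction of pulses that actually return to the sensor:

# Hypothetical thresholds; the patent only specifies the calibration of step S1,
# so these numbers are placeholders that would come from that calibration.
GLASS_MAX_RETURN_RATIO = 0.05    # only a very small amount of light returns -> glass
MIRROR_HIGH_RETURN_RATIO = 0.90  # near-normal incidence on a mirror -> most light returns

def classify_material(return_ratio: float) -> str:
    """Label the surface hit by a group of laser pulses from the fraction that returned."""
    if return_ratio == 0.0 or return_ratio >= MIRROR_HIGH_RETURN_RATIO:
        return "mirror"   # no return (off-normal) or almost total return (normal incidence)
    if return_ratio <= GLASS_MAX_RETURN_RATIO:
        return "glass"    # only a very small amount of light comes back
    return "common"       # diffuse reflection returns part of the energy

For example, classify_material(0.02) yields "glass", classify_material(0.0) or classify_material(0.95) yields "mirror", and classify_material(0.4) yields "common".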
Step S2: the positions of the glass and the mirror are identified in the vision data acquired by the robot according to the intensity of the reflected light. Referring to fig. 5, since the actual position of the glass cannot be directly extracted from the laser data, the edge line segment of the glass is acquired by the gradient of the laser data, thereby determining the actual position of the glass. The edge line segment includes the following conditions:
there is a distance threshold step; there is a step in the data for the distance measurements from the non-glass region to the glass region and from the glass region to the non-glass region.
There is a reflection intensity step; there is a step in the data for the measurement of the intensity of the reflected light, both from the non-glass region to the glass region and from the glass region to the non-glass region.
The line segment area meets the threshold value of the reflecting intensity;
and connecting the edge line segments simultaneously meeting the conditions to obtain the actual positions of the glass and the mirror.
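A minimal sketch of this edge extraction, assuming the scan arrives as per-beam distance and intensity arrays and that the numeric thresholds come from the calibration of step S1 (all parameter names and values here are illustrative assumptions, not specified by the patent):

import numpy as np

def find_glass_edges(distances, intensities,
                     dist_step=0.5,        # assumed distance-jump threshold in metres
                     int_step=30.0,        # assumed intensity-jump threshold
                     glass_int_low=5.0,    # assumed lower bound of the glass intensity range
                     glass_int_high=40.0): # assumed upper bound of the glass intensity range
    """Return (start, end) beam-index pairs whose enclosed region satisfies all three conditions."""
    distances = np.asarray(distances, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    # Conditions 1 and 2: a simultaneous step in distance and in reflection intensity
    # between adjacent beams marks a candidate edge.
    dist_jumps = np.abs(np.diff(distances)) > dist_step
    int_jumps = np.abs(np.diff(intensities)) > int_step
    edge_idx = np.flatnonzero(dist_jumps & int_jumps)
    segments = []
    # Condition 3: keep only regions between two candidate edges whose mean reflection
    # intensity lies inside the calibrated glass/mirror intensity range.
    for start, end in zip(edge_idx[:-1], edge_idx[1:]):
        region = intensities[start + 1:end + 1]
        if region.size and glass_int_low <= region.mean() <= glass_int_high:
            segments.append((int(start) + 1, int(end)))
    return segments

The returned index pairs delimit the candidate glass or mirror line segments, whose end points can then be converted to map coordinates and connected as described above.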
Step S3: transforming the detected glass and mirror position data into a grid map in a SLAM process, and updating the grid map into occupation;
step S4: removing the symmetrical images appearing in the glass and mirror surfaces in the visual data, with reference to fig. 6, specifically comprises the following sub-steps:
step S4.1: detecting a mirror, and acquiring the position of an end point of the mirror to obtain a mirror edge line segment;
step S4.2: acquiring a connection line between the robot and the end point of the edge line segment of the mirror, and calculating to obtain the positions of the end points of the symmetrical wall surface;
step S4.3: and setting an area surrounded by the connection lines of the end points of the edge line segments of the mirror and the end points of the symmetrical wall surface as an unknown area in the map.
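The following sketch illustrates these sub-steps under simplifying assumptions (2D map, mirror given by its two end points, real wall end points known; the helper names, the reflection-across-the-mirror-line shortcut for obtaining the symmetric wall end points, and the use of matplotlib's Path for the point-in-polygon test are illustrative choices, not the patent's exact computation). Every cell inside the enclosed region is reset to unknown:

import numpy as np
from matplotlib.path import Path

UNKNOWN = -1  # common occupancy-grid value for "unknown"; assumed here

def reflect_point(p, m0, m1):
    """Mirror image of point p across the line through the mirror end points m0 and m1."""
    p, m0, m1 = (np.asarray(v, dtype=float) for v in (p, m0, m1))
    d = (m1 - m0) / np.linalg.norm(m1 - m0)
    foot = m0 + np.dot(p - m0, d) * d   # foot of the perpendicular from p onto the mirror line
    return 2.0 * foot - p

def clear_mirror_phantom(grid, m0, m1, wall0, wall1, resolution, origin):
    """Reset the region enclosed by the mirror end points and the mirrored wall end points to unknown."""
    # Symmetric wall end points: reflections of the real wall end points across the mirror line.
    # wall0 is assumed to lie on the same side as m0 so that the polygon is a simple quadrilateral.
    poly = Path([m0, m1, reflect_point(wall1, m0, m1), reflect_point(wall0, m0, m1)])
    rows, cols = np.indices(grid.shape)
    xs = origin[0] + (cols + 0.5) * resolution   # cell-centre x coordinates in the world frame
    ys = origin[1] + (rows + 0.5) * resolution   # cell-centre y coordinates in the world frame
    inside = poly.contains_points(np.column_stack([xs.ravel(), ys.ravel()])).reshape(grid.shape)
    grid[inside] = UNKNOWN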
The invention also provides a map building system for an indoor robot in a glass scene, comprising the following modules:
Module M1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges;
the materials identified in module M1 include glass, mirrors and common objects;
when only a very small amount of the laser light returns to the sensor, the material is glass;
when no laser light returns to the sensor, or most of it returns, the material is a mirror surface;
when a portion of the laser light returns to the sensor, the material is a common object.
Module M2: identifying the positions of the glass and the mirror in the visual data acquired by the robot according to the reflection intensity; the actual positions are determined by obtaining the edge line segments of the glass and the mirror, an edge line segment being required to satisfy the following conditions:
there is a step in the measured distance: the distance values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region;
there is a step in the reflection intensity: the reflection intensity values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region;
the reflection intensity within the line segment region satisfies the reflection intensity threshold.
Connecting the edge line segments that simultaneously satisfy these conditions yields the actual positions of the glass and the mirror.
Module M3: transforming the detected glass and mirror position data into the grid map during the SLAM process, and updating the corresponding grid cells to occupied;
Module M4: removing the symmetric images that appear in glass and mirror surfaces from the visual data.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description has described specific embodiments of the present invention. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A map building method for an indoor robot in a glass scene, characterized by comprising the following steps:
step S1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges;
step S2: identifying the positions of glass and mirrors in the visual data acquired by the robot according to the reflection intensity;
step S3: transforming the detected glass and mirror position data into the grid map during the SLAM process, and updating the corresponding grid cells to occupied;
step S4: removing the symmetric images that appear in glass and mirror surfaces from the visual data.
2. The map building method for an indoor robot in a glass scene according to claim 1, characterized in that the materials in step S1 include glass, mirrors and common objects;
when only a very small amount of the laser light returns to the sensor, the material is glass;
when no laser light returns to the sensor, or most of it returns, the material is a mirror surface;
when a portion of the laser light returns to the sensor, the material is a common object.
3. The map building method for an indoor robot in a glass scene according to claim 1, characterized in that in step S2 the actual positions are determined by obtaining the edge line segments of the glass and the mirror, an edge line segment being required to satisfy the following conditions:
there is a step in the measured distance;
there is a step in the reflection intensity;
the reflection intensity within the line segment region satisfies the reflection intensity threshold;
connecting the edge line segments that simultaneously satisfy these conditions yields the actual positions of the glass and the mirror.
4. The map building method for an indoor robot in a glass scene according to claim 3, characterized in that the distance step means that the measured distance values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region.
5. The map building method for an indoor robot in a glass scene according to claim 3, characterized in that the reflection intensity step means that the measured reflection intensity values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region.
6. The map building method for an indoor robot in a glass scene according to claim 1, characterized in that step S4 comprises the following sub-steps:
step S4.1: detecting a mirror, and acquiring the positions of its end points to obtain the mirror edge line segment;
step S4.2: acquiring the connecting lines between the robot and the end points of the mirror edge line segment, and calculating the positions of the end points of the symmetric wall surface;
step S4.3: setting the area enclosed by the connecting lines between the end points of the mirror edge line segment and the end points of the symmetric wall surface as an unknown area in the map.
7. A map building system for an indoor robot in a glass scene, characterized by comprising the following modules:
module M1: collecting the reflection intensity of different materials in the field environment under laser irradiation to obtain the corresponding reflection intensity ranges;
module M2: identifying the positions of glass and mirrors in the visual data acquired by the robot according to the reflection intensity;
module M3: transforming the detected glass and mirror position data into the grid map during the SLAM process, and updating the corresponding grid cells to occupied;
module M4: removing the symmetric images that appear in glass and mirror surfaces from the visual data.
8. The map building system for an indoor robot in a glass scene according to claim 7, characterized in that the materials identified in module M1 include glass, mirrors and common objects;
when only a very small amount of the laser light returns to the sensor, the material is glass;
when no laser light returns to the sensor, or most of it returns, the material is a mirror surface;
when a portion of the laser light returns to the sensor, the material is a common object.
9. The map building system for an indoor robot in a glass scene according to claim 7, characterized in that in module M2 the actual positions are determined by obtaining the edge line segments of the glass and the mirror, an edge line segment being required to satisfy the following conditions:
there is a step in the measured distance;
there is a step in the reflection intensity;
the reflection intensity within the line segment region satisfies the reflection intensity threshold;
connecting the edge line segments that simultaneously satisfy these conditions yields the actual positions of the glass and the mirror.
10. The map building system for an indoor robot in a glass scene according to claim 9, characterized in that the distance step means that the measured distance values show a step in the data both from the non-glass region to the glass region and from the glass region to the non-glass region.
CN202210552223.3A 2022-05-20 2022-05-20 Method and system for map building by an indoor robot in a glass scene Pending CN115014320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210552223.3A 2022-05-20 2022-05-20 CN115014320A (en) Method and system for map building by an indoor robot in a glass scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210552223.3A 2022-05-20 2022-05-20 CN115014320A (en) Method and system for map building by an indoor robot in a glass scene

Publications (1)

Publication Number Publication Date
CN115014320A true CN115014320A (en) 2022-09-06

Family

ID=83069190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210552223.3A Pending CN115014320A (en) Method and system for map building by an indoor robot in a glass scene

Country Status (1)

Country Link
CN (1) CN115014320A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116400371A (en) * 2023-06-06 2023-07-07 山东大学 Indoor reflective transparent object position identification method and system based on three-dimensional point cloud
CN116400371B (en) * 2023-06-06 2023-09-26 山东大学 Indoor reflective transparent object position identification method and system based on three-dimensional point cloud

Similar Documents

Publication Publication Date Title
Kropp et al. Interior construction state recognition with 4D BIM registered image sequences
CN111492265B (en) Multi-resolution, simultaneous localization and mapping based on 3D lidar measurements
Nguyen et al. A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics
Foster et al. Visagge: Visible angle grid for glass environments
Zhao et al. Mapping with reflection-detection and utilization of reflection in 3d lidar scans
KR101888295B1 (en) Method for estimating reliability of distance type witch is estimated corresponding to measurement distance of laser range finder and localization of mobile robot using the same
Weingarten Feature-based 3D SLAM
CN112446927A (en) Combined calibration method, device and equipment for laser radar and camera and storage medium
US10107897B2 (en) Method for evaluating type of distance measured by laser range finder and method for estimating position of mobile robot by using same
CN111337011A (en) Indoor positioning method based on laser and two-dimensional code fusion
Koch et al. Detection of specular reflections in range measurements for faultless robotic slam
CN115014320A (en) Method and system for building image of indoor robot in glass scene
Aijazi et al. Automatic detection and feature estimation of windows in 3D urban point clouds exploiting façade symmetry and temporal correspondences
CN114966714A (en) Window occlusion detection method and device
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
Jiang et al. Online glass confidence map building using laser rangefinder for mobile robots
Howard et al. Fast visual mapping for mobile robot navigation
Tao et al. Glass Recognition and Map Optimization Method for Mobile Robot Based on Boundary Guidance
Mora et al. Intensity-based identification of reflective surfaces for occupancy grid map modification
CN115147561A (en) Pose graph generation method, high-precision map generation method and device
Mo et al. A survey on recent reflective detection methods in simultaneous localization and mapping for robot applications
Mandischer et al. Bots2ReC: Radar localization in low visibility indoor environments
Cui et al. Recognition of indoor glass by 3D lidar
Wu et al. Method for detecting glass wall with LiDAR and ultrasonic sensor
Zhang et al. A glass detection method based on multi-sensor data fusion in simultaneous localization and mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination