CN115494484A - Robot sensor system self-checking method and robot - Google Patents


Info

Publication number
CN115494484A
CN115494484A
Authority
CN
China
Prior art keywords
point cloud
laser radar
correct
installation position
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211316906.5A
Other languages
Chinese (zh)
Inventor
温天宇
何林
唐旋来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd
Priority: CN202211316906.5A
Publication: CN115494484A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to group G01S17/00
    • G01S7/497 - Means for monitoring or calibrating
    • G01S7/4972 - Alignment of sensor
    • G01S7/4802 - Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention provides a self-checking method for a robot sensor system, the sensor system comprising a laser radar and a stereoscopic vision sensor mounted on a robot. The method comprises the following steps: determining that the parking position of the robot has a preset positional relationship with a target object; controlling the laser radar to emit detection pulses to its surroundings, receiving echo pulses of the detection pulses, and generating a first point cloud based on the echo pulses; determining an effective detection area of the laser radar from the first point cloud, and determining from the effective detection area whether the installation position of the laser radar is correct; when the installation position of the laser radar is determined to be correct, acquiring a second point cloud collected by the stereoscopic vision sensor; and determining from the first point cloud and the second point cloud whether the installation position of the stereoscopic vision sensor is correct. With the technical scheme of the invention, whether the laser radar is installed correctly can be detected quickly, and the positional relationship between the stereoscopic vision camera and the laser radar can be checked and corrected, at low time and labor cost and with high efficiency.

Description

Robot sensor system self-checking method and robot
Technical Field
The present invention generally relates to the field of intelligent robots, and more particularly, to a method for self-checking a robot sensor system, a robot, and a computer-readable storage medium.
Background
With the development of science and technology, robots are widely applied in many fields of work and life and are increasingly popular. Sensors play an important role in robot control; it is the sensors that give a robot perception and reaction capabilities similar to those of a human.
A robot is usually transported to a customer site after being assembled in a factory, but fasteners holding the sensors, such as screws and bolts, may loosen due to vibration during transport, so that the installation positions of the sensors change; changes in the temperature and humidity of the robot's surroundings can likewise shift the installation positions. When a sensor's installation position changes, its detection data is no longer accurate enough, which affects the measurement results and in turn the normal operation of the robot. It is therefore important to detect whether the sensors are working properly. At present, sensor installation positions are usually checked by methods such as manual visual inspection, and whether each sensor works normally is determined by confirming that it can transmit information normally when the robot is powered on (i.e., hardware connection detection); this requires considerable time and labor and is inefficient.
The statements in the background section merely describe prior art known to the inventors and do not necessarily represent all prior art in the field.
Disclosure of Invention
To address one or more of the problems of the prior art, the present invention provides a method for self-checking of a robot sensor system, the sensor system including a lidar and a stereo vision sensor mounted on a robot, the method comprising:
determining that a parking position of the robot has a preset position relation with a preset target object;
controlling a laser radar to emit detection pulses to the periphery, receiving echo pulses of the detection pulses, and generating a first point cloud based on the echo pulses;
determining an effective detection area of the laser radar according to the first point cloud, and determining whether the installation position of the laser radar is correct or not according to the effective detection area;
when the installation position of the laser radar is determined to be correct, acquiring a second point cloud collected by the stereoscopic vision sensor; and
determining whether the installation position of the stereoscopic vision sensor is correct according to the first point cloud and the second point cloud.
According to an aspect of the present invention, the preset target object has strong light-reflecting strips and weak light-reflecting strips at substantially the same height as the laser radar, and the strong and weak light-reflecting strips are arranged in a preset pattern and face the laser radar.
According to an aspect of the present invention, the step of determining whether the installation position of the laser radar is correct according to the effective detection area includes: and comparing the effective detection area with a preset ideal detection area of the laser radar, and determining whether the installation position of the laser radar is correct or not.
According to an aspect of the present invention, the determining whether the installation position of the laser radar is correct according to the valid detection area includes: if the difference between the effective detection area and the ideal detection area is smaller than a first threshold value, determining that the installation position of the laser radar is correct; and if the difference between the two is not smaller than the first threshold value, determining that the installation position of the laser radar is incorrect.
According to an aspect of the invention, wherein the step of determining the effective detection area of the lidar comprises:
determining a distance and a reflectivity according to the first point cloud;
denoising and straight line fitting are carried out on the first point cloud;
and determining the effective detection area of the laser radar according to the distance, the reflectivity and the fitted straight line.
According to an aspect of the invention, the method further comprises: when the installation position of the laser radar is incorrect, outputting alarm information indicating that the installation position of the laser radar is incorrect.
According to an aspect of the invention, further comprising:
carrying out point cloud filtering on the second point cloud to remove noise points in the second point cloud;
and performing plane fitting based on the second point cloud.
According to an aspect of the present invention, the step of determining whether the installation position of the stereoscopic vision sensor is correct includes: when the straight line fitted from the first point cloud overlaps the plane fitted from the second point cloud, determining whether the installation position of the stereoscopic vision sensor is correct.
According to an aspect of the present invention, the step of determining whether the mounting position of the stereo vision sensor is correct further comprises:
when the straight line fitted according to the first point cloud is overlapped with the plane fitted according to the second point cloud, determining the relative pose relationship between the laser radar and the stereoscopic vision sensor;
when the difference of the relative pose relation relative to the ideal pose relation is smaller than a threshold value, determining that the installation position of the stereoscopic vision sensor is correct;
and when the difference of the relative pose relation relative to the ideal pose relation is larger than a threshold value, determining that the installation position of the stereoscopic vision sensor is abnormal, and correcting the position parameters of the stereoscopic vision sensor according to the relative pose relation.
The present invention also provides a robot comprising:
a main body having a traveling mechanism;
a sensor system including a laser radar and a stereo vision sensor to detect an environment around the robot;
a controller coupled to the running gear and the sensor system, configured to control the running gear and the sensor system, and configured to perform the method as described above.
The present invention also provides a computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the method as described above.
With the technical scheme of the invention, whether the installation position of the laser radar is correct can be detected quickly, and the positional relationship between the stereoscopic vision camera and the laser radar can be checked and corrected, at low time and labor cost and with high efficiency. This fills the gap that existing service robots cannot perform professional self-inspection during on-site technical support at the customer site, and helps improve the efficiency and professionalism of on-site robot deployment and after-sales service.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 shows a flow diagram of a method of self-testing a robotic sensor system according to one embodiment of the invention;
FIG. 2 shows a schematic view of a preset target according to one embodiment of the invention;
FIG. 3 is a schematic diagram illustrating a positional relationship of a robot to a preset target object according to an embodiment of the present invention;
FIG. 4A is a schematic diagram illustrating a point cloud fitting straight line when the laser radar installation position is correct according to a preferred embodiment of the present invention;
FIG. 4B is a schematic diagram illustrating a point cloud fitted straight line when the laser radar mounting location is incorrect according to a preferred embodiment of the present invention; and
fig. 5 shows a schematic view of a robot according to an embodiment of the invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations and positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be considered as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features described. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection or a communicative coupling; and as a direct connection or an indirect connection through intervening media, or an internal connection between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the first and second features are in direct contact, or that they are not in direct contact but contact each other via another feature between them. Moreover, the first feature being "on," "above," or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. The first feature being "under," "below," or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it should be understood that they are presented herein only to illustrate and explain the present invention and not to limit the present invention.
Before describing the method 100, the principle that the change of the position of the sensor influences the normal work of the robot is described.
Normally, a robot has a plurality of sensors, each fixed at its installation position on the robot and having a corresponding measurement range. When a sensor's installation position changes, the sensor may tilt or shift to some degree, so that part of it is occluded by the robot's housing or internal data lines. During detection the sensor then mistakes the occluded region for an obstacle and produces erroneous measurements; the robot consequently cannot build its map correctly from those measurements, which affects its positioning, navigation, obstacle avoidance, and other functions.
With the understanding that sensor position changes affect the normal operation of the robot, the method 100 is described in detail below.
Fig. 1 shows a flow diagram of a method 100 for self-testing of a robot sensor system comprising a lidar and a stereo vision sensor mounted on a robot, according to an embodiment of the invention, the method comprising steps S101-S105, the individual steps of the method 100 being described in detail below.
In step S101, it is determined that the parking position of the robot has a preset positional relationship with a preset target object.
Fig. 2 shows a schematic diagram of a preset target object for assisting the sensor system in self-checking, according to an embodiment of the invention. As shown in fig. 2, the preset target object may be one or more flat plates of a certain size, able to cover the fields of view of the laser radar and the stereoscopic vision sensor. At a preset height, the preset target object carries a plurality of strong light-reflecting strips and a plurality of weak light-reflecting strips, arranged in a preset pattern and facing the laser radar. According to an embodiment of the invention, the preset height is, for example, substantially the same as the height of the laser radar; it should be understood that in actual use the strong and weak reflective strips are typically slightly taller than the laser radar so that the laser radar can detect them. The strong and weak strips may alternate at equal intervals, or follow other arrangements: for example, several (e.g., two) strong strips may alternate with one weak strip, or several (e.g., two) weak strips with one strong strip; the invention is not limited in this respect. The width, number, and material of the strong and weak reflective strips are likewise not limited: the strong strips may be made of, for example, diamond-grade reflective film, gold, copper, or aluminum, and the weak strips of, for example, black plastic or cloth.
The purpose of arranging strong and weak light-reflecting strips on the preset target object is to distinguish it from other obstacles in the robot's surroundings: the more distinctive the arrangement of the strips and the larger the difference in their reflectivity, the more easily the preset target object is distinguished from other obstacles. Having described the preset target object, the preset positional relationship between the robot's parking position and the target object is described next.
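The strip-pattern recognition described above can be sketched as a simple heuristic: classify each lidar return as strong or weak by its intensity and count the alternations. This is only an illustration; the intensity thresholds, alternation count, and function name are assumptions, not taken from the disclosure.

```python
def looks_like_target(intensities, high=200, low=50, min_alternations=4):
    """Heuristic check for the alternating strong/weak strip pattern.

    Returns with 'H' for returns at or above `high`, 'L' for returns at
    or below `low` (mid-range intensities are ignored), then counts
    label transitions; the preset target's alternating strips should
    produce many of them, unlike ordinary obstacles."""
    labels = []
    for v in intensities:
        if v >= high:
            labels.append("H")
        elif v <= low:
            labels.append("L")
    alternations = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return alternations >= min_alternations
```

A uniform wall yields no alternations and is rejected, while a scan across the strips yields a long H/L/H/L run and is accepted.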
Fig. 3 is a schematic diagram showing the positional relationship between the robot and the preset target object according to an embodiment of the present invention. As shown in fig. 3, the robot is parked in front of the preset target object at a certain preset distance (for example, 10 cm to 50 cm), facing the target object with its heading perpendicular to the plane of the target object.
In step S102, the laser radar is controlled to emit a probe pulse to the surroundings, receive an echo pulse of the probe pulse, and generate a first point cloud based on the echo pulse.
The invention does not limit the type of laser radar. For service robots such as meal-delivery robots, a single-line laser radar is preferred: it scans fast, has high resolution and high reliability, and responds faster in angular frequency and sensitivity than multi-line or 3D laser radar, so it detects surrounding obstacles more accurately in both distance and angle. A single-line laser radar generally works together with a rotating mechanism, which rotates the laser radar at a given frequency so as to scan and detect a range of the horizontal field of view and generate the first point cloud.
In step S103, an effective detection area of the laser radar is determined according to the first point cloud, and whether the installation position of the laser radar is correct is determined according to the effective detection area.
According to a preferred embodiment of the present invention, the distance and reflectivity are determined from the first point cloud, thereby determining whether the laser radar recognizes the preset target object. The first point cloud is a data set of points in the laser radar coordinate system, and the points carry rich information, such as two-dimensional coordinates (xj, yj) (for a single-line radar) or three-dimensional coordinates (xj, yj, zj) (for a multi-line or 3D laser radar), reflectivity, time, and so on. If the reflectivity of a point matches the reflectivity of the strong and/or weak reflective strips on the preset target object, and the distance derived from the point's time matches the distance to the preset target object, it can essentially be determined that the laser radar has recognized the preset target object. The laser radar performs ranging based on time of flight (TOF), calculated according to the following formula:
d = c × t / 2, where c is the speed of light, c = 3 × 10^8 m/s, and t is the round-trip flight time of the pulse.
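The TOF formula above can be expressed as a minimal sketch (the constant and function names are illustrative):

```python
C = 3.0e8  # speed of light, m/s

def tof_distance(t_flight_s: float) -> float:
    """Distance to the target from the round-trip pulse flight time
    (seconds), per d = c * t / 2."""
    return C * t_flight_s / 2.0

# An echo returning after 2 microseconds corresponds to roughly 300 m.
print(tof_distance(2e-6))
```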
To further confirm that the laser radar recognizes the preset target object, and to determine the effective detection area of the laser radar, the first point cloud must be denoised and fitted with a straight line. Noise arises because the robot is subject to interference from ambient light, smoke, and the like in its working environment (such as a restaurant); removing it reduces the interference with the real point cloud data, improves detection accuracy, and simplifies subsequent processing. According to a preferred embodiment of the present invention, the first point cloud may be denoised by radius filtering. Specifically, the first point cloud may be traversed, requiring that each point have at least 10 neighboring points within a preset radius (for example, 0.05 m) centered on that point; otherwise the point is regarded as noise (i.e., an outlier) and removed. It should be understood that this embodiment is only an example and does not limit the preset radius and/or the required number of neighboring points. According to a preferred embodiment of the invention, a least-squares method may be used to fit a straight line to the denoised first point cloud, and the effective detection area of the laser radar can be determined from the fitted line. Once the effective detection area is determined, whether the installation position of the laser radar is correct can be determined from it, as described in detail below.
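The radius filtering and least-squares line fitting described above can be sketched as follows. This is a minimal illustration using a brute-force O(n²) neighbor search; the radius and neighbor count follow the example values in the text, and the function names are assumptions.

```python
import math

def radius_filter(points, radius=0.05, min_neighbors=10):
    """Radius outlier removal on 2-D points: keep only points that have
    at least `min_neighbors` other points within `radius` metres."""
    kept = []
    for i, (xi, yi) in enumerate(points):
        n = sum(
            1 for j, (xj, yj) in enumerate(points)
            if i != j and math.hypot(xi - xj, yi - yj) <= radius
        )
        if n >= min_neighbors:
            kept.append((xi, yi))
    return kept

def fit_line_least_squares(points):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

In practice a KD-tree (e.g., in a point-cloud library) would replace the quadratic neighbor search, but the filtering criterion is the same.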
According to a preferred embodiment of the present invention, an effective detection area of the lidar may be compared with an ideal detection area thereof, a difference between the effective detection area and the ideal detection area may be calculated, and if the difference is smaller than a first threshold (e.g., 1% or 3%), it may be determined that the installation position of the lidar is correct; if the difference between the two is not less than the first threshold (e.g., 1% or 3%), it is determined that the laser radar is installed incorrectly.
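The threshold comparison above may be sketched as follows. The relative-difference formulation is an assumption; the disclosure specifies only that the difference is compared against a first threshold such as 1% or 3%.

```python
def lidar_mount_ok(effective_area: float, ideal_area: float,
                   first_threshold: float = 0.03) -> bool:
    """Installation position is deemed correct when the relative
    difference between the effective detection area and the preset
    ideal detection area is below `first_threshold` (e.g. 3%)."""
    diff = abs(effective_area - ideal_area) / ideal_area
    return diff < first_threshold
```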
According to another preferred embodiment of the present invention, it is also possible to determine whether the installation position of the lidar is correct by the shape characteristics of the fitted straight line. The following is specifically described.
Fig. 4A shows the point-cloud fitted line when the laser radar installation position is correct, and fig. 4B shows the fitted line when the installation position is incorrect, according to preferred embodiments of the present invention. Referring to fig. 4A, the fitted line is essentially straight, indicating that the generated points are all at substantially the same height and thus that the laser radar is installed correctly. By contrast, referring to fig. 4B, the fitted line is bent, indicating that the generated points are not at the same height; this occurs when the installation position of the laser radar has changed, for example because fasteners such as screws or bolts loosened and the laser radar vibrated during detection, so that the generated points no longer lie at the same height.
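The straight-versus-bent distinction can be quantified, for example, by the RMS residual of the points about the fitted line; a small residual corresponds to fig. 4A, a large one to fig. 4B. The residual threshold here is illustrative, not taken from the disclosure.

```python
import math

def line_is_straight(points, a, b, max_rms=0.01):
    """Check whether 2-D points lie on the fitted line y = a*x + b to
    within `max_rms` metres. A large residual means the point cloud
    'bends', suggesting the lidar has tilted or shifted."""
    rms = math.sqrt(
        sum((y - (a * x + b)) ** 2 for x, y in points) / len(points)
    )
    return rms <= max_rms
```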
According to a preferred embodiment of the present invention, when it is determined that the installation position of the lidar is incorrect, alarm information that the installation position of the lidar is incorrect is output, and the installation position of the lidar is corrected.
The above-described embodiment describes how to determine whether the installation position of the laser radar is correct, and the robot sensor system includes the stereo vision sensor in addition to the laser radar, and how to determine whether the installation position of the stereo vision sensor is correct, which will be described in detail below.
Ideally, the installation positions of the laser radar and the stereoscopic vision sensor on the robot are fixed (see fig. 5) and have a preset pose relationship between them. Therefore, once the installation position of the laser radar is determined to be correct, the laser radar can be used to check whether the installation position of the stereoscopic vision sensor is correct, as described next.
In step S104, when the installation position of the laser radar is determined to be correct, the second point cloud collected by the stereoscopic vision sensor is obtained. The second point cloud is a data set of points in the stereoscopic vision sensor coordinate system; its points likewise carry rich information, such as their three-dimensional coordinates (xv, yv, zv). After the second point cloud is obtained, point cloud filtering (such as radius filtering) is applied to remove noise, and a plane is fitted to the second point cloud to obtain a fitting plane for subsequent processing.
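The plane fitting mentioned above can be sketched with an SVD-based least-squares fit, one common approach; the disclosure does not specify the fitting method, so this is an assumption.

```python
import numpy as np

def fit_plane(points_xyz):
    """Least-squares plane through a 3-D point set.

    Returns (normal, d) for the plane n . p + d = 0 with |n| = 1.
    The normal is the right singular vector of the centered cloud
    with the smallest singular value."""
    pts = np.asarray(points_xyz, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d
```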
In step S105, it is determined whether the installation position of the stereoscopic vision sensor is correct according to the first point cloud and the second point cloud.
According to a preferred embodiment of the present invention, when a straight line fitted from the first point cloud (i.e., the fitted straight line) overlaps a plane fitted from the second point cloud (i.e., the fitted plane), it is determined whether the installation position of the stereoscopic vision sensor is correct. Specifically, a linear equation may be obtained from a straight line (i.e., a fitting straight line) fitted by the first point cloud, and a planar equation may be obtained from a plane (i.e., a fitting plane) fitted by the second point cloud.
The linear equation can be expressed by a standard equation of a linear equation as shown in the following equation (1-1):
Ajx+Bjy+Cj=0 (1-1)
the plane equation can be expressed by a standard formula of the plane equation as shown in the following equation (1-2):
Avx+Bvy+Cvz+Dv=0 (1-2)
The parameters of the linear equation, namely Aj, Bj, and Cj, are obtained from the coordinates (xj, yj, zj) of the points of the first point cloud; once these parameters are obtained, the specific expression of the linear equation is known. Similarly, the parameters of the plane equation, namely Av, Bv, Cv, and Dv, are obtained from the three-dimensional coordinates (xv, yv, zv) of the points of the second point cloud, giving the specific expression of the plane equation. The relative pose relationship between the laser radar and the stereoscopic vision sensor is then determined from the linear equation and the plane equation.
According to a preferred embodiment of the present invention, the actual relative pose relationship between the laser radar and the stereoscopic vision sensor can be compared with the ideal pose relationship: when the difference between the relative pose relationship and the ideal pose relationship is smaller than a threshold value, it is determined that the installation position of the stereoscopic vision sensor is correct; when the difference is larger than the threshold value, it is determined that the installation position of the stereoscopic vision sensor is abnormal. The relative pose relationship includes the rotation amounts θx, θy, θz and the translation amounts tx, ty, tz, where θx, θy and θz are the rotations about the x, y and z axes respectively, and tx, ty and tz are the translations along the x, y and z axes respectively.
According to a preferred embodiment of the present invention, when the rotation amount of the relative pose relationship differs from that of the ideal pose relationship by less than a threshold value (e.g., 0.5°), it is determined that the mounting position of the stereoscopic vision sensor is correct; when the difference exceeds the threshold value, it is determined that the mounting position of the stereoscopic vision sensor is abnormal.
According to another preferred embodiment of the present invention, when the translation amount of the relative pose relationship differs from that of the ideal pose relationship by less than a threshold value (e.g., 1 mm), it is determined that the mounting position of the stereoscopic vision sensor is correct; when the difference exceeds the threshold value, it is determined that the mounting position of the stereoscopic vision sensor is abnormal.
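The two threshold checks above could be combined in a small helper like the following. This is a minimal sketch, assuming the pose is given as per-axis rotations in degrees and translations in millimetres; the function name and default tolerances (0.5° and 1 mm, the example values in the text) are illustrative assumptions.

```python
import numpy as np

def check_mounting(actual_rot_deg, ideal_rot_deg, actual_t_mm, ideal_t_mm,
                   rot_tol_deg=0.5, trans_tol_mm=1.0):
    """Compare an actual relative pose (rotations about x/y/z in degrees,
    translations along x/y/z in mm) against the ideal pose.
    Returns True only when every component is within its tolerance."""
    rot_diff = np.abs(np.asarray(actual_rot_deg, dtype=float)
                      - np.asarray(ideal_rot_deg, dtype=float))
    trans_diff = np.abs(np.asarray(actual_t_mm, dtype=float)
                        - np.asarray(ideal_t_mm, dtype=float))
    return bool(np.all(rot_diff < rot_tol_deg)
                and np.all(trans_diff < trans_tol_mm))
```

A failed check would then trigger the alarm output and the pose-parameter correction described below in the text.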
According to a preferred embodiment of the present invention, when the mounting position of the stereoscopic vision sensor is determined to be correct, information indicating that the mounting position is normal is output. When the mounting position is determined to be abnormal, alarm information indicating a mounting position error of the stereoscopic vision sensor is output, and the position parameters of the stereoscopic vision sensor are corrected according to the relative pose relationship; that is, the robot processes data according to the actual relative pose relationship between the stereoscopic vision sensor and the laser radar, without manual intervention.
The above embodiment describes the case where the preset target object is a single flat plate. It should be understood that when the preset target object consists of a plurality of (for example, two) flat plates joined to form an included angle, the detection pulses emitted by the laser radar strike the preset target object more concentratedly, so that the point cloud obtained by the laser radar carries more shape features. This makes it easier to determine the effective detection area of the laser radar and, in turn, whether its installation position is correct.
The invention also provides a robot. Fig. 5 shows a schematic view of a robot 20 according to an embodiment of the invention. As shown in Fig. 5, the robot 20 comprises:
a main body 21 having a travelling mechanism 211;
a sensor system 22 including a laser radar 221 and a stereo vision sensor 222 to detect the environment around the robot 20; and
a controller 23 coupled to the travelling mechanism 211 and the sensor system 22, configured to control the travelling mechanism 211 and the sensor system 22, and configured to perform the method as described above, the controller 23 being physically located inside the robot 20.
The present invention also provides a computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the method as described above. The computer-readable storage medium includes, but is not limited to, any type of memory, and the invention does not limit the type of memory. The memory may be non-volatile and/or volatile memory. The non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Resistive RAM (ReRAM), Phase-Change RAM (PCRAM), or Flash Memory. Volatile memory may include Random Access Memory (RAM), registers, or cache. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus Dynamic RAM (DRDRAM).
By adopting the technical scheme of the present invention, whether the installation position of the laser radar is correct can be detected quickly, and the positional relationship between the stereoscopic vision camera and the laser radar can be checked and corrected, with low time and labor cost and high efficiency. This fills the gap that existing service robots cannot perform professional self-inspection during on-site technical support at the client, and helps improve the efficiency and professionalism of on-site robot deployment and after-sales service.
It should be noted that although the preset target object in the above embodiments is one or more flat plates, in practical applications the strongly and weakly reflective stripes can also be pasted at other positions, such as on a wall or a door; the preset target object is not limited to a flat plate and may be another object, and the invention is not limited in this respect. In addition, the restaurant robot is only an example and does not limit the invention; the robot may also work in other scenes, including but not limited to hospitals, libraries, and shopping malls.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that modifications may be made to the described embodiments, or equivalents may be substituted for some of their elements. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (11)

1. A method of self-testing a robot sensor system, the sensor system comprising a laser radar and a stereoscopic vision sensor mounted on a robot, the method comprising:
determining that a parking position of the robot has a preset position relation with a preset target object;
controlling the laser radar to emit detection pulses to its surroundings, receiving echo pulses of the detection pulses, and generating a first point cloud based on the echo pulses;
determining an effective detection area of the laser radar according to the first point cloud, and determining whether the installation position of the laser radar is correct or not according to the effective detection area;
when the mounting position of the laser radar is determined to be correct, acquiring second point cloud acquired by the stereoscopic vision sensor; and
and determining whether the installation position of the stereoscopic vision sensor is correct or not according to the first point cloud and the second point cloud.
2. The method of claim 1, wherein the predetermined target has strongly and weakly reflective stripes at substantially the same height as the lidar, the strongly and weakly reflective stripes being arranged in a predetermined arrangement and facing the lidar.
3. The method of claim 1, wherein the step of determining whether the installation position of the lidar is correct according to the effective detection area comprises: and comparing the effective detection area with a preset ideal detection area of the laser radar, and determining whether the installation position of the laser radar is correct.
4. The method of claim 3, wherein the step of determining whether the installation position of the laser radar is correct according to the effective detection area comprises: if the difference between the effective detection area and the ideal detection area is smaller than a first threshold value, determining that the installation position of the laser radar is correct; and if the difference between the two is not smaller than the first threshold value, determining that the installation position of the laser radar is incorrect.
5. The method of claim 1, wherein the step of determining an effective detection area of the lidar comprises:
determining a distance and a reflectivity according to the first point cloud;
denoising and straight line fitting are carried out on the first point cloud;
and determining the effective detection area of the laser radar according to the distance, the reflectivity and the fitted straight line.
6. The method of claim 1, further comprising: and when the installation position of the laser radar is incorrect, outputting alarm information of the error installation position of the laser radar.
7. The method of claim 1 or 5, further comprising:
carrying out point cloud filtering on the second point cloud to remove noise points in the second point cloud;
and performing plane fitting based on the second point cloud.
8. The method of claim 7, wherein the step of determining whether the mounting position of the stereo vision sensor is correct comprises: and when the straight line fitted according to the first point cloud is overlapped with the plane fitted according to the second point cloud, determining whether the installation position of the stereoscopic vision sensor is correct.
9. The method of claim 8, wherein the step of determining whether the mounting position of the stereo vision sensor is correct further comprises:
when the straight line fitted according to the first point cloud is overlapped with the plane fitted according to the second point cloud, determining the relative pose relationship between the laser radar and the stereoscopic vision sensor;
when the difference of the relative pose relation relative to the ideal pose relation is smaller than a threshold value, determining that the installation position of the stereoscopic vision sensor is correct;
and when the difference of the relative pose relation relative to the ideal pose relation is larger than a threshold value, determining that the installation position of the stereoscopic vision sensor is abnormal, and correcting the position parameters of the stereoscopic vision sensor according to the relative pose relation.
10. A robot, comprising:
a main body having a traveling mechanism;
a sensor system including a laser radar and a stereo vision sensor for detecting an environment around the robot;
a controller coupled to the travelling mechanism and the sensor system, configured to control the travelling mechanism and the sensor system, and configured to perform the method of any of claims 1-9.
11. A computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-9.
CN202211316906.5A 2022-10-26 2022-10-26 Robot sensor system self-checking method and robot Pending CN115494484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211316906.5A CN115494484A (en) 2022-10-26 2022-10-26 Robot sensor system self-checking method and robot


Publications (1)

Publication Number Publication Date
CN115494484A true CN115494484A (en) 2022-12-20

Family

ID=85114856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211316906.5A Pending CN115494484A (en) 2022-10-26 2022-10-26 Robot sensor system self-checking method and robot

Country Status (1)

Country Link
CN (1) CN115494484A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium
CN112017205A (en) * 2020-07-27 2020-12-01 清华大学 Automatic calibration method and system for space positions of laser radar and camera sensor
CN113436270A (en) * 2021-06-18 2021-09-24 上海商汤临港智能科技有限公司 Sensor calibration method and device, electronic equipment and storage medium
CN114252870A (en) * 2021-11-30 2022-03-29 深圳元戎启行科技有限公司 Laser radar self-checking method, laser radar self-checking equipment and computer readable storage medium
CN114549590A (en) * 2022-03-01 2022-05-27 浙江大华技术股份有限公司 Target object detection method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination