CN113376655B - Obstacle avoidance module, mobile robot and obstacle avoidance method - Google Patents


Info

Publication number
CN113376655B
Authority
CN
China
Prior art keywords
laser
image sensor
image
obstacle
horizontal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110495344.4A
Other languages
Chinese (zh)
Other versions
CN113376655A (en)
Inventor
李乐
周琨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huanchuang Technology Co ltd
Original Assignee
Shenzhen Huanchuang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huanchuang Technology Co ltd filed Critical Shenzhen Huanchuang Technology Co ltd
Priority to CN202110495344.4A priority Critical patent/CN113376655B/en
Publication of CN113376655A publication Critical patent/CN113376655A/en
Application granted granted Critical
Publication of CN113376655B publication Critical patent/CN113376655B/en


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Optical Distance (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of obstacle avoidance, and in particular to an obstacle avoidance module, a mobile robot and an obstacle avoidance method. The method comprises: when a laser projects a horizontal laser line and a vertical laser line onto an obstacle, acquiring a first image captured by a first image sensor and a second image captured by a second image sensor; obtaining, from the first image and the second image, first depth information corresponding to the horizontal laser line and second depth information corresponding to the vertical laser line; and combining the first depth information and the second depth information into a point cloud, which is used to identify the obstacle. By detecting obstacles in two dimensions, vertical and horizontal, the invention reduces the blind area of the mobile robot and detects the obstacles around it more accurately and comprehensively.

Description

Obstacle avoidance module, mobile robot and obstacle avoidance method
Technical Field
The invention relates to the technical field of obstacle avoidance, in particular to an obstacle avoidance module, a mobile robot and an obstacle avoidance method.
Background
Obstacle avoidance is an essential technology in the field of robotics. As the control precision and intelligence requirements of the robot industry rise, obstacle avoidance sensors are becoming smaller while their sensing performance keeps improving.
Existing mobile robots scan the surrounding environment with a laser radar and avoid obstacles according to the resulting point cloud. However, such a robot can generally only detect obstacles within a single horizontal cross-section; obstacles lower than the laser radar fall into a large blind area, so the mobile robot cannot reliably detect all effective obstacles.
Disclosure of Invention
The embodiments of the invention provide an obstacle avoidance module, a mobile robot and an obstacle avoidance method, which address the technical problems of blind areas and of inaccurate, incomplete detection results when detecting obstacles in the related art.
In order to solve the technical problems, the embodiment of the invention provides the following technical scheme:
In a first aspect, an embodiment of the present invention provides an obstacle avoidance module, including:
a first image sensor, a second image sensor and a laser, wherein the first image sensor and the second image sensor are spaced a preset distance apart in the vertical direction, and the first image sensor and the laser are arranged on the same horizontal plane;
wherein the laser is used for emitting horizontal laser lines and vertical laser lines.
Optionally, the line connecting the optical centers of the first image sensor and the laser is parallel to the X-axis of the first image sensor coordinate system.
Optionally, the line connecting the optical centers of the first image sensor and the second image sensor is parallel to the vertical direction.
Optionally, the line connecting the optical centers of the first image sensor and the second image sensor intersects the vertical direction.
In a second aspect, an embodiment of the present invention provides a mobile robot including:
A housing;
the obstacle avoidance module described above, arranged on the housing;
a driving module arranged in the housing; and
a controller, connected to the obstacle avoidance module and the driving module respectively, for sending control instructions to control the driving module to drive the housing to move, and for detecting obstacles through the obstacle avoidance module.
In a third aspect, an embodiment of the present invention provides an obstacle avoidance method, which is applied to a mobile robot as described above, and includes:
When the laser projects horizontal laser lines and vertical laser lines to an obstacle, acquiring a first image acquired by the first image sensor and a second image acquired by the second image sensor;
Acquiring first depth information corresponding to the horizontal laser line and second depth information corresponding to the vertical laser line according to the first image and the second image;
And combining the first depth information and the second depth information to obtain a point cloud, wherein the point cloud is used for identifying the obstacle.
Optionally, the obtaining, according to the first image and the second image, first depth information corresponding to the horizontal laser line includes:
acquiring a first horizontal laser stripe in the first image;
Determining a laser stripe corresponding to the first horizontal laser stripe in the second image as a second horizontal laser stripe according to the first horizontal laser stripe; wherein the second image comprises at least one horizontal laser stripe;
acquiring the light spot height of the second horizontal laser stripe in the second image sensor coordinate system;
and measuring first depth information corresponding to the horizontal laser line according to the light spot height, the relative height between the second image sensor and the laser, and the focal length of the second image sensor.
Optionally, after the horizontal laser line is reflected for the first time by an obstacle at a first distance from the laser, the reflected laser line is imaged within a fixed line pixel range of the imaging surface of the first image sensor; likewise, after the horizontal laser line is reflected for the first time by an obstacle at a second distance from the laser, the reflected laser line is imaged within the same fixed line pixel range, where the first distance is different from the second distance.
Optionally, the horizontal laser stripes and vertical laser stripes corresponding to obstacles at different distances from the laser are imaged at different heights on the imaging surface of the second image sensor.
Optionally, the obtaining, according to the first image and the second image, second depth information corresponding to the vertical laser line includes:
acquiring, from the first image, a first change value corresponding to the horizontal column shift of the vertical laser line;
acquiring, from the second image, a second change value corresponding to the horizontal column shift of the vertical laser line;
and obtaining second depth information corresponding to the vertical laser line according to the first change value and/or the second change value.
Compared with the prior art, in the obstacle avoidance module, mobile robot and obstacle avoidance method provided by the embodiments of the invention, the obstacle avoidance module comprises a first image sensor, a second image sensor and a laser, wherein the first image sensor and the second image sensor are spaced a preset distance apart in the vertical direction, the first image sensor and the laser are arranged on the same horizontal plane, and the laser emits a horizontal laser line and a vertical laser line. Because the horizontal laser line falls within the fixed line pixel range of the first image sensor's imaging surface after its first reflection from an obstacle, the second horizontal laser stripe imaged on the second image sensor can be located efficiently by using the first horizontal laser stripe imaged on the first image sensor as a reference, and the depth corresponding to the horizontal laser line can then be measured accurately from the second horizontal laser stripe. In addition, the vertical laser stripes captured by the first and second image sensors shift as the obstacle distance changes, so the depth corresponding to the vertical laser line can also be measured accurately. Finally, from the depth information corresponding to the horizontal laser line and the depth information corresponding to the vertical laser line, a cross-shaped point cloud can be obtained, which allows obstacles to be identified in finer detail. With this obstacle avoidance module, obstacle avoidance method and mobile robot, the blind area of the mobile robot can be reduced and the obstacles around it can be detected more accurately and comprehensively.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; the figures are not drawn to scale unless otherwise indicated.
Fig. 1a is a schematic structural diagram of a mobile robot according to an embodiment of the present invention;
FIG. 1b is a schematic block diagram of a mobile robot according to an embodiment of the present invention;
FIG. 2a is a schematic diagram illustrating a first position between a first image sensor, a second image sensor and a laser according to an embodiment of the present invention;
FIG. 2b is a schematic diagram illustrating a second position between the first image sensor, the second image sensor and the laser according to the embodiment of the present invention;
FIG. 2c is a schematic diagram showing that the optical axis center line of the first image sensor and the laser is parallel to the X-axis of the first image sensor coordinate system according to the embodiment of the present invention;
fig. 2d is a schematic diagram of a positional relationship between an obstacle avoidance module and the ground according to an embodiment of the present invention;
FIG. 3a is a schematic view of imaging a horizontal laser line emitted from a laser to an obstacle at each image sensor according to an embodiment of the present invention;
FIG. 3b is a schematic diagram illustrating imaging of a horizontal laser line in a first image sensor when the horizontal laser line strikes obstacles at different distances according to an embodiment of the present invention;
FIG. 3c is a schematic diagram illustrating imaging of a horizontal laser line in a second image sensor when the horizontal laser line strikes obstacles at different distances according to an embodiment of the present invention;
FIG. 4 is an imaging schematic diagram of a laser line emitted by a laser device to the ground at each image sensor according to an embodiment of the present invention, wherein the laser line reflected by the ground is reflected by an obstacle again;
FIG. 5a is a rear view of an image of a vertical laser line emitted by a laser toward an obstacle at each image sensor provided by an embodiment of the present invention;
FIG. 5b is a top view of an image of a vertical laser line emitted by a laser toward an obstacle at each image sensor provided by an embodiment of the present invention;
FIG. 5c is a schematic diagram illustrating imaging of a vertical laser line in a first image sensor when the vertical laser line strikes obstacles at different distances according to an embodiment of the present invention;
FIG. 5d is a schematic diagram illustrating imaging of a vertical laser line in a second image sensor when the vertical laser line strikes obstacles at different distances according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of an obstacle avoidance method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of each image sensor provided with a sub-window area according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a laser device according to an embodiment of the present invention emitting a horizontal laser line to an obstacle, and imaging the horizontal laser line on an imaging surface of a second image sensor after reflection by the obstacle;
Fig. 9 is a schematic structural diagram of an obstacle avoidance device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that, where no conflict arises, the features of the embodiments of the present invention may be combined with one another, and all such combinations fall within the protection scope of the present invention. In addition, although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a different module division or in a different order than shown.
The obstacle avoidance module provided by the embodiment of the invention can be arranged in any suitable electronic equipment, such as a robot, industrial equipment, household equipment, an unmanned automobile and the like. In this embodiment, the robot may be configured with any suitable business function to achieve the completion of the corresponding business operations, such as a cleaning robot, a sweeper, and the like.
Referring to fig. 1a and 1b, the mobile robot 100 includes a housing 11, a driving module 12, a cleaning module 13, a wireless communication unit 14, an audio unit 15, an obstacle avoidance module 16, a light supplementing module 17 and a control unit 18.
The housing 11 may be configured in any suitable shape, such as a frustoconical shape, an irregular shape, and the like. The interior of the housing 11 may be configured with structures corresponding to the functions of the mobile robot 100; for example, when the mobile robot 100 is used to clean floors, the interior of the housing 11 may be provided with a channel for drawing out the sewage or debris carried by the cleaning assembly 13.
The driving module 12 is disposed in the housing 11 and is used to drive the mobile robot 100 along a planned path so that a cleaning operation can be performed. During cleaning, the control unit 18 sends a control command to the driving module 12, and the driving module 12 drives the cleaning assembly 13 to complete the cleaning operation according to the control command.
In some embodiments, the driving module 12 includes a motor assembly and driving wheels; the motor assembly receives the control command and drives the driving wheels to rotate accordingly, thereby moving the mobile robot 100 forward or backward.
A cleaning assembly 13 is provided on the housing 11 for cleaning the floor. When the mobile robot 100 travels under the drive of the driving module 12, it carries the cleaning assembly 13 along to clean the floor. The cleaning assembly 13 may be a water-washing assembly, a scrubbing assembly, a sweeping assembly, or the like.
In some embodiments, the cleaning assembly 13 includes a motor assembly and a drum. The surface of the drum is provided with a wiper, and the two ends of the drum are mounted on the housing 11. The motor assembly is connected to the drum and, under the control of the control unit 18, drives the drum to rotate; the wiper rotates with the drum, so that the wiper can clean the floor.
The wireless communication unit 14 is used for wireless communication with a user terminal and is electrically connected to the control unit 18. When the user needs to operate the robot remotely, the user sends a control instruction to the mobile robot 100 through the user terminal; the wireless communication unit 14 receives the control instruction and forwards it to the control unit 18, and the control unit 18 controls the mobile robot 100 according to the instruction.
The wireless communication unit 14 includes a combination of one or more of a broadcast receiving module, a mobile communication module, a wireless internet module, a short-range communication module, and a positioning information module. Wherein the broadcast receiving module receives the broadcast signal and/or the broadcast-related information from the external broadcast management server via a broadcast channel. The broadcast receiving module may receive the digital broadcast signal using a digital broadcast system such as terrestrial digital multimedia broadcasting (DMB-T), satellite digital multimedia broadcasting (DMB-S), media forward link only (MediaFLO), digital video broadcasting-handheld (DVB-H), or terrestrial integrated services digital broadcasting (ISDB-T).
The mobile communication module transmits or receives a wireless signal to or from at least one of a base station, an external terminal, and a server on a mobile communication network. Here, the wireless signal may include a voice call signal, a video call signal, or various forms of data according to the reception and transmission of the character/multimedia message.
The wireless internet module refers to a module for wireless internet connection, and may be built into or external to the terminal. Wireless internet technologies such as Wireless LAN (WLAN/Wi-Fi), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX) or High Speed Downlink Packet Access (HSDPA) may be used.
The short-range communication module refers to a module for performing short-range communication. Short-range communication technologies such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB) or ZigBee may be used.
The positioning information module is a module for obtaining the position of the mobile robot 100, such as a Global Positioning System (GPS) module.
The audio unit 15 is configured to output audio signals; the control unit 18 controls the audio unit 15 to output the corresponding audio signals according to preset logic, for example voice prompts about the cleaning status or a full sewage tank.
In some embodiments, the audio unit 15 may be an electroacoustic transducer such as a loudspeaker, a speaker or a microphone. The number of loudspeakers or speakers may be one or more, and there may be multiple microphones forming a microphone array so as to collect sound effectively. The microphone may be electrodynamic (moving coil, ribbon), capacitive (DC polarized), piezoelectric (crystal, ceramic), electromagnetic, carbon-particle or semiconductor, or any combination thereof. In some embodiments, the microphone may be a microelectromechanical system (MEMS) microphone.
The obstacle avoidance module 16 is used to measure the distance between the robot and the obstacle so that the robot can avoid the obstacle or construct a map.
In the present embodiment, the obstacle avoidance module 16 includes a first image sensor 161, a second image sensor 162, and a laser 163.
In the present embodiment, the first image sensor 161 is spaced a predetermined distance apart from the second image sensor 162 in the vertical direction. Referring to figs. 2a and 2b together, in some embodiments the first image sensor 161 and the second image sensor 162 are aligned in the vertical direction; that is, the line connecting their optical centers is parallel to the vertical direction. This structural design helps extract the data point cloud efficiently at a later stage.
In some embodiments, the first image sensor 161 and the second image sensor 162 may instead be misaligned in the vertical direction; that is, the line connecting their optical centers intersects the vertical direction.
In the present embodiment, the first image sensor 161 and the laser 163 are disposed on the same horizontal plane; that is, referring to fig. 2c, the line connecting the optical centers O1 and O2 of the first image sensor 161 and the laser 163 is parallel to the X-axis of the coordinate system of the first image sensor 161.
In some embodiments, referring to fig. 2d, the obstacle avoidance module 16 is disposed on the housing 11 at a certain height above the ground; for example, the first image sensor 161 of the obstacle avoidance module 16 is about 6.5 cm above the ground.
In some embodiments, the optical axes of the first image sensor 161, the second image sensor 162 and the laser 163 in the obstacle avoidance module 16 all intersect the ground plane at an angle; that is, the three components face the ground at a downward tilt. For example, when the laser 163 is set to illuminate an obstacle 15 cm to 20 cm in front of the robot, the angle between the optical axis of the laser 163 and the ground is about 14 to 17 degrees. An obstacle avoidance module with this structure can detect small obstacles on the ground, so that a point cloud of the environment can be constructed effectively and obstacle avoidance can be implemented.
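To make the mounting geometry above concrete, the minimal Python sketch below recomputes the tilt angle from the quoted mounting height. It assumes the 15 cm to 20 cm figure is measured from the robot's front edge and assumes a laser setback value; neither assumption comes from the patent.

```python
import math

SENSOR_HEIGHT_M = 0.065   # first image sensor ~6.5 cm above the ground (from the description)
LASER_SETBACK_M = 0.05    # assumed horizontal distance from the laser to the robot's front edge

def tilt_angle_deg(ground_hit_ahead_m: float) -> float:
    """Downward tilt needed for the laser to strike the ground a given
    distance in front of the robot body."""
    horizontal_reach = ground_hit_ahead_m + LASER_SETBACK_M
    return math.degrees(math.atan2(SENSOR_HEIGHT_M, horizontal_reach))

for d in (0.15, 0.20):
    print(f"ground hit {d * 100:.0f} cm ahead -> tilt ~{tilt_angle_deg(d):.1f} deg")
# prints ~18.0 deg and ~14.6 deg, close to the 14-17 degree range quoted above
```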
In the present embodiment, the laser 163 is used to emit a horizontal laser line and a vertical laser line. Because the first image sensor 161 and the laser 163 are disposed on the same horizontal plane, the horizontal laser line emitted by the laser 163, after its first reflection from an obstacle at any distance, is imaged within a fixed line pixel range of the imaging surface of the first image sensor 161. That is, after the laser line is reflected for the first time by an obstacle at a first distance from the laser, the reflected line is imaged within that fixed line pixel range; and after the horizontal laser line is reflected for the first time by an obstacle at a second distance from the laser, the reflected line is likewise imaged within the same fixed line pixel range, the first distance being different from the second distance. Moreover, after the horizontal laser line emitted by the laser has been reflected at least twice by obstacles at different distances, the laser lines that have undergone two or more reflections are all imaged outside the fixed line pixel range.
For example, referring to fig. 3a, the laser 163 emits a horizontal laser line 32 toward the obstacle 31. After reflection by the obstacle 31, the horizontal laser line 32 returns as two reflected laser lines, a first reflected laser line 33 and a second reflected laser line 34. The first reflected laser line 33 is collected by the second image sensor 162 and imaged on the first imaging plane 35; the second reflected laser line 34 is collected by the first image sensor 161 and imaged on the second imaging plane 36. Here x1o1y1 is the coordinate system of the second image sensor 162 and x2o2y2 is the coordinate system of the first image sensor 161.
It will be appreciated that, because the first image sensor 161 and the laser 163 are disposed on the same horizontal plane and the first image sensor 161 and the second image sensor 162 are vertically spaced a predetermined distance apart, in fig. 3a the once-reflected laser line is imaged within the fixed line pixel range of the imaging surface of the first image sensor 161 no matter what distance the obstacle struck by the horizontal laser line 32 is at. Therefore, during later ranging, the laser stripe within the fixed line pixel range can be searched for efficiently and used as a reference, so that the matching laser stripe can be found efficiently on the other imaging surface and the distance between the laser 163 and the obstacle can be measured accurately.
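A minimal sketch of such a fixed-row search, assuming the band of rows has been determined by calibration; the row bounds below are invented for illustration:

```python
import numpy as np

FIXED_ROW_LO, FIXED_ROW_HI = 238, 242   # assumed calibrated "fixed line pixel range"

def extract_reference_stripe(first_image: np.ndarray) -> np.ndarray:
    """Average the rows of the fixed band into one intensity profile per
    column; this profile later serves as the matching reference."""
    band = first_image[FIXED_ROW_LO:FIXED_ROW_HI + 1, :].astype(np.float32)
    return band.mean(axis=0)
```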
In fig. 3a, when the horizontal laser line 32 is projected onto obstacles at different distances, the first reflected laser line is imaged at different heights on the first imaging surface 35 of the second image sensor 162; that is, the horizontal laser stripes corresponding to obstacles at different distances from the laser 163 are imaged at different heights on the imaging surface of the second image sensor. This can be understood with reference to figs. 3b and 3c.
In fig. 3b, since the first image sensor 161 and the laser 163 are disposed on the same horizontal plane, when the horizontal laser line 32 strikes obstacles at different distances, the second reflected laser line 34 is imaged on the same image row of the imaging surface of the first image sensor 161; that is, the image of the horizontal laser line emitted by the laser 163 on the first image sensor 161 does not change in height with the obstacle distance. For example, the laser stripe from the far obstacle 37 and the laser stripe 39 from the near obstacle 38 are both imaged on the same image row of the imaging surface of the first image sensor 161.
In fig. 3c, since the second image sensor 162 is spaced a predetermined distance apart from the laser 163 in the vertical direction, when the horizontal laser line 32 strikes obstacles at different distances, the laser stripe from the far obstacle 37 and the laser stripe 40 from the near obstacle 38 are imaged on different image rows of the imaging surface of the second image sensor 162; that is, the image of the horizontal laser line emitted by the laser 163 on the second image sensor 162 changes in height as the obstacle distance changes.
Referring to fig. 4, the laser 163 emits a horizontal laser line 42 toward the ground 41. After reflection by the ground 41, the horizontal laser line 42 splits into a first reflected laser line 43, a second reflected laser line 44 and a third reflected laser line 45. The first reflected laser line 43 is collected by the second image sensor 162 and imaged on the first imaging plane 46, yielding a first effective stripe image 411. The second reflected laser line 44 is collected by the first image sensor 161 and imaged on the second imaging plane 47, yielding a second effective stripe image 412. The third reflected laser line 45 strikes the obstacle 48 as an incident ray and is reflected again, splitting into a fourth reflected laser line 49 and a fifth reflected laser line 410. Here x3o3y3 is the coordinate system of the second image sensor 162 and x4o4y4 is the coordinate system of the first image sensor 161.
The fourth reflected laser line 49 is collected by the second image sensor 162 and imaged on the first imaging plane 46, yielding a first invalid stripe image 413. The fifth reflected laser line 410 is collected by the first image sensor 161 and imaged on the second imaging plane 47, yielding a second invalid stripe image 414.
As fig. 4 makes clear, the first effective stripe image 411 and the second effective stripe image 412 come from the first reflection off the ground (treated as an obstacle), so both are correct inputs for point cloud reconstruction. The first invalid stripe image 413 and the second invalid stripe image 414, however, come from a second reflection; if the point cloud were reconstructed from them, the resulting measured distance would not be accurate.
As can also be seen from fig. 4, because the first image sensor 161 and the laser 163 are disposed on the same horizontal plane, the second effective stripe image 412 is imaged within the fixed line pixel range of the first image sensor's imaging surface, while the second invalid stripe image 414 is imaged outside that range. Therefore, by searching only within the fixed line pixel range of the first image sensor's imaging surface, the second effective stripe image 412 will be found and the second invalid stripe image 414 will not.
Once the second effective stripe image 412 is determined, and the first effective stripe image 411 and the first invalid stripe image 413 have been extracted, the robot can run an image similarity match between the second effective stripe image 412 and each of the first effective stripe image 411 and the first invalid stripe image 413, and thus reliably find the first effective stripe image 411, which has the highest matching degree with the second effective stripe image 412. During later ranging, the distance between the laser 163 and the ground point can then be measured from the first effective stripe image 411 combined with a similar-triangle model.
Referring next to figs. 5a and 5b, the laser 163 emits a vertical laser line 52 toward the obstacle 51. After reflection by the obstacle 51, the vertical laser line 52 returns as a reflected laser line 53, which may be collected by the first image sensor 161 and/or the second image sensor 162; its image exhibits a column change in the horizontal direction. The obstacle 51 may be a ground obstacle or a top obstacle, i.e. an obstacle above the top of the mobile robot 100.
Because the first image sensor 161 and the second image sensor 162 are spaced a preset distance apart in the vertical direction, their fields of view differ, so the vertical laser stripes acquired on the two imaging surfaces appear at different positions, and the images of the emitted vertical laser line on the first image sensor 161 and the second image sensor 162 may or may not share a common area. During later ranging, the distance between the laser 163 and the obstacle can be measured from the value of the column change that the vertical laser line produces in the horizontal direction.
In figs. 5a and 5b, when the vertical laser line 52 is projected onto obstacles at different distances, the once-reflected laser line is imaged at different heights on the first image sensor 161 and the second image sensor 162; that is, the vertical laser stripes corresponding to obstacles at different distances from the laser 163 fall at different positions on the imaging surfaces of the two sensors. This can be understood with reference to figs. 5c and 5d.
In fig. 5c, when the same vertical laser line strikes obstacles at different distances, the vertical stripe on the imaging surface of the first image sensor 161 is divided into segments that fall on different columns. A top obstacle can also be detected by the vertical laser line: for example, the uppermost stripe 59 belongs to a top obstacle, with the vertical laser stripe in the middle of that obstacle, while the two lower vertical laser stripes belong to the farther obstacle 57 and the nearer obstacle 58, respectively.
In fig. 5d, since the second image sensor 162 is mounted higher than the first image sensor 161, from this viewing angle the rear laser line is occluded by the front obstacle, so the laser stripe appears broken and two separate laser stripes are seen. For example, the laser light reflected by the farther obstacle 57 is blocked by the nearer obstacle 58, so the laser stripe reflected by the obstacle 57 appears broken.
In the present embodiment, the imaging of the vertical laser line emitted by the laser 163 on the first image sensor 161 and the second image sensor 162 changes in height as the obstacle distance changes.
In some embodiments, the first image sensor 161 and the second image sensor 162 may be charge-coupled device (CCD) sensors or complementary metal-oxide-semiconductor (CMOS) sensors; a CMOS sensor may be a back-illuminated CMOS sensor or a stacked CMOS sensor.
In some embodiments, the first image sensor 161 and the second image sensor 162 further integrate an ISP (Image Signal Processor) for processing the output data of the optical sensor, for example automatic exposure control (AEC), automatic gain control (AGC), automatic white balance (AWB), color correction, and the like.
In some embodiments, the light supplementing module 17 is used to supplement light for the image sensors when capturing images. For example, when the ambient light of the room is insufficient, the control unit 18 activates the light supplementing module 17 to emit light. The light supplementing module may be a light-emitting source such as an LED lamp.
The laser 163 may be any type of laser source capable of projecting laser light, such as a line laser, a solid-state laser, a gas laser, a liquid laser, a semiconductor laser or a free-electron laser.
The control unit 18 is respectively connected with the driving module 12, the cleaning assembly 13, the wireless communication unit 14, the audio unit 15 and the obstacle avoidance module 16.
The control unit 18 may send a driving command to the driving module 12 to control it to drive the housing 11 to move; send a cleaning instruction to the cleaning assembly 13 to control it to perform a cleaning operation; communicate through the wireless communication unit 14; send a voice command to the audio unit 15 to control it to play sound; acquire the data collected by the obstacle avoidance module 16 and construct a map or plan a path according to a map construction algorithm; or control the light supplementing module 17 to supplement light when images are captured.
The control unit 18 serves as the control core of the mobile robot 100 and coordinates the operation of the individual units. The control unit 18 may be a general-purpose processor (e.g. a central processing unit (CPU)), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device such as a field-programmable gate array (FPGA) or CPLD, a single-chip microcomputer, an ARM (Acorn RISC Machine) processor, discrete gate or transistor logic, discrete hardware components, or any combination of these. The control unit 18 may also be any conventional processor, controller, microcontroller or state machine, or may be implemented as a combination of computing devices, e.g. a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors together with a DSP core, or any other such configuration.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides an obstacle avoidance method applied to the mobile robot 100 described above. Referring to fig. 6, the obstacle avoidance method includes:
S61, when the laser projects horizontal laser lines and vertical laser lines to an obstacle, acquiring a first image acquired by the first image sensor and a second image acquired by the second image sensor.
The first image and the second image each include at least one horizontal laser stripe and at least one vertical laser stripe. The first image and the second image may be obtained through sub-windows of the image sensors. Specifically, referring to fig. 7, each image sensor includes a sub-window area 70a. The control unit 18 sends a synchronization control signal to the first image sensor 161, the second image sensor 162 and the laser 163; the laser 163 turns on according to the synchronization control signal while, at the same time, the first image sensor 161 and the second image sensor 162 expose according to it. The first image sensor 161 then captures an image and uses its sub-window to cut a target image area out of that image as the first image; the second image sensor 162 likewise uses its corresponding sub-window to cut a target image area out of its own captured image as the second image.
Cutting the target image area out of the corresponding captured image through the sub-window effectively reduces the amount of computation per frame, raises the frame rate, and increases the detection probability during later ranging.
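A minimal sketch of the sub-window cropping; the window coordinates are invented for illustration, since the patent does not specify the size or position of the sub-window area 70a:

```python
import numpy as np

SUBWINDOW = (200, 280, 0, 640)   # assumed (row_lo, row_hi, col_lo, col_hi)

def crop_subwindow(frame: np.ndarray) -> np.ndarray:
    """Cut the target image area out of a full sensor frame so that the
    later stripe search touches far fewer pixels per frame."""
    r0, r1, c0, c1 = SUBWINDOW
    return frame[r0:r1, c0:c1]
```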
In some embodiments, the first image sensor 161 and the second image sensor 162 may instead send their captured images to the control unit 18, which completes the image processing and obtains the first image and the second image.
S62, according to the first image and the second image, obtaining first depth information corresponding to the horizontal laser line and obtaining second depth information corresponding to the vertical laser line.
In this embodiment, the obtaining, according to the first image and the second image, first depth information corresponding to the horizontal laser line includes:
acquiring a first horizontal laser stripe in the first image;
Determining a laser stripe corresponding to the first horizontal laser stripe in the second image as a second horizontal laser stripe according to the first horizontal laser stripe, wherein the second image comprises at least one horizontal laser stripe;
acquiring the light spot height of the second horizontal laser stripe in the second image sensor coordinate system;
And measuring and obtaining first depth information corresponding to the horizontal laser line according to the light spot height, the relative height of the second image sensor and the laser and the focal length of the second image sensor.
A sliding window may be used to slide in the horizontal direction over the fixed row of pixels in the first image to extract the first horizontal laser stripe; the second horizontal laser stripe corresponding to it is then searched for in the second image. For example, according to an image matching algorithm, a sliding window is moved along the Y-axis of the coordinate system of the second image sensor 162 until a second horizontal laser stripe matching the first horizontal laser stripe is found, which also yields the coordinate of the second horizontal laser stripe on that Y-axis. Suitable image matching algorithms include, but are not limited to, the sum of absolute differences of gray values (SAD) and the normalized correlation coefficient (NCC).
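A minimal sketch of the NCC variant of this matching step, assuming `reference` is the stripe profile taken from the fixed line pixel range of the first image (as in the earlier fixed-row sketch); the function names are illustrative:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation coefficient of two equally sized profiles."""
    a = a.astype(np.float32) - a.mean()
    b = b.astype(np.float32) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def find_matching_row(reference: np.ndarray, second_image: np.ndarray) -> int:
    """Slide down the second image row by row and return the row whose
    profile best matches the reference stripe (its Y coordinate)."""
    scores = [ncc(reference, second_image[r, :]) for r in range(second_image.shape[0])]
    return int(np.argmax(scores))
```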
The spot height of the second horizontal laser stripe in the coordinate system of the second image sensor 162 is the coordinate of the second horizontal laser stripe on the Y axis of the coordinate system of the second image sensor 162.
Wherein the spot height, the relative height and the focal length may be processed using a similar triangle model to obtain the first depth information.
For example, referring to fig. 8, the laser 163 emits a horizontal laser line 82 toward the obstacle 81. After reflection by the obstacle 81, the horizontal laser line 82 splits into two paths: one reflected laser line (not shown) is incident on the first image sensor 161 and imaged on its imaging surface (not shown); the other reflected laser line 83 is incident on the second image sensor 162 and imaged on the imaging surface 84 of the second image sensor 162. Here x7o7y7 is the coordinate system of the second image sensor 162.
In fig. 8, the line segment AB is the distance d between the laser 163 and the obstacle 81, the line segment CD is the spot height y', x8o8y8 (not shown) is the coordinate system of the first image sensor 161, OA is the relative height h, and OD is the focal length f. Since triangle ABO and triangle DOC are similar, the following relationship holds:
d / h = f / y';
since h, f and y' are known, d = h × f / y'.
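A minimal sketch of this range equation; the unit conventions (h in metres, f and y' in pixels) and the example values are assumptions for illustration only:

```python
def depth_from_spot_height(h_m: float, f_px: float, y_px: float) -> float:
    """Distance d = h * f / y' from the laser to the obstacle for one
    stripe pixel, per the similar-triangle relation above."""
    if y_px == 0:
        raise ValueError("zero spot height corresponds to a point at infinity")
    return h_m * f_px / y_px

# Example: 3 cm relative height, 600 px focal length, 45 px spot height -> 0.40 m
print(depth_from_spot_height(0.03, 600.0, 45.0))
```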
It can be understood that, when the lens of an image sensor exhibits distortion, the distortion can first be corrected to obtain a corrected image before the distance between the laser and the obstacle is calculated with the similar-triangle model; the distance is then calculated from the corrected image. The more accurate the distance d, the more accurately the point cloud can be reconstructed.
In this embodiment, obtaining, according to the first image and the second image, second depth information corresponding to the vertical laser line includes: acquiring, from the first image, a first change value corresponding to the horizontal column shift of the vertical laser line; acquiring, from the second image, a second change value corresponding to the horizontal column shift of the vertical laser line; and obtaining second depth information corresponding to the vertical laser line according to the first change value and/or the second change value.
The laser stripes of the vertical laser line in the first image sensor 161 and the second image sensor 162 shift in the horizontal direction as the obstacle distance changes, and the images of the emitted vertical laser line on the two sensors may or may not share a common area. When there is no common area, the second depth information can be determined from the value of the column change produced in the horizontal direction by the vertical laser line in the image acquired by either the first image sensor 161 or the second image sensor 162. When there is a common area, the column-change values, i.e. the first change value and the second change value, which may be identical or close, can be acquired from the first image sensor 161 and the second image sensor 162 respectively; the second depth information may then take either of the two change values, or their average may be computed and used as the second depth information.
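A minimal sketch of this column-shift ranging, assuming a horizontal baseline between the laser and the sensor and a calibrated reference column for an infinitely distant obstacle; all constants are invented for illustration:

```python
from typing import Optional

B_M, F_PX, U_INF_PX = 0.02, 600.0, 320.0   # assumed baseline (m), focal length and reference column (px)

def column_shift(u_px: float) -> float:
    """Column change of the vertical stripe relative to the column it
    would occupy for an infinitely distant obstacle."""
    return abs(u_px - U_INF_PX)

def second_depth(shift_first: float, shift_second: Optional[float] = None) -> float:
    """Depth from the column shift; when both sensors see the stripe, the
    two change values are averaged, as described above."""
    shift = shift_first if shift_second is None else 0.5 * (shift_first + shift_second)
    if shift == 0.0:
        return float("inf")   # no shift -> point at infinity
    return B_M * F_PX / shift
```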
And S63, combining the first depth information and the second depth information to obtain a point cloud, wherein the point cloud is used for identifying the obstacle.
Point cloud information about the obstacles in front of the robot and near the ground detected by the laser 163 can be obtained from the first depth information, and point cloud information about the obstacles above, up to the top of the robot, can be obtained from the second depth information, so the obstacles around the mobile robot can be captured comprehensively and the robot's blind area can be reduced.
The obstacle avoidance method provided by the embodiment of the invention combines the depths restored from the horizontal laser line and the vertical laser line to form a cross-shaped point cloud. While the mobile robot moves forward, the three-dimensional information of ground obstacles in low spaces can be restored; when the mobile robot rotates, the three-dimensional information of obstacles in the surrounding space can be restored. The method therefore reduces the blind area and improves the completeness and accuracy with which the mobile robot detects obstacles.
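A minimal sketch of step S63 under an assumed pinhole camera model; the intrinsics and the per-pixel depth profiles are placeholders, not values from the patent:

```python
import numpy as np

F_PX, CX, CY = 600.0, 320.0, 240.0   # assumed focal length and principal point (px)

def backproject(us: np.ndarray, vs: np.ndarray, depths: np.ndarray) -> np.ndarray:
    """Pixel coordinates plus depth -> N x 3 points in the camera frame."""
    xs = (us - CX) * depths / F_PX
    ys = (vs - CY) * depths / F_PX
    return np.stack([xs, ys, depths], axis=1)

def build_cross_cloud(horiz_cols, horiz_row, horiz_depths,
                      vert_rows, vert_col, vert_depths) -> np.ndarray:
    """Merge the horizontal-line and vertical-line depth profiles into one
    cross-shaped point cloud for obstacle identification."""
    horizontal = backproject(np.asarray(horiz_cols, dtype=np.float32),
                             np.full(len(horiz_cols), horiz_row, dtype=np.float32),
                             np.asarray(horiz_depths, dtype=np.float32))
    vertical = backproject(np.full(len(vert_rows), vert_col, dtype=np.float32),
                           np.asarray(vert_rows, dtype=np.float32),
                           np.asarray(vert_depths, dtype=np.float32))
    return np.vstack([horizontal, vertical])
```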
In another aspect of the embodiments of the present invention, an obstacle avoidance device is provided and applied to the mobile robot described above. The obstacle avoidance device may be implemented as a software functional unit comprising a number of instructions stored in a memory; a processor can access the memory and invoke the instructions to perform the obstacle avoidance method described above.
Referring to fig. 9, the obstacle avoidance apparatus 900 includes an image acquisition module 91, a depth information acquisition module 92, and an obstacle recognition module 93.
The image acquisition module 91 is configured to acquire a first image acquired by the first image sensor and a second image acquired by the second image sensor when the laser projects a horizontal laser line and a vertical laser line to an obstacle; the depth information obtaining module 92 is configured to obtain first depth information corresponding to the horizontal laser line and obtain second depth information corresponding to the vertical laser line according to the first image and the second image; the obstacle identifying module 93 is configured to combine the first depth information and the second depth information to obtain a point cloud, where the point cloud is used to identify the obstacle.
When obtaining the first depth information, the depth information obtaining module 92 is specifically configured to: acquire a first horizontal laser stripe in the first image; determine, according to the first horizontal laser stripe, the laser stripe corresponding to it in the second image as a second horizontal laser stripe, wherein the second image comprises at least one horizontal laser stripe; acquire the light spot height of the second horizontal laser stripe in the second image sensor coordinate system; and measure first depth information corresponding to the horizontal laser line according to the light spot height, the relative height between the second image sensor and the laser, and the focal length of the second image sensor.
After the horizontal laser line is reflected for the first time by an obstacle at a first distance from the laser, the reflected laser line is imaged within a fixed line pixel range of the imaging surface of the first image sensor; likewise, after the horizontal laser line is reflected for the first time by an obstacle at a second distance from the laser, the reflected laser line is imaged within the same fixed line pixel range, where the first distance is different from the second distance.
When obtaining the second depth information, the depth information obtaining module 92 is specifically configured to: acquire, from the first image, a first change value corresponding to the horizontal column shift of the vertical laser line; acquire, from the second image, a second change value corresponding to the horizontal column shift of the vertical laser line; and obtain second depth information corresponding to the vertical laser line according to the first change value and/or the second change value.
Horizontal and vertical laser stripes corresponding to obstacles at different distances from the laser are imaged at different heights on the imaging surface of the second image sensor.
It should be noted that the obstacle avoidance device can execute the obstacle avoidance method provided by the embodiments of the present invention and has the corresponding functional modules and beneficial effects. For technical details not described in the embodiments of the obstacle avoidance device, reference may be made to the obstacle avoidance method provided by the embodiments of the present invention.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. The technical features of the above embodiments, or of different embodiments, may be combined within the idea of the application, and the steps may be implemented in any order; many other variations of the different aspects of the application exist, which are not described in detail for the sake of brevity. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the application.

Claims (8)

1. A method of obstacle avoidance, the method comprising:
When a laser projects a horizontal laser line and a vertical laser line to an obstacle, a first image acquired by a first image sensor and a second image acquired by a second image sensor are acquired, wherein the first image sensor and the second image sensor are spaced by a preset distance in the vertical direction, and the first image sensor and the laser are arranged on the same horizontal plane;
Acquiring first depth information corresponding to the horizontal laser line and second depth information corresponding to the vertical laser line according to the first image and the second image;
Combining the first depth information and the second depth information to obtain a point cloud, wherein the point cloud is used for identifying the obstacle;
The obtaining, according to the first image and the second image, first depth information corresponding to the horizontal laser line includes:
acquiring a first horizontal laser stripe in the first image;
Determining a laser stripe corresponding to the first horizontal laser stripe in the second image as a second horizontal laser stripe according to the first horizontal laser stripe; wherein the second image comprises at least one horizontal laser stripe;
acquiring the light spot height of the second horizontal laser stripe in the second image sensor coordinate system;
measuring first depth information corresponding to the horizontal laser line according to the light spot height, the relative height between the second image sensor and the laser, and the focal length of the second image sensor;
the obtaining, according to the first image and the second image, second depth information corresponding to the vertical laser line includes:
acquiring, from the first image, a first change value corresponding to the horizontal column shift of the vertical laser line;
acquiring, from the second image, a second change value corresponding to the horizontal column shift of the vertical laser line;
and obtaining second depth information corresponding to the vertical laser line according to the first change value and/or the second change value.
2. The obstacle avoidance method of claim 1, wherein,
After the horizontal laser line is reflected for the first time by an obstacle at a first distance from the laser, the reflected laser line is imaged within a fixed line pixel range of the imaging surface of the first image sensor; likewise, after the horizontal laser line is reflected for the first time by an obstacle at a second distance from the laser, the reflected laser line is imaged within the same fixed line pixel range, wherein the first distance is different from the second distance.
3. The obstacle avoidance method of claim 2, wherein
horizontal laser stripes and vertical laser stripes corresponding to obstacles at different distances from the laser are imaged at different heights on the imaging surface of the second image sensor.
4. A mobile robot, comprising:
A housing;
an obstacle avoidance module arranged on the housing;
a driving module arranged in the housing; and
a controller, connected to the obstacle avoidance module and the driving module respectively, for sending control instructions to control the driving module to drive the housing to move, and for detecting obstacles through the obstacle avoidance module;
The controller is configured to perform the method of any one of claims 1 to 3.
5. The mobile robot of claim 4, wherein the obstacle avoidance module comprises a first image sensor, a second image sensor, and a laser;
wherein the laser is used for emitting horizontal laser lines and vertical laser lines.
6. The mobile robot of claim 5, wherein the line connecting the optical centers of the first image sensor and the laser is parallel to the X-axis of the first image sensor coordinate system.
7. The mobile robot of claim 5 or 6, wherein the line connecting the optical centers of the first image sensor and the second image sensor is parallel to the vertical direction.
8. The mobile robot of claim 5 or 6, wherein the line connecting the optical centers of the first image sensor and the second image sensor intersects the vertical direction.
CN202110495344.4A 2021-05-07 2021-05-07 Obstacle avoidance module, mobile robot and obstacle avoidance method Active CN113376655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110495344.4A CN113376655B (en) 2021-05-07 2021-05-07 Obstacle avoidance module, mobile robot and obstacle avoidance method


Publications (2)

Publication Number Publication Date
CN113376655A CN113376655A (en) 2021-09-10
CN113376655B 2024-05-17

Family

ID=77570482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110495344.4A Active CN113376655B (en) 2021-05-07 2021-05-07 Obstacle avoidance module, mobile robot and obstacle avoidance method

Country Status (1)

Country Link
CN (1) CN113376655B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114396911B (en) * 2021-12-21 2023-10-31 中汽创智科技有限公司 Obstacle ranging method, device, equipment and storage medium
CN114259580A (en) * 2021-12-27 2022-04-01 杭州电子科技大学 Mobile sterilization robot
CN115338548B (en) * 2022-10-14 2023-05-26 四川智龙激光科技有限公司 Obstacle avoidance method and system for cutting head of plane cutting machine tool


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10346995B1 (en) * 2016-08-22 2019-07-09 AI Incorporated Remote distance estimation system and method
CN110622085A (en) * 2019-08-14 2019-12-27 珊口(深圳)智能科技有限公司 Mobile robot and control method and control system thereof
CN110353583A (en) * 2019-08-21 2019-10-22 追创科技(苏州)有限公司 The autocontrol method of sweeping robot and sweeping robot
CN211012988U (en) * 2019-11-19 2020-07-14 珠海市一微半导体有限公司 Mobile robot based on laser visual information obstacle avoidance navigation
CN111103593A (en) * 2019-12-31 2020-05-05 深圳市欢创科技有限公司 Distance measurement module, robot, distance measurement method and non-volatile readable storage medium
CN112749643A (en) * 2020-12-30 2021-05-04 深圳市欢创科技有限公司 Obstacle detection method, device and system

Also Published As

Publication number Publication date
CN113376655A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN113376655B (en) Obstacle avoidance module, mobile robot and obstacle avoidance method
WO2021134809A1 (en) Distance measurement module, robot, distance measurement method and nonvolatile readable storage medium
KR101632168B1 (en) The apparatus of smart camera with lidar senser module
US10490079B2 (en) Method and device for selecting and transmitting sensor data from a first motor vehicle to a second motor vehicle
CN107992052B (en) Target tracking method and device, mobile device and storage medium
CN106998983B (en) Electric vacuum cleaner
US11579254B2 (en) Multi-channel lidar sensor module
US11019322B2 (en) Estimation system and automobile
JP2021516401A (en) Data fusion method and related equipment
RU2210491C2 (en) Mobile robot system using high-frequency module
CN112155487A (en) Sweeping robot, control method of sweeping robot and storage medium
JP2021509515A (en) Distance measurement methods, intelligent control methods and devices, electronic devices and storage media
CA2969202C (en) Vacuum cleaner
KR20200018197A (en) Moving robot and contorlling method and a terminal
JP6030405B2 (en) Planar detection device and autonomous mobile device including the same
KR100901311B1 (en) Autonomous mobile platform
WO2019019819A1 (en) Mobile electronic device and method for processing tasks in task region
US20180268225A1 (en) Processing apparatus and processing system
US10346995B1 (en) Remote distance estimation system and method
CN110928312B (en) Robot position determination method, non-volatile computer-readable storage medium, and robot
US20120002044A1 (en) Method and System for Implementing a Three-Dimension Positioning
CN211741574U (en) Distance measurement module and robot
CN108175337B (en) Sweeping robot and walking method thereof
CN111474552A (en) Laser ranging method and device and self-moving equipment
JP2015152411A (en) overhead line detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000, Floor 1801, Block C, Minzhi Stock Commercial Center, North Station Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Huanchuang Technology Co.,Ltd.

Address before: 518000 2407-2409, building 4, phase II, Tian'an Yungu Industrial Park, Gangtou community, Bantian street, Longgang District, Shenzhen, Guangdong

Applicant before: SHENZHEN CAMSENSE TECHNOLOGIES Co.,Ltd.

GR01 Patent grant