CN108415037B - Obstacle avoidance device with infrared and visual characteristics and control method


Info

Publication number: CN108415037B
Application number: CN201810458440.XA
Authority: CN (China)
Prior art keywords: infrared, obstacle, image, camera, infrared light
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN108415037A
Inventors: 邓文拔, 肖刚军, 赖钦伟
Original and current assignee: Zhuhai Amicro Semiconductor Co Ltd
Application filed by Zhuhai Amicro Semiconductor Co Ltd; granted as CN108415037B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an obstacle avoidance device with infrared and visual characteristics, and a control method for it. A first infrared emission tube and a second infrared emission tube are switched on and off so as to output pulse signals with complementary phases, and the sampling frame rate of a camera is adjusted to match; the camera collects images of the infrared light reflected by obstacles. Classifiers for obstacles on the left and right sides of the sensing-position center line of the device are trained on the brightness-change characteristics of the collected images, and the position of the obstacle is then judged from the classifier output. Finally, the device is driven to avoid the obstacle according to the judged position. Compared with the prior art, combining the camera with low-cost infrared components allows the obstacle to be located in a definite direction: the camera captures the change in image brightness caused by the difference in signal intensity received from the two infrared emission tubes at two different distances. This eliminates the influence of different reflecting surfaces and improves the accuracy with which environmental features are described.

Description

Obstacle avoidance device with infrared and visual characteristics and control method
Technical Field
The invention relates to the technical field of autonomous mobile robots, in particular to an obstacle avoidance device with infrared and visual characteristics and a control method.
Background
Perception of the environment depends on data from various sensors. Commonly used sensors include infrared intensity sensors, infrared ranging sensors, ultrasonic sensors, vision sensors, and laser sensors. In terms of precision, infrared ranging, ultrasonic, and laser sensors can all achieve high accuracy, but their cost is high. Apart from the laser sensor, they also cover only a small angle, so the detection blind zone can be reduced only by using a larger number of them; the laser sensor itself mainly senses a very narrow two-dimensional plane and has a blind zone in the vertical direction. If a vision sensor is to be used for ranging, at least two cameras are needed, which is costly, gives poor precision, and requires special openings in the housing. From the standpoint of cost and appearance, the infrared light-intensity sensor is by far the cheapest and most widely applied, but its current use is based on single light-intensity detection, and different materials reflect infrared light differently, so adaptability to obstacles is poor.
Disclosure of Invention
An obstacle avoidance device with infrared and visual characteristics comprises a detection module and an image processing module;
the detection module comprises a bearing mechanism, two infrared emission tubes and a camera, wherein the two infrared emission tubes and the camera are arranged on the bearing mechanism at an included angle;
the image processing module is used for training the images of the infrared light reflected by the obstacles received by the camera as samples, judging the positions of the obstacles in front of the obstacle avoidance device by using a trained classifier, and controlling the obstacle avoidance device to execute corresponding obstacle avoidance actions according to the judgment result;
the obstacle is positioned in the view angle range of the camera and each infrared emission tube in front of the obstacle avoidance device.
Further, each infrared emission tube and the center line of the camera are made to share a common intersection point as follows: the included angles and/or horizontal distances between the infrared emission tubes and the camera are set to be the same.
Further, the camera is arranged on a sensing bit center line of the bearing mechanism; the infrared transmitting tube comprises a first infrared transmitting tube and a second infrared transmitting tube, the first infrared transmitting tube is arranged on the left side of the sensing position central line, and the second infrared transmitting tube is arranged on the right side of the sensing position central line.
Further, the first infrared transmitting tube and the second infrared transmitting tube respectively form an acute angle with the central line of the sensing position.
Further, the image processing module first collects images of infrared light reflected by an obstacle on the left side of the sensing-position center line as first positive samples, and images in which no infrared light is reflected by an obstacle on the left side of the sensing-position center line as first negative samples, and trains a first classifier on them;
likewise, the image processing module collects images of infrared light reflected by an obstacle on the right side of the sensing-position center line as second positive samples, and images in which no infrared light is reflected by an obstacle on the right side of the sensing-position center line as second negative samples, and trains a second classifier on them;
the first classifier and the second classifier are combined into a combined classifier according to preset weights, the preset weights being determined from the image brightness characteristics of the infrared light reflected by obstacles; the combined classifier is used to judge the position of an obstacle in the sensed scene.
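The weighted combination described above can be sketched as follows. This is an illustrative reading only: the patent does not fix a score convention, and the function name, the [0, 1] score range, and the 0.25 detection threshold are all assumptions.

```python
# Sketch (not from the patent): combine two per-side classifier scores with
# preset weights and report which side of the center line the obstacle is on.

def combined_classifier(score_left, score_right, w_left=0.5, w_right=0.5):
    """Return 'left', 'right', or 'none' from two weighted classifier scores.

    score_left / score_right: assumed confidences in [0, 1] that an obstacle
    lies on that side of the sensing-position center line.
    """
    left = w_left * score_left
    right = w_right * score_right
    if max(left, right) < 0.25:   # assumed detection threshold
        return "none"
    return "left" if left >= right else "right"
```

In practice the weights would be the preset values determined from the image brightness characteristics; here they default to an even split purely for illustration.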
A control method based on the obstacle avoidance device, comprising:
step 1, over repeated control periods, switching the first infrared emission tube and the second infrared emission tube on and off so that they output pulse signals with complementary phases, and adjusting the sampling frame rate of the camera to match, wherein the pulse timing of each control period is as follows:
(1) Before time t0, the first infrared emission tube and the second infrared emission tube are both off, and the camera has not yet started sampling at the sampling frame rate;
(2) At time t0, the first infrared emission tube is switched on, and the camera captures an image of the obstacle reflecting infrared light from the first infrared emission tube as the first frame;
(3) At time t1, the first infrared emission tube is switched off and the second infrared emission tube is switched on, and the camera switches from the first frame to the second frame;
(4) At time t2, the camera captures an image of the obstacle reflecting infrared light from the second infrared emission tube as the second frame, and then both infrared emission tubes are switched off;
(5) At time t3, the next control period begins, and the camera starts capturing the third frame;
the control period is t3-t0, the interval from t2 to t3 is a dead-zone period, and the sampling frame period (the reciprocal of the sampling frame rate) is the time the camera takes to sample each frame of image;
step 2, taking images of infrared light reflected by obstacles on the left and right sides of the sensing-position center line as samples, and training on these samples to obtain two classifiers, one for obstacles on each side of the sensing-position center line; the two classifiers are combined into a combined classifier according to preset weights, determined from the brightness characteristics of the images of reflected infrared light, and the combined classifier is used to judge the position of an obstacle in the sensed scene;
step 3, when an obstacle is judged to be on the left side of the sensing-position center line, driving the obstacle avoidance device to rotate toward the right of the center line; when an obstacle is judged to be on the right side, driving the obstacle avoidance device to rotate toward the left of the center line.
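The step-3 decision rule reduces to a small mapping from judged side to rotation command; the command strings and function name below are illustrative assumptions, not part of the patent.

```python
# Sketch of the step-3 decision: rotate away from the side of the sensing-
# position center line on which the obstacle was judged to lie.

def avoidance_action(obstacle_side):
    """Map the combined classifier's judgment to a drive command."""
    if obstacle_side == "left":
        return "rotate_right"     # obstacle left of center line: turn right
    if obstacle_side == "right":
        return "rotate_left"      # obstacle right of center line: turn left
    return "go_straight"          # no obstacle detected
```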
Further, in step 2, images of infrared light reflected by an obstacle on the left side of the sensing-position center line are taken as first positive samples, and images in which no infrared light is reflected by an obstacle on the left side of the sensing-position center line as first negative samples; brightness features of the sampled frames are selected and extracted, transforming the raw data into the features that best capture the classification, and a first classifier is then trained;
likewise, images of infrared light reflected by an obstacle on the right side of the sensing-position center line are taken as second positive samples, and images in which no infrared light is reflected by an obstacle on the right side of the sensing-position center line as second negative samples; brightness features of the sampled frames are selected and extracted in the same way, and a second classifier is then trained.
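As a hedged illustration of the training just described (the patent does not specify a learning algorithm), the sketch below stands in for a classifier with a simple learned threshold on the brightness difference between the E1-lit and E2-lit frames; the function names, the feature choice, and the midpoint threshold rule are all assumptions.

```python
# Stand-in for classifier training: the feature is the mean-brightness
# difference between the two frames of a control period, and "training"
# picks the midpoint between the positive and negative classes' means.

def brightness_feature(frame1, frame2):
    """Mean-brightness difference between the E1-lit and E2-lit frames."""
    mean = lambda px: sum(px) / len(px)
    return mean(frame1) - mean(frame2)

def train_threshold(positives, negatives):
    """positives/negatives: lists of (frame1, frame2) pixel-list pairs.

    Returns a classifier closure: True if the sample looks positive.
    """
    avg = lambda fs: sum(fs) / len(fs)
    pos = avg([brightness_feature(a, b) for a, b in positives])
    neg = avg([brightness_feature(a, b) for a, b in negatives])
    thresh = (pos + neg) / 2           # midpoint decision boundary
    sign = 1 if pos > neg else -1      # orient the boundary
    return lambda f1, f2: sign * (brightness_feature(f1, f2) - thresh) > 0
```

A real implementation would use richer per-window brightness features and a proper learning algorithm; this sketch only shows where the brightness-difference feature enters.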
Further, when the camera captures images of infrared light reflected by an obstacle on the left side of the sensing-position center line: after the first infrared emission tube is switched on at time t0 of the control period, the brightness of the image captured by the camera is I10; at time t1 the first infrared emission tube is switched off and the second is switched on, and the brightness of the image captured by the camera is I11; here I10 > I11.
Further, when the camera captures images of infrared light reflected by an obstacle on the right side of the sensing-position center line: after the first infrared emission tube is switched on at time t0 of the control period, the brightness of the image captured by the camera is I20; in the interval up to time t2, after the second infrared emission tube is switched on, the brightness of the image captured by the camera is I21; here I20 < I21.
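The two inequalities above (I10 > I11 on the left, I20 < I21 on the right) mean that comparing the brightness of the two frames within one control period already indicates the side; a minimal sketch, with assumed names:

```python
# Sketch of the brightness test: frame 1 is captured while E1 (left tube) is
# lit, frame 2 while E2 (right tube) is lit. Whichever frame is brighter
# indicates which tube the obstacle is closer to.

def side_from_brightness(first_frame_brightness, second_frame_brightness):
    if first_frame_brightness > second_frame_brightness:
        return "left"      # E1 reflection dominated: I10 > I11
    if first_frame_brightness < second_frame_brightness:
        return "right"     # E2 reflection dominated: I20 < I21
    return "center"        # equal brightness: obstacle on the center line
```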
Further, the combined classifier judges the obstacle position as follows: a scanning sub-window is shifted continuously across the image to be detected, the brightness feature of each window region is computed, and the features are screened by the trained combined classifier to obtain the final classification result.
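A sliding-window scan of this kind might be sketched as follows, with a simple brightness-threshold predicate standing in for the trained combined classifier (the window size, step, and threshold are illustrative assumptions):

```python
# Sketch of the sliding-window scan: shift a fixed-size window across a 2-D
# brightness image (list of pixel rows) and flag windows whose mean
# brightness the stand-in classifier accepts.

def scan_windows(image, win=2, step=1, is_obstacle=lambda b: b > 128):
    """Return (row, col) of top-left corners of windows flagged as obstacle."""
    rows, cols = len(image), len(image[0])
    hits = []
    for r in range(0, rows - win + 1, step):
        for c in range(0, cols - win + 1, step):
            pixels = [image[r + i][c + j] for i in range(win) for j in range(win)]
            if is_obstacle(sum(pixels) / len(pixels)):
                hits.append((r, c))
    return hits
```

Passing the trained combined classifier as `is_obstacle` (on richer window features) would recover the screening step the text describes.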
The invention has the following beneficial effects. Compared with the prior art, the camera is combined with low-cost infrared components: the pulse signals of the two emission tubes are 180 degrees out of phase, and their modulation frequency equals the camera's sampling frame rate, so reflections from obstacles in different directions arrive in different frames. By capturing the change in image brightness caused by the difference in signal intensity received from the two infrared emission tubes at two different distances, the camera locates the obstacle in a definite direction and the influence of different reflecting surfaces is eliminated; the obstacle avoidance operation is then executed, and the accuracy of describing environmental features is improved.
Drawings
FIG. 1 is a schematic diagram of an infrared and visual obstacle avoidance apparatus according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a detection module in the obstacle avoidance apparatus according to an embodiment of the present invention;
FIG. 3 is a control timing diagram of the obstacle avoidance apparatus according to an embodiment of the present invention;
FIG. 4 is a timing chart of brightness variation of an image captured by a camera of the detection module according to an embodiment of the present invention for detecting an obstacle located on the left side of a sensing center line in front of the obstacle avoidance device;
FIG. 5 is a timing chart of brightness variation of an image captured by a camera of the detection module according to an embodiment of the present invention for detecting an obstacle located on the right side of a sensing center line in front of the obstacle avoidance device;
fig. 6 is a flowchart of a control method of an obstacle avoidance device with infrared and visual features according to an embodiment of the present invention.
Detailed Description
The following is a further description of embodiments of the invention, taken in conjunction with the accompanying drawings:
it is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
In the description of the invention, it should be understood that the terms "center," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate an orientation or a positional relationship based on that shown in the drawings, merely for convenience of description and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
An obstacle avoidance device with infrared and visual characteristics in embodiments of the present invention may be implemented in a mobile robot, including a floor-sweeping robot, an AGV, and the like. The following assumes the obstacle avoidance device is mounted on a floor-sweeping robot. Those skilled in the art will appreciate, however, that the configuration of the embodiments can also be extended to mobile terminals, beyond its particular use in mobile robots.
An obstacle avoidance device with infrared and visual characteristics comprises a detection module and an image processing module; as shown in fig. 1, the detection module includes a carrying mechanism 106, a first infrared emission tube E1, a second infrared emission tube E2, and a camera D, where the first infrared emission tube E1 and the second infrared emission tube E2 are respectively disposed on two sides of the camera D and are disposed on the carrying mechanism 106 at an included angle with each other, and each infrared emission tube and a center line of the camera have a common intersection O. Wherein the carrying mechanism 106 is used for fixing and isolating the two infrared transmitting tubes and the camera D.
The image processing module is used for training an image of the infrared light reflected by the obstacle received by the camera D as a sample, judging the position of the obstacle in front of the obstacle avoidance device by using a trained classifier, and controlling the obstacle avoidance device to execute corresponding obstacle avoidance action according to a judgment result; the obstacle is positioned in the view angle range of the camera D and each infrared emission tube in front of the obstacle avoidance device. As shown in fig. 1, the viewing angle of the first infrared emission tube E1 is marked 104, the viewing angle of the second infrared emission tube E2 is marked 103, and the viewing angle of the camera D is marked 105.
In the embodiment of the present invention, the first infrared emission tube E1 and the second infrared emission tube E2 respectively have a common intersection O with the center line of the camera (the sensing center line 107 of the bearing mechanism 106), which is implemented in the following manner: as shown in fig. 1, an included angle between the first infrared emission tube E1 and the center line 107 of the camera D is marked as 102, an included angle between the second infrared emission tube E2 and the center line 107 of the camera D is marked as 101, and the included angle 102 is equal to the included angle 101; on the bearing mechanism 106, the hole positions of the first infrared transmitting tube E1 and the second infrared transmitting tube E2 are respectively equal to the horizontal distance of the hole position of the camera D.
Specifically, the camera D is disposed on the sensing-position center line 107 of the carrying mechanism; the infrared emission tubes comprise a first infrared emission tube E1 and a second infrared emission tube E2, with E1 disposed on the left side of the sensing-position center line 107 and E2 on the right side. The transmitting powers of the two infrared emission tubes are made approximately equal.
As shown in fig. 2, when an obstacle is at a position A on the left side of the sensing-position center line 107, the distance from position A to the first infrared emission tube E1 is smaller than that to the second infrared emission tube E2, so the light the obstacle receives from E1 is stronger than the light it receives from E2; points on the vertical plane through position A receive the same intensity. Within the range swept from right to left across the viewing angle 104 of E1 and the viewing angle 103 of E2, however far the obstacle moves from the sensing-position center line 107, the infrared signal it receives from E1 remains stronger than the signal from E2. Conversely, when an obstacle is at a position B on the right side of the sensing-position center line 107, the distance from B to E1 is greater than that to E2, so the light the obstacle receives from E1 is weaker than the light from E2 (again, points on the vertical plane through B receive the same intensity); within the range swept from left to right across the two viewing angles, however far the obstacle moves from the center line, the signal it receives from E2 remains stronger than the signal from E1. Meanwhile, the intensity of the reflected signal is also affected by the intersection of the viewing angles 104 and 103 with the receiving angle of the camera D: the larger the overlap between the obstacle's reflected signal and this intersection region, the brighter the image captured by the camera D.
The intersection point O of the camera D's center line with each infrared emission tube lies on the sensing-position center line 107; point O is where the camera D receives light most strongly, and an obstacle there reflects most strongly. The vertical plane through the sensing-position center line 107 is the plane of strongest reception and reflection, and the intensity of light reflected back to the camera decreases as the obstacle moves from the intersection O outward toward position A on the left or position B on the right, and further beyond. Positions A and B lie within the receiving range of the camera D's viewing angle 105 and do not exceed the viewing angle 104 of the first infrared emission tube E1 or the viewing angle 103 of the second infrared emission tube E2. Because different reflecting surfaces at the same distance return different intensities, the obstacle avoidance device of this embodiment locates the obstacle in a definite direction by means of the change in image brightness the camera captures from the difference in signal intensity received from the two tubes at two different distances, thereby eliminating the influence of different reflecting surfaces.
Further, in application the robot must be kept from getting too close to an obstacle; since different obstacles reflect different infrared intensities, the distance between the obstacle and the obstacle avoidance device should remain greater than the anti-collision distance during operation.
Further, the image processing module controls the infrared emission tubes to emit modulated infrared-light pulse signals; the light reflected by the obstacle is received at the sampling frame rate by the image sensor on the camera. The image processing module obtains depth information for the obstacle by computing the time difference or phase difference between the emitted signal and the reflected signals received at each pixel across two consecutive frames (i.e., the disparity of corresponding feature points), combined with the triangulation principle.
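The triangulation relation referred to here is, in its standard stereo form (not spelled out in the patent), depth Z = f·B/d for focal length f in pixels, baseline B, and feature disparity d in pixels; a minimal sketch with assumed names:

```python
# Standard depth-from-disparity relation used in triangulation: Z = f * B / d.
# The parameter names and units (f in pixels, B in meters) are assumptions.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in meters of a matched feature point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```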
As one implementation of the invention, the image processing module divides the image feature data into two classes according to the brightness characteristics of the images of infrared light reflected by obstacles on the left and right sides of the sensing-position center line, and trains a corresponding classifier on each. First, images of infrared light reflected by an obstacle on the left side of the sensing-position center line are collected as first positive samples, and images in which no infrared light is reflected by an obstacle on the left side as first negative samples; training on these yields a first classifier. The image processing module then collects images of infrared light reflected by an obstacle on the right side of the sensing-position center line as second positive samples, and images in which no infrared light is reflected by an obstacle on the right side as second negative samples; training on these yields a second classifier. In this embodiment, the images of infrared light reflected by obstacles on the left and right sides thus yield different training sets; a base classifier is trained on each set, and each base classifier is then assigned a preset weight to form a combined classifier, which is used to judge the position of an obstacle in the sensed scene and improves the performance and classification precision over either classifier alone. The preset weights are determined from the image brightness characteristics of the infrared light reflected by obstacles.
To realize obstacle judgment with the multi-infrared-assisted camera, a corresponding timing control method is needed. Fig. 3 shows the timing of the camera D and the two infrared emission tubes: the upper signal is the sampling-frame signal of the camera D, the middle signal is the output pulse of the first infrared emission tube E1, and the lower signal is the output pulse of the second infrared emission tube E2. In step 1, over repeated control periods, the first infrared emission tube E1 and the second infrared emission tube E2 are switched on and off to output pulse signals with complementary phases, and the sampling frame rate of the camera D is adjusted to match. The pulse timing of each control period is as follows:
(1) Before time t0, the first infrared emission tube E1 and the second infrared emission tube E2 are both off, and the camera has not yet started sampling at the sampling frame rate;
(2) At time t0, the first infrared emission tube E1 is switched on, and the camera captures an image of the obstacle reflecting infrared light from E1 as the first frame (the first high-level pulse in the sampling-frame signal); the influence of ambient light intensity on the captured image is ignored;
(3) At time t1, E1 is switched off and the second infrared emission tube E2 is switched on; the camera D's switch from the first frame to the second frame corresponds to the low-level interval around time t1 in its sampling-frame signal;
(4) At time t2, the camera captures an image of the obstacle reflecting infrared light from E2 as the second frame (the second high-level pulse in the sampling-frame signal), and then both E1 and E2 are switched off;
(5) At time t3, the next control period begins, and the camera starts capturing the third frame;
the control period is t3-t0, wherein t3-t2 is a dead time period, the phases of pulse signals output by the first infrared emission tube E1 and the second infrared emission tube E2 are different by 180 degrees, correspondingly, the effective brightness values received by the camera D on the imaging plane are also equal, and the sampling frame rate is the time for the camera D to sample each frame of image. In the step 1, the complementary pulse signals modulated and output by the first infrared transmitting tube E1 and the second infrared transmitting tube E2 are beneficial to the judgment of the obstacles at the left and right sides of the sensing bit center line in the subsequent step.
Specifically, the emission intensity of an infrared light-emitting diode varies with the emission direction: the smaller the direction angle, the sharper the element's directivity. In general, lenses are fitted to the infrared LEDs to make their directivity sharper. The radiation intensity of an infrared emission tube varies with distance along the optical axis and with the light-receiving element used; to a first approximation the measured light is inversely proportional to the square of the distance, with differences depending on the characteristics of the receiving element, and when infrared light is emitted to control a device, the radiation intensity decays rapidly with propagation distance. To let the camera sample the change in image brightness from the emitted infrared light, the emission tubes work in a pulsed state, and the modulation frequency of the pulsed light equals the sampling frame rate of the camera D. To help the camera distinguish, over time, the brightness changes of images of infrared light reflected by obstacles on the left and right of the sensing-position center line, the duty cycles of the first infrared emission tube E1 and the second infrared emission tube E2 are the same and their outputs are complementary, i.e., the two output modulation waveforms are 180 degrees out of phase.
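The complementary (180-degrees-out-of-phase, equal-duty-cycle) modulation can be illustrated by sampling both pulse trains over one period; the sample count and duty cycle below are assumed values for illustration.

```python
# Sketch: sample E1's and E2's pulse trains over one modulation period.
# A 180-degree phase shift of a square wave is its logical complement,
# so equal duty cycles follow automatically.

def pulse_trains(samples=20, duty=0.5):
    """Return (e1, e2): lists of 0/1 levels over one period."""
    e1 = [1 if i < samples * duty else 0 for i in range(samples)]
    e2 = [1 - v for v in e1]          # 180-degree phase shift: complement
    return e1, e2
```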
Preferably, when the camera captures an image of infrared light reflected by an obstacle on the left side of the sensing position center line, as shown in fig. 4, the upper trace is the sampling frame signal of the camera D and the lower trace is the image brightness signal received by the camera D. After the first infrared emission tube E1 is turned on at time t0 of the control period, the brightness of the image captured by the camera D is I10; this brightness value persists until time t1, a duration equal to the interval in which the camera D switches from the first frame image to the second frame image. At time t1 of the control period, the first infrared emission tube E1 is turned off, the second infrared emission tube E2 is turned on, and the brightness of the image captured by the camera becomes I11. In the embodiment of the present invention, as shown in fig. 2, when an obstacle is located at a position A on the left side of the sensing position center line 107, the distance between the position A and the first infrared emission tube E1 is smaller than the distance between the position A and the second infrared emission tube E2, so the obstacle receives and reflects the light of the first infrared emission tube E1 more strongly than that of the second infrared emission tube E2. Consequently, within one control period, the image brightness I10 captured by the camera D during the time period t0 to t1 is greater than the image brightness I11 captured during the time period t1 to t2. The image brightness captured by the camera D is the intensity of the superposed light of the first infrared emission tube E1 and the second infrared emission tube E2 reflected by the obstacle; it differs over the time sequence according to the position of the obstacle, and is selected as the first sample classification feature in the subsequent control method.
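The pulse timing just described (E1 lit for the first frame, E2 lit for the second, both tubes off in the dead zone) can be sketched as a simple drive schedule; the function name and data representation below are illustrative assumptions, not from the patent:

```python
def control_period_schedule(t0, t1, t2, t3):
    """Phase-complementary drive schedule for one control period:
    E1 is on during [t0, t1) while the camera captures the first frame,
    E2 is on during [t1, t2) for the second frame, and both tubes are
    off during the dead zone [t2, t3) before the next period starts."""
    return [
        (t0, t1, {"E1": True,  "E2": False}),  # first frame: E1 reflection
        (t1, t2, {"E1": False, "E2": True}),   # second frame: E2 reflection
        (t2, t3, {"E1": False, "E2": False}),  # dead zone t3 - t2
    ]

# One hypothetical 30 ms control period: 12 ms per frame, 6 ms dead zone
for start, end, tubes in control_period_schedule(0.0, 0.012, 0.024, 0.030):
    print(f"{start:.3f}-{end:.3f}s E1={tubes['E1']} E2={tubes['E2']}")
```

The complementary phases guarantee that each captured frame is lit by exactly one tube, which is what makes the frame-to-frame brightness comparison meaningful.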
Preferably, when the camera captures an image of infrared light reflected by an obstacle on the right side of the sensing position center line, as shown in fig. 5, the upper trace is the sampling frame signal of the camera D and the lower trace is the image brightness signal received by the camera D. After the first infrared emission tube E1 is turned on at time t0 of the control period, the brightness of the image captured by the camera D is I20; this brightness value persists until time t1, a duration equal to the interval in which the camera D switches from the first frame image to the second frame image. At time t1 of the control period, the first infrared emission tube E1 is turned off, the second infrared emission tube E2 is turned on, and the brightness of the image captured by the camera becomes I21. In the embodiment of the present invention, as shown in fig. 2, when an obstacle is located at a position B on the right side of the sensing position center line 107, the distance between the position B and the first infrared emission tube E1 is greater than the distance between the position B and the second infrared emission tube E2, so the obstacle receives and reflects the light of the first infrared emission tube E1 more weakly than that of the second infrared emission tube E2. Consequently, within one control period, the image brightness I20 captured by the camera D during the time period t0 to t1 is smaller than the image brightness I21 captured during the time period t1 to t2. The image brightness captured by the camera D is the intensity of the superposed light of the first infrared emission tube E1 and the second infrared emission tube E2 reflected by the obstacle; it differs over the time sequence according to the position of the obstacle, and this difference is selected as a training sample for the second classifier in the subsequent control method.
Preferably, when the obstacle appears on the sensing position center line, which is the region where the light emitted by the two infrared emission tubes overlaps most strongly, the brightness of the image of infrared light reflected by the obstacle captured by the camera follows the phase-complementary pulse signals of the first infrared emission tube E1 and the second infrared emission tube E2, so that the brightness of the image captured by the camera maintains a constant value over the time sequence.
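The three brightness cases above (obstacle left, right, or on the center line) reduce to comparing the two per-frame brightness values; a minimal sketch, in which the helper name and the tolerance parameter are hypothetical:

```python
def classify_obstacle_side(i_first: float, i_second: float,
                           tolerance: float = 0.05) -> str:
    """Classify obstacle position from one control period's frame pair.

    i_first:  image brightness while only tube E1 is on (t0 to t1)
    i_second: image brightness while only tube E2 is on (t1 to t2)
    tolerance: assumed relative difference below which the readings are
               treated as equal (obstacle on the sensing center line).
    """
    reference = max(i_first, i_second, 1e-9)  # guard against division by zero
    if abs(i_first - i_second) / reference < tolerance:
        return "center"   # constant brightness across the two pulse phases
    return "left" if i_first > i_second else "right"

# Figure-4 style reading: I10 > I11, obstacle on the left
print(classify_obstacle_side(0.8, 0.5))  # left
# Figure-5 style reading: I20 < I21, obstacle on the right
print(classify_obstacle_side(0.4, 0.7))  # right
```

In the patent this comparison is not hard-coded; it is the signal that the trained classifiers learn to separate, but the sketch shows why the brightness pair is discriminative.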
In the step 2, the image processing module divides the image feature data into two classes according to the brightness features of the images of infrared light reflected by obstacles on the left and right sides of the sensing position center line, so as to train the corresponding classifiers. The samples collected by the camera are processed as follows:
An image of infrared light reflected by an obstacle on the left side of the sensing position center line is taken as a first positive sample, and an image in which no infrared light is reflected by an obstacle on the left side of the sensing position center line is taken as a first negative sample; all sample pictures are normalized to the same size. Because the data volume of the raw images or waveforms is quite large, the brightness features of the images within the sampling frame rate must be selected and extracted in order to perform classification and recognition effectively, so that the original data are transformed into the features that best reflect the essence of the classification, and the first classifier is obtained by training.
An image of infrared light reflected by an obstacle on the right side of the sensing position center line is taken as a second positive sample, and an image in which no infrared light is reflected by an obstacle on the right side of the sensing position center line is taken as a second negative sample; all sample pictures are normalized to the same size. Because the data volume of the raw images or waveforms is quite large, the brightness features of the images within the sampling frame rate must be selected and extracted in order to perform classification and recognition effectively, so that the original data are transformed into the features that best reflect the essence of the classification, and the second classifier is obtained by training.
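The sample-processing and training steps above can be sketched as follows. The normalized mean-brightness feature and the threshold classifier are illustrative assumptions; the patent does not fix a particular feature transform or classifier type:

```python
import numpy as np

def extract_brightness_feature(frames: np.ndarray) -> np.ndarray:
    """frames: (n_samples, 2, h, w) frame pairs captured while tube E1
    and tube E2 were lit, respectively. Returns (n_samples, 2): the mean
    brightness of each frame, normalized so each pair sums to 1, which
    removes the overall illumination/reflectivity scale."""
    means = frames.mean(axis=(2, 3))                  # (n_samples, 2)
    return means / means.sum(axis=1, keepdims=True)

class ThresholdClassifier:
    """Minimal stand-in for one base classifier: learns a threshold on
    the normalized E1-frame brightness from positive/negative samples."""
    def fit(self, features, labels):
        pos = features[labels == 1, 0]
        neg = features[labels == 0, 0]
        self.positive_is_high = pos.mean() > neg.mean()
        self.threshold = (pos.mean() + neg.mean()) / 2.0
        return self

    def predict(self, features):
        high = features[:, 0] > self.threshold
        return np.where(high == self.positive_is_high, 1, 0)

# Synthetic samples: left-side positives reflect tube E1 more strongly;
# negatives show no reflection difference (no obstacle on the left).
pos = np.full((4, 2, 2, 2), 0.4); pos[:, 0] = 0.8
neg = np.full((4, 2, 2, 2), 0.5)
features = extract_brightness_feature(np.concatenate([pos, neg]))
labels = np.array([1] * 4 + [0] * 4)
first_classifier = ThresholdClassifier().fit(features, labels)
```

A second classifier for the right side would be trained the same way with the brightness inequality reversed.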
According to the embodiment of the invention, different training samples are obtained by extracting the images of infrared light reflected by obstacles on the left and right sides, a corresponding base classifier is trained on each training sample, and a preset weight is then assigned to each base classifier to form a combined classifier, which is used for judging the position of the obstacle in the sensed scene and improves the performance and classification accuracy of the classifier. The preset weights are determined according to the image brightness characteristics of the infrared light reflected by the obstacle.
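A weighted combination of the two base classifiers might look like the following sketch; the decision rule and the fixed-vote stub classifiers are assumptions for illustration, not the patent's stated formula:

```python
import numpy as np

class FixedVote:
    """Stand-in base classifier returning a fixed 0/1 vote (demo only)."""
    def __init__(self, vote):
        self.vote = vote
    def predict(self, features):
        return np.full(len(features), self.vote)

def judge_obstacle_position(first_clf, second_clf, weights, features):
    """Combine the first (left-side) and second (right-side) base
    classifiers with preset weights and report the stronger vote."""
    w1, w2 = weights
    left_scores = w1 * first_clf.predict(features)    # weighted left votes
    right_scores = w2 * second_clf.predict(features)  # weighted right votes
    return ["left" if ls > rs else "right" if rs > ls else "none_or_center"
            for ls, rs in zip(left_scores, right_scores)]

print(judge_obstacle_position(FixedVote(1), FixedVote(0), (0.6, 0.4),
                              np.zeros((2, 2))))  # → ['left', 'left']
```

Any object exposing a `predict` method, such as the base classifiers trained in step 2, can be substituted for the stubs.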
The method for judging the position of the obstacle with the combined classifier trained according to the preset weights is as follows: firstly, a scanning sub-window is continuously shifted and slid over the image to be detected, and the features of the region are calculated each time the sub-window reaches a position; secondly, the features are screened with the combined classifier to judge whether the region is a target; then, because the size of the target in the image may differ from the sample pictures used when training the classifier, the scanning sub-window must be enlarged or reduced (or the image reduced) and slid over the image again for matching; finally, the classification result is obtained.
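The multi-scale sliding-window scan described above can be sketched as follows, with a naive integer-stride downsample standing in for proper image resizing and `score_fn` standing in for the trained combined classifier (both assumptions of this sketch):

```python
import numpy as np

def sliding_window_detect(image, score_fn, win=8, step=4, factors=(1, 2)):
    """Slide a win x win sub-window over the image at several scales,
    compute a mean-brightness feature per position, and keep windows
    the classifier accepts. Instead of growing the window, the image
    is shrunk by integer factors; hits are reported in original-image
    coordinates as (x, y, factor)."""
    hits = []
    for f in factors:
        scaled = image[::f, ::f]                       # naive downsample
        for y in range(0, scaled.shape[0] - win + 1, step):
            for x in range(0, scaled.shape[1] - win + 1, step):
                feature = scaled[y:y + win, x:x + win].mean()
                if score_fn(feature):
                    hits.append((x * f, y * f, f))
    return hits

# A bright 8x8 square on a dark background is found at its offset
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
print(sliding_window_detect(img, lambda b: b > 0.9))  # → [(8, 8, 1)]
```

A production detector would replace the stride-based downsample with proper interpolation and the brightness threshold with the combined classifier's decision.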
The above embodiments are merely intended to fully disclose, not to limit, the present invention; any substitution of equivalent technical features of the inventive subject matter that requires no creative work should be considered within the scope of disclosure of the present application.

Claims (9)

1. The obstacle avoidance device with infrared and visual characteristics is characterized by comprising a detection module and an image processing module;
the detection module comprises a bearing mechanism, two infrared emission tubes and a camera, wherein the two infrared emission tubes and the camera are arranged on the bearing mechanism at an included angle;
the image processing module is used for training the images of the infrared light reflected by the obstacles received by the camera as samples, judging the positions of the obstacles in front of the obstacle avoidance device by using a trained classifier, and controlling the obstacle avoidance device to execute corresponding obstacle avoidance actions according to the judgment result;
wherein the obstacle is positioned in the view angle range of the camera and each infrared emission tube in front of the obstacle avoidance device;
the image processing module firstly collects an image of infrared light reflected by the obstacle on the left side of the sensing position center line as a first positive sample and an image in which no infrared light is reflected by an obstacle on the left side of the sensing position center line as a first negative sample, and trains to obtain a first classifier;
the image processing module then collects an image of infrared light reflected by the obstacle on the right side of the sensing position center line as a second positive sample and an image in which no infrared light is reflected by an obstacle on the right side of the sensing position center line as a second negative sample, and trains to obtain a second classifier;
and combining the first classifier and the second classifier into a combined classifier according to a preset weight, wherein the preset weight is determined according to the image brightness characteristics of infrared light reflected by the obstacle, and the combined classifier is used for judging the position of the obstacle in the sensed scene.
2. The obstacle avoidance apparatus of claim 1 wherein the common intersection of each of the infrared emission tubes with the camera center line is achieved by setting the included angles and/or horizontal distances of the infrared emission tubes with respect to the camera to be the same.
3. The obstacle avoidance device of claim 2 wherein the camera is disposed on the sensing position center line of the bearing mechanism; the infrared emission tubes comprise a first infrared emission tube and a second infrared emission tube, the first infrared emission tube being disposed on the left side of the sensing position center line and the second infrared emission tube on the right side of the sensing position center line.
4. The obstacle avoidance apparatus of claim 1 wherein the first infrared emission tube and the second infrared emission tube each form an acute included angle with the sensing position center line.
5. A control method based on the obstacle avoidance apparatus of claim 3, comprising:
step 1, controlling the first infrared emission tube and the second infrared emission tube to switch on and off so as to output phase-complementary pulse signals over repeated control periods, and adjusting the sampling frame rate of the camera, wherein the pulse timing of each control period is as follows:
(1) Before time t0, the first infrared emission tube and the second infrared emission tube are controlled to be off, and the camera does not yet sample at the sampling frame rate;
(2) At time t0, the first infrared emission tube is controlled to turn on, and the camera captures an image of the obstacle reflecting infrared light from the first infrared emission tube as a first frame image;
(3) At time t1, the first infrared emission tube is controlled to turn off and the second infrared emission tube to turn on, and the camera switches from the first frame image to a second frame image;
(4) At time t2, the camera captures the image of the obstacle reflecting infrared light from the second infrared emission tube as the second frame image, and then the first infrared emission tube and the second infrared emission tube are turned off;
(5) At time t3, the next control period begins and the camera starts capturing a third frame image;
the control period is t3-t0, t3-t2 is a dead-zone time period, and the sampling frame rate corresponds to the time taken by the camera to sample each frame of image;
step 2, taking the images of infrared light reflected by obstacles on the left and right sides of the sensing position center line as samples, training on these samples to obtain two classifiers corresponding to obstacles on the left and right sides of the sensing position center line, and combining the two classifiers into a combined classifier according to preset weights for judging the position of the obstacle in the sensed scene, wherein the preset weights are determined according to the brightness characteristics of the images of infrared light reflected by the obstacle;
step 3, when judging that the obstacle exists on the left side of the sensing position center line, driving the obstacle avoidance device to rotate towards the right side of the sensing position center line; and when judging that the obstacle exists on the right side of the sensing position center line, driving the obstacle avoidance device to rotate towards the left side of the sensing position center line.
6. The control method according to claim 5, wherein in the step 2, an image of infrared light reflected by the obstacle on the left side of the sensing position center line is taken as a first positive sample and an image in which no infrared light is reflected by an obstacle on the left side of the sensing position center line is taken as a first negative sample; brightness characteristics of the images within the sampling frame rate are selected and extracted so as to transform the original data into the features that best reflect the essence of the classification, and the first classifier is then obtained by training;
and an image of infrared light reflected by the obstacle on the right side of the sensing position center line is taken as a second positive sample and an image in which no infrared light is reflected by an obstacle on the right side of the sensing position center line is taken as a second negative sample; brightness characteristics of the images within the sampling frame rate are selected and extracted so as to transform the original data into the features that best reflect the essence of the classification, and the second classifier is then obtained by training.
7. The control method according to claim 6, wherein when the camera captures an image of infrared light reflected by the obstacle on the left side of the sensing position center line, the brightness of the image captured by the camera after the first infrared emission tube is controlled to turn on at time t0 of the control period is I10; at time t1 of the control period, the first infrared emission tube is controlled to turn off and the second infrared emission tube to turn on, and the brightness of the image captured by the camera is I11; wherein I10 > I11.
8. The control method according to claim 6, wherein when the camera captures an image of infrared light reflected by the obstacle on the right side of the sensing position center line, the brightness of the image captured by the camera after the first infrared emission tube is controlled to turn on at time t0 of the control period is I20; at time t2 of the control period, the brightness of the image captured by the camera is I21; wherein I20 < I21.
9. The control method according to claim 5, characterized in that the position determination of the obstacle is performed using the combined classifier: and continuously shifting a scanning sub-window in the image to be detected to calculate brightness characteristics of a window area, and screening the characteristics through the trained combined classifier to finally obtain a required classification result.
CN201810458440.XA 2018-05-14 2018-05-14 Obstacle avoidance device with infrared and visual characteristics and control method Active CN108415037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810458440.XA CN108415037B (en) 2018-05-14 2018-05-14 Obstacle avoidance device with infrared and visual characteristics and control method


Publications (2)

Publication Number Publication Date
CN108415037A (en) 2018-08-17
CN108415037B (en) 2023-06-13

Family

ID=63139287


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112902958A (en) * 2019-11-19 2021-06-04 珠海市一微半导体有限公司 Mobile robot based on laser visual information obstacle avoidance navigation
JP2021094118A (en) * 2019-12-16 2021-06-24 日立グローバルライフソリューションズ株式会社 Autonomously travelling type cleaner
CN113452978B (en) * 2021-06-28 2023-01-17 深圳银星智能集团股份有限公司 Obstacle detection method and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622589A (en) * 2012-03-13 2012-08-01 辉路科技(北京)有限公司 Multispectral face detection method based on graphics processing unit (GPU)
CN107045352A (en) * 2017-05-31 2017-08-15 珠海市微半导体有限公司 Based on how infrared robot obstacle-avoiding device, its control method and Robot side control method
CN107368079A (en) * 2017-08-31 2017-11-21 珠海市微半导体有限公司 Robot cleans the planing method and chip in path
CN208432736U (en) * 2018-05-14 2019-01-25 珠海市一微半导体有限公司 A kind of infrared and visual signature obstacle avoidance apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7859432B2 (en) * 2007-05-23 2010-12-28 Che Il Electric Wireing Devices Co., Ltd. Collision avoidance system based on detection of obstacles in blind spots of vehicle



Similar Documents

Publication Publication Date Title
US11725956B2 (en) Apparatus for acquiring 3-dimensional maps of a scene
EP3187895B1 (en) Variable resolution light radar system
CN108415037B (en) Obstacle avoidance device with infrared and visual characteristics and control method
US11579254B2 (en) Multi-channel lidar sensor module
CN108627813A (en) A kind of laser radar
CN1844852B (en) Method for generating hybrid image of scenery
CN109819173B (en) Depth fusion method based on TOF imaging system and TOF camera
CN110325879A (en) System and method for compress three-dimensional depth sense
CN101171833A (en) Digital cameras with triangulation autofocus systems and related methods
EP2824418A1 (en) Surround sensing system
US20200026031A1 (en) Bokeh control utilizing time-of-flight sensor to estimate distances to an object
CN111487648A (en) Non-visual field imaging method and system based on flight time
CN208432736U (en) A kind of infrared and visual signature obstacle avoidance apparatus
US11280907B2 (en) Depth imaging system
Hancock et al. High-performance laser range scanner
CN109061606A (en) Intellisense laser radar system and Intellisense laser radar control method
CN109618085B (en) Electronic equipment and mobile platform
US11448756B2 (en) Application specific integrated circuits for LIDAR sensor and multi-type sensor systems
KR20110082734A (en) Apparatus for controlling of auto focus and method thereof
CN100417915C (en) Scanner-free imaging range finding method and its range finder
TW202238172A (en) sensing system
Poppinga et al. A characterization of 3D sensors for response robots
Ohya et al. Intelligent escort robot moving together with human-methods for human position recognition
KR20190129551A (en) System and method for guiding object for unmenned moving body
US12032063B2 (en) Application specific integrated circuits for lidar sensor and multi-type sensor systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: 519000 room 105-514, No. 6, Baohua Road, Hengqin new area, Zhuhai, Guangdong

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant