WO2022188292A1 - Target detection and control method, system, device and storage medium - Google Patents

Target detection and control method, system, device and storage medium

Info

Publication number: WO2022188292A1
Authority: WO, WIPO (PCT)
Prior art keywords: image, laser, light, camera, target object
Application number: PCT/CN2021/100722
Other languages: English (en), French (fr)
Inventor: 谢濠键
Original Assignee: 北京石头创新科技有限公司
Priority date: 2021-03-08 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 北京石头创新科技有限公司
Priority to US 17/716,826 (published as US20220284707A1)
Publication of WO2022188292A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/08: Systems determining position data of a target, for measuring distance only
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/89: Lidar systems specially adapted for specific applications, for mapping or imaging
    • G01S 17/93: Lidar systems specially adapted for specific applications, for anti-collision purposes
    • G01S 17/931: Lidar systems specially adapted for specific applications, for anti-collision purposes of land vehicles
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/10: Image acquisition
    • G06V 10/12: Details of acquisition arrangements; constructional details thereof
    • G06V 10/14: Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V 10/141: Control of illumination
    • G06V 10/143: Sensing or illuminating at different wavelengths
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Definitions

  • the present disclosure relates to the technical field of computer vision, and in particular, to a target detection and control method, system, device, and readable storage medium.
  • Intelligent self-propelled equipment usually adopts advanced navigation technology to realize automatic driving.
  • Simultaneous Localization and Mapping (SLAM), as one of the underlying technologies, is widely used in autonomous driving, robotics, and drones.
  • the purpose of the present disclosure is to provide a target detection and control method, apparatus, system, device, and readable storage medium, to overcome, at least to some extent, the problem of unreasonable obstacle avoidance path planning.
  • a method for detecting a target, including: acquiring a first image captured by a camera device, a laser of a first predetermined wavelength being emitted while the first image is captured; acquiring a second image captured by the camera device, light of a second predetermined wavelength being emitted while the second image is captured; obtaining the distance between a target object and the camera device according to the first image; and identifying the target object according to the second image.
  • the first image includes a first laser image and a second laser image; when the first laser image is collected, the target object is irradiated by the laser of the first predetermined wavelength at a first angle, and when the second laser image is collected, the target object is irradiated by the laser of the first predetermined wavelength at a second angle.
  • obtaining the distance between the target object and the camera device according to the first image includes: calculating, from the first laser image and the second laser image, the three-dimensional coordinates, relative to the camera device, of the points at which the laser of the first predetermined wavelength irradiates the target object at the first angle and the second angle, respectively.
  • obtaining the distance between the target object and the camera device according to the first image also includes: acquiring a third image captured by the camera device while the laser of the first predetermined wavelength is not emitted; subtracting the pixels at corresponding positions in the third image from the pixels in the first image to obtain a corrected laser image; and obtaining the distance between the target object and the camera device according to the corrected laser image.
  • the camera device alternately captures the first image and the second image.
  • the first image is captured by the camera device under a preset first exposure parameter; the second image is captured by the camera device under a second exposure parameter, the second exposure parameter being derived from the imaging quality of the previously collected second image combined with the exposure parameter used when that previous second image was collected; the exposure parameters include exposure time and/or exposure gain.
  • a method for controlling a self-propelled device, comprising: acquiring first images captured at multiple time points by a camera device disposed on the self-propelled device, a laser of a first predetermined wavelength being emitted while each first image is captured; acquiring the multiple positions of the self-propelled device when the camera device captured images at the multiple time points; obtaining a point cloud from the first images captured by the camera device at the multiple time points and the multiple positions of the self-propelled device; and clustering the point cloud and performing navigation planning for the self-propelled device based on the clustering result.
  • performing navigation planning for the self-propelled device based on the clustering result includes: obtaining the clustering result, which includes target objects whose size exceeds a preset threshold; and controlling the self-propelled device to detour when the distance between the self-propelled device and a target object whose size exceeds the preset threshold is less than or equal to a preset distance, where the preset distance is greater than 0.
  • the first image includes a first laser image and a second laser image; when the first laser image is collected, the target object is irradiated by the laser of the first predetermined wavelength at a first angle, and when the second laser image is collected, the target object is irradiated by the laser of the first predetermined wavelength at a second angle.
  • the method further includes: acquiring a third image captured by the camera device, emission of the laser of the first predetermined wavelength being stopped while the third image is captured; and subtracting the pixels at corresponding positions in the third image from the pixels in the first image to obtain a corrected laser image.
  • obtaining the point cloud from the first images captured by the camera device at multiple time points and the multiple positions of the self-propelled device includes: obtaining the distance between the target object and the camera device from the multiple corrected laser images corresponding to the first images collected at the multiple time points and from the multiple positions of the self-propelled device.
  • a target detection control method, comprising: alternately controlling a laser emitting device and a supplementary light device to turn on, where a camera device captures a first image while the laser emitting device is on and captures a second image while the supplementary light device is on; the laser emitting device is used to emit laser light of a first predetermined wavelength and the supplementary light device is used to emit light of a second predetermined wavelength; obtaining the distance between a target object and the camera device according to the first image; and identifying the target object according to the second image.
  • the laser emitting device includes a first laser emitting device and a second laser emitting device, and the first image includes a first laser image and a second laser image; the camera device captures the first laser image while the first laser emitting device is on and captures the second laser image while the second laser emitting device is on; when the first laser image is collected, the target object is irradiated at a first angle by the laser of the first predetermined wavelength emitted by the first laser emitting device, and when the second laser image is collected, the target object is irradiated at a second angle by the laser of the first predetermined wavelength emitted by the second laser emitting device.
  • obtaining the distance between the target object and the camera device according to the first image includes: calculating, based on a laser ranging principle such as triangulation or time of flight (TOF), from the first laser image and the second laser image, the three-dimensional coordinates, relative to the camera device, of the points at which the laser of the first predetermined wavelength irradiates the target object at the first angle and the second angle, respectively (a minimal triangulation sketch follows below).
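For illustration only, here is a minimal sketch of the triangulation step for a single lit pixel, assuming a pinhole camera model, a vertical laser plane, and a known baseline between the emitter and the camera; all parameter names are ours, and the patent does not prescribe a particular implementation:

```python
import numpy as np

def triangulate_line_laser(u, v, fx, fy, cx, cy, baseline, theta):
    """Intersect the camera ray through pixel (u, v) with a vertical laser
    plane emitted at angle theta by an emitter offset by `baseline` along
    the camera's x axis. Returns (x, y, z) in the camera frame, or None."""
    # Back-project the pixel to a ray direction in the camera frame.
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Laser plane: cos(theta) * (x - baseline) - sin(theta) * z = 0.
    denom = np.cos(theta) * d[0] - np.sin(theta) * d[2]
    if abs(denom) < 1e-9:
        return None  # ray is (nearly) parallel to the laser plane
    t = baseline * np.cos(theta) / denom
    return t * d if t > 0 else None
```

Running this for every lit pixel of the laser stripe would yield the per-point camera-frame coordinates referred to above; a TOF sensor would replace this computation with a direct time measurement.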
  • the method further includes: controlling the laser emitting device and the supplementary light device to turn off, where the camera device captures a third image while the laser emitting device and the supplementary light device are off; obtaining the distance between the target object and the camera device according to the first image includes: subtracting the pixels at corresponding positions in the third image from the pixels in the first image to obtain a corrected laser image, and obtaining the distance between the target object and the camera device according to the corrected laser image.
  • the first image is captured by the camera device under a preset first exposure parameter; the second image is captured by the camera device under a second exposure parameter, the second exposure parameter being derived from the imaging quality of the previously collected second image combined with the exposure parameter used when that previous second image was collected; the exposure parameters include exposure time and/or exposure gain.
  • a target detection system, comprising: a laser emitting device, a supplementary light device, a camera device, and a target detection apparatus, wherein: the laser emitting device is used to emit laser light of a first predetermined wavelength; the supplementary light device is used to emit light of a second predetermined wavelength; the laser of the first predetermined wavelength and the light of the second predetermined wavelength may have the same or different wavelengths; the camera device is used to capture a first image while the laser of the first predetermined wavelength is emitted and to capture a second image while the light of the second predetermined wavelength is emitted.
  • the target detection apparatus includes: a ranging module configured to obtain the distance between a target object and the camera device according to the first image; and a target recognition module configured to identify the target object according to the second image.
  • a self-propelled device, comprising: a driving device for driving the self-propelled device to travel along a working surface; and a sensing system including a target detection system, the target detection system comprising a laser emitting device, a supplementary light device, a camera device, and an infrared filter; the laser emitting device is used to emit laser light of a first wavelength; the supplementary light device is used to emit infrared light of a second wavelength; the values of the first wavelength and the second wavelength may be equal or unequal; the infrared filter is arranged in front of the camera device to filter the light incident on the camera device, and the laser light of the first wavelength and the infrared light of the second wavelength can pass through the infrared filter and reach the camera device; the camera device is used for capturing images.
  • the laser emitting device and the supplementary light device alternately emit light of corresponding wavelengths.
  • when the laser emitting device is in operation, the camera device captures a first image; when the supplementary light device is in operation, the camera device captures a second image; the self-propelled device further includes a control unit, which obtains the distance between the target object and the camera device according to the first image and identifies the target object according to the second image.
  • an apparatus, comprising: a memory, a processor, and executable instructions stored in the memory and executable by the processor, the processor implementing any of the above methods when executing the executable instructions.
  • a computer-readable storage medium on which computer-executable instructions are stored, the executable instructions, when executed by a processor, implementing any of the above methods.
  • the target detection method provided by the embodiments of the present disclosure time-division multiplexes a single camera device, enabling obstacle recognition while ranging the target object, which improves the rationality of obstacle avoidance path planning and reduces cost.
  • in addition, based on the laser ranging results, the positions of obstacles in the direction of travel can be confirmed more accurately, and the navigation route planned accordingly is more accurate, further reducing accidental collisions with obstacles in the working environment.
  • FIG. 1A shows a schematic diagram of a target detection system in an embodiment of the present disclosure.
  • FIG. 1B shows a graph of transmittance versus wavelength for an optical filter, according to an exemplary embodiment.
  • FIG. 1C shows a left side view of the exemplary system of FIG. 1A .
  • FIG. 2 shows a flowchart of a target detection method in an embodiment of the present disclosure.
  • FIG. 3A shows a schematic flowchart of time-sharing control performed by a target detection system in an embodiment of the present disclosure.
  • FIG. 3B shows a schematic flowchart of time-division control performed by another target detection system in an embodiment of the present disclosure.
  • FIG. 3C shows a timing diagram of time-sharing control in an embodiment of the present disclosure according to FIG. 3B .
  • FIG. 3D shows a schematic flowchart of time-sharing control performed by yet another target detection system according to an embodiment of the present disclosure.
  • FIG. 3E shows a schematic flowchart of time-division control performed by yet another target detection system according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of an obstacle avoidance method for a self-propelled device, according to an exemplary embodiment.
  • FIG. 5 shows a block diagram of a target detection apparatus in an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of another object detection apparatus in an embodiment of the present disclosure.
  • FIG. 7 shows a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments can be embodied in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
  • the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale.
  • the same reference numerals in the drawings denote the same or similar parts, and thus their repeated descriptions will be omitted.
  • first, second, etc. are only used for descriptive purposes and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means at least two, such as two, three, etc., unless expressly and specifically defined otherwise.
  • the symbol "/" generally indicates that the associated objects are in an "or" relationship.
  • the term "connection" should be interpreted in a broad sense; for example, it may be an electrical connection or mutual communication, and it may be a direct connection or an indirect connection through an intermediate medium.
  • some related technologies use lidar for obstacle ranging. The lidar needs to rotate frequently and is easily damaged; moreover, it protrudes from the top of the self-propelled equipment, increasing the equipment's height, and because of its mounting position only obstacles at or above its level can be sensed.
  • other related intelligent self-propelled devices use line lasers or structured light for distance measurement and cannot identify obstacles, which may compromise the obstacle avoidance strategy for low obstacles and result in unreasonable movement paths being planned for the intelligent self-propelled device.
  • the present disclosure provides a target detection method: a first image captured by a camera device while a laser of a first predetermined wavelength is emitted and a second image captured by the camera device while light of a second predetermined wavelength is emitted are acquired; the distance between the target object and the camera device is obtained from the first image, and the target object is identified from the second image, so that obstacle recognition can be performed while the target object is being ranged, improving the rationality of obstacle avoidance path planning.
  • FIG. 1A illustrates an exemplary object detection system to which the object detection method of the present disclosure may be applied.
  • the target detection system 10 may be a mobile device, such as a cleaning robot, a service robot, etc.
  • the system 10 includes a camera device 102, a laser emitting device 104, a supplementary light device 106, and a control module (not shown in the figure).
  • the camera device 102 may be a camera including a lens and a charge-coupled device (CCD), and is also provided with an optical filter to ensure that only light of specific wavelengths can pass through the filter and be captured by the camera.
  • the laser emitting device 104 can be a line laser emitter, a structured light emitter, or a surface laser emitter, etc., and can be used to emit infrared laser light, for example, a line laser with a wavelength of 850 nm.
  • the supplementary light device 106 may be a supplementary light device capable of emitting infrared light within a certain wavelength band, which may include the wavelength of the laser light emitted by the laser emitting device 104 .
  • for example, an 850 nm narrow-band pass filter can be used; the relationship between the transmittance of the filter and the wavelength is shown in FIG. 1B, and the filter transmits infrared light in a band around 850 nm.
  • the control module may be a target detection device, and the specific implementation can be referred to FIG. 5 and FIG. 6 , which will not be described in detail here.
  • FIG. 1C shows a left side view of the exemplary system of FIG. 1A .
  • the infrared light emitted by the laser emitting device 104 and the supplementary light device 106 can irradiate obstacles in front of the system 10, and the camera device 102 can photograph the obstacles as they are illuminated by the laser emitting device 104 and by the supplementary light device 106, respectively, under time-sharing control by the control module.
  • for example, the laser emitting device 104 and the supplementary light device 106 are turned on alternately; the laser image (first image) collected by the camera device 102 while the laser emitting device 104 is on is used for ranging, and the second image collected while the supplementary light device 106 is on is used for obstacle identification.
  • when collecting the laser image, fixed exposure can be used, that is, fixed exposure parameters (including exposure time and exposure gain) can be set; when collecting the supplementary-light image, the exposure parameters are adjusted with reference to the previous frame in which a supplementary-light image was collected for object recognition.
  • specifically, the exposure time and exposure gain can be adjusted according to the imaging quality of that previous frame (such as picture brightness or the number of feature points in the picture).
  • the advantage of automatic exposure is that changing the exposure parameters improves imaging quality, thereby improving the obstacle recognition rate and the user experience; a minimal sketch of such a feedback rule is given below.
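As a concrete but non-normative reading of this feedback rule, the sketch below adjusts the next fill-light exposure from the mean brightness of the previous fill-light frame; the target value, bounds, and step clamp are hypothetical:

```python
import numpy as np

def next_exposure(prev_exposure_us, prev_gain, prev_image, target_brightness=110.0):
    """Derive the second exposure parameter from the imaging quality of the
    previous fill-light frame (here: mean pixel brightness)."""
    brightness = float(np.mean(prev_image))
    # Scale exposure toward the target brightness; clamp the step to avoid oscillation.
    ratio = float(np.clip(target_brightness / max(brightness, 1.0), 0.5, 2.0))
    exposure_us = float(np.clip(prev_exposure_us * ratio, 50.0, 30_000.0))
    gain = prev_gain
    if exposure_us >= 30_000.0 and brightness < target_brightness:
        gain = min(prev_gain * ratio, 16.0)  # raise gain only once exposure saturates
    return exposure_us, gain
```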
  • the numbers of imaging devices, laser emitting devices, and light-filling devices in FIG. 1A and FIG. 1C are only illustrative. According to implementation requirements, there may be any number of camera devices, laser emission devices, and light-filling devices.
  • for example, the laser emitting device 104 can be two line laser emitters arranged on the left and right sides of the camera device, or both can be arranged on the same side of the camera device.
  • when arranged on the left and right sides, the two line laser emitters are at the same height in the horizontal direction, and their optical axes intersect in the traveling direction of the self-propelled equipment; when arranged on the same side of the camera device, the two line laser emitters can be arranged side by side along the height direction of the self-propelled device.
  • according to this system, ranging and recognition are achieved at the same time by multiplexing one camera on the self-propelled device, so that obstacle recognition can be performed while the target object is being ranged, navigation paths can be planned better, the system is more compact, and costs are saved.
  • FIG. 2 is a flowchart of a target detection method according to an exemplary embodiment. The method shown in FIG. 2 can be applied, for example, to the above-mentioned target detection system 10.
  • the method 20 provided by the embodiment of the present disclosure may include the following steps.
  • in step S202, a first image captured by the camera device is acquired; a laser of a first predetermined wavelength is emitted while the first image is captured.
  • in step S204, a second image captured by the camera device is acquired; light of a second predetermined wavelength is emitted while the second image is captured.
  • the laser light of the first predetermined wavelength and the light of the second predetermined wavelength may have the same wavelength or different wavelengths, which are not limited herein.
  • the laser emitting device can be used for emitting laser light with a first predetermined wavelength, and the supplementary light device can be used for emitting light with a second predetermined wavelength, both of which can use an infrared light source.
  • the camera device can use a camera that passes only part of the infrared band, for example a camera fitted with a filter, to ensure that light with wavelengths between the first predetermined wavelength and the second predetermined wavelength can be collected by the camera; this filters out interference from external light sources as much as possible and ensures imaging accuracy.
  • the camera device alternately captures the first image and the second image; this can be realized by controlling the laser emitting device and the supplementary light device to turn on alternately and setting the exposure parameters of the camera device accordingly, for example making the time during which the laser emitting device emits laser light coincide with the camera device's exposure window.
  • the laser image is captured by the camera device under a first exposure parameter, which includes a preset fixed exposure time and gain; the supplementary-light image is captured by the camera device under a second exposure parameter, which is obtained from the imaging quality of the previous supplementary-light frame combined with the camera device's exposure parameters at that time.
  • for example, if the imaging quality of the previous supplementary-light frame was poor, the exposure parameters of the current frame are adjusted to values that help improve imaging quality; a control-loop sketch under hypothetical device interfaces is given below.
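To make the alternation concrete, here is a sketch of one capture cycle; the `camera`, `laser`, and `fill_light` handles and their methods are hypothetical stand-ins for whatever driver interface the hardware exposes:

```python
def capture_cycle(camera, laser, fill_light, fixed_exposure, auto_exposure):
    """One time-division cycle: a laser frame for ranging under fixed
    exposure, then a fill-light frame for recognition under auto exposure."""
    # Laser frame: the laser is on only while the camera is exposing.
    fill_light.off()
    laser.on()
    camera.set_exposure(*fixed_exposure)      # preset time and gain
    first_image = camera.capture()            # used for ranging
    laser.off()

    # Fill-light frame: exposure derived from the previous fill-light frame,
    # e.g. via next_exposure() sketched above.
    fill_light.on()
    camera.set_exposure(*auto_exposure)
    second_image = camera.capture()           # used for obstacle recognition
    fill_light.off()
    return first_image, second_image
```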
  • in step S206, the distance between the target object and the camera device is obtained according to the first image.
  • for example, the points at which the line laser irradiates the target object can be measured and their three-dimensional coordinates relative to the camera device calculated; combining these with the relative position of the camera device on the self-propelled device and the real-time SLAM coordinates of the self-propelled device, the three-dimensional coordinates in the SLAM coordinate system of each point on the line laser can be calculated, as the transform below makes explicit.
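In symbols (notation ours, not the patent's): if $R_c, \mathbf{t}_c$ is the fixed mounting transform of the camera on the device and $R_t, \mathbf{t}_t$ is the device's SLAM pose at the capture time $t$, then a camera-frame laser point $\mathbf{p}^{\mathrm{cam}}$ maps to the SLAM frame as

$$\mathbf{p}^{\mathrm{SLAM}} = R_t\left(R_c\,\mathbf{p}^{\mathrm{cam}} + \mathbf{t}_c\right) + \mathbf{t}_t.$$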
  • as the device moves, a point cloud of the objects encountered can be constructed; the point cloud is clustered, and obstacle avoidance processing is performed for objects that meet certain height and/or width thresholds (or the object can be handled accordingly when its height is between the above threshold and the height of obstacles the device itself can surmount).
  • for the specific implementation, reference can be made to FIG. 4.
  • in step S208, the target object is identified according to the second image.
  • for example, the global and/or local features of the object in three-dimensional space can be extracted by a neural network, such as a trained machine learning model, and the object can be classified by comparing its shape in the image with reference objects, so that different obstacle avoidance paths can be planned according to the category (such as easily entangled fabrics, wires, pet feces, charging bases, and the like); this maximizes cleaning coverage while avoiding unnecessary damage to the working environment, reduces the risk of getting stuck, and improves the user experience. An illustrative recognition sketch follows below.
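The patent does not prescribe a model architecture; purely as an illustration, recognition on the second image could look like the following, where `model` is any trained classifier with a Keras-style `predict` method and the category list is hypothetical:

```python
import numpy as np

CATEGORIES = ("fabric", "cable", "pet_feces", "base", "unknown")  # hypothetical

def identify_obstacle(second_image, model):
    """Run a trained recognizer on the fill-light frame and return the
    predicted obstacle category for category-specific avoidance planning."""
    x = second_image.astype(np.float32) / 255.0   # simple normalization
    scores = model.predict(x[np.newaxis, ...])    # shape (1, len(CATEGORIES))
    return CATEGORIES[int(np.argmax(scores))]
```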
  • according to the target detection method, by acquiring the first image captured by the camera device while the laser of the first predetermined wavelength is emitted and the second image captured while the light of the second predetermined wavelength is emitted, the distance between the target object and the camera device is obtained from the first image and the target object is identified from the second image, so that obstacles can be identified while the target object is being ranged, improving the rationality of obstacle avoidance path planning.
  • FIG. 3A shows a schematic flowchart of time-sharing control performed by a target detection system in an embodiment of the present disclosure.
  • FIG. 3A shows the case where one line laser emitter is provided.
  • the laser emitting device and the supplementary light device are alternately controlled to turn on; the camera device captures the first image while the laser emitting device is on and captures the second image while the supplementary light device is on; the laser emitting device emits laser light of a first predetermined wavelength, and the supplementary light device emits light of a second predetermined wavelength.
  • the distance between the target object and the camera is obtained according to the first image.
  • the target object is identified according to the second image.
  • Multiple laser transmitters can be used to obtain multiple first images.
  • for example, two line laser emitters can be arranged on the left and right sides of the camera device, and the first image includes a first laser image and a second laser image.
  • if the two lasers were turned on simultaneously, their images could be confused during identification, which would affect the generation of correct coordinates for the target object ahead.
  • the first laser image and the second laser image are acquired by the camera in a time-sharing manner.
  • when the first laser image is collected, the left line laser emitter emits a laser of the first predetermined wavelength that irradiates the target object at a first angle; when the second laser image is collected, the right line laser emitter emits a laser of the first predetermined wavelength that irradiates the target object at a second angle; the first angle and the second angle are the angles at which the corresponding laser emitters irradiate the target object.
  • the values of the first angle and the second angle may be the same or different, which is not limited here.
  • for example, the two line laser emitters can be placed side by side on the self-propelled device in the horizontal direction with their optical axes along the traveling direction of the self-propelled device, in which case the first angle and the second angle are the same.
  • FIG. 3B shows a schematic flowchart of time-division control performed by another target detection system in an embodiment of the present disclosure.
  • in this embodiment, two line laser emitters are arranged on the left and right sides of the camera device, as shown in FIG. 3B.
  • FIG. 3C shows a time-sharing control timing diagram in an embodiment of the present disclosure according to FIG. 3B.
  • as shown in FIG. 3C, the camera device uses fixed exposure at time t1, and the time during which the left line laser emitter is on (S302) coincides with the camera device's exposure window; the camera device uses fixed exposure at time t2, and the time during which the right line laser emitter is on (S306) coincides with the camera device's exposure window; at time t3 the supplementary light device is turned on, the camera device uses automatic exposure, and the exposure parameters refer to the previous frame used for object recognition.
  • the exposure parameters include exposure time and/or exposure gain; that is, the first image is captured by the camera device under the preset first exposure parameter, and the second image is captured under the second exposure parameter, which can be obtained from the imaging quality of the previous second image combined with the exposure parameters at that time.
  • in some embodiments, the camera device may capture a third image in step S304, with emission of the laser of the first predetermined wavelength and the light of the second predetermined wavelength stopped while the third image is captured, so that the target object is illuminated by neither laser nor supplementary light.
  • the third image is used in operations with the images from steps S302 and S306 to remove background noise and further reduce the influence of lamps, strong light, and the like.
  • that is, with the laser emitting device and the supplementary light device turned off, one photo is taken; the pixels at corresponding positions in the third image are then subtracted from the pixels in the first image to obtain a corrected laser image, reducing as much as possible the effect of external light sources on the line laser.
  • for example, when the target object is illuminated by natural light, a natural-light image is obtained and used to optimize the laser ranging result for the scene under sunlight; the distance between the target object and the camera device can then be obtained from the corrected laser image. A minimal sketch of this subtraction follows below.
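A minimal sketch of that correction, assuming 8-bit grayscale frames of identical size; the clamp at zero keeps ambient light from masquerading as laser returns:

```python
import numpy as np

def corrected_laser_image(first_image, third_image):
    """Subtract the ambient-only frame (laser and fill light off) from the
    laser frame at corresponding pixel positions."""
    diff = first_image.astype(np.int16) - third_image.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```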
  • FIG. 3D shows a schematic flowchart of time-sharing control performed by yet another target detection system according to an embodiment of the present disclosure.
  • in this embodiment, the first predetermined wavelength of the laser emitted by the line laser emitter is different from the second predetermined wavelength of the light emitted by the supplementary light device, as shown in FIG. 3D.
  • FIG. 3E shows a schematic flowchart of time-division control performed by yet another target detection system according to an embodiment of the present disclosure.
  • the positions of the two line laser emitters are staggered up and down, and the lasers emitted when turned on at the same time do not intersect.
  • in this case, a corrected laser image can be obtained by subtracting the pixels at corresponding positions in the second laser image from the pixels in the first laser image; the above steps are repeated as the self-propelled device moves, and for the specific implementation of acquiring an image and processing according to the image, reference may be made to FIG. 3A and FIG. 3C.
  • FIG. 4 is a flowchart of an obstacle avoidance method for a self-propelled device according to an exemplary embodiment. The method shown in FIG. 4 can be applied, for example, to the above-mentioned target detection system 10.
  • the method 40 provided by the embodiment of the present disclosure may include the following steps.
  • in step S402, first images captured at multiple time points by a camera device disposed on the self-propelled device are acquired; a laser of a first predetermined wavelength is emitted while each first image is captured.
  • in step S404, the multiple positions of the self-propelled device when the camera device captured images at the multiple time points are acquired; the self-propelled device moves relative to the target object across these time points.
  • next, a point cloud is obtained from the first images captured by the camera device at the multiple time points and the multiple positions of the self-propelled device. For example, when the self-propelled device is at coordinate A, it can measure the points where the line laser irradiates the target object and calculate the SLAM three-dimensional coordinates of those points; after the device moves or rotates to coordinate B, if the line laser again hits the target object, ranging is performed again and the SLAM three-dimensional coordinates of other points on the target object can be calculated. Through the continuous motion of the self-propelled device, the point cloud of the object is accumulated, as sketched below.
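A sketch of that accumulation, assuming each frame contributes camera-frame points from laser ranging together with the 4x4 SLAM pose of the device at capture time, plus a fixed camera-to-device mounting transform (all names ours):

```python
import numpy as np

def accumulate_point_cloud(frames, camera_to_device):
    """Fuse per-frame laser points into the SLAM frame. `frames` is a list
    of (device_pose, points) pairs: device_pose is a 4x4 SLAM pose matrix,
    points an (N, 3) array of camera-frame coordinates."""
    cloud = []
    for device_pose, points in frames:
        world_from_camera = device_pose @ camera_to_device
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        cloud.append((world_from_camera @ homogeneous.T).T[:, :3])
    return np.vstack(cloud) if cloud else np.empty((0, 3))
```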
  • the corrected laser image may also be obtained according to FIG. 2 to FIG. 3D , and the point cloud may be obtained according to the corrected laser image.
  • in step S408, the point cloud is clustered, and obstacle avoidance processing is performed for target objects whose size exceeds a preset threshold after clustering.
  • for example, when the distance between the self-propelled device and such a target object is less than or equal to a preset distance, the self-propelled device can be controlled to detour; the preset distance is greater than 0, and its value can be related to the identified obstacle type, that is, different types of identified obstacles can have different preset distances; of course, it can also be a fixed value, suitable for target objects whose type cannot be determined. An illustrative clustering and detour sketch follows below.
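For illustration, one plausible realization of the clustering and detour test, using DBSCAN as the clustering algorithm (the patent does not specify one); the size threshold and per-category detour distances are hypothetical values:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # one of many possible clustering choices

DETOUR_DISTANCE_M = {"cable": 0.20, "fabric": 0.25, "pet_feces": 0.35, "unknown": 0.15}

def clusters_to_detour(cloud, device_xy, category="unknown", size_threshold=0.05):
    """Cluster the SLAM-frame point cloud and return clusters whose extent
    exceeds the size threshold and which lie within the preset distance."""
    labels = DBSCAN(eps=0.03, min_samples=5).fit_predict(cloud)
    detour = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        pts = cloud[labels == label]
        extent = pts.max(axis=0) - pts.min(axis=0)   # bounding-box size
        if extent.max() < size_threshold:
            continue
        gap = np.linalg.norm(pts[:, :2].mean(axis=0) - device_xy)
        if gap <= DETOUR_DISTANCE_M.get(category, 0.15):
            detour.append(pts)
    return detour
```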
  • in some embodiments, based on the principle of monocular ranging, the synchronized coordinates of at least some points on the target object may be obtained from the second image captured while the supplementary light is on; according to these synchronized coordinates, the points are added to the initial point cloud of the target object (that is, the point cloud obtained from images captured by the camera device while the laser emitting device was emitting), yielding a dense point cloud of the target object.
  • in other words, the current SLAM coordinates of the self-propelled device can be combined, through monocular ranging, with the point cloud information obtained from the first image to construct an object point cloud for more accurate obstacle avoidance; for example, some point clouds and their categories are computed, and then, according to the three-dimensional information calculated by monocular ranging, the identified objects are associated with the point cloud data to obtain denser point cloud data.
  • according to the control method for the self-propelled device provided by the embodiments of the present disclosure, ranging and recognition are achieved simultaneously by multiplexing the camera on the device, and the recognition result is used to accurately restore the object point cloud, improving the accuracy and rationality of the obstacle avoidance strategy.
  • the embodiments of the present disclosure also provide a self-propelled device, including: a driving device for driving the self-propelled device to travel along a working surface; and a perception system including a target detection system, the target detection system comprising a laser emitting device, a supplementary light device, a camera device, and an infrared filter.
  • the laser emitting device is used to emit laser light of a first wavelength; the supplementary light device is used to emit infrared light of a second wavelength; the values of the first and second wavelengths may be equal or unequal; the infrared filter is arranged in front of the camera device to filter the light incident on it, and the laser light of the first wavelength and the infrared light of the second wavelength can pass through the infrared filter and reach the camera device, which is used for capturing images.
  • the laser emitting device and the supplementary light device alternately emit light of corresponding wavelengths.
  • when the laser emitting device is working, the camera device captures the first image; when the supplementary light device is working, the camera device captures the second image.
  • the self-propelled device further includes a control unit, the control unit obtains the distance between the target object and the camera device according to the first image, and identifies the target object according to the second image.
  • FIG. 5 shows a block diagram of a target detection apparatus according to an exemplary embodiment.
  • the apparatus shown in FIG. 5 can be applied to, for example, the above-mentioned target detection system 10 .
  • the apparatus 50 may include a laser image acquisition module 502 , a fill-light image acquisition module 504 , a ranging module 506 , and a target identification module 508 .
  • the laser image acquisition module 502 may be configured to acquire the first image acquired by the camera device, and emit laser light of the first predetermined wavelength when acquiring the first image.
  • the supplementary light image acquisition module 504 may be configured to acquire a second image acquired by the camera device, and emit light of a second predetermined wavelength when acquiring the second image.
  • the ranging module 506 can be used to obtain the distance between the target object and the camera device according to the first image.
  • the target recognition module 508 can be used to recognize the target object according to the second image.
  • FIG. 6 shows a block diagram of another target detection apparatus according to an exemplary embodiment.
  • the apparatus shown in FIG. 6 can be applied to, for example, the above-mentioned target detection system 10 .
  • an apparatus 60 may include a laser image acquisition module 602, a background image acquisition module 603, a supplementary light image acquisition module 604, a ranging module 606, a target recognition module 608, a laser point cloud acquisition module 610, a synchronization coordinate calculation module 612, an accurate point cloud restoration module 614, a point cloud clustering module 616, and a path planning module 618; the ranging module 606 may include a denoising module 6062 and a distance calculation module 6064.
  • the laser image acquisition module 602 may be configured to acquire the first image acquired by the camera device, and emit laser light of the first predetermined wavelength when acquiring the first image.
  • the first image includes a first laser image and a second laser image; when the first laser image is collected, the target object is irradiated at a first angle by a laser of the first predetermined wavelength, and when the second laser image is collected, it is irradiated at a second angle.
  • the first image is acquired by a camera device under preset first exposure parameters, wherein the exposure parameters include exposure time and/or exposure gain.
  • the background image acquisition module 603 may be configured to acquire the third image acquired by the camera device, and stop emitting the laser light of the first predetermined wavelength and the light of the second predetermined wavelength when acquiring the third image.
  • the supplementary light image acquisition module 604 may be configured to acquire a second image acquired by the camera device, and emit light of a second predetermined wavelength when acquiring the second image.
  • the camera device alternately captures the first image and the second image.
  • the second image is captured by the camera device under the second exposure parameter, which is obtained from the imaging quality of the previously collected second image combined with the exposure parameters at that time.
  • the ranging module 606 can be used to obtain the distance between the target object and the camera device according to the first image.
  • the ranging module 606 can also be used to calculate, based on the laser ranging principle and from the first laser image and the second laser image, the three-dimensional coordinates, relative to the camera device, of the points at which the laser of the first predetermined wavelength irradiates the target object at the first angle and the second angle.
  • the denoising module 6062 can be used to subtract the pixels at corresponding positions in the third image from the pixels in the first image to obtain a corrected laser image.
  • the distance calculation module 6064 can be used to obtain the distance between the target object and the camera device according to the corrected laser image.
  • the target recognition module 608 can be used to recognize the target object according to the second image.
  • the laser point cloud obtaining module 610 may be configured to obtain a point cloud according to the first image acquired by the camera at multiple time points and the multiple positions where the self-propelled device is located.
  • the synchronization coordinate calculation module 612 can be used to acquire multiple positions of the self-propelled device when the camera device collects images at multiple time points.
  • the accurate point cloud restoration module 614 may be configured to add supplementary points, according to their synchronization coordinates, to the initial point cloud of the target object to obtain a dense point cloud of the target object.
  • the point cloud clustering module 616 may be used to cluster point clouds.
  • the path planning module 618 may be configured to perform obstacle avoidance processing on target objects whose size exceeds a preset threshold after clustering.
  • the preset distance can be related to the identified obstacle type, that is, different types of identified obstacles can have different preset distances; of course, its value can also be a fixed value, suitable for target objects whose type cannot be determined.
  • the path planning module 618 can also be used to control the self-propelled device to detour when the distance from the target object whose size exceeds the preset threshold is less than or equal to the preset distance; wherein the preset distance is greater than 0.
  • FIG. 7 shows a schematic structural diagram of an electronic device in an embodiment of the present disclosure. It should be noted that the device shown in FIG. 7 is only an example of a computer system, and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the apparatus 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703.
  • in the RAM 703, various programs and data necessary for the operation of the device 700 are also stored.
  • the CPU 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
  • an input/output (I/O) interface 705 is also connected to the bus 704.
  • the following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, etc.; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage section 708 including a hard disk; and a communication section 709 including a network interface card such as a LAN card or a modem.
  • the communication section 709 performs communication processing via a network such as the Internet.
  • a drive 710 is also connected to the I/O interface 705 as needed.
  • a removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as needed.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication portion 709 and/or installed from the removable medium 711 .
  • when the computer program is executed by the central processing unit (CPU) 701, the above-described functions defined in the system of the present disclosure are executed.
  • the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • the modules involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • the described modules can also be provided in a processor, which can for example be described as: a processor including a laser image acquisition module, a supplementary light image acquisition module, a ranging module, and a target identification module.
  • the names of these modules do not, in some cases, limit the modules themselves; for example, the laser image acquisition module can also be described as "a module that captures images of the target object irradiated by laser light through the connected camera device".
  • the present disclosure also provides a computer-readable medium.
  • the computer-readable medium may be included in the device described in the above-mentioned embodiments, or it may exist alone without being assembled into the device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by a device, the device is caused to: acquire a first image captured by the camera device, a laser of a first predetermined wavelength being emitted while the first image is captured; acquire a second image captured by the camera device, light of a second predetermined wavelength being emitted while the second image is captured; obtain the distance between a target object and the camera device according to the first image; and identify the target object according to the second image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A target detection method, apparatus, system (10), device, and storage medium, relating to the technical field of computer vision. The method includes: acquiring a first image captured by a camera device (102), a laser of a first predetermined wavelength being emitted while the first image is captured (S202); acquiring a second image captured by the camera device (102), light of a second predetermined wavelength being emitted while the second image is captured (S204), where the laser of the first predetermined wavelength and the light of the second predetermined wavelength may have the same or different wavelengths; obtaining the distance between a target object and the camera device according to the first image (S206); and identifying the target object according to the second image (S208). The method achieves ranging and identification of obstacles using only one camera device, improves the rationality of path planning, and saves hardware cost.

Description

Target detection and control method, system, device and storage medium
The present disclosure is based on, and claims priority to, Chinese patent application No. 202110264971.7, filed on March 8, 2021 and entitled "Target detection and control method, system, device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of computer vision, and in particular to a target detection and control method, system, device, and readable storage medium.
Background
Intelligent self-propelled equipment usually adopts advanced navigation technology to realize automatic driving. Simultaneous Localization and Mapping (SLAM), as one of the underlying technologies, is widely used in autonomous driving, robotics, drones, and other fields.
To date, how to improve the rationality of obstacle avoidance path planning remains a problem to be solved.
The above information disclosed in this Background section is only for enhancing the understanding of the background of the present disclosure, and therefore it may contain information that does not constitute prior art known to a person of ordinary skill in the art.
发明内容
本公开的目的在于提供一目标检测及控制方法、装置、系统、设备及可读存储介质,至少在一定程度上克服避障路径规划不合理的问题。
本公开的其他特性和优点将通过下面的详细描述变得显然,或部分地通过本公开的实践而习得。
根据本公开的一方面,提供一种目标检测方法,包括:获取由摄像装置采集获得的第一图像,采集所述第一图像时发出第一预定波长的激光;获取由所述摄像装置采集获得的第二图像,采集所述第二图像时发出第二预定波长的光;根据所述第一图像获得目标物体与所述摄像装置之间的距离;根据所述第二图像对所述目标物体进行识别。
根据本公开的一实施例,所述第一图像包括第一激光图像和第二激光图像,采集所述第一激光图像时所述目标物体由所述第一预定波长的激光以第一角度照射,采集所述第二激光图像时所述目标物体由所述第一预定波长的激光以第二角度照射;所述根据所述第一图像获得目标物体与所述摄像装置之间的距离包括:根据所述第一激光图像和所述第二激光图像计算出所述第一预定波长的激光分别以所述第一角度和所述第二角度照射到所述目标物体上的点相对于所述摄像装置的三维坐标。
根据本公开的一实施例,还包括:获取由所述摄像装置采集获得的第三图像,采集所述第三图像时停止发射所述第一预定波长的激光和所述第二预定波长的光;所述根据所述第一 图像获得目标物体与所述摄像装置之间的距离还包括:将所述第一图像中的像素点与所述第三图像中对应位置的像素点做差,获得修正激光图像;根据所述修正激光图像获得所述目标物体与所述摄像装置之间的距离。
根据本公开的一实施例,所述摄像装置交替采集所述第一图像与所述第二图像。
根据本公开的一实施例,所述第一图像由所述摄像装置在预设的第一曝光参数下采集获得;所述第二图像由所述摄像装置在第二曝光参数下采集获得,所述第二曝光参数根据采集的前一帧第二图像的成像质量并结合采集所述前一帧第二图像时的曝光参数获得;其中,曝光参数包括曝光时间和/或曝光增益。
根据本公开的另一方面,提供一种自行走设备控制方法,包括:获取由设置在自行走设备上的摄像装置在多个时间点采集获得的第一图像,采集所述第一图像时发出第一预定波长的激光;获取所述摄像装置在多个时间点采集图像时所述自行走设备所在的多个位置;根据所述摄像装置在多个时间点采集获得的第一图像和所述自行走设备所在的多个位置获得点云;对所述点云进行聚类,并对于聚类后的结果对所述自行走设备进行导航规划。
根据本公开的一实施例,所述对于聚类后的结果对所述自行走设备进行导航规划,包括:获得所述聚类的结果,所述聚类的结果包括尺寸超过预设阈值的目标物体;在所述自行走设备与所述尺寸超过预设阈值的目标物体之间的距离小于或等于预设距离时,控制所述自行走设备绕行;其中,所述预设距离大于0。
根据本公开的一实施例,所述第一图像包括第一激光图像和第二激光图像,采集所述第一激光图像时所述目标物体由所述第一预定波长的激光以第一角度照射,采集所述第二激光图像时所述目标物体由所述第一预定波长的激光以第二角度照射。
根据本公开的一实施例,还包括:获取由所述摄像装置采集获得的第三图像,采集所述第三图像时停止发射所述第一预定波长的激光;将所述第一图像中的像素点与所述第三图像中对应位置的像素点做差,获得修正激光图像;所述根据所述摄像装置在多个时间点采集获得的第一图像和所述自行走设备所在的多个位置获得点云包括:根据与所述多个时间点采集的第一图像对应的多个修正激光图像和所述自行走设备所在的多个位置获得所述目标物体与所述摄像装置之间的距离。
根据本公开的又一方面,提供一种目标检测控制方法,包括:交替控制开启激光发射装置和补光装置;其中,摄像装置在所述激光发射装置开启时采集获得第一图像,在所述补光装置开启时采集获得第二图像;所述激光发射装置用于发射第一预定波长的激光,所述补光装置用于发射第二预定波长的光;根据所述第一图像获得目标物体与所述摄像装置之间的距离;根据所述第二图像对所述目标物体进行识别。
根据本公开的一实施例,所述激光发射装置包括第一激光发射装置和第二激光发射装置,所述第一图像包括第一激光图像和第二激光图像,所述摄像装置在所述第一激光发射装置开启时采集获得所述第一激光图像,所述摄像装置在所述第二激光发射装置开启时采集获得所述第二激光图像,采集所述第一激光图像时所述目标物体由所述第一激光发射装置发射出的第一预定波长的激光以第一角度照射,采集所述第二激光图像时所述目标物体由所述第 一激光发射装置发射出的第一预定波长的激光以第二角度照射;所述根据所述第一图像获得目标物体与所述摄像装置之间的距离包括:基于三角测距或TOF(Time of Flight)等激光测距原理根据所述第一激光图像和所述第二激光图像计算出所述第一预定波长的激光分别以所述第一角度和所述第二角度照射到所述目标物体上的点相对于所述摄像装置的三维坐标。
根据本公开的一实施例,还包括:控制关闭所述激光发射装置和所述补光装置;其中,所述摄像装置在所述激光发射装置和所述补光装置关闭时采集获得第三图像;所述根据所述第一图像获得目标物体与所述摄像装置之间的距离包括:将所述第一图像中的像素点与所述第三图像中对应位置的像素点做差,获得修正激光图像;根据所述修正激光图像获得所述目标物体与所述摄像装置之间的距离。
根据本公开的一实施例,所述第一图像由所述摄像装置在预设的第一曝光参数下采集获得;所述第二图像由所述摄像装置在第二曝光参数下采集获得,所述第二曝光参数根据采集的前一帧第二图像的成像质量并结合采集所述前一帧第二图像时的曝光参数获得;其中,曝光参数包括曝光时间和/或曝光增益。
根据本公开的再一方面,提供一种目标检测系统,包括:激光发射装置、补光装置、摄像装置和目标检测装置,其中:所述激光发射装置,用于发射第一预定波长的激光;所述补光装置,用于发射第二预定波长的光;第一预定波长的激光与第二预定波长的光可为相同波长或不同波长;所述摄像装置,用于在发出所述第一预定波长的激光时采集第一图像;在发出所述第二预定波长的光时采集第二图像;所述目标检测装置,包括:测距模块,用于根据所述第一图像获得目标物体与所述摄像装置之间的距离;目标识别模块,用于根据所述第二图像对所述目标物体进行识别。
根据本公开的再一方面,提供一种自行走设备,包括:驱动装置,用于驱动所述自行走设备沿工作表面行走;感知系统,包括目标检测系统,所述目标检测系统包括激光发射装置、补光装置、摄像装置和红外滤光片;所述激光发射装置用于发射第一波长的激光;所述补光装置用于发射第二波长的红外光;所述第一波长的值和所述第二波长的值可以相等也可以不相等;所述红外滤光片设置在所述摄像装置的前方,用于对入射到所述摄像装置的光进行过滤,所述第一波长的激光和所述第二波长的红外光能通过所述红外滤光片入射到所述摄像装置;所述摄像装置用于采集图像。
根据本公开的一实施例,所述激光发射装置和所述补光装置交替发射相应波长的光。
According to another embodiment of the present disclosure, the camera captures a first image while the laser emitting device is operating, and captures a second image while the fill-light device is operating. The self-propelled device further includes a control unit, which obtains the distance between a target object and the camera according to the first image and recognizes the target object according to the second image.
According to still another aspect of the present disclosure, a device is provided, including a memory, a processor, and executable instructions stored in the memory and runnable on the processor, where the processor, when executing the executable instructions, implements any of the methods described above.
According to still another aspect of the present disclosure, a computer-readable storage medium is provided, having computer-executable instructions stored thereon, where the executable instructions, when executed by a processor, implement any of the methods described above.
The target detection method provided by the embodiments of the present disclosure time-multiplexes a single camera, so that obstacle recognition is performed while the target object is being ranged, which improves the rationality of obstacle-avoidance path planning and saves cost. In addition, based on the laser ranging results, the positions of obstacles in the direction of travel can be determined more precisely, so the navigation routes planned accordingly are more accurate, further reducing accidental collisions with obstacles in the working environment.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the detailed description of its exemplary embodiments with reference to the accompanying drawings.
FIG. 1A is a schematic diagram of a target detection system according to an embodiment of the present disclosure.
FIG. 1B shows the relationship between transmittance and wavelength of a filter according to an exemplary embodiment.
FIG. 1C is a left side view of the exemplary system in FIG. 1A.
FIG. 2 is a flowchart of a target detection method according to an embodiment of the present disclosure.
FIG. 3A is a schematic flowchart of time-division control performed by a target detection system according to an embodiment of the present disclosure.
FIG. 3B is a schematic flowchart of time-division control performed by another target detection system according to an embodiment of the present disclosure.
FIG. 3C is a time-division control timing diagram according to an embodiment of the present disclosure, based on FIG. 3B.
FIG. 3D is a schematic flowchart of time-division control performed by yet another target detection system according to an embodiment of the present disclosure.
FIG. 3E is a schematic flowchart of time-division control performed by still another target detection system according to an embodiment of the present disclosure.
FIG. 4 is a flowchart of an obstacle-avoidance method for a self-propelled device according to an exemplary embodiment.
FIG. 5 is a block diagram of a target detection apparatus according to an embodiment of the present disclosure.
FIG. 6 is a block diagram of another target detection apparatus according to an embodiment of the present disclosure.
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete and will fully convey the concepts of the exemplary embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, so repeated descriptions thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will recognize that the technical solutions of the present disclosure may be practiced while omitting one or more of the specific details, or that other methods, components, steps, and the like may be employed. In other instances, well-known structures, methods, devices, implementations, or operations are not shown or described in detail so as not to obscure aspects of the present disclosure.
In addition, the terms "first", "second", and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the present disclosure, "multiple" means at least two, for example two or three, unless otherwise expressly and specifically defined. The symbol "/" generally indicates an "or" relationship between the associated objects.
In the present disclosure, unless otherwise expressly specified and defined, terms such as "connected" should be understood broadly; for example, a connection may be an electrical connection or mutual communication, and may be a direct connection or an indirect connection through an intermediary. A person of ordinary skill in the art can understand the specific meanings of the above terms in the present disclosure according to the specific circumstances.
Some related technologies use LiDAR for obstacle ranging. A LiDAR must rotate frequently and is prone to damage, and since it protrudes from the top of a self-propelled device, it increases the device's height; moreover, because of its mounting position, only obstacles at or above its level can be sensed. Other related intelligent self-propelled devices use line lasers or structured light for ranging, which cannot recognize obstacles; this may affect the avoidance strategy for low obstacles and lead to unreasonable planned movement paths for the device. The present disclosure therefore provides a target detection method: a first image captured by a camera while a laser of a first predetermined wavelength is emitted and a second image captured by the camera while light of a second predetermined wavelength is emitted are acquired; the distance between a target object and the camera is obtained from the first image, and the target object is recognized from the second image. Ranging and obstacle recognition can thus be performed on the target object at the same time, improving the rationality of obstacle-avoidance path planning.
FIG. 1A shows an exemplary target detection system to which the target detection method of the present disclosure can be applied.
As shown in FIG. 1A, the target detection system 10 may be a mobile device such as a sweeping robot or a service robot. The system 10 includes a camera 102, a laser emitting device 104, a fill-light device 106, and a control module (not shown). The camera 102 may be a camera with a lens and a charge-coupled device (CCD), and is further provided with a filter to ensure that only light of specific wavelengths passes through the filter and is imaged by the camera. The laser emitting device 104 may be a line laser emitter, a structured light emitter, a surface laser emitter, or the like, and may be used to emit an infrared laser, for example a line laser with a wavelength of 850 nm. The fill-light device 106 may emit infrared light within a certain band, which may include the wavelength of the laser emitted by the laser emitting device 104. For example, an 850 nm narrow band-pass filter may be used; the relationship between its transmittance and wavelength is shown in FIG. 1B, and the filter also transmits infrared light in a band around 850 nm. The control module may be a target detection apparatus; for specific implementations, reference may be made to FIG. 5 and FIG. 6, which will not be detailed here.
FIG. 1C shows a left side view of the exemplary system in FIG. 1A. As shown in FIG. 1C, the infrared light emitted by the laser emitting device 104 and the fill-light device 106 can illuminate obstacles in front of the system 10, and the camera 102 can photograph the obstacles illuminated by the laser emitting device 104 and the fill-light device 106 separately. The control module can perform time-division control; for example, the laser emitting device 104 and the fill-light device 106 are turned on alternately, the laser image (first image) captured by the camera 102 is used for ranging, and the second image captured while the fill-light device 106 is on is used for obstacle recognition. When capturing laser images, the camera 102 may use fixed exposure, i.e., preset fixed exposure parameters such as exposure time and exposure gain; when capturing fill-light images (second images), the camera 102 may use automatic exposure, i.e., the exposure parameters are adjusted with reference to the previous frame captured for object recognition under fill light, based on that frame's imaging quality (such as image brightness or the number of feature points in the image). The benefit of automatic exposure is that changing the exposure parameters improves imaging quality, which raises the obstacle recognition rate and improves the user experience.
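By way of illustration only, a minimal Python sketch of the exposure scheme just described follows; the brightness-based quality heuristic, the thresholds, and the parameter limits are assumptions for the sketch, not features of system 10.
```python
import numpy as np

FIXED_LASER_EXPOSURE = {"time_us": 500, "gain": 1.0}  # preset parameters for laser frames

def frame_quality(img: np.ndarray) -> float:
    """Crude imaging-quality score: mean brightness normalized to [0, 1]."""
    return float(img.mean()) / 255.0

def next_fill_exposure(prev: dict, prev_img: np.ndarray) -> dict:
    """Derive the next fill-light exposure from the previous fill-light frame's quality."""
    q = frame_quality(prev_img)
    scale = 1.25 if q < 0.35 else 0.8 if q > 0.65 else 1.0  # brighten dark frames, dim bright ones
    return {
        "time_us": int(np.clip(prev["time_us"] * scale, 100, 10_000)),
        "gain": float(np.clip(prev["gain"] * scale, 1.0, 16.0)),
    }
```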
It should be understood that the numbers of cameras, laser emitting devices, and fill-light devices in FIG. 1A and FIG. 1C are merely illustrative; any number of each may be provided as needed. For example, the laser emitting device 104 may be two line laser emitters placed on the left and right sides of the camera, or both on the same side of the camera. When placed on the left and right sides, the two line laser emitters are at the same height in the horizontal direction and their optical axes intersect in the direction of travel of the self-propelled device; when placed on the same side, the two line lasers may be arranged side by side along the height direction of the self-propelled device.
According to the target detection system provided by the embodiments of the present disclosure, a single camera on the self-propelled device is multiplexed to achieve both ranging and recognition, so that obstacle recognition can be performed while the target object is being ranged, the navigation path can be better planned, the structure of the system is more compact, and cost is saved.
FIG. 2 is a flowchart of a target detection method according to an exemplary embodiment. The method shown in FIG. 2 can be applied, for example, to the target detection system 10 described above.
Referring to FIG. 2, the method 20 provided by an embodiment of the present disclosure may include the following steps.
In step S202, a first image captured by the camera is acquired, a laser of a first predetermined wavelength being emitted while the first image is captured. For specific implementations of the devices involved in this method, reference may be made to FIG. 1A to FIG. 1C, which will not be repeated here.
In step S204, a second image captured by the camera is acquired, light of a second predetermined wavelength being emitted while the second image is captured. The laser of the first predetermined wavelength and the light of the second predetermined wavelength may have the same or different wavelengths, which is not limited here. The laser emitting device may emit the laser of the first predetermined wavelength, the fill-light device may emit the light of the second predetermined wavelength, and both may use infrared light sources. The camera may pass only part of the infrared band, for example by using a camera fitted with a filter, ensuring that light with wavelengths between the above first and second predetermined wavelengths can be captured by the camera; this filters out interference from external light sources as much as possible and ensures imaging accuracy.
In some embodiments, for example, the camera captures the first image and the second image alternately. This can be achieved by controlling the laser emitting device and the fill-light device to turn on alternately and setting the camera's exposure parameters accordingly; for example, the interval during which the laser emitting device is on and emitting coincides with the camera's exposure time. The laser image is captured by the camera under first exposure parameters, which include a preset fixed exposure time and gain; the fill-light image is captured by the camera under second exposure parameters, which are derived from the imaging quality of the previous fill-light frame combined with the camera's exposure parameters at that time. For example, if the imaging quality of the previous fill-light frame was poor, the exposure parameters for the current frame are adjusted to values that help improve imaging quality.
In step S206, the distance between the target object and the camera is obtained according to the first image.
In some embodiments, for example, when more than one laser emitter (such as line laser emitters) is used (e.g., two), the three-dimensional coordinates, relative to the camera, of each point where the line laser strikes the target object can be calculated based on the laser ranging principle and calibration data. Combined with the camera's relative position on the self-propelled device and the device's real-time SLAM coordinates, the three-dimensional coordinates of each point on the line laser in the SLAM coordinate system can then be computed. As the self-propelled device moves, a point cloud of the objects encountered along the way can thus be constructed. The point cloud is clustered, and obstacle avoidance is performed for objects exceeding a certain height and/or width threshold (or, when the height lies between the threshold and the device's own climbable height, the device may climb over). For specific implementations, reference may be made to FIG. 4.
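As a sketch of the ranging step, the snippet below intersects the camera ray through a detected laser pixel with a calibrated laser plane, one common form of line-laser triangulation; the intrinsics and plane parameters here are illustrative calibration values, not data from the disclosure.
```python
import numpy as np

def pixel_to_point(u, v, fx, fy, cx, cy, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the laser plane
    n·X + d = 0, both expressed in the camera frame; returns the 3-D point."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # ray direction with z = 1
    t = -plane_d / float(plane_n @ ray)                  # ray parameter at the plane
    return t * ray

# Illustrative calibration: plane normal, offset (meters), pinhole intrinsics
n, d = np.array([0.96, 0.0, -0.28]), 0.05
print(pixel_to_point(400, 260, fx=600, fy=600, cx=320, cy=240, plane_n=n, plane_d=d))
```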
In step S208, the target object is recognized according to the second image. For the target object image captured by the camera, global and/or local features of the object in three-dimensional space can be extracted by a neural network such as a trained machine learning model, and the object's category can be recognized by comparing the shape of the object in the image with reference objects, so that different obstacle-avoidance path plans can be executed according to the category (such as easily entangled fabrics, cables, pet feces, a charging base, and so on). This ensures that, while maximizing cleaning coverage as far as possible, unnecessary damage to the working environment is avoided, the risk of getting stuck is reduced, and the user experience is improved.
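The recognition step could be prototyped with any lightweight image classifier; the sketch below uses an untrained MobileNetV3 backbone from torchvision purely as a placeholder, and the label set is hypothetical — a real device would use a model trained on the maker's own obstacle data.
```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical obstacle categories for a sweeping robot
LABELS = ["fabric", "cable", "pet_waste", "dock", "other"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.mobilenet_v3_small(num_classes=len(LABELS))  # placeholder backbone
model.eval()

def classify(img_path: str) -> str:
    """Classify a fill-light (second) image into one of the obstacle categories."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return LABELS[int(logits.argmax())]
```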
According to the target detection method provided by the embodiments of the present disclosure, a first image is acquired from the camera while a laser of a first predetermined wavelength is emitted and a second image is acquired while light of a second predetermined wavelength is emitted; the distance between the target object and the camera is obtained from the first image, and the target object is recognized from the second image. Ranging and obstacle recognition can thus be performed at the same time, improving the rationality of obstacle-avoidance path planning.
FIG. 3A is a schematic flowchart of time-division control performed by a target detection system according to an embodiment of the present disclosure, for the case where a single line laser emitter is provided. As shown in FIG. 3A, in step S3002, the laser emitting device and the fill-light device are alternately controlled to turn on; the camera captures a first image while the laser emitting device is on and a second image while the fill-light device is on; the laser emitting device emits a laser of a first predetermined wavelength, and the fill-light device emits light of a second predetermined wavelength. In step S3004, the distance between the target object and the camera is obtained according to the first image. In step S3006, the target object is recognized according to the second image.
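One way to picture steps S3002 through S3006 is the loop below; the `laser`, `fill_light`, and `camera` handles and their methods are hypothetical device abstractions, not APIs from the disclosure.
```python
def capture_cycle(laser, fill_light, camera, fixed_exposure, auto_exposure):
    """One time-division cycle: a laser frame for ranging, then a fill-light
    frame for recognition, matching the alternation in step S3002."""
    fill_light.off()
    laser.on()
    first_image = camera.grab(exposure=fixed_exposure)   # ranging input (S3004)
    laser.off()

    fill_light.on()
    second_image = camera.grab(exposure=auto_exposure)   # recognition input (S3006)
    fill_light.off()
    return first_image, second_image
```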
Multiple laser emitters may be used to obtain multiple first images. For example, two line laser emitters may be placed on the left and right sides of the camera, in which case the first image includes a first laser image and a second laser image. To avoid confusing the two laser traces, which would prevent correct computation of the coordinates of the target object ahead, the first and second laser images are captured by the camera in a time-division manner: while the first laser image is captured, the left line laser emitter emits the laser of the first predetermined wavelength to illuminate the target object at a first angle; while the second laser image is captured, the right line laser emitter emits the laser of the first predetermined wavelength to illuminate the target object at a second angle. Here, the first and second angles are the angles between the direction of the laser emitted by the corresponding emitter and the optical axis of the camera; their values may be the same or different, which is not limited here. The two line laser emitters may be placed side by side horizontally on the self-propelled device with their optical axes along the device's direction of travel, in which case the first angle equals the second angle. During ranging, the three-dimensional coordinates, relative to the camera, of the points illuminated on the target object at the first and second angles can be calculated from the first and second laser images based on the laser ranging principle; after multiple image captures by the camera, the coordinate information of obstacles encountered along the way can be obtained. FIG. 3B is a schematic flowchart of time-division control performed by another target detection system according to an embodiment of the present disclosure, for the case where two line laser emitters are placed on the left and right sides of the camera. As shown in FIG. 3B, first only the left line laser emitter is turned on (S302); then all line laser emitters and the fill-light device are turned off (S304); then only the right line laser emitter is turned on (S306); then only the fill-light device is turned on (S308); these steps are repeated as the self-propelled device moves. On the camera side, FIG. 3C shows a time-division control timing diagram based on FIG. 3B: at time t1, the camera uses fixed exposure and the on time of the left line laser emitter (S302) coincides with the camera's exposure time; at time t2, the camera uses fixed exposure and the on time of the right line laser (S306) coincides with the camera's exposure time; at time t3, the fill-light device is on and the camera uses automatic exposure, with exposure parameters referencing the previous frame used for object recognition. The exposure parameters include exposure time and/or exposure gain; that is, the first image is captured by the camera under preset first exposure parameters, and the second image is captured under second exposure parameters, which can be derived from the imaging quality of the previous second-image frame combined with the exposure parameters at that time.
In some embodiments, the camera may capture a third image in step S304; while the third image is captured, emission of the laser of the first predetermined wavelength and the light of the second predetermined wavelength is stopped, so the target object is illuminated by neither the laser nor the fill light. The third image is used in computation with the images from steps S302 and S306 to remove background noise and further reduce the influence of, for example, lamps and strong light; a third image may also be taken after step S306 (it suffices to take one shot while all laser emitting devices and the fill-light device are off). The purpose is to subtract the pixels at corresponding positions in the third image from the pixels in the first image, obtaining a corrected laser image and minimizing the influence of external light sources on the line laser. For example, if the target object is illuminated by natural light at that moment, a natural-light image is obtained, which improves the laser ranging result for the target object in sunlit scenes; the distance between the target object and the camera can then be obtained from the corrected laser image.
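The pixel-wise correction described here amounts to a clipped subtraction; a minimal sketch, assuming single-channel 8-bit frames of equal size:
```python
import numpy as np

def corrected_laser_image(laser_img: np.ndarray, background_img: np.ndarray) -> np.ndarray:
    """Subtract the no-illumination frame pixel-wise to suppress ambient light
    such as lamps or sunlight; negative residues are clipped to zero."""
    diff = laser_img.astype(np.int16) - background_img.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```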
FIG. 3D is a schematic flowchart of time-division control performed by yet another target detection system according to an embodiment of the present disclosure, for the case where the first predetermined wavelength of the laser emitted by the line laser emitters differs from the second predetermined wavelength of the light emitted by the fill-light device. As shown in FIG. 3D, first the left line laser emitter is turned on (S312) (the right line laser emitter is off, and the fill-light device may be on) and the first laser image is captured; then all line laser emitters are turned off (S314) and the third image is captured; then the right line laser emitter is turned on (S316) and the second laser image is captured; then all line laser emitters are turned off and only the fill-light device is turned on (S318), and the second image is captured. These steps are repeated as the self-propelled device moves. For specific implementations of acquiring and processing the images, reference may be made to FIG. 3A to FIG. 3C. Since the first and second wavelengths differ, whether the fill-light device is on while the third image is captured does not affect the generation of the final corrected laser image, which simplifies the control logic.
FIG. 3E is a schematic flowchart of time-division control performed by still another target detection system according to an embodiment of the present disclosure, for the case where the two line laser emitters are offset vertically so that the lasers they emit do not intersect when both are on. As shown in FIG. 3E, first the left line laser emitter is turned on (S322) (the right line laser emitter and the fill-light device may be on) and the first laser image is captured; then the right line laser emitter is turned on (S324) (the left line laser emitter need not be turned off) and the second laser image is captured (this image can also serve as the third image); then all line laser emitters are turned off and only the fill-light device is turned on (S326), and the second image is captured. Because the upper and lower laser traces in the first image do not prevent identification of the positional relationship between each line laser and the camera, and hence do not affect the computation of SLAM coordinates, and because the noise in the two laser images is similar, the corrected laser image can be obtained here by subtracting the pixels at corresponding positions in the second laser image from the pixels in the first laser image. These steps are repeated as the self-propelled device moves. For specific implementations of acquiring and processing the images, reference may be made to FIG. 3A to FIG. 3C.
FIG. 4 is a flowchart of an obstacle-avoidance method for a self-propelled device according to an exemplary embodiment. The method shown in FIG. 4 can be applied, for example, to the target detection system 10 described above.
Referring to FIG. 4, the method 40 provided by an embodiment of the present disclosure may include the following steps.
In step S402, first images captured at multiple time points by the camera mounted on the self-propelled device are acquired, a laser of a first predetermined wavelength being emitted while the first images are captured. For specific implementations of capturing the first images, reference may be made to FIG. 2.
In step S404, the multiple positions of the self-propelled device when the camera captures images at the multiple time points are acquired; the self-propelled device moves relative to the target object across the multiple time points.
In step S406, a point cloud is obtained according to the first images captured by the camera at the multiple time points and the multiple positions of the self-propelled device. For example, when the self-propelled device is at coordinate A, the points where the line laser strikes the target object are ranged and their SLAM three-dimensional coordinates are computed; after the device moves or turns to coordinate B, if the line laser also strikes the same target object, ranging is performed again and the SLAM three-dimensional coordinates of further points on the object are computed. As the self-propelled device keeps moving, the point cloud of the object can be obtained.
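A minimal sketch of this accumulation, assuming each observation pairs a camera pose in the SLAM frame (rotation R, translation t) with laser points measured in camera coordinates:
```python
import numpy as np

def accumulate_point_cloud(observations):
    """observations: iterable of (R, t, points_cam), where R is a 3x3 rotation,
    t a 3-vector, and points_cam an Nx3 array in the camera frame."""
    world_points = []
    for R, t, points_cam in observations:
        world_points.append(points_cam @ R.T + t)  # camera frame -> SLAM frame
    return np.vstack(world_points)
```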
In some embodiments, corrected laser images may also be obtained as described with reference to FIG. 2 to FIG. 3D, and the point cloud obtained from the corrected laser images.
In step S408, the point cloud is clustered, and obstacle avoidance is performed for clustered target objects whose size exceeds a preset threshold. When the distance to a target object whose size exceeds the preset threshold is less than or equal to a preset distance, the self-propelled device can be controlled to detour; the preset distance is greater than 0, and its value may depend on the recognized obstacle type, i.e., different preset distances are set for different recognized obstacle types. Of course, it may also be a fixed value, applicable to target objects whose type cannot be determined.
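A sketch of the cluster-and-detour decision, using DBSCAN from scikit-learn as one possible clustering choice; the eps, size, and distance thresholds are illustrative values, not figures from the disclosure.
```python
import numpy as np
from sklearn.cluster import DBSCAN

def clusters_to_avoid(cloud, robot_xy, size_threshold=0.05, preset_distance=0.2):
    """Return labels of clusters that are both large enough and close enough to detour around."""
    labels = DBSCAN(eps=0.03, min_samples=5).fit_predict(cloud)
    detours = []
    for k in set(labels) - {-1}:                    # -1 marks DBSCAN noise points
        pts = cloud[labels == k]
        extent = pts.max(axis=0) - pts.min(axis=0)  # bounding-box size per axis
        if extent.max() < size_threshold:
            continue                                # small enough to ignore or climb over
        gap = np.linalg.norm(pts[:, :2] - robot_xy, axis=1).min()
        if gap <= preset_distance:                  # preset distance > 0
            detours.append(k)
    return detours
```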
In some embodiments, synchronized coordinates of at least some points on the target object can be obtained, based on the monocular ranging principle, from the second image captured while the fill light is on. Based on these synchronized coordinates, the points are added to the initial point cloud of the target object (i.e., the point cloud obtained from the camera's images of the light emitted by the laser emitting device) to obtain a dense point cloud of the target object. In detail, the current SLAM coordinates of the self-propelled device can be estimated by monocular ranging and combined with the point cloud information obtained from the first images to construct the object's point cloud for more precise obstacle avoidance; for example, some points and their three-dimensional information are computed, and then, based on the three-dimensional information from monocular ranging, the recognized object is associated with the point cloud data to obtain denser point cloud data.
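Monocular ranging can take many forms; one simple sketch, assuming a level pinhole camera at a known height above the floor and a pixel where the recognized object touches the ground (a ground-plane model chosen for illustration, not necessarily the method of the disclosure):
```python
def mono_ground_distance(v: float, fy: float, cy: float, cam_height: float) -> float:
    """Range to a floor-contact point for a level pinhole camera: a ground point
    at depth Z projects to image row v = cy + fy * cam_height / Z."""
    if v <= cy:
        raise ValueError("contact pixel must lie below the principal point")
    return cam_height * fy / (v - cy)
```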
According to the obstacle-avoidance method for a self-propelled device provided by the embodiments of the present disclosure, the camera on the device is multiplexed to achieve both ranging and recognition, and the recognition results are used to accurately reconstruct the object's point cloud, improving the accuracy and rationality of the obstacle-avoidance strategy.
An embodiment of the present disclosure further provides a self-propelled device, including: a driving device configured to drive the self-propelled device to travel along a working surface; and a perception system including a target detection system, the target detection system including a laser emitting device, a fill-light device, a camera, and an infrared filter. The laser emitting device is configured to emit a laser of a first wavelength; the fill-light device is configured to emit infrared light of a second wavelength; the values of the first and second wavelengths may be equal or unequal. The infrared filter is disposed in front of the camera and filters light incident on the camera, such that light of the first and second wavelengths can pass through the infrared filter and reach the camera. The camera is configured to capture images.
In some embodiments, the laser emitting device and the fill-light device alternately emit light of their respective wavelengths. The camera captures a first image while the laser emitting device is operating and a second image while the fill-light device is operating. The self-propelled device further includes a control unit, which obtains the distance between a target object and the camera according to the first image and recognizes the target object according to the second image.
FIG. 5 is a block diagram of a target detection apparatus according to an exemplary embodiment. The apparatus shown in FIG. 5 can be applied, for example, to the target detection system 10 described above.
Referring to FIG. 5, the apparatus 50 provided by an embodiment of the present disclosure may include a laser image acquisition module 502, a fill-light image acquisition module 504, a ranging module 506, and a target recognition module 508.
The laser image acquisition module 502 may be configured to acquire a first image captured by the camera, a laser of a first predetermined wavelength being emitted while the first image is captured.
The fill-light image acquisition module 504 may be configured to acquire a second image captured by the camera, light of a second predetermined wavelength being emitted while the second image is captured.
The ranging module 506 may be configured to obtain the distance between a target object and the camera according to the first image.
The target recognition module 508 may be configured to recognize the target object according to the second image.
FIG. 6 is a block diagram of another target detection apparatus according to an exemplary embodiment. The apparatus shown in FIG. 6 can be applied, for example, to the target detection system 10 described above.
Referring to FIG. 6, the apparatus 60 provided by an embodiment of the present disclosure may include a laser image acquisition module 602, a background image acquisition module 603, a fill-light image acquisition module 604, a ranging module 606, a target recognition module 608, a laser point cloud obtaining module 610, a synchronized coordinate calculation module 612, a precise point cloud restoration module 614, a point cloud clustering module 616, and a path planning module 618; the ranging module 606 may include a denoising module 6062 and a distance calculation module 6064.
The laser image acquisition module 602 may be configured to acquire a first image captured by the camera, a laser of a first predetermined wavelength being emitted while the first image is captured.
The first image includes a first laser image and a second laser image; while the first laser image is captured, the target object is illuminated by the laser of the first predetermined wavelength at a first angle, and while the second laser image is captured, the target object is illuminated by the laser of the first predetermined wavelength at a second angle.
The first image is captured by the camera under preset first exposure parameters, where the exposure parameters include exposure time and/or exposure gain.
The background image acquisition module 603 may be configured to acquire a third image captured by the camera, emission of the laser of the first predetermined wavelength and the light of the second predetermined wavelength being stopped while the third image is captured.
The fill-light image acquisition module 604 may be configured to acquire a second image captured by the camera, light of a second predetermined wavelength being emitted while the second image is captured.
The camera captures the first image and the second image alternately.
The second image is captured by the camera under second exposure parameters, which are derived from the imaging quality of the previous second-image frame combined with the exposure parameters at that time.
The ranging module 606 may be configured to obtain the distance between the target object and the camera according to the first image.
The ranging module 606 may be further configured to calculate, based on the laser ranging principle and from the first laser image and the second laser image, the three-dimensional coordinates, relative to the camera, of the points on the target object illuminated by the laser of the first predetermined wavelength at the first angle and at the second angle, respectively.
The denoising module 6062 may be configured to subtract the pixels at corresponding positions in the third image from the pixels in the first image to obtain a corrected laser image.
The distance calculation module 6064 may be configured to obtain the distance between the target object and the camera according to the corrected laser image.
The target recognition module 608 may be configured to recognize the target object according to the second image.
The laser point cloud obtaining module 610 may be configured to obtain a point cloud according to the first images captured by the camera at multiple time points and the multiple positions of the self-propelled device.
The synchronized coordinate calculation module 612 may be configured to acquire the multiple positions of the self-propelled device when the camera captures images at the multiple time points.
The precise point cloud restoration module 614 may be configured to add supplementary points, according to their synchronized coordinates, to the initial point cloud of the target object to obtain a dense point cloud of the target object.
The point cloud clustering module 616 may be configured to cluster the point cloud.
The path planning module 618 may be configured to perform obstacle avoidance for clustered target objects whose size exceeds a preset threshold. The preset distance may depend on the recognized obstacle type, i.e., different values are preset for different recognized obstacle types; of course, it may also be a fixed value, applicable to target objects whose type cannot be determined.
The path planning module 618 may be further configured to control the self-propelled device to detour when the distance to a target object whose size exceeds the preset threshold is less than or equal to a preset distance, where the preset distance is greater than 0.
For specific implementations of the modules in the apparatus provided by the embodiments of the present disclosure, reference may be made to the contents of the above methods, which will not be repeated here.
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. It should be noted that the device shown in FIG. 7 takes a computer system as an example only and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the device 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the device 700. The CPU 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom can be installed into the storage section 708 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication section 709 and/or installed from the removable medium 711. When the computer program is executed by the central processing unit (CPU) 701, the above functions defined in the system of the present disclosure are performed.
It should be noted that the computer-readable medium shown in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, the module, program segment, or portion of code containing one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or in hardware. The described modules may also be provided in a processor; for example, a processor may be described as including a laser image acquisition module, a fill-light image acquisition module, a ranging module, and a target recognition module. The names of these modules do not in any way limit the modules themselves; for example, the laser image acquisition module may also be described as "a module that captures, through the connected camera, an image of the target object illuminated by the laser".
In another aspect, the present disclosure further provides a computer-readable medium, which may be included in the device described in the above embodiments or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: acquire a first image captured by a camera, a laser of a first predetermined wavelength being emitted while the first image is captured; acquire a second image captured by the camera, light of a second predetermined wavelength being emitted while the second image is captured; obtain the distance between a target object and the camera according to the first image; and recognize the target object according to the second image.
Exemplary embodiments of the present disclosure have been specifically shown and described above. It should be understood that the present disclosure is not limited to the detailed structures, arrangements, or implementation methods described herein; rather, the present disclosure is intended to cover various modifications and equivalent arrangements falling within the spirit and scope of the appended claims.

Claims (19)

  1. A target detection method, comprising:
    acquiring a first image captured by a camera, wherein a laser of a first predetermined wavelength is emitted while the first image is captured;
    acquiring a second image captured by the camera, wherein light of a second predetermined wavelength is emitted while the second image is captured, and the laser of the first predetermined wavelength and the light of the second predetermined wavelength may have the same or different wavelengths;
    obtaining a distance between a target object and the camera according to the first image; and
    recognizing the target object according to the second image.
  2. The method according to claim 1, wherein the first image comprises a first laser image and a second laser image, the target object is illuminated by the laser of the first predetermined wavelength at a first angle while the first laser image is captured, and the target object is illuminated by the laser of the first predetermined wavelength at a second angle while the second laser image is captured;
    wherein obtaining the distance between the target object and the camera according to the first image comprises:
    calculating, from the first laser image and the second laser image, three-dimensional coordinates, relative to the camera, of points on the target object illuminated by the laser of the first predetermined wavelength at the first angle and the second angle, respectively.
  3. The method according to claim 1 or 2, further comprising:
    acquiring a third image captured by the camera, wherein emission of the laser of the first predetermined wavelength and the light of the second predetermined wavelength is stopped while the third image is captured;
    wherein obtaining the distance between the target object and the camera according to the first image further comprises:
    subtracting pixels at corresponding positions in the third image from pixels in the first image to obtain a corrected laser image; and
    obtaining the distance between the target object and the camera according to the corrected laser image.
  4. The method according to claim 1 or 2, wherein the camera captures the first image and the second image alternately.
  5. The method according to claim 1 or 2, wherein the first image is captured by the camera under preset first exposure parameters;
    the second image is captured by the camera under second exposure parameters, the second exposure parameters being derived from an imaging quality of a previously captured frame of the second image in combination with exposure parameters used when the previous frame of the second image was captured;
    wherein the exposure parameters comprise an exposure time and/or an exposure gain.
  6. A control method for a self-propelled device, comprising:
    acquiring first images captured at multiple time points by a camera disposed on the self-propelled device, wherein a laser of a first predetermined wavelength is emitted while the first images are captured;
    acquiring multiple positions of the self-propelled device when the camera captures images at the multiple time points;
    obtaining a point cloud according to the first images captured by the camera at the multiple time points and the multiple positions of the self-propelled device; and
    clustering the point cloud, and performing navigation planning for the self-propelled device according to a result of the clustering.
  7. The method according to claim 6, wherein
    performing navigation planning for the self-propelled device according to the result of the clustering comprises:
    obtaining the result of the clustering, the result of the clustering comprising a target object whose size exceeds a preset threshold; and
    when a distance between the self-propelled device and the target object whose size exceeds the preset threshold is less than or equal to a preset distance, controlling the self-propelled device to detour;
    wherein a value of the preset distance is greater than 0.
  8. The method according to claim 6, wherein
    the first image comprises a first laser image and a second laser image, the target object is illuminated by the laser of the first predetermined wavelength at a first angle while the first laser image is captured, and the target object is illuminated by the laser of the first predetermined wavelength at a second angle while the second laser image is captured.
  9. The method according to claim 6, further comprising:
    acquiring a third image captured by the camera, wherein emission of the laser of the first predetermined wavelength is stopped while the third image is captured; and
    subtracting pixels at corresponding positions in the third image from pixels in the first images to obtain corrected laser images;
    wherein obtaining the point cloud according to the first images captured by the camera at the multiple time points and the multiple positions of the self-propelled device comprises:
    obtaining a distance between the target object and the camera according to multiple corrected laser images corresponding to the first images captured at the multiple time points and the multiple positions of the self-propelled device.
  10. A target detection control method, comprising:
    alternately controlling a laser emitting device and a fill-light device to turn on, wherein a camera captures a first image while the laser emitting device is on and captures a second image while the fill-light device is on, the laser emitting device is configured to emit a laser of a first predetermined wavelength, and the fill-light device is configured to emit light of a second predetermined wavelength;
    obtaining a distance between a target object and the camera according to the first image; and
    recognizing the target object according to the second image.
  11. The method according to claim 10, wherein the laser emitting device comprises a first laser emitting device and a second laser emitting device, and the first image comprises a first laser image and a second laser image; the camera captures the first laser image while the first laser emitting device is on and captures the second laser image while the second laser emitting device is on; the target object is illuminated at a first angle by the laser of the first predetermined wavelength emitted by the first laser emitting device while the first laser image is captured, and the target object is illuminated at a second angle by the laser of the first predetermined wavelength emitted by the second laser emitting device while the second laser image is captured;
    wherein obtaining the distance between the target object and the camera according to the first image comprises:
    calculating, from the first laser image and the second laser image, three-dimensional coordinates, relative to the camera, of points on the target object illuminated by the laser of the first predetermined wavelength at the first angle and the second angle, respectively.
  12. The method according to claim 10 or 11, further comprising:
    controlling the laser emitting device and the fill-light device to turn off, wherein the camera captures a third image while the laser emitting device and the fill-light device are off;
    wherein obtaining the distance between the target object and the camera according to the first image comprises:
    subtracting pixels at corresponding positions in the third image from pixels in the first image to obtain a corrected laser image; and
    obtaining the distance between the target object and the camera according to the corrected laser image.
  13. The method according to claim 10 or 11, wherein the first image is captured by the camera under preset first exposure parameters;
    the second image is captured by the camera under second exposure parameters, the second exposure parameters being derived from an imaging quality of a previously captured frame of the second image in combination with exposure parameters used when the previous frame of the second image was captured;
    wherein the exposure parameters comprise an exposure time and/or an exposure gain.
  14. A target detection system, comprising a laser emitting device, a fill-light device, a camera, and a target detection apparatus, wherein:
    the laser emitting device is configured to emit a laser of a first predetermined wavelength;
    the fill-light device is configured to emit light of a second predetermined wavelength, and the laser of the first predetermined wavelength and the light of the second predetermined wavelength may have the same or different wavelengths;
    the camera is configured to capture a first image while the laser of the first predetermined wavelength is emitted, and to capture a second image while the light of the second predetermined wavelength is emitted; and
    the target detection apparatus comprises:
    a ranging module configured to obtain a distance between a target object and the camera according to the first image; and
    a target recognition module configured to recognize the target object according to the second image.
  15. A self-propelled device, comprising:
    a driving device configured to drive the self-propelled device to travel along a working surface; and
    a perception system comprising a target detection system, the target detection system comprising a laser emitting device, a fill-light device, a camera, and an infrared filter, wherein the laser emitting device is configured to emit a laser of a first wavelength, the fill-light device is configured to emit infrared light of a second wavelength, a value of the first wavelength and a value of the second wavelength may be equal or unequal, the infrared filter is disposed in front of the camera and configured to filter light incident on the camera such that the laser of the first wavelength and the infrared light of the second wavelength can pass through the infrared filter and reach the camera, and the camera is configured to capture images.
  16. The self-propelled device according to claim 15, wherein
    the laser emitting device and the fill-light device alternately emit the laser of the first wavelength and the infrared light of the second wavelength.
  17. The self-propelled device according to claim 16, wherein
    the camera captures a first image while the laser emitting device is operating;
    the camera captures a second image while the fill-light device is operating; and
    the self-propelled device further comprises a control unit, the control unit obtaining a distance between a target object and the camera according to the first image and recognizing the target object according to the second image.
  18. A device, comprising a memory, a processor, and executable instructions stored in the memory and runnable on the processor, wherein the processor, when executing the executable instructions, implements the method according to any one of claims 1 to 13.
  19. A computer-readable storage medium having computer-executable instructions stored thereon, wherein the executable instructions, when executed by a processor, implement the method according to any one of claims 1 to 13.
PCT/CN2021/100722 2021-03-08 2021-06-17 Target detection and control method, system, device and storage medium WO2022188292A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/716,826 US20220284707A1 (en) 2021-03-08 2022-04-08 Target detection and control method, system, apparatus and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110264971.7 2021-03-08
CN202110264971.7A CN113075692A (zh) Target detection and control method, system, device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/716,826 Continuation US20220284707A1 (en) 2021-03-08 2022-04-08 Target detection and control method, system, apparatus and storage medium

Publications (1)

Publication Number Publication Date
WO2022188292A1 true WO2022188292A1 (zh) 2022-09-15

Family

ID=76612276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100722 WO2022188292A1 (zh) 2021-03-08 2021-06-17 Target detection and control method, system, device and storage medium

Country Status (2)

Country Link
CN (1) CN113075692A (zh)
WO (1) WO2022188292A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115307559B * 2022-07-08 2023-10-24 国网湖北省电力有限公司荆州供电公司 Target positioning method, and long-distance laser cleaning method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102753932A (zh) * 2010-02-17 2012-10-24 三洋电机株式会社 Object detection device and information acquisition device
CN108710367A (zh) * 2018-05-23 2018-10-26 广州视源电子科技股份有限公司 Laser data recognition method and apparatus, robot, and storage medium
US20190297241A1 (en) * 2018-03-20 2019-09-26 Magik Eye Inc. Adjusting camera exposure for three-dimensional depth sensing and two-dimensional imaging
CN111142526A (zh) * 2019-12-30 2020-05-12 科沃斯机器人股份有限公司 Obstacle crossing and operation method, device, and storage medium
FR3089111A1 (fr) * 2018-11-30 2020-06-05 Paul DORVAL Fluorescence imaging device
CN111753799A (zh) * 2020-07-03 2020-10-09 深圳市目心智能科技有限公司 Active binocular vision sensor and robot

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016065718A (ja) * 2013-02-08 2016-04-28 三洋電機株式会社 Information acquisition device and object detection device
CN109144072A (zh) * 2018-09-30 2019-01-04 亿嘉和科技股份有限公司 Robot intelligent obstacle avoidance method based on three-dimensional laser
CN111474552A (zh) * 2019-01-23 2020-07-31 科沃斯机器人股份有限公司 Laser ranging method and apparatus, and self-moving device
CN211933925U (zh) * 2020-01-03 2020-11-17 深圳飞科机器人有限公司 Cleaning robot
CN210927761U (zh) * 2020-01-10 2020-07-03 北京石头世纪科技股份有限公司 Intelligent cleaning device
CN111291708B (zh) * 2020-02-25 2023-03-28 华南理工大学 Obstacle detection and recognition method for substation inspection robot incorporating a depth camera
CN111860321B (zh) * 2020-07-20 2023-12-22 浙江光珀智能科技有限公司 Obstacle recognition method and system
CN114521836B (zh) * 2020-08-26 2023-11-28 北京石头创新科技有限公司 Automatic cleaning device


Also Published As

Publication number Publication date
CN113075692A (zh) 2021-07-06

Similar Documents

Publication Publication Date Title
US11216673B2 (en) Direct vehicle detection as 3D bounding boxes using neural network image processing
US11407116B2 (en) Robot and operation method therefor
CN112017251B Calibration method and apparatus, roadside device, and computer-readable storage medium
Levinson et al. Traffic light mapping, localization, and state detection for autonomous vehicles
WO2021227645A1 Target detection method and apparatus
EP3825903A1 (en) Method, apparatus and storage medium for detecting small obstacles
US11841434B2 (en) Annotation cross-labeling for autonomous control systems
CN110765894A Target detection method, apparatus, device, and computer-readable storage medium
US11195065B2 (en) System and method for joint image and lidar annotation and calibration
CN108508916B Control method, apparatus, device, and storage medium for UAV formation
JP2012075060A Image processing device and imaging device using the same
CN106774296A Obstacle detection method based on information fusion of LiDAR and CCD camera
CN109211260B Driving path planning method and apparatus for intelligent vehicle, and intelligent vehicle
CN108106617A Automatic obstacle avoidance method for UAV
CN110717445A Leading-vehicle distance tracking system and method for autonomous driving
CN108784540A Automatic obstacle-avoidance traveling device and traveling method for a sweeping robot
CN111857114A Robot formation movement method, system, device, and storage medium
WO2024055788A1 Laser positioning method based on image information, and robot
WO2022188292A1 Target detection and control method, system, device and storage medium
US20220012494A1 (en) Intelligent multi-visual camera system and method
JPH11149557A Surrounding environment recognition device for autonomous vehicle
US20220284707A1 (en) Target detection and control method, system, apparatus and storage medium
CN114911223B Robot navigation method and apparatus, robot, and storage medium
CN108175337A Sweeping robot and traveling method thereof
CN115237113A Robot navigation method, robot, robot system, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21929762

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE