WO2021212986A1 - Obstacle recognition method and apparatus, self-moving device and storage medium (障碍物识别方法、装置、自移动设备及存储介质) - Google Patents

Obstacle recognition method and apparatus, self-moving device and storage medium (障碍物识别方法、装置、自移动设备及存储介质)

Info

Publication number
WO2021212986A1
WO2021212986A1 · PCT/CN2021/076964 · CN2021076964W
Authority
WO
WIPO (PCT)
Prior art keywords
line laser
obstacle
contour
self
vertical distance
Prior art date
Application number
PCT/CN2021/076964
Other languages
English (en)
French (fr)
Inventor
孙佳佳
徐银波
Original Assignee
追觅创新科技(苏州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司 filed Critical 追觅创新科技(苏州)有限公司
Priority to KR1020227018029A priority Critical patent/KR20220086682A/ko
Priority to JP2022541668A priority patent/JP7383828B2/ja
Priority to EP21793582.4A priority patent/EP4050378A4/en
Publication of WO2021212986A1 publication Critical patent/WO2021212986A1/zh
Priority to US17/747,957 priority patent/US20220273152A1/en

Classifications

    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/42Simultaneous measurement of distance and other co-ordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements
    • G01S7/4814Constructional features, e.g. arrangements of optical elements of transmitters alone
    • G01S7/4815Constructional features, e.g. arrangements of optical elements of transmitters alone using multiple transmitters
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04Automatic control of the travelling movement; Automatic obstacle detection
    • B60W2420/408
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects
    • B60W2554/20Static objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects
    • B60W2554/40Dynamic objects, e.g. animals, windblown objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • This application relates to an obstacle recognition method, device, self-moving device and storage medium, and belongs to the field of computer technology.
  • With the continued development of intelligent technology, self-moving devices (such as sweeping robots and smart lawn mowers) can automatically recognize whether an obstacle lies ahead so that an obstacle-avoidance strategy can be adopted.
  • In the related art, image recognition algorithms are used to identify obstacles in images; however, when the ambient light is weak, the image recognition result is affected and obstacles may not be recognized.
  • In view of this, the present application provides an obstacle recognition method, apparatus, self-moving device and storage medium, which can improve the accuracy of obstacle recognition results.
  • In a first aspect, an embodiment of the present application provides an obstacle recognition method for use in a self-moving device, where a first line laser transmitter and an image acquisition component are installed on the self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the method includes: acquiring the environment image sent by the image acquisition component; using the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and,
  • when the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle.
  • In a possible implementation, after determining that the object acted on by the line laser is an obstacle, the method further includes: determining the vertical distance of the obstacle relative to the ground according to the contour information.
  • In a possible implementation, determining the vertical distance of the obstacle relative to the ground according to the contour information includes: acquiring a first vertical distance between the ground and the first line laser transmitter; determining, based on the laser ranging principle and the pixel coordinates of the contour information, a second vertical distance between the object acted on by the line laser and the first line laser transmitter; and determining the difference between the second vertical distance and the first vertical distance as the vertical distance.
  • In a possible implementation, acquiring the first vertical distance between the ground and the first line laser transmitter includes: for historical environment images that were collected previously and in which the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to a preset distance threshold, determining the vertical distance between the object and the first line laser transmitter based on the laser ranging principle and the pixel coordinates of the reflected light in those historical environment images, to obtain the first vertical distance.
  • In a possible implementation, a second line laser transmitter is further provided on the self-moving device; the second line laser transmitter is used to emit another line laser, and the emission direction of the other line laser is different from the emission direction of the line laser; the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
  • In a possible implementation, after determining that the object acted on by the line laser is an obstacle, the method further includes: determining a working strategy of the self-moving device based on the type of the obstacle, the working strategy being used to avoid or cross over the obstacle.
  • In a possible implementation, determining the working strategy of the self-moving device based on the type of the obstacle includes:
  • when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than a first threshold and less than a second threshold, determining that the working strategy of the self-moving device is to accelerate so as to cross over the obstacle;
  • when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than or equal to the second threshold, determining that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle;
  • when the obstacle is a step that lies below the ground and whose vertical distance relative to the ground is greater than a third threshold and less than a fourth threshold, determining that the working strategy of the self-moving device is to slow down so as to descend to the height corresponding to the step;
  • when the obstacle is a step that lies below the ground and whose vertical distance relative to the ground is greater than or equal to the fourth threshold, determining that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle.
  • In a possible implementation, after the contour information of the object acted on by the line laser is determined using the pixel coordinates of the reflected light in the environment image, the method further includes: determining the number of bumps in the contour shape of the contour information; when the number of bumps is greater than a number threshold, so that a non-image-recognition algorithm determines that the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle; and when the number of bumps is less than or equal to the number threshold, determining that the contour indicated by the contour information is not an obstacle contour.
  • In a possible implementation, when the line laser hits flat ground, the ground information extracted from the environment image captured by the image acquisition component is flat and smooth; when the obstacle is a carpet and the line laser hits the carpet, the ground information extracted from the environment image captured by the image acquisition component is irregular data with noise.
  • In a second aspect, an embodiment of the present application provides an obstacle recognition apparatus for use in a self-moving device, where a first line laser transmitter and an image acquisition component are installed on the self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the apparatus includes:
  • an image acquisition module, configured to acquire the environment image sent by the image acquisition component;
  • the contour determination module is configured to use the pixel coordinates of the reflected light in the environment image to determine the contour information of the object acted on by the line laser;
  • the object recognition module is used to determine that the object acted by the line laser is an obstacle when the contour indicated by the contour information is the contour of an obstacle.
  • In a third aspect, an embodiment of the present application provides an obstacle recognition apparatus; the apparatus includes a processor and a memory; the memory stores a program, and the program is loaded and executed by the processor to implement the obstacle recognition method described in the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium in which a program is stored, and the program is loaded and executed by a processor to implement the obstacle recognition method described in the first aspect.
  • an embodiment of the present application provides a self-moving device, the self-moving device includes a housing, a first line laser transmitter, an image acquisition component, and a control component;
  • the first line laser transmitter is arranged on the housing and is used to emit a line laser obliquely downward along the direction of travel;
  • the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object;
  • the first line laser transmitter and the image acquisition component are respectively communicatively connected to the control component; the control component is used for: acquiring the environment image sent by the image acquisition component; using the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and,
  • when the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle.
  • In a possible implementation, the self-moving device further includes a second line laser transmitter; the second line laser transmitter is used to emit another line laser, and the emission direction of the other line laser is different from the emission direction of the line laser; the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
  • In the embodiments of the present application, a first line laser transmitter and an image acquisition component are installed on a self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the environment image sent by the image acquisition component is acquired; the pixel coordinates of the reflected light in the environment image are used to determine contour information of the object acted on by the line laser; and, when the contour indicated by the contour information is an obstacle contour, the object acted on by the line laser is determined to be an obstacle. This can solve the problem that existing obstacle recognition methods are easily affected by ambient light, which leads to inaccurate recognition results, because the environment image is collected jointly by the first line laser transmitter and the image acquisition component.
  • Even when the ambient light is dim, the image acquisition component can still capture an image of the reflected light of the line laser; therefore, obstacles can still be recognized under dim ambient light.
  • the equipment resources consumed when recognizing the obstacle can be saved.
  • Fig. 1 is a schematic structural diagram of a self-mobile device provided by an embodiment of the present application.
  • Fig. 2 is a flowchart of an obstacle recognition method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a scene of identifying obstacles provided by an embodiment of the present application.
  • Fig. 4 is a block diagram of an obstacle recognition device provided by an embodiment of the present application.
  • Fig. 5 is a block diagram of an obstacle recognition device provided by an embodiment of the present application.
  • Fig. 1 is a schematic structural diagram of a self-mobile device provided by an embodiment of the present application.
  • Self-moving equipment refers to equipment that can be moved without human force.
  • Self-moving devices include but are not limited to: sweeping robots, smart lawn mowers, etc., and this embodiment does not limit the type of self-moving devices.
  • the self-moving device at least includes: a first line laser transmitter 110, an image acquisition component 120 and a control component 130.
  • the first line laser transmitter 110 is arranged on the housing of the self-moving device, and is used to emit line laser light obliquely downward in the traveling direction.
  • The number of first line laser transmitters 110 may be one or more; this embodiment does not limit the number of first line laser transmitters 110.
  • The obliquely downward angle along the direction of travel may be, for example, 15°, 30° or 45° from the horizontal; this embodiment does not limit the value of this angle.
  • the image acquisition component 120 is used to acquire an environment image including reflected light obtained by the reflection of the line laser by the object.
  • the image acquisition component 120 is a camera, a video camera, etc., and the device type of the image acquisition component 120 is not limited in this embodiment.
  • the first line laser transmitter 110 and the image acquisition component 120 are respectively connected to the control component 130 in communication.
  • the control component 130 controls the working sequence of the first line laser transmitter 110 and the image acquisition component 120.
  • the control component 130 controls the first line laser transmitter 110 to start working before the image acquisition component 120.
  • The control component 130 is also used to: obtain the environment image sent by the image acquisition component 120; use the pixel coordinates of the reflected light in the environment image to determine the contour information of the object acted on by the line laser; and, when the contour indicated by the contour information is an obstacle contour, determine that the object acted on by the line laser is an obstacle.
  • the contour information includes the shape and position of the contour.
  • In a possible implementation, a second line laser transmitter 140 is also provided on the self-moving device.
  • The second line laser transmitter 140 is used to emit another line laser.
  • The emission direction of the other line laser is different from the emission direction of the line laser emitted by the first line laser transmitter 110; the environment image also includes the reflected light obtained by the other line laser being reflected by an object. In this case, the reflected light of the other line laser can assist the self-moving device in determining whether an obstacle exists.
  • The self-moving device provided by this embodiment collects the environment image jointly through the first line laser transmitter and the image acquisition component. Because the image acquisition component can still collect an image of the reflected light of the line laser even when the ambient light is dim, it can be ensured that obstacles can be recognized even under dim ambient light.
  • the equipment resources consumed when recognizing the obstacle can be saved.
  • FIG. 2 is a flowchart of an obstacle recognition method provided by an embodiment of the present application.
  • The method is applied to the self-moving device shown in FIG. 1, and the description takes as an example the case in which each step is executed by the control component 130 in the self-moving device.
  • the method includes at least the following steps:
  • Step 201 Acquire an environment image sent by an image acquisition component.
  • the environment image includes the reflected light of the line laser emitted by the first line laser transmitter reflected by the object.
  • In a possible implementation, a second line laser transmitter is also provided on the self-moving device.
  • The second line laser transmitter is used to emit another line laser.
  • The emission direction of the other line laser is different from the emission direction of the line laser; in this case, the environment image also includes the reflected light obtained by the other line laser being reflected by an object.
  • Step 202 Use the pixel coordinates of the reflected light in the environment image to determine the contour information of the object acted on by the line laser.
  • the contour information includes but is not limited to the shape and position of the contour.
  • When there is no obstacle in the traveling direction of the self-moving device, the contour information of the object should be flat and smooth; when there is an obstacle in the traveling direction, the contour information of the object is usually uneven. Based on this, the self-moving device can determine from the contour information whether there is an obstacle in the traveling direction.
  • the self-mobile device connects the pixel coordinates of the reflected light to obtain the contour information of the object.
  • Step 203 When the contour indicated by the contour information is the contour of an obstacle, it is determined that the object acted on by the line laser is an obstacle.
  • In a possible implementation, the self-moving device compares the contour shape in the contour information with a template shape; if the contour shape matches the template shape, the contour indicated by the contour information is an obstacle contour; if the contour shape does not match the template shape, the contour indicated by the contour information is not an obstacle contour.
  • the template shape includes the contour shape of each obstacle, such as the contour shape of the carpet; or the contour shape of the steps; or the contour shape of the wardrobe.
  • the self-mobile device may also use other methods to determine whether the contour indicated by the contour information is an obstacle contour, and this embodiment will not list them one by one here.
  • the self-moving device may also determine the vertical distance of the obstacle relative to the ground according to the contour information.
  • In one example, determining the vertical distance of the obstacle relative to the ground according to the contour information includes: obtaining the first vertical distance between the ground and the first line laser transmitter; determining, based on the laser ranging principle and the pixel coordinates of the contour information, the second vertical distance between the object acted on by the line laser and the first line laser transmitter; and determining the difference between the second vertical distance and the first vertical distance as the vertical distance.
  • The control component can measure the distance between the first line laser transmitter and the object based on the laser ranging principle and the pixel coordinates of the reflected light, and the emission angle of the line laser is pre-stored in the self-moving device; based on the distance between the first line laser transmitter and the object and the emission angle, the second vertical distance between the object and the first line laser transmitter can be determined.
  • Obtaining the first vertical distance between the ground and the first line laser transmitter includes: for historical environment images that were collected previously and in which the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to a preset distance threshold, determining the vertical distance between the object and the first line laser transmitter based on the laser ranging principle and the pixel coordinates of the reflected light in those historical environment images, to obtain the first vertical distance. Because the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to the preset distance threshold, the object acted on by the line laser in the historical environment image is not an obstacle; in this case, the object is taken to be the ground by default.
  • In a possible implementation, determining the second vertical distance between the object acted on by the line laser and the first line laser transmitter includes: determining, based on the laser ranging principle and the pixel coordinates of the reflected light in the environment image, the vertical distance and the minimum vertical distance between the object acted on by the line laser and the first line laser transmitter, and determining the average of the vertical distance and the minimum vertical distance as the second vertical distance.
  • Of course, the second vertical distance may also simply be the vertical distance; this embodiment does not limit how the second vertical distance is set.
  • When the line laser hits flat ground, the ground information extracted from the image captured by the image acquisition component is flat and smooth.
  • In this case, the first vertical distance H1 of the ground relative to the first line laser transmitter can be estimated.
  • When the line laser hits a carpet, because of the edge characteristics of the carpet, the extracted ground information is irregular data with noise.
  • In this case, the second vertical distance H2 of the carpet relative to the first line laser transmitter can be estimated.
  • After the carpet is detected, the vertical distance of the carpet (H2 - H1) can be estimated from the first vertical distance H1 detected in the traveling direction of the self-moving device.
  • When the self-moving device also includes a second line laser transmitter, referring to Fig. 3, when the other line laser emitted by the second line laser transmitter hits the carpet, the extracted ground information is likewise irregular data with noise because of the edge characteristics of the carpet. In this case, the reflected light obtained by the other line laser being reflected by the carpet assists the self-moving device in determining the vertical distance of the carpet.
  • In summary, in the obstacle recognition method provided by this embodiment, a first line laser transmitter and an image acquisition component are installed on a self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light of the line laser reflected by an object; the environment image sent by the image acquisition component is acquired; and the pixel coordinates of the reflected light in the environment image are used to determine contour information of the object acted on by the line laser.
  • When the contour indicated by the contour information is an obstacle contour, the object acted on by the line laser is determined to be an obstacle. This can solve the problem that existing obstacle recognition methods are easily affected by ambient light, which leads to inaccurate recognition results.
  • Because the environment image is collected jointly by the first line laser transmitter and the image acquisition component, the image acquisition component can still collect an image of the reflected light of the line laser even when the ambient light is dim; therefore, obstacles can still be recognized under dim ambient light.
  • In addition, because the contour information of the object is extracted from the pixel coordinates of the reflected light of the line laser and obstacles are identified from that contour information rather than by an image recognition algorithm, the device resources consumed in recognizing obstacles can be saved.
  • the self-mobile device may also determine a work strategy of the self-mobile device based on the type of the obstacle, and the work strategy is used to avoid or cross the obstacle.
  • In one example: when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than a first threshold and less than a second threshold, it is determined that the working strategy of the self-moving device is to accelerate so as to cross over the obstacle; when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than or equal to the second threshold, it is determined that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle; when the obstacle is a step that lies below the ground and whose maximum vertical distance relative to the ground is greater than a third threshold and less than a fourth threshold, it is determined that the working strategy of the self-moving device is to slow down so as to descend to the height corresponding to the step; and when the obstacle is a step that lies below the ground and whose maximum vertical distance relative to the ground is greater than or equal to the fourth threshold, it is determined that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle.
  • the second threshold is greater than the first threshold, and this embodiment does not limit the values of the first threshold and the second threshold.
  • the fourth threshold is greater than the third threshold, and this embodiment does not limit the values of the third threshold and the fourth threshold.
  • FIG. 4 is a block diagram of an obstacle recognition device provided by an embodiment of the present application.
  • the device is applied to the self-mobile device shown in FIG. 1 as an example for description.
  • the device includes at least the following modules: an image acquisition module 410, a contour determination module 420, and an object recognition module 430.
  • the image acquisition module 410 is configured to acquire the environment image sent by the image acquisition component
  • the contour determination module 420 is configured to use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser;
  • the object recognition module 430 is configured to determine that the object acted by the line laser is an obstacle when the contour indicated by the contour information is the contour of an obstacle.
  • It should be noted that when the obstacle recognition device provided in the above embodiment performs obstacle recognition, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the obstacle recognition device may be divided into different functional modules to complete all or part of the functions described above.
  • the obstacle recognition device provided in the foregoing embodiment and the obstacle recognition method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • FIG. 5 is a block diagram of an obstacle recognition device provided by an embodiment of the present application.
  • the device may be the self-moving device shown in FIG. 1.
  • the device at least includes a processor 501 and a memory 502.
  • the processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • The processor 501 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array).
  • The processor 501 may also include a main processor and a coprocessor.
  • The main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 501 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 501 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 502 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • The non-transitory computer-readable storage medium in the memory 502 is used to store at least one instruction, and the at least one instruction is executed by the processor 501 to implement the obstacle recognition method provided in the method embodiments of the present application.
  • the obstacle recognition apparatus may optionally further include: a peripheral device interface and at least one peripheral device.
  • the processor 501, the memory 502, and the peripheral device interface may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface through a bus, a signal line or a circuit board.
  • peripheral devices include but are not limited to: radio frequency circuits, image capture components, line laser transmitters, audio circuits, and power supplies.
  • the obstacle recognition device may also include fewer or more components, which is not limited in this embodiment.
  • In a possible implementation, an embodiment of the present application also provides a computer-readable storage medium in which a program is stored, and the program is loaded and executed by a processor to implement the obstacle recognition method of the foregoing method embodiments.
  • In a possible implementation, an embodiment of the present application also provides a computer product; the computer product includes a computer-readable storage medium, the computer-readable storage medium stores a program, and the program is loaded and executed by a processor to implement the obstacle recognition method of the foregoing method embodiments.

Abstract

The present application relates to an obstacle recognition method and apparatus, a self-moving device and a storage medium, and belongs to the field of computer technology. The method includes: acquiring an environment image sent by an image acquisition component; using the pixel coordinates of reflected light in the environment image to determine contour information of the object acted on by a line laser; and, when the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle. Because the environment image is collected jointly by the first line laser transmitter and the image acquisition component, the image acquisition component can still capture an image of the reflected light of the line laser when the ambient light is dim; obstacles can therefore still be recognized under dim ambient light.

Description

Obstacle recognition method and apparatus, self-moving device and storage medium
Technical Field
The present application relates to an obstacle recognition method and apparatus, a self-moving device and a storage medium, and belongs to the field of computer technology.
Background Art
With the continued development of intelligent technology, self-moving devices (for example, sweeping robots and smart lawn mowers) can automatically recognize whether an obstacle lies ahead, so that an obstacle-avoidance strategy can be adopted.
In the related art, image recognition algorithms are used to identify obstacles in images.
However, when obstacles are identified by an image recognition algorithm, weak ambient light affects the image recognition result and obstacles may not be recognized.
Summary of the Invention
The present application provides an obstacle recognition method and apparatus, a self-moving device and a storage medium, which can improve the accuracy of obstacle recognition results.
In a first aspect, an embodiment of the present application provides an obstacle recognition method for use in a self-moving device, where a first line laser transmitter and an image acquisition component are installed on the self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; and the method includes:
acquiring the environment image sent by the image acquisition component;
using the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
when the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle.
In a possible implementation, after determining that the object acted on by the line laser is an obstacle, the method further includes:
determining the vertical distance of the obstacle relative to the ground according to the contour information.
In a possible implementation, determining the vertical distance of the obstacle relative to the ground according to the contour information includes:
acquiring a first vertical distance between the ground and the first line laser transmitter;
determining, based on the laser ranging principle and the pixel coordinates of the contour information, a second vertical distance between the object acted on by the line laser and the first line laser transmitter; and
determining the difference between the second vertical distance and the first vertical distance as the vertical distance.
In a possible implementation, acquiring the first vertical distance between the ground and the first line laser transmitter includes:
for historical environment images that were collected previously and in which the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to a preset distance threshold, determining the vertical distance between the object and the first line laser transmitter based on the laser ranging principle and the pixel coordinates of the reflected light in the historical environment images, to obtain the first vertical distance.
In a possible implementation, a second line laser transmitter is further provided on the self-moving device, the second line laser transmitter is used to emit another line laser, and the emission direction of the other line laser is different from the emission direction of the line laser; the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
In a possible implementation, after determining that the object acted on by the line laser is an obstacle, the method further includes:
determining a working strategy of the self-moving device based on the type of the obstacle, the working strategy being used to avoid or cross over the obstacle.
In a possible implementation, determining the working strategy of the self-moving device based on the type of the obstacle includes:
when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than a first threshold and less than a second threshold, determining that the working strategy of the self-moving device is to accelerate so as to cross over the obstacle;
when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than or equal to the second threshold, determining that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle;
when the obstacle is a step, the step lies below the ground, and its vertical distance relative to the ground is greater than a third threshold and less than a fourth threshold, determining that the working strategy of the self-moving device is to slow down so as to descend to the height corresponding to the step; and
when the obstacle is a step, the step lies below the ground, and its vertical distance relative to the ground is greater than or equal to the fourth threshold, determining that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle.
In a possible implementation, after the contour information of the object acted on by the line laser is determined using the pixel coordinates of the reflected light in the environment image, the method further includes: determining the number of bumps in the contour shape of the contour information;
when the number of bumps is greater than a number threshold, so that a non-image-recognition algorithm determines that the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle; and when the number of bumps is less than or equal to the number threshold, determining that the contour indicated by the contour information is not an obstacle contour.
In a possible implementation, when the line laser hits flat ground, the ground information extracted from the environment image captured by the image acquisition component is flat and smooth; when the obstacle is a carpet and the line laser hits the carpet, the ground information extracted from the environment image captured by the image acquisition component is irregular data with noise.
In a second aspect, an embodiment of the present application provides an obstacle recognition apparatus for use in a self-moving device, where a first line laser transmitter and an image acquisition component are installed on the self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; and the apparatus includes:
an image acquisition module, configured to acquire the environment image sent by the image acquisition component;
a contour determination module, configured to use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
an object recognition module, configured to determine, when the contour indicated by the contour information is an obstacle contour, that the object acted on by the line laser is an obstacle.
In a third aspect, an embodiment of the present application provides an obstacle recognition apparatus including a processor and a memory; the memory stores a program, and the program is loaded and executed by the processor to implement the obstacle recognition method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a program, and the program is loaded and executed by a processor to implement the obstacle recognition method of the first aspect.
In a fifth aspect, an embodiment of the present application provides a self-moving device including a housing, a first line laser transmitter, an image acquisition component and a control component; the first line laser transmitter is arranged on the housing and is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the first line laser transmitter and the image acquisition component are respectively communicatively connected to the control component; and the control component is configured to:
acquire the environment image sent by the image acquisition component;
use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
when the contour indicated by the contour information is an obstacle contour, determine that the object acted on by the line laser is an obstacle.
In a possible implementation, the self-moving device further includes a second line laser transmitter; the second line laser transmitter is used to emit another line laser, and the emission direction of the other line laser is different from the emission direction of the line laser; the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
In the embodiments of the present application, a first line laser transmitter and an image acquisition component are installed on a self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the environment image sent by the image acquisition component is acquired; the pixel coordinates of the reflected light in the environment image are used to determine contour information of the object acted on by the line laser; and, when the contour indicated by the contour information is an obstacle contour, the object acted on by the line laser is determined to be an obstacle. This can solve the problem that existing obstacle recognition methods are easily affected by ambient light, which leads to inaccurate recognition results. Because the environment image is collected jointly by the first line laser transmitter and the image acquisition component, the image acquisition component can still capture an image of the reflected light of the line laser when the ambient light is dim, so obstacles can still be recognized under dim ambient light. In addition, because the contour information of the object is extracted from the pixel coordinates of the reflected light of the line laser and obstacles are identified from that contour information rather than by an image recognition algorithm, the device resources consumed in recognizing obstacles can be saved.
The above description is only an overview of the technical solution of the present application. In order to understand the technical means of the present application more clearly and implement them in accordance with the content of the specification, preferred embodiments of the present application are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of a self-moving device provided by an embodiment of the present application.
Fig. 2 is a flowchart of an obstacle recognition method provided by an embodiment of the present application.
Fig. 3 is a schematic diagram of an obstacle recognition scene provided by an embodiment of the present application.
Fig. 4 is a block diagram of an obstacle recognition apparatus provided by an embodiment of the present application.
Fig. 5 is a block diagram of an obstacle recognition apparatus provided by an embodiment of the present application.
Detailed Description of the Embodiments
Specific embodiments of the present application are described in further detail below with reference to the accompanying drawings and embodiments. The following embodiments are intended to illustrate the present application but not to limit its scope.
Fig. 1 is a schematic structural diagram of a self-moving device provided by an embodiment of the present application. A self-moving device is a device that can move without human force being applied. Self-moving devices include, but are not limited to, sweeping robots, smart lawn mowers and the like; this embodiment does not limit the type of the self-moving device. As shown in Fig. 1, the self-moving device includes at least a first line laser transmitter 110, an image acquisition component 120 and a control component 130.
The first line laser transmitter 110 is arranged on the housing of the self-moving device and is used to emit a line laser obliquely downward along the direction of travel. The number of first line laser transmitters 110 may be one or more; this embodiment does not limit the number of first line laser transmitters 110.
The obliquely downward angle along the direction of travel may be, for example, 15°, 30° or 45° from the horizontal; this embodiment does not limit the value of this angle.
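The oblique emission described above implies a simple trigonometric relation between the measured range along the beam and a point's position relative to the emitter. The following is a minimal illustrative sketch (Python is used only for illustration; the function and variable names are not taken from the application):

```python
import math

def beam_intersection(range_m, angle_deg):
    """For a line laser emitted at angle_deg below the horizontal, a point
    measured at range_m along the beam lies this far ahead of and below the
    emitter (basic trigonometry; names and units are illustrative only)."""
    a = math.radians(angle_deg)
    forward = range_m * math.cos(a)   # horizontal distance ahead of the emitter
    drop = range_m * math.sin(a)      # vertical distance below the emitter
    return forward, drop

print(beam_intersection(0.5, 30.0))   # approx. (0.433, 0.25) for a 30-degree tilt
```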
The image acquisition component 120 is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object. In a possible implementation, the image acquisition component 120 is a camera, a video camera or the like; this embodiment does not limit the device type of the image acquisition component 120.
The first line laser transmitter 110 and the image acquisition component 120 are respectively communicatively connected to the control component 130. The control component 130 controls the working sequence of the first line laser transmitter 110 and the image acquisition component 120. In a possible implementation, the control component 130 controls the first line laser transmitter 110 to start working before the image acquisition component 120.
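A small sketch of that working sequence (laser on before the camera exposes) is given below; the laser and camera driver objects and their methods are hypothetical, not APIs from the application:

```python
import time

def capture_frame_with_line_laser(laser, camera, settle_time_s=0.01):
    """Illustrative capture sequence: start the line laser transmitter first,
    then let the image acquisition component capture the environment image
    containing the reflected laser line. `laser` and `camera` are assumed
    driver objects with on()/off() and capture() methods."""
    laser.on()
    time.sleep(settle_time_s)   # let the projected line stabilise before exposure
    frame = camera.capture()    # environment image containing the reflected line laser
    laser.off()
    return frame
```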
In the embodiments of the present application, the control component 130 is further configured to: acquire the environment image sent by the image acquisition component 120; use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and, when the contour indicated by the contour information is an obstacle contour, determine that the object acted on by the line laser is an obstacle.
In a possible implementation, the contour information includes the shape and position of the contour.
In a possible implementation, a second line laser transmitter 140 is further provided on the self-moving device. The second line laser transmitter 140 is used to emit another line laser whose emission direction is different from that of the line laser emitted by the first line laser transmitter 110; the environment image further includes the reflected light obtained by the other line laser being reflected by an object. In this case, the reflected light of the other line laser can assist the self-moving device in determining whether an obstacle exists.
In the self-moving device provided by this embodiment, the environment image is collected jointly by the first line laser transmitter and the image acquisition component. Because the image acquisition component can still capture an image of the reflected light of the line laser when the ambient light is dim, obstacles can still be recognized under dim ambient light.
In addition, because the contour information of the object is extracted from the pixel coordinates of the reflected light of the line laser and obstacles are identified from that contour information rather than by an image recognition algorithm, the device resources consumed in recognizing obstacles can be saved.
Fig. 2 is a flowchart of an obstacle recognition method provided by an embodiment of the present application. This embodiment is described taking as an example the case in which the method is applied to the self-moving device shown in Fig. 1 and each step is executed by the control component 130 in the self-moving device. The method includes at least the following steps:
Step 201: acquire an environment image sent by the image acquisition component.
The environment image includes the reflected light obtained by the line laser emitted by the first line laser transmitter being reflected by an object.
In a possible implementation, a second line laser transmitter is further provided on the self-moving device; the second line laser transmitter is used to emit another line laser whose emission direction is different from that of the line laser. In this case, the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
Step 202: use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser.
The contour information includes, but is not limited to, the shape and position of the contour.
When there is no obstacle in the traveling direction of the self-moving device, the contour information of the object should be flat and smooth; when there is an obstacle in the traveling direction, the contour information of the object is usually uneven. Based on this, the self-moving device can determine from the contour information whether there is an obstacle in the traveling direction.
In a possible implementation, the self-moving device connects the pixel coordinates of the reflected light to obtain the contour information of the object, as sketched below.
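The sketch below assumes a grayscale environment image in which the laser line is the brightest feature in each column, and chains the per-column peak pixels into a contour; the threshold value and function name are hypothetical, not taken from the application:

```python
import numpy as np

def extract_laser_contour(image, intensity_threshold=200):
    """For each image column, take the row of the brightest pixel above a
    threshold and chain the resulting (column, row) pixel coordinates into
    the contour of the object lit by the line laser."""
    h, w = image.shape
    contour = []
    for col in range(w):
        column = image[:, col]
        row = int(np.argmax(column))
        if column[row] >= intensity_threshold:   # keep only pixels lit by the laser line
            contour.append((col, row))
    return contour

frame = np.zeros((120, 160), dtype=np.uint8)
frame[60, :] = 255                               # synthetic flat laser line
print(len(extract_laser_contour(frame)))         # 160 contour points
```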
Step 203: when the contour indicated by the contour information is an obstacle contour, determine that the object acted on by the line laser is an obstacle.
In a possible implementation, the self-moving device compares the contour shape in the contour information with a template shape; if the contour shape matches the template shape, the contour indicated by the contour information is an obstacle contour; if the contour shape does not match the template shape, the contour indicated by the contour information is not an obstacle contour.
The template shapes include the contour shapes of various obstacles, for example the contour shape of a carpet, the contour shape of a step, or the contour shape of a wardrobe.
Additionally or alternatively, the self-moving device determines the number of bumps in the contour shape; when the number of bumps is greater than a number threshold, it determines that the contour indicated by the contour information is an obstacle contour; when the number of bumps is less than or equal to the number threshold, it determines that the contour indicated by the contour information is not an obstacle contour. A sketch of such a bump count follows.
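The sketch assumes the contour is given as the per-column image row of the laser line; the bump-height and count thresholds are placeholder values, since the application does not specify them:

```python
import numpy as np

def count_bumps(contour_rows, min_height_px=3):
    """Count runs of contour points that rise at least min_height_px above the
    surrounding baseline (image rows grow downward, so 'up' means a smaller row)."""
    rows = np.asarray(contour_rows, dtype=float)
    baseline = np.median(rows)
    raised = rows < baseline - min_height_px
    # each 0 -> 1 transition starts a new bump; count the first sample separately
    return int(np.sum(np.diff(raised.astype(int)) == 1) + (1 if raised[0] else 0))

def is_obstacle_contour(contour_rows, bump_count_threshold=2):
    return count_bumps(contour_rows) > bump_count_threshold

profile = [50] * 20 + [40] * 5 + [50] * 20 + [42] * 5 + [50] * 20 + [41] * 5 + [50] * 20
print(count_bumps(profile), is_obstacle_contour(profile))   # 3 True
```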
Of course, the self-moving device may also determine in other ways whether the contour indicated by the contour information is an obstacle contour; this embodiment does not list them one by one here.
In a possible implementation, after determining that the object acted on by the line laser is an obstacle, the self-moving device may further determine the vertical distance of the obstacle relative to the ground according to the contour information.
In one example, determining the vertical distance of the obstacle relative to the ground according to the contour information includes: acquiring a first vertical distance between the ground and the first line laser transmitter; determining, based on the laser ranging principle and the pixel coordinates of the contour information, a second vertical distance between the object acted on by the line laser and the first line laser transmitter; and determining the difference between the second vertical distance and the first vertical distance as the vertical distance.
Because the control component can measure the distance between the first line laser transmitter and the object based on the laser ranging principle and the pixel coordinates of the reflected light, and the emission angle of the line laser is pre-stored in the self-moving device, the second vertical distance between the object and the first line laser transmitter can be determined from that distance and the emission angle.
Acquiring the first vertical distance between the ground and the first line laser transmitter includes: for historical environment images that were collected previously and in which the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to a preset distance threshold, determining the vertical distance between the object and the first line laser transmitter based on the laser ranging principle and the pixel coordinates of the reflected light in the historical environment images, to obtain the first vertical distance. Because the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to the preset distance threshold, the object acted on by the line laser in the historical environment image is not an obstacle; in this case, the object is taken to be the ground by default.
In a possible implementation, determining the second vertical distance between the object acted on by the line laser and the first line laser transmitter includes: determining, based on the laser ranging principle and the pixel coordinates of the reflected light in the environment image, the vertical distance and the minimum vertical distance between the object acted on by the line laser and the first line laser transmitter, and determining the average of the vertical distance and the minimum vertical distance as the second vertical distance. Of course, the second vertical distance may also simply be the vertical distance; this embodiment does not limit how the second vertical distance is set. A sketch of this height estimate is given below.
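The sketch assumes that the two values being averaged are the largest and smallest per-point vertical distances between the laser-lit object and the emitter (the application does not say exactly which two values are meant), and that those per-point distances have already been obtained by laser ranging:

```python
def obstacle_vertical_distance(point_vertical_distances_mm, ground_distance_h1_mm):
    """H2 is taken here as the average of the largest and smallest per-point
    vertical distances between the object and the first line laser transmitter;
    the obstacle's vertical distance relative to the ground is then H2 - H1.
    The sign indicates whether the surface lies below or above the calibrated floor."""
    h2 = (max(point_vertical_distances_mm) + min(point_vertical_distances_mm)) / 2.0
    return h2 - ground_distance_h1_mm

# H1 calibrated from earlier frames in which the line laser hit bare floor
print(obstacle_vertical_distance([185.0, 180.0, 190.0], ground_distance_h1_mm=160.0))  # 25.0
```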
When the line laser hits flat ground, the ground information extracted from the image captured by the image acquisition component is flat and smooth. In this case, the first vertical distance H1 of the ground relative to the first line laser transmitter can be estimated.
Referring to Fig. 3, when the line laser hits a carpet, because of the edge characteristics of the carpet, the extracted ground information is irregular data with noise. In this case, the second vertical distance H2 of the carpet relative to the first line laser transmitter can be estimated. After the carpet is detected, the vertical distance of the carpet (H2 - H1) can be estimated from the first vertical distance H1 detected in the traveling direction of the self-moving device.
When the self-moving device further includes a second line laser transmitter, referring to Fig. 3, when the other line laser emitted by the second line laser transmitter hits the carpet, the extracted ground information is likewise irregular data with noise because of the edge characteristics of the carpet. In this case, the reflected light obtained by the other line laser being reflected by the carpet assists the self-moving device in determining the vertical distance of the carpet.
In summary, in the obstacle recognition method provided by this embodiment, a first line laser transmitter and an image acquisition component are installed on a self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the environment image sent by the image acquisition component is acquired; the pixel coordinates of the reflected light in the environment image are used to determine contour information of the object acted on by the line laser; and, when the contour indicated by the contour information is an obstacle contour, the object acted on by the line laser is determined to be an obstacle. This can solve the problem that existing obstacle recognition methods are easily affected by ambient light, which leads to inaccurate recognition results. Because the environment image is collected jointly by the first line laser transmitter and the image acquisition component, the image acquisition component can still capture an image of the reflected light of the line laser when the ambient light is dim, so obstacles can still be recognized under dim ambient light. In addition, because the contour information of the object is extracted from the pixel coordinates of the reflected light of the line laser and obstacles are identified from that contour information rather than by an image recognition algorithm, the device resources consumed in recognizing obstacles can be saved.
In a possible implementation, after step 203, the self-moving device may further determine a working strategy of the self-moving device based on the type of the obstacle, the working strategy being used to avoid or cross over the obstacle.
In one example: when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than a first threshold and less than a second threshold, it is determined that the working strategy of the self-moving device is to accelerate so as to cross over the obstacle; when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than or equal to the second threshold, it is determined that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle; when the obstacle is a step that lies below the ground and whose maximum vertical distance relative to the ground is greater than a third threshold and less than a fourth threshold, it is determined that the working strategy of the self-moving device is to slow down so as to descend to the height corresponding to the step; and when the obstacle is a step that lies below the ground and whose maximum vertical distance relative to the ground is greater than or equal to the fourth threshold, it is determined that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle. A sketch of this decision logic follows after the threshold notes below.
The second threshold is greater than the first threshold; this embodiment does not limit the values of the first and second thresholds.
The fourth threshold is greater than the third threshold; this embodiment does not limit the values of the third and fourth thresholds.
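A minimal sketch of the decision logic described above; the threshold values are placeholders (the application only requires the second threshold to exceed the first and the fourth to exceed the third), and the strategy strings are illustrative:

```python
def choose_work_strategy(obstacle_type, vertical_distance_mm,
                         t1=5.0, t2=20.0, t3=10.0, t4=40.0):
    """Map obstacle type and vertical distance relative to the ground to a
    working strategy, following the threshold rules described above."""
    if obstacle_type == "carpet":
        if t1 < vertical_distance_mm < t2:
            return "accelerate to cross over the carpet"
        if vertical_distance_mm >= t2:
            return "change driving direction to avoid the carpet"
    elif obstacle_type == "step_below_ground":
        if t3 < vertical_distance_mm < t4:
            return "slow down and descend to the step height"
        if vertical_distance_mm >= t4:
            return "change driving direction to avoid the step"
    return "continue normal travel"

print(choose_work_strategy("carpet", 12.0))   # accelerate to cross over the carpet
```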
In this embodiment, by adopting different working strategies based on different vertical distances of the obstacle relative to the ground, it can be ensured that the self-moving device does not get stuck on an obstacle or fall from a height, which would shorten the service life of the self-moving device.
Fig. 4 is a block diagram of an obstacle recognition apparatus provided by an embodiment of the present application. This embodiment is described taking as an example the case in which the apparatus is applied to the self-moving device shown in Fig. 1. The apparatus includes at least the following modules: an image acquisition module 410, a contour determination module 420 and an object recognition module 430.
The image acquisition module 410 is configured to acquire the environment image sent by the image acquisition component;
the contour determination module 420 is configured to use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
the object recognition module 430 is configured to determine, when the contour indicated by the contour information is an obstacle contour, that the object acted on by the line laser is an obstacle.
For related details, refer to the above method embodiment.
It should be noted that when the obstacle recognition apparatus provided in the above embodiment performs obstacle recognition, the division into the above functional modules is used only as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the obstacle recognition apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the obstacle recognition apparatus provided in the above embodiment and the obstacle recognition method embodiment belong to the same concept; for the specific implementation process, see the method embodiment, which is not repeated here.
Fig. 5 is a block diagram of an obstacle recognition apparatus provided by an embodiment of the present application; the apparatus may be the self-moving device shown in Fig. 1. The apparatus includes at least a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
The memory 502 may include one or more computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory and non-volatile memory, for example one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 502 is used to store at least one instruction, and the at least one instruction is executed by the processor 501 to implement the obstacle recognition method provided in the method embodiments of the present application.
In some embodiments, the obstacle recognition apparatus may optionally further include a peripheral device interface and at least one peripheral device. The processor 501, the memory 502 and the peripheral device interface may be connected by a bus or a signal line. Each peripheral device may be connected to the peripheral device interface by a bus, a signal line or a circuit board. Illustratively, peripheral devices include, but are not limited to, a radio frequency circuit, an image acquisition component, a line laser transmitter, an audio circuit and a power supply.
Of course, the obstacle recognition apparatus may also include fewer or more components; this embodiment does not limit this.
In a possible implementation, an embodiment of the present application further provides a computer-readable storage medium storing a program, and the program is loaded and executed by a processor to implement the obstacle recognition method of the foregoing method embodiments.
In a possible implementation, an embodiment of the present application further provides a computer product including a computer-readable storage medium; the computer-readable storage medium stores a program, and the program is loaded and executed by a processor to implement the obstacle recognition method of the foregoing method embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be regarded as falling within the scope described in this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (14)

  1. An obstacle recognition method, characterized in that it is used in a self-moving device, a first line laser transmitter and an image acquisition component being installed on the self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; and the method includes:
    acquiring the environment image sent by the image acquisition component;
    using the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
    when the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle.
  2. The method according to claim 1, characterized in that after determining that the object acted on by the line laser is an obstacle, the method further includes:
    determining the vertical distance of the obstacle relative to the ground according to the contour information.
  3. The method according to claim 2, characterized in that determining the vertical distance of the obstacle relative to the ground according to the contour information includes:
    acquiring a first vertical distance between the ground and the first line laser transmitter;
    determining, based on the laser ranging principle and the pixel coordinates of the contour information, a second vertical distance between the object acted on by the line laser and the first line laser transmitter; and
    determining the difference between the second vertical distance and the first vertical distance as the vertical distance.
  4. The method according to claim 3, characterized in that acquiring the first vertical distance between the ground and the first line laser transmitter includes:
    for historical environment images that were collected previously and in which the vertical distance of the object acted on by the line laser relative to the ground is less than or equal to a preset distance threshold, determining the vertical distance between the object and the first line laser transmitter based on the laser ranging principle and the pixel coordinates of the reflected light in the historical environment images, to obtain the first vertical distance.
  5. The method according to claim 1, characterized in that a second line laser transmitter is further provided on the self-moving device, the second line laser transmitter is used to emit another line laser, and the emission direction of the other line laser is different from the emission direction of the line laser; the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
  6. The method according to any one of claims 1 to 5, characterized in that after determining that the object acted on by the line laser is an obstacle, the method further includes:
    determining a working strategy of the self-moving device based on the type of the obstacle, the working strategy being used to avoid or cross over the obstacle.
  7. The method according to claim 6, characterized in that determining the working strategy of the self-moving device based on the type of the obstacle includes:
    when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than a first threshold and less than a second threshold, determining that the working strategy of the self-moving device is to accelerate so as to cross over the obstacle;
    when the obstacle is a carpet and the vertical distance of the carpet relative to the ground is greater than or equal to the second threshold, determining that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle;
    when the obstacle is a step, the step lies below the ground, and its maximum vertical distance relative to the ground is greater than a third threshold and less than a fourth threshold, determining that the working strategy of the self-moving device is to slow down so as to descend to the height corresponding to the step; and
    when the obstacle is a step, the step lies below the ground, and its maximum vertical distance relative to the ground is greater than or equal to the fourth threshold, determining that the working strategy of the self-moving device is to change the driving direction so as to avoid the obstacle.
  8. The method according to claim 1, characterized in that after the contour information of the object acted on by the line laser is determined using the pixel coordinates of the reflected light in the environment image, the method further includes: determining the number of bumps in the contour shape of the contour information; and
    when the number of bumps is greater than a number threshold, so that a non-image-recognition algorithm determines that the contour indicated by the contour information is an obstacle contour, determining that the object acted on by the line laser is an obstacle; and when the number of bumps is less than or equal to the number threshold, determining that the contour indicated by the contour information is not an obstacle contour.
  9. The method according to claim 1, characterized in that when the line laser hits flat ground, the ground information extracted from the environment image captured by the image acquisition component is flat and smooth; when the obstacle is a carpet and the line laser hits the carpet, the ground information extracted from the environment image captured by the image acquisition component is irregular data with noise.
  10. An obstacle recognition apparatus, characterized in that it is used in a self-moving device, a first line laser transmitter and an image acquisition component being installed on the self-moving device; the first line laser transmitter is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; and the apparatus includes:
    an image acquisition module, configured to acquire the environment image sent by the image acquisition component;
    a contour determination module, configured to use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
    an object recognition module, configured to determine, when the contour indicated by the contour information is an obstacle contour, that the object acted on by the line laser is an obstacle.
  11. An obstacle recognition apparatus, characterized in that the apparatus includes a processor and a memory; the memory stores a program, and the program is loaded and executed by the processor to implement the obstacle recognition method according to any one of claims 1 to 9.
  12. A computer-readable storage medium, characterized in that the storage medium stores a program, and when the program is executed by a processor, it is used to implement the obstacle recognition method according to any one of claims 1 to 9.
  13. A self-moving device, characterized in that the self-moving device includes a housing, a first line laser transmitter, an image acquisition component and a control component; the first line laser transmitter is arranged on the housing and is used to emit a line laser obliquely downward along the direction of travel; the image acquisition component is used to collect an environment image including the reflected light obtained by the line laser being reflected by an object; the first line laser transmitter and the image acquisition component are respectively communicatively connected to the control component; and the control component is configured to:
    acquire the environment image sent by the image acquisition component;
    use the pixel coordinates of the reflected light in the environment image to determine contour information of the object acted on by the line laser; and
    when the contour indicated by the contour information is an obstacle contour, determine that the object acted on by the line laser is an obstacle.
  14. The self-moving device according to claim 13, characterized in that the self-moving device further includes a second line laser transmitter; the second line laser transmitter is used to emit another line laser, and the emission direction of the other line laser is different from the emission direction of the line laser; the environment image further includes the reflected light obtained by the other line laser being reflected by an object.
PCT/CN2021/076964 2020-04-22 2021-02-20 障碍物识别方法、装置、自移动设备及存储介质 WO2021212986A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020227018029A KR20220086682A (ko) 2020-04-22 2021-02-20 장애물 식별 방법, 장치, 자율 이동 디바이스 및 저장 매체
JP2022541668A JP7383828B2 (ja) 2020-04-22 2021-02-20 障害物認識方法、装置、自律移動機器及び記憶媒体
EP21793582.4A EP4050378A4 (en) 2020-04-22 2021-02-20 OBSTACLE DETECTION METHOD AND DEVICE, SELF-MOVING DEVICE AND STORAGE MEDIA
US17/747,957 US20220273152A1 (en) 2020-04-22 2022-05-18 Obstacle identification method, apparatus, self-moving device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010322554.9A CN111538034B (zh) 2020-04-22 2020-04-22 障碍物识别方法、装置及存储介质
CN202010322554.9 2020-04-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/747,957 Continuation US20220273152A1 (en) 2020-04-22 2022-05-18 Obstacle identification method, apparatus, self-moving device and storage medium

Publications (1)

Publication Number Publication Date
WO2021212986A1 true WO2021212986A1 (zh) 2021-10-28

Family

ID=71973097

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076964 WO2021212986A1 (zh) 2020-04-22 2021-02-20 障碍物识别方法、装置、自移动设备及存储介质

Country Status (6)

Country Link
US (1) US20220273152A1 (zh)
EP (1) EP4050378A4 (zh)
JP (1) JP7383828B2 (zh)
KR (1) KR20220086682A (zh)
CN (3) CN113296117B (zh)
WO (1) WO2021212986A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114815821A (zh) * 2022-04-19 2022-07-29 山东亚历山大智能科技有限公司 基于多线激光雷达的室内自适应全景避障方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113296117B (zh) * 2020-04-22 2023-08-08 追觅创新科技(苏州)有限公司 障碍物识别方法、装置及存储介质
CN112417944B (zh) * 2020-08-31 2024-04-16 深圳银星智能集团股份有限公司 一种机器人控制方法及电子设备
CN114445440A (zh) * 2020-11-03 2022-05-06 苏州科瓴精密机械科技有限公司 一种应用于自行走设备的障碍物识别方法及自行走设备
CN114617476A (zh) * 2021-06-02 2022-06-14 北京石头创新科技有限公司 自移动设备
CN113848943B (zh) * 2021-10-18 2023-08-08 追觅创新科技(苏州)有限公司 栅格地图的修正方法及装置、存储介质及电子装置
CN116069004A (zh) * 2021-10-29 2023-05-05 追觅创新科技(苏州)有限公司 自移动设备、自移动设备的障碍物边缘确定方法及介质
CN117048596A (zh) * 2023-08-04 2023-11-14 广州汽车集团股份有限公司 避让障碍物的方法、装置、车辆及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011060113A (ja) * 2009-09-11 2011-03-24 Mitsubishi Fuso Truck & Bus Corp 車両の衝突回避支援装置
CN105074600A (zh) * 2013-02-27 2015-11-18 夏普株式会社 周围环境识别装置、使用其的自主移动系统以及周围环境识别方法
CN107632308A (zh) * 2017-08-24 2018-01-26 吉林大学 一种基于递归叠加算法的车辆前方障碍物轮廓检测方法
CN110353583A (zh) * 2019-08-21 2019-10-22 追创科技(苏州)有限公司 扫地机器人及扫地机器人的自动控制方法
CN110621208A (zh) * 2017-06-02 2019-12-27 伊莱克斯公司 检测机器人清洁设备前方的表面的高度差的方法
CN111538034A (zh) * 2020-04-22 2020-08-14 追创科技(苏州)有限公司 障碍物识别方法、装置及存储介质

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9203448D0 (en) * 1992-02-18 1992-04-01 British Steel Plc Shape detection
US7079943B2 (en) * 2003-10-07 2006-07-18 Deere & Company Point-to-point path planning
JP2006146778A (ja) 2004-11-24 2006-06-08 Konica Minolta Photo Imaging Inc ヘッドマウントディスプレイ装置
JP4241651B2 (ja) * 2005-03-16 2009-03-18 パナソニック電工株式会社 移動装置
CN101227539B (zh) * 2007-01-18 2010-09-29 联想移动通信科技有限公司 导盲手机及导盲方法
JP5372680B2 (ja) 2009-09-24 2013-12-18 日立オートモティブシステムズ株式会社 障害物検知装置
CN102495672A (zh) * 2011-10-20 2012-06-13 广州市迪拓信息科技有限公司 一种体感控制中位置判断的方法
JP5862623B2 (ja) * 2013-08-08 2016-02-16 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
US11176655B2 (en) * 2014-01-27 2021-11-16 Cognex Corporation System and method for determining 3D surface features and irregularities on an object
JP2017535279A (ja) * 2014-09-23 2017-11-30 宝時得科技(中国)有限公司Positec Technology (China) Co.,Ltd. 自動移動ロボット
JP2016211852A (ja) * 2015-04-28 2016-12-15 一般財団法人電力中央研究所 表面凹凸の計測方法及び計測装置
KR101762504B1 (ko) * 2015-08-31 2017-07-28 고려대학교 산학협력단 레이저 거리 센서를 이용한 바닥 장애물 검출 방법
JP6677516B2 (ja) * 2016-01-21 2020-04-08 シャープ株式会社 自律走行装置
CN108008411A (zh) * 2016-10-31 2018-05-08 张舒怡 一种用于自动驾驶的传感器
CN107589625B (zh) * 2017-09-30 2020-05-29 歌尔科技有限公司 投影仪的自动变焦方法和投影仪
CN109753982B (zh) * 2017-11-07 2021-09-03 北京京东乾石科技有限公司 障碍点检测方法、装置和计算机可读存储介质
CN108444390B (zh) * 2018-02-08 2021-01-26 天津大学 一种无人驾驶汽车障碍物识别方法及装置
CN109166125B (zh) * 2018-07-06 2021-03-12 长安大学 一种基于多边缘融合机制的三维深度图像分割算法
CN209803009U (zh) * 2018-10-30 2019-12-17 四川晴测科技有限公司 路面裂纹检测装置
CN109350018B (zh) * 2019-01-08 2019-04-26 湖南超能机器人技术有限公司 应用于手掌疱疹检测系统的基于图像的手掌检测方法
CN109782807B (zh) * 2019-03-08 2021-10-01 哈尔滨工程大学 一种回形障碍物环境下的auv避障方法
CN110298853B (zh) * 2019-07-04 2021-05-25 易思维(杭州)科技有限公司 面差视觉检测方法
CN110928301B (zh) * 2019-11-19 2023-06-30 北京小米智能科技有限公司 一种检测微小障碍的方法、装置及介质
CN110928315A (zh) * 2019-12-20 2020-03-27 深圳市杉川机器人有限公司 自主机器人及其控制方法
CN110989631B (zh) * 2019-12-30 2022-07-12 科沃斯机器人股份有限公司 自移动机器人控制方法、装置、自移动机器人和存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011060113A (ja) * 2009-09-11 2011-03-24 Mitsubishi Fuso Truck & Bus Corp 車両の衝突回避支援装置
CN105074600A (zh) * 2013-02-27 2015-11-18 夏普株式会社 周围环境识别装置、使用其的自主移动系统以及周围环境识别方法
CN110621208A (zh) * 2017-06-02 2019-12-27 伊莱克斯公司 检测机器人清洁设备前方的表面的高度差的方法
CN107632308A (zh) * 2017-08-24 2018-01-26 吉林大学 一种基于递归叠加算法的车辆前方障碍物轮廓检测方法
CN110353583A (zh) * 2019-08-21 2019-10-22 追创科技(苏州)有限公司 扫地机器人及扫地机器人的自动控制方法
CN111538034A (zh) * 2020-04-22 2020-08-14 追创科技(苏州)有限公司 障碍物识别方法、装置及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4050378A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114815821A (zh) * 2022-04-19 2022-07-29 山东亚历山大智能科技有限公司 基于多线激光雷达的室内自适应全景避障方法及系统
CN114815821B (zh) * 2022-04-19 2022-12-09 山东亚历山大智能科技有限公司 基于多线激光雷达的室内自适应全景避障方法及系统

Also Published As

Publication number Publication date
CN113189614B (zh) 2023-08-11
EP4050378A4 (en) 2023-05-24
US20220273152A1 (en) 2022-09-01
KR20220086682A (ko) 2022-06-23
CN111538034A (zh) 2020-08-14
EP4050378A1 (en) 2022-08-31
CN113189614A (zh) 2021-07-30
CN113296117B (zh) 2023-08-08
JP7383828B2 (ja) 2023-11-20
JP2023500994A (ja) 2023-01-17
CN111538034B (zh) 2021-05-28
CN113296117A (zh) 2021-08-24

Similar Documents

Publication Publication Date Title
WO2021212986A1 (zh) 障碍物识别方法、装置、自移动设备及存储介质
US20230014874A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
CN111932943B (zh) 动态目标的检测方法、装置、存储介质及路基监测设备
CN108303096B (zh) 一种视觉辅助激光定位系统及方法
US20200167993A1 (en) Map constructing apparatus and map constructing method
CN110442120B (zh) 控制机器人在不同场景下移动的方法、机器人及终端设备
CN111814752B (zh) 室内定位实现方法、服务器、智能移动设备、存储介质
CN113907663B (zh) 障碍物地图构建方法、清洁机器人及存储介质
KR20210061839A (ko) 전자 장치 및 그 제어 방법
TW202029134A (zh) 行車偵測方法、車輛及行車處理裝置
US11539871B2 (en) Electronic device for performing object detection and operation method thereof
US20210156697A1 (en) Method and device for image processing and mobile apparatus
WO2023045749A1 (zh) 充电设备、自移动设备、充电方法、系统及存储介质
WO2023216555A1 (zh) 基于双目视觉的避障方法、装置、机器人及介质
CN116486130A (zh) 障碍物识别的方法、装置、自移动设备及存储介质
CN115973144A (zh) 一种自动泊车识别障碍物的方法、装置、电子设备及介质
CN114995387A (zh) 一种智慧型引导运输车的控制方法和装置
US20210256720A1 (en) Vanishing point extraction devices and methods of extracting vanishing point
KR20220021581A (ko) 로봇 및 이의 제어 방법
CN112200130A (zh) 一种三维目标检测方法、装置及终端设备
US20220343530A1 (en) On-floor obstacle detection method and mobile machine using the same
CN115797412B (zh) 动态对象异常值并行检测方法、装置、系统、设备及介质
US11688176B2 (en) Devices and methods for calibrating vehicle cameras
US20220284707A1 (en) Target detection and control method, system, apparatus and storage medium
US20230298357A1 (en) Information processing device and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21793582

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20227018029

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021793582

Country of ref document: EP

Effective date: 20220525

ENP Entry into the national phase

Ref document number: 2022541668

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE