WO2022041737A1 - Distance measurement method and apparatus, robot, and storage medium - Google Patents

Distance measurement method and apparatus, robot, and storage medium

Info

Publication number
WO2022041737A1
WO2022041737A1 (PCT/CN2021/085877)
Authority
WO
WIPO (PCT)
Prior art keywords
image
measured
distance
acquisition device
image acquisition
Prior art date
Application number
PCT/CN2021/085877
Other languages
English (en)
French (fr)
Inventor
于炀
吴震
Original Assignee
北京石头世纪科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京石头世纪科技股份有限公司
Priority to US18/023,846 (published as US20240028044A1)
Publication of WO2022041737A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24Floor-sweeping machines, motor-driven
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/60Intended control result
    • G05D1/617Safety or protection, e.g. defining protection zones around obstacles or avoiding hazards
    • G05D1/622Obstacle avoidance
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/24Arrangements for determining position or orientation
    • G05D1/243Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
    • G05D1/2435Extracting 3D information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04Automatic control of the travelling movement; Automatic obstacle detection
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2105/00Specific applications of the controlled vehicles
    • G05D2105/10Specific applications of the controlled vehicles for cleaning, vacuuming or polishing
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2107/00Specific environments of the controlled vehicles
    • G05D2107/40Indoor domestic environment
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2109/00Types of controlled vehicles
    • G05D2109/10Land vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2111/00Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D2111/10Optical signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2111/00Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D2111/60Combination of two or more signals
    • G05D2111/63Combination of two or more signals of the same type, e.g. stereovision or optical flow
    • G05D2111/64Combination of two or more signals of the same type, e.g. stereovision or optical flow taken simultaneously from spaced apart sensors, e.g. stereovision
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Definitions

  • the present disclosure relates to the technical field of distance measurement of sweeping robots, and in particular, to a distance measurement method, device, robot and storage medium.
  • In the process of identifying obstacles, existing cleaning robots often use lidar to scan their surroundings continuously, using the reflected signals to determine whether obstacles are present and performing obstacle avoidance operations when they are.
  • However, lidar is relatively expensive and bulky, which brings many drawbacks for miniaturized sweeping robots.
  • There are also sweeping robots that use cameras to identify obstacles, but these suffer from complex algorithms and low calculation accuracy.
  • embodiments of the present disclosure provide a distance measuring method, device, robot, and storage medium, so as to enable the robot to accurately calculate the target distance of obstacles.
  • An embodiment of the present disclosure provides a ranging method applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, including: after an object to be measured is identified in a first image captured by the first image acquisition device, determining a first distance of the object to be measured relative to the self-moving robot, wherein the first image contains at least the object to be measured and the surface on which the object to be measured is located; selecting a point located on the object to be measured in the first image as a reference point; determining an initial parallax according to the first distance; in a second image captured by the second image acquisition device, determining a region of interest according to the initial parallax and a preset parallax range, and determining the position of the reference point in the region of interest as a first target point; determining a second target point in the region of interest based on the first target point; determining an actual parallax distance between the first image acquisition device and the second image acquisition device using the position of the second target point; and calculating depth information of the object to be measured according to the actual parallax distance.
  • the acquisition time difference between the first image and the second image does not exceed a preset value.
  • Optionally, determining a second target point in the region of interest based on the first target point specifically includes: determining, based on the first target point, a point in the region of interest whose image matches the first target point as the second target point.
  • Optionally, determining the first distance of the object to be measured relative to the self-moving robot from the first image captured by the first image acquisition device, wherein the first image contains at least the object to be measured and the surface on which the object to be measured is located, includes: acquiring the first image of the object to be measured through the first image acquisition device, wherein the first image contains at least an image of the object to be measured and a ground image extending from the first image acquisition device to the object to be measured; determining an object area of the object to be measured in the first image, wherein the object area is the smallest rectangle enclosing the object to be measured; and determining the first distance of the object to be measured based on the lower edge position of the object area and the lower edge position of the first image, wherein the first distance refers to the distance between the object to be measured and the first image acquisition device.
  • Optionally, determining the first distance of the object to be measured based on the lower edge position of the object area and the lower edge position of the first image includes: determining a reference position in the first image as the coordinate origin; selecting any point along the lower side of the smallest rectangle as a first reference point, and determining a second reference point on the lower edge of the image according to the first reference point; and calculating the first distance of the object to be measured according to the position coordinates of the first reference point and the second reference point.
  • Optionally, acquiring the first image of the object to be measured through the first image acquisition device includes: acquiring a field-of-view image through the first image acquisition device; and performing quality detection on the field-of-view image and deleting frames without an object to be measured, to obtain an image including the object to be measured.
  • Optionally, performing quality detection on the field-of-view image and deleting frames without an object to be measured to obtain an image including the object to be measured includes: performing edge filtering in the y direction on the field-of-view image, and projecting the filtered image in the x direction; taking the maximum value of the projected one-dimensional image signal; when the maximum value is less than a preset threshold, judging the field-of-view image to be a frame without an object to be measured and deleting that frame; and when the maximum value is greater than or equal to the preset threshold, judging the field-of-view image to be a frame with an object to be measured and retaining that frame.
  • An embodiment of the present disclosure provides a distance measuring device applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, including: an acquisition unit configured to determine, after an object to be measured is identified in the first image captured by the first image acquisition device, a first distance of the object to be measured relative to the self-moving robot, wherein the first image contains at least the object to be measured and the surface on which the object to be measured is located; a selection unit configured to select a point located on the object to be measured in the first image as a reference point; a first determination unit configured to determine an initial parallax according to the first distance, determine a region of interest in the second image captured by the second image acquisition device according to the initial parallax and a preset parallax range, and determine the position of the reference point in the region of interest as a first target point; a second determination unit configured to determine a second target point in the region of interest based on the first target point, and determine an actual parallax distance between the first image acquisition device and the second image acquisition device using the position of the second target point; and a calculation unit configured to calculate depth information of the object to be measured according to the actual parallax distance.
  • Optionally, the acquisition unit is further configured to: acquire a first image of the object to be measured through the first image acquisition device, wherein the first image contains at least an image of the object to be measured and a ground image extending from the first image acquisition device to the object to be measured; determine an object area of the object to be measured in the first image, wherein the object area is the smallest rectangle enclosing the object to be measured; and determine a first distance of the object to be measured based on the lower edge position of the object area and the lower edge position of the first image, wherein the first distance refers to the distance between the object to be measured and the first image acquisition device.
  • Optionally, the acquisition unit is further configured to: determine a reference position in the first image as the coordinate origin; select any point along the lower side of the smallest rectangle as a first reference point, and determine a second reference point on the lower edge of the image according to the first reference point; and calculate the first distance of the object to be measured according to the position coordinates of the first reference point and the second reference point.
  • An embodiment of the present disclosure provides a robot including a processor and a memory, wherein the memory stores computer program instructions executable by the processor, and the processor, when executing the computer program instructions, implements the method steps of any of the above implementations.
  • Embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer program instructions, the computer program instructions implementing any of the method steps described above when invoked and executed by a processor.
  • Compared with the prior art, the present disclosure has at least the following technical effects:
  • Embodiments of the present disclosure provide a ranging method, device, robot, and storage medium, wherein the ranging method exploits the fact that the camera of a sweeping robot is close to the ground to obtain an image of the target object together with a ground image, analyzes the image features to obtain the depth distance of the target object under the first image acquisition device, and then combines this with a binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the depth distance obtained monocularly and finally yielding a more accurate object distance.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure
  • FIG. 2 is a perspective view of the structure of a cleaning robot according to an embodiment of the present disclosure
  • FIG. 3 is a top view of the structure of a cleaning robot provided by an embodiment of the present disclosure.
  • FIG. 4 is a bottom view of the structure of a cleaning robot provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of a ranging method for a cleaning robot provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of an image acquired by a cleaning robot according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic flowchart of a monocular ranging method for a cleaning robot provided by an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of area search in the binocular ranging method for a cleaning robot provided by an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of the geometric structure of the method for calculating binocular distance measurement of a sweeping robot according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic geometrical structure diagram of a method for calculating binocular distance measurement of a cleaning robot provided by another embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a distance measuring device for a cleaning robot provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of an electronic structure of a robot according to an embodiment of the present disclosure.
  • It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish elements from one another. For example, without departing from the scope of the embodiments of the present disclosure, a first element may also be referred to as a second element, and similarly, a second element may also be referred to as a first element.
  • the embodiment of the present disclosure provides a possible application scenario, and the application scenario includes an automatic cleaning device 100, such as a sweeping robot, a mopping robot, a vacuum cleaner, a lawn mower, and the like.
  • a household sweeping robot is used as an example for illustration.
  • During operation of the sweeping robot, the forward field-of-view image is obtained in real time through the camera at the front end of the sweeping robot, and obstacle avoidance or other operations are performed according to the analysis of the field-of-view image, for example, identifying the obstacle 200, judging the category of the obstacle by searching and comparing against a stored database, and executing different schemes for different categories.
  • the robot may be provided with a touch-sensitive display or controlled by a mobile terminal to receive operation instructions input by the user.
  • The sweeping robot can be equipped with various sensors, such as a buffer, a cliff sensor, an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope, an odometer and other sensing devices, and the robot can also be equipped with wireless communication modules such as a WIFI module and a Bluetooth module, to connect with an intelligent terminal or a server and receive operation instructions transmitted by the intelligent terminal or the server through the wireless communication module.
  • the automatic cleaning apparatus 100 can travel on the ground through various combinations of movement relative to three mutually perpendicular axes defined by the body 110 : a front-to-back axis X, a lateral axis Y, and a central vertical axis Z.
  • the forward drive direction along the front-rear axis X is designated “forward” and the rearward drive direction along the front-rear axis X is designated “rear”.
  • the direction of the transverse axis Y is substantially the direction extending between the right and left wheels of the robot along the axis defined by the center point of the drive wheel module 141 .
  • The automatic cleaning device 100 can rotate about the Y axis: it is "pitched up" when the forward portion of the automatic cleaning device 100 is tilted upward and the rearward portion is tilted downward, and "pitched down" when the forward portion is tilted downward and the rearward portion is tilted upward.
  • In addition, the robot 100 can rotate about the Z axis: in the forward direction of the automatic cleaning device 100, it is "turning right" when the automatic cleaning device 100 is tilted to the right of the X axis, and "turning left" when it is tilted to the left of the X axis.
  • the automatic cleaning device 100 includes a machine body 110 , a sensing system 120 , a control system, a driving system 140 , a cleaning system, an energy system, and a human-computer interaction system 180 .
  • The machine body 110 includes a forward portion 111 and a rearward portion 112, and has an approximately circular shape (circular at both front and rear); it may also have other shapes, including but not limited to an approximately D shape that is squared at the front and rounded at the rear, or a rectangular or square shape at both front and rear.
  • The sensing system 120 includes a position determination device 121 located on the machine body 110, a collision sensor and a proximity sensor provided on the buffer 122 of the forward portion 111 of the machine body 110, a cliff sensor provided on the lower part of the machine body, and sensing devices such as a magnetometer, an accelerometer, a gyroscope (Gyro) and an odometer (ODO) arranged inside the machine body, which provide the control system 130 with various position information and motion state information of the machine.
  • The position determination device 121 includes, but is not limited to, a camera and a laser distance sensor (LDS).
  • The forward portion 111 of the machine body 110 can carry the buffer 122. While the driving wheel module 141 propels the robot across the floor during cleaning, the buffer 122 detects one or more events in the travel path of the automatic cleaning device 100 via a sensor system disposed on it, such as an infrared sensor, and the automatic cleaning device 100 can respond to events detected by the buffer 122, such as obstacles and walls, by controlling the driving wheel module 141, for example by moving away from the obstacle.
  • The control system 130 is arranged on a circuit board in the machine body 110 and includes a computing processor, such as a central processing unit or an application processor, that communicates with non-transitory memory such as a hard disk, flash memory or random access memory. Using a localization algorithm, such as Simultaneous Localization And Mapping (SLAM), the application processor draws a real-time map of the environment in which the robot is located.
  • the drive system 140 may maneuver the robot 100 to travel across the ground based on drive commands having distance and angle information (eg, x, y, and theta components).
  • the driving system 140 includes a driving wheel module 141, which can control the left and right wheels at the same time.
  • The driving wheel module 141 preferably includes a left driving wheel module and a right driving wheel module.
  • the left and right drive wheel modules are opposed along a lateral axis defined by the body 110 .
  • the robot may include one or more driven wheels 142, and the driven wheels include but are not limited to universal wheels.
  • the driving wheel module includes a traveling wheel, a driving motor and a control circuit for controlling the driving motor.
  • the driving wheel module can also be connected to a circuit for measuring driving current and an odometer.
  • the driving wheel module 141 can be detachably connected to the main body 110 for easy disassembly and maintenance.
  • The drive wheel may have a biased drop suspension system, movably fastened, for example rotatably attached, to the robot body 110, and receiving a spring bias that is biased downward and away from the robot body 110. The spring bias allows the drive wheels to maintain contact and traction with the ground with a certain ground force, while the cleaning elements of the automatic cleaning device 100 also contact the ground with a certain pressure.
  • the cleaning system may be a dry cleaning system and/or a wet cleaning system.
  • In a dry cleaning system, the main cleaning function comes from the cleaning system 151 composed of a roller brush, a dust box, a fan, an air outlet, and the connecting parts between the four. The roller brush, which has a certain interference with the ground, sweeps up garbage on the ground and rolls it to the front of the suction port between the roller brush and the dust box, where it is sucked into the dust box by the airflow generated by the fan and passing through the dust box.
  • the dry cleaning system may also include a side brush 152 having an axis of rotation angled relative to the ground for moving debris into the rolling brush area of the cleaning system.
  • the energy system includes rechargeable batteries such as NiMH and Lithium batteries.
  • the rechargeable battery can be connected with a charging control circuit, a battery pack charging temperature detection circuit and a battery undervoltage monitoring circuit, and the charging control circuit, the battery pack charging temperature detection circuit, and the battery undervoltage monitoring circuit are then connected with the single-chip microcomputer control circuit.
  • The host is charged by connecting to a charging pile through charging electrodes arranged on the side of or below the fuselage. If dust adheres to the exposed charging electrodes, the plastic body around the electrodes may melt and deform due to the charge accumulation effect during charging, and even the electrodes themselves may deform, making it impossible to continue normal charging.
  • The human-computer interaction system 180 includes buttons on the host panel for the user to select functions; it may also include a display screen and/or indicator lights and/or a speaker, which show the user the current state of the machine or the available function selections; and it may also include a mobile client program.
  • The mobile client can show the user a map of the environment where the equipment is located, as well as the location of the machine, providing the user with richer and more user-friendly function items.
  • An embodiment of the present disclosure provides a ranging method that exploits the fact that the camera of a sweeping robot is close to the ground to obtain a target image and a ground image, analyzes the image features to obtain the depth distance of the target object under the first image acquisition device, and then combines this with a binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the depth distance obtained monocularly and finally yielding a more accurate object distance.
  • a ranging method applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, specifically includes the following method steps:
  • Step S502: Identify the object to be measured in the first image captured by the first image acquisition device, and determine a first distance of the object to be measured relative to the self-moving robot (which may be referred to as the monocular distance for short), wherein the first image includes at least the object to be measured and the surface on which the object to be measured is located.
  • As shown in FIG. 1, the first image acquisition device (e.g., a camera) sends the forward field-of-view image to the control system, and the control system gives an analysis result of the field-of-view image according to computations performed by the sweeping robot itself or at a remote end, and then controls the drive system to perform obstacle avoidance or other operations.
  • the object to be measured refers to any obstacle encountered by the sweeping robot in the process of traveling.
  • the sweeping robot can classify the relevant categories of obstacles in advance and store them in its own or remote storage system.
  • the obstacle image pre-stored in the storage system can be called to judge the category information of the current obstacle, and related operations can be performed according to the category information.
  • When the object to be measured is identified, this can also be understood as detecting the existence of an obstacle encountered during travel, without needing to identify its type.
  • The image obtained by the first image acquisition device at the front end of the cleaning robot includes the image of the object to be measured 601 on the ground, as well as other scene images in the forward field of view of the first image acquisition device. Since the first image acquisition device is located at the front of the sweeping robot at a low height from the ground, the field-of-view image includes the ground image extending from the first image acquisition device to the object to be measured, as indicated by the scale in FIG. 6. The ground image is used to calculate the depth distance from the object to be measured 601 to the first image acquisition device (e.g., a camera).
  • the determining the first distance of the object to be measured relative to the self-moving robot includes: determining the first distance of the object to be measured based on the lower edge position of the object area and the lower edge position of the first image.
  • The object to be measured 601 on the ground is captured by the first image acquisition device at the front end of the sweeping robot, and a corresponding minimum rectangle 602 is constructed based on the object to be measured 601 so that the minimum rectangle 602 just envelops the object to be measured 601. Selecting the smallest rectangle as the circumscribed area makes it convenient to select any point on its lower edge to calculate the first distance.
  • The object region ROI (Region Of Interest) of the object to be measured is determined in the first image in the above manner, wherein, as an example, the object region is the smallest rectangle enclosing the object to be measured.
  • The object area can also be a shape other than the smallest rectangle, such as a circumscribed circle, an ellipse, or any specific shape; it can also be the lower edge line where the object to be measured contacts the ground.
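  • For illustration only, the following is a minimal Python sketch of constructing such a smallest enclosing rectangle from a binary object mask; the mask input and the (x, y, w, h) output convention are assumptions, not part of the disclosure.

        import numpy as np

        # Minimum rectangle enveloping the object, from a binary mask whose
        # nonzero pixels mark the detected object (array origin: top-left).
        def object_area(mask: np.ndarray):
            ys, xs = np.nonzero(mask)
            x0, y0 = int(xs.min()), int(ys.min())
            w, h = int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1
            return x0, y0, w, h  # smallest rectangle containing the object

        mask = np.zeros((8, 8), dtype=bool)
        mask[2:5, 3:7] = True
        print(object_area(mask))  # (3, 2, 4, 3)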
  • the method for determining the first distance of the object to be measured based on the lower edge position of the object area and the lower edge position of the first image includes the following sub-steps, such as As shown in Figure 7:
  • Step S5021: Determine a reference position in the first image as the coordinate origin.
  • For example, the lower left corner of the first image may be selected as the coordinate origin, with the horizontal direction as the x direction, the vertical direction as the y direction, and the direction perpendicular to the image as the z direction.
  • the selection position of the origin of the image coordinates is not unique, and can be selected arbitrarily according to the needs of analyzing the data.
  • Step S5022: As one implementation, select any point along the lower side of the minimum rectangle as the first reference point 603, and determine the second reference point 604 on the lower edge of the first image according to the first reference point.
  • The first reference point 603 may be any point along the lower side of the minimum rectangle; if the object area is a circle, the lowest point is selected as the first reference point.
  • The second reference point 604 is the point where a vertical line extended downward from the first reference point 603 intersects the lower edge of the first image, that is, the lowest point of the imaged ground in the first image, so that the positional relationship between the first reference point 603 and the second reference point 604 can be used to calculate the distance from the object to be measured to the camera; in other words, the distance from the object to be measured to the camera is obtained through the ground distance.
  • Step S5023: Calculate the first distance of the object to be measured according to the position coordinates of the first reference point and the second reference point.
  • Suppose the coordinates of the first reference point 603 are (x1, y1) and the coordinates of the second reference point 604 are (x2, y2); the distance between the first reference point 603 and the second reference point 604 can then be calculated. Specifically, the pixel position of the first reference point 603 and the pixel position of the second reference point 604 are obtained, the pixel distance between the two points is estimated, and the actual distance between the first reference point 603 and the second reference point 604 is then determined according to the relationship between the actual height of the object to be measured and its pixels.
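  • As a hedged illustration of this step, the Python sketch below converts the pixel span between the two reference points into a physical first distance through a single calibrated scale factor; the lower-left coordinate origin follows step S5021, while the linear pixels-to-millimetres mapping is an assumed simplification of the height-to-pixel relationship described above.

        # First distance from the two reference points. With the origin at the
        # lower-left image corner (step S5021), the second reference point below
        # (x1, y1) is simply (x1, 0) on the image's lower edge.
        def first_distance(first_ref, mm_per_pixel):
            x1, y1 = first_ref
            second_ref = (x1, 0)                  # intersection with the lower edge
            ground_pixels = y1 - second_ref[1]    # pixel distance between the points
            return ground_pixels * mm_per_pixel   # assumed calibrated ground scale

        print(first_distance((320, 75), 4.0))     # 75 px of ground -> 300.0 mm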
  • As an optional implementation, the method for acquiring an image of the object to be measured through the first image acquisition device includes: acquiring a field-of-view image through the first image acquisition device; and performing quality detection on the field-of-view image and deleting frames without an object to be measured, to obtain an image including the object to be measured.
  • Specifically, this includes: performing edge filtering in the y direction on the field-of-view image, and projecting the filtered image in the x direction; taking the maximum value of the projected one-dimensional image signal (for example, if the projection yields a one-dimensional extent of 80-100 pixels in the x direction according to the position parameter, the value of 100 pixels is taken); when the maximum value is less than a preset threshold, judging the field-of-view image to be a frame without an object to be measured and deleting that frame; and when the maximum value is greater than or equal to the preset threshold, judging the field-of-view image to be a frame with an object to be measured and retaining that frame. For example, if the threshold is set to 50 pixels, a frame whose projected maximum exceeds 50 pixels is considered a valid frame; otherwise it is an invalid frame.
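  • A minimal Python sketch of this frame-quality check follows, assuming a grayscale frame held in a NumPy array; using a vertical first difference as the y-direction edge filter and summing onto the x axis are assumptions about the exact filter and projection convention, and the 50-pixel threshold follows the example above.

        import numpy as np

        def keep_frame(frame: np.ndarray, threshold: float = 50.0) -> bool:
            img = frame.astype(np.float32)
            edges_y = np.abs(np.diff(img, axis=0))  # edge filtering in the y direction
            projection = edges_y.sum(axis=0)        # project to a 1-D signal along x
            # Keep only frames whose projected maximum reaches the threshold.
            return float(projection.max()) >= threshold

        # Frames for which keep_frame(...) is False are "no object to be
        # measured" frames and are deleted before ranging.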
  • Step S504: Select a point located on the object to be measured in the first image as a reference point; for example, as shown in FIG. 8, select any point of the object to be measured within the object area as the reference point 801. The reference point 801 is preferably the geometric center of the object to be measured, or a position where the features of the object are easier to identify.
  • Step S506: Determine the initial parallax according to the first distance; in the second image captured by the second image acquisition device (for example, a camera), determine a region of interest according to the initial parallax and a preset parallax range, and determine the position of the reference point in the region of interest as the first target point.
  • The region of interest is determined according to the initial parallax calculated above and a preset parallax range, where the preset parallax range is used because uncertainty in calculation accuracy may make the initial parallax inaccurate; a redundant numerical range is therefore set to facilitate accurately finding the corresponding target point.
  • Specifically, the region of interest is determined from the object region ROI determined in the first image, the above initial parallax, and the preset parallax range; that is, the region containing the first target point corresponding to the reference point is searched for in the second image.
  • The acquisition time difference between the first image and the second image does not exceed a preset value, for example a time range of 100 ms or less. This ensures the consistency of the two images and avoids failure of the corresponding-point search due to object motion or other causes.
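  • The Python sketch below illustrates one way to carve the region of interest out of the second image from the initial parallax and the preset parallax range; the horizontal-shift convention (rectified cameras displaced along x) and all names here are assumptions.

        # Region of interest in the second image: the object region from the
        # first image, shifted by the initial parallax and padded on both sides
        # by the preset parallax range (the redundant range mentioned above).
        def region_of_interest(obj_rect, initial_parallax, parallax_range, image_w):
            x, y, w, h = obj_rect
            x_lo = max(0, int(x + initial_parallax - parallax_range))
            x_hi = min(image_w, int(x + w + initial_parallax + parallax_range))
            return x_lo, y, x_hi - x_lo, h

        print(region_of_interest((120, 60, 40, 30), 16, 5, 640))  # (131, 60, 50, 30)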
  • Step S508: Determine a second target point in the region of interest based on the first target point, and use the position of the second target point to determine the actual parallax distance between the first image acquisition device and the second image acquisition device.
  • the second target point is a point in the region of interest that matches the image of the first target point.
  • Select any point of the object to be measured within the object area as the reference point 801; according to the first distance, the initial parallax distance d' and the preset parallax range, determine the binocular search area 802 for the reference point 801, and search the search area 802 for the first target point A corresponding to the reference point 801. Because the estimated parallax range is inexact, the first target point A is not necessarily the point that actually corresponds to the reference point, and a small positional error may remain. An accurate second target point is therefore searched for near the first target point; the second target point is the point exactly corresponding to the reference point, and it can be determined by means of image comparison.
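  • As one concrete realization of the image comparison mentioned above, the following Python sketch refines the first target point into the second target point by template matching around the reference point; the use of OpenCV's matchTemplate and the patch size are assumptions, since the disclosure does not prescribe a particular matching method.

        import cv2

        def second_target_point(first_img, ref_xy, second_img, roi_rect, patch=11):
            rx, ry = ref_xy
            r = patch // 2
            template = first_img[ry - r:ry + r + 1, rx - r:rx + r + 1]  # around reference point 801
            x, y, w, h = roi_rect
            search = second_img[y:y + h, x:x + w]                       # search area 802
            scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, best = cv2.minMaxLoc(scores)                       # best-matching location
            return x + best[0] + r, y + best[1] + r                     # back to image coordinates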
  • Step S510: Calculate the depth information of the object to be measured according to the actual parallax distance.
  • Calculating the binocular distance D of the object to be measured in binocular measurement specifically includes: determining the baseline distance b of the first image acquisition device and the second image acquisition device by measurement or other means, and calculating the binocular distance of the object to be measured according to the baseline distance b, the actual parallax distance d, and the focal length f, wherein the binocular distance satisfies D = f*b/(b-d), where f is the focal length, b is the baseline distance, d is the actual parallax distance, and D is the depth information.
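  • A minimal numeric sketch (Python) of this relation follows; the calibration values are hypothetical, and the same formula is inverted to seed the initial parallax d' from the monocular first distance.

        # Depth from the actual parallax distance, per D = f*b/(b - d).
        def depth(d, f, b):
            return f * b / (b - d)

        # Inverse of the same relation: initial parallax d' from the monocular
        # first distance, used to center the binocular search area.
        def initial_parallax(D, f, b):
            return b - f * b / D

        f, b = 600.0, 40.0                   # hypothetical focal length and baseline
        d0 = initial_parallax(1000.0, f, b)  # 16.0, seeded from the first distance
        print(depth(d0 + 0.2, f, b))         # ~1008.4 after binocular refinement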
  • The embodiment of the present disclosure proposes to exploit the fact that the camera of the cleaning robot is close to the ground to obtain the target image and the ground image, to analyze the image features to obtain the depth distance of the target object under the first image acquisition device, and then to combine this with the binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the depth distance of the object obtained monocularly and finally obtaining a more accurate object distance.
  • As shown in FIG. 11, an embodiment of the present disclosure provides a distance measuring device applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, including: an acquisition unit 1102, a selection unit 1104, a first determination unit 1106, a second determination unit 1108 and a calculation unit 1110. Each unit executes the method steps described in the above embodiments; the same method steps have the same technical effects and are not repeated here. The details are as follows:
  • The acquisition unit 1102 is configured to determine the first distance of the object to be measured relative to the self-moving robot after the object to be measured is identified in the first image captured by the first image acquisition device, wherein the first image contains at least the object to be measured and the surface on which the object to be measured is located.
  • The acquisition unit 1102 is further configured to acquire a first image of the object to be measured through the first image acquisition device, wherein the first image contains at least an image of the object to be measured and a ground image extending from the first image acquisition device to the object to be measured; determine an object area of the object to be measured in the first image, wherein the object area is the smallest rectangle enclosing the object to be measured; and determine the first distance of the object to be measured based on the lower edge position of the object area and the lower edge position of the first image, wherein the first distance refers to the distance between the object to be measured and the first image acquisition device.
  • the acquisition unit 1102 is further configured to: determine a reference position in the first image as a coordinate origin. Any point in the length of the lower side of the minimum rectangle is selected as a first reference point, and a second reference point is determined on the lower edge of the first image according to the first reference point. According to the position coordinates of the first reference point and the second reference point, the first distance of the object to be measured is calculated.
  • The acquisition unit 1102 is further configured to: acquire a field-of-view image through the first image acquisition device; and perform quality detection on the field-of-view image, delete frames without an object to be measured, and obtain an image including the object to be measured.
  • Specifically, this includes: performing edge filtering in the y direction on the field-of-view image, and projecting the filtered image in the x direction; taking the maximum value of the projected one-dimensional image signal; when the maximum value is less than a preset threshold, judging the field-of-view image to be a frame without an object to be measured and deleting that frame; and when the maximum value is greater than or equal to the preset threshold, judging the field-of-view image to be a frame with an object to be measured and retaining that frame.
  • The selection unit 1104 is configured to select a point located on the object to be measured in the first image as a reference point; for example, as shown in FIG. 8, select any point of the object to be measured within the object area as the reference point 801. The reference point 801 is preferably the geometric center of the object to be measured, or a position where the features of the object are relatively easy to identify.
  • The first determination unit 1106 is configured to determine the initial parallax according to the first distance, and, in the second image captured by the second image acquisition device (e.g., a camera), determine a region of interest according to the initial parallax and a preset parallax range and determine the position of the reference point in the region of interest as the first target point, wherein the acquisition time difference between the first image and the second image does not exceed a preset value.
  • The second determination unit 1108 is configured to determine, based on the first target point, a point in the region of interest that matches the image of the first target point as the second target point, and to determine the actual parallax distance of the first image acquisition device and the second image acquisition device using the position of the second target point.
  • the calculation unit 1110 is configured to calculate the depth information of the object to be measured according to the actual parallax distance.
  • Calculating the binocular distance D of the object to be measured in binocular measurement specifically includes: determining the baseline distance b of the first image acquisition device and the second image acquisition device by measurement or other means, and calculating the binocular distance of the object to be measured according to the baseline distance b, the actual parallax distance d, and the focal length f, wherein the binocular distance satisfies D = f*b/(b-d).
  • The embodiments of the present disclosure provide a distance measuring device that exploits the fact that the camera of the sweeping robot is close to the ground to obtain a target image and a ground image, analyzes the image features to obtain the depth distance of the target object under the first image acquisition device, and combines this with the binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the depth distance of the object obtained monocularly and finally obtaining a more accurate object distance.
  • Embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer program instructions, the computer program instructions implementing any of the method steps described above when invoked and executed by a processor.
  • An embodiment of the present disclosure provides a robot, including a processor and a memory, where the memory stores computer program instructions that can be executed by the processor, and when the processor executes the computer program instructions, any one of the foregoing embodiments is implemented method steps.
  • As shown in FIG. 12, the robot may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1201, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1202 or a program loaded from a storage device 1208 into a random access memory (RAM) 1203. The RAM 1203 also stores various programs and data necessary for the operation of the electronic robot 1200.
  • the processing device 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204.
  • An input/output (I/O) interface 1205 is also connected to bus 1204 .
  • The following devices may be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; output devices 1207 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices 1208 including, for example, a hard disk; and a communication device 1209.
  • The communication device 1209 may allow the electronic robot to communicate wirelessly or by wire with other robots to exchange data. While FIG. 12 shows an electronic robot having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a robotic software program product comprising a computer program carried on a readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 1209, or from the storage device 1208, or from the ROM 1202.
  • When the computer program is executed by the processing apparatus 1201, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • The computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the above-mentioned computer-readable medium may be included in the above-mentioned robot; or may exist alone without being assembled into the robot.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.


Abstract

A distance measurement method and apparatus, a robot, and a storage medium. The distance measurement method includes: after an object to be measured is recognized in a first image captured by a first image acquisition device, determining a first distance of the object to be measured relative to a self-moving robot (502); selecting a reference point in the first image (504); determining an initial parallax according to the first distance; determining a region of interest according to the initial parallax and a preset parallax range, and determining a second target point in the region of interest (506); determining an actual parallax distance using the position of the second target point (508); and calculating depth information of the object to be measured according to the actual parallax distance (510). The method exploits the fact that the camera of a sweeping robot is close to the ground to obtain a target image and a ground image; by analyzing image features, the depth distance of the target object under the first image acquisition device is obtained, and this is combined with a binocular distance calculation to accurately obtain the target position, thereby correcting the depth distance obtained monocularly and finally yielding a more accurate object distance.

Description

Distance measurement method and apparatus, robot, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202010887031.9, filed on August 28, 2020, the entire disclosure of which is incorporated herein by reference as part of this application.
TECHNICAL FIELD
The present disclosure relates to the technical field of distance measurement for sweeping robots, and in particular to a distance measurement method and apparatus, a robot, and a storage medium.
BACKGROUND
With the development of artificial intelligence, a wide variety of intelligent robots have emerged, such as sweeping robots, mopping robots, vacuum cleaners, and lawn mowers. During operation, these cleaning robots can automatically recognize surrounding obstacles and perform obstacle avoidance operations on them; they not only free up labor and save labor costs, but also improve cleaning efficiency.
In the process of recognizing obstacles, existing cleaning robots often scan their surroundings continuously with lidar and use the reflected signals to determine whether obstacles are present, performing obstacle avoidance when an obstacle exists. However, lidar is relatively expensive and bulky, which poses many drawbacks for miniaturized sweeping robots. There are also sweeping robots that use cameras to recognize obstacles, but these suffer from complex algorithms and low calculation accuracy.
SUMMARY
In view of this, embodiments of the present disclosure provide a distance measurement method and apparatus, a robot, and a storage medium, so that the robot can accurately calculate the target distance of an obstacle.
本公开实施例提供一种测距方法,应用于配置有第一图像采集装置和第二图像采集装置的自移动机器人,包括:在通过所述第一图像采集装置采集到的第一图像中识别出待测物体后,确定待测物体相对于所述自移动机器人的第一距离;其中,所述第一图像中至少包含所述待测物体及所述待测物体所在的表面;在所述第一图像中选取位于所述待测物体上的点作为参考点;根据所述第一距离确定初始视差;在利用所述第二图像采集装置采集到的第二图像中,根据所述初始视差及预设的视差范围确定感兴趣区域,并在所述感兴趣区域中确定所述参考点的位置作为第一目标点;基于所述第一目标点,在所述感兴趣区域中确定第二目标点;利用所述第二目标点的位置确定所述第一图像采集装置和第二图像采集装置的实际视差距离;根据所述实际视差距离计算所述待测物体的深度信息。
可选地,所述第一图像和第二图像的采集时间差不超过预设值。
可选地,所述基于所述第一目标点,在所述感兴趣区域中确定第二目标点,具体包括:基于所述第一目标点,在所述感兴趣区域中确定与所述第一目标点图像匹配的点作为第二 目标点。可选地,所述通过所述第一图像采集装置采集到的第一图像,确定待测物体相对于所述自移动机器人的第一距离,其中,所述第一图像中至少包含所述待测物体及所述待测物体所在的表面包括:通过所述第一图像采集装置获取待测物体的第一图像,其中,所述第一图像中至少包含所述待测物体图像及从所述第一图像采集装置到所述待测物体的地面图像;在所述第一图像中确定所述待测物体的物体区域,其中,所述物体区域为包括所述待测物体在内的最小矩形;基于所述物体区域的下边缘位置与所述第一图像的下边缘位置,确定所述待测物体的第一距离,所述第一距离是指基于所述第一图像采集装置确定的所述待测物体与所述第一图像采集装置的距离。
可选地,所述基于所述物体区域的下边缘位置与所述第一图像的下边缘位置,确定所述待测物体的第一距离,包括:在所述第一图像中确定一基准位置作为坐标原点;在所述最小矩形的下边长中选取任意一点作为第一参考点,在所述图像的下边缘根据所述第一参考点确定第二参考点;根据所述第一参考点以及所述第二参考点的位置坐标,计算所述待测物体的第一距离。
Optionally, calculating the depth information of the object according to the actual disparity distance includes: determining the baseline distance of the first image acquisition device and the second image acquisition device, and calculating the depth information of the object from the baseline distance, the actual disparity distance, and the focal length, where the depth information satisfies the relation D = f*b/(b-d), in which f is the focal length, b is the baseline distance, d is the actual disparity distance, and D is the depth information.
Optionally, the method further includes: when the elevation angle of the optical axis of the first image acquisition device is θ, the distance between the object to be measured and the front edge of the self-moving robot satisfies the relation Z = D*cosθ - s, where θ is the elevation angle of the optical axis, s is the distance from the first image acquisition device to the front edge of the self-moving robot, D is the depth information, and Z is the distance between the object and the front edge of the self-moving robot.
Optionally, acquiring the first image of the object to be measured through the first image acquisition device includes: acquiring a field-of-view image through the first image acquisition device; and performing quality detection on the field-of-view image, deleting frames without an object to be measured, and obtaining images containing the object.
Optionally, performing quality detection on the field-of-view image, deleting frames without an object to be measured, and obtaining images containing the object includes: applying edge filtering in the y direction to the field-of-view image and projecting the filtered image along the x direction; taking the maximum of the projected one-dimensional image signal; when the maximum is less than a preset threshold, judging the field-of-view image to be a frame without an object to be measured and deleting that frame; and when the maximum is greater than or equal to the preset threshold, judging the field-of-view image to be a frame containing an object to be measured and keeping that frame.
An embodiment of the present disclosure provides a ranging apparatus applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, including: an acquisition unit configured to determine, after an object to be measured is recognized in a first image captured by the first image acquisition device, a first distance of the object relative to the self-moving robot, wherein the first image contains at least the object and the surface on which it rests; a selection unit configured to select a point located on the object in the first image as a reference point; a first determination unit configured to determine an initial disparity according to the first distance, and, in a second image captured by the second image acquisition device, determine a region of interest according to the initial disparity and a preset disparity range and determine the position of the reference point in the region of interest as a first target point; a second determination unit configured to determine a second target point in the region of interest based on the first target point, and to determine the actual disparity distance of the first and second image acquisition devices using the position of the second target point; and a calculation unit configured to calculate depth information of the object according to the actual disparity distance.
Optionally, the acquisition unit is further configured to acquire the first image of the object through the first image acquisition device, wherein the first image contains at least an image of the object and an image of the ground from the first image acquisition device to the object; determine the object region of the object in the first image, wherein the object region is the smallest rectangle enclosing the object; and determine the first distance of the object based on the position of the lower edge of the object region and the position of the lower edge of the first image, the first distance being the distance between the object and the first image acquisition device as determined by the first image acquisition device.
Optionally, the acquisition unit is further configured to determine a reference position in the first image as the coordinate origin; select an arbitrary point on the lower side of the smallest rectangle as a first reference point and determine a second reference point on the lower edge of the image according to the first reference point; and calculate the first distance of the object from the position coordinates of the first and second reference points.
An embodiment of the present disclosure provides a robot including a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, the method steps of any of the above are implemented.
An embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer program instructions that, when invoked and executed by a processor, implement the method steps of any of the above.
Compared with the prior art, the present invention has at least the following technical effects:
Embodiments of the present disclosure provide a ranging method, apparatus, robot, and storage medium. The ranging method exploits the fact that the camera of a sweeping robot sits close to the ground: it obtains an image of the target object and an image of the ground, derives the depth distance of the target object under the first image acquisition device by analyzing image features, and then combines this with a binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the monocularly obtained object depth distance and finally obtaining a more accurate object distance.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a perspective view of the structure of a sweeping robot provided by an embodiment of the present disclosure;
FIG. 3 is a top view of the structure of a sweeping robot provided by an embodiment of the present disclosure;
FIG. 4 is a bottom view of the structure of a sweeping robot provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a sweeping-robot ranging method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an image acquired by a sweeping robot provided by an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of a monocular ranging method for a sweeping robot provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of region search in a binocular ranging method for a sweeping robot provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the geometry of a binocular ranging calculation method for a sweeping robot provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the geometry of a binocular ranging calculation method for a sweeping robot provided by another embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a ranging apparatus of a cleaning robot provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of the electronic structure of a robot provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be understood that although the terms first, second, third, and so on may be used in the embodiments of the present disclosure to describe ..., these ... should not be limited to those terms; the terms are only used to distinguish ... from one another. For example, without departing from the scope of the embodiments of the present disclosure, a first ... could also be called a second ..., and similarly a second ... could be called a first ....
An embodiment of the present disclosure provides a possible application scenario that includes an automatic cleaning device 100, such as a sweeping robot, mopping robot, vacuum cleaner, or lawn mower. In this embodiment, as shown in FIG. 1, a household sweeping robot is taken as an example. During operation, the sweeping robot acquires the forward field-of-view image in real time through a camera at its front end and, based on analysis of that image, performs obstacle avoidance or other operations; for example, upon recognizing an obstacle 200, it judges the obstacle's category by searching and comparing against a stored database and executes different strategies for different categories. In this embodiment, the robot may be provided with a touch-sensitive display, or be controlled by a mobile terminal, to receive operation instructions input by the user. The sweeping robot may be provided with various sensors, such as a bumper, cliff sensors, ultrasonic sensors, infrared sensors, a magnetometer, an accelerometer, a gyroscope, and an odometer, and may also be provided with wireless communication modules such as a WIFI module or a Bluetooth module for connecting to an intelligent terminal or a server and receiving operation instructions transmitted therefrom.
As shown in FIG. 2, the automatic cleaning device 100 can travel over the ground through various combinations of movement relative to the following three mutually perpendicular axes defined by the main body 110: the fore-aft axis X, the lateral axis Y, and the central vertical axis Z. The forward drive direction along the fore-aft axis X is denoted "forward", and the rearward drive direction along the fore-aft axis X is denoted "backward". The lateral axis Y extends substantially between the right and left wheels of the robot along the axis defined by the center points of the drive wheel modules 141.
The automatic cleaning device 100 can rotate about the Y axis: it "pitches up" when its forward portion tilts upward and its rearward portion tilts downward, and "pitches down" when its forward portion tilts downward and its rearward portion tilts upward. In addition, the robot 100 can rotate about the Z axis: in the forward direction of the automatic cleaning device 100, tilting to the right of the X axis is a "right turn", and tilting to the left of the X axis is a "left turn".
As shown in FIG. 3, the automatic cleaning device 100 includes a machine body 110, a sensing system 120, a control system, a drive system 140, a cleaning system, an energy system, and a human-machine interaction system 180.
The machine body 110 includes a forward portion 111 and a rearward portion 112 and has an approximately circular shape (circular both front and rear); it may also have other shapes, including but not limited to an approximately D-shaped form with a straight front and rounded rear, or a rectangular or square form front and rear.
As shown in FIG. 3, the sensing system 120 includes a position determining device 121 on the machine body 110, collision sensors and proximity sensors arranged on the bumper 122 of the forward portion 111 of the machine body 110, cliff sensors arranged on the lower part of the machine body, and sensing devices such as a magnetometer, an accelerometer, a gyroscope, and an odometer (ODO) arranged inside the machine body, which provide the control system 130 with various position and motion-state information of the machine. The position determining device 121 includes but is not limited to a camera and a laser distance sensor (LDS).
As shown in FIG. 3, the forward portion 111 of the machine body 110 may carry a bumper 122. During cleaning, when the drive wheel modules 141 propel the robot across the floor, the bumper 122 detects one or more events in the travel path of the automatic cleaning device 100 via a sensor system provided on it, such as infrared sensors. In response to the events detected by the bumper 122, such as obstacles or walls, the automatic cleaning device 100 can control the drive wheel modules 141 to respond, for example by moving away from the obstacle.
The control system 130 is arranged on a circuit board inside the machine body 110 and includes a computing processor, such as a central processing unit or an application processor, in communication with non-transitory memory such as a hard disk, flash memory, or random-access memory. Using a positioning algorithm such as simultaneous localization and mapping (SLAM), the application processor draws a real-time map of the environment where the robot is located from the obstacle information fed back by the laser ranging device. Combining this with the distance and speed information fed back by the sensors on the bumper 122, the cliff sensors, the magnetometer, the accelerometer, the gyroscope, the odometer, and other sensing devices, it comprehensively judges the sweeping robot's current working state, position, and pose, such as crossing a threshold, climbing onto a carpet, standing at a cliff, being stuck above or below, having a full dust box, or being picked up, and it also gives specific next-step action strategies for different situations, so that the robot's work better meets the owner's requirements and provides a better user experience.
As shown in FIG. 4, the drive system 140 can steer the robot 100 across the floor based on drive commands carrying distance and angle information (for example, x, y, and θ components). The drive system 140 includes drive wheel modules 141; a drive wheel module 141 can control the left wheel and the right wheel simultaneously, and to control the machine's motion more precisely, the drive wheel modules 141 preferably include a left drive wheel module and a right drive wheel module, respectively, opposed along the lateral axis defined by the main body 110. So that the robot can move more stably on the floor, or with greater mobility, the robot may include one or more driven wheels 142, including but not limited to casters. A drive wheel module includes a travel wheel, a drive motor, and a control circuit controlling the drive motor, and may also connect to a circuit measuring the drive current and to an odometer. The drive wheel modules 141 may be detachably connected to the main body 110 for easy disassembly and maintenance. A drive wheel may have a biased drop-type suspension system, movably fastened, for example rotatably attached, to the robot main body 110 and receiving a spring bias directed downward and away from the robot main body 110. The spring bias allows the drive wheel to maintain contact and traction with the floor with a certain landing force, while the cleaning elements of the automatic cleaning device 100 also contact the floor 10 with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. As a dry cleaning system, the main cleaning function comes from the sweeping system 151 formed by the roller brush, the dust box, the fan, the air outlet, and the connecting parts between the four. The roller brush, which has some interference with the floor, sweeps up the garbage on the floor and carries it to the front of the suction opening between the roller brush and the dust box, where it is sucked into the dust box by the suction airflow generated by the fan and passing through the dust box. The dry cleaning system may also include a side brush 152 with a rotating shaft at an angle relative to the floor, for moving debris into the roller brush area of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal hydride battery or a lithium battery. The rechargeable battery may be connected with a charging control circuit, a battery pack charging temperature detection circuit, and a battery undervoltage monitoring circuit, which in turn are connected with the microcontroller control circuit. The main unit is charged by connecting to the charging dock through charging electrodes arranged on the side or bottom of the body. If dust adheres to the exposed charging electrodes, the cumulative effect of charge during charging may melt and deform the plastic body around the electrodes, and may even deform the electrodes themselves so that charging can no longer proceed normally.
The human-machine interaction system 180 includes buttons on the host panel for the user to select functions; it may also include a display screen and/or indicator lights and/or a speaker that show the user the current machine state or function options; and it may also include a mobile phone client program. For a path-navigation type automatic cleaning device, the mobile phone client can show the user a map of the environment where the device is located, as well as the machine's position, and can provide the user with richer and more user-friendly function items.
An embodiment of the present disclosure provides a ranging method that exploits the fact that the camera of a sweeping robot sits close to the ground: it obtains an image of the target and an image of the ground, derives the depth distance of the target object under the first image acquisition device by analyzing image features, and then combines this with a binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the monocularly obtained object depth distance and finally obtaining a more accurate object distance.
As shown in FIG. 5, a ranging method applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device specifically includes the following steps:
Step S502: after an object to be measured is recognized in the first image captured by the first image acquisition device, determining a first distance (referred to for short as the monocular distance) of the object to be measured relative to the self-moving robot, wherein the first image contains at least the object to be measured and the surface on which it rests.
As shown in FIG. 1, the first image acquisition device (for example, a camera) is arranged at the front end of the sweeping robot and acquires, in real time, the forward field-of-view image while the robot travels. It sends the forward field-of-view image to the control system, which, based on computation performed on the robot itself or remotely, produces an analysis of the field-of-view image and then controls the drive system to perform obstacle avoidance or other operations.
The object to be measured is any obstacle encountered while the sweeping robot travels. The sweeping robot may classify relevant categories of obstacles in advance and store them in its own or a remote storage system; when it captures an obstacle image during operation, it can call the obstacle images pre-stored in the storage system to judge the current obstacle's category and perform the corresponding operation according to the category information. Of course, in the above step, recognizing the object to be measured may also be understood as merely detecting the existence of an obstacle encountered while traveling, without identifying its category.
As shown in FIG. 6, the image obtained by the first image acquisition device at the front of the sweeping robot includes an image of the object to be measured 601 on the ground, together with images of other scenery in the forward field of view starting from the first image acquisition device. Because the first image acquisition device sits at the front of the sweeping robot at a low height above the ground, the field-of-view image includes the ground image from the first image acquisition device to the object to be measured, at the position indicated by the scale in FIG. 6. This ground image is used to calculate the depth distance from the object to be measured 601 to the first image acquisition device (for example, the camera).
Determining the first distance of the object to be measured relative to the self-moving robot includes: determining the first distance based on the position of the lower edge of the object region and the position of the lower edge of the first image. As an example, as shown in FIG. 6, the object to be measured 601 on the ground is obtained through the first image acquisition device at the front of the sweeping robot, and a corresponding smallest rectangle 602 is constructed for it; the smallest rectangle 602 just envelops the object 601. Choosing the smallest rectangle as the enclosing region makes it convenient to select any point on its lower edge for the first-distance calculation.
In this way the object region ROI (Region Of Interest) of the object to be measured is determined in the first image; as an example, the object region ROI is the smallest rectangle enclosing the object. Of course, the object region may also be a figure other than the smallest rectangle, such as a circumscribed circle, an ellipse, or any particular shape, or it may be the lower edge line where the object contacts the ground.
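As a non-limiting sketch (the embodiment does not prescribe how the detector output is represented), the smallest enclosing rectangle can be derived from a hypothetical binary detection mask using OpenCV; both the mask and the helper name object_region are assumptions introduced here for illustration:

    import cv2
    import numpy as np

    def object_region(mask):
        """Smallest axis-aligned rectangle (x, y, w, h) enclosing the
        non-zero pixels of a binary detection mask produced by an
        upstream detector (not specified in this embodiment)."""
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None                       # frame contains no object
        pts = np.column_stack((xs, ys)).astype(np.int32)
        return cv2.boundingRect(pts)          # (x, y, w, h)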
As an optional implementation, the method of determining the first distance of the object to be measured based on the position of the lower edge of the object region and the position of the lower edge of the first image includes the following sub-steps, as shown in FIG. 7:
Step S5021: determining a reference position in the first image as the coordinate origin.
As shown in FIG. 6, the lower-left corner of the first image may be chosen as the coordinate origin, with the horizontal direction as x, the vertical direction as y, and the direction perpendicular to the image as z. Of course, the position of the image coordinate origin is not unique and may be chosen freely according to the needs of the data analysis.
Step S5022: as an implementation, selecting an arbitrary point on the lower side of the smallest rectangle as the first reference point 603, and determining the second reference point 604 on the lower edge of the first image according to the first reference point.
The first reference point 603 is any point selected on the lower side of the smallest rectangle; if the object region is a circle, the lowest point is selected as the first reference point. The second reference point 604 is the point where a vertical line extended downward from the first reference point 603 intersects the lower edge of the first image, that is, the lowest point of the first image that represents the ground. From the positional relationship between the first reference point 603 and the second reference point 604, the distance from the object to be measured to the camera can be calculated; in other words, the ground distance is used to obtain the distance from the object to the camera.
Step S5023: calculating the first distance of the object to be measured from the position coordinates of the first reference point and the second reference point.
For example, if the coordinates of the first reference point 603 are (x1, y1) and those of the second reference point 604 are (x2, y2), the distance between the two points can be calculated. As another example, by analyzing pixel relationships, the pixel positions of the first reference point 603 and the second reference point 604 are obtained, from which their pixel distance is estimated; then, from the actual height of the object to be measured and the pixel relationship, the actual distance between the first reference point 603 and the second reference point 604 is determined.
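For illustration only, the sketch below reduces this step to a linear ground-plane calibration: the pixel gap between the two reference points is scaled by an assumed mm-per-pixel factor. The factor and the helper name are hypothetical; the embodiment leaves the exact pixel-to-distance mapping open:

    def first_distance(p1, p2, mm_per_pixel):
        """Monocular first-distance sketch. p1 = (x1, y1) is the first
        reference point on the lower edge of the object rectangle,
        p2 = (x2, y2) the second reference point on the lower edge of
        the image; mm_per_pixel is an assumed ground calibration."""
        gap = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
        return gap * mm_per_pixel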
As an optional implementation, the method of acquiring the image of the object to be measured through the first image acquisition device includes: acquiring a field-of-view image through the first image acquisition device; and performing quality detection on the field-of-view image, deleting frames without an object to be measured, to obtain images containing the object. Specifically, this includes, for example: applying edge filtering in the y direction to the field-of-view image and projecting the filtered image along the x direction; taking the maximum of the projected one-dimensional image signal (for example, if the projection yields a one-dimensional extent of 80-100 pixels according to the position parameters, the value 100 pixels is taken); when the maximum is less than a preset threshold, judging the field-of-view image to be a frame without an object to be measured and deleting that frame; and when the maximum is greater than or equal to the preset threshold, judging it to be a frame containing an object and keeping that frame. For example, with the threshold set to 50 pixels, a frame whose projection exceeds 50 pixels is considered a valid frame, otherwise an invalid frame. A valid frame proceeds to the subsequent object-distance judgment steps; otherwise the corresponding image is deleted.
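A minimal sketch of this frame-validity check, under one plausible reading of the filtering and projection steps (a y-direction Sobel filter, then collapsing the response along x into a one-dimensional signal); the 50-pixel threshold mirrors the example above:

    import cv2
    import numpy as np

    def is_valid_frame(gray, threshold=50.0):
        """Keep a frame only if the maximum of the x-projected
        y-direction edge response reaches the preset threshold."""
        edges = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # y-direction edges
        signal = np.abs(edges).sum(axis=1)                  # project along x
        return float(signal.max()) >= threshold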
Step S504: selecting a point located on the object to be measured in the first image as the reference point. For example, as shown in FIG. 8, any point of the object to be measured is selected in the object region as the reference point 801; the reference point 801 is preferably the geometric center of the object, or a position where the object's features are relatively easy to recognize.
Step S506: determining an initial disparity according to the first distance; in the second image captured by the second image acquisition device (for example, a camera), determining a region of interest according to the initial disparity and a preset disparity range, and determining the position of the reference point in the region of interest as the first target point.
The initial disparity is the disparity determined, once the first distance is known, from a geometric relationship analogous to binocular ranging. Specifically, the initial disparity can be computed from the first distance using the formula D = f*b/(b-d), where f is the focal length of the first image acquisition device, b is the baseline distance, d is the disparity distance, and D is the first distance.
In the second image captured by the second image acquisition device (for example, a camera), the region of interest is determined from the initial disparity computed above together with a preset disparity range. The preset disparity range is a redundant numerical margin introduced because uncertainty in computational accuracy may make the initial disparity inaccurate; the margin helps the corresponding target point to be found reliably. The region of interest, that is, the region of the second image searched for the first target point corresponding to the reference point, is determined from the object region ROI found in the first image together with the initial disparity and the preset disparity range. A minimal sketch of this search geometry follows.
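Assuming the relation D = f*b/(b-d) given above, and writing delta for the preset disparity range (its value is not fixed by the embodiment), the search geometry reduces to:

    def initial_disparity(D, f, b):
        """Invert the text's relation D = f*b/(b - d) to get the
        disparity implied by the monocular first distance D."""
        return b - f * b / D

    def disparity_window(x_ref, d0, delta):
        """Horizontal search interval in the second image: the
        reference point's column shifted by the initial disparity d0,
        widened by the preset range delta. The sign of the shift
        depends on the camera arrangement."""
        return x_ref + d0 - delta, x_ref + d0 + delta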
As an optional implementation, the difference between the capture times of the first image and the second image does not exceed a preset value. As a constraint, when the first and second image acquisition devices capture the first and second images, the time difference does not exceed a preset value, for example 100 ms or a shorter millisecond-scale range; this best guarantees the consistency of the two images and avoids corresponding-point search failures caused by object motion or other reasons.
Step S508: determining a second target point in the region of interest based on the first target point, and determining the actual disparity distance of the first and second image acquisition devices using the position of the second target point. Specifically, the second target point is the point in the region of interest whose image matches that of the first target point.
As shown in FIG. 8: any point of the object to be measured is selected in the object region as the reference point 801; a search region 802 for the reference point 801 under binocular measurement is determined from the first distance, the initial disparity distance d', and the preset disparity range; and the first target point A corresponding to the reference point 801 is searched for in the search region 802. Because the estimated disparity range is inexact, the first target point A is not necessarily the target point that truly corresponds to the reference point; a small positional error may exist. In that case, the accurate second target point can be determined by searching near the first target point A, with A as the center; the second target point is the target point that accurately corresponds to the reference point, and it can be found by matching through image comparison.
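The embodiment calls only for matching "through image comparison"; normalized cross-correlation template matching is one common, assumed choice for this refinement, sketched below (the helper name and ROI convention are illustrative):

    import cv2

    def refine_target(second_img, template, roi):
        """Search roi = (x, y, w, h) of the second image for the patch
        best matching the template cut around the reference point in
        the first image; the best match is the second target point."""
        x, y, w, h = roi
        region = second_img[y:y + h, x:x + w]
        scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)
        return x + best[0], y + best[1]       # full-image coordinates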
Step S510: calculating the depth information of the object to be measured according to the actual disparity distance.
Optionally, as shown in FIG. 9, the binocular distance D of the object to be measured under binocular measurement is calculated from the geometric relationship of the binocular measurement. Specifically: the baseline distance b between the first image acquisition device and the second image acquisition device under binocular measurement is determined by measurement or other means, and the binocular distance of the object is calculated from the baseline distance b, the actual disparity distance d, and the focal length f, where the binocular distance satisfies D = f*b/(b-d), in which f is the focal length, b is the baseline distance, d is the actual disparity distance, and D is the binocular distance.
Optionally, as another implementation, as shown in FIG. 10, when the optical axis of the first image acquisition device has an elevation angle, calculating the binocular distance D of the object under binocular measurement includes: when the elevation angle of the optical axis of the first image acquisition device is θ, the distance between the object to be measured and the front edge of the sweeping robot satisfies Z = D*cosθ - s, where θ is the elevation angle of the optical axis, s is the distance from the first image acquisition device to the front edge of the sweeping robot, D is the binocular distance, and Z is the distance between the object and the front edge of the sweeping robot.
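Putting the two relations together, a small worked sketch (the numeric values are illustrative only and not taken from the embodiment):

    import math

    def depth(f, b, d):
        """Relation used above: D = f*b/(b - d)."""
        return f * b / (b - d)

    def front_edge_distance(D, theta, s):
        """Z = D*cos(theta) - s for an optical axis elevated by theta,
        with s the camera-to-front-edge offset."""
        return D * math.cos(theta) - s

    D = depth(f=2.0, b=40.0, d=4.0)                  # 2.0*40/36, about 2.22
    Z = front_edge_distance(D, math.radians(10.0), 0.5)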
The embodiment of the present disclosure exploits the fact that the camera of a sweeping robot sits close to the ground: it obtains the target image and the ground image, derives the depth distance of the target object under the first image acquisition device by analyzing image features, and then combines this with a binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the monocularly obtained object depth distance and finally obtaining a more accurate object distance.
As shown in FIG. 11, an embodiment of the present disclosure provides a ranging apparatus applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, including an acquisition unit 1102, a selection unit 1104, a first determination unit 1106, a second determination unit 1108, and a calculation unit 1110. The units perform the method steps described in the embodiments above; identical method steps have identical technical effects and are not repeated here. Specifically:
The acquisition unit 1102 is configured to determine, after an object to be measured is recognized in the first image captured by the first image acquisition device, a first distance of the object to be measured relative to the self-moving robot, wherein the first image contains at least the object to be measured and the surface on which it rests.
As an optional implementation, the acquisition unit 1102 is further configured to: acquire the first image of the object through the first image acquisition device, wherein the first image contains at least an image of the object and an image of the ground from the first image acquisition device to the object; determine the object region of the object in the first image, wherein the object region is the smallest rectangle enclosing the object; and determine the first distance of the object based on the position of the lower edge of the object region and the position of the lower edge of the first image, the first distance being the distance between the object and the first image acquisition device as determined by the first image acquisition device.
As an optional implementation, the acquisition unit 1102 is further configured to: determine a reference position in the first image as the coordinate origin; select an arbitrary point on the lower side of the smallest rectangle as the first reference point and determine the second reference point on the lower edge of the first image according to the first reference point; and calculate the first distance of the object from the position coordinates of the first and second reference points. As an optional implementation, the acquisition unit 1102 is further configured to: acquire a field-of-view image through the first image acquisition device, perform quality detection on it, and delete frames without an object to be measured to obtain images containing the object. Specifically, this includes, for example: applying edge filtering in the y direction to the field-of-view image and projecting the filtered image along the x direction; taking the maximum of the projected one-dimensional image signal; when the maximum is less than a preset threshold, judging the field-of-view image to be a frame without an object and deleting that frame; and when the maximum is greater than or equal to the preset threshold, judging it to be a frame containing an object and keeping that frame.
The selection unit 1104 is configured to select a point located on the object to be measured in the first image as the reference point; for example, as shown in FIG. 8, any point of the object is selected in the object region as the reference point 801, preferably the object's geometric center or a position where the object's features are easy to recognize.
The first determination unit 1106 is configured to determine the initial disparity according to the first distance; and, in the second image captured by the second image acquisition device (for example, a camera), determine the region of interest according to the initial disparity and the preset disparity range, and determine the position of the reference point in the region of interest as the first target point, where the difference between the capture times of the first image and the second image does not exceed a preset value.
The second determination unit 1108 is configured to determine, based on the first target point, the point in the region of interest whose image matches that of the first target point as the second target point, and to determine the actual disparity distance of the first and second image acquisition devices using the position of the second target point.
The calculation unit 1110 is configured to calculate the depth information of the object to be measured according to the actual disparity distance.
Optionally, as shown in FIG. 9, the binocular distance D of the object under binocular measurement is calculated from the geometric relationship of the binocular measurement. Specifically: the baseline distance b between the first image acquisition device and the second image acquisition device under binocular measurement is determined by measurement or other means, and the binocular distance of the object is calculated from the baseline distance b, the actual disparity distance d, and the focal length f, where the binocular distance satisfies D = f*b/(b-d), in which f is the focal length, b is the baseline distance, d is the actual disparity distance, and D is the binocular distance.
Optionally, as another implementation, as shown in FIG. 10, when the optical axis of the first image acquisition device has an elevation angle, calculating the binocular distance D of the object under binocular measurement includes: when the elevation angle of the optical axis of the first image acquisition device is θ, the distance between the object to be measured and the front edge of the sweeping robot satisfies Z = D*cosθ - s, where θ is the elevation angle of the optical axis, s is the distance from the first image acquisition device to the front edge of the sweeping robot, D is the binocular distance, and Z is the distance between the object and the front edge of the sweeping robot.
An embodiment of the present disclosure provides a ranging apparatus that exploits the fact that the camera of a sweeping robot sits close to the ground: it obtains the target image and the ground image, derives the depth distance of the target object under the first image acquisition device by analyzing image features, and then combines this with a binocular ranging calculation to accurately obtain the binocular target position, thereby correcting the monocularly obtained object depth distance and finally obtaining a more accurate object distance.
An embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer program instructions that, when invoked and executed by a processor, implement the method steps of any of the above.
An embodiment of the present disclosure provides a robot including a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are implemented.
As shown in FIG. 12, the robot may include a processing device (for example, a central processing unit or a graphics processor) 1201 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1202 or a program loaded from a storage device 1208 into a random-access memory (RAM) 1203. The RAM 1203 also stores various programs and data required for the operation of the electronic robot 1200. The processing device 1201, the ROM 1202, and the RAM 1203 are connected to one another via a bus 1204, to which an input/output (I/O) interface 1205 is also connected.
In general, the following devices may be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 1207 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage devices 1208 including, for example, a hard disk; and a communication device 1209. The communication device 1209 may allow the electronic robot to communicate wirelessly or by wire with other robots to exchange data. Although FIG. 12 shows an electronic robot with various devices, it should be understood that implementing or providing all of the devices shown is not required; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a robot software program. For example, an embodiment of the present disclosure includes a robot software program product, which includes a computer program carried on a readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication device 1209, installed from the storage device 1208, or installed from the ROM 1202. When the computer program is executed by the processing device 1201, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be contained in the robot described above, or it may exist separately without being assembled into the robot.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the architectures, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by combinations of dedicated hardware and computer instructions.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement it without creative effort.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure rather than to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure.

Claims (14)

  1. A ranging method applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, comprising:
    after an object to be measured is recognized in a first image captured by the first image acquisition device, determining a first distance of the object to be measured relative to the self-moving robot, wherein the first image contains at least the object to be measured and the surface on which the object rests;
    selecting a point located on the object to be measured in the first image as a reference point;
    determining an initial disparity according to the first distance; in a second image captured by the second image acquisition device, determining a region of interest according to the initial disparity and a preset disparity range, and determining the position of the reference point in the region of interest as a first target point;
    determining a second target point in the region of interest based on the first target point, and determining the actual disparity distance of the first and second image acquisition devices using the position of the second target point;
    calculating depth information of the object to be measured according to the actual disparity distance.
  2. The method according to claim 1, wherein
    the difference between the capture times of the first image and the second image does not exceed a preset value.
  3. The method according to claim 1, wherein
    determining the second target point in the region of interest based on the first target point specifically comprises:
    determining, based on the first target point, a point in the region of interest whose image matches that of the first target point as the second target point.
  4. The method according to claim 1, wherein determining the first distance of the object to be measured relative to the self-moving robot after the object is recognized in the first image captured by the first image acquisition device specifically comprises:
    after the object to be measured is recognized through the first image acquisition device, acquiring the first image of the object, wherein the first image contains at least an image of the object and an image of the ground from the first image acquisition device to the object;
    determining an object region of the object in the first image, wherein the object region is the smallest rectangle enclosing the object;
    determining the first distance of the object based on the position of the lower edge of the object region and the position of the lower edge of the first image, the first distance being the distance between the object and the first image acquisition device as determined by the first image acquisition device.
  5. The method according to claim 4, wherein determining the first distance of the object based on the position of the lower edge of the object region and the position of the lower edge of the first image comprises:
    determining a reference position in the first image as the coordinate origin;
    selecting an arbitrary point on the lower side of the smallest rectangle as a first reference point, and determining a second reference point on the lower edge of the image according to the first reference point;
    calculating the first distance of the object from the position coordinates of the first reference point and the second reference point.
  6. The method according to claim 5, wherein calculating the depth information of the object according to the actual disparity distance comprises:
    determining the baseline distance of the first image acquisition device and the second image acquisition device, and calculating the depth information of the object from the baseline distance, the actual disparity distance, and the focal length, wherein the depth information of the object satisfies the following relation:
    D = f*b/(b-d), where f is the focal length, b is the baseline distance, d is the actual disparity distance, and D is the depth information.
  7. The method according to claim 5, further comprising:
    when the elevation angle of the optical axis of the first image acquisition device is θ, the distance between the object to be measured and the front edge of the self-moving robot satisfies the following relation:
    Z = D*cosθ - s, where θ is the elevation angle of the optical axis, s is the distance from the first image acquisition device to the front edge of the self-moving robot, D is the depth information, and Z is the distance between the object and the front edge of the self-moving robot.
  8. The method according to claim 4, wherein acquiring the first image of the object to be measured through the first image acquisition device comprises:
    acquiring a field-of-view image through the first image acquisition device;
    performing quality detection on the field-of-view image and deleting frames without an object to be measured, to obtain images containing the object.
  9. The method according to claim 8, wherein performing quality detection on the field-of-view image and deleting frames without an object to be measured, to obtain images containing the object, comprises:
    applying edge filtering in the y direction to the field-of-view image, and projecting the filtered image along the x direction;
    taking the maximum of the projected one-dimensional image signal;
    when the maximum is less than a preset threshold, judging the field-of-view image to be a frame without an object to be measured, and deleting that frame;
    when the maximum is greater than or equal to the preset threshold, judging the field-of-view image to be a frame containing an object to be measured, and keeping that frame.
  10. A ranging apparatus applied to a self-moving robot equipped with a first image acquisition device and a second image acquisition device, comprising:
    an acquisition unit configured to determine, after an object to be measured is recognized in a first image captured by the first image acquisition device, a first distance of the object relative to the self-moving robot, wherein the first image contains at least the object and the surface on which the object rests;
    a selection unit configured to select a point located on the object in the first image as a reference point;
    a first determination unit configured to determine an initial disparity according to the first distance, and, in a second image captured by the second image acquisition device, determine a region of interest according to the initial disparity and a preset disparity range and determine the position of the reference point in the region of interest as a first target point; a second determination unit configured to determine a second target point in the region of interest based on the first target point, and to determine the actual disparity distance of the first and second image acquisition devices using the position of the second target point;
    a calculation unit configured to calculate depth information of the object according to the actual disparity distance.
  11. The ranging apparatus according to claim 10, wherein the acquisition unit is further configured to:
    acquire the first image of the object through the first image acquisition device, wherein the first image contains at least an image of the object and an image of the ground from the first image acquisition device to the object;
    determine the object region of the object in the first image, wherein the object region is the smallest rectangle enclosing the object;
    determine the first distance of the object based on the position of the lower edge of the object region and the position of the lower edge of the first image, the first distance being the distance between the object and the first image acquisition device as determined by the first image acquisition device.
  12. The ranging apparatus according to claim 11, wherein the acquisition unit is further configured to:
    determine a reference position in the first image as the coordinate origin;
    select an arbitrary point on the lower side of the smallest rectangle as a first reference point, and determine a second reference point on the lower edge of the image according to the first reference point;
    calculate the first distance of the object from the position coordinates of the first reference point and the second reference point.
  13. A robot comprising a processor and a memory, wherein the memory stores computer program instructions executable by the processor, and when the processor executes the computer program instructions, the method steps of any one of claims 1-9 are implemented.
  14. A non-transitory computer-readable storage medium storing computer program instructions that, when invoked and executed by a processor, implement the method steps of any one of claims 1-9.
PCT/CN2021/085877 2020-08-28 2021-04-08 Ranging method and apparatus, robot, and storage medium WO2022041737A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/023,846 US20240028044A1 (en) 2020-08-28 2021-04-08 Ranging method and apparatus, robot, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010887031.9A 2020-08-28 2020-08-28 Ranging method and apparatus, robot, and storage medium
CN202010887031.9 2020-08-28

Publications (1)

Publication Number Publication Date
WO2022041737A1 true WO2022041737A1 (zh) 2022-03-03

Family

ID=73465403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085877 WO2022041737A1 (zh) 2020-08-28 2021-04-08 一种测距方法、装置、机器人和存储介质

Country Status (3)

Country Link
US (1) US20240028044A1 (zh)
CN (1) CN111990930B (zh)
WO (1) WO2022041737A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115401689A (zh) * 2022-08-01 2022-11-29 北京市商汤科技开发有限公司 Monocular-camera-based distance measurement method and apparatus, and computer storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111990930B (zh) * 2020-08-28 2022-05-20 北京石头创新科技有限公司 Ranging method and apparatus, robot, and storage medium
CN112539704B (zh) * 2020-12-24 2022-03-01 国网山东省电力公司检修公司 Method for measuring the distance between hidden hazards in a transmission line corridor and the conductors
CN114608520B (zh) * 2021-04-29 2023-06-02 北京石头创新科技有限公司 Ranging method and apparatus, robot, and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030209633A1 (en) * 2002-05-10 2003-11-13 Thal German Von Distance measuring using passive visual means
CN103054522A (zh) * 2012-12-31 2013-04-24 河海大学 Cleaning robot system based on visual measurement, and measurement and control method therefor
CN106657600A (zh) * 2016-10-31 2017-05-10 维沃移动通信有限公司 Image processing method and mobile terminal
CN107277367A (zh) * 2017-07-27 2017-10-20 未来科技(襄阳)有限公司 Photographing processing method, apparatus, device, and storage medium
CN107729856A (zh) * 2017-10-26 2018-02-23 海信集团有限公司 Obstacle detection method and apparatus
CN110063694A (zh) * 2019-04-28 2019-07-30 彭春生 Binocular sweeping robot and working method
CN110136186A (zh) * 2019-05-10 2019-08-16 安徽工程大学 Detected-target matching method for target ranging of a mobile robot
CN110232707A (zh) * 2018-03-05 2019-09-13 华为技术有限公司 Ranging method and apparatus
CN110231832A (zh) * 2018-03-05 2019-09-13 北京京东尚科信息技术有限公司 Obstacle avoidance method and apparatus for unmanned aerial vehicle
CN111990930A (zh) * 2020-08-28 2020-11-27 北京石头世纪科技股份有限公司 Ranging method and apparatus, robot, and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012159470A (ja) * 2011-02-02 2012-08-23 Toyota Motor Corp Vehicle image recognition device
JP2013186293A (ja) * 2012-03-08 2013-09-19 Seiko Epson Corp Image generation device and image display method
CN105627932B (zh) * 2015-12-31 2019-07-30 天津远翥科技有限公司 Ranging method and apparatus based on binocular vision
CN105719290B (zh) * 2016-01-20 2019-02-05 天津师范大学 Binocular stereo depth matching method using a temporal vision sensor
JP7025912B2 (ja) * 2017-12-13 2022-02-25 日立Astemo株式会社 In-vehicle environment recognition device
CN111210468B (zh) * 2018-11-22 2023-07-11 中移(杭州)信息技术有限公司 Image depth information acquisition method and apparatus
CN111382591B (zh) * 2018-12-27 2023-09-29 海信集团有限公司 Binocular camera ranging correction method and in-vehicle device
CN110009682B (zh) * 2019-03-29 2022-12-06 北京理工大学 Target recognition and positioning method based on monocular vision
CN110297232A (zh) * 2019-05-24 2019-10-01 合刃科技(深圳)有限公司 Monocular ranging method and apparatus based on computer vision, and electronic device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030209633A1 (en) * 2002-05-10 2003-11-13 Thal German Von Distance measuring using passive visual means
CN103054522A (zh) * 2012-12-31 2013-04-24 河海大学 Cleaning robot system based on visual measurement, and measurement and control method therefor
CN106657600A (zh) * 2016-10-31 2017-05-10 维沃移动通信有限公司 Image processing method and mobile terminal
CN107277367A (zh) * 2017-07-27 2017-10-20 未来科技(襄阳)有限公司 Photographing processing method, apparatus, device, and storage medium
CN107729856A (zh) * 2017-10-26 2018-02-23 海信集团有限公司 Obstacle detection method and apparatus
CN110232707A (zh) * 2018-03-05 2019-09-13 华为技术有限公司 Ranging method and apparatus
CN110231832A (zh) * 2018-03-05 2019-09-13 北京京东尚科信息技术有限公司 Obstacle avoidance method and apparatus for unmanned aerial vehicle
CN110063694A (zh) * 2019-04-28 2019-07-30 彭春生 Binocular sweeping robot and working method
CN110136186A (zh) * 2019-05-10 2019-08-16 安徽工程大学 Detected-target matching method for target ranging of a mobile robot
CN111990930A (zh) * 2020-08-28 2020-11-27 北京石头世纪科技股份有限公司 Ranging method and apparatus, robot, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115401689A (zh) * 2022-08-01 2022-11-29 北京市商汤科技开发有限公司 Monocular-camera-based distance measurement method and apparatus, and computer storage medium
CN115401689B (zh) * 2022-08-01 2024-03-29 北京市商汤科技开发有限公司 Monocular-camera-based distance measurement method and apparatus, and computer storage medium

Also Published As

Publication number Publication date
CN111990930B (zh) 2022-05-20
CN111990930A (zh) 2020-11-27
US20240028044A1 (en) 2024-01-25

Similar Documents

Publication Publication Date Title
WO2022041740A1 (zh) Obstacle detection method and apparatus, self-walking robot, and storage medium
US20230225576A1 (en) Obstacle avoidance method and apparatus for self-walking robot, robot, and storage medium
WO2022041737A1 (zh) Ranging method and apparatus, robot, and storage medium
CN109947109B (zh) Robot working area map construction method and apparatus, robot, and medium
WO2021208530A1 (zh) Robot obstacle avoidance method and apparatus, and storage medium
TWI789625B (zh) Cleaning robot and control method thereof
CN114468898B (zh) Robot voice control method and apparatus, robot, and medium
CN112205937B (zh) Automatic cleaning device control method, apparatus, device, and medium
CN111857153B (zh) Distance detection apparatus and sweeping robot
WO2022227876A1 (zh) Ranging method and apparatus, robot, and storage medium
CN217792839U (zh) Automatic cleaning device
WO2022077945A1 (zh) Obstacle recognition information feedback method and apparatus, robot, and storage medium
CN114879691A (zh) Control method for self-walking robot, storage medium, and self-walking robot
CN113625700B (zh) Self-walking robot control method and apparatus, self-walking robot, and storage medium
AU2023201499A1 (en) Method and apparatus for detecting obstacle, self-moving robot, and storage medium
CN113625700A (zh) Self-walking robot control method and apparatus, self-walking robot, and storage medium
WO2024140195A1 (zh) Line-laser-based obstacle avoidance method and apparatus for self-walking device, device, and medium
CN116942017A (zh) Automatic cleaning device, control method, and storage medium
CN116977858A (zh) Ground recognition method and apparatus, robot, and storage medium
CN118285699A (zh) Cleaning robot and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21859586

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18023846

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21859586

Country of ref document: EP

Kind code of ref document: A1