WO2023045639A1 - Method for determining target object, mobile robot, storage medium, and electronic apparatus - Google Patents

Method for determining target object, mobile robot, storage medium, and electronic apparatus

Info

Publication number
WO2023045639A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
laser
lasers
information
mobile robot
Prior art date
Application number
PCT/CN2022/113312
Other languages
French (fr)
Chinese (zh)
Inventor
俞浩
Original Assignee
追觅创新科技(苏州)有限公司
Priority date
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司
Publication of WO2023045639A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Definitions

  • The present disclosure relates to the field of communications, and in particular to a method for determining a target object, a mobile robot, a storage medium, and an electronic device.
  • With the development of society, more and more households have begun to use sweeping robots. During use, a sweeping robot needs to recognize the area in front of it and judge whether there are obstacles in that area, so that it can avoid the obstacles while moving.
  • Existing sweeping robots with an active obstacle avoidance function rely on dot matrix sensors or line array depth sensors to recognize obstacles. However, a dot matrix or line array sensor must scan an object many times from different angles and combine the scan results from the different angles before the obstacle information can be determined, which is computationally intensive and inaccurate.
  • In addition, the sensing range of dot matrix or line array sensors is relatively small, so the judgment accuracy of the active obstacle avoidance function is limited, the user experience is poor, and small obstacles cannot be cleaned, which increases the probability of the sweeping robot getting stuck or its rolling brush becoming entangled.
  • For the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot, no effective solution has been proposed so far.
  • The purpose of the present disclosure is to provide a method for determining a target object, a mobile robot, a storage medium, and an electronic device, so as to at least solve the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot.
  • According to one aspect of the embodiments of the present disclosure, a method for determining a target object is provided, including: controlling a laser panel of an area array depth sensor to emit multiple groups of first lasers toward a target object located in the traveling direction of a mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities; receiving, through the laser panel, multiple groups of second lasers reflected back from the target object by the multiple groups of first lasers; and determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers.
  • Further, determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers includes: determining, from the multiple groups of first lasers and the multiple groups of second lasers, a first laser and a second laser having the same light intensity, where the first laser carries encoded information and the second laser carries decoded information; and determining the obstacle type corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser.
  • Further, determining the obstacle type corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser includes: determining the time of flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser; determining the three-dimensional information of the target object according to the time of flight; and determining the obstacle type corresponding to the target object according to the three-dimensional information, where the three-dimensional information includes at least one of the following: height information of the target object, length information of the target object, and width information of the target object.
  • Further, determining the three-dimensional information of the target object according to the time of flight includes: determining the three-dimensional coordinates of the target object according to the time of flight; separating, from the three-dimensional coordinates, the coordinate information of the ground on which the mobile robot is located; and determining the three-dimensional information of the target object using the coordinate information of the ground as a reference.
  • Further, determining the obstacle type corresponding to the target object according to the three-dimensional information includes: obtaining a preset correspondence between three-dimensional information and obstacle types; and determining, from the correspondence, the obstacle type corresponding to the three-dimensional information of the target object.
  • Further, after the obstacle type corresponding to the target object is determined according to the multiple groups of first lasers and the multiple groups of second lasers, the method further includes: determining an avoidance strategy corresponding to the obstacle type; and controlling the traveling route of the mobile robot according to the avoidance strategy, so as to control the mobile robot to successfully avoid the target object.
  • According to another aspect of the embodiments of the present disclosure, a mobile robot is provided, including: an area array depth sensor, arranged on the front side of the mobile robot and configured to use a laser panel to emit multiple groups of first lasers toward a target object located in the traveling direction of the mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities;
  • and a processor, arranged inside the mobile robot and connected to the area array depth sensor, or located inside the area array depth sensor, and configured to receive the multiple groups of second lasers sent by the laser panel and to determine the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers, where the multiple groups of second lasers are received by the laser panel after the multiple groups of first lasers are reflected back from the target object.
  • Further, the processor is further configured to determine, from the multiple groups of first lasers and the multiple groups of second lasers, a first laser and a second laser having the same light intensity, where the first laser carries encoded information and the second laser carries decoded information.
  • According to yet another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, in which a computer program is stored, where the computer program is configured to execute, when run, any one of the above methods for determining a target object.
  • According to yet another aspect of the embodiments of the present disclosure, an electronic device is provided, including a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to execute any one of the above methods for determining a target object.
  • Through the present disclosure, while the mobile robot is traveling, the laser panel of the area array depth sensor is controlled to emit multiple groups of first lasers toward the target object in the traveling direction of the mobile robot, the multiple groups of second lasers reflected back from the target object are received through the laser panel, and the obstacle type corresponding to the target object is then determined according to the multiple groups of first lasers and the multiple groups of second lasers. This solves the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot; detecting the object with an area array depth sensor improves the detection accuracy.
  • FIG. 1 is a block diagram of the hardware structure of a computer terminal for a method for determining a target object according to an embodiment of the present disclosure;
  • FIG. 2 is a flowchart of a method for determining a target object according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram (1) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram (2) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram (3) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure;
  • FIG. 6 is a structural block diagram of a mobile robot according to an embodiment of the present disclosure;
  • FIG. 7 is a structural block diagram of an apparatus for determining a target object according to an embodiment of the present disclosure.
  • FIG. 1 is a block diagram of the hardware structure of a computer terminal for the method for determining a target object according to an embodiment of the present disclosure.
  • The computer terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor unit (MPU) or a programmable logic device (PLD)) and a memory 104 for storing data. Optionally, the computer terminal may further include a transmission device 106 for communication functions and an input/output device 108.
  • The structure shown in FIG. 1 is only illustrative and does not limit the structure of the above computer terminal.
  • For example, the computer terminal may include more or fewer components than shown in FIG. 1, or have a different configuration with functionality equivalent to or greater than that shown in FIG. 1.
  • The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the method for determining a target object in the embodiments of the present disclosure. By running the computer programs stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above method.
  • The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • In some examples, the memory 104 may further include memory arranged remotely with respect to the processor 102, and such remote memory may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The transmission device 106 is used to receive or send data via a network.
  • A specific example of the above network may include a wireless network provided by the communication provider of the computer terminal.
  • In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • In one example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • FIG. 2 is a flowchart of a method for determining a target object according to an embodiment of the present disclosure. As shown in FIG. 2, the process includes the following steps:
  • Step S202: controlling the laser panel of the area array depth sensor to emit multiple groups of first lasers toward the target object located in the traveling direction of the mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities;
  • The area array depth sensor in the embodiments of the present application includes an area array time-of-flight (TOF) sensor.
  • A dot matrix laser sensor can measure only one point at a time, and a line array laser sensor can measure only the values of the points on one line at a time; therefore, a dot matrix or line array sensor must scan an object many times from different angles and combine the scan results from the different angles before the obstacle information can be determined, which is computationally intensive and inaccurate.
  • In contrast, the area array depth laser sensor in the embodiments of the present application can measure all points on a surface at one time, so the amount of computation is small and the result is accurate.
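  • For illustration only, the difference in what each type of sensor returns per measurement can be sketched as array shapes (the resolutions below are assumed values, not taken from this disclosure):

```python
import numpy as np

# Assumed resolutions, for illustration only.
point_reading = np.zeros(())        # dot matrix sensor: a single range value per shot
line_reading = np.zeros(64)         # line array sensor: one line of range values per shot
area_reading = np.zeros((48, 64))   # area array sensor: a full depth image per shot

print(point_reading.size, line_reading.size, area_reading.size)  # 1 64 3072
```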
  • Step S204: receiving, through the laser panel, multiple groups of second lasers reflected back from the target object by the multiple groups of first lasers;
  • The laser panel of the area array depth sensor implements both the laser emitting function and the laser receiving function, and mainly includes a laser transmitter and a laser receiver. While the mobile robot is moving, the laser transmitter in the area array depth sensor is controlled to emit multiple groups of lasers toward the area in front of the mobile robot. If there is a target object in that area, the multiple groups of lasers are reflected by the target object after reaching it, forming multiple groups of reflected light, and the multiple groups of reflected light are received by the laser receiver of the area array depth sensor.
  • For ease of description, the multiple groups of laser light emitted by the laser transmitter are defined as the multiple groups of first lasers, and the multiple groups of reflected light received by the laser receiver are defined as the multiple groups of second lasers.
  • Step S206: determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers.
  • Through the above steps, while the mobile robot is traveling, the laser panel of the area array depth sensor is controlled to emit multiple groups of first lasers toward the target object in the traveling direction of the mobile robot, the multiple groups of second lasers reflected back from the target object are received through the laser panel, and the obstacle type corresponding to the target object is determined according to the multiple groups of first lasers and the multiple groups of second lasers, which solves the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot and improves the detection accuracy, as illustrated by the sketch below.
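  • A minimal end-to-end sketch of steps S202 to S206, under the simplifying assumption that each laser group reduces to an (intensity, timestamp) pair and that the final classification is only a placeholder:

```python
from typing import List, Tuple

Group = Tuple[float, float]  # (light intensity, timestamp in seconds)

def emit_first_lasers() -> List[Group]:
    # S202: each group is emitted with a distinct light intensity (values are made up).
    return [(0.2, 0.000), (0.5, 0.001), (0.8, 0.002)]

def receive_second_lasers(first: List[Group], round_trip_s: float = 6.7e-9) -> List[Group]:
    # S204: in this toy model every group is reflected back after the same round-trip delay.
    return [(intensity, t_emit + round_trip_s) for intensity, t_emit in first]

def determine_obstacle_type(first: List[Group], second: List[Group]) -> str:
    # S206: a placeholder decision; the real classification uses the three-dimensional
    # information derived from the time of flight, as described further below.
    return "obstacle ahead" if second else "path clear"

first_lasers = emit_first_lasers()
second_lasers = receive_second_lasers(first_lasers)
print(determine_obstacle_type(first_lasers, second_lasers))
```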
  • The laser panel of the area array depth sensor is preferably installed at the foremost part of the mobile robot during travel, but it may also be installed at other positions of the mobile robot, for example on the upper surface of the mobile robot, as long as objects in front of the mobile robot can be detected.
  • In an optional embodiment, determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers is implemented in the following manner: determining, from the multiple groups of first lasers and the multiple groups of second lasers, a first laser and a second laser having the same light intensity, where the first laser carries encoded information and the second laser carries decoded information; and determining the obstacle type corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser.
  • Specifically, the area array depth sensor emits multiple groups of first lasers forward over one surface, for example multiple groups of first lasers within one second, and each group of laser light has a different brightness (light intensity). Because the light intensity of a laser does not change while it is being reflected, the first laser and the second laser having the same light intensity can be determined from the multiple groups of first lasers and the multiple groups of second lasers.
  • In the area array depth sensor, the first laser carries encoded information. The encoded information is preset for the area array depth sensor and contains parameter information. After the first laser is reflected by the target object, the parameter information in the encoded information changes; the changed encoded information is defined here as the decoded information. The obstacle type of the target object can then be determined according to the encoded information of the first laser and the decoded information of the second laser, for example as in the pairing sketch below.
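  • A minimal sketch of this pairing step, under the assumption (made only for illustration) that the encoded information reduces to an emission timestamp and the decoded information to a reception timestamp:

```python
from typing import List, Tuple

def pair_by_intensity(first: List[Tuple[float, float]],
                      second: List[Tuple[float, float]],
                      tol: float = 1e-6) -> List[Tuple[float, float]]:
    """Match each emitted group (intensity, emit_time_s) with the received group
    (intensity, receive_time_s) of the same light intensity and return
    (intensity, time_of_flight_s) pairs."""
    pairs = []
    for intensity_tx, t_emit in first:
        for intensity_rx, t_recv in second:
            if abs(intensity_tx - intensity_rx) <= tol:
                pairs.append((intensity_tx, t_recv - t_emit))
                break
    return pairs

# The reflections may arrive in any order; matching is done purely by light intensity.
print(pair_by_intensity([(0.2, 0.0), (0.5, 1.0e-3)],
                        [(0.5, 1.0e-3 + 6.7e-9), (0.2, 6.7e-9)]))
```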
  • The area array TOF depth sensor includes sensors that use an infrared receiver, or an infrared receiver plus an RGB receiver, and the technologies used include direct time-of-flight measurement and indirect time-of-flight measurement.
  • Determining the obstacle type corresponding to the target object based on the encoded information of the first laser and the decoded information of the second laser can, in an optional embodiment, be implemented in the following manner: determining the time of flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser; determining the three-dimensional information of the target object according to the time of flight; and determining the obstacle type corresponding to the target object according to the three-dimensional information, where the three-dimensional information includes at least one of the following: height information of the target object, length information of the target object, and width information of the target object.
  • The area array depth sensor can determine the time of flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser. Specifically, the area array depth sensor determines the time of flight of the first laser according to the parameter information in the decoded information and the parameter information in the encoded information, determines the three-dimensional information of the target object according to the time of flight of the first laser, and then determines the obstacle type corresponding to the target object according to the three-dimensional information.
  • In this manner, the area array depth sensor can quickly determine the time of flight of the first laser and determine the three-dimensional information of the target object according to it.
  • Determining the three-dimensional information of the target object according to the time of flight optionally includes: determining the three-dimensional coordinates of the target object according to the time of flight; separating, from the three-dimensional coordinates, the coordinate information of the ground on which the mobile robot is located; and determining the three-dimensional information of the target object using the coordinate information of the ground as a reference.
  • The space in which the mobile robot is located can be regarded as a three-dimensional coordinate system, and the area array depth sensor can determine the three-dimensional coordinates of the target object in this coordinate system according to the time of flight of the first laser, for example three-dimensional coordinates (X, Y, Z).
  • The area array depth sensor then separates the coordinate information of the ground on which the mobile robot is located from the three-dimensional coordinates of the target object and, using the coordinate information of the ground as a datum, determines the three-dimensional information of the target object, as in the sketch below.
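  • One possible way to carry out this separation is sketched below on a synthetic point cloud; the height threshold and the point distribution are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point cloud in robot coordinates (metres): a flat floor plus a small box on it.
floor = np.column_stack([rng.uniform(0.0, 2.0, 500), rng.uniform(-1.0, 1.0, 500), np.zeros(500)])
box = np.column_stack([rng.uniform(0.8, 1.0, 200), rng.uniform(-0.1, 0.1, 200), rng.uniform(0.0, 0.12, 200)])
cloud = np.vstack([floor, box])

# Separate the ground: here, every point close to the lowest measured height.
ground_z = np.percentile(cloud[:, 2], 5)
obstacle = cloud[cloud[:, 2] >= ground_z + 0.02]

# Three-dimensional information of the remaining points, measured from the ground plane.
length = np.ptp(obstacle[:, 0])
width = np.ptp(obstacle[:, 1])
height = obstacle[:, 2].max() - ground_z
print(f"length={length:.2f} m, width={width:.2f} m, height={height:.2f} m")
```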
  • Determining the obstacle type corresponding to the target object according to the three-dimensional information may be implemented in the following manner: obtaining a preset correspondence between three-dimensional information and obstacle types, and determining, from the correspondence, the obstacle type corresponding to the three-dimensional information of the target object.
  • Each piece of three-dimensional information can determine one obstacle type. For example, there may be a table of three-dimensional information and corresponding obstacle types, in which different three-dimensional information corresponds to different obstacle types. After the area array depth sensor acquires the three-dimensional information of the target object, it obtains this table and determines, from the table, the obstacle type corresponding to the three-dimensional information of the target object.
  • The obstacle types include, for example: table and chair legs, walls, and steps.
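  • A minimal sketch of such a preset correspondence; the dimension thresholds below are invented for illustration, since the disclosure only states that a correspondence between three-dimensional information and obstacle types is preset:

```python
def lookup_obstacle_type(length_m: float, width_m: float, height_m: float) -> str:
    """Map coarse three-dimensional information to an obstacle type (illustrative thresholds)."""
    if height_m > 0.5 and max(length_m, width_m) > 1.0:
        return "wall"
    if height_m > 0.2 and max(length_m, width_m) < 0.1:
        return "table or chair leg"
    if height_m <= 0.2:
        return "step"
    return "unknown obstacle"

print(lookup_obstacle_type(0.05, 0.05, 0.7))  # table or chair leg
print(lookup_obstacle_type(3.0, 0.1, 2.4))    # wall
print(lookup_obstacle_type(0.6, 0.6, 0.1))    # step
```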
  • After the obstacle type corresponding to the target object is determined according to the multiple groups of first lasers and the multiple groups of second lasers, the method further includes: determining an avoidance strategy corresponding to the obstacle type, and controlling the traveling route of the mobile robot according to the avoidance strategy, so as to control the mobile robot to successfully avoid the target object.
  • That is, after the area array depth sensor determines the obstacle type corresponding to the target object, the corresponding avoidance strategy is selected according to that obstacle type; for chair legs, for example, bypassing is chosen. The traveling route of the mobile robot is then controlled according to the avoidance strategy so that the mobile robot successfully avoids the target object. In this manner, the mobile robot can quickly avoid the target object, reducing the probability of the mobile robot getting stuck or its rolling brush becoming entangled. A sketch of such a strategy selection follows.
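  • A minimal sketch of selecting an avoidance strategy by obstacle type; apart from bypassing chair legs, the concrete strategies listed here are assumptions added for illustration:

```python
# Illustrative strategy table; only the "bypass" choice for chair legs is taken from the text.
AVOIDANCE_STRATEGY = {
    "table or chair leg": "bypass with a small clearance",
    "wall": "turn and follow along the wall",
    "step": "climb if within the obstacle-surmounting capability, otherwise bypass",
}

def plan_route(obstacle_type: str) -> str:
    return AVOIDANCE_STRATEGY.get(obstacle_type, "stop and re-plan")

print(plan_route("table or chair leg"))  # bypass with a small clearance
```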
  • In an exemplary embodiment, the area array depth sensor emits multiple groups of lasers forward over one surface, for example multiple groups of lasers within 1 s, and each group of lasers has a different brightness (light intensity).
  • Each brightness (light intensity) corresponds to a point in time, so the multiple groups of lasers within that 1 s carry encoded information that can represent the flight time of the laser, and the laser reflected back from the object carries the corresponding decoded information; for lasers with the same brightness (light intensity), the round-trip time of the laser can therefore be calculated.
  • The distance from the light source to the object can then be calculated from the flight time of the light, and the three-dimensional information of the object is obtained by combining the distances from each point of the object to the light source.
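  • The distance calculation itself is the standard time-of-flight relation d = c * t / 2 (half the round trip at the speed of light); a small sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_time_of_flight(round_trip_s: float) -> float:
    """One-way distance to the reflecting point: d = c * t / 2."""
    return C * round_trip_s / 2.0

# A laser that returns after about 6.67 ns has hit a point roughly 1 m away.
print(f"{distance_from_time_of_flight(6.67e-9):.3f} m")
```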
  • In other words, a method for identifying obstacles using an area array TOF depth sensor is proposed.
  • The laser transmitter of the area array depth sensor is installed facing forward, and the laser receiver of the area array depth sensor is installed on the side adjacent to the laser transmitter; together the two form the area array TOF depth sensor.
  • The active light emitted by the laser transmitter is reflected by an object and then received by the laser receiver.
  • The area array TOF depth sensor calculates the time of flight of the emitted laser light and, according to the time of flight and the calibration parameters of the laser receiver, obtains the three-dimensional coordinates of the objects within the field of view of the area array TOF depth sensor.
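  • How the calibration parameters enter is not detailed in the disclosure; a common choice, assumed here purely for illustration, is a pinhole model with intrinsics fx, fy, cx, cy used to back-project each depth pixel into a 3-D point:

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (metres) into 3-D points in the receiver frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.stack([x, y, depth_m], axis=-1)  # shape (h, w, 3)

depth = np.full((4, 6), 1.0)  # toy 4x6 depth frame, every pixel 1 m away
points = depth_to_points(depth, fx=100.0, fy=100.0, cx=3.0, cy=2.0)
print(points.shape)  # (4, 6, 3)
```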
  • From these three-dimensional points, the ground information belonging to the plane on which the sweeping robot (equivalent to the mobile robot in the above embodiments) is located is first separated; then, based on this ground information together with the width, length, and height of the three-dimensional points, the objects within the perception range are classified and it is judged whether they are obstacles.
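  • The disclosure does not prescribe how the ground plane is separated; one possible approach, shown only as an assumption, is a least-squares fit of a plane z = a*x + b*y + c to the points and a comparison of each point's height against that plane:

```python
import numpy as np

def fit_ground_plane(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of z = a*x + b*y + c; `points` is an (N, 3) array assumed to be mostly floor."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def height_above_ground(points: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    a, b, c = coeffs
    return points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)

rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 2, 300), rng.uniform(-1, 1, 300), np.zeros(300)])
coeffs = fit_ground_plane(floor)
print(np.allclose(coeffs, [0.0, 0.0, 0.0], atol=1e-9))  # True: the toy floor lies at z = 0
```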
  • FIG. 3 is a schematic diagram (1) of detecting an obstacle in the method for determining a target object according to an embodiment of the present disclosure, FIG. 4 is a schematic diagram (2) of detecting an obstacle in the method, and FIG. 5 is a schematic diagram (3) of detecting an obstacle in the method.
  • In FIGS. 3 to 5, the cylinder is a device using an area array TOF sensor (equivalent to the mobile robot in the above embodiments), the region between the cylinder and the cuboid is the measurement and perception range of the TOF sensor, and the cuboid is the obstacle in front of the device:
  • depending on the measured three-dimensional information, the object can be classified as a table or chair leg, as a wall, or as a step; according to the different classifications, the device can adopt different obstacle avoidance or obstacle surmounting strategies.
  • As noted above, the area array TOF depth sensor includes sensors that use an infrared receiver, or an infrared receiver plus an RGB receiver, and the technologies used include direct time-of-flight measurement and indirect time-of-flight measurement.
  • By using the area array TOF depth sensor to identify obstacles, the above technical solutions of the embodiments of the present disclosure can identify various kinds of obstacles.
  • When the mobile robot recognizes obstacles such as power cables, entanglement and collision can be prevented.
  • When table and chair legs are identified, a closer avoidance distance can be used while still ensuring passability.
  • When crossable objects within the obstacle-crossing capability, such as steps, carpets, and sliding-door slide rails, are identified, missed cleaning of those areas is prevented.
  • The technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
  • FIG. 6 is a structural block diagram of a mobile robot according to an embodiment of the present disclosure. The mobile robot includes:
  • an area array depth sensor 62, arranged on the front side of the mobile robot and configured to use a laser panel to emit multiple groups of first lasers toward a target object located in the traveling direction of the mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities;
  • a processor 64, arranged inside the mobile robot and connected to the area array depth sensor, or located inside the area array depth sensor, and configured to receive the multiple groups of second lasers sent by the laser panel and to determine the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers, where the multiple groups of second lasers are received by the laser panel after the multiple groups of first lasers are reflected back from the target object.
  • With the above apparatus, the area array depth sensor 62 is controlled to use the laser panel to emit multiple groups of first lasers toward the target object in the traveling direction of the mobile robot, and the processor 64 then determines the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers received through the laser panel.
  • The laser panel of the area array depth sensor is preferably installed at the foremost part of the mobile robot during travel, but it may also be installed at other positions of the mobile robot, for example on the upper surface of the mobile robot, as long as objects in front of the mobile robot can be detected.
  • The processor 64 is further configured to determine, from the multiple groups of first lasers and the multiple groups of second lasers, a first laser and a second laser having the same light intensity, where the first laser carries encoded information and the second laser carries decoded information, and to determine the obstacle type corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser.
  • Specifically, the area array depth sensor emits multiple groups of first lasers forward over one surface, for example multiple groups of first lasers within one second, and each group of laser light has a different brightness (light intensity). Because the light intensity of a laser does not change while it is being reflected, the first laser and the second laser having the same light intensity can be determined from the multiple groups of first lasers and the multiple groups of second lasers.
  • In the area array depth sensor, the first laser carries encoded information. The encoded information is preset for the area array depth sensor and contains parameter information. After the first laser is reflected by the target object, the parameter information in the encoded information changes; the changed encoded information is defined here as the decoded information. The obstacle type of the target object can then be determined according to the encoded information of the first laser and the decoded information of the second laser.
  • The area array TOF depth sensor includes sensors that use an infrared receiver, or an infrared receiver plus an RGB receiver, and the technologies used include direct time-of-flight measurement and indirect time-of-flight measurement.
  • The processor 64 is further configured to determine the time of flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser, to determine the three-dimensional information of the target object according to the time of flight, and to determine the obstacle type corresponding to the target object according to the three-dimensional information, where the three-dimensional information includes at least one of the following: height information of the target object, length information of the target object, and width information of the target object.
  • The area array depth sensor can determine the time of flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser. Specifically, the area array depth sensor determines the time of flight of the first laser according to the parameter information in the decoded information and the parameter information in the encoded information, determines the three-dimensional information of the target object according to the time of flight of the first laser, and then determines the obstacle type corresponding to the target object according to the three-dimensional information.
  • In this manner, the area array depth sensor can quickly determine the time of flight of the first laser and determine the three-dimensional information of the target object according to it.
  • The processor 64 is further configured to determine the three-dimensional coordinates of the target object according to the time of flight, to separate the coordinate information of the ground on which the mobile robot is located from the three-dimensional coordinates, and to determine the three-dimensional information of the target object using the coordinate information of the ground as a reference.
  • The space in which the mobile robot is located can be regarded as a three-dimensional coordinate system, and the area array depth sensor can determine the three-dimensional coordinates of the target object in this coordinate system according to the time of flight of the first laser, for example three-dimensional coordinates (X, Y, Z). The area array depth sensor then separates the coordinate information of the ground on which the mobile robot is located from the three-dimensional coordinates of the target object and, using the coordinate information of the ground as a datum, determines the three-dimensional information of the target object.
  • The processor 64 is further configured to acquire a preset correspondence between three-dimensional information and obstacle types, and to determine, from the correspondence, the obstacle type corresponding to the three-dimensional information of the target object.
  • Each piece of three-dimensional information can determine one obstacle type. For example, there may be a table of three-dimensional information and corresponding obstacle types, in which different three-dimensional information corresponds to different obstacle types. After the area array depth sensor acquires the three-dimensional information of the target object, it obtains this table and determines, from the table, the obstacle type corresponding to the three-dimensional information of the target object.
  • The obstacle types include, for example: table and chair legs, walls, and steps.
  • The processor 64 is further configured to determine an avoidance strategy corresponding to the obstacle type, and to control the traveling route of the mobile robot according to the avoidance strategy, so as to control the mobile robot to successfully avoid the target object.
  • That is, after the area array depth sensor determines the obstacle type corresponding to the target object, the corresponding avoidance strategy is selected according to that obstacle type; for chair legs, for example, bypassing is chosen. The traveling route of the mobile robot is then controlled according to the avoidance strategy so that the mobile robot successfully avoids the target object. In this manner, the mobile robot can quickly avoid the target object, reducing the probability of the mobile robot getting stuck or its rolling brush becoming entangled.
  • In this embodiment, an apparatus for determining a target object is also provided. The apparatus is used to implement the above embodiments and preferred implementations, and what has already been described will not be repeated.
  • As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function.
  • Although the apparatuses described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • FIG. 7 is a structural block diagram of an apparatus for determining a target object according to an embodiment of the present disclosure. As shown in FIG. 7, the apparatus includes:
  • a sending module 72, configured to control the laser panel of the area array depth sensor to emit multiple groups of first lasers toward the target object located in the traveling direction of the mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities;
  • a receiving module 74, configured to receive, through the laser panel, the multiple groups of second lasers reflected back from the target object by the multiple groups of first lasers;
  • a determining module 76, configured to determine the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers.
  • With the above apparatus, the laser panel of the area array depth sensor is controlled to emit multiple groups of first lasers toward the target object in the traveling direction of the mobile robot, the multiple groups of second lasers reflected back from the target object are received through the laser panel, and the obstacle type corresponding to the target object is determined according to the multiple groups of first lasers and the multiple groups of second lasers. This solves the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot and improves the detection accuracy.
  • The above modules can be implemented by software or hardware. For the latter, this can be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • Optionally, the above storage medium may be configured to store a computer program for performing the steps in any one of the above method embodiments.
  • Optionally, the above storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • Embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • Optionally, the above electronic device may further include a transmission device and an input/output device, where both the transmission device and the input/output device are connected to the above processor.
  • Optionally, the above processor may be configured to execute, through a computer program, the steps in any one of the above method embodiments.
  • Embodiments of the present disclosure also provide a robot, including a main body, a motion component, and a controller, where the controller is configured to execute the steps in any one of the above method embodiments.
  • Obviously, those skilled in the art should understand that each module or step of the present disclosure described above may be implemented by a general-purpose computing device; the modules or steps may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. They may be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that described here; alternatively, they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module.
  • In this way, the present disclosure is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A method for determining a target object, a mobile robot, a storage medium, and an electronic apparatus. The method comprises: controlling a laser panel of an area array depth sensor to emit multiple groups of first lasers to a target object located in a traveling direction of a mobile robot, the laser panel being disposed on a front side of the mobile robot, the front side being used to indicate the foremost part of the mobile robot during travel, and any two groups of the first lasers among the multiple groups of first lasers having different light intensities (S202); by means of the laser panel, receiving multiple groups of second lasers reflected back by the target object from the multiple groups of first lasers (S204); and determining an obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers (S206). The present invention solves the problem of low detection accuracy of conventional methods in the process of detecting an object in front of a floor-sweeping robot.

Description

Method for determining a target object, mobile robot, storage medium, and electronic device
This disclosure claims priority to the Chinese patent application filed with the China Patent Office on September 23, 2021, with application number 202111116319.7 and the invention title "Method for Determining Target Object, Mobile Robot, Storage Medium, and Electronic Device"; the entire contents of the above patent application are incorporated into this disclosure by reference.
Technical Field
The present disclosure relates to the field of communications, and in particular to a method for determining a target object, a mobile robot, a storage medium, and an electronic device.
Background
With the development of society, more and more households have begun to use sweeping robots. During use, a sweeping robot needs to recognize the area in front of it and judge whether there are obstacles in that area, so that it can avoid the obstacles while moving.
Among existing sweeping robots, those with an active obstacle avoidance function rely on dot matrix sensors or line array depth sensors to recognize obstacles. However, a dot matrix or line array sensor must scan an object many times from different angles and combine the scan results from the different angles before the obstacle information can be determined, which is computationally intensive and inaccurate. In addition, the sensing range of dot matrix or line array sensors is relatively small, so the judgment accuracy of the active obstacle avoidance function is limited, the user experience is poor, and small obstacles cannot be cleaned, which increases the probability of the sweeping robot getting stuck or its rolling brush becoming entangled.
For the problem in the related art that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot, no effective solution has been proposed so far.
Therefore, it is necessary to improve the related art to overcome the above defects.
Summary of the Invention
The purpose of the present disclosure is to provide a method for determining a target object, a mobile robot, a storage medium, and an electronic device, so as to at least solve the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot.
The purpose of the present disclosure is achieved through the following technical solutions:
According to one aspect of the embodiments of the present disclosure, a method for determining a target object is provided, including: controlling a laser panel of an area array depth sensor to emit multiple groups of first lasers toward a target object located in the traveling direction of a mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities; receiving, through the laser panel, multiple groups of second lasers reflected back from the target object by the multiple groups of first lasers; and determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers.
Further, determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers includes: determining, from the multiple groups of first lasers and the multiple groups of second lasers, a first laser and a second laser having the same light intensity, where the first laser carries encoded information and the second laser carries decoded information; and determining the obstacle type corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser.
Further, determining the obstacle type corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser includes: determining the time of flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser; determining the three-dimensional information of the target object according to the time of flight; and determining the obstacle type corresponding to the target object according to the three-dimensional information, where the three-dimensional information includes at least one of the following: height information of the target object, length information of the target object, and width information of the target object.
Further, determining the three-dimensional information of the target object according to the time of flight includes: determining the three-dimensional coordinates of the target object according to the time of flight; separating, from the three-dimensional coordinates, the coordinate information of the ground on which the mobile robot is located; and determining the three-dimensional information of the target object using the coordinate information of the ground as a reference.
Further, determining the obstacle type corresponding to the target object according to the three-dimensional information includes: obtaining a preset correspondence between three-dimensional information and obstacle types; and determining, from the correspondence, the obstacle type corresponding to the three-dimensional information of the target object.
Further, after the obstacle type corresponding to the target object is determined according to the multiple groups of first lasers and the multiple groups of second lasers, the method further includes: determining an avoidance strategy corresponding to the obstacle type; and controlling the traveling route of the mobile robot according to the avoidance strategy, so as to control the mobile robot to successfully avoid the target object.
According to another aspect of the embodiments of the present disclosure, a mobile robot is also provided, including: an area array depth sensor, arranged on the front side of the mobile robot and configured to use a laser panel to emit multiple groups of first lasers toward a target object located in the traveling direction of the mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the foremost part of the mobile robot during travel, and any two groups among the multiple groups of first lasers have different light intensities; and a processor, arranged inside the mobile robot and connected to the area array depth sensor, or located inside the area array depth sensor, and configured to receive the multiple groups of second lasers sent by the laser panel and to determine the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers, where the multiple groups of second lasers are received by the laser panel after the multiple groups of first lasers are reflected back from the target object.
Further, the processor is further configured to determine, from the multiple groups of first lasers and the multiple groups of second lasers, a first laser and a second laser having the same light intensity, where the first laser carries encoded information and the second laser carries decoded information.
According to yet another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, in which a computer program is stored, where the computer program is configured to execute, when run, any one of the above methods for determining a target object.
According to yet another aspect of the embodiments of the present disclosure, an electronic device is provided, including a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to execute any one of the above methods for determining a target object.
Through the present disclosure, while the mobile robot is traveling, the laser panel of the area array depth sensor is controlled to emit multiple groups of first lasers toward the target object in the traveling direction of the mobile robot, the multiple groups of second lasers reflected back from the target object are received through the laser panel, and the obstacle type corresponding to the target object is then determined according to the multiple groups of first lasers and the multiple groups of second lasers. The above technical solution solves the problem that traditional methods have low detection accuracy when detecting objects in front of a sweeping robot; detecting the object with an area array depth sensor improves the detection accuracy.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present disclosure and constitute a part of the present disclosure. The exemplary embodiments of the present disclosure and their descriptions are used to explain the present disclosure and do not constitute an improper limitation of the present disclosure. In the drawings:
FIG. 1 is a block diagram of the hardware structure of a computer terminal for a method for determining a target object according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for determining a target object according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram (1) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram (2) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram (3) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure;
FIG. 6 is a structural block diagram of a mobile robot according to an embodiment of the present disclosure;
FIG. 7 is a structural block diagram of an apparatus for determining a target object according to an embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in detail below with reference to the drawings and in combination with the embodiments. It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other.
It should be noted that the terms "first", "second", and the like in the specification, claims, and above drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a specific sequence or order.
The method embodiments provided in the embodiments of the present disclosure may be executed on a computer terminal or a similar computing device. Taking a computer terminal as an example, FIG. 1 is a block diagram of the hardware structure of a computer terminal for a method for determining a target object according to an embodiment of the present disclosure. As shown in FIG. 1, the computer terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor unit (MPU) or a programmable logic device (PLD)) and a memory 104 for storing data. Optionally, the computer terminal may further include a transmission device 106 for communication functions and an input/output device 108. Those of ordinary skill in the art will understand that the structure shown in FIG. 1 is only illustrative and does not limit the structure of the above computer terminal. For example, the computer terminal may further include more or fewer components than shown in FIG. 1, or have a different configuration with functionality equivalent to or greater than that shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the method for determining a target object in the embodiments of the present disclosure. By running the computer programs stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory arranged remotely with respect to the processor 102, and such remote memory may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by the communication provider of the computer terminal. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
在本实施例中提供了一种运行于上述目标物体的确定方法,图2是根据本公开实施例的目标物体的确定方法的流程图,如图2所示,该流程包括如下步骤:In this embodiment, a method for determining a target object operating on the above-mentioned target object is provided. FIG. 2 is a flowchart of a method for determining a target object according to an embodiment of the present disclosure. As shown in FIG. 2 , the process includes the following steps:
步骤S202，控制面阵深度传感器的激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光，其中，所述激光面板设置在所述移动机器人的前侧，所述前侧用于指示所述移动机器人在行进过程中的最前方，所述多组第一激光中的任意两组第一激光具有不同光线强度；Step S202: controlling the laser panel of the area array depth sensor to emit multiple groups of first lasers toward a target object located in the travelling direction of the mobile robot, where the laser panel is arranged on the front side of the mobile robot, the front side indicates the frontmost part of the mobile robot during travel, and any two of the multiple groups of first lasers have different light intensities;
需要说明的是,本申请实施例中的面阵深度传感器包括:面阵飞行时间(Time of Flight,简称为TOF)传感器。It should be noted that the area array depth sensor in the embodiment of the present application includes: an area array Time of Flight (TOF for short) sensor.
需要说明的是，点阵激光传感器一次只能测量一个点，线阵激光传感器一次只能测量一条线上所有点的值，并且点阵或者线阵传感器需要对一个物体进行很多次不同角度的扫描照射并将不同角度的扫描结果进行结合才可以确定一个障碍物信息，计算量大且不准确。而本申请实施例中的面阵深度激光传感器可以一次测量一个面上的所有点，计算量小且准确。It should be noted that a dot-matrix laser sensor can measure only one point at a time and a line-array laser sensor can measure only the points on one line at a time; a dot-matrix or line-array sensor therefore has to scan an object many times at different angles and combine the scanning results of those angles before one piece of obstacle information can be determined, which is computationally intensive and inaccurate. In contrast, the area array depth laser sensor in the embodiments of the present application can measure all points on a surface at once, with a small amount of computation and high accuracy.
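As a rough illustration of this difference (not part of the disclosed embodiments), the following Python sketch contrasts the two acquisition styles: a dot-matrix sensor needs thousands of separate measurements to cover a scene, while an area array sensor returns a full depth frame per exposure. The resolution and the capture functions are assumptions made for the example.

```python
import numpy as np

H, W = 60, 80  # assumed resolution of an area array TOF sensor

def capture_area_frame(rng):
    # One exposure of an area array sensor yields a full H x W grid of depths (m).
    return rng.uniform(0.2, 4.0, size=(H, W))

def capture_single_point(rng):
    # A dot-matrix sensor yields a single depth value per measurement.
    return rng.uniform(0.2, 4.0)

rng = np.random.default_rng(0)
area_scan = capture_area_frame(rng)                             # 1 capture -> 4800 points
point_scan = [capture_single_point(rng) for _ in range(H * W)]  # 4800 captures needed
print(area_scan.shape, len(point_scan))                         # (60, 80) 4800
```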
步骤S204,通过所述激光面板接收所述多组第一激光从所述目标物体上反射回的多组第二激光;Step S204, receiving multiple sets of second laser beams reflected from the target object by the multiple sets of first laser beams through the laser panel;
可以理解的是，面阵深度传感器的激光面板可以实现激光发射功能和激光接收功能，主要包括：激光发射器，激光接收器；在移动机器人移动的过程中，控制面阵深度传感器中的激光发射器向移动机器人的前方区域呈一个面发射多组激光，如果前方区域里面有目标物体，则多组激光在接触到目标物体以后，多组激光会被目标物体反射，形成多组反射光，进而多组反射光被面阵深度传感器的激光接收器接收。为了更好的区分激光发射器发射的激光和激光接收器接收到的反射光，将激光发射器发射的多组激光定义为多组第一激光，将激光接收器接收到的多组反射光定义为多组第二激光。It can be understood that the laser panel of the area array depth sensor implements both a laser emitting function and a laser receiving function, and mainly includes a laser emitter and a laser receiver. While the mobile robot is moving, the laser emitter of the area array depth sensor is controlled to emit multiple groups of lasers, as one surface, toward the area in front of the mobile robot. If there is a target object in that area, the multiple groups of lasers are reflected by the target object after reaching it, forming multiple groups of reflected light, which are then received by the laser receiver of the area array depth sensor. To better distinguish the lasers emitted by the laser emitter from the reflected light received by the laser receiver, the multiple groups of lasers emitted by the laser emitter are defined as the multiple groups of first lasers, and the multiple groups of reflected light received by the laser receiver are defined as the multiple groups of second lasers.
步骤S206,根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型。Step S206, determining an obstacle type corresponding to the target object according to the multiple sets of first lasers and the multiple sets of second lasers.
通过上述步骤,在移动机器人行进的过程中,控制面阵深度传感器的激光面板对移动机器人行进方向上的目标物体发射多组第一激光,并通过激光面板接收多组第一激光从目标物体上反射回的多组第二激光,进而根据多组第一激光与多组第二激光确定目标物体对应的障碍物类型。采用上述技术方案,解决了传统方法在检测扫地机器人前方物体的过程中,检测准确率较低的问题。进而通过控制面阵深度传感器对物体进行检测,提高了检测的准确率。Through the above steps, in the process of moving the mobile robot, the laser panel controlling the area array depth sensor emits multiple sets of first lasers to the target object in the direction of the mobile robot, and receives multiple sets of first lasers from the target object through the laser panel The reflected multiple sets of second lasers are used to determine the obstacle type corresponding to the target object according to the multiple sets of first lasers and multiple sets of second lasers. By adopting the above technical solution, the problem of low detection accuracy in the process of detecting objects in front of the sweeping robot in the traditional method is solved. Furthermore, by controlling the area array depth sensor to detect the object, the detection accuracy is improved.
在本公开实施例中,面阵深度传感器的激光面板的安装位置优选是移动机器人在行进过程中的最前方,也可以安装在移动机器人的其他位置,例如,激光面板安装在移动机器人的上表面,以便能够对移动机器人的前方物体进行检测。In the embodiment of the present disclosure, the installation position of the laser panel of the area array depth sensor is preferably at the forefront of the mobile robot during travel, and it can also be installed at other positions of the mobile robot, for example, the laser panel is installed on the upper surface of the mobile robot , in order to be able to detect objects in front of the mobile robot.
需要说明的是，根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型，通过以下方式实现：从所述多组第一激光与所述多组第二激光中确定具有相同光线强度的第一激光与第二激光；其中，所述第一激光中携带有编码信息，所述第二激光中携带有解码信息，根据所述第一激光的编码信息和所述第二激光的解码信息确定所述目标物体对应的障碍物类型。It should be noted that determining the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers is implemented as follows: a first laser and a second laser having the same light intensity are determined from the multiple groups of first lasers and the multiple groups of second lasers, where the first laser carries encoding information and the second laser carries decoding information; the obstacle type corresponding to the target object is then determined according to the encoding information of the first laser and the decoding information of the second laser.
需要说明的是，本申请实施例中，面阵深度传感器向前方呈一个面发射出多组第一激光，例如在1s内发射多组第一激光，每组激光的亮度(光线强度)不同。由于激光在反射的过程中，激光的光线强度不会发生变化，进而可以从多组第一激光和多组第二激光中确定具有相同光线强度的第一激光和第二激光，并且在面阵深度传感器发射第一激光的时候，第一激光中携带有编码信息，其中，编码信息为面阵深度传感器预先设置好的，编码信息内部具有参数信息，在第一激光被目标物体反射以后，编码信息内部的参数信息会发生变化，为了更好的理解，此处将变化后的编码信息定义为解码信息，进而可以根据第一激光的编码信息和第二激光的解码信息确定目标物体的障碍物类型。需要说明的是，面阵TOF深度传感器，包括使用了红外接收器或者红外接收器+RGB接收器的传感器，该传感器使用了包括直接测量飞行时间和间接测量飞行时间的技术。It should be noted that, in the embodiments of the present application, the area array depth sensor emits multiple groups of first lasers forward as one surface, for example multiple groups within 1 s, with each group having a different brightness (light intensity). Since the light intensity of a laser does not change during reflection, a first laser and a second laser having the same light intensity can be determined from the multiple groups of first lasers and the multiple groups of second lasers. When the area array depth sensor emits a first laser, the first laser carries encoding information that is preset by the area array depth sensor and contains parameter information; after the first laser is reflected by the target object, the parameter information inside the encoding information changes, and for ease of understanding the changed encoding information is defined here as decoding information. The obstacle type of the target object can then be determined according to the encoding information of the first laser and the decoding information of the second laser. It should also be noted that the area array TOF depth sensor includes sensors using an infrared receiver or an infrared receiver plus an RGB receiver, and uses techniques including direct and indirect time-of-flight measurement.
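A minimal sketch of the matching step described above, assuming simple dictionaries stand in for the emitted and received laser groups; the field names and timing values are illustrative, not defined by the disclosure.

```python
# Assumed record layout for emitted ("first") and received ("second") laser groups;
# the timing values are placeholders, not measurements.
first_groups = [
    {"intensity": 10, "emit_time_ns": 0.0},
    {"intensity": 20, "emit_time_ns": 2.0},
    {"intensity": 30, "emit_time_ns": 4.0},
]
second_groups = [
    {"intensity": 30, "recv_time_ns": 14.0},
    {"intensity": 10, "recv_time_ns": 10.2},
    {"intensity": 20, "recv_time_ns": 12.5},
]

def pair_by_intensity(first, second):
    # Per the description, the intensity level is preserved on reflection,
    # so it links each reflected group back to the group that produced it.
    received = {g["intensity"]: g for g in second}
    return [(f, received[f["intensity"]]) for f in first if f["intensity"] in received]

for emitted, reflected in pair_by_intensity(first_groups, second_groups):
    round_trip_ns = reflected["recv_time_ns"] - emitted["emit_time_ns"]
    print(emitted["intensity"], round_trip_ns)
```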
为了更好的理解上述根据第一激光的编码信息和第二激光的解码信息确定目标物体对应的障碍物类型,在一个可选的实施例中,可以通过以下方式实现:根据所述第一激光中的编码信息和所述第二激光中的解码信息确定所述第一激光的飞行时间;根据所述飞行时间确定所述目标物体的三维信息,并根据所述三维信息确定标物体对应的障碍物类型,其中,三维信息包括以下至少之一:所述目标物体的高度信息,所述目标物体的长度信息,所述目标物体的宽度信息。In order to better understand the above-mentioned determination of the obstacle type corresponding to the target object based on the encoding information of the first laser and the decoding information of the second laser, in an optional embodiment, it can be achieved in the following manner: according to the first laser Determine the flight time of the first laser from the encoded information in the second laser and the decoding information in the second laser; determine the three-dimensional information of the target object according to the flight time, and determine the obstacle corresponding to the target object according to the three-dimensional information The object type, wherein the three-dimensional information includes at least one of the following: height information of the target object, length information of the target object, and width information of the target object.
也就是说，面阵深度传感器可以根据第一激光中的编码信息和第二激光中的解码信息来确定第一激光的飞行时间。具体的，面阵深度传感器可以根据解码信息中的参数信息和编码信息中的参数信息来确定第一激光的飞行时间，并根据第一激光的飞行时间来确定目标物体的三维信息，进而根据三维信息确定目标物体对应的障碍物类型。采用上述技术方案，可以使得面阵深度传感器快速的确定第一激光的飞行时间，并根据第一激光的飞行时间来确定目标物体的三维信息。In other words, the area array depth sensor can determine the time of flight of the first laser according to the encoding information in the first laser and the decoding information in the second laser. Specifically, the area array depth sensor determines the time of flight of the first laser according to the parameter information in the decoding information and the parameter information in the encoding information, determines the three-dimensional information of the target object according to the time of flight, and then determines the obstacle type corresponding to the target object according to the three-dimensional information. With this technical solution, the area array depth sensor can quickly determine the time of flight of the first laser and use it to determine the three-dimensional information of the target object.
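Assuming the coded emission time and the decoded reception time are available as timestamps, the flight-time and range computation sketched below reduces to a couple of lines; the nanosecond values are placeholders.

```python
C = 299_792_458.0  # speed of light in m/s

def flight_time_s(emit_time_ns, recv_time_ns):
    # Time of flight recovered from the coded emission time and decoded reception time.
    return (recv_time_ns - emit_time_ns) * 1e-9

def range_m(tof_s):
    # The laser travels to the object and back, so the one-way distance is c*t/2.
    return C * tof_s / 2.0

tof = flight_time_s(emit_time_ns=0.0, recv_time_ns=10.0)  # a 10 ns round trip
print(round(range_m(tof), 3))                             # ~1.499 m
```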
进一步地,上述根据飞行时间确定目标物体的三维信息,可选的,需要根据所述飞行时间确定所述目标物体的三维坐标;从所述三维坐标中分离出所述移动机器人所在地面的坐标信息;以所述地面的坐标信息为基准,确定所述目标物体的三维信息。Further, the above-mentioned three-dimensional information of the target object is determined according to the flight time, optionally, the three-dimensional coordinates of the target object need to be determined according to the flight time; the coordinate information of the ground where the mobile robot is located is separated from the three-dimensional coordinates ; Determine the three-dimensional information of the target object based on the coordinate information of the ground.
也就是说，可以将移动机器人所在的空间看成是一个三维的空间坐标系，随后面阵深度传感器可以根据第一激光的飞行时间来确定目标物体在空间坐标系中的三维坐标，例如目标物体的三维坐标为(X,Y,Z)，在确定了目标物体的三维坐标以后，面阵深度传感器从目标物体的三维坐标中分离出移动机器人所在地面的坐标信息，并以地面的坐标信息为基准，确定目标物体的三维信息。采用上述技术方案，可以更加精准的确定目标物体的三维信息。In other words, the space in which the mobile robot is located may be regarded as a three-dimensional spatial coordinate system. The area array depth sensor then determines the three-dimensional coordinates of the target object in this coordinate system according to the time of flight of the first laser, for example (X, Y, Z). After the three-dimensional coordinates of the target object are determined, the area array depth sensor separates the coordinate information of the ground on which the mobile robot is located from the three-dimensional coordinates of the target object, and determines the three-dimensional information of the target object with the coordinate information of the ground as a reference. With this technical solution, the three-dimensional information of the target object can be determined more accurately.
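A sketch of separating the floor and measuring the remaining object relative to it, under the assumption that the three-dimensional coordinates are available as an N×3 NumPy array in a robot-level frame with the z axis pointing up; the percentile-based floor estimate and the tolerance are illustrative choices, not the method fixed by this disclosure.

```python
import numpy as np

def object_dimensions(points, ground_tol=0.01):
    # points: (N, 3) array of (X, Y, Z) coordinates in metres, z pointing up.
    z = points[:, 2]
    ground_z = np.percentile(z, 10)              # rough estimate of the floor height
    obj = points[z > ground_z + ground_tol]      # keep everything above the floor
    if obj.size == 0:
        return None
    return {
        "length": float(np.ptp(obj[:, 0])),           # extent along the travel direction
        "width": float(np.ptp(obj[:, 1])),            # lateral extent
        "height": float(obj[:, 2].max() - ground_z),  # height above the floor
    }

rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0.2, 1.0, 200),
                         rng.uniform(-0.5, 0.5, 200),
                         np.zeros(200)])
box = np.column_stack([rng.uniform(0.60, 0.70, 200),
                       rng.uniform(-0.05, 0.05, 200),
                       rng.uniform(0.0, 0.15, 200)])
print(object_dimensions(np.vstack([floor, box])))
```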
在一个可选的实施例中,根据所述三维信息确定所述目标物体对应的障碍物类型,可以通过以下方式实现:获取预设的三维信息与障碍物类型的对应关系;从所述对应关系中确定与所述目标物体的三维信息对应的障碍物类型。In an optional embodiment, determining the obstacle type corresponding to the target object according to the three-dimensional information may be achieved in the following manner: obtaining a preset correspondence between three-dimensional information and obstacle types; Determine the obstacle type corresponding to the three-dimensional information of the target object.
需要说明的是，每一个三维信息都可以确定一个障碍物的类型，具体的，可以有一个三维信息-障碍物类型的表格，在表格中具有不同的三维信息对应的障碍物类型，面阵深度传感器在获取到目标物体的三维信息以后，会获取到三维信息-障碍物类型的表格，并从表格中确定目标物体的三维信息对应的障碍物类型，具体的，根据不同的三维信息，可以确定的障碍物类型包括：桌椅腿，墙和台阶。采用上述技术方案，可以使得面阵深度传感器快速的根据目标物体的三维信息确定目标物体的障碍物类型。It should be noted that each piece of three-dimensional information can determine one obstacle type. Specifically, there may be a table of three-dimensional information versus obstacle types, in which different pieces of three-dimensional information correspond to different obstacle types. After acquiring the three-dimensional information of the target object, the area array depth sensor obtains this table and determines from it the obstacle type corresponding to the three-dimensional information of the target object. Depending on the three-dimensional information, the obstacle types that can be determined include table and chair legs, walls, and steps. With this technical solution, the area array depth sensor can quickly determine the obstacle type of the target object from its three-dimensional information.
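One possible shape for such a correspondence table, sketched in Python; the thresholds are invented for illustration and would in practice come from the preset table mentioned above.

```python
# Invented correspondence table between 3-D extents (metres) and obstacle types;
# a real product would load its own preset table.
OBSTACLE_TABLE = [
    {"type": "table/chair leg", "max_width": 0.08, "max_length": 0.08, "min_height": 0.10},
    {"type": "step",            "max_height": 0.05},
    {"type": "wall",            "min_width": 0.60, "min_height": 0.30},
]

def lookup_obstacle_type(info):
    # info holds 'width', 'length' and 'height'; return the first entry whose
    # constraints are all satisfied, otherwise 'unknown'.
    for entry in OBSTACLE_TABLE:
        matched = True
        for key, bound in entry.items():
            if key == "type":
                continue
            limit, dim = key.split("_", 1)  # e.g. 'max', 'width'
            value = info[dim]
            matched &= (value <= bound) if limit == "max" else (value >= bound)
        if matched:
            return entry["type"]
    return "unknown"

print(lookup_obstacle_type({"width": 0.05, "length": 0.05, "height": 0.40}))  # table/chair leg
```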
进一步地,根据多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型之后,所述方法还包括:确定所述障碍物类型所对应的躲避策略;根据所述躲避策略控制所述移动机器人的行进路线,以控制所述移动机器人成功躲避所述目标物体。Further, after determining the obstacle type corresponding to the target object according to the plurality of groups of first lasers and the plurality of groups of second lasers, the method further includes: determining an avoidance strategy corresponding to the obstacle type; The avoidance strategy controls the traveling route of the mobile robot, so as to control the mobile robot to successfully avoid the target object.
可以理解的是，面阵深度传感器在确定目标物体对应的障碍物类型以后，会根据障碍物的类型选择对应的躲避策略，例如障碍物类型为墙的时候，选择退后，障碍物类型为桌椅腿的时候，选择绕开。随后根据躲避策略来控制移动机器人的行进路线，控制移动机器人成功躲避目标物体。采用此种方式，可以使得移动机器人快速的避让目标物体，降低移动机器人卡住和移动机器人滚刷缠绕发生的概率。It can be understood that, after determining the obstacle type corresponding to the target object, the area array depth sensor selects a corresponding avoidance strategy according to the obstacle type: for example, retreating when the obstacle type is a wall, and going around when the obstacle type is a table or chair leg. The travelling route of the mobile robot is then controlled according to the avoidance strategy, so that the mobile robot successfully avoids the target object. In this way, the mobile robot can quickly avoid the target object, reducing the probability of the mobile robot getting stuck and of its rolling brush becoming entangled.
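One way to express the strategy selection described above; the strategy names and motion commands are assumptions made for the sketch, not behaviour specified by the disclosure.

```python
# Assumed mapping from obstacle class to avoidance strategy, and from strategy to
# motion commands; none of these names or values come from the disclosure itself.
AVOIDANCE_STRATEGY = {
    "wall": "retreat",
    "table/chair leg": "go_around",
}

def plan_avoidance(obstacle_type):
    strategy = AVOIDANCE_STRATEGY.get(obstacle_type, "stop")
    if strategy == "retreat":
        return ["reverse 0.2 m", "turn 90 deg", "resume cleaning"]
    if strategy == "go_around":
        return ["turn 30 deg", "forward 0.3 m", "turn -30 deg", "resume cleaning"]
    return ["stop"]

print(plan_avoidance("wall"))
print(plan_avoidance("table/chair leg"))
```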
为了更好的理解，以下做具体说明：面阵深度传感器向前方呈一个面发射出多组激光，例如在1s内发出多组激光，每组激光的亮度(光强)不同，每个亮度(光线强度)对应一个时间点，这样这1s内的多组激光就携带了编码信息，该编码信息可以表征激光的飞行时间，从物体上反射回的激光具有对应的解码信息，对于具有相同亮度(光线强度)的激光，可计算激光来回反射所用的时间，进一步，根据光飞行的时间计算光源至物体的距离，将物体的各个点至光源的距离结合，则可以得到物体的三维信息。For better understanding, a specific description follows: the area array depth sensor emits multiple groups of lasers forward as one surface, for example multiple groups within 1 s, with each group having a different brightness (light intensity) and each brightness corresponding to a time point. In this way the multiple groups of lasers within that 1 s carry encoding information that can characterize the time of flight of the laser, and the laser reflected back from the object carries corresponding decoding information. For lasers having the same brightness (light intensity), the round-trip time of the laser can be calculated; the distance from the light source to the object is then calculated from the flight time, and by combining the distances from the individual points of the object to the light source, the three-dimensional information of the object is obtained.
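A toy version of the brightness-to-time coding described in this paragraph, assuming ten discrete brightness levels spread over the 1 s burst; the level spacing and timing values are illustrative only.

```python
import numpy as np

# Toy codebook: ten brightness levels within a 1 s burst, each tied to one
# emission time slot, so the brightness of a return identifies when it was emitted.
LEVELS = np.arange(1, 11)                                  # assumed brightness levels
EMIT_TIME_S = {int(lvl): (int(lvl) - 1) * 0.1 for lvl in LEVELS}

def decode_emit_time(measured_brightness):
    # Quantise the measured brightness to the nearest coded level, then look up
    # the emission time that level encodes.
    level = int(LEVELS[np.argmin(np.abs(LEVELS - measured_brightness))])
    return EMIT_TIME_S[level]

recv_time_s = 0.3000000101         # reception time of a return measured near level 4
round_trip_s = recv_time_s - decode_emit_time(3.9)
print(f"{round_trip_s:.1e} s")     # ~1.0e-08 s round trip
```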
显然,上述所描述的实施例仅仅是本公开一部分的实施例,而不是全部的实施例。为了更好的理解上述目标物体的确定方法,以下结合实施例对上述过程进行说明,但不用于限定本公开实施例的技术方案,具体地:Apparently, the embodiments described above are only some of the embodiments of the present disclosure, not all of them. In order to better understand the method for determining the above-mentioned target object, the above-mentioned process will be described below in conjunction with the embodiments, but it is not used to limit the technical solutions of the embodiments of the present disclosure, specifically:
在一个可选的实施例中，提出一种利用面阵TOF深度传感器识别障碍物的方法，具体的，面阵深度传感器的激光发射器朝前安装，面阵深度传感器的激光接收器安装在激光发射器相邻侧边，两者组成面阵TOF深度传感器。激光发射器发射的主动光在被物体反射后被激光接收器接收，面阵TOF深度传感器计算所发射激光的飞行时间，根据飞行时间和激光接收器的标定参数获取到面阵TOF深度传感器视野内的物体的三维坐标。根据三维点信息（相当于上述实施例中的三维信息）首先分离出属于扫地机器人（相当于上述实施例中的移动机器人）所处平面的地面信息，以此地面信息为基准，加上三维点宽度、长度、高度，对感知范围内的物体进行分类，并判断是否属于障碍物。In an optional embodiment, a method for identifying obstacles using an area array TOF depth sensor is proposed. Specifically, the laser emitter of the area array depth sensor is installed facing forward, and the laser receiver of the area array depth sensor is installed on the side adjacent to the laser emitter; together they form the area array TOF depth sensor. The active light emitted by the laser emitter is received by the laser receiver after being reflected by an object; the area array TOF depth sensor calculates the time of flight of the emitted laser, and obtains the three-dimensional coordinates of objects within the sensor's field of view according to the time of flight and the calibration parameters of the laser receiver. Based on the three-dimensional point information (corresponding to the three-dimensional information in the above embodiments), the ground information belonging to the plane on which the sweeping robot (corresponding to the mobile robot in the above embodiments) is located is first separated; with this ground information as a reference, together with the width, length, and height of the three-dimensional points, objects within the sensing range are classified and it is judged whether they are obstacles.
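The back-projection from a measured depth frame to three-dimensional coordinates can be sketched with pinhole intrinsics standing in for the receiver calibration parameters mentioned above; the resolution and intrinsic values below are assumptions.

```python
import numpy as np

def depth_frame_to_points(depth, fx, fy, cx, cy):
    # Back-project an H x W depth frame (metres) into 3-D coordinates using
    # pinhole intrinsics, standing in for the receiver calibration parameters.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((60, 80), 1.5)                     # toy frame: everything at 1.5 m
points = depth_frame_to_points(depth, fx=90.0, fy=90.0, cx=40.0, cy=30.0)
print(points.shape)                                # (4800, 3)
```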
图3是根据本公开实施例的一种目标物体的确定方法的检测障碍物的示意图（一），图4是根据本公开实施例的一种目标物体的确定方法的检测障碍物的示意图（二），图5是根据本公开实施例的一种目标物体的确定方法的检测障碍物的示意图（三），如图3-5所示，圆柱为使用面阵TOF传感器的设备（相当于上述实施例中的移动机器人），圆柱和长方体的中间为TOF的测量感知范围，长方体为设备前方的障碍物：FIG. 3 is a schematic diagram (1) of detecting an obstacle in a method for determining a target object according to an embodiment of the present disclosure, FIG. 4 is a schematic diagram (2) thereof, and FIG. 5 is a schematic diagram (3) thereof. As shown in FIGS. 3-5, the cylinder is a device using an area array TOF sensor (corresponding to the mobile robot in the above embodiments), the region between the cylinder and the cuboid is the measurement sensing range of the TOF sensor, and the cuboid is an obstacle in front of the device:
如图3所示,根据测量感知到的物体的宽度、长度、高度属性,可以将物体分类为桌椅腿;As shown in Figure 3, according to the width, length, and height attributes of the object perceived by the measurement, the object can be classified as a table and chair leg;
如图4所示,根据测量感知到的物体的宽度、长度、高度属性,可以将物体分类为墙;As shown in Figure 4, according to the width, length and height attributes of the object perceived by the measurement, the object can be classified as a wall;
如图5所示,根据测量感知到的物体的宽度、长度、高度属性,可以将物体分类为台阶;根据不同的分类,设备可以采用不同的避障或者越障策略。As shown in Figure 5, according to the width, length, and height attributes of the measured object, the object can be classified into steps; according to different classifications, the device can adopt different obstacle avoidance or obstacle surmounting strategies.
需要说明的是,面阵TOF深度传感器,包括使用了红外接收器或者红外接收器+RGB接收器的传感器,该传感器使用了包括直接测量飞行时间和间接测量飞行时间的技术。It should be noted that the area array TOF depth sensor includes sensors using infrared receivers or infrared receivers + RGB receivers, which use technologies including direct measurement of time of flight and indirect measurement of time of flight.
此外,本公开实施例的上述技术方案,利用面阵TOF深度传感器识别障碍物的方法,可以识别各类障碍物,当移动机器人识别出电源线等障碍物,可以防止缠绕和碰撞。当识别出桌椅腿,可以用更近的距离去避开桌椅腿,同时保证通过性。当识别出越障能力内的上台阶、地毯、移门滑轨等可以越过的物体,防止漏扫。In addition, the above-mentioned technical solutions of the embodiments of the present disclosure can identify various obstacles by using the area array TOF depth sensor to identify obstacles. When the mobile robot recognizes obstacles such as power lines, it can prevent entanglement and collision. When the table and chair legs are identified, a closer distance can be used to avoid the table and chair legs while ensuring passability. When objects that can be crossed such as upper steps, carpets, and sliding door slide rails within the obstacle-crossing capability are identified, it is prevented from missing scanning.
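A sketch of how the recognised class might feed clearance and crossing decisions of the kind described here; every numeric value and class name below is an assumption made for illustration.

```python
# Illustrative clearance margins and crossing decision keyed on the recognised
# class; every numeric value and class name here is an assumption.
CLEARANCE_M = {"power cord": 0.10, "table/chair leg": 0.02, "wall": 0.05}
MAX_CROSSABLE_HEIGHT_M = 0.02      # assumed obstacle-crossing capability

def plan_action(obstacle_type, height_m):
    crossable = obstacle_type in ("step", "carpet", "sliding-door rail")
    if crossable and height_m <= MAX_CROSSABLE_HEIGHT_M:
        return "cross"                              # low enough to drive over
    margin = CLEARANCE_M.get(obstacle_type, 0.05)
    return f"avoid with {margin:.2f} m clearance"

print(plan_action("sliding-door rail", 0.015))      # cross
print(plan_action("table/chair leg", 0.40))         # avoid with 0.02 m clearance
```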
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如只读存储器(Read-Only Memory,简称为ROM)、随机存取存储器(Random Access Memory,简称为RAM)、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。Through the description of the above embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is better implementation. Based on this understanding, the technical solution of the present application can be embodied in the form of a software product in essence or the part that contributes to the prior art, and the computer software product is stored in a storage medium (such as a read-only memory (Read-Only Memory, abbreviated as ROM), random access memory (Random Access Memory, abbreviated as RAM), magnetic disk, optical disk), including several instructions to make a terminal device (which can be a mobile phone, computer, server, or network device) etc.) to implement the methods described in the various embodiments of the present application.
本公开还提供一种移动机器人,图6为根据本公开实施例的一种移动机器人的结构框图,包括:The present disclosure also provides a mobile robot. FIG. 6 is a structural block diagram of a mobile robot according to an embodiment of the present disclosure, including:
面阵深度传感器62,设置在所述移动机器人的前侧,用于使用激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光,其中,所述激光面板设置在所述移动机器人的前侧,所述前侧用于指示所述移动机器人在行进过程中的最前方,所述多组第一激光中的任意两组第一激光具有不同光线强度;The area array depth sensor 62 is arranged on the front side of the mobile robot, and is used to use a laser panel to emit multiple groups of first lasers to target objects located in the direction of travel of the mobile robot, wherein the laser panel is arranged on the mobile robot. The front side of the robot, the front side is used to indicate the frontmost part of the mobile robot during travel, and any two groups of first lasers in the multiple groups of first lasers have different light intensities;
处理器64,设置在所述移动机器人内,与所述面阵深度传感器连接,或所述处理器位于所述面阵深度传感器内,用于接收所述激光面板发送的多组第二激光;根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型,其中,所述多组第二激光为所述多组第一激光从所述目标物体上反射回后,所述激光面板接收到的。The processor 64 is arranged in the mobile robot and connected to the area array depth sensor, or the processor is located in the area array depth sensor, and is used to receive multiple groups of second laser light sent by the laser panel; Determine the obstacle type corresponding to the target object according to the multiple sets of first lasers and the multiple sets of second lasers, wherein the multiple sets of second lasers are the multiple sets of first lasers from the target object After reflection back, the laser panel receives it.
通过本公开,在移动机器人行进的过程中,控制面阵深度传感器62利用激光面板对移动机器人行进方向上的目标物体发射多组第一激光,进而处理器64接收激光面板发送的多 组第二激光,根据多组第一激光与多组第二激光确定目标物体对应的障碍物类型。采用上述技术方案,解决了传统方法在检测扫地机器人前方物体的过程中,检测准确率较低的问题。进而通过控制面阵深度传感器对物体进行检测,提高了检测的准确率。Through the present disclosure, during the moving process of the mobile robot, the control area array depth sensor 62 uses the laser panel to emit multiple sets of first lasers to the target objects in the direction of the mobile robot, and then the processor 64 receives multiple sets of second laser beams sent by the laser panel. The laser is used to determine the obstacle type corresponding to the target object according to multiple sets of first lasers and multiple sets of second lasers. By adopting the above technical solution, the problem of low detection accuracy in the process of detecting objects in front of the sweeping robot in the traditional method is solved. Furthermore, by controlling the area array depth sensor to detect the object, the detection accuracy is improved.
在本公开实施例中,面阵深度传感器的激光面板的安装位置优选是移动机器人在行进过程中的最前方,也可以安装在移动机器人的其他位置,例如,激光面板安装在移动机器人的上表面,以便能够对移动机器人的前方物体进行检测。In the embodiment of the present disclosure, the installation position of the laser panel of the area array depth sensor is preferably at the forefront of the mobile robot during travel, and it can also be installed at other positions of the mobile robot, for example, the laser panel is installed on the upper surface of the mobile robot , in order to be able to detect objects in front of the mobile robot.
需要说明的是,处理器64还用于从所述多组第一激光与所述多组第二激光中确定具有相同光线强度的第一激光与第二激光;其中,所述第一激光中携带有编码信息,所述第二激光中携带有解码信息,根据所述第一激光的编码信息和所述第二激光的解码信息确定所述目标物体对应的障碍物类型。It should be noted that the processor 64 is also used to determine the first laser light and the second laser light with the same light intensity from the multiple sets of first laser light and the multiple sets of second laser light; The second laser carries encoding information, and the second laser carries decoding information, and the obstacle type corresponding to the target object is determined according to the encoding information of the first laser and the decoding information of the second laser.
需要说明的是,本申请实施例中,面阵深度传感器向前方呈一个面发射出多组第一激光,例如在1s内发射多组第一激光,每组激光的亮度(光线强度)不同。由于激光在反射的过程中,激光的光线强度不会发生变化,进而可以从多组第一激光和多组第二激光中确定具有相同光线强度的第一激光和第二激光,并且在面阵深度传感器发射第一激光的时候,第一激光中携带有编码信息,其中,编码信息为面阵深度传感器预先设置好的,编码信息内部具有参数信息,在第一激光被目标物体反射以后,编码信息内部的参数信息会发生变化,为了更好的理解,此处将变化后的编码信息定义为解码信息,进而可以根据第一激光的编码信息和第二激光的解码信息确定目标物体的障碍物类型。在一个可选的实施例中。需要说明的是,面阵TOF深度传感器,包括使用了红外接收器或者红外接收器+RGB接收器的传感器,该传感器使用了包括直接测量飞行时间和间接测量飞行时间的技术。It should be noted that, in the embodiment of the present application, the area array depth sensor emits multiple sets of first laser light forward on one surface, for example, emits multiple sets of first laser light within 1 second, and the brightness (light intensity) of each set of laser light is different. Since the light intensity of the laser light does not change during the reflection process of the laser light, the first laser light and the second laser light with the same light intensity can be determined from multiple groups of first laser light and multiple sets of second laser light, and in the area array When the depth sensor emits the first laser, the first laser carries coded information. The coded information is pre-set for the area array depth sensor. The coded information contains parameter information. After the first laser is reflected by the target object, the coded information The parameter information inside the information will change. For a better understanding, the changed coded information is defined as the decoded information here, and then the obstacle of the target object can be determined according to the coded information of the first laser and the decoded information of the second laser. type. In an optional embodiment. It should be noted that the area array TOF depth sensor includes sensors using infrared receivers or infrared receivers + RGB receivers, which use technologies including direct measurement of time of flight and indirect measurement of time of flight.
在一个可选的实施例中,处理器64还用于根据所述第一激光中的编码信息和所述第二激光中的解码信息确定所述第一激光的飞行时间;根据所述飞行时间确定所述目标物体的三维信息,并根据所述三维信息确定标物体对应的障碍物类型,其中,三维信息包括以下至少之一:所述目标物体的高度信息,所述目标物体的长度信息,所述目标物体的宽度信息。In an optional embodiment, the processor 64 is further configured to determine the time-of-flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser; according to the time-of-flight determining the three-dimensional information of the target object, and determining the obstacle type corresponding to the target object according to the three-dimensional information, wherein the three-dimensional information includes at least one of the following: height information of the target object, length information of the target object, Width information of the target object.
也就是说,面阵深度传感器可以根据第一激光中的编码信息和第二激光中的解码信息来确定第一激光的飞行时间。具体的,面阵深度传感器可以根据解码信息中的参数信息和编码信息中的参数信息来确定第一激光的飞行时间,并根据第一激光的飞行时间来确定目标物体的三维信息,进而根据三维信息确定目标物体对应的障碍物类型。采用上述技术方案,可以使得面阵深度传感器快速的确定第一激光的飞行时间,并根据第一激光的飞行时间来确定目标物体的三维信息。That is to say, the area array depth sensor can determine the time-of-flight of the first laser according to the encoded information in the first laser and the decoded information in the second laser. Specifically, the area array depth sensor can determine the flight time of the first laser according to the parameter information in the decoded information and the parameter information in the encoded information, and determine the three-dimensional information of the target object according to the flight time of the first laser, and then according to the three-dimensional The information determines the obstacle type corresponding to the target object. By adopting the above technical solution, the area array depth sensor can quickly determine the flight time of the first laser, and determine the three-dimensional information of the target object according to the flight time of the first laser.
进一步地,处理器64还用于根据所述飞行时间确定所述目标物体的三维坐标;从所述 三维坐标中分离出所述移动机器人所在地面的坐标信息;以所述地面的坐标信息为基准,确定所述目标物体的三维信息。Further, the processor 64 is also used to determine the three-dimensional coordinates of the target object according to the time-of-flight; separate the coordinate information of the ground where the mobile robot is located from the three-dimensional coordinates; take the coordinate information of the ground as a reference , to determine the three-dimensional information of the target object.
也就是说,可以将移动机器人所在的空间看成是一个三维的空间坐标系,随后面阵深度传感器可以根据第一激光的飞行时间来确定目标物体在空间坐标系中的三维坐标,例如目标物体的三维坐标为(X,Y,Z),在确定了目标物体的三维坐标以后,面阵深度传感器从目标物体的三维坐标中分离出移动机器人所在地面的坐标信息,并以地面的坐标信息为基准,确定目标物体的三维信息。采用上述技术方案,可以更加精准的确定目标物体的三维信息。That is to say, the space where the mobile robot is located can be regarded as a three-dimensional space coordinate system, and then the area array depth sensor can determine the three-dimensional coordinates of the target object in the space coordinate system according to the flight time of the first laser, such as the target object The three-dimensional coordinates of the target object are (X, Y, Z). After determining the three-dimensional coordinates of the target object, the area array depth sensor separates the coordinate information of the ground where the mobile robot is located from the three-dimensional coordinates of the target object, and uses the coordinate information of the ground as Datum, to determine the three-dimensional information of the target object. By adopting the above technical solution, the three-dimensional information of the target object can be determined more accurately.
在一个可选的实施例中,处理器64还用于获取预设的三维信息与障碍物类型的对应关系;从所述对应关系中确定与所述目标物体的三维信息对应的障碍物类型。In an optional embodiment, the processor 64 is further configured to acquire a preset correspondence between three-dimensional information and obstacle types; and determine an obstacle type corresponding to the three-dimensional information of the target object from the correspondence.
需要说明的是,每一个三维信息都可以确定一个障碍物的类型,具体的,可以有一个三维信息-障碍物类型的表格,在表格中具有不同的三维信息对应的障碍物类型,面阵深度传感器在获取到目标物体的三维信息以后,会获取到三维信息-障碍物类型的表格,并从表格中确定目标物体的三维信息对应的障碍物类型,具体的,根据不同的三维信息,可以确定的障碍物类型包括:桌椅腿,墙和台阶。采用上述技术方案,可以使得面阵深度传感器快速的根据目标物体的三维信息确定目标物体的障碍物类型。It should be noted that each three-dimensional information can determine the type of an obstacle. Specifically, there can be a three-dimensional information-obstacle type table, which has different three-dimensional information corresponding to the obstacle type, array depth After the sensor acquires the three-dimensional information of the target object, it will obtain the three-dimensional information-obstacle type table, and determine the obstacle type corresponding to the three-dimensional information of the target object from the table. Specifically, according to different three-dimensional information, it can be determined The types of obstacles include: table and chair legs, walls and steps. By adopting the above technical solution, the area array depth sensor can quickly determine the obstacle type of the target object according to the three-dimensional information of the target object.
进一步地,处理器64还用于确定所述障碍物类型所对应的躲避策略;根据所述躲避策略控制所述移动机器人的行进路线,以控制所述移动机器人成功躲避所述目标物体。Further, the processor 64 is also configured to determine an avoidance strategy corresponding to the obstacle type; and control the traveling route of the mobile robot according to the avoidance strategy, so as to control the mobile robot to successfully avoid the target object.
可以理解的是,面阵深度传感器在确定目标物体对应的障碍物类型以后,会根据障碍物的类型选择对应的躲避策略,例如障碍物类型为墙的时候,选择退后,障碍物类型为桌椅腿的时候,选择绕开。随后根据躲避策略来控制移动机器人的行进路线,控制移动机器人成功躲避目标物体。采用此种方式,可以使得移动机器人快速的避让目标物体,降低移动机器人卡住和移动机器人滚刷缠绕发生的概率。It can be understood that after the area array depth sensor determines the type of obstacle corresponding to the target object, it will select the corresponding avoidance strategy according to the type of obstacle. For chair legs, choose to bypass. Then, the route of the mobile robot is controlled according to the avoidance strategy, and the mobile robot is controlled to successfully avoid the target object. In this manner, the mobile robot can quickly avoid the target object, reducing the probability of the mobile robot getting stuck and the rolling brush of the mobile robot being entangled.
在本实施例中还提供了一种目标对象的检测装置,该目标对象的检测装置用于实现上述实施例及优选实施方式,已经进行过说明的不再赘述。如以下所使用的,术语“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。In this embodiment, a detection device for a target object is also provided, and the detection device for a target object is used to realize the above-mentioned embodiments and preferred implementation modes, and what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware are also possible and contemplated.
图7为根据本公开实施例的一种目标物体的确定装置的结构框图,如图7所示:Fig. 7 is a structural block diagram of a device for determining a target object according to an embodiment of the present disclosure, as shown in Fig. 7 :
发送模块72,用于控制面阵深度传感器的激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光,其中,所述激光面板设置在所述移动机器人的前侧,所述前侧用于指示所述移动机器人在行进过程中的最前方,所述多组第一激光中的任意两组第一激光具有不同光线强度;The sending module 72 is used to control the laser panel of the area array depth sensor to emit multiple groups of first lasers to the target object located in the direction of travel of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, and the The front side is used to indicate the frontmost part of the mobile robot during travel, and any two groups of first lasers in the plurality of groups of first lasers have different light intensities;
接收模块74,用于通过所述激光面板接收所述多组第一激光从所述目标物体上反射回的多组第二激光;The receiving module 74 is configured to receive multiple sets of second laser light reflected from the target object by the multiple sets of first laser beams through the laser panel;
确定模块76,用于根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型。The determination module 76 is configured to determine the obstacle type corresponding to the target object according to the multiple sets of first laser light and the multiple sets of second laser light.
通过上述模块,在移动机器人行进的过程中,控制面阵深度传感器的激光面板对移动机器人行进方向上的目标物体发射多组第一激光,并通过激光面板接收多组第一激光从目标物体上反射回的多组第二激光,进而根据多组第一激光与多组第二激光确定目标物体对应的障碍物类型。采用上述技术方案,解决了传统方法在检测扫地机器人前方物体的过程中,检测准确率较低的问题。进而通过控制面阵深度传感器对物体进行检测,提高了检测的准确率。Through the above modules, in the process of moving the mobile robot, the laser panel controlling the area array depth sensor emits multiple sets of first lasers to the target object in the direction of the mobile robot, and receives multiple sets of first lasers from the target object through the laser panel The reflected multiple sets of second lasers are used to determine the obstacle type corresponding to the target object according to the multiple sets of first lasers and multiple sets of second lasers. By adopting the above technical solution, the problem of low detection accuracy in the process of detecting objects in front of the sweeping robot in the traditional method is solved. Furthermore, by controlling the area array depth sensor to detect the object, the detection accuracy is improved.
需要说明的是,上述各个模块是可以通过软件或硬件来实现的,对于后者,可以通过以下方式实现,但不限于此:上述模块均位于同一处理器中;或者,上述各个模块以任意组合的形式分别位于不同的处理器中。It should be noted that the above-mentioned modules can be realized by software or hardware. For the latter, it can be realized by the following methods, but not limited to this: the above-mentioned modules are all located in the same processor; or, the above-mentioned modules can be combined in any combination The forms of are located in different processors.
本公开的实施例还提供了一种计算机可读的存储介质,该存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。Embodiments of the present disclosure also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:Optionally, in this embodiment, the above-mentioned storage medium may be configured to store a computer program for performing the following steps:
S1,控制面阵深度传感器的激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光,其中,所述激光面板设置在所述移动机器人的前侧,所述前侧用于指示所述移动机器人在行进过程中的最前方,所述多组第一激光中的任意两组第一激光具有不同光线强度;S1, controlling the laser panel of the area array depth sensor to emit multiple groups of first lasers to the target object located in the direction of travel of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, and the front side is used for Indicating that the mobile robot is at the forefront during the traveling process, and any two groups of first lasers in the plurality of groups of first lasers have different light intensities;
S2,通过所述激光面板接收所述多组第一激光从所述目标物体上反射回的多组第二激光;S2, receiving multiple sets of second laser light reflected from the target object by the multiple sets of first laser light through the laser panel;
S3,根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型。S3. Determine an obstacle type corresponding to the target object according to the multiple sets of first laser light and the multiple sets of second laser light.
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器ROM、随机存取存储器RAM、移动硬盘、磁碟或者光盘等各种可以存储计算机程序的介质。Optionally, in this embodiment, the above-mentioned storage medium may include but not limited to: various media capable of storing computer programs such as USB flash drive, read-only memory ROM, random access memory RAM, mobile hard disk, magnetic disk or optical disk.
本公开的实施例还提供了一种电子装置,包括存储器和处理器,该存储器中存储有计算机程序,该处理器被设置为运行计算机程序以执行上述任一项方法实施例中的步骤。Embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
可选地,上述电子装置还可以包括传输设备以及输入输出设备,其中,该传输设备和上述处理器连接,该输入输出设备和上述处理器连接。Optionally, the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:Optionally, in this embodiment, the above-mentioned processor may be configured to execute the following steps through a computer program:
S1,控制面阵深度传感器的激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光,其中,所述激光面板设置在所述移动机器人的前侧,所述前侧用于指示所述移 动机器人在行进过程中的最前方,所述多组第一激光中的任意两组第一激光具有不同光线强度;S1, controlling the laser panel of the area array depth sensor to emit multiple groups of first lasers to the target object located in the direction of travel of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, and the front side is used for Indicating that the mobile robot is at the forefront during the traveling process, and any two groups of first lasers in the plurality of groups of first lasers have different light intensities;
S2,通过所述激光面板接收所述多组第一激光从所述目标物体上反射回的多组第二激光;S2, receiving multiple sets of second laser light reflected from the target object by the multiple sets of first laser light through the laser panel;
S3,根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型。S3. Determine an obstacle type corresponding to the target object according to the multiple sets of first laser light and the multiple sets of second laser light.
可选地,本实施例中的具体示例可以参考上述实施例及可选实施方式中所描述的示例,本实施例在此不再赘述。Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementation manners, and details are not repeated in this embodiment.
本公开的实施例还提供了一种机器人,包括主体、运动组件及控制器,该控制器被设置为执行上述任一项方法实施例中的步骤。Embodiments of the present disclosure also provide a robot, including a main body, a motion component, and a controller, where the controller is configured to execute the steps in any one of the above method embodiments.
显然,本领域的技术人员应该明白,上述的本公开的各模块或各步骤可以用通用的计算装置来实现,它们可以集中在单个的计算装置上,或者分布在多个计算装置所组成的网络上,可选地,它们可以用计算装置可执行的程序代码来实现,从而,可以将它们存储在存储装置中由计算装置来执行,并且在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤,或者将它们分别制作成各个集成电路模块,或者将它们中的多个模块或步骤制作成单个集成电路模块来实现。这样,本公开不限制于任何特定的硬件和软件结合。Obviously, those skilled in the art should understand that each module or each step of the above-mentioned disclosure can be realized by a general-purpose computing device, and they can be concentrated on a single computing device, or distributed in a network composed of multiple computing devices Alternatively, they may be implemented in program code executable by a computing device so that they may be stored in a storage device to be executed by a computing device, and in some cases in an order different from that shown here The steps shown or described are carried out, or they are separately fabricated into individual integrated circuit modules, or multiple modules or steps among them are fabricated into a single integrated circuit module for implementation. As such, the present disclosure is not limited to any specific combination of hardware and software.
以上所述仅为本公开的优选实施例而已,并不用于限制本公开,对于本领域的技术人员来说,本公开可以有各种更改和变化。凡在本公开的原则之内,所作的任何修改、等同替换、改进等,均应包含在本公开的保护范围之内。The above descriptions are only preferred embodiments of the present disclosure, and are not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the principle of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (10)

  1. 一种目标物体的确定方法,其特征在于,包括:A method for determining a target object, comprising:
    控制面阵深度传感器的激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光，其中，所述激光面板设置在所述移动机器人的前侧，所述前侧用于指示所述移动机器人在行进过程中的最前方，所述多组第一激光中的任意两组第一激光具有不同光线强度；controlling a laser panel of an area array depth sensor to emit multiple groups of first lasers toward a target object located in a travelling direction of a mobile robot, wherein the laser panel is arranged on a front side of the mobile robot, the front side indicates the frontmost part of the mobile robot during travel, and any two of the multiple groups of first lasers have different light intensities;
    通过所述激光面板接收所述多组第一激光从所述目标物体上反射回的多组第二激光;receiving multiple sets of second laser light reflected from the target object by the multiple sets of first laser beams through the laser panel;
    根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型。The obstacle type corresponding to the target object is determined according to the multiple sets of first lasers and the multiple sets of second lasers.
  2. 根据权利要求1所述的目标物体的确定方法,其中,根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型,包括:The method for determining a target object according to claim 1, wherein determining the obstacle type corresponding to the target object according to the plurality of groups of first lasers and the plurality of groups of second lasers includes:
    从所述多组第一激光与所述多组第二激光中确定具有相同光线强度的第一激光与第二激光;其中,所述第一激光中携带有编码信息,所述第二激光中携带有解码信息;Determining a first laser and a second laser with the same light intensity from the multiple groups of the first laser and the multiple groups of the second laser; wherein, the first laser carries coded information, and the second laser contains coded information carry decoding information;
    根据所述第一激光的编码信息和所述第二激光的解码信息确定所述目标物体对应的障碍物类型。The obstacle type corresponding to the target object is determined according to the encoding information of the first laser and the decoding information of the second laser.
  3. 根据权利要求2所述的目标物体的确定方法,其中,根据所述第一激光的编码信息和所述第二激光的解码信息确定所述目标物体对应的障碍物类型,包括:The method for determining a target object according to claim 2, wherein determining the obstacle type corresponding to the target object according to the encoding information of the first laser and the decoding information of the second laser comprises:
    根据所述第一激光中的编码信息和所述第二激光中的解码信息确定所述第一激光的飞行时间;determining a time-of-flight of the first laser based on encoded information in the first laser and decoded information in the second laser;
    根据所述飞行时间确定所述目标物体的三维信息,并根据所述三维信息确定所述目标物体对应的障碍物类型,其中,三维信息包括以下至少之一:所述目标物体的高度信息,所述目标物体的长度信息,所述目标物体的宽度信息。Determine the three-dimensional information of the target object according to the flight time, and determine the obstacle type corresponding to the target object according to the three-dimensional information, wherein the three-dimensional information includes at least one of the following: height information of the target object, the The length information of the target object and the width information of the target object.
  4. 根据权利要求3所述的目标物体的确定方法,其中,根据所述飞行时间确定所述目标物体的三维信息,包括:The method for determining a target object according to claim 3, wherein determining the three-dimensional information of the target object according to the time-of-flight comprises:
    根据所述飞行时间确定所述目标物体的三维坐标;determining the three-dimensional coordinates of the target object according to the time-of-flight;
    从所述三维坐标中分离出所述移动机器人所在地面的坐标信息;separating the coordinate information of the ground where the mobile robot is located from the three-dimensional coordinates;
    以所述地面的坐标信息为基准,确定所述目标物体的三维信息。The three-dimensional information of the target object is determined based on the coordinate information of the ground.
  5. 根据权利要求3所述的目标物体的确定方法,其中,根据所述三维信息确定所述目标物体对应的障碍物类型,包括:The method for determining a target object according to claim 3, wherein determining the obstacle type corresponding to the target object according to the three-dimensional information includes:
    获取预设的三维信息与障碍物类型的对应关系;Obtain the correspondence between preset three-dimensional information and obstacle types;
    从所述对应关系中确定与所述目标物体的三维信息对应的障碍物类型。The obstacle type corresponding to the three-dimensional information of the target object is determined from the corresponding relationship.
  6. 根据权利要求1所述的目标物体的确定方法,其中,根据多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型之后,所述方法还包括:The method for determining a target object according to claim 1, wherein, after determining the obstacle type corresponding to the target object according to the plurality of groups of first lasers and the plurality of groups of second lasers, the method further comprises:
    确定所述障碍物类型所对应的躲避策略;determining an avoidance strategy corresponding to the obstacle type;
    根据所述躲避策略控制所述移动机器人的行进路线,以控制所述移动机器人成功躲避所述目标物体。The traveling route of the mobile robot is controlled according to the avoidance strategy, so as to control the mobile robot to successfully avoid the target object.
  7. 一种移动机器人,其中,包括:A mobile robot, including:
    面阵深度传感器，设置在所述移动机器人的前侧，用于使用激光面板对位于移动机器人的行进方向上的目标物体发射多组第一激光，其中，所述激光面板设置在所述移动机器人的前侧，所述前侧用于指示所述移动机器人在行进过程中的最前方，所述多组第一激光中的任意两组第一激光具有不同光线强度；an area array depth sensor, arranged on the front side of the mobile robot and configured to use a laser panel to emit multiple groups of first lasers toward a target object located in the travelling direction of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side indicates the frontmost part of the mobile robot during travel, and any two of the multiple groups of first lasers have different light intensities;
    处理器，设置在所述移动机器人内，与所述面阵深度传感器连接，或所述处理器位于所述面阵深度传感器内，用于接收所述激光面板发送的多组第二激光；根据所述多组第一激光与所述多组第二激光确定所述目标物体对应的障碍物类型，其中，所述多组第二激光为所述多组第一激光从所述目标物体上反射回后，所述激光面板接收到的。a processor, arranged in the mobile robot and connected to the area array depth sensor, or located in the area array depth sensor, and configured to receive multiple groups of second lasers sent by the laser panel and to determine the obstacle type corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers, wherein the multiple groups of second lasers are received by the laser panel after the multiple groups of first lasers are reflected back from the target object.
  8. 根据权利要求7所述的移动机器人,其中,所述处理器还用于从所述多组第一激光与所述多组第二激光中确定具有相同光线强度的第一激光与第二激光;其中,所述第一激光中携带有编码信息,所述第二激光中携带有解码信息。The mobile robot according to claim 7, wherein the processor is further configured to determine the first laser light and the second laser light having the same light intensity from the multiple sets of first laser light and the multiple sets of second laser light; Wherein, the first laser light carries coding information, and the second laser light carries decoding information.
  9. 一种计算机可读的存储介质,其中,所述计算机可读的存储介质包括存储的程序,其中,所述程序运行时执行上述权利要求1至6任一项中所述的方法。A computer-readable storage medium, wherein the computer-readable storage medium includes a stored program, wherein the program executes the method described in any one of claims 1 to 6 when running.
  10. 一种电子装置,包括存储器和处理器,其中,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行所述权利要求1至6任一项中所述的方法。An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to execute the method described in any one of claims 1 to 6 through the computer program .
PCT/CN2022/113312 2021-09-23 2022-08-18 Method for determining target object, mobile robot, storage medium, and electronic apparatus WO2023045639A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111116319.7 2021-09-23
CN202111116319.7A CN113848902A (en) 2021-09-23 2021-09-23 Target object determination method, mobile robot, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2023045639A1 true WO2023045639A1 (en) 2023-03-30

Family

ID=78979014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113312 WO2023045639A1 (en) 2021-09-23 2022-08-18 Method for determining target object, mobile robot, storage medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN113848902A (en)
WO (1) WO2023045639A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848902A (en) * 2021-09-23 2021-12-28 追觅创新科技(苏州)有限公司 Target object determination method, mobile robot, storage medium, and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130204483A1 (en) * 2012-02-04 2013-08-08 Chulmo Sung Robot cleaner
CN105866790A (en) * 2016-04-07 2016-08-17 重庆大学 Laser radar barrier identification method and system taking laser emission intensity into consideration
CN110916562A (en) * 2018-09-18 2020-03-27 科沃斯机器人股份有限公司 Autonomous mobile device, control method, and storage medium
CN110622085A (en) * 2019-08-14 2019-12-27 珊口(深圳)智能科技有限公司 Mobile robot and control method and control system thereof
CN112749643A (en) * 2020-12-30 2021-05-04 深圳市欢创科技有限公司 Obstacle detection method, device and system
CN113848902A (en) * 2021-09-23 2021-12-28 追觅创新科技(苏州)有限公司 Target object determination method, mobile robot, storage medium, and electronic device

Also Published As

Publication number Publication date
CN113848902A (en) 2021-12-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871697

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE