WO2022121392A1 - Parking control method, control system, mobile robot and storage medium - Google Patents

Parking control method, control system, mobile robot and storage medium

Info

Publication number
WO2022121392A1
WO2022121392A1 (PCT/CN2021/116331)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
parking
image data
dimensional image
parking device
Prior art date
Application number
PCT/CN2021/116331
Other languages
English (en)
French (fr)
Inventor
李重兴
李磊
Original Assignee
深圳阿科伯特机器人有限公司
上海阿科伯特机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳阿科伯特机器人有限公司 and 上海阿科伯特机器人有限公司
Publication of WO2022121392A1 publication Critical patent/WO2022121392A1/zh

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 … with means for defining a desired trajectory
    • G05D1/0221 … involving a learning process
    • G05D1/0223 … involving speed control of the vehicle
    • G05D1/0225 … involving docking at a fixed facility, e.g. base station or loading bay
    • G05D1/0231 … using optical position detecting means
    • G05D1/0234 … using optical markers or beacons
    • G05D1/0236 … using optical markers or beacons in combination with a laser
    • G05D1/0238 … using obstacle or wall sensors
    • G05D1/024 … using obstacle or wall sensors in combination with a laser
    • G05D1/0242 … using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 … using a video camera in combination with image processing means
    • G05D1/0251 … extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0255 … using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257 … using a radar
    • G05D1/0259 … using magnetic or electromagnetic means
    • G05D1/0276 … using signals provided by a source external to the vehicle
    (Each "…" stands for the shared parent text "Control of position or course in two dimensions specially adapted to land vehicles".)

Definitions

  • the present application relates to the technical field of mobile robots, in particular to a parking control method, a control system, a mobile robot and a storage medium.
  • Mobile robots are increasingly used in industrial applications and living environments because of their ability to move autonomously and perform tasks autonomously.
  • the mobile robot needs to perform docking operations with other equipment.
  • the mobile robot needs to dock with the parking equipment.
  • the mobile robot needs to move to a position that is convenient for docking with the corresponding equipment, and then perform the docking operation.
  • the way to make the mobile robot autonomously move to the dockable position of the docked device is also constantly improving.
  • the purpose of the present application is to provide a parking control method, a control system, a mobile robot and a storage medium, in order to overcome the problem in the related art that the mobile robot cannot accurately move to a dockable position.
  • a first aspect disclosed in the present application provides a parking control method for a mobile robot, including: acquiring three-dimensional image data; determining, according to the direction information corresponding to at least one side of the parking device in the three-dimensional image data, the deflection information for the mobile robot to move from the current posture to the berth side of the parking device; and outputting a control instruction according to the deflection information, so that the mobile robot moves to the berth side of the parking device.
  • a second aspect of the present application provides a parking control method for a mobile robot, including: acquiring two-dimensional image data and three-dimensional image data; determining, according to the two-dimensional image data, first deflection information between the current posture of the mobile robot and its posture when facing the parking device; determining, according to the direction information corresponding to at least one side of the parking device in the three-dimensional image data, second deflection information for the mobile robot to move from the current posture to the berth side of the parking device; and outputting at least one control command according to the determined first deflection information and/or second deflection information, so as to make the mobile robot move toward the berth side of the parking device.
  • a third aspect of the present application provides a parking control method for a mobile robot, including: acquiring three-dimensional image data; determining first deflection information between the current posture of the mobile robot and its posture when facing the parking device; determining, according to the direction information corresponding to at least one side of the parking device in the three-dimensional image data, second deflection information for the mobile robot to move from the current posture to the berth side of the parking device; and outputting at least one control command according to the determined first deflection information and/or second deflection information, so as to make the mobile robot move toward the berth side of the parking device.
  • a fourth aspect of the present application provides a control system for a mobile robot, including: at least one first interface terminal for receiving three-dimensional image data; at least one memory for storing at least one program; at least one processor, connected with the at least one first interface terminal and the at least one memory, for calling and executing the at least one program, so as to coordinate the at least one first interface terminal and the at least one memory to execute and implement the parking control method according to any one of the foregoing; and a second interface terminal, used to confirm docking with the parking device.
  • a fifth aspect of the present application provides a mobile robot, comprising: at least one sensor for providing at least three-dimensional image data; a moving system for performing a moving operation according to the received control instruction; and the aforementioned control system, whose first interface end is respectively connected with each of the sensors and with the moving system, and which is used for outputting the control instruction according to at least the acquired three-dimensional image data.
  • a sixth aspect of the present application provides a mobile robot system, comprising: the mobile robot as described above; and a parking device for docking with the mobile robot.
  • a seventh aspect of the present application provides a computer-readable storage medium storing at least one program, and the at least one program executes and implements the parking control method as described in any preceding one when called.
  • the parking control method, control system, mobile robot and storage medium disclosed in the present application use three-dimensional image data to determine the deflection direction for the mobile robot to move to the berth side, which enables the mobile robot to adjust its posture using measured spatial data and reduces the situations in which the mobile robot cannot efficiently deflect and move to the parking position because the two-dimensional image data does not reflect the distance relationship between the mobile robot and the parking device.
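  • For illustration only, the overall loop of this method can be sketched as follows. This Python sketch is not part of the patent; every function name (acquire_depth_frame, estimate_berth_side_direction, and the robot methods) is a hypothetical placeholder for the steps described above.

```python
import math

# A minimal sketch of the disclosed control flow, assuming hypothetical
# robot and sensor APIs; the patent does not prescribe concrete interfaces.

def acquire_depth_frame():
    """Acquire three-dimensional image data (hypothetical driver call)."""
    raise NotImplementedError

def estimate_berth_side_direction(frame):
    """Return the direction (radians, robot frame) toward the berth side,
    derived from at least one identified side of the parking device,
    or None if the device is not in view."""
    raise NotImplementedError

def park(robot, tolerance=math.radians(2.0)):
    while True:
        frame = acquire_depth_frame()        # acquire 3-D image data
        deflection = estimate_berth_side_direction(frame)
        if deflection is None:
            robot.search()                   # device not yet in view
        elif abs(deflection) > tolerance:
            robot.rotate(deflection)         # output turn instruction
        else:
            robot.forward()                  # aligned: approach berth side
            if robot.docked():
                return
```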
  • FIG. 1 is a schematic structural diagram of a mooring device according to an embodiment of the present application.
  • FIG. 2 shows a bottom view of an embodiment of the mobile robot of the present application.
  • FIG. 3 is a schematic flowchart of an embodiment of the parking control method of the present application.
  • FIG. 4 is a schematic flowchart of another embodiment of the parking control method of the present application.
  • FIG. 5 visualizes the angle data ∠1 between the normal direction of the berth side of the berthing device model and the current posture of the mobile robot.
  • FIG. 6 is a diagram showing the deflection information of the rotation required for the mobile robot of the present application to move to the berth side.
  • FIG. 7 shows another illustration of the deflection required for the mobile robot of the present application to move to the berth side of the parking device.
  • FIG. 8 illustrates the first deflection information by which the current posture of the mobile robot deviates from the direction facing the parking device.
  • FIG. 9 is a schematic flowchart of another embodiment of the parking control method of the present application.
  • FIG. 10 shows a schematic flowchart of another embodiment of the parking control method of the present application.
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". Exceptions to this definition arise only when combinations of elements, functions, steps, or operations are inherently mutually exclusive in some way.
  • the functions that previously required the mobile robot and the parking device to work in conjunction are gradually shifted to one side, the mobile robot or the parking device, so that the mobile robot or the parking device can complete them alone.
  • Taking the parking device being a charging pile or a garbage collection device as an example, the parking device provides a specific marking pattern for the mobile robot, and the mobile robot recognizes the marking pattern through image recognition to determine that the parking device has been found.
  • the parking device is further provided with an infrared emitting device, and the mobile robot determines its position relative to the parking device by identifying the intensity and direction of the infrared signal, thereby ensuring that the mobile robot moves into alignment with the docking device while keeping a certain distance from it, so that the mobile robot can finally move slowly toward the docking device to complete the docking.
  • the mobile robot docks with the charging pile for subsequent charging operations.
  • the mobile robot is docked with the garbage collection device to perform subsequent garbage collection operations and the like.
  • the above process still requires the mobile robot and the parking device to work together, which increases the cost and design burden of maintaining the technologies that both parties rely on, and leaves the docking process time-consuming.
  • the mobile robot uses the image position, in the two-dimensional image data, of the identified marking pattern on the docking device to determine deflection information between the mobile robot and the marking pattern, and adjusts its posture until the image position of the marking pattern lies at the center of the entire image, at which point the mobile robot is confirmed to be aligned with the parking device. This is affected by the distance between the mobile robot and the parking device: when the mobile robot adjusts its direction while relatively close to the parking device, the change in the pixel position of the marking pattern in the two-dimensional image data may be too small for the mobile robot to detect.
  • this azimuth error is enough to prevent the mobile robot from aligning with the parking device, so that the mobile robot cannot complete the docking operation, or so that the docking operation, even when completed, is made difficult. In this way, the success rate of docking between the mobile robot and the parking device is not high enough.
  • the present application also provides a parking control method for a mobile robot.
  • the parking control method uses at least three-dimensional image data to determine deflection information of the berth side of the parking device relative to the current posture of the mobile robot, so as to effectively determine the direction in which to move toward the berth side of the parking device.
  • the parking device is used for the mobile robot to park at and interact with, relieving the mobile robot of the troubles that arise while it uses its autonomous movement ability to complete the behaviors and operations corresponding to each function.
  • the parking device is a charging pile
  • the charging pile is used for the mobile robot to park and replenish electric energy, so as to provide energy for autonomous movement and behavioral operations.
  • the parking device is a garbage collection device, and the garbage collection device is used for the mobile robot to park and dump the collected garbage.
  • the parking device may also be a device with both charging and garbage collection functions.
  • the berthing device is a solid object including a berth side provided with a docking end for docking with the mobile robot.
  • the docking end can be arranged on the side surface of the main body of the parking device, or arranged on the bottom plate extending from the side surface of the main body in the direction of the traveling plane (such as the ground) of the mobile robot.
  • FIG. 1 is a schematic structural diagram of a mooring device in an embodiment, wherein the mooring device includes a berth side 11, a back side 12 facing away from the berth side, and two body sides 13 located between the berth side and the back side, which together form the main structure; the docking end on the berth side 11 is arranged on the bottom plate, at a target position matching the corresponding interface end 111 on the mobile robot; the plug connecting the berthing device to the mains protrudes from the back side; and the cavity formed by the main structure accommodates a power supply circuit for supplying power to the mobile robot, so that when the corresponding interface end of the mobile robot is electrically connected to the docking end, the alternating current provided by the mains can be converted into the power required by the battery of the mobile robot.
  • the parking control method mainly operates in the hardware and software environment provided by the mobile robot.
  • the mobile robot at least includes: a perception system, a control system, a mobile system, an interface terminal for docking with a parking device, and the like.
  • the sensing system includes various types of sensors configured on the mobile robot.
  • the control system is the central processing system by which the mobile robot moves autonomously and performs certain behavior functions autonomously. The movement operations that the mobile system provides to make the mobile robot perform one of these behavior functions are controlled by the control system of the mobile robot.
  • the control system and the sensors in the sensing system communicate with each other through the first interface; the control system and the control circuit (or drive circuit) in the mobile system communicate with each other through the third interface.
  • the interface end docked with the mooring device is also called the second interface end.
  • the sensor is used to provide sensing information for the mobile robot to perform movement and/or behavior operations.
  • the number of the sensors is one or more.
  • Each sensor outputs one-dimensional data, two-dimensional image data, or three-dimensional image data according to the data organization form.
  • examples of sensors that provide one-dimensional data include at least one of the following: a single-point (or single-line) light sensor, a single-line acoustic sensor, a motor rotation counting sensor, a speed (or acceleration) sensor, an angular velocity (or angular acceleration) Sensors, inertial navigation sensors, collision sensors, proximity sensors, cliff sensors, etc.
  • the one-dimensional data sensed by light sensors and acoustic sensors based on the light/sound-wave reflection principle is used to reflect that a solid object exists in the measured direction, or to reflect the distance between a solid object in the measured direction and the mobile robot.
  • the single-point (or single-line) light sensing sensor includes at least one of the following examples: single-point radar sensor, single-point lidar sensor, single-point infrared sensor, single-point ToF sensor, single-line laser sensor, single-line radar sensor, single-line lidar sensor, single-line infrared sensor, single-line ToF sensor, etc.
  • the single-wire acoustic wave sensor include at least one of the following: an ultrasonic sensor and the like.
  • the sensor that provides two-dimensional image data provides it based on the principle of photosensitive imaging; the two-dimensional image data (also called color image data) is used to reflect the shape of the measured positions of a solid object.
  • This type of sensor is used to convert the light energy reflected by each measurement point of each object captured within its viewing angle range into image data corresponding to pixel resolution.
  • the measurement point is a reflective area on a solid object corresponding to each pixel position in the image data based on the principle of light reflection.
  • the image acquisition device can be used to provide obstacle data of the surrounding environment of the mobile robot.
  • the sensor for providing two-dimensional image data includes at least one of the following: a camera device integrating a CCD or CMOS sensing device, a fisheye camera device, a camera device for sensing infrared light, and the like.
  • the two-dimensional image data includes matrix data describing, using color data, the surrounding environment captured within the viewing angle of the corresponding sensor.
  • the number of pixel rows/columns in the matrix data corresponds to the pixel resolution of the image acquisition device.
  • the color image data reflects the wavelength band of the light reflected by each object measurement point in the surrounding environment that can be acquired by the corresponding sensor, and is converted into color data.
  • Examples of the color data include any of the following: RGB data, R/G/B data, or light intensity data (also known as grayscale data), etc., wherein any one of the R/G/B channels can be used as light intensity data; in other words, light intensity data and single-color image data are shared data.
  • the light intensity data is determined by the image acquisition device by detecting the intensity of the light beam in a preset wavelength band reflected by the surface of the object in the surrounding environment within the viewing angle range; wherein, the wavelength band includes at least one of the following examples Infrared band, ultraviolet band, or visible light band, etc.
  • sensors that provide three-dimensional image data include at least one of the following: multi-line laser sensor, multi-line radar sensor, multi-line lidar sensor, infrared image sensor, ToF-based area array sensor, binocular camera device, depth image camera device etc.
  • the three-dimensional image data provided by each of the above-mentioned sensors for providing three-dimensional image data based on the principles of structured light and time-of-flight of light waves is used to reflect the angle data and distance data of the measured position of the solid object and the mobile robot.
  • some sensors are integrated with sensing devices that can acquire two-dimensional image data and three-dimensional image data.
  • a depth sensor and a CMOS sensing device are integrated in the sensor to simultaneously acquire depth image data and color image data.
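  • As a worked illustration of how three-dimensional image data carries both angle data and distance data, a depth pixel can be back-projected through a pinhole camera model. The intrinsics below (fx, fy, cx, cy) are example values, not taken from the patent.

```python
import math

def pixel_to_point(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project depth pixel (u, v) with depth 'depth_m' (metres) into a
    3-D point (x, y, z) in the camera frame; intrinsics are illustrative."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

def angle_and_distance(u, v, depth_m):
    """Horizontal bearing (radians) and range to the measured point: the
    angle data and distance data the text attributes to 3-D image data."""
    x, _, z = pixel_to_point(u, v, depth_m)
    return math.atan2(x, z), math.hypot(x, z)

bearing, rng = angle_and_distance(u=400, v=240, depth_m=1.2)  # point right of axis
```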
  • the movement system of the mobile robot provides movement operations for the mobile robot to perform certain behavioral functions, which are controlled by the control system of the mobile robot.
  • FIG. 2 is a bottom view of an embodiment of the mobile robot.
  • the moving system includes: a driving wheel 21 , a driven wheel 22 , a driving motor (not shown) and a driving unit (not shown).
  • the driving wheels 21 are installed along opposite sides of the chassis 20 and arranged at the rear end of the dust suction port, and are used to drive the mobile robot to move back and forth, rotate, or follow curved paths according to the planned movement trajectory, or to adjust its posture, while providing two contact points between the body and the floor surface.
  • the drive wheel 21 may have a biased drop suspension system, movably fastened, e.g., rotatably mounted, to the body, and receiving a spring bias biased downward and away from the body.
  • the spring bias allows the drive wheel 21 to maintain contact and traction with the ground with a certain ground force to ensure that the tire tread of the drive wheel 21 is in sufficient contact with the ground.
  • steering is realized by a difference in the rotational speeds of the driving wheels 21 on the two sides of the body, driven by the adjuster.
  • At least one driven wheel may also be provided on the body to stably support the body.
  • in some embodiments, the driven wheel is also referred to as an auxiliary wheel, caster, roller, or universal wheel.
  • at least one driven wheel 22 is provided on the main body, and together with the driving wheels 21 on both sides of the main body, the main body maintains the balance of the moving state.
  • the driving wheels 21 with their driving motor, and the battery part of the mobile system, are located at the front part and the rear part of the mobile robot body respectively, so that the weight of the entire mobile robot remains balanced.
  • the moving system further includes a driving motor.
  • the mobile robot may further comprise at least one drive unit, such as a left-wheel drive unit for driving the left-hand drive wheel and a right-wheel drive unit for driving the right-hand drive wheel.
  • the drive unit may contain one or more processors (CPUs) or microprocessor units (MCUs) dedicated to controlling the drive motors.
  • the micro-processing unit converts the control instructions or data output by the control system into electrical signals for controlling the driving motor, and controls the rotational speed and steering of the driving motor according to the electrical signals to adjust the moving speed and direction of the mobile robot.
  • the control instructions or data are, for example, the deflection angle determined by the processing device.
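  • As a sketch of how such a drive unit might turn a deflection angle into motor commands for the differential drive of FIG. 2: the geometry and gain below are invented for illustration and are not specified in the patent.

```python
def wheel_speeds(deflection_rad, base_speed=0.15, wheel_base=0.25, k_turn=0.8):
    """Map a deflection angle (radians, positive = turn left) to left/right
    wheel speeds (m/s) of a differential drive; all constants illustrative."""
    omega = k_turn * deflection_rad            # proportional turn-rate command
    v_left = base_speed - omega * wheel_base / 2.0
    v_right = base_speed + omega * wheel_base / 2.0
    return v_left, v_right

left, right = wheel_speeds(0.3)   # right wheel faster -> robot veers left
```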
  • the first interface end is used for connecting with the sensor for environment perception on the mobile robot, so as to receive the perception data provided by the connected sensor.
  • each sensor can be connected with one or more first interface terminals, and the types of the first interfaces configured by different sensors can be the same or different.
  • the first interface terminal includes but is not limited to: an interface set based on a serial transmission protocol, and/or an interface set based on a parallel transmission protocol.
  • the first interface terminal includes at least one of the following: a USB interface, an RS232 interface, an HDMI interface, a bus interface, and the like. Taking a sensor including a depth image capturing device as an example, the depth image capturing device interacts with the processor through a USB interface to output three-dimensional image data, and receives instructions for outputting the three-dimensional image data.
  • the second interface terminal is used to confirm docking with the parking device.
  • the second interface end can be disposed on the body side of the mobile robot or under the chassis.
  • the second interface end is provided in the form of a metal patch on the surface of the casing under the chassis of the mobile robot and beside the two rollers.
  • the wireless charging coil of the second interface end is arranged in a position in the casing of the mobile robot close to the bottom surface.
  • the third interface terminal is used to connect with each circuit device in the mobile robot, wherein the circuit device is a control circuit (or referred to as a drive circuit) or the like that enables the mobile robot to perform operations such as movement or behavior.
  • the third interface terminal includes, but is not limited to: an interface set based on a serial transmission protocol, and/or an interface set based on a parallel transmission protocol.
  • the third interface terminal includes at least one of the following: a USB interface, an RS232 interface, a twisted-pair interface, and the like.
  • the memory is used to store at least one program, and is also used to store the acquired three-dimensional image data.
  • the memory includes but is not limited to high-speed random access memory and non-volatile memory.
  • the number of the memories is one or more.
  • the memory may also include memory remote from the one or more processors, e.g., network-attached memory accessed via RF circuitry or external ports and a communications network (not shown), wherein the communications network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
  • the memory controller may control access to the memory device by other components of the robot, such as the CPU and peripheral interfaces.
  • the processor communicates data with each memory, and communicates with each hardware through the first interface end, the second interface end and the third interface end respectively.
  • the number of processors is one or more.
  • At least one processor is operably coupled to volatile memory and/or non-volatile memory.
  • At least one processor may execute instructions stored in the memory and/or non-volatile storage to perform operations in the mobile robot, such as determining deflection information of postures from the acquired three-dimensional image data.
  • the processor may include one or more general-purpose microprocessors, one or more application-specific processors (ASICs), one or more digital signal processors (DSPs), one or more field programmable logic arrays (FPGAs) , or any combination of them.
  • the processor is also operably coupled to an I/O port that can enable the robot to interact with various other electronic devices and an input structure that can enable a user to interact with a computing device.
  • the input structures may include buttons, keyboards, mice, trackpads, and the like.
  • the other electronic device may be a moving motor in the mobile system of the robot, or a slave processor in the robot dedicated to controlling the mobile system, such as an MCU (microcontroller unit).
  • the aforementioned drive units may share a processor with the control system or be provided independently of each other.
  • At least one processor reads the programs stored in the memory and acquires three-dimensional image data, so as to perform parking control by executing the methods described below and coordinating the systems in the mobile robot.
  • FIG. 3 is a schematic flowchart of a parking control method.
  • the process in which the processor coordinates the memory and each hardware system to perform the parking control operation is also referred to as the process in which the control system performs the following operations.
  • In step S110, three-dimensional image data is acquired.
  • the three-dimensional image data includes matrix data provided by any of the aforementioned sensors for providing three-dimensional image data, which can describe the distance and angle between the mobile robot and the obstacle.
  • the control system may collect the three-dimensional image data from the corresponding sensor in real time, and confirm recognition of the berthing device, or of the berth side of the berthing device, through step S120.
  • the control system is preset with multiple working modes, and the movement control methods the control system applies upon recognizing the parking device differ between working modes.
  • the working mode is used to represent the movement and/or behavior operation performed by the control system to achieve a certain function.
  • in the parking and homing mode, the control system executes the movement control needed to accurately dock with the berth side of the parking device; in the cleaning mode, when the control system recognizes the parking device, it treats the parking device as a virtual wall, performs the autonomous movement operation of traversing a certain cleaning area, and performs cleaning behaviors during the movement. Therefore, the parking and homing mode reflects that the mobile robot performs an aligning movement operation and a docking behavior operation for docking with the parking device; the cleaning mode reflects that the mobile robot performs an autonomous movement operation and a cleaning behavior operation for cleaning a cleaning area.
  • the working modes include at least a parking and homing mode, and other working modes set according to the functions of the mobile robot.
  • the mobile robot is a cleaning robot
  • its working modes further include: cleaning mode, grabbing mode, and map building mode.
  • the control system adjusts the working modes based on preset switching conditions. Examples of the switching conditions include: conditions set based on the interaction between an external device and the mobile robot, or conditions set based on data provided by sensors of the mobile robot. Taking the switching condition of the parking and homing mode as an example, the switching condition is set based on data that the control system obtains by monitoring the hardware systems in the mobile robot, and is associated with the cooperation mechanism with the parking device.
  • FIG. 4 is a schematic flowchart of a parking control method in another embodiment.
  • the control system executes steps S101 and S102 in other working modes, so as to execute step S110 at least after switching to the parking and homing mode.
  • In step S101, the battery data and/or dust collection data of the mobile robot and the current positioning information of the mobile robot are monitored.
  • the battery data reflects the remaining power of the battery in the mobile robot. It can be calculated from measured data such as the battery's supply voltage, supply power, power supply duration, and remaining charge.
  • the dust collection data reflects the amount of dust in the dust collection box of the mobile robot, which can be represented by a signal provided by a sensor, such as a pressure sensor, arranged in the dust box.
  • the current positioning information of the mobile robot is usually represented by the position of the mobile robot in the map data, and may also be represented by the relative positional relationship between the mobile robot and the parking device.
  • the map data is data pre-stored in the memory, which is individually designed based on the physical space where the mobile robot is located, or constructed by the mobile robot during the moving process.
  • the map data describes the physical space as a grid or a vector representation, in which position information of the parking device, such as coordinate values, is recorded, together with the current positioning information calculated by the mobile robot from the sensing information obtained from each sensor.
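  • A minimal sketch of such map data, assuming a grid representation; the resolution, coordinates, and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GridMap:
    """Grid map recording the parking device's position (illustrative)."""
    resolution_m: float = 0.05                  # metres per grid cell
    dock_cell: tuple = (120, 40)                # recorded parking-device cell
    occupied: set = field(default_factory=set)  # obstacle cells from sensors

    def cell_of(self, x_m, y_m):
        """Convert metric coordinates to a grid cell (positioning info)."""
        return int(x_m / self.resolution_m), int(y_m / self.resolution_m)

world = GridMap()
robot_cell = world.cell_of(2.5, 1.0)  # current positioning information
# A navigation route is then planned between robot_cell and world.dock_cell.
```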
  • In step S102, according to the monitored battery data and/or dust collection data, and a navigation route between the positioning information and the parking device, a control instruction is output for controlling the mobile robot to move to the parking device along the navigation route.
  • the control system monitors the battery data and/or the dust collection data to confirm whether the mobile robot satisfies the condition for switching its working mode to the parking and homing mode; if so, it constructs a navigation route between the positioning information and the parking device according to the parking and homing mode and outputs a control command accordingly. The control command is output to the mobile system so that, driven by the mobile system, the mobile robot as a whole moves to the parking device along the navigation route.
  • when monitoring battery data below a battery threshold, the control system switches to the parking and homing mode to perform the movement and charging docking operations corresponding to the parking and homing mode.
  • the battery threshold may be a fixed value, or determined by evaluating the power consumption required for the route distance between the current positioning information and the location of the parking device.
  • control system switches to the parking and homing mode when monitoring that the sensor generates dust collection data, so as to perform the movement and garbage collection and docking operations corresponding to the parking and homing mode.
  • the control system integrates the monitored battery data and dust collection data to perform mode-switching analysis, evaluating whether they meet the switching conditions, thereby ensuring that the mobile robot can autonomously home to the parking device to perform the movement, docking charging, and garbage collection operations corresponding to the parking and homing mode.
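  • A sketch of such a combined switching check, with invented thresholds (the patent leaves the concrete values to the implementation):

```python
def should_switch_to_homing(battery_pct, dust_full,
                            route_cost_pct=5.0, margin_pct=10.0):
    """Return True when the parking-and-homing mode should be entered:
    either the dust box is full, or the remaining charge barely covers
    the estimated route back to the dock plus a safety margin."""
    return dust_full or battery_pct < route_cost_pct + margin_pct

assert should_switch_to_homing(battery_pct=12.0, dust_full=False)  # low battery
assert should_switch_to_homing(battery_pct=80.0, dust_full=True)   # dust box full
```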
  • the three-dimensional image data acquired at different positions in step S110 is also used for positioning or obstacle avoidance.
  • the mobile robot uses other sensors, such as the image pickup device and the inertial navigation sensor, for positioning, and the infrared sensor for obstacle avoidance, and the determined previous positioning positions and obstacle positions are marked in the map data.
  • step S110 is performed to improve the accuracy of aligning with the berth side of the parking device.
  • the purpose of the control system acquiring the three-dimensional image data in any of the above example ways is to use the three-dimensional image data to calculate the deflection information for the mobile robot to move from the current posture to the berth side of the parking device.
  • when the control system continuously performs step S110 during the process of navigating and moving to the parking device, the control system can identify the corresponding first image feature as soon as the acquired three-dimensional image data contains the parking device. Thereby, the positional relationship between the mobile robot and the berth side of the parking device can be determined earlier, facilitating movement along a short route to a position where the docking operation can be performed.
  • control system performs the identifying operation upon confirmation of movement near the mooring device.
  • the way of confirming that the robot has moved to the vicinity of the parking device is determined according to the distance calculated from the navigation route, or according to the navigation strategy.
  • the control system pre-records in the map the location information at which docking to the berth side previously succeeded, and accordingly sets the navigation route of the current move to that location.
  • the recognition operation is performed to increase the recognition success rate.
  • the control system takes a position at a preset distance from the parking device as the end point, sets a navigation route for the current move to that end point accordingly, and controls the mobile robot to move toward the parking device. Therefore, while moving along the edge, the acquired three-dimensional image data is analyzed to identify image features corresponding to either side of the berthing device, so as to determine the berth side of the berthing device.
  • when identifying a first image feature containing the back side and/or a body side of the parking device, the control system uses the direction information provided by the image features of the back side and the body side, together with the opposite angular relationship between the back side and the berth side, to obtain by analysis the direction information of the berth side of the parking device relative to the current posture of the mobile robot. For another example, the control system determines the direction information of the berth side relative to the current posture of the mobile robot when identifying image features containing the berth side and a body side of the parking device.
  • In step S120, according to the direction information corresponding to at least one side of the parking device in the three-dimensional image data, the deflection information for the mobile robot to move from the current posture to the berth side of the parking device is determined.
  • the control system uses at least three-dimensional image data to identify image features corresponding to at least one side of the parking device, so as to use the image features to determine the direction information of the parking side of the parking device relative to the current posture of the mobile robot.
  • the image features corresponding to the mooring device in the three-dimensional image data are: image features determined based on the identified contour of the mooring device, or image features determined based on an identification frame (such as a rectangular frame) containing the identified mooring device.
  • the image features include: an image area of the parking device in the three-dimensional image data, and/or feature information of the parking device in the three-dimensional image data, and the like.
  • the feature information includes, for example, at least one of the following: a feature surface, a feature point, a feature line, and the like.
  • the feature line includes: a line segment formed by mapping the feature surface into a plane, and/or a line segment determined based on the contour line of the mooring device, and the like.
  • the feature points include: points formed by mapping the feature lines into a plane, and/or points determined based on the intersection of the contour lines of the mooring device, and the like.
  • for ease of description, the image features extracted from the three-dimensional image data are called first image features, and the image features extracted from the two-dimensional image data are called second image features.
  • control system executes step S1211 to identify the mooring device according to the first image feature in the 3D image data, and determine at least one side of the mooring device that falls within the viewing angle of the 3D image data.
  • the control system uses a preset mooring device model to perform image matching on the three-dimensional image data, so as to obtain, according to the matched first image features, at least one side of the mooring device that falls within the viewing angle of the three-dimensional image data.
  • the mooring device model includes three-dimensional data capable of describing the three-dimensional space occupied by the mooring device, and/or three-dimensional feature identifiers; or feature identifiers obtained by reducing the dimensionality of the three-dimensional contour data of the mooring device.
  • the mooring device models are pre-stored in memory in the form of arrays, databases, or three-dimensional model files.
  • the control system calculates the similarity between each preset feature identifier in the mooring device model and the acquired three-dimensional image data, and when the obtained similarity meets the identification condition, determines the first image feature in the three-dimensional image data that meets the identification condition; using the at least one side of the mooring device model to which the obtained first image feature belongs, the corresponding at least one side of the mooring device is determined.
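  • One possible realization of this similarity test, assuming both the stored model and the observation are reduced to simple per-side descriptors (width and slope); the descriptor, values, and threshold are illustrative, not mandated by the patent.

```python
# Hypothetical per-side descriptors of the stored model: (name, width_m, slope_deg).
MODEL_SIDES = [("berth", 0.30, 75.0), ("back", 0.30, 90.0), ("body", 0.12, 90.0)]

def side_similarity(observed, model):
    """Similarity in [0, 1] between an observed side descriptor
    (width_m, slope_deg) and a model side; a simple inverse-distance score."""
    dw = abs(observed[0] - model[1])
    ds = abs(observed[1] - model[2]) / 90.0
    return 1.0 / (1.0 + dw + ds)

def match_side(observed, threshold=0.8):
    """Return the best-matching model side if it meets the
    identification condition (similarity >= threshold), else None."""
    best = max(MODEL_SIDES, key=lambda m: side_similarity(observed, m))
    return best[0] if side_similarity(observed, best) >= threshold else None

# Example: an observed surface ~0.3 m wide leaning at ~74 degrees.
print(match_side((0.31, 74.0)))   # -> 'berth'
```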
  • In step S1212, based on the first image feature in the preset parking device model that matches the three-dimensional image data and its orientation information, the angle data between the current posture of the mobile robot and the berth side of the parking device model is determined.
  • the control system determines, according to the mapping relationship of the first image feature in the parking device model, the direction information of at least one side of the parking device model that the mobile robot faces when shooting the parking device in the current posture.
  • the mapping relationship reflects at least one side of the parking device model that falls within the viewing angle range under the current posture of the mobile robot.
  • the direction information corresponding to at least one side in the parking device model is, for example, the normal direction of the corresponding side that can be mapped.
  • the direction information corresponding to at least one side in the mooring device model is, for example, a direction determined based on the normal direction of the corresponding side that can be mapped.
  • each normal direction is a direction F1 perpendicular to the corresponding side surface of the mooring device model body, or a direction F2 determined by projecting the direction F1 onto a plane parallel to the traveling plane of the mobile robot.
  • the control system constructs a three-dimensional coordinate system of the three-dimensional image data, and determines the direction information corresponding to the image feature, that is, the normal direction of at least one side of the parking device model, according to the identified mapping relationship of the first image feature in the parking device model.
  • FIG. 5 visualizes the angle data ∠1 between the normal direction of the berth side of the parking device model and the current posture of the mobile robot. The coordinate system xyz is the three-dimensional coordinate system; the ray Ray1 is the normal direction of the berth side of the parking device model, the ray Ray2 is the normal direction of a body side of the parking device model, and the ray Ray3 is the optical axis direction of the depth image capturing device of the mobile robot in the current posture; the angle data ∠1 is the angle that the berth-side normal direction Ray1 makes in this coordinate system.
  • the first image feature identified by the control system, and its corresponding direction information, are not limited to the normal direction of the berth side of the parking device model; they may correspond to any side of the parking device model determined based on the first image feature, and in that case the angle data (also direction information) between the current posture of the mobile robot and the berth side of the parking device model can be calculated according to the direction information between that side and the berth side.
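  • Angle data such as ∠1 in FIG. 5 can be computed, for example, by projecting the berth-side normal onto the traveling plane and taking its signed angle relative to the optical axis. A minimal sketch, assuming the optical axis is the +x axis of the robot frame:

```python
import math

def heading_angle(normal_xyz, travel_plane_up=(0.0, 0.0, 1.0)):
    """Project a berth-side normal onto the traveling plane (cf. direction F2)
    and return its signed angle, in radians, relative to the camera's
    optical axis, assumed here to be the +x axis of the robot frame."""
    nx, ny, nz = normal_xyz
    ux, uy, uz = travel_plane_up
    # Remove the component along the plane's 'up' direction.
    dot = nx * ux + ny * uy + nz * uz
    px, py = nx - dot * ux, ny - dot * uy
    return math.atan2(py, px)     # angle data such as the illustrated angle

# Example: berth-side normal tilted 30 degrees left of the optical axis.
n = (math.cos(math.radians(30)), math.sin(math.radians(30)), 0.1)
print(math.degrees(heading_angle(n)))   # ~30
```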
  • in other examples, the control system executes step S1221 (not shown), in which the control system identifies, from the three-dimensional image data, a first image feature reflecting the berth side of the mooring device.
  • the structural differences between the various mooring devices are not drastic.
  • various types of mooring devices include a main body and a structure of a bottom plate extending from the bottom of the main body along the ground direction, wherein the shapes of the bottom plate and the main body of different types of mooring devices may be the same or different.
  • all types of mooring devices include a main body, wherein the slope of the berth side of the main body differs from that of the other sides; in other words, the berth side of each main body presents structural characteristics whose slope differs from the other sides.
  • the first image feature reflecting the berth side of the berthing device may be determined based on the image feature of the berth side identified from the three-dimensional image data.
  • the first image feature reflecting the berth side of the mooring device includes at least one of the following: a feature surface, a feature point, a feature line and the like determined based on the three-dimensional shape of the berth side of the mooring device.
  • the feature line includes: a line segment formed by mapping the feature surface into a plane, and/or a line segment determined based on the contour line of the mooring device on the berth side, and the like.
  • the feature points include: points formed by mapping the feature lines into a plane, and/or points determined based on the intersection of the contour lines of the mooring device on the berth side, and the like.
  • the first image feature reflecting the berth side of the mooring device may also be determined based on the image feature of the non-berth side identified from the three-dimensional image data.
  • the first image feature reflecting the berth side of the mooring device includes at least one of the following: a feature surface, a feature point, a feature line, etc. determined based on the three-dimensional shape of the non-berth side of the mooring device.
  • the feature line includes: a line segment formed by mapping the feature surface into a plane, and/or a line segment determined based on the contour line of the mooring device on the non-berth side, and the like.
  • the feature points include: points formed by mapping the feature lines into a plane, and/or points determined based on the intersection of the contour lines of the mooring device on the non-berth side, and the like.
  • the control system can estimate the direction information of the berth side according to the identified angular relationship between each side of the berthing device and the berth side.
  • control system extracts a first image feature in the three-dimensional image data according to a preset identification condition reflecting the contour features of the mooring device, and uses the extracted first image feature to identify at least one side of the mooring device.
  • the identification conditions include but are not limited to at least one of the following: the angular relationship and/or positional relationship, in the three-dimensional coordinate system, between planes conforming to the contour features; the angular relationship and/or positional relationship, in the two-dimensional coordinate system, between features conforming to the contour features; a classifier obtained in advance by machine learning that can identify image features reflecting the contour features of the mooring device; etc.
  • the identification condition is used to reflect the angular relationship and/or the structural relationship between the bottom plate on the berth side of the mooring device and the main body on the berth side.
  • the identification conditions are used to reflect the angular relationship and/or the structural relationship of the berth side, back side, and each body side of the mooring device with respect to the slope of the ground.
  • the control system clusters the acquired three-dimensional image data in the three-dimensional coordinate system to obtain the positional and angular relationships between plane features in that coordinate system; when the obtained positional and angular relationships include plane features that meet the identification conditions of the mooring device set based on the contour features, the first image feature of the identified mooring device is determined based on those plane features.
  • the first image feature is used to determine at least one side of the docking device within the viewing angle of the three-dimensional image data.
  • the control system converts the three-dimensional image data into a two-dimensional coordinate system that reflects the moving plane of the mobile robot, and identifies, in the two-dimensional coordinate system, feature lines and/or feature surfaces whose angular relationship and/or positional relationship meet the recognition conditions; optionally, the feature surfaces in the three-dimensional coordinate system corresponding to the identified feature lines can also be determined. Each feature line and/or feature surface obtained in the coordinate system of either dimension can be regarded as a first image feature.
  • the first image feature is used to determine at least one side of the docking device within the viewing angle of the three-dimensional image data.
  • the control system reduces the dimensionality of the acquired three-dimensional image data using a two-dimensional coordinate system parallel to the traveling plane of the mobile robot, obtaining two-dimensional data described by position coordinates; it clusters the obtained two-dimensional data according to the distances between position coordinates to determine the feature lines reflecting the surfaces on each side of the parking device; it then checks whether the positional relationship and/or angular relationship between the feature lines meets the identification conditions. If so, it confirms that the parking device is recognized and determines that at least one side of the recognized parking device falls within the viewing angle range of the depth image device of the mobile robot; otherwise, the three-dimensional image data is re-acquired and the above process is repeated.
  • a positional relationship reflecting the above structure is, for example: the line segments determined by clustering include intersecting line segments.
  • an angular relationship reflecting the above structure includes: the line segments determined by clustering include a line segment whose slope is smaller than a preset angle threshold.
  • the above identification conditions comprehensively reflect at least the following structural features of the mooring device: the angular relationship of the bottom plate contour on the berth side that is attached to the ground, and the positional relationship between the bottom plate contour and the connected main body contour.
  • the control system screens out the line segments that meet the above two identification conditions to obtain the characteristic line.
  • the control system confirms the mooring device, and recognizes that at least the berth side of the parking device falls within the viewing angle range of the corresponding depth image camera device of the mobile robot, by analyzing the closed figure formed by the obtained characteristic lines and other connected line segments, or the closed spatial structure screened out from the three-dimensional image data by the obtained characteristic lines.
  • the analysis process can be implemented by algorithms such as classifiers or constructing connected domains in the corresponding coordinate system.
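  • As an aid to understanding, the following is a minimal sketch of the screening just described, assuming the three-dimensional image data is already available as an N x 3 point array in the robot's coordinate frame (x forward, y left, z up); the function names, the DBSCAN parameters, and the thresholds are illustrative assumptions, not values specified by this application.

```python
import math
import numpy as np
from sklearn.cluster import DBSCAN

def find_feature_segments(points_3d, eps=0.03, min_samples=10):
    """Project the 3D points onto the traveling plane (dimension reduction)
    and cluster them into candidate feature line segments."""
    points_2d = points_3d[:, :2]                       # drop the height coordinate
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_2d)
    segments = []
    for label in set(labels) - {-1}:                   # label -1 marks noise
        cluster = points_2d[labels == label]
        center = cluster.mean(axis=0)
        _, _, vt = np.linalg.svd(cluster - center)     # total-least-squares line fit
        direction = vt[0]
        t = (cluster - center) @ direction
        segments.append((center + t.min() * direction,
                         center + t.max() * direction))
    return segments

def _intersects(a, b):
    """Proper 2D segment intersection test via orientation signs."""
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    d1, d2 = cross(a[0], a[1], b[0]), cross(a[0], a[1], b[1])
    d3, d4 = cross(b[0], b[1], a[0]), cross(b[0], b[1], a[1])
    return d1 * d2 < 0 and d3 * d4 < 0

def meets_identification_conditions(segments, slope_threshold_deg=15.0):
    """Example identification conditions from the text: the segments include
    an intersecting pair (base plate meeting the main body) and a segment
    whose slope is below a preset angle threshold."""
    def slope_deg(seg):
        dx, dy = seg[1][0] - seg[0][0], seg[1][1] - seg[0][1]
        return abs(math.degrees(math.atan2(dy, dx)))
    has_intersection = any(_intersects(a, b)
                           for i, a in enumerate(segments)
                           for b in segments[i + 1:])
    has_low_slope = any(slope_deg(s) < slope_threshold_deg for s in segments)
    return has_intersection and has_low_slope
```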
  • the above identification conditions may also include structural features reflecting the back side of the mooring device, so that the control system can still identify the mooring device using the screened-out first image features; the control system can then estimate the direction information between the berth side and the mobile robot using the identification result of the back side.
  • control system obtains the first image feature reflecting at least one side of the parking device in the three-dimensional image data.
  • the control system executes step S1222 to determine the direction information between the parking side of the parking device and the current posture of the mobile robot.
  • step S1222 based on the three-dimensional data identified from the three-dimensional image data reflecting the parking side of the parking device, the direction information between the parking side and the current posture of the mobile robot is determined.
  • control system uses the obtained three-dimensional data corresponding to the first image feature to calculate the direction information of the corresponding side and the mobile robot.
  • the control system determines, according to the obtained three-dimensional data corresponding to the first image feature reflecting at least one side of the parking device, the direction information of the at least one side of the parking device that the posture faces when the mobile robot photographs the parking device with its current posture.
  • direction information reflects the angle data of at least one side of the parking device falling within the viewing angle range under the current posture of the mobile robot.
  • the direction information corresponding to at least one side of the parking device is, for example, the normal direction of the corresponding side.
  • the direction information corresponding to at least one side of the mooring device is, for example, a direction determined based on the normal direction of the corresponding side.
  • each normal direction is, for example, a direction F3 perpendicular to each side plane of the parking device main body, or a direction F4 determined by projecting the direction F3 onto a plane parallel to the traveling plane of the mobile robot.
  • the control system uses any of the above examples to identify that the three-dimensional image data includes the first image feature of the berth side of the corresponding parking device; in the three-dimensional coordinate system (or two-dimensional coordinate system) constructed based on the three-dimensional image data, the control system determines the normal direction of the plane (or line segment) where the first image feature is located, and determines the direction information between the current posture of the mobile robot and the parking side of the parking device according to the angle data between that normal direction and the current posture of the mobile robot in the same three-dimensional coordinate system (or two-dimensional coordinate system).
  • the accuracy of the obtained direction information is also adjusted based on the distance between the docking device and the mobile robot.
  • the accuracy of the obtained direction information is adjusted according to the depth value corresponding to at least one side of the parking device in the three-dimensional image data: the larger the depth value, the lower the accuracy of the direction information.
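  • By way of illustration only, the following sketch derives such direction information from the three-dimensional data of one recognized side, assuming an N x 3 point array in a frame whose +x axis is the robot's current heading; the names and the depth-based confidence rule are assumptions for this example, not quantities fixed by this application.

```python
import numpy as np

def side_direction_info(side_points, max_reliable_depth=3.0):
    """Return the angle (deg) between the robot's heading (+x) and the side's
    normal projected onto the traveling plane (direction F4 above), plus a
    crude confidence that decreases with depth, echoing the rule above."""
    center = side_points.mean(axis=0)
    _, _, vt = np.linalg.svd(side_points - center)
    normal = vt[-1]                                    # plane normal: smallest singular vector
    projected = np.array([normal[0], normal[1], 0.0])  # project onto the traveling plane
    projected /= max(np.linalg.norm(projected), 1e-9)
    angle_deg = np.degrees(np.arctan2(projected[1], projected[0]))
    depth = float(center[0])                           # forward distance to the side
    confidence = min(1.0, max_reliable_depth / max(depth, 1e-6))
    return angle_deg, confidence
```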
  • a three-dimensional identification structure that is stable and at least characterizes the berth side of the mooring device is provided on the mooring device.
  • the three-dimensional identification structure may be provided only on the berth side, or on the berth side and other sides (eg, the back side and/or each body side) of the berthing device.
  • the three-dimensional image data acquired by the control system includes an image area reflecting the berth-side marker of the parking device.
  • the berth side sign is the three-dimensional sign structure.
  • the control system executes step S1231 (not shown), and uses the preset identification conditions corresponding to the three-dimensional identification structure to identify the acquired three-dimensional image data to determine the identified parking device and at least one side thereof.
  • the control system also executes the aforementioned step S1222 to obtain the direction information, which will not be repeated here.
  • step S120 includes steps S1241, S1242 and S1243.
  • step S1241 two-dimensional image data is acquired.
  • step S1242 at least one side of the parking device is identified according to the first image feature in the three-dimensional image data and the second image feature in the two-dimensional image data.
  • the two-dimensional image data is provided by an image camera device including a photosensitive sensor (eg, CCD, CMOS, etc.).
  • step S1243 based on the first image feature identified from the three-dimensional image data reflecting the parking side of the parking device, the direction information between the parking side and the current posture of the mobile robot is determined.
  • the execution process of step S1243 is the same as or similar to the aforementioned step S1222, and will not be described in detail here.
  • the viewing angle range of the acquired two-dimensional image data and the viewing angle range of the three-dimensional image data acquired in step S110 have overlapping range regions.
  • the control system performs identification processing on the image areas within the corresponding ranges of the two types of image data.
  • the control system presets the image position mapping relationship between the image regions of the two types of images.
  • a light sensor and a distance sensor are integrated into the image camera device provided on the mobile robot; the control system then synchronously acquires three-dimensional image data and two-dimensional image data with completely overlapping viewing angle ranges, and presets the image position mapping relationship between the two types of image data.
  • the image camera device provided on the mobile robot includes a binocular camera
  • the control system synchronously acquires two pieces of two-dimensional image data with a partially overlapping range area, reconstructs the three-dimensional image data within the overlapping range area based on the two pieces of two-dimensional image data, and determines the image position mapping relationship between the three-dimensional image data and one of the pieces of two-dimensional image data.
  • the control system identifies the berth side of the berthing device and its direction information from the image areas of the 3D image data and the 2D image data whose viewing angle ranges overlap.
  • the control system presets a second image feature for identifying the parking device from the two-dimensional image data, or presets a classifier obtained by machine learning, and identifies the mooring device and at least one side thereof using the preset means.
  • the classifier is used to identify the mooring device and at least one side thereof according to the characteristics of at least one side of the mooring device in the trained image sample.
  • a three-dimensional marking structure or a planar marking pattern is provided on at least the berth side of the mooring device.
  • the marking structure or marking pattern can be described in the two-dimensional image data by the photosensitive sensor; in other words, the two-dimensional image data includes an image area reflecting the marking on the berth side of the berthing device. Therefore, the control system presets the second image feature for identifying the mooring device and/or the identification structure (or identification pattern) from the two-dimensional image data, or a classifier obtained through machine learning, etc., and uses the preset means to identify the mooring device and at least one side thereof. The classifier is used to identify the mooring device and at least one side thereof according to the features of at least one side of the mooring device and/or the features of the marking structure (or marking pattern) in the trained image samples.
  • the control system identifies the second image features on at least one side of the parking device from the two-dimensional image data, and maps them into the three-dimensional image data according to the image position mapping relationship to obtain three-dimensional data on the corresponding side.
  • the determined sides of the mooring device and its three-dimensional data are used to determine directional information for the berth side of the mooring device.
  • the control system identifies the second image features of the berth side and a body side of the mooring device from the two-dimensional image data, and obtains the three-dimensional data corresponding to the two sides in the three-dimensional image data through the image position mapping relationship.
  • the control system identifies the second image features of the back side and a body side of the mooring device from the two-dimensional image data, and obtains the three-dimensional data corresponding to the two sides in the three-dimensional image data through the image position mapping relationship. Using the preset fact that the back side and the berth side have opposite direction information, the control system calculates the direction information corresponding to the berth side according to the obtained three-dimensional data of the back side.
  • Determining at least one side of the parking device within the viewing angle range of the three-dimensional image data helps the control system to calculate the deflection information between the parking side and the current posture of the mobile robot.
  • the direction information is the angle data between the current posture of the mobile robot and at least one side of the parking device calculated from the three-dimensional image data.
  • the direction information is angle data determined based on the angle range of the image area occupied by the parking device in the three-dimensional image data.
  • the direction information is angle data determined based on the normal direction of at least one side of the corresponding parking device in the three-dimensional image data.
  • the direction information is the angle data of the corresponding berth side obtained by using the 3D data corresponding to at least one side of the berthing device in the 3D image data; for example, the normal direction (or angle range) of the berth side.
  • the corresponding direction information represents the angle data between the current posture of the mobile robot and the berth side of the parking device.
  • if the obtained corresponding side is the back side, the direction information between the current posture of the mobile robot and the berth side of the parking device is calculated according to the angular relationship between the back side and the berth side and the angle data of the back side.
  • control system executes step S125.
  • step S125 according to the direction information, the deflection information of the mobile robot to be moved from the current posture is determined.
  • the control system takes as its target making the interface end of the mobile robot face the berth side of the parking device; in some examples, before moving toward the berth side, it determines, based on the obtained direction information, deflection information for moving along the normal direction of the berth side, perpendicular to that normal direction, or at a preset angle to that normal direction.
  • please refer to FIG. 6, which is an illustration of the deflection information by which the mobile robot must rotate in order to move to the berth side.
  • to move to the berth side of the parking device, the control system determines, according to the angle data α1, the angle data α2 by which to rotate in order to be perpendicular to the normal direction of the berth side, and uses this as the deflection information.
  • in other examples, the angle data α1 and α2 may be complementary angles, supplementary angles, or the same angle value.
  • the control system determines the deflection information corresponding to moving to a preset target position located at a docking distance from the berth side of the parking device, based on that target position and the obtained direction information of the corresponding berth side.
  • the docking distance is a preset distance for the mobile robot to travel straight in order to accurately move to the docking end.
  • FIG. 7 shows another illustration of the deflection required for the mobile robot to move toward the berth side of the parking device.
  • to move to the target position P to be docked with the berth side of the parking device, the control system determines, based on the angle data α1, the distance data d1 between the parking device and the mobile robot, and the distance data d2 between P and the berth side of the parking device, the angle data α3 to be rotated in order to move to the target position P, and uses this as the deflection information.
  • the distance data d1 is obtained from the three-dimensional data describing the first image feature.
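  • The FIG. 7 geometry can be reproduced with elementary trigonometry. The sketch below is an illustrative reconstruction under simplifying assumptions (the robot at the origin heading along +x, the identified berth-side point straight ahead at distance d1, α1 the angle of the berth side's outward normal, P at the docking distance d2 from the berth side along that normal); the sign conventions are assumptions of this example, not ones fixed by the application.

```python
import math

def deflection_to_target(alpha1_deg, d1, d2):
    """Return alpha3 (deg): the rotation needed to head toward the target
    position P at docking distance d2 in front of the berth side."""
    alpha1 = math.radians(alpha1_deg)
    berth = (d1, 0.0)                                  # berth-side point, robot frame
    # outward normal of the berth side (pointing into the open space the robot
    # approaches from); alpha1 = 0 means the berth faces the robot head-on
    normal = (-math.cos(alpha1), math.sin(alpha1))
    p = (berth[0] + d2 * normal[0], berth[1] + d2 * normal[1])
    return math.degrees(math.atan2(p[1], p[0]))        # rotation from current heading

# e.g. a berth 1.5 m ahead whose normal is rotated 30 degrees, with a 0.4 m
# straight-in docking run: deflection_to_target(30.0, 1.5, 0.4) -> about 9.8
```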
  • although the examples mentioned in the above step S120 are described in a step-by-step manner, they can be encapsulated into a more tightly coupled algorithm for execution during implementation, so that the control system processes the received three-dimensional image data, or the three-dimensional image data and the two-dimensional image data, to obtain the direction information between the current posture of the mobile robot and the berth side of the parking device.
  • a control command is outputted according to the deflection information, so that the mobile robot is moved toward the berth side of the parking device.
  • the control command includes at least an angle command obtained based on the deflection information.
  • An example of the angle command is the deflection information itself, or the number of revolutions (or rotational speed and duration) corresponding to the deflection information.
  • the movement system of the mobile robot performs posture adjustment according to the angle command in the control command, so that the entire mobile robot rotates in the direction of the parking position facing the parking device.
  • the mobile robot rotates to face the berth side so that the second interface end of the mobile robot can be signal-connected with the docking end on the berth side of the parking device, for example via an electrical signal connection or a short-distance wireless signal connection (such as RF signal communication or a wireless charging signal connection), etc.
  • using the three-dimensional image data to determine the deflection direction of the mobile robot moving to the berth side allows the mobile robot to adjust its attitude with measured spatial data, thereby reducing situations in which the mobile robot cannot efficiently deflect and move toward the berth side because the two-dimensional image data lacks information reflecting the distance relationship between the mobile robot and the parking device.
  • the mobile robot also needs to perform a displacement operation, so that the second interface end of the mobile robot faces the berth side.
  • the control command output by the control system further includes a displacement command.
  • the displacement instruction includes: a preset fixed movement length or fixed movement duration, or a motor rotational speed, motor rotation duration, and number of motor rotations set based on the fixed movement length or movement duration. In this way, under the control of the control instruction, the movement system of the mobile robot adjusts its posture according to the deflection direction, drives the mobile robot as a whole to move a straight-line distance according to the preset movement length, and then turns back according to the deflection direction to re-determine the deflection direction.
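  • Put together, the behavior just described amounts to a simple closed loop. The following schematic sketch assumes hypothetical interfaces (acquire_3d, rotate, advance, and docked are placeholders, not an API defined by this application), with compute_deflection standing in for steps S120/S125 above.

```python
FIXED_STEP_M = 0.10          # an assumed preset fixed movement length
ALIGN_TOLERANCE_DEG = 2.0    # an assumed tolerance before skipping rotation

def compute_deflection(frame):
    """Placeholder for steps S120/S125: returns the deflection angle (deg)
    derived from the 3D image data, or None if the parking device is not
    identified. See the earlier sketches for one possible body."""
    raise NotImplementedError

def approach_berth(robot):
    """Alternate posture adjustment and short straight moves until docked."""
    while not robot.docked():                  # second interface end connected?
        frame = robot.acquire_3d()             # step S110: get 3D image data
        deflection = compute_deflection(frame) # steps S120/S125: deflection info
        if deflection is None:                 # parking device not identified
            return False
        if abs(deflection) > ALIGN_TOLERANCE_DEG:
            robot.rotate(deflection)           # angle command
        robot.advance(FIXED_STEP_M)            # displacement command
    return True
```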
  • control system further executes step S140: generating a moving route according to the relative positional relationship between the parking device and the mobile robot reflected by at least one kind of environmental sensing data, and the deflection information.
  • the relative position relationship includes further deflection information and/or distance information between the current position of the mobile robot and the parking device.
  • the deflection information representing the relation between the posture of the mobile robot and its posture when directly facing the parking device is referred to as the first deflection information; for example, the further deflection information mentioned above is referred to as the first deflection information.
  • the deflection information representing the relation between the posture of the mobile robot and the direction the berth side faces is referred to as the second deflection information; for example, the deflection information determined in step S120 is referred to as the second deflection information.
  • the control system utilizes the two deflection information to realize that the mobile robot moves to the berth side facing the parking device, which is more favorable for the mobile robot to perform subsequent docking movements.
  • the environment sensing data comes from at least one environment sensing device provided on the mobile robot.
  • the at least one environment sensing device is exemplified by the aforementioned various types of sensors, such as a depth image capturing device, an image capturing device, an inertial navigation sensor, and the like.
  • step S140 includes step S141 , determining the first deflection information according to the image features corresponding to the parking device in the three-dimensional image data.
  • the first deflection information is determined according to an image position in the three-dimensional image data corresponding to a first image feature of the mooring device.
  • the control system presets the positional relationship between the target image position when the second interface end of the mobile robot directly faces the parking side of the parking device, and the entire 3D image data.
  • the target image position is located in the middle area of the entire three-dimensional image data; or the edge of the target image position is located at one side boundary of the entire three-dimensional image data, etc.
  • when the control system obtains the image position (also known as the image area) of the first image feature in the entire 3D image data, the first deflection information between the current posture of the mobile robot and its posture when directly facing the parking device is obtained according to the image position deviation between that image position and the target image position.
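  • A minimal sketch of this image-position-deviation calculation follows, assuming a pinhole camera with a known horizontal field of view and taking the middle of the frame as the target image position; the parameter names and the 70-degree field of view are illustrative assumptions.

```python
import math

def first_deflection_deg(feature_center_px, image_width_px, hfov_deg=70.0):
    """Angle to rotate so the parking device's image region moves from its
    current image position to the target image position (frame center)."""
    target_px = image_width_px / 2.0
    focal_px = (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    offset_px = feature_center_px - target_px          # image position deviation
    return math.degrees(math.atan2(offset_px, focal_px))
```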
  • FIG. 8 shows that the first deflection information by which the mobile robot, in its current posture, deviates from the direction directly facing the parking device is β; the dotted line L1 is the direction the mobile robot's current posture faces, the dotted line L2 is the orientation of the parking device relative to the mobile robot determined based on the three-dimensional image data, and the angle between the dotted lines L1 and L2 is β.
  • the parking device shown in FIG. 8 may also have its back side falling within the viewing angle range of the three-dimensional image data; that is, the included angle β is not necessarily related to a particular side of the parking device.
  • the first deflection information can also be obtained through step S142: determining the relative position relationship according to the image position, in the two-dimensional image data, of the second image feature corresponding to the parking device.
  • the relative positional relationship includes an angular relationship.
  • the execution process of determining the first deflection information in the above step S141 is also applicable to the method of calculating the image position corresponding to the parking device in the two-dimensional image data to determine the first deflection information, which will not be described in detail here.
  • the above steps S141 and S142 may also be used in combination to improve the calculation accuracy of the first deflection information.
  • the second image feature of the docking device in the two-dimensional image data and the first image feature in the three-dimensional image data are respectively extracted, and by matching the image positions of the first image feature and the second image feature, a matchable first image feature or second image feature is selected; the image position where it is located is used to determine the first deflection information between the current posture of the mobile robot and its posture when directly facing the parking device.
  • the method of determining the distance data in the relative position relationship in the step S140 includes step S143.
  • the distance data is determined according to the image features corresponding to the parking device in the three-dimensional image data.
  • the control system determines the distance information between the mobile robot and the parking device using the depth value corresponding to the parking device in the three-dimensional image data.
  • the environmental sensing data further includes two-dimensional image data and inertial navigation data, and in this example, the control system determines the relative positional relationship using the two-dimensional image data and the inertial navigation data.
  • the step S140 may further include a step S144 of determining the relative positional relationship according to the two-dimensional image data obtained by the mobile robot at different positions and the inertial navigation data moving between different positions.
  • the control system uses the inertial navigation sensor to measure the moving distance and movement posture of the mobile robot from the position Pos1 to the position Pos2, and uses the image acquisition device to capture two two-dimensional image data Pic1 and Pic2 at the position Pos1 and the position Pos2 respectively.
  • since the image acquisition device converts physical objects in the physical space into two-dimensional image data photosensitively in a proportional relationship, the control system determines the transformation relationship s (also known as scale) between the physical position of the same physical object and the image positions of the second image feature of that object in the image data, and then determines the relative position relationship between the mobile robot and the physical object using the transformation relationship.
  • the obtained relative positional relationship includes the first deflection information between the current posture of the mobile robot and the parking device, and the distance information therebetween.
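  • A much-simplified sketch of this scale recovery follows, assuming a pinhole model and a purely lateral translation reported by the inertial navigation sensor between Pos1 and Pos2; under these assumptions the metric distance to a feature of the parking device follows from standard triangulation. All names are illustrative.

```python
def depth_from_two_views(u1_px, u2_px, baseline_m, focal_px):
    """Distance to a feature seen at pixel columns u1 and u2 in Pic1 and Pic2,
    given the lateral baseline measured by inertial navigation."""
    disparity = abs(u1_px - u2_px)       # pixel shift of the same physical point
    if disparity == 0:
        return float("inf")              # no parallax: depth cannot be resolved
    return focal_px * baseline_m / disparity
```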
  • the step S130 includes step S131, outputting a control command according to the first deflection information, so that the mobile robot is facing the parking device.
  • the control command only includes an angle command corresponding to the first deflection information, so as to adjust the posture of the mobile robot so that it faces the parking device.
  • the control system executes steps S110-S130 so that, when facing the parking device, the first image features are easier to extract and the three-dimensional image information is richer, thereby improving the accuracy of calculating the second deflection information and the relative position relationship.
  • the control system uses the second deflection information to generate a moving route.
  • the movement route may be a movement route of a preset movement distance or a preset movement time interval.
  • the moving route may also be a moving route from the current position of the mobile robot to the destination with a predetermined distance from the berth side as the destination.
  • the step S130 includes a step S132, outputting a control command according to the moving route, so that the mobile robot is facing the parking device.
  • the control command output by the control system includes an angle command and/or a displacement command.
  • the angle command includes information obtained only according to the second deflection information.
  • the angle command includes the second deflection information itself, or the number of revolutions (or, the rotational speed and duration) corresponding to the second deflection information, and the like.
  • the displacement command includes information obtained according to the movement route.
  • the displacement instruction includes the distance moved under the corresponding angle instruction according to the moving route, and the like.
  • in the case of obtaining the relative positional relationship between the mobile robot and the parking device and the second deflection information, the control system generates a moving route based on the relative positional relationship and the second deflection information, and executes step S132 with the moving route from the mobile robot's current location to the destination.
  • the control command output by the control system includes an angle command and/or a displacement command.
  • the angle command includes deflection information obtained only according to the second deflection information, or deflection information obtained by superimposing (or de-emphasizing) the first deflection information and the second deflection information.
  • the angle command includes the obtained deflection information itself, or the number of revolutions (or, the rotational speed and duration) corresponding to the deflection information, and the like.
  • the displacement command includes information obtained according to the movement route.
  • the displacement instruction includes the distance moved under the corresponding angle instruction according to the moving route, and the like.
  • the control system continuously repeats the above process to continuously correct the relative positional relationship between the mobile robot and the berth side, so that in this navigation movement mode the mobile robot moves to a position where its second interface end directly faces the berth side of the mooring device.
  • determining the berth-side orientation of the mooring device is not limited to two-dimensional image recognition, in which the pixel position of the identified outline of the berthing device is affected by filtering, similar background colors, etc., and is prone to errors.
  • the various examples mentioned in this application all take advantage of the direct measurement of the angle and depth value of each pixel in the three-dimensional image data and, according to the structural characteristics of the mooring device, make it possible to measure the spatial orientation of each side of the mooring device.
  • using the orientation information of the berth side reflected in the three-dimensional image data is beneficial to quickly determine the second deflection information required for the mobile robot to turn from the current posture to the berth side before docking, and effectively reduce the number of attempts of the mobile robot.
  • the present application also provides an embodiment of a parking control method, whose execution may be initiated based on the monitoring as in steps S101-S102 to determine whether to execute an example of the parking control method.
  • FIG. 9 shows a schematic flowchart of yet another parking control method based on the inventive concept of the present application.
  • step S210 two-dimensional image data and three-dimensional image data are acquired.
  • the manner in which the viewing angle ranges of the two-dimensional image data and the three-dimensional image data overlap is the same as or similar to the examples mentioned in the foregoing steps S110-S130.
  • the two-dimensional image data and the three-dimensional image data come from an environment sensing device integrating a photosensitive device and a ToF measurement device.
  • the environment sensing device outputs two-dimensional image data and three-dimensional image data based on the same viewing angle range.
  • the two-dimensional image data and the three-dimensional image data can be acquired synchronously or asynchronously.
  • the control system first acquires the two-dimensional image data to perform subsequent steps S220 and S240, so that the mobile robot faces the parking device; it then acquires the three-dimensional image data to perform subsequent steps S230 and S240, so as to move the mobile robot toward the berth side facing the parking device.
  • control system can use the two-dimensional image data and the three-dimensional image data to identify the parking device or at least one side of the parking device in subsequent steps, such as S220 and S230, etc.
  • control system can also acquire the two-dimensional image data and the three-dimensional image data according to the data requirements for determining the first deflection information;
  • the two-dimensional image data and the three-dimensional image data are acquired synchronously.
  • step S220 the first deflection information between the current posture of the mobile robot and the posture when facing the parking device is determined according to the image position of the two-dimensional image area corresponding to the parking device in the two-dimensional image data.
  • step S220 the execution process of step S220 is the same as or similar to steps S141 and S142 in the foregoing example.
  • the control system determines the image position in the two-dimensional image data of the two-dimensional image region corresponding to the second image feature by identifying the second image feature of the parking device.
  • the two-dimensional image area is, for example, a rectangular frame including the pixel data of the parking device, or an outline surrounded by the pixel data of the parking device. The image position relationship between the target image area corresponding to the parking device and the entire two-dimensional image data, when the mobile robot directly faces the parking device and photographs it, is preset; according to the image position deviation between the identified two-dimensional image area and the target image area determined from that image position relationship, the control system obtains the first deflection information between the current posture of the mobile robot and its posture when directly facing the parking device.
  • step S230 according to the direction information corresponding to at least one side of the parking device in the three-dimensional image data, the second deflection information of the mobile robot moving from the current posture to the berth side of the parking device is determined.
  • step S230 is the same as or similar to the execution process of each example in the foregoing step S120. It will not be described in detail here.
  • step S240 at least one control command is output according to the determined first deflection information and/or second deflection information, so as to make the mobile robot move in the direction of the berth side facing the parking device.
  • step S240 is the same as or similar to the execution process of each example in the foregoing step S130.
  • following step S220, the control system executes step S241 and outputs a first control command according to the first deflection information, so that the mobile robot faces the mooring device.
  • the first control command includes an angle command corresponding to the first deflection information, so that the mobile robot rotates and faces the parking device.
  • the control system continues to execute step S230 to obtain second deflection information, and executes step S242 to output a second control command according to the second deflection information, so as to move the mobile robot toward the berth side facing the parking device.
  • the control system may generate a movement route based on the second deflection information.
  • the outputted second control command includes an angle command corresponding to the second deflection information and a displacement command corresponding to the moving route.
  • the control system can also output the control command according to both the determined first deflection information and the determined second deflection information.
  • the control system may generate a moving route based on the first deflection information and the second deflection information.
  • the outputted second control command includes an angle command corresponding to the first deflection information and the second deflection information, and a displacement command corresponding to the moving route.
  • the present application also provides an embodiment of a parking control method, whose execution may be initiated based on the monitoring as in steps S101-S102 to determine whether to execute an example of the parking control method.
  • FIG. 10 shows a schematic flowchart of yet another parking control method based on the inventive concept of the present application.
  • step S310 three-dimensional image data is acquired.
  • step S310 is the same as or similar to step S110 in the foregoing example, and will not be repeated here.
  • step S320 the first deflection information between the current posture of the mobile robot and the posture when facing the parking device is determined according to the image position of the three-dimensional image area corresponding to the parking device in the three-dimensional image data.
  • step S320 is the same as or similar to the aforementioned step S141, and will not be described in detail here.
  • step S330 according to the direction information corresponding to at least one side of the parking device in the three-dimensional image data, second deflection information of the mobile robot moving from the current posture to the parking side of the parking device is determined.
  • step S330 is the same as or similar to the aforementioned step S120, and will not be described in detail here.
  • step S340 at least one control command is output according to the determined first deflection information and/or the second deflection information, so as to make the mobile robot move in the direction of the berth side facing the parking device.
  • step S340 is the same as or similar to the aforementioned step S130 or the aforementioned step 240, and will not be described in detail here.
  • the present application also provides a computer readable and writable storage medium storing at least one program which, when called, executes and implements at least one embodiment of the control methods shown in FIGS. 3, 4, 9 and 10 described above.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to enable the mobile robot installed with the storage medium to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the computer readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, a USB stick, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • if the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead intended to be non-transitory, tangible storage media.
  • disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • the functions described by the computer programs of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof.
  • when implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium.
  • the steps of the methods or algorithms disclosed herein may be embodied in processor-executable software modules, where the processor-executable software modules may reside on a tangible, non-transitory computer readable and writable storage medium.
  • Tangible, non-transitory computer-readable storage media can be any available media that can be accessed by a computer.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

Abstract

A parking control method, a control system, a mobile robot, and a storage medium. The parking control method includes: acquiring three-dimensional image data (S110); determining, according to direction information corresponding to at least one side of a parking device in the three-dimensional image data, deflection information for the mobile robot to move from its current posture toward the berth side of the parking device (S120); and outputting a control command according to the deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device (S130). The method uses the three-dimensional image data to determine the deflection direction in which the mobile robot moves toward the berth side, which allows the mobile robot to adjust its posture using measured spatial data.

Description

Parking control method, control system, mobile robot, and storage medium. TECHNICAL FIELD
This application relates to the technical field of mobile robots, and in particular to a parking control method, a control system, a mobile robot, and a storage medium.
BACKGROUND
Because mobile robots can move autonomously and perform tasks autonomously, they are used more and more widely in industrial and domestic environments. To replenish energy or perform tasks, a mobile robot needs to dock with other devices; for example, the mobile robot needs to perform a docking operation with a parking device. To do so, the mobile robot must move to a position suitable for docking with the corresponding device and then perform the docking operation.
As mobile robots and the devices they dock with are each continuously optimized, the ways in which a mobile robot autonomously moves to a dockable position of the device it docks with are also being continuously improved.
SUMMARY OF THE INVENTION
In view of the above shortcomings of the related art, the purpose of this application is to provide a parking control method, a control system, a mobile robot, and a storage medium, to overcome the problem in the related art that a mobile robot has difficulty accurately moving to a dockable position without the assistance of the parking device.
To achieve the above and other related objects, a first aspect of this application provides a parking control method for a mobile robot, including: acquiring three-dimensional image data; determining, according to direction information corresponding to at least one side of a parking device in the three-dimensional image data, deflection information for the mobile robot to move from its current posture toward the berth side of the parking device; and outputting a control command according to the deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device.
A second aspect of this application provides a parking control method for a mobile robot, including: acquiring two-dimensional image data and three-dimensional image data; determining first deflection information between the current posture of the mobile robot and its posture when directly facing the parking device according to the image position, in the two-dimensional image data, of a two-dimensional image region corresponding to the parking device; determining second deflection information for the mobile robot to move from the current posture toward the berth side of the parking device according to direction information corresponding to at least one side of the parking device in the three-dimensional image data; and outputting at least one control command according to the determined first deflection information and/or second deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device.
A third aspect of this application provides a parking control method for a mobile robot, including: acquiring three-dimensional image data; determining first deflection information between the current posture of the mobile robot and its posture when directly facing the parking device according to the image position, in the three-dimensional image data, of a three-dimensional image region corresponding to the parking device; determining second deflection information for the mobile robot to move from the current posture toward the berth side of the parking device according to direction information corresponding to at least one side of the parking device in the three-dimensional image data; and outputting at least one control command according to the determined first deflection information and/or second deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device.
A fourth aspect of this application provides a control system for a mobile robot, including: at least one first interface end for receiving three-dimensional image data; at least one memory for storing at least one program; at least one processor, connected to the at least one first interface end and the at least one memory, for calling and executing the at least one program so as to coordinate the at least one first interface end and the at least one memory to execute and implement any of the parking control methods described above; and a second interface end for docking confirmation with a parking device.
A fifth aspect of this application provides a mobile robot, including: at least one sensor for providing at least three-dimensional image data; a movement system for performing movement operations according to received control commands; and the aforementioned control system, whose first interface ends are respectively connected to each of the sensors and the movement system, for outputting the control commands according to at least the acquired three-dimensional image data.
A sixth aspect of this application provides a mobile robot system, including: the mobile robot as described; and a parking device for docking with the mobile robot.
A seventh aspect of this application provides a computer-readable storage medium storing at least one program which, when called, executes and implements any of the parking control methods described above.
In summary, the parking control method, control system, mobile robot, and storage medium disclosed in this application use three-dimensional image data to determine the deflection direction in which the mobile robot moves toward the berth side, allowing the mobile robot to adjust its posture using measured spatial data, thereby reducing situations in which the mobile robot cannot efficiently deflect and move toward the berth side because two-dimensional image data lacks information reflecting the distance relationship between the mobile robot and the parking device.
Those skilled in the art can easily perceive other aspects and advantages of this application from the detailed description below. The detailed description below shows and describes only exemplary embodiments of this application. As those skilled in the art will recognize, the content of this application enables them to modify the disclosed specific embodiments without departing from the spirit and scope of the invention involved in this application. Accordingly, the drawings and the descriptions in the specification of this application are merely exemplary and not restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
The specific features of the invention involved in this application are set forth in the appended claims. The features and advantages of the invention involved in this application can be better understood by referring to the exemplary embodiments described in detail below and the accompanying drawings, which are briefly described as follows:
FIG. 1 is a schematic structural diagram of a parking device of this application in one embodiment.
FIG. 2 is a bottom view of an embodiment of a mobile robot of this application.
FIG. 3 is a schematic flowchart of an embodiment of a parking control method of this application.
FIG. 4 is a schematic flowchart of another embodiment of the parking control method of this application.
FIG. 5 is a visualization of the angle data α1 between the normal direction of the berth side of the parking device model and the current posture of the mobile robot in this application.
FIG. 6 is an illustration of the deflection information by which the mobile robot of this application must rotate in order to move toward the berth side.
FIG. 7 is another illustration of the deflection required for the mobile robot of this application to move toward the berth side of the parking device.
FIG. 8 shows that the first deflection information by which the mobile robot of this application, in its current posture, deviates from the direction directly facing the parking device is β.
FIG. 9 is a schematic flowchart of yet another embodiment of the parking control method of this application.
FIG. 10 is a schematic flowchart of yet another embodiment of the parking control method of this application.
DETAILED DESCRIPTION
The implementations of this application are described below by way of specific embodiments, and those familiar with this technology can easily understand other advantages and effects of this application from the contents disclosed in this specification.
In the following description, reference is made to the accompanying drawings, which describe several embodiments of this application. It should be understood that other embodiments may also be used, and changes in module or unit composition, electrical changes, and operational changes may be made without departing from the spirit and scope of this disclosure. The following detailed description should not be considered restrictive, and the scope of the embodiments of this application is limited only by the claims of the granted patent. The terms used here are only for describing specific embodiments and are not intended to limit this application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, categories, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used here are interpreted as inclusive, meaning any one or any combination. Therefore, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
As the image processing capabilities of mobile robots and parking devices each gradually improve, functions of the mobile robot that require cooperation with the parking device are gradually shifting to one side, the mobile robot or the parking device, so as to be completed by the mobile robot or the parking device alone.
But this process is not easy. Taking a parking device that is a charging pile or a garbage collection device as an example, the parking device provides a specific marking pattern for the mobile robot, and the mobile robot recognizes the marking pattern through image recognition to confirm that the parking device has been found. However, since two-dimensional image data cannot provide depth information, in some examples an infrared emitting device is still provided on the parking device, and the mobile robot confirms its position relative to the parking device by discriminating the strength and direction of the infrared signal, thereby ensuring that the mobile robot moves to a position aligned with the parking device and at a certain distance from it, so that the mobile robot can finally move slowly toward the parking device to complete docking. For example, the mobile robot docks with a charging pile to perform a subsequent charging operation. As another example, the mobile robot docks with a garbage collection device to perform a subsequent garbage collection operation. On the one hand, the above process still requires the mobile robot and the parking device to work together, which adds the cost and design problems of maintaining mutually dependent technology on both sides; on the other hand, because repeated trial-and-error back-and-forth movement is needed, the movement before docking takes a long time.
In still other examples, the mobile robot uses the image position, in the two-dimensional image data, of the recognized marking pattern on the parking device to determine deflection information between the mobile robot and the marking pattern on the parking device, and adjusts the position of the mobile robot until the image position of the marking pattern in the two-dimensional image data is at the center of the whole image, at which point it confirms that the mobile robot has moved into alignment with the parking device. Affected by the distance between the mobile robot and the parking device, when the mobile robot is relatively close to the parking device and is adjusting its direction, the change in the pixel position of the marking pattern in the two-dimensional image data may not be recognizable by the mobile robot; yet in practice this orientation error is enough to prevent the mobile robot from aligning with the parking device, so that the mobile robot cannot complete the docking operation, or, when the docking operation is completed, the operations to be performed after docking by the mobile robot and the parking device become difficult. As a result, the success rate of docking between the mobile robot and the parking device is not high enough.
To this end, this application further provides a parking control method for a mobile robot. The parking control method uses at least three-dimensional image data to determine deflection information of the berth side of the parking device relative to the current posture of the mobile robot, so as to effectively determine the direction of movement toward the berth side of the parking device and move toward the berth side.
Here, the parking device is used for the mobile robot to park at and to interact with, so as to resolve the difficulties the mobile robot encounters in the course of using its autonomous movement capability to complete the behavioral operations corresponding to its functions. For example, if the parking device is a charging pile, the charging pile is used for the mobile robot to park and replenish electric energy, so as to provide energy for autonomous movement and behavioral operations. As another example, if the parking device is a garbage collection device, the garbage collection device is used for the mobile robot to park and dump the collected garbage. As yet another example, the parking device may also be a device having both charging and garbage collection functions.
The parking device is a physical object that includes a berth side provided with a docking end for docking with the mobile robot. The docking end may be arranged on a side face of the main body of the parking device, or on a base plate extending from the side face of the main body along the direction of the travel plane of the mobile robot (such as the ground).
Please refer to FIG. 1, which is a schematic structural diagram of a parking device in one embodiment. The parking device includes a berth side 11, a back side 12 facing away from the berth side, and two body sides 13 located between the berth side and the back side and forming a main body structure together with them. The docking end on the berth side 11 is arranged on the base plate at a position matching the corresponding interface end 111 of the mobile robot; a plug for connecting the parking device to mains power extends from the back side; and the cavity formed by the main body structure houses a power supply circuit for supplying power to the mobile robot, so that when the corresponding interface end of the mobile robot is electrically connected to the docking end, the alternating current provided by the mains is converted into the power required by the battery of the mobile robot.
Here, the parking control method mainly runs in the hardware and software environment provided by the mobile robot. The mobile robot includes at least: a sensing system, a control system, a movement system, and an interface end for docking with the parking device. The sensing system includes various sensors configured on the mobile robot. The control system is the central processing system by which the mobile robot moves autonomously and autonomously performs a behavioral function. The movement system provides the movement operations for the mobile robot to perform one of these behavioral functions, and is controlled by the control system of the mobile robot. The control system communicates data with each sensor in the sensing system through first interface ends; the control system communicates data with the control circuit (or drive circuit) in the movement system through a third interface end. The interface end for docking with the parking device is also called the second interface end.
The sensors are used to provide sensing information for the mobile robot to perform movement and/or behavioral operations. There are one or more sensors. Depending on the data organization form, each sensor outputs one-dimensional data, two-dimensional image data, or three-dimensional image data.
Examples of sensors providing one-dimensional data include at least one of the following: a single-point (or single-line) light-sensing sensor, a single-line acoustic sensor, a motor rotation counting sensor, a speed (or acceleration) sensor, an angular velocity (or angular acceleration) sensor, an inertial navigation sensor, a collision sensor, a proximity sensor, a cliff sensor, etc. The one-dimensional data sensed by the light-sensing sensor and the acoustic sensor based on the light/sound wave reflection principle is used to reflect the fact that there is a physical object in the measured direction, or to reflect distance data between the physical object in the measured direction and the mobile robot. Examples of the single-point (or single-line) light-sensing sensor include at least one of the following: a single-point radar sensor, a single-point lidar sensor, a single-point infrared sensor, a single-point ToF sensor, a single-line laser sensor, a single-line radar sensor, a single-line lidar sensor, a single-line infrared sensor, a single-line ToF sensor, etc. Examples of the single-line acoustic sensor include at least one of the following: an ultrasonic sensor, etc.
The two-dimensional image data (also called color image data) provided by sensors based on the photosensitive imaging principle is used to reflect the shape of the measured position of a physical object. This type of sensor converts the light energy reflected by each measurement point of each object captured within its viewing angle range into image data of the corresponding pixel resolution. A measurement point is a reflective area on a physical object corresponding to a pixel position in the image data based on the light reflection principle. The image acquisition device can be used to provide obstacle data of the environment around the mobile robot.
The sensors providing two-dimensional image data include at least one of the following: a camera device integrating a CCD or CMOS sensing device, a fisheye camera device, an infrared-sensitive camera device, etc.
The two-dimensional image data includes matrix data describing, with color data, the surrounding environment captured within the viewing angle range of the corresponding sensor. The number of pixel rows/columns in the matrix data corresponds to the pixel resolution of the image acquisition device.
The color image data reflects the wavelength bands of light, reflected by the measurement points of objects in the surrounding environment, that the corresponding sensor can acquire, converted into color data. Examples of the color data include any of the following: RGB data, R/G/B data, or light intensity data (also called grayscale data), where any one of the R/G/B data can be used as light intensity data; in other words, the light intensity data and single-color color image data are shared data. Alternatively, the light intensity data is determined by the image acquisition device detecting, within the viewing angle range, the intensity of light beams of a preset wavelength band reflected by object surfaces in the surrounding environment; examples of the wavelength band include at least one of the infrared band, the ultraviolet band, or the visible light band.
Examples of sensors providing three-dimensional image data include at least one of the following: a multi-line laser sensor, a multi-line radar sensor, a multi-line lidar sensor, an infrared image sensor, a ToF-based area array sensor, a binocular camera device, a depth image camera device, etc. The three-dimensional image data provided by the sensors exemplified above, based on principles such as structured light and time-of-flight of light waves, is used to reflect angle data and distance data between the measured position of a physical object and the mobile robot.
In some actual sensor structures, some sensors integrate sensing devices that can acquire both two-dimensional image data and three-dimensional image data. For example, a depth sensor and a CMOS sensing device are integrated in a sensor to acquire depth image data and color image data simultaneously.
The movement system of the mobile robot provides the movement operations for the mobile robot to perform certain behavioral functions, and is controlled by the control system of the mobile robot.
Taking the movement system providing overall movement of the mobile robot as an example, please refer to FIG. 2, which is a bottom view of an embodiment of a mobile robot. The movement system includes: drive wheels 21, a driven wheel 22, a drive motor (not shown), and a drive unit (not shown).
The drive wheels 21 are mounted along opposite sides of the chassis 20; typically the drive wheels 21 are located behind the dust suction opening, and are used to drive the mobile robot to perform forward and backward reciprocating motion, rotational motion, curved motion, etc. according to the planned movement trajectory, or to drive the mobile robot to adjust its posture, and they provide two contact points between the body and the floor surface. The drive wheels 21 may have a biased drop-type suspension system and be movably fastened, for example rotatably mounted, to the body, receiving a spring bias biased downward and away from the body. The spring bias allows the drive wheels 21 to maintain contact and traction with the ground with a certain landing force, ensuring that the tire surfaces of the drive wheels 21 make sufficient contact with the ground. In this application, when the mobile robot needs to turn or travel along a curve, steering is achieved by adjusting the difference in rotational speed of the drive wheels 21 on the two sides of the body.
In some embodiments, at least one driven wheel (in some embodiments also called an auxiliary wheel, caster, roller, universal wheel, etc.) may also be provided on the body to stably support the body. For example, as shown in FIG. 2, at least one driven wheel 22 is provided on the body and, together with the drive wheels 21 on both sides of the body, keeps the body balanced while in motion. Based on considerations of the overall weight distribution of the mobile robot, the drive wheels 21 and their drive motors in the movement system, and the battery portion, are located respectively in the front part and the rear part of the body of the mobile robot, so that the weight of the whole mobile robot is balanced.
To drive the drive wheels and the driven wheel, the movement system further includes a drive motor. The mobile robot may also include at least one drive unit, for example a left wheel drive unit for driving the left drive wheel and a right wheel drive unit for driving the right drive wheel. The drive unit may include one or more processors (CPUs) or micro-processing units (MCUs) dedicated to controlling the drive motor. For example, the micro-processing unit converts the control commands or data output by the control system into electrical signals for controlling the drive motor, and controls the rotational speed, steering, etc. of the drive motor according to the electrical signals to adjust the moving speed and moving direction of the mobile robot. The information or data is, for example, the deflection angle determined by the processing device.
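For illustration only, the standard differential-drive relation below shows how such a wheel-speed difference produces the steering described above; the wheel track value and the commanded forward speed and yaw rate are illustrative assumptions, not parameters specified by this application.

```python
def wheel_speeds(v_mps, omega_radps, track_m=0.25):
    """Map a forward speed v and yaw rate omega to left/right wheel speeds;
    unequal speeds make the robot turn, equal speeds drive it straight."""
    v_left = v_mps - omega_radps * track_m / 2.0
    v_right = v_mps + omega_radps * track_m / 2.0
    return v_left, v_right

# e.g. turning in place at 0.5 rad/s: wheel_speeds(0.0, 0.5) -> (-0.0625, 0.0625)
```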
The first interface ends are used to connect with the sensors on the mobile robot used for environment sensing, to receive the sensing data provided by the connected sensors. Each sensor may be connected to one or more first interface ends, and the types of first interfaces configured for different sensors may be the same or different. The first interface ends include but are not limited to: interfaces based on serial transmission protocols and/or interfaces based on parallel transmission protocols. For example, the first interface ends include at least one of the following: a USB interface, an RS232 interface, an HDMI interface, a bus interface, etc. Taking a sensor including a depth image capture device as an example, the depth image capture device interacts with the processor through a USB interface to output three-dimensional image data and to receive commands for outputting the three-dimensional image data, etc.
The second interface end is used for docking confirmation with the parking device. The second interface end may be arranged on a body side of the mobile robot or under the chassis. For example, the second interface end is arranged in the form of metal pads on the housing surface under the chassis of the mobile robot, beside the two rollers. As another example, the second interface end is arranged in the form of a wireless charging coil inside the housing of the mobile robot against the bottom surface.
The third interface end is used to connect with the circuit devices in the mobile robot, where the circuit devices are control circuits (or drive circuits) that cause the mobile robot to perform operations such as movement or behavior. The third interface end includes but is not limited to: interfaces based on serial transmission protocols and/or interfaces based on parallel transmission protocols. For example, the third interface end includes at least one of the following: a USB interface, an RS232 interface, a twisted-pair interface, etc.
The memory is used to store at least one program and also to store the acquired three-dimensional image data. The memory includes but is not limited to high-speed random access memory and non-volatile memory. There are one or more memories, for example one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, the memory may also include memory remote from the one or more processors, for example network-attached memory accessed via an RF circuit or external port and a communication network (not shown), where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WLAN), a storage area network (SAN), etc., or an appropriate combination thereof. A memory controller may control access to the storage device by other components of the robot, such as the CPU and peripheral interfaces.
The processor communicates data with each memory, and communicates data with each piece of hardware through the first interface ends, the second interface end, and the third interface end respectively. There are one or more processors. At least one processor is operably coupled to the volatile memory and/or the non-volatile memory. At least one processor may execute instructions stored in the memory and/or the non-volatile storage device to perform operations in the mobile robot, such as determining deflection information of the posture according to the acquired three-dimensional image data. Thus, the processor may include one or more general-purpose microprocessors, one or more application-specific processors (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processor is also operably coupled to an I/O port and an input structure, where the I/O port enables the robot to interact with various other electronic devices, and the input structure enables a user to interact with the computing device. Therefore, the input structure may include buttons, a keyboard, a mouse, a touchpad, etc. The other electronic devices may be a movement motor in the movement device of the robot, or a slave processor in the robot dedicated to controlling the movement device, such as an MCU (Microcontroller Unit). For example, the aforementioned drive unit may share a processor with the control system or be set up independently of it.
At least one processor reads the program stored in the memory and acquires three-dimensional image data, so as to perform parking control by executing the following method and coordinating the systems in the mobile robot. Please refer to FIG. 3, which is a schematic flowchart of a parking control method. For ease of description, the process in which the processor coordinates the memory and the hardware systems to perform parking control operations is also referred to as the process in which the control system performs the following operations.
In step S110, three-dimensional image data is acquired. The three-dimensional image data includes matrix data, provided by any of the aforementioned sensors providing three-dimensional image data, that can describe the distance and angle between the mobile robot and an obstacle.
In some examples, the control system may collect the three-dimensional image data of the corresponding sensor in real time, and confirm through step S120 that the parking device or the berth side of the parking device has been recognized.
To prevent the control system from erroneously performing a parking operation based on the recognized parking device or berth side of the parking device, in other examples multiple working modes are preset in the control system, and in different working modes the movement control corresponding to recognizing the parking device differs. The working mode is used to represent the movement and/or behavioral operations performed by the control system to achieve a certain function.
Taking the mobile robot being a cleaning robot as an example, in the parking homing mode, when the parking device is recognized, the control system performs movement control for accurately docking with the berth side of the parking device; in the cleaning mode, when the parking device is recognized, the control system treats the parking device as a virtual wall and performs autonomous movement operations traversing a certain cleaning area and cleaning behavior operations during the movement. Thus, the parking homing mode reflects the mobile robot performing alignment movement operations and docking behavior operations in order to dock with the parking device, and the cleaning mode reflects the mobile robot performing autonomous movement operations and cleaning behavior operations in order to clean a cleaning area.
For this embodiment, the working modes include at least the parking homing mode, as well as other working modes set according to the functions of the mobile robot. For example, if the mobile robot is a cleaning robot, its working modes further include: a cleaning mode, a grasping mode, a map construction mode, etc. When the mobile robot includes multiple working modes, the control system adjusts the working mode based on preset switching conditions. Examples of the switching conditions include: set based on the interactive operation of an external device with the mobile robot, or set based on data provided by the sensors of the mobile robot. Taking the switching condition of the parking homing mode as an example, this switching condition is set based on the control system monitoring the data provided by the hardware systems in the mobile robot associated with the cooperation mechanism with the parking device.
Taking the parking device being a charging pile and/or a garbage collection device as an example, please refer to FIG. 4, which is a schematic flowchart of the parking control method in another embodiment. The control system performs steps S101 and S102 in the different working modes, so as to perform step S110 at least after switching to the parking control mode.
In step S101, the battery data and/or dust collection data of the mobile robot, and the current localization information of the mobile robot, are monitored.
The battery data reflects the remaining power of the battery in the mobile robot. It can be calculated by measuring data such as the supply voltage, supply power, supply duration, and remaining charge of the battery, and the battery data can be expressed as a percentage of the battery's remaining electric energy, the remaining electric energy value stored in the battery, or the electric energy value already consumed in the battery, etc.
The dust collection data reflects the amount of dust in the dust collection box of the mobile robot. It can be represented by a signal provided by a sensor (such as a pressure sensor) arranged in the dust collection box.
The current localization information of the mobile robot is usually represented by the position of the mobile robot in map data, and may also be represented by the relative positional relationship between the mobile robot and the parking device. Here, the map data is data pre-stored in the memory, designed separately based on the physical space in which the mobile robot is located, or constructed by the mobile robot during movement. The map data describes the physical space with grids or vectors, in which the position information of the parking device, such as coordinate values, is recorded, as well as the current localization information calculated by the mobile robot according to sensing information acquired from the sensors.
In step S102, according to the monitored battery data and/or dust collection data, and a navigation route between the localization information and the parking device, a control command for controlling the mobile robot to move toward the parking device along the navigation route is output.
Here, the control system monitors the battery data and/or dust collection data to confirm whether the mobile robot satisfies the switching condition for switching its working mode to the parking homing mode; if so, a navigation route between the localization information and the parking device is constructed according to the parking homing mode, and a control command is output according to the navigation route. The control command is output to the movement system, so that, moved by the movement system, the mobile robot as a whole moves toward the parking device along the navigation route.
In some examples, when the control system detects that the battery data is below a battery threshold, it switches to the parking homing mode to perform the movement and charging docking operations corresponding to the parking homing mode. The battery threshold may be a fixed value, or may be determined by estimating the electric energy required to travel the route distance between the current localization information and the position of the parking device.
In other examples, when the control system detects that the sensor generates dust collection data, it switches to the parking homing mode to perform the movement and garbage collection docking operations corresponding to the parking homing mode.
In still other examples, the control system combines the monitored battery data and dust collection data for mode switching analysis, to evaluate whether the monitored battery data and dust collection data reach the switching condition, thereby ensuring that the mobile robot can autonomously return to the parking device to perform the movement, docking charging, and garbage collection operations corresponding to the parking homing mode.
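For illustration, the following sketch condenses the monitoring in steps S101 and S102 into a single switching check; the threshold value and accessor names are assumptions for this example, and in practice the battery threshold may instead be derived from the energy needed to travel the route to the parking device, as described above.

```python
LOW_BATTERY_PCT = 20.0   # an assumed fixed battery threshold

def should_switch_to_homing(battery_pct, dustbin_full):
    """Switching condition combining the monitored battery and dust data."""
    return battery_pct < LOW_BATTERY_PCT or dustbin_full

# When this returns True, the control system builds a navigation route from the
# current localization to the recorded parking-device position and outputs
# control commands to move along it (step S102).
```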
During navigation movement in the parking homing mode, the mobile robot continuously performs step S110 and uses the three-dimensional image data acquired at different positions for localization or obstacle avoidance. Alternatively, the mobile robot uses other sensors for localization, such as an image capture device cooperating with an inertial navigation sensor, uses an infrared sensor for obstacle avoidance, and marks the determined successive localization positions and obstacle positions in the map data, so that when the map data indicates that the mobile robot is near the parking device, step S110 is performed to improve the accuracy of aligning with the berth side of the parking device.
The purpose of the control system acquiring three-dimensional image data using any of the above examples is to use the three-dimensional image data to calculate the deflection information for the mobile robot to move from its current posture toward the berth side of the parking device.
To help ensure that the three-dimensional image data used when performing the subsequent step S120 contains three-dimensional data corresponding to the parking device, in some examples, while continuously performing step S110 in the course of navigating toward the parking device, the control system can recognize the corresponding first image feature as soon as it acquires three-dimensional image data corresponding to the parking device. In this way, the positional relationship between the mobile robot and the berth side of the parking device can be determined earlier, which helps the robot move along a shorter route to a position where the docking operation can be performed.
In other examples, the control system performs the recognition operation when it confirms that it has moved near the parking device. The manner of confirming that it has moved near the parking device is determined by a distance calculated from the navigation route, or determined according to a navigation strategy.
In some specific examples, the control system pre-records in the map the position information of historically successful dockings with the berth side, and accordingly sets the current navigation route for moving to the corresponding position. Thus, when moving near that berth side, the recognition operation is performed to improve the recognition success rate.
In other specific examples, the control system takes a position at a preset distance from the parking device as the endpoint and accordingly sets the current navigation route for moving to the corresponding endpoint; after moving to the endpoint, the control system controls the mobile robot to move toward the parking device according to an edge-following movement strategy. Thus, during edge-following movement, the acquired three-dimensional image data is recognized to identify image features corresponding to any side of the parking device, so that the berth side of the parking device can be derived accordingly. For example, when recognizing first image features containing the back side and/or body sides of the parking device, the control system derives the direction information of the berth side of the parking device relative to the current posture of the mobile robot according to the direction information provided by the image features of the back side and body sides and the reversed angular relationship between the back side and the berth side. As another example, when the control system recognizes image features containing the berth side and a body side of the parking device, it determines the direction information of the berth side relative to the current posture of the mobile robot.
In step S120, deflection information for the mobile robot to move from its current posture toward the berth side of the parking device is determined according to direction information corresponding to at least one side of the parking device in the three-dimensional image data.
Here, the control system uses at least the three-dimensional image data to recognize image features corresponding to at least one side of the parking device, so as to use those image features to determine the direction information of the berth side of the parking device relative to the current posture of the mobile robot. The image features corresponding to the parking device are, in the three-dimensional image data: image features determined based on the recognized outline of the parking device, or image features determined based on a recognition frame (such as a rectangular frame) containing the recognized parking device. The image features include: the image region of the parking device in the three-dimensional image data, and/or feature information of the parking device in the three-dimensional image data, etc. Examples of the feature information include at least one of the following: feature surfaces, feature points, feature lines, etc. The feature lines include: line segments formed by mapping feature surfaces into a plane, and/or line segments determined based on the outline of the parking device, etc. The feature points include: points formed by mapping feature lines into a plane, and/or points determined based on the junctions of the outline of the parking device, etc.
To facilitate distinguishing them from the image features in two-dimensional image data described in subsequent examples, in the following description the image features extracted from three-dimensional image data are called first image features, and the image features extracted from two-dimensional image data are called second image features.
In some embodiments, the control system performs step S1211: recognizing the parking device according to the first image features in the three-dimensional image data, and determining at least one side of the parking device falling within the viewing angle range of the three-dimensional image data.
In some examples, the control system performs image matching on the three-dimensional image data using a preset parking device model, so as to obtain, according to the matched first image features, at least one side of the parking device falling within the viewing angle range of the three-dimensional image data. Here, the parking device model includes three-dimensional data and/or three-dimensional feature identifiers that can describe the three-dimensional space occupied by the parking device, or feature identifiers obtained by dimension-reducing the three-dimensional outline data of the parking device. The parking device model is pre-stored in the memory in the form of an array, a database, a three-dimensional model file, etc.
For example, the control system calculates the similarity between various preset feature identifiers in the parking device model and the acquired three-dimensional image data; when the obtained similarity satisfies a recognition condition, the first image features in the three-dimensional image data whose similarity meets the recognition condition are determined. Using at least one side of the parking device model to which the obtained first image features belong, at least one corresponding side of the parking device is determined.
In step S1212, angle data between the current posture of the mobile robot and the berth side of the parking device model is determined based on the first image features in the preset parking device model matched with the three-dimensional image data and their direction information.
Here, according to the mapping relationship of the matched first image features in the parking device model, the control system determines the direction information of at least one side of the parking device model that the posture faces when the mobile robot photographs the parking device with its current posture. In other words, the mapping relationship reflects at least one side of the parking device model that falls within the viewing angle range under the current posture of the mobile robot. The direction information corresponding to at least one side of the parking device model is, for example, the normal direction of the corresponding mappable side. Alternatively, it is, for example, a direction determined based on the normal directions of the mappable sides, for example obtained by weighted averaging of the normal directions of the mappable sides. Each normal direction is, for example, a direction F1 perpendicular to each body side face of the main body of the parking device model, or a direction F2 determined by projecting the direction F1 onto a plane parallel to the travel plane of the mobile robot.
The control system constructs a three-dimensional coordinate system of the three-dimensional image data, and determines the direction information corresponding to the image features, i.e., the normal direction of at least one side in the parking device model, according to the mapping relationship of the recognized first image features in the parking device model. Please refer to FIG. 5, which is a visualization of the angle data α1 between the normal direction of the berth side of the parking device model and the current posture of the mobile robot. The coordinate system xyz is the three-dimensional coordinate system; the ray Ray1 is the normal direction of the berth side of the parking device model, the ray Ray2 is the normal direction of a body side of the parking device model, and the ray Ray3 is the optical axis direction of the depth image capture device under the current posture of the mobile robot; α1 in the figure is the angle data of the normal direction Ray1 of the berth side of the parking device model in this coordinate system.
It should be noted that the first image features recognized by the control system and their corresponding direction information need not be only the normal direction of the berth side of the parking device model, but may be direction information of any side of the parking device model determined based on the first image features; based on the direction information between the side of the parking device model corresponding to the first image features and the berth side, the angle data (which is also direction information) between the current posture of the mobile robot and the berth side of the parking device model can be calculated.
To be compatible with local structural changes brought about by product iterations of the parking device, in other examples the control system performs step S1221 (not shown): the control system recognizes, from the three-dimensional image data, first image features reflecting the berth side of the parking device. In this example, the structural changes of the parking device are not disruptive. For instance, all types of parking devices include a main body and a base plate extending from the bottom of the main body along the ground direction, where the base plate shape and main body shape in different types of parking devices may be the same or different. As another instance, all types of parking devices include a main body whose berth-side slope differs from that of the other sides; the berth-side slopes of the main bodies of different types of parking devices may be the same or different, but overall each main body exhibits the structural characteristic that its berth side and its other sides have different slopes.
The first image features reflecting the berth side of the parking device may be determined based on image features of the berth side recognized from the three-dimensional image data. For example, the first image features reflecting the berth side of the parking device include at least one of the following: feature surfaces, feature points, feature lines, etc. determined based on the three-dimensional shape of the berth side of the parking device. The feature lines include: line segments formed by mapping feature surfaces into a plane, and/or line segments determined based on the outline of the parking device on the berth side, etc. The feature points include: points formed by mapping feature lines into a plane, and/or points determined based on the junctions of the outline of the parking device on the berth side, etc.
The first image features reflecting the berth side of the parking device may also be determined based on image features of non-berth sides recognized from the three-dimensional image data. For example, the first image features reflecting the berth side of the parking device include at least one of the following: feature surfaces, feature points, feature lines, etc. determined based on the three-dimensional shape of the non-berth sides of the parking device. The feature lines include: line segments formed by mapping feature surfaces into a plane, and/or line segments determined based on the outline of the parking device on the non-berth sides, etc. The feature points include: points formed by mapping feature lines into a plane, and/or points determined based on the junctions of the outline of the parking device on the non-berth sides, etc. Using the first image features describing the sides other than the berth side, the control system can estimate the direction information of the berth side according to the angular relationships between the recognized sides of the parking device and the berth side.
Here, the control system extracts first image features in the three-dimensional image data according to preset recognition conditions reflecting the outline characteristics of the parking device, and uses the extracted first image features to recognize at least one side of the parking device.
The recognition conditions include but are not limited to at least one of the following: angular relationships and/or positional relationships between planes conforming to the outline characteristics in a three-dimensional coordinate system; angular relationships and/or positional relationships between planes conforming to the outline characteristics in a two-dimensional coordinate system; a classifier obtained in advance by machine learning that can recognize the outline characteristics of the parking device, etc. For example, for some parking devices with the aforementioned base plate, the recognition conditions are used to reflect the angular relationship and/or structural relationship between the base plate on the berth side of the parking device and the berth-side main body. For some parking devices with the aforementioned slope relationship, the recognition conditions are used to reflect the angular relationship and/or structural relationship of the slopes of the berth side, back side, and body sides of the parking device relative to the ground.
In some specific examples, the control system clusters the acquired three-dimensional image data in a three-dimensional coordinate system to obtain the positional relationships and angular relationships between plane features in the three-dimensional coordinate system; and when the obtained positional and angular relationships between the plane features include plane features meeting the recognition conditions of the parking device set based on the outline characteristics, the first image features of the recognized parking device are determined based on the plane features meeting the recognition conditions. The first image features are used to determine at least one side of the parking device within the viewing angle range of the three-dimensional image data.
In other specific examples, the control system converts the three-dimensional image data into a two-dimensional coordinate system reflecting the movement plane on which the mobile robot is located, and, within that two-dimensional coordinate system, recognizes feature lines and/or feature surfaces whose angular relationships and/or positions meet the recognition conditions; optionally, the recognized feature lines may also be determined to correspond to feature surfaces in the three-dimensional coordinate system. The feature lines and/or feature surfaces obtained in the coordinate systems of each dimension can all be regarded as first image features. The first image features are used to determine at least one side of the parking device within the viewing angle range of the three-dimensional image data.
Taking a parking device to be docked that includes a main body and a base plate extending from the bottom of the main body along the ground direction as an example, the control system reduces the dimension of the acquired three-dimensional image data into a two-dimensional coordinate system parallel to the travel plane of the mobile robot, obtaining two-dimensional data described by position coordinates; clusters the obtained two-dimensional data according to the distances between the position coordinates, to determine feature lines reflecting the surfaces on each side of the parking device; and checks whether the positional relationships and/or angular relationships between the feature lines satisfy the recognition conditions. If so, it confirms that the parking device is recognized and determines that at least one side of the recognized parking device falls within the viewing angle range of the depth image device of the mobile robot; otherwise, the three-dimensional image data is re-acquired and the above process is repeated. A positional relationship reflecting the above structure is, for example: the line segments determined by clustering include intersecting line segments. An angular relationship reflecting the above structure includes: the line segments determined by clustering include a line segment whose slope is smaller than a preset angle threshold. The above recognition conditions comprehensively reflect at least the following structural characteristics of the parking device: the angular relationship of the outline of the base plate on the berth side lying against the ground, and the positional relationship between the base plate outline and the connected main body outline. The control system screens out the line segments meeting the above two recognition conditions to obtain the feature lines. By analyzing the closed figure formed by the obtained feature lines and other connected line segments, or the closed spatial structure screened out from the three-dimensional image data by the obtained feature lines, the control system confirms the parking device and recognizes that at least the berth side of the parking device falls within the viewing angle range of the corresponding depth image camera device of the mobile robot. This analysis process can be implemented by algorithms such as classifiers or constructing connected domains in the corresponding coordinate system.
The above recognition conditions may also include structural characteristics reflecting the back side of the parking device, so that the control system can still recognize the parking device using the screened-out first image features. This makes it possible for the control system to estimate the direction information between the berth side and the mobile robot using the recognition result of the back side.
Using the examples mentioned in the above step S1221, the control system obtains the first image features in the three-dimensional image data reflecting at least one side of the parking device. The control system performs step S1222 to determine the direction information between the berth side of the parking device and the current posture of the mobile robot.
In step S1222, the direction information between the berth side and the current posture of the mobile robot is determined based on the three-dimensional data recognized from the three-dimensional image data reflecting the berth side of the parking device.
Here, the control system uses the three-dimensional data corresponding to the obtained first image features to calculate the direction information between the corresponding side and the mobile robot.
Here, according to the obtained three-dimensional data corresponding to the first image features reflecting at least one side of the parking device, the control system determines the direction information of at least one side of the parking device that the posture faces when the mobile robot photographs the parking device with its current posture. In other words, the direction information reflects angle data of at least one side of the parking device falling within the viewing angle range under the current posture of the mobile robot. The direction information corresponding to at least one side of the parking device is, for example, the normal direction of the corresponding side. Alternatively, it is, for example, a direction determined based on the normal direction of the corresponding side, for example obtained by weighted averaging of the normal directions of the sides. Each normal direction is, for example, a direction F3 perpendicular to each side plane of the main body of the parking device, or a direction F4 determined by projecting the direction F3 onto a plane parallel to the travel plane of the mobile robot.
For example, using any of the above examples, the control system recognizes that the three-dimensional image data contains first image features corresponding to the berth side of the parking device; in a three-dimensional coordinate system (or two-dimensional coordinate system) constructed based on the three-dimensional image data, the control system determines the normal direction of the plane (or line segment) where the first image features are located, and determines the direction information between the current posture of the mobile robot and the berth side of the parking device according to the angle data between that normal direction and the current posture of the mobile robot in the same three-dimensional (or two-dimensional) coordinate system.
As another example, if the control system recognizes first image features corresponding to the back side of the parking device, then in the three-dimensional coordinate system (or two-dimensional coordinate system) constructed based on the three-dimensional image data, the control system determines the normal direction of the plane (or line segment) where the first image features are located, and determines the direction information between the current posture of the mobile robot and the berth side of the parking device according to the angle data between that normal direction and the current posture of the mobile robot in the same coordinate system, together with the 180-degree directional offset between the back side and the berth side.
Based on the above examples, the accuracy of the obtained direction information is also adjusted based on the distance between the parking device and the mobile robot. For example, the accuracy of the obtained direction information is adjusted according to the depth value corresponding to at least one side of the parking device in the three-dimensional image data; the larger the depth value, the lower the accuracy of the direction information.
To be compatible with parking devices whose overall structure varies greatly but whose local structure is relatively stable, in still other examples, a stable three-dimensional identification structure characterizing at least the berth side of the parking device is provided on the parking device. The three-dimensional identification structure may be provided only on the berth side, or on the berth side and other sides (such as the back side and/or the body sides) of the parking device. Thus, the three-dimensional image data acquired by the control system contains an image region reflecting the berth-side marker of the parking device; the berth-side marker is the three-dimensional identification structure. The control system performs step S1231 (not shown), using preset recognition conditions corresponding to the three-dimensional identification structure to recognize the acquired three-dimensional image data, so as to determine the recognized parking device and at least one side thereof. The control system also performs the aforementioned step S1222 to obtain the direction information, which will not be repeated here.
Those skilled in the art should understand that the various recognition conditions and recognition methods described in the above examples are not mutually exclusive alternatives; they may be combined to improve recognition accuracy, or improved upon on the basis of the above examples.
在另一些实施例中,步骤S120包括:步骤S1241、S1242和S1243。其中,在步骤S1241 中,获取二维图像数据。在步骤S1242中,依据三维图像数据中的第一图像特征,以及二维图像数据的第二图像特征,来识别停泊装置的至少一侧。其中,二维图像数据为包含光感传感器(如CCD、CMS等)的图像摄像装置所提供的。在步骤S1243中,基于从所述三维图像数据中所识别出的反映所述停泊装置的泊位侧的第一图像特征,确定所述泊位侧与所述移动机器人当前姿态之间的方向信息。其中,步骤S1243与前述步骤S1222的执行过程相同或相似,在此不再详述。
在此,所获取的二维图像数据的视角范围和步骤S110中获取的三维图像数据的视角范围具有重叠的范围区域。为了便于利用两类图像数据对停泊装置的至少一侧进行识别,控制系统对两类图像数据对应范围区域内的图像区域进行识别处理。控制系统预设两类图像的各图像区域之间的图像位置映射关系。在一些示例中,设置在移动机器人上的图像摄像装置中集成有光感传感器和距离传感器,则控制系统同步地获取具有完全重叠的视角范围的三维图像数据和二维图像数据,并预设二类图像数据的图像位置映射关系。在另一些示例中,设置在移动机器人上的图像摄像装置包括双目摄像头,则控制系统同步地获取具有部分重叠的范围区域的两幅二维图像数据,并基于该两幅二维图像数据重构在重叠的范围区域内的三维图像数据,并确定所述三维图像数据和其中一个二维图像数据之间的图像位置映射关系。
The control system identifies the berth side of the parking device and its direction information from the image regions of the three-dimensional and two-dimensional image data whose viewing ranges overlap.
Here, to facilitate recognition of the parking device, in some examples the control system presets second image features for identifying the parking device from the two-dimensional image data, or presets a classifier obtained through machine learning, and uses the preset means to identify the parking device and at least one of its sides. The classifier identifies the parking device and at least one of its sides based on features of at least one side of the parking device in the training image samples.
In yet other examples, a three-dimensional marker structure or a planar marker pattern is provided on at least the berth side of the parking device. The marker structure or marker pattern can be captured in the two-dimensional image data by the photosensitive sensor; in other words, the two-dimensional image data contains an image region reflecting the berth-side marker of the parking device. Accordingly, the control system presets second image features for identifying the parking device and/or the marker structure (or marker pattern) from the two-dimensional image data, or a classifier obtained through machine learning, and uses the preset means to identify the parking device and at least one of its sides. The classifier identifies the parking device and at least one of its sides based on features of at least one side of the parking device and/or features of the marker structure (or marker pattern) in the training image samples.
The control system identifies second image features of at least one side of the parking device from the two-dimensional image data and maps them into the three-dimensional image data according to the image-position mapping relationship to obtain the three-dimensional data of the corresponding side. The determined sides of the parking device and their three-dimensional data are used to determine the direction information of the berth side. For example, the control system identifies second image features of the berth side and one lateral side from the two-dimensional image data and, via the mapping relationship, obtains the three-dimensional data corresponding to those two sides in the three-dimensional image data. As another example, the control system identifies second image features of the back side and one lateral side from the two-dimensional image data and, via the mapping relationship, obtains the three-dimensional data corresponding to those two sides; using the preset fact that the back side and the berth side face opposite directions, the control system computes the direction information of the berth side from the obtained three-dimensional data of the back side.
Determining at least one side of the parking device within the viewing range of the three-dimensional image data helps the control system compute the deflection information between the berth side and the mobile robot's current pose.
The direction information is angle data, computed from the three-dimensional image data, between the mobile robot's current pose and at least one side of the parking device. For example, the direction information may be angle data determined from the angular range of the image region occupied by the parking device in the three-dimensional image data; or angle data determined from the normal direction of at least one side of the parking device in the three-dimensional image data; or angle data of the berth side inferred from the three-dimensional data of at least one side of the parking device, such as the normal direction (or angular range) of the berth side.
For example, if the obtained side is the berth side, the corresponding direction information represents the angle data between the mobile robot's current pose and the berth side of the parking device. If the obtained side is the back side, the direction information between the robot's current pose and the berth side is computed from the angular relationship between the back side and the berth side together with the angle data of the back side.
Having obtained, through the above examples, the direction information between the berth side of the parking device and the mobile robot's current pose, the control system executes step S125.
In step S125, the deflection information by which the mobile robot is to move from its current pose is determined according to the direction information.
With the goal of orienting the mobile robot's interface end toward the berth side of the parking device, in some examples, before moving to face the berth side, the control system determines from the obtained direction information the deflection information for moving along the normal direction of the berth side, perpendicular to that normal direction, or at a preset angle to that normal direction. Referring to FIG. 6, which illustrates the deflection information by which the mobile robot must turn to move toward the berth side: to move to the berth side of the parking device, the control system determines from the angle data α1 the angle data α2 by which it must turn in order to be perpendicular to the normal direction of the berth side, and takes this as the deflection information. In other examples, the angle data α1 and α2 may be complementary, supplementary, or equal.
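A minimal sketch of the α1-to-α2 relation described for FIG. 6, under the assumption that which relation applies (complementary, supplementary, or equal) is fixed by the chosen sign convention and mounting geometry:
```python
def deflection_from_alpha1(alpha1_deg, relation="complement"):
    """Derive the turn angle alpha2 from the measured angle alpha1."""
    if relation == "complement":        # alpha1 + alpha2 = 90
        return 90.0 - alpha1_deg
    if relation == "supplement":        # alpha1 + alpha2 = 180
        return 180.0 - alpha1_deg
    return alpha1_deg                   # same-angle convention
```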
Still with the goal of orienting the mobile robot's interface end toward the berth side, in other examples the control system determines the deflection information for moving to a preset target position located at a docking distance from the berth side, based on that target position and the obtained direction information of the berth side. The docking distance is a preset distance over which the mobile robot travels in a straight line so as to reach the docking end accurately.
For example, referring to FIG. 7, which illustrates another deflection required for the mobile robot to move toward the berth side: to move to the target position P at which it will dock with the berth side, the control system determines the angle data α3 by which it must turn to reach P from the angle data α1, the distance data d1 between the parking device and the mobile robot, and the distance data d2 between P and the berth side, and takes α3 as the deflection information. The distance data d1 is obtained from the three-dimensional data describing the first image features.
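A minimal sketch of the FIG. 7 geometry under assumed conventions: the dock face sits at the origin with its outward normal along +x, the target point P lies d2 in front of it, the robot is d1 away at bearing α1 from that normal, and α3 is how far the robot must turn from looking at the dock to looking at P. The sign conventions are illustrative, not this application's.
```python
import numpy as np

def turn_to_target_deg(alpha1_deg, d1, d2):
    """Angle alpha3 between the robot-to-dock and robot-to-P directions."""
    a1 = np.radians(alpha1_deg)
    robot = d1 * np.array([np.cos(a1), np.sin(a1)])   # robot position
    p = np.array([d2, 0.0])                           # docking target P
    to_dock, to_p = -robot, p - robot
    cross = to_dock[0] * to_p[1] - to_dock[1] * to_p[0]
    return np.degrees(np.arctan2(cross, to_dock @ to_p))

# Example: turn_to_target_deg(30.0, 2.0, 0.5) is roughly 9 degrees of correction.
```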
It should be noted that, as those skilled in the art will understand, although the examples mentioned in step S120 are described as separate steps, in implementation they may be packaged into more tightly coupled algorithms, so that the control system processes the received three-dimensional image data, or three-dimensional and two-dimensional image data, to obtain the direction information between the mobile robot's current pose and the berth side of the parking device.
In step S130, a control instruction is output according to the deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device. The control instruction includes at least an angle instruction derived from the deflection information; the angle instruction may be, for example, the deflection information itself, or the number of rotations (or a rotation speed and duration) corresponding to the deflection information.
The mobile robot's movement system adjusts the robot's pose according to the angle instruction in the control instruction, so that the robot as a whole turns toward the berth side of the parking device. The robot turns toward the berth side so that its second interface end can establish a signal connection with the docking end on the berth side, for example an electrical-signal connection or a short-range wireless-signal connection (such as RF signal communication or a wireless-charging signal connection).
Using three-dimensional image data to determine the deflection direction in which the mobile robot moves toward the berth side allows the robot to adjust its pose using measured spatial data. This reduces situations in which, because two-dimensional image data lacks information on the distance between the mobile robot and the parking device, the robot cannot deflect and move toward the berth side efficiently.
In practical applications, the mobile robot also needs to perform displacement operations so that its second interface end faces the berth side. To this end, the control instruction output by the control system further includes a displacement instruction, yielding a control instruction that contains both a displacement instruction and an angle instruction. In some examples, the displacement instruction contains a preset fixed movement length or fixed movement duration, or a motor speed, motor running time, or number of motor revolutions set on the basis of that fixed movement length or duration. Under such a control instruction, the robot's movement system adjusts the pose according to the deflection direction, drives the robot as a whole over a straight-line distance of the preset movement length, turns back according to the deflection direction, and then redetermines the deflection direction.
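How a fixed movement length might be turned into a motor command can be sketched as follows for a wheeled base; the wheel radius and gear ratio are assumed values, not parameters disclosed in this application.
```python
import math

def displacement_to_motor_turns(distance_m, wheel_radius_m=0.035, gear_ratio=30.0):
    """Convert a straight-line displacement into commanded motor revolutions."""
    wheel_turns = distance_m / (2 * math.pi * wheel_radius_m)
    return wheel_turns * gear_ratio
```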
In other examples, and not necessarily in sequence with step S120, the control system also executes step S140: generating a movement route based on the relative positional relationship between the parking device and the mobile robot reflected by at least one type of environment-sensing data, together with the deflection information. The relative positional relationship includes a further item of deflection information and/or distance information between the robot's current position and the parking device.
To distinguish the deflection information determined in step S120 from this further deflection information, in the following description the deflection information representing the offset between the mobile robot's pose and its pose when squarely facing the parking device (that is, the further deflection information) is called the first deflection information, and the deflection information representing the offset between the mobile robot's pose and the direction the berth side faces (that is, the deflection information determined in step S120) is called the second deflection information. Using these two items of deflection information, the control system moves the mobile robot until it squarely faces the berth side of the parking device, which facilitates the subsequent docking movement.
Here, the environment-sensing data comes from at least one environment-sensing device provided on the mobile robot. Examples of such devices are the various sensors mentioned above, such as a depth image capture device, an image capture device, or an inertial navigation sensor.
Taking the case in which the environment-sensing data is three-dimensional image data as an example, step S140 includes step S141: determining the first deflection information from the image features corresponding to the parking device in the three-dimensional image data.
The first deflection information is determined from the image position, within the three-dimensional image data, of the first image features corresponding to the parking device. Here, according to the mounting position of the imaging device on the mobile robot, the control system presets the positional relationship between the target image position, when the robot's second interface end squarely faces the berth side, and the full frame of three-dimensional image data. For example, the target image position lies in the central region of the full frame, or its edge lies at one boundary of the frame. When the control system obtains the image position (also called the image region) of the first image features within the full frame, it derives the first deflection information between the robot's current pose and the pose when squarely facing the parking device from the image-position offset between that image position and the target image position.
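A minimal sketch of deriving the first deflection from this image-position offset: the horizontal distance between the detected region's center and the preset target image position, scaled through the camera's angular resolution. The pinhole model and the field-of-view value are assumptions.
```python
import math

def first_deflection_deg(region_center_u, target_u, image_width, hfov_deg=70.0):
    """First deflection implied by a horizontal pixel offset in the frame."""
    focal_px = (image_width / 2) / math.tan(math.radians(hfov_deg / 2))
    return math.degrees(math.atan2(region_center_u - target_u, focal_px))

# Example: a dock detected 80 px right of a 640-px frame's preset center gives
# first_deflection_deg(400, 320, 640), about 9.9 degrees to the right.
```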
Referring to FIG. 8, the first deflection by which the mobile robot's current pose deviates from the direction squarely facing the parking device is β; the dashed line L1 is the direction the robot's current pose faces, the dashed line L2 is the direction of the parking device relative to the robot as determined from the three-dimensional image data, and β is the angle between L1 and L2. Note that the parking device shown in FIG. 8 may equally have its back side within the viewing range of the three-dimensional image data; the angle β is not tied to any particular side of the parking device.
Taking the case in which the environment-sensing data further includes two-dimensional image data as an example, the first deflection information can also be obtained through step S142: determining the relative positional relationship from the image position, within the two-dimensional image data, of the second image features corresponding to the parking device. The relative positional relationship contains an angular relationship. For example, the procedure for determining the first deflection information in step S141 above also applies to computing the first deflection information from the image position of the parking device in the two-dimensional image data, and is not detailed again here.
Steps S141 and S142 may also be combined to improve the computational precision of the first deflection information. For example, the second image features of the parking device in the two-dimensional image data and the first image features in the three-dimensional image data are extracted separately; by matching the image positions of the first and second image features, the image position of a matched first or second image feature is selected to determine the first deflection information between the robot's current pose and its pose when squarely facing the parking device.
The manner of determining the distance data in the relative positional relationship in step S140 includes step S143: determining the distance data from the image features corresponding to the parking device in the three-dimensional image data. Specifically, the control system uses the depth values corresponding to the parking device in the three-dimensional image data to determine the distance information between the mobile robot and the parking device.
Where the environment-sensing data further includes two-dimensional image data and inertial navigation data, the control system determines the relative positional relationship using the two-dimensional image data and the inertial navigation data.
In some examples, step S140 may further include step S144: determining the relative positional relationship from two-dimensional image data acquired by the mobile robot at different positions and from the inertial navigation data of its movement between those positions.
Here, the control system measures, with the inertial navigation sensor, the movement distance and movement pose of the mobile robot from position Pos1 to position Pos2, and captures two frames of two-dimensional image data, Pic1 and Pic2, with the image acquisition device at Pos1 and Pos2 respectively. Since the image acquisition device images physical objects into two-dimensional image data at a proportional relationship, the control system uses the inertial navigation data of the movement between the at least two positions Pos1 and Pos2 to determine the conversion relation s (also called the scale) between the physical position of a given object and the image position of the second image feature corresponding to that object in the image data, and then uses this conversion relation to determine the relative positional relationship between the mobile robot and the object. When the object is the identified parking device, the resulting relative positional relationship includes the first deflection information between the robot's current pose and the parking device, as well as the distance information between them.
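A minimal sketch of estimating this conversion relation, under the simplifying assumption that the robot translates sideways by a known odometry baseline while the dock feature shifts horizontally in the image, so the familiar stereo relation depth = focal × baseline / disparity applies. All numbers are illustrative.
```python
def depth_from_baseline(focal_px, baseline_m, u_pos1, u_pos2):
    """Depth to a feature observed at columns u_pos1 (Pic1) and u_pos2 (Pic2)
    after a sideways move of baseline_m measured by inertial odometry."""
    disparity = abs(u_pos1 - u_pos2)
    if disparity == 0:
        raise ValueError("no parallax: move farther between Pos1 and Pos2")
    return focal_px * baseline_m / disparity

# Example: focal 457 px, 0.10 m sideways move, feature shifts 23 px, giving
# depth_from_baseline(457, 0.10, 300, 323), about 2.0 m to the dock.
```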
Corresponding to the examples in step S140, when the control system obtains the first deflection information, step S130 includes step S131: outputting a control instruction according to the first deflection information so that the mobile robot squarely faces the parking device. Here, the control instruction contains only the angle instruction corresponding to the first deflection information, to adjust the robot's pose until it squarely faces the parking device. The control system then executes steps S110 to S130, exploiting the fact that when squarely facing the parking device it is easier to extract the first image features and richer three-dimensional image information, thereby improving the accuracy of computing the second deflection information and the relative positional relationship.
When the control system obtains the second deflection information, it generates a movement route using the second deflection information. The movement route may cover a preset movement distance or a preset movement time interval; it may also be a route from the robot's current position to a destination at a preset distance from the berth side. Step S130 then includes step S132: outputting a control instruction according to the movement route so that the mobile robot squarely faces the parking device.
Here, with the movement routes provided by the above examples, the control instruction output by the control system contains an angle instruction and/or a displacement instruction. The angle instruction contains information derived solely from the second deflection information, for example the second deflection information itself or the number of rotations (or a rotation speed and duration) corresponding to it. The displacement instruction contains information derived from the movement route, for example the distance to be moved under the corresponding angle instruction along the route.
When both the relative positional relationship between the mobile robot and the parking device and the second deflection information have been obtained, the control system generates a movement route based on them and executes step S132 to move from the robot's current position to the destination along that route. Here, the control instruction output by the control system contains an angle instruction and/or a displacement instruction. The angle instruction contains deflection information derived solely from the second deflection information, or deflection information obtained by superimposing (or deduplicating) the first and second deflection information; for example, the angle instruction contains the resulting deflection information itself or the number of rotations (or a rotation speed and duration) corresponding to it. The displacement instruction contains information derived from the movement route, for example the distance to be moved under the corresponding angle instruction along the route.
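Composing the angle command from the two deflections, and converting it into wheel turns for an in-place rotation on a differential-drive base, might look like the following sketch; the track width and wheel radius are assumed values, and whether the two deflections simply add or partially overlap depends on how each was measured.
```python
import math

def angle_command_deg(first_deg, second_deg, overlap_deg=0.0):
    """Superimpose the first and second deflections, removing any overlap."""
    return first_deg + second_deg - overlap_deg

def inplace_turn_wheel_turns(angle_deg, track_m=0.25, wheel_radius_m=0.035):
    """Signed wheel revolutions per side for an in-place turn of angle_deg."""
    arc = math.radians(angle_deg) * (track_m / 2)   # arc length per wheel
    return arc / (2 * math.pi * wheel_radius_m)
```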
While the mobile robot moves according to the above control instructions, the control system continually repeats the above process to keep correcting the relative positional relationship between the robot and the berth side, so that under this navigation scheme the robot moves until its second interface end squarely faces the direction of the berth side of the parking device.
Compared with computing the bearing between the berth side and the mobile robot from two-dimensional image data, where the pixel positions of the identified contour of the parking device are prone to error from filtering, background colors similar to the device, and so on, the various examples in this application exploit the advantage that the angle and depth value of each pixel in three-dimensional image data are directly measured, and use the structural characteristics of the parking device to measure the spatial bearing of each of its sides. Therefore, using the direction information of the berth side reflected in the three-dimensional image data helps, before docking, to quickly determine the second deflection information needed to turn the robot from its current pose toward the berth side, effectively reducing the number of attempts the robot must make.
This application further provides an embodiment of a parking control method; this embodiment may be triggered by the monitoring described in steps S101 and S102, which determines that the parking control method should be executed. Referring to FIG. 9, it is a schematic flowchart of a further parking control method provided on the basis of the inventive concept of this application.
In step S210, two-dimensional image data and three-dimensional image data are acquired. The overlapping viewing range between the two-dimensional and three-dimensional image data is the same as or similar to that in the examples mentioned in steps S110 to S130. For example, the two-dimensional and three-dimensional image data come from an environment-sensing device integrating a photosensitive element and a ToF measurement element; such a device outputs two-dimensional and three-dimensional image data over the same viewing range.
Here, the two-dimensional and three-dimensional image data may be acquired synchronously or asynchronously.
Taking asynchronous acquisition as an example, the control system first acquires two-dimensional image data to execute the subsequent steps S220 and S240, so that the mobile robot squarely faces the parking device, and then acquires three-dimensional image data to execute the subsequent steps S230 and S240, so that the robot moves in the direction facing the berth side of the parking device.
Taking synchronous acquisition as an example, the control system may use the two-dimensional and three-dimensional image data in subsequent steps, such as S220 and S230, to identify the parking device or at least one of its sides.
While moving toward the berth side of the parking device, the control system may also, according to its data needs for two-dimensional and three-dimensional image data, acquire two-dimensional image data when determining the first deflection information, and synchronously acquire two-dimensional and three-dimensional image data when determining the second deflection information.
In step S220, the first deflection information between the mobile robot's current pose and its pose when squarely facing the parking device is determined from the image position, within the two-dimensional image data, of the two-dimensional image region corresponding to the parking device.
Here, step S220 is performed in the same or a similar way as steps S141 and S142 in the foregoing examples. For example, the control system identifies the second image features of the parking device and determines the image position of the corresponding two-dimensional image region within the two-dimensional image data. The two-dimensional image region is, for example, a rectangular box containing the parking device's pixel data, or the contour enclosed by that pixel data. With a preset image-position relationship between the target image region of the parking device, as captured when the robot squarely faces it, and the full frame of two-dimensional image data, the control system determines the image-position offset between the identified two-dimensional image region and the target image region, thereby obtaining the first deflection information between the robot's current pose and its pose when squarely facing the parking device.
In step S230, the second deflection information for the mobile robot to move from its current pose toward the berth side of the parking device is determined from the direction information of at least one side of the parking device in the three-dimensional image data.
Here, step S230 is performed in the same or a similar way as the examples in the aforementioned step S120 and is not detailed again here.
In step S240, at least one control instruction is output according to the determined first deflection information and/or second deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device.
Here, step S240 is performed in the same or a similar way as the examples in the aforementioned step S130.
In the above example of acquiring the two-dimensional and three-dimensional image data in separate stages, after executing step S220 the control system executes step S241: outputting a first control instruction according to the first deflection information so that the mobile robot squarely faces the parking device. The first control instruction contains an angle instruction corresponding to the first deflection information, causing the robot to rotate its pose until it faces the parking device.
The control system continues with step S230 to obtain the second deflection information, and executes step S242: outputting a second control instruction according to the second deflection information so that the mobile robot moves in the direction facing the berth side. For example, the control system may generate a movement route from the second deflection information; the output second control instruction then contains an angle instruction corresponding to the second deflection information and a displacement instruction corresponding to the movement route.
In the above example of acquiring the two-dimensional and three-dimensional image data synchronously and obtaining the first and second deflection information from them, the control system outputs control instructions according to the determined first and second deflection information. For example, the control system may generate a movement route from the first and second deflection information; the output control instruction then contains an angle instruction corresponding to the first and second deflection information and a displacement instruction corresponding to the movement route.
The movement routes generated in the above examples correspond to the respective examples in step S130 and are not detailed again here.
This application further provides an embodiment of a parking control method; this embodiment may be triggered by the monitoring described in steps S101 and S102, which determines that the parking control method should be executed. Referring to FIG. 10, it is a schematic flowchart of a further parking control method provided on the basis of the inventive concept of this application.
In step S310, three-dimensional image data is acquired. Step S310 is the same as or similar to step S110 in the foregoing examples and is not repeated here.
In step S320, the first deflection information between the mobile robot's current pose and its pose when squarely facing the parking device is determined from the image position, within the three-dimensional image data, of the three-dimensional image region corresponding to the parking device. Step S320 is the same as or similar to the aforementioned step S141 and is not detailed again here.
In step S330, the second deflection information for the mobile robot to move from its current pose toward the berth side of the parking device is determined from the direction information of at least one side of the parking device in the three-dimensional image data. Step S330 is the same as or similar to the aforementioned step S120 and is not detailed again here.
In step S340, at least one control instruction is output according to the determined first deflection information and/or second deflection information, so that the mobile robot moves in the direction facing the berth side of the parking device. Step S340 is the same as or similar to the aforementioned step S130 or step S240 and is not detailed again here.
This application further provides a computer-readable/writable storage medium storing at least one program which, when invoked, executes and implements at least one of the embodiments described above for the control methods shown in FIGS. 3, 4, 7, and 8.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a mobile robot equipped with the storage medium to execute all or some of the steps of the methods described in the embodiments of this application.
In the embodiments provided in this application, the computer-readable/writable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory, USB drives, removable hard disks, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Furthermore, any connection may properly be termed a computer-readable medium. For example, if instructions are sent from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable/writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used in this application, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described by the computer programs of the methods of this application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. The steps of the methods or algorithms disclosed in this application may be embodied in processor-executable software modules, which may reside on a tangible, non-transitory computer-readable/writable storage medium. A tangible, non-transitory computer-readable/writable storage medium may be any available medium that a computer can access.
The flowcharts and block diagrams in the above figures of this application illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of this application. On this basis, each block in a flowchart or block diagram may represent a module, program segment, or portion of code containing one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The above embodiments merely illustrate the principles of this application and their effects, and are not intended to limit this application. Anyone familiar with this technology may modify or alter the above embodiments without departing from the spirit and scope of this application. Accordingly, all equivalent modifications or alterations completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed in this application shall still be covered by the claims of this application.

Claims (25)

  1. A parking control method for a mobile robot, comprising:
    acquiring three-dimensional image data;
    determining, according to direction information of at least one side of a parking device in the three-dimensional image data, deflection information for the mobile robot to move from its current pose toward a berth side of the parking device; and
    outputting a control instruction according to the deflection information, so that the mobile robot moves in a direction facing the berth side of the parking device.
  2. The parking control method for a mobile robot according to claim 1, wherein the step of determining, according to direction information of at least one side of the parking device in the three-dimensional image data, deflection information for the mobile robot to move from its current pose toward the berth side of the parking device comprises:
    determining direction information between the current pose of the mobile robot and a berth side of a preset parking-device model, based on image features in the model that match the three-dimensional image data; and
    determining the deflection information according to the determined direction information.
  3. The parking control method for a mobile robot according to claim 1, wherein the step of determining, according to direction information of at least one side of the parking device in the three-dimensional image data, deflection information for the mobile robot to move from its current pose toward the berth side of the parking device comprises:
    determining direction information between the berth side and the current pose of the mobile robot, based on three-dimensional data identified from the three-dimensional image data as reflecting the berth side of the parking device; and
    determining the deflection information according to the determined direction information.
  4. The parking control method for a mobile robot according to claim 1, further comprising:
    acquiring two-dimensional image data; and
    determining direction information reflecting the berth side of the parking device from image regions of the three-dimensional image data and the two-dimensional image data whose viewing ranges overlap.
  5. The parking control method for a mobile robot according to claim 4, wherein the two-dimensional image data contains an image region reflecting a berth-side marker of the parking device.
  6. The parking control method for a mobile robot according to claim 1, wherein the step of determining, according to direction information of at least one side of the parking device in the three-dimensional image data, deflection information for the mobile robot to move from its current pose toward the berth side of the parking device comprises:
    with the goal of having the interface end of the mobile robot squarely face the berth side of the parking device, determining, according to the direction information, deflection information for moving along the normal direction of the berth side, perpendicular to that normal direction, or in a direction at a preset angle to that normal direction; and/or
    with the goal of having the interface end of the mobile robot squarely face the berth side of the parking device, determining deflection information for moving to a preset target position located at a docking distance from the berth side, based on that target position and on distance data and direction information corresponding to the parking device.
  7. The parking control method for a mobile robot according to claim 1, wherein at least one environment-sensing device is provided on the mobile robot to output environment-sensing data, the environment-sensing data including the three-dimensional image data;
    the parking control method further comprises: generating a movement route according to a relative positional relationship between the parking device and the mobile robot reflected by at least one type of environment-sensing data, together with the deflection information; and
    the step of outputting a control instruction according to the deflection information comprises: outputting a control instruction according to the movement route, so that the mobile robot moves in a direction facing the berth side of the parking device.
  8. The parking control method for a mobile robot according to claim 7, wherein the relative positional relationship is determined in at least one of the following ways:
    determining the relative positional relationship according to image features corresponding to the parking device in the three-dimensional image data;
    the environment-sensing data further including two-dimensional image data, determining the relative positional relationship according to the image position, in the corresponding image data, of image features corresponding to the parking device in the two-dimensional image data and/or the three-dimensional image data; and
    the environment-sensing data further including two-dimensional image data and inertial navigation data, determining the relative positional relationship according to two-dimensional image data acquired by the mobile robot at different positions and the inertial navigation data of movement between those positions.
  9. The parking control method for a mobile robot according to claim 1, wherein the parking device is a charging dock and/or a refuse-collection device, the method further comprising:
    monitoring battery data and/or dust-collection data of the mobile robot, and current positioning information of the mobile robot; and
    outputting, according to the monitored battery data and/or dust-collection data and a navigation route between the positioning information and the parking device, a control instruction for controlling the mobile robot to move along the navigation route toward the parking device, so that the mobile robot acquires the three-dimensional image data at least when it has moved to the vicinity of the parking device.
  10. The parking control method for a mobile robot according to claim 1, wherein the three-dimensional image data contains an image region reflecting a berth-side marker of the parking device.
  11. A parking control method for a mobile robot, comprising:
    acquiring two-dimensional image data and three-dimensional image data;
    determining, according to the image position, within the two-dimensional image data, of a two-dimensional image region corresponding to a parking device, first deflection information between the current pose of the mobile robot and its pose when squarely facing the parking device;
    determining, according to direction information of at least one side of the parking device in the three-dimensional image data, second deflection information for the mobile robot to move from its current pose toward a berth side of the parking device; and
    outputting at least one control instruction according to the determined first deflection information and/or second deflection information, so that the mobile robot moves in a direction facing the berth side of the parking device.
  12. The parking control method for a mobile robot according to claim 11, wherein the two-dimensional image data and the three-dimensional image data come from an environment-sensing device integrating a photosensitive element and a ToF measurement element.
  13. The parking control method for a mobile robot according to claim 11, wherein the two-dimensional image data contains an image region reflecting a berth-side marker of the parking device; and/or the three-dimensional image data contains an image region reflecting a berth-side marker of the parking device.
  14. The parking control method for a mobile robot according to claim 11, wherein the step of outputting at least one control instruction according to the first deflection information and/or the second deflection information comprises executing the following steps in sequence:
    outputting a first control instruction according to the first deflection information, so that the mobile robot squarely faces the parking device; and
    outputting a second control instruction according to the second deflection information, so that the mobile robot moves in a direction facing the berth side of the parking device.
  15. The parking control method for a mobile robot according to claim 11, wherein the step of determining, according to direction information of at least one side of the parking device in the three-dimensional image data, second deflection information for the mobile robot to move from its current pose toward the berth side of the parking device comprises at least one of:
    determining direction information between the current pose of the mobile robot and a berth side of a preset parking-device model, based on image features in the model that match the three-dimensional image data;
    determining direction information between the berth side and the current pose of the mobile robot, based on three-dimensional data identified from the three-dimensional image data as reflecting the berth side of the parking device; or
    determining direction information reflecting the berth side of the parking device from image regions of the three-dimensional image data and the two-dimensional image data whose viewing ranges overlap;
    wherein the direction information determined in any of these ways is used to obtain the second deflection information.
  16. The parking control method for a mobile robot according to claim 11, wherein the parking device is a charging dock and/or a refuse-collection device, the method further comprising:
    monitoring battery data and/or dust-collection data of the mobile robot, and current positioning information of the mobile robot; and
    outputting, according to the monitored battery data and/or dust-collection data and a navigation route between the positioning information and the parking device, a control instruction for controlling the mobile robot to move along the navigation route toward the parking device, so that the mobile robot acquires at least the two-dimensional image data at least when it has moved to the vicinity of the parking device.
  17. A parking control method for a mobile robot, comprising:
    acquiring three-dimensional image data;
    determining, according to the image position, within the three-dimensional image data, of a three-dimensional image region corresponding to a parking device, first deflection information between the current pose of the mobile robot and its pose when squarely facing the parking device;
    determining, according to direction information of at least one side of the parking device in the three-dimensional image data, second deflection information for the mobile robot to move from its current pose toward a berth side of the parking device; and
    outputting at least one control instruction according to the determined first deflection information and/or second deflection information, so that the mobile robot moves in a direction facing the berth side of the parking device.
  18. The parking control method for a mobile robot according to claim 17, wherein the three-dimensional image data contains an image region reflecting a berth-side marker of the parking device.
  19. The parking control method for a mobile robot according to claim 17, wherein the step of outputting at least one control instruction according to the first deflection information and/or the second deflection information comprises executing the following steps in sequence:
    outputting a first control instruction according to the first deflection information, so that the mobile robot squarely faces the parking device; and
    outputting a second control instruction according to the second deflection information, so that the mobile robot moves in a direction facing the berth side of the parking device.
  20. The parking control method for a mobile robot according to claim 17, wherein the step of determining, according to direction information of at least one side of the parking device in the three-dimensional image data, second deflection information for the mobile robot to move from its current pose toward the berth side of the parking device comprises at least one of:
    determining direction information between the current pose of the mobile robot and a berth side of a preset parking-device model, based on image features in the model that match the three-dimensional image data; or
    determining direction information between the berth side and the current pose of the mobile robot, based on three-dimensional data identified from the three-dimensional image data as reflecting the berth side of the parking device;
    wherein the direction information determined in either way is used to obtain the second deflection information.
  21. The parking control method for a mobile robot according to claim 17, wherein the parking device is a charging dock and/or a refuse-collection device, the method further comprising:
    monitoring battery data and/or dust-collection data of the mobile robot, and current positioning information of the mobile robot; and
    outputting, according to the monitored battery data and/or dust-collection data and a navigation route between the positioning information and the parking device, a control instruction for controlling the mobile robot to move along the navigation route toward the parking device, so that the mobile robot acquires the three-dimensional image data at least when it has moved to the vicinity of the parking device.
  22. A control system for a mobile robot, comprising:
    at least one first interface end, configured to receive three-dimensional image data;
    at least one memory, configured to store at least one program;
    at least one processor, connected to the at least one first interface end and the at least one memory and configured to invoke and execute the at least one program, so as to coordinate the at least one first interface end and the at least one memory to execute and implement the parking control method according to any one of claims 1-10, 11-16, or 17-21; and
    a second interface end, configured to perform docking confirmation with a parking device.
  23. A mobile robot, comprising:
    at least one sensor, configured to provide at least three-dimensional image data;
    a movement system, configured to perform movement operations according to received control instructions; and
    the control system according to claim 22, connected to each of the sensors and to the movement system through its first interface ends, and configured to output the control instructions according to at least the acquired three-dimensional image data.
  24. A mobile robot system, comprising:
    the mobile robot according to claim 23; and
    a parking device, configured to dock with the mobile robot.
  25. A computer-readable storage medium, storing at least one program which, when invoked, executes and implements the parking control method according to any one of claims 1-10, 11-16, or 17-21.
PCT/CN2021/116331 2020-12-11 2021-09-02 Parking control method, control system, mobile robot and storage medium WO2022121392A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011446027.5 2020-12-11
CN202011446027.5A CN114690751A (zh) Parking control method, control system, mobile robot and storage medium

Publications (1)

Publication Number Publication Date
WO2022121392A1 (zh) 2022-06-16

Family

ID=81974024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116331 WO2022121392A1 (zh) Parking control method, control system, mobile robot and storage medium

Country Status (2)

Country Link
CN (1) CN114690751A (zh)
WO (1) WO2022121392A1 (zh)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060111780A (ko) * 2005-04-25 2006-10-30 엘지전자 주식회사 Position calculation system for a mobile robot, and charging-station return system and method using the same
US20140100693A1 (en) * 2012-10-05 2014-04-10 Irobot Corporation Robot management systems for determining docking station pose including mobile robots and methods using same
CN106142082A (zh) * 2016-06-23 2016-11-23 昆山穿山甲机器人有限公司 Positioning and navigation method for a robot to correct path deviation
CN109217484A (zh) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 Wireless charging alignment device and system
CN110515383A (zh) * 2019-08-30 2019-11-29 深圳飞科机器人有限公司 Autonomous charging method and mobile robot
CN111481112A (zh) * 2019-01-29 2020-08-04 北京奇虎科技有限公司 Recharge alignment method and apparatus for a sweeping robot, and sweeping robot

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115577816A (zh) * 2022-11-21 2023-01-06 南京联云智能系统有限公司 Intelligent anchorage scheduling method, system and device
CN115577816B (zh) * 2022-11-21 2023-08-11 南京联云智能系统有限公司 Intelligent anchorage scheduling method, system and device
CN117414110A (zh) * 2023-12-14 2024-01-19 先临三维科技股份有限公司 Control method and apparatus for a three-dimensional scanning device, terminal device and system
CN117414110B (zh) * 2023-12-14 2024-03-22 先临三维科技股份有限公司 Control method and apparatus for a three-dimensional scanning device, terminal device and system

Also Published As

Publication number Publication date
CN114690751A (zh) 2022-07-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21902100; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21902100; Country of ref document: EP; Kind code of ref document: A1)