WO2023045644A1 - Positioning method and device for mobile robot, storage medium and electronic device - Google Patents

Positioning method and device for mobile robot, storage medium and electronic device Download PDF

Info

Publication number
WO2023045644A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
state
target
target object
position coordinates
Prior art date
Application number
PCT/CN2022/113375
Other languages
French (fr)
Chinese (zh)
Inventor
王朕
郁顺昌
齐焱
Original Assignee
追觅创新科技(苏州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司 filed Critical 追觅创新科技(苏州)有限公司
Publication of WO2023045644A1 publication Critical patent/WO2023045644A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 Installations of electric equipment
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection

Definitions

  • the present disclosure relates to the communication field, and in particular, to a positioning method and device for a mobile robot, a storage medium, and an electronic device.
  • Robots need to interact with the environment and users, and environmental perception is the most basic and critical link in the interaction process of robots.
  • To ensure the robot's positioning and mapping accuracy, it is necessary to use past information to obtain an accurate machine pose.
  • Existing machine relocalization techniques fall into methods based on traditional feature points, methods based on machine learning, and methods based on deep learning. Relocalization based on traditional feature points matches feature points and then derives the machine pose.
  • VSLAM: Vision Simultaneous Location And Mapping (visual simultaneous localization and mapping)
  • The purpose of the present disclosure is to provide a positioning method and device for a mobile robot, a storage medium and an electronic device, so as to at least solve the problem in the related art that the position of the mobile robot cannot be determined when a pose failure occurs in the mobile robot.
  • A positioning method for a mobile robot, including: when it is detected that the mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot while in a second state; the first state is a state in which a pose failure occurs in the mobile robot, and the second state is a state in which no pose failure occurs in the mobile robot; determining first relative positions between the mobile robot and the at least two target objects; and calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
  • Acquiring the position coordinates of the at least two target objects detected by the mobile robot in the second state includes: identifying the at least two target objects based on the image data collected by an image acquisition component; and matching the identifications of the at least two target objects against a preset relationship library to obtain the position coordinates of the at least two target objects; the preset relationship library stores the correspondence between the identification of a target object and the position coordinates of that target object.
  • Determining the first relative positions between the mobile robot and the at least two target objects includes: determining the distances between the mobile robot and the at least two target objects according to a target ranging method to obtain at least two first distances; determining a first directional relationship between the mobile robot and the at least two objects based on the image data collected by the image acquisition component; and determining the first relative positions between the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.
  • The method further includes: when the mobile robot is in the second state, identifying a target object based on the image data collected by the image acquisition component; determining the position coordinates of the target object; and storing the identification of the target object and the position coordinates of the target object correspondingly in a preset relationship library.
  • Determining the position coordinates of the target object includes: determining a second relative position between the target object and the mobile robot; and determining the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
  • Determining the second relative position between the target object and the mobile robot includes: determining the distance between the mobile robot and the target object according to the target ranging method to obtain a second distance; determining a second directional relationship between the mobile robot and the target object based on the image data collected by the image acquisition component; and determining the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
  • It is detected that the mobile robot is in the first state through at least one of the following: detecting that the moving wheels of the mobile robot are idling; detecting that the moving wheels of the mobile robot are not in contact with the target plane.
  • A positioning device for a mobile robot, including: an acquisition module, configured to, when it is detected that the mobile robot is in the first state, acquire the position coordinates of at least two target objects detected by the mobile robot in the second state; the first state is a state in which a pose failure occurs in the mobile robot, and the second state is a state in which no pose failure occurs in the mobile robot; a determination module, configured to determine first relative positions between the mobile robot and the at least two target objects; and a calculation module, configured to calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
  • A computer-readable storage medium in which a computer program is stored, the computer program being configured to execute, when run, the positioning method for a mobile robot described in any one of the above.
  • An electronic device including a memory and a processor, a computer program being stored in the memory and the processor being configured to run the computer program so as to execute the positioning method for a mobile robot described in any one of the above.
  • Through the present disclosure, when a pose failure occurs in the mobile robot, the position coordinates of at least two target objects obtained while the mobile robot had no pose failure are acquired; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and the position coordinates of the mobile robot can then be calculated from the first relative positions and the position coordinates of the target objects.
  • FIG. 1 is a block diagram of the hardware structure of a computer terminal of a method for determining a target object according to an embodiment of the present disclosure
  • FIG. 2 is a flow chart of a method for determining a target object according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram (1) of detecting obstacles according to a method for determining a target object according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram (2) of detecting obstacles according to a method for determining a target object according to an embodiment of the present disclosure
  • FIG. 1 is a block diagram of a hardware structure of a robot according to a positioning method for a mobile robot according to an embodiment of the present disclosure.
  • The mobile robot may include one or more processors 102 (only one is shown in Figure 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (Microprocessor Unit, MPU) or a programmable logic device (PLD)) and a memory 104 for storing data. Optionally, the mobile robot may also include a transmission device 106 and an input/output device 108 for communication functions.
  • the structure shown in Figure 1 is only illustrative, and it does not limit the structure of the above-mentioned mobile robot.
  • The mobile robot may also include more or fewer components than shown in Figure 1, or have a different configuration that is functionally equivalent to, or provides more functionality than, that shown in Figure 1.
  • The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the positioning method for a mobile robot in the embodiments of the present disclosure. By running the computer program stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include memory located remotely from the processor 102, and these remote memories may be connected to the mobile robot through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or transmit data via the network.
  • a specific example of the above-mentioned network may include a wireless network provided by a mobile robot's communication provider.
  • the transmission device 106 includes a network interface controller (NIC for short), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet in a wireless manner.
  • FIG. 2 is a flowchart (1) of a positioning method for a mobile robot according to an embodiment of the present disclosure. As shown in FIG. 2, the process includes the following steps:
  • Step S202: when it is detected that the mobile robot is in the first state, obtain the position coordinates of at least two target objects detected by the mobile robot in the second state; the first state is a state in which a pose failure occurs in the mobile robot, and the second state is a state in which no pose failure occurs in the mobile robot;
  • whether the mobile robot is in the first state is detected by at least one of the following methods: detecting whether the moving wheels of the mobile robot are idling; detecting whether the moving wheels of the mobile robot are in contact with the target plane.
  • If the moving wheels of the mobile robot idle, or the moving wheels are not in contact with the target plane, VSLAM cannot determine the position of the mobile robot; it is then determined that the mobile robot is in the first state, that is, the state in which its pose is invalid.
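As an illustration only, the two trigger conditions above could be checked roughly as follows; the sensor inputs (wheel speed, body speed, ground-contact flag) and the thresholds are hypothetical placeholders, not values defined by the disclosure.

```python
# Minimal, illustrative sketch of the pose-failure ("first state") check.
# All sensor inputs and thresholds here are hypothetical placeholders.

def in_first_state(wheel_speed: float,
                   body_speed: float,
                   wheel_on_ground: bool,
                   slip_threshold: float = 0.3) -> bool:
    # Condition 1: the moving wheels are idling, i.e. they spin while the
    # robot body barely moves (wheel slip).
    wheels_idling = wheel_speed > 0.05 and body_speed < wheel_speed * slip_threshold
    # Condition 2: the moving wheels are not in contact with the target plane
    # (e.g. the robot has been lifted off the floor).
    lifted = not wheel_on_ground
    return wheels_idling or lifted
```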
  • Step S204: determine a first relative position between the mobile robot and the at least two target objects;
  • Step S206: calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative position.
  • Through the above steps, when a pose failure occurs in the mobile robot, the position coordinates of at least two target objects obtained while the mobile robot had no pose failure are acquired; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and the position coordinates of the mobile robot can then be calculated from the first relative positions and the position coordinates of the target objects.
  • The technical solution of the embodiments of the present application can be divided in time into two parts: the first part, before a pose failure occurs in the mobile robot; and the second part, after a pose failure occurs in the mobile robot.
  • When the mobile robot is in the second state, a target object is identified based on the image data collected by the image acquisition component; the position coordinates of the target object are determined; and the identification of the target object and the position coordinates of the target object are stored correspondingly in a preset relationship library.
  • That is to say, when the mobile robot is in the non-failed state, the mobile robot can collect image data during travel through its own image acquisition component, identify the image data through an object recognition model to determine the target objects present in the image data and their identifications, then further determine the position coordinates of each target object, and store the identification of the target object and the position coordinates of the target object correspondingly in the preset relationship library.
  • the above image data includes: pictures, videos and so on.
  • the above-mentioned target objects include: obstacles, furniture, etc.
  • The above mobile robot can determine the position coordinates of the target object in the following manner: determine a second relative position between the target object and the mobile robot; and determine the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
  • the mobile robot can determine its own coordinates through VSLAM, and then the mobile robot only needs to determine the relative position corresponding to the target object to determine the position coordinates of the target object.
  • For example, if the positive direction of the X axis of the coordinate system is due east, the positive direction of the Y axis is due north, the length of one unit in the coordinate system is one meter, the coordinates of the mobile robot are (2, 2), and the target object is one meter due west of the mobile robot (the relative position), then the coordinates of the target object are (1, 2).
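A minimal sketch of the arithmetic in this example, assuming the bearing is measured in degrees counter-clockwise from due east (the positive X axis); the function name and the bearing convention are illustrative, not part of the disclosure.

```python
import math

def object_global_coords(robot_xy: tuple[float, float],
                         distance_m: float,
                         bearing_deg: float) -> tuple[float, float]:
    """Global coordinates of a target object from the robot's global
    coordinates and the object's relative position (distance + direction)."""
    rx, ry = robot_xy
    theta = math.radians(bearing_deg)
    return (rx + distance_m * math.cos(theta),
            ry + distance_m * math.sin(theta))

# Robot at (2, 2), object one meter due west (bearing 180 degrees) -> (1, 2).
print(object_global_coords((2.0, 2.0), 1.0, 180.0))  # approximately (1.0, 2.0)
```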
  • The determination of the second relative position between the target object and the mobile robot may be achieved in the following manner: determine the distance between the mobile robot and the target object according to the target ranging method to obtain a second distance; determine a second directional relationship between the mobile robot and the target object based on the image data collected by the image acquisition component; and determine the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
  • That is, the mobile robot can first determine the distance between the target object and itself through the target ranging method, and then determine, from the image data collected by the image acquisition component, in which direction the target object lies relative to itself; once the direction and distance are determined, the relative position can be determined. For example, if the target ranging method determines that the target object is one meter away from the mobile robot, and the image data shows that the target object is due west of the mobile robot, the relative position is one meter due west. It should be noted that there are many target ranging methods, including monocular ranging, binocular ranging, depth-sensor ranging, laser ranging, and so on.
  • In this way, the mobile robot can determine, while no pose failure occurs, the position coordinates of all target objects in the area where the mobile robot is located, and then store the mapping between each target object's identification and its position coordinates in the preset relationship library.
  • When a pose failure occurs, the position coordinates of at least two target objects detected by the mobile robot in the second state can first be obtained, and the first relative positions between the mobile robot and the at least two target objects can be determined; the mobile robot can then calculate its own position coordinates according to the position coordinates of the at least two target objects and the first relative positions.
  • Obtaining the position coordinates of the at least two target objects detected by the mobile robot in the second state may be achieved in the following manner: identify the at least two target objects based on the image data collected by the image acquisition component; and, according to the identifications of the at least two target objects, obtain the position coordinates of the at least two target objects by matching from the preset relationship library, where the preset relationship library stores the correspondence between the identification of a target object and the position coordinates of that target object.
  • That is to say, the mobile robot needs to collect images of the target area through the image acquisition component, determine at least two target objects from the collected images, and determine the position coordinates of the at least two target objects from the preset relationship library. For example, if the mobile robot identifies target object A and target object B from the images collected by the image acquisition component, it can determine from the preset relationship library that the coordinates of target object A are (1, 2) and the coordinates of target object B are (5, 2).
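A minimal sketch of the preset relationship library described above, using a plain in-memory mapping from object identifier to global coordinates; the storage format and names are assumptions, since the disclosure does not fix them.

```python
# Identifier -> global position coordinates recorded in the second state.
preset_relationship_library: dict[str, tuple[float, float]] = {}

def store_object(label: str, global_xy: tuple[float, float]) -> None:
    """Called in the second (non-failed) state after a target object is identified."""
    preset_relationship_library[label] = global_xy

def lookup_objects(labels: list[str]) -> dict[str, tuple[float, float]]:
    """Called in the first (failed) state to match identified objects back to
    their previously stored global coordinates."""
    return {label: preset_relationship_library[label]
            for label in labels if label in preset_relationship_library}

store_object("target_A", (1.0, 2.0))
store_object("target_B", (5.0, 2.0))
print(lookup_objects(["target_A", "target_B"]))
```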
  • Determining the first relative positions between the mobile robot and the at least two target objects may be achieved in the following manner: determine the distances between the mobile robot and the at least two target objects according to the target ranging method to obtain at least two first distances; determine a first directional relationship between the mobile robot and the at least two objects based on the image data collected by the image acquisition component; and determine the first relative positions between the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.
  • That is, the mobile robot can first determine the distance between each target object and itself through the target ranging method, and then determine, from the image data collected by the image acquisition component, in which direction each target object lies relative to itself; once the direction and distance are determined, the relative position can be determined.
  • For example, the target ranging method determines that target object A is two meters away from the mobile robot, and the image data shows that target object A is due west of the mobile robot, so its relative position is two meters due west.
  • Similarly, the target ranging method determines that target object B is two meters away from the mobile robot, and the image data shows that target object B is due east of the mobile robot, so its relative position is two meters due east.
  • On this basis, the position coordinates of the mobile robot can be calculated. For example, if the coordinates of target object A are (1, 2) and the coordinates of target object B are (5, 2), target object A is two meters due west of the mobile robot, and target object B is two meters due east of the mobile robot, then the coordinates of the mobile robot are (3, 2).
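A minimal sketch of this inverse calculation: each object's global coordinates minus its offset relative to the robot gives one estimate of the robot's coordinates, and averaging over two or more objects is one simple way to combine them. Both the averaging step and the assumption that the relative offsets are already expressed in the map's east/north axes are illustrative choices, not something mandated by the disclosure.

```python
def robot_global_coords(observations: list[tuple[tuple[float, float],
                                                 tuple[float, float]]]) -> tuple[float, float]:
    """observations: list of (object_global_xy, object_offset_from_robot_xy),
    where the offset is the object's position relative to the robot in map
    axes (east, north), in meters."""
    xs, ys = [], []
    for (ox, oy), (dx, dy) in observations:
        xs.append(ox - dx)
        ys.append(oy - dy)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Object A at (1, 2) is two meters due west of the robot -> offset (-2, 0);
# object B at (5, 2) is two meters due east of the robot -> offset (+2, 0).
print(robot_global_coords([((1.0, 2.0), (-2.0, 0.0)),
                           ((5.0, 2.0), (2.0, 0.0))]))  # (3.0, 2.0)
```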
  • the mobile robot includes two states during the traveling process.
  • the first state is an abnormal state in which a pose failure occurs
  • the second state is a normal state in which a pose failure does not occur.
  • During travel, the mobile robot detects surrounding object information (label information and detection frame information) through the AI camera (equivalent to the above image acquisition component), determines the distance from the object to the robot through the principle of monocular ranging so as to determine the relative position between the object and the robot, and then determines the global coordinates of the object according to that relative position (in the second state, the global coordinates of the robot can be determined based on VSLAM). For example, when the robot travels to position A and detects a refrigerator, the AI camera can label the refrigerator, determine the distance from the refrigerator to the robot through the principle of monocular ranging, thereby determine the relative position between the refrigerator and the robot, and then determine the global coordinates of the refrigerator according to that relative position.
  • VSLAM cannot locate the global coordinates of the mobile robot due to pose failure.
  • In this case, at the position where the pose failure occurs, at least two objects within the robot's field of view can be obtained, and their labels are determined by the AI camera, so that the global coordinates corresponding to the two objects can be obtained according to their labels (these global coordinates were determined in the second state and are accurate). The distance from each object to the mobile robot is then determined through the principle of monocular ranging, which gives the relative position between each object and the mobile robot, and the global coordinates of the mobile robot are finally calculated from the relative positions of the two objects with respect to the mobile robot and the global coordinates of the two objects.
  • FIG. 3 is a flow chart (2) of a positioning method for a mobile robot according to an embodiment of the present disclosure. Based on the flow chart shown in FIG. 3, the technical solution provided by this optional embodiment of the present disclosure can be summarized as the following steps:
  • Step 1: at time T2 (equivalent to the second state in the above embodiments), the sweeping robot (equivalent to the mobile robot in the above embodiments) acquires the object detection information at its current location;
  • The AI camera (equivalent to the image acquisition component in the above embodiments) is arranged at the front of the sweeping robot, and the AI camera's collection field of view covers the forward direction of the sweeping robot.
  • The sweeping robot keeps the AI camera on in real time and inputs the collected images into the detection model to obtain the detection information of target objects on the ground ahead.
  • the detection information includes: label information and detection frame information.
  • Step 2: obtain the global map coordinate information of the target object at T2 (equivalent to the global coordinates in the above embodiments);
  • The target object's pose in the machine coordinate system can be obtained using the principle of monocular ranging (the relationship between projection transformation and rigid-body transformation). Then, combined with the pose information of the sweeping robot at its current position, the coordinate information of the target object in the global map is obtained.
  • For example, the sweeping robot detects dining table A in front of it, combines the detection frame information with the machine's current pose information to obtain the coordinate information of dining table A in the global map coordinate system, and stores it in the sweeping robot.
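One common way to realize the monocular-ranging step mentioned here is to back-project the bottom edge of the detection frame onto the floor plane using the camera intrinsics and mounting height; the simple forward-looking pinhole model below is an assumption for illustration, not the specific formulation used in the disclosure.

```python
def ground_point_in_machine_frame(u: float, v: float,
                                  fx: float, fy: float, cx: float, cy: float,
                                  cam_height_m: float) -> tuple[float, float]:
    """Back-project the bottom-centre pixel (u, v) of a detection frame,
    assumed to lie on the floor, into the machine coordinate frame
    (x forward, y to the left, in meters). Assumes a pinhole camera with a
    horizontal optical axis mounted cam_height_m above the floor."""
    if v <= cy:
        raise ValueError("a floor point must project below the principal point")
    forward = fy * cam_height_m / (v - cy)   # distance along the optical axis
    left = -forward * (u - cx) / fx          # image right corresponds to robot right
    return (forward, left)

# Example: 640x480 camera, fx = fy = 500 px, principal point (320, 240),
# camera mounted 0.08 m above the floor.
print(ground_point_in_machine_frame(400.0, 280.0, 500.0, 500.0, 320.0, 240.0, 0.08))
# -> (1.0, -0.16): about one meter ahead and 0.16 m to the right
```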
  • Step 3: based on the global coordinate information of two or more target objects at the current position, determine the pose information of the sweeping robot;
  • At this point, the AI camera captures the detection information of two or more target objects.
  • Each such target object is an object that was detected by the AI many times during the sweeping robot's normal travel before the pose failure (while the machine pose existed and was accurate), and its coordinate information in the global map has already been recorded.
  • The coordinate information of the target objects in the machine coordinate system (equivalent to the position coordinates in the above embodiments) is recovered; then, according to the coordinate transformation relationship between the machine coordinate system and the global coordinate system, the machine pose of the sweeping robot is solved, finally yielding the pose information of the machine at this moment.
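A minimal sketch of this pose-recovery step with exactly two objects: the 2D rigid transform (heading and translation) that maps machine-frame coordinates to global-map coordinates is solved from the two correspondences, and the translation is the robot's global position. Using only two points is the minimal case; a least-squares fit over more objects would be a natural extension. Both choices are illustrative assumptions, not the disclosure's exact formulation.

```python
import math

def solve_robot_pose(m1, m2, g1, g2):
    """m1, m2: coordinates of two objects in the machine frame;
    g1, g2: the same objects' global-map coordinates.
    Returns (x, y, theta): the robot's global position and heading."""
    ang_m = math.atan2(m2[1] - m1[1], m2[0] - m1[0])
    ang_g = math.atan2(g2[1] - g1[1], g2[0] - g1[0])
    theta = ang_g - ang_m                       # robot heading in the global frame
    c, s = math.cos(theta), math.sin(theta)
    tx = g1[0] - (c * m1[0] - s * m1[1])        # translation = robot position
    ty = g1[1] - (s * m1[0] + c * m1[1])
    return tx, ty, theta

# Objects seen 2 m behind and 2 m ahead of the robot (machine frame), known to
# sit at (1, 2) and (5, 2) in the global map -> robot at (3, 2), heading 0.
print(solve_robot_pose((-2.0, 0.0), (2.0, 0.0), (1.0, 2.0), (5.0, 2.0)))
```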
  • Step 4: based on the pose information from step 3, update the pose when VSLAM positioning fails and correct erroneous relocalization, so as to avoid an inaccurate map.
  • VSLAM pose failure mostly occurs when the machine is lifted and moved, and relocalization errors mostly occur while the wheels are slipping.
  • the reverse calculated machine pose information is used to fill and correct the machine pose at this time. In this way, the occurrence of inaccurate VSLAM mapping is avoided.
  • the above-mentioned embodiments of the present disclosure use AI real-time detection to establish the semantic information and global map coordinate information of ground target objects.
  • When relocalization is needed, AI is used to identify and match two or more target objects to obtain their prior global map coordinates, and the current machine pose is then filled in and corrected. This solves the problems of traditional VSLAM positioning failure and positioning error, improves the accuracy of positioning and mapping, and avoids inaccurate maps.
  • the embodiments of the present disclosure can also directly use the deep neural network to realize end-to-end machine pose prediction.
  • For example, a CNN can be used to encode images and a database containing image features and real-world poses can be constructed; relative pose prediction is then achieved by matching the most similar images in the database.
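A minimal sketch of that retrieval idea: image features produced by some encoder (a CNN in the text; represented here only by pre-computed vectors) are stored together with the poses at which they were captured, and a query image is matched to the most similar entry by cosine similarity. The database layout and the similarity measure are illustrative assumptions.

```python
import numpy as np

database: list[tuple[np.ndarray, tuple[float, float, float]]] = []  # (feature, (x, y, theta))

def add_keyframe(feature: np.ndarray, pose: tuple[float, float, float]) -> None:
    """Store a normalized image feature with the pose at which it was taken."""
    database.append((feature / np.linalg.norm(feature), pose))

def query_pose(feature: np.ndarray) -> tuple[float, float, float]:
    """Return the pose of the most similar stored image (cosine similarity)."""
    q = feature / np.linalg.norm(feature)
    scores = [float(q @ f) for f, _ in database]
    return database[int(np.argmax(scores))][1]
```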
  • The technical solution of the present disclosure, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the various embodiments of the present disclosure.
  • In this embodiment, a positioning device for a mobile robot is also provided. The positioning device is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated.
  • the term "module” may be a combination of software and/or hardware that realizes a predetermined function.
  • Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • FIG. 4 is a structural block diagram of a positioning device for a mobile robot according to an embodiment of the present disclosure. As shown in FIG. 4, the device includes:
  • the obtaining module 42 is used to obtain the position coordinates of at least two target objects detected by the mobile robot in the second state when it is detected that the mobile robot is in the first state;
  • the first state is a state where the mobile robot has a pose failure;
  • the second state is a state where the mobile robot has no pose failure;
  • a determining module 44 configured to determine a first relative position between the mobile robot and the at least two target objects
  • a calculation module 46 configured to calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative position.
  • Through the above device, in the case of a pose failure of the mobile robot, the position coordinates of at least two target objects obtained while the mobile robot had no pose failure are acquired; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and the position coordinates of the mobile robot can then be calculated from the first relative positions and the position coordinates of the target objects.
  • The acquisition module 42 is also configured to identify at least two target objects based on the image data collected by the image acquisition component, and to obtain the position coordinates of the at least two target objects by matching from a preset relationship library according to the identifications of the at least two target objects; the preset relationship library stores the correspondence between the identification of a target object and the position coordinates of that target object.
  • The determination module 44 is further configured to determine the distances between the mobile robot and the at least two target objects according to the target ranging method to obtain at least two first distances; to determine a first directional relationship between the mobile robot and the at least two objects based on the image data collected by the image acquisition component; and to determine the first relative positions between the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.
  • The calculation module 46 is also used to identify the target object based on the image data collected by the image acquisition component when the mobile robot is in the second state, to determine the position coordinates of the target object, and to store the identifier of the target object and the position coordinates of the target object correspondingly in a preset relationship library.
  • the calculation module 46 is further configured to determine a second relative position between the target object and the mobile robot; determine the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
  • The calculation module 46 is also used to determine the distance between the mobile robot and the target object according to the target ranging method to obtain a second distance; to determine a second directional relationship between the mobile robot and the target object based on the image data collected by the image acquisition component; and to determine a second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
  • The acquisition module 42 is also used to detect that the mobile robot is in the first state through at least one of the following: detecting that the moving wheels of the mobile robot are idling; detecting that the moving wheels of the mobile robot are not in contact with the target plane.
  • The above modules can be implemented by software or hardware. For the latter, this can be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor; or the above modules, in any combination, are located in different processors.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • the above-mentioned storage medium may be configured to store a computer program for performing the following steps:
  • When it is detected that the mobile robot is in the first state, acquire the position coordinates of at least two target objects detected by the mobile robot in the second state; the first state is a state in which a pose failure occurs in the mobile robot, and the second state is a state in which no pose failure occurs in the mobile robot;
  • The above storage medium may include, but is not limited to, various media capable of storing computer programs, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • Embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to execute the following steps through a computer program:
  • When it is detected that the mobile robot is in the first state, acquire the position coordinates of at least two target objects detected by the mobile robot in the second state; the first state is a state in which a pose failure occurs in the mobile robot, and the second state is a state in which no pose failure occurs in the mobile robot;
  • Each module or step of the present disclosure described above can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Alternatively, they may be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described here; or they may be made into individual integrated circuit modules separately, or multiple modules or steps among them may be made into a single integrated circuit module.
  • the present disclosure is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A positioning method and device for a mobile robot, a storage medium and an electronic device. The method comprises: when it is detected that a mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot in a second state, the first state being a state in which a pose failure occurs in the mobile robot, and the second state being a state in which a pose failure does not occur in the mobile robot (S202); determining first relative positions between the mobile robot and the at least two target objects (S204); and calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions (S206). The method solves the problem in the related art that, when a pose failure occurs in a mobile robot, the position of the mobile robot cannot be determined.

Description

Positioning Method and Device for Mobile Robot, Storage Medium and Electronic Device

The present disclosure claims priority to the Chinese patent application filed with the China Patent Office on September 23, 2021, with application number 202111117669.5 and entitled "Positioning method and device for mobile robot, storage medium and electronic device"; the entire contents of the above patent application are incorporated into the present disclosure by reference.

Technical Field

The present disclosure relates to the communication field, and in particular to a positioning method and device for a mobile robot, a storage medium, and an electronic device.

Background Art

With the rapid development of robot technology, robots are now applied in many fields. Robots need to interact with the environment and with users, and environmental perception is the most basic and critical link in this interaction. To ensure the robot's positioning and mapping accuracy, past information must be used to obtain an accurate machine pose. Existing machine relocalization techniques fall into methods based on traditional feature points, methods based on machine learning, and methods based on deep learning. Relocalization based on traditional feature points matches feature points and then derives the machine pose. When the machine is lifted or its wheels slip, the machine's own pose is lost or is solved incorrectly and the position of the mobile robot cannot be determined, which in turn causes the visual simultaneous localization and mapping (Vision Simultaneous Location And Mapping, VSLAM) map to be overlaid incorrectly.

For the problem in the related art that the position of the mobile robot cannot be determined when a pose failure occurs in the mobile robot, no effective solution has yet been proposed.

It is therefore necessary to improve the prior art to overcome the above defects.

Summary of the Invention

The purpose of the present disclosure is to provide a positioning method and device for a mobile robot, a storage medium and an electronic device, so as to at least solve the problem in the related art that the position of the mobile robot cannot be determined when a pose failure occurs in the mobile robot.

The purpose of the present disclosure is achieved through the following technical solutions:
According to one aspect of the embodiments of the present disclosure, a positioning method for a mobile robot is provided, including: when it is detected that the mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot while in a second state, the first state being a state in which a pose failure occurs in the mobile robot and the second state being a state in which no pose failure occurs in the mobile robot; determining first relative positions between the mobile robot and the at least two target objects; and calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.

Further, acquiring the position coordinates of the at least two target objects detected by the mobile robot while in the second state includes: identifying the at least two target objects based on image data collected by an image acquisition component; and matching the identifications of the at least two target objects against a preset relationship library to obtain the position coordinates of the at least two target objects, the preset relationship library storing the correspondence between the identification of a target object and the position coordinates of that target object.

Further, determining the first relative positions between the mobile robot and the at least two target objects includes: determining the distances between the mobile robot and the at least two target objects according to a target ranging method to obtain at least two first distances; determining a first directional relationship between the mobile robot and the at least two objects based on the image data collected by the image acquisition component; and determining the first relative positions between the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.

Further, the method also includes: when the mobile robot is in the second state, identifying a target object based on the image data collected by the image acquisition component; determining the position coordinates of the target object; and storing the identification of the target object and the position coordinates of the target object correspondingly in the preset relationship library.

Further, determining the position coordinates of the target object includes: determining a second relative position between the target object and the mobile robot; and determining the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.

Further, determining the second relative position between the target object and the mobile robot includes: determining the distance between the mobile robot and the target object according to the target ranging method to obtain a second distance; determining a second directional relationship between the mobile robot and the target object based on the image data collected by the image acquisition component; and determining the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.

Further, the mobile robot is detected to be in the first state by at least one of the following: detecting that the moving wheels of the mobile robot are idling; detecting that the moving wheels of the mobile robot are not in contact with the target plane.

According to another aspect of the embodiments of the present disclosure, a positioning device for a mobile robot is provided, including: an acquisition module, configured to, when it is detected that the mobile robot is in a first state, acquire the position coordinates of at least two target objects detected by the mobile robot while in a second state, the first state being a state in which a pose failure occurs in the mobile robot and the second state being a state in which no pose failure occurs in the mobile robot; a determination module, configured to determine first relative positions between the mobile robot and the at least two target objects; and a calculation module, configured to calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.

According to yet another aspect of the embodiments of the present disclosure, a computer-readable storage medium is also provided, in which a computer program is stored, the computer program being configured to execute, when run, the positioning method for a mobile robot described in any one of the above.

According to yet another aspect of the embodiments of the present disclosure, an electronic device is also provided, including a memory and a processor, a computer program being stored in the memory and the processor being configured to run the computer program so as to execute the positioning method for a mobile robot described in any one of the above.

Through the present disclosure, when a pose failure occurs in the mobile robot, the position coordinates of at least two target objects obtained while the mobile robot had no pose failure are acquired; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and the position coordinates of the mobile robot can then be calculated from the first relative positions and the position coordinates of the target objects. This technical solution solves the problem that the position of the mobile robot cannot be determined when a pose failure occurs, so that the position of the mobile robot can be determined even in the case of a pose failure.
Brief Description of the Drawings

The drawings described here are used to provide a further understanding of the present disclosure and constitute a part of the present disclosure. The illustrative embodiments of the present disclosure and their descriptions are used to explain the present disclosure and do not constitute an improper limitation of the present disclosure. In the drawings:

FIG. 1 is a block diagram of the hardware structure of a computer terminal for a method for determining a target object according to an embodiment of the present disclosure;

FIG. 2 is a flow chart of a method for determining a target object according to an embodiment of the present disclosure;

FIG. 3 is a schematic diagram (1) of detecting obstacles in a method for determining a target object according to an embodiment of the present disclosure;

FIG. 4 is a schematic diagram (2) of detecting obstacles in a method for determining a target object according to an embodiment of the present disclosure;
[Corrected under Rule 91, 01.09.2022]
Detailed Description of the Embodiments

The present disclosure is described in detail below with reference to the drawings and in combination with the embodiments. It should be noted that, where there is no conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other.

It should be noted that the terms "first", "second", and the like in the specification and claims of the present disclosure and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence.

The method embodiments provided by the embodiments of the present disclosure can be executed in a mobile robot or a similar computing device. Taking execution on a mobile robot as an example, FIG. 1 is a block diagram of the hardware structure of a robot for a positioning method for a mobile robot according to an embodiment of the present disclosure. As shown in FIG. 1, the mobile robot may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (Microprocessor Unit, MPU) or a programmable logic device (PLD)) and a memory 104 for storing data. Optionally, the mobile robot may also include a transmission device 106 and an input/output device 108 for communication functions. A person of ordinary skill in the art can understand that the structure shown in FIG. 1 is only illustrative and does not limit the structure of the mobile robot. For example, the mobile robot may include more or fewer components than shown in FIG. 1, or have a different configuration that is functionally equivalent to, or provides more functionality than, that shown in FIG. 1.

The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the positioning method for a mobile robot in the embodiments of the present disclosure. By running the computer program stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above method. The memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, and such remote memory may be connected to the mobile robot through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 106 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by the communication provider of the mobile robot. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
This embodiment provides a positioning method running on the above mobile robot. FIG. 2 is a flow chart (1) of a positioning method for a mobile robot according to an embodiment of the present disclosure. As shown in FIG. 2, the process includes the following steps:

Step S202: when it is detected that the mobile robot is in a first state, acquire the position coordinates of at least two target objects detected by the mobile robot while in a second state; the first state is a state in which a pose failure occurs in the mobile robot, and the second state is a state in which no pose failure occurs in the mobile robot.

It should be noted that whether the mobile robot is in the first state is detected by at least one of the following: detecting whether the moving wheels of the mobile robot are idling; detecting whether the moving wheels of the mobile robot are in contact with the target plane.
现有技术中,在移动机器人搬起和轮子打滑的时候,机器本身位姿丢失或者解算错误,进而造成VSLAM无法确定移动机器人的位置,也就是说,如果移动机器人的移动轮发生空转或者移动机器人的移动轮与目标平面接触,则VSLAM无法确定移动机器人的位置,进而确定移动机器人位于第一状态,即确定移动机器人处于位姿失效的状态。In the prior art, when the mobile robot lifts up and the wheels slip, the pose of the machine itself is lost or the calculation is wrong, which causes VSLAM to be unable to determine the position of the mobile robot. If the moving wheels of the robot are in contact with the target plane, the VSLAM cannot determine the position of the mobile robot, and then determine that the mobile robot is in the first state, that is, determine that the mobile robot is in a state where the pose is invalid.
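Purely as an illustration, and not as part of the disclosure, the following sketch shows one plausible way such a first-state check could be expressed; the sensor inputs, thresholds, and the slip heuristic (wheels reporting motion while visual odometry sees none) are all assumptions introduced here.
```python
def pose_failure_detected(wheel_speed_mps, visual_speed_mps, wheels_on_ground):
    # Hypothetical check, not taken from the disclosure:
    # - slipping: the wheels report motion while visual odometry sees (almost) none;
    # - lifted: a drop/ground sensor reports the wheels have left the target plane.
    slipping = wheel_speed_mps > 0.05 and visual_speed_mps < 0.01
    lifted = not wheels_on_ground
    return slipping or lifted
```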
Step S204: determining first relative positions between the mobile robot and the at least two target objects.
Step S206: calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
It should be noted that the technical solutions of the embodiments of the present application can be applied to mobile robots, including but not limited to sweeping robots.
Through the above steps, when a pose failure occurs in the mobile robot, the position coordinates of at least two target objects, recorded while no pose failure had occurred, are acquired; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and the position coordinates of the mobile robot can then be calculated from the first relative positions and the position coordinates of the target objects. This technical solution solves the problem that the position of the mobile robot cannot be determined when a pose failure occurs, so that the position of the mobile robot can still be determined even when a pose failure occurs.
For a better understanding of the technical solution of the present application, the technical solution of the embodiments of the present application can be divided chronologically into two parts: a first part, in which no pose failure has occurred in the mobile robot, and a second part, in which a pose failure has occurred in the mobile robot.
Specifically, the first part is described in detail below.
In an optional embodiment, when the mobile robot is in the second state, a target object is identified based on image data collected by an image acquisition component; the position coordinates of the target object are determined; and the identifier of the target object and the position coordinates of the target object are stored correspondingly in a preset relation library.
In other words, while the mobile robot is in the non-failed state, it can collect image data during travel through its on-board image acquisition component, identify that image data with an object recognition model to determine the target objects present in the image data and their identifiers, further determine the position coordinates of each target object, and then store the identifier of the target object and the position coordinates of the target object correspondingly in the preset relation library.
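As a minimal sketch only (the class and method names below are hypothetical and not taken from the disclosure), such a relation library can be thought of as a mapping from object identifiers to global-map coordinates recorded while the pose is still valid:
```python
class RelationLibrary:
    """Sketch of a preset relation library: identifier -> global map coordinates."""

    def __init__(self):
        self._objects = {}  # e.g. "table_A" -> (1.0, 2.0)

    def store(self, identifier, position_xy):
        # Keep the latest observation made while the robot's pose was valid.
        self._objects[identifier] = tuple(position_xy)

    def lookup(self, identifier):
        # Returns None if this object was never recorded in the second state.
        return self._objects.get(identifier)


library = RelationLibrary()
library.store("table_A", (1.0, 2.0))
library.store("fridge", (5.0, 2.0))
```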
It should be noted that the above-mentioned image data includes pictures, videos, and the like, and the above-mentioned target objects include obstacles, furniture, and the like.
Specifically, the mobile robot may determine the position coordinates of the target object in the following manner: determining a second relative position between the target object and the mobile robot, and determining the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
In other words, as long as no pose failure has occurred, the mobile robot can determine its own coordinates through VSLAM, so it only needs to determine the relative position of a target object in order to determine that object's position coordinates. For example, if the coordinates of the mobile robot are (2, 2) and the target object is one meter due west of the mobile robot (the relative position), then the coordinates of the target object are (1, 2). It should be noted that in this example the positive X-axis of the coordinate system points due east, the positive Y-axis points due north, and one unit of length in the coordinate system is one meter.
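A minimal sketch of this computation, under the assumption (introduced here for illustration) that the relative position is expressed as a bearing and a distance in the global frame, with 0 rad pointing due east and pi/2 due north:
```python
import math

def object_global_position(robot_xy, bearing_rad, distance_m):
    # Object position = robot position + the relative offset in the global frame.
    x = robot_xy[0] + distance_m * math.cos(bearing_rad)
    y = robot_xy[1] + distance_m * math.sin(bearing_rad)
    return (x, y)

# Worked example from the description: robot at (2, 2), object one meter due west.
print(object_global_position((2.0, 2.0), math.pi, 1.0))  # -> approximately (1.0, 2.0)
```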
It should be noted that, in an optional embodiment, determining the second relative position between the target object and the mobile robot may be implemented in the following manner: determining the distance between the mobile robot and the target object according to a target ranging method to obtain a second distance; determining a second directional relationship between the mobile robot and the target object based on the image data collected by the image acquisition component; and determining the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
In other words, the mobile robot can first determine how far away the target object is through the target ranging method, and then determine in which direction the target object lies based on the image data collected by the image acquisition component; once the direction and the distance are known, the relative position can be determined. For example, the target ranging method determines that the target object is one meter away from the mobile robot, the image data determines that the target object is due west of the mobile robot, and the relative position is therefore one meter due west. It should be noted that there are many target ranging methods, including monocular ranging, binocular ranging, depth-sensor ranging, laser ranging, and so on.
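By way of illustration only, one common simple form of monocular ranging uses the pinhole-camera relation distance ≈ f·H/h together with the horizontal offset of the detection box to obtain a bearing; the disclosure does not commit to this exact model, and the function and parameter names below are assumptions:
```python
import math

def monocular_distance(focal_px, real_height_m, pixel_height_px):
    # Pinhole approximation: an object of real height H appears with pixel height h
    # at distance roughly f * H / h (f = focal length in pixels).
    return focal_px * real_height_m / pixel_height_px

def bearing_from_detection(center_u_px, principal_u_px, focal_px):
    # Horizontal angle of the detection-box center relative to the camera's optical axis.
    return math.atan2(center_u_px - principal_u_px, focal_px)
```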
Furthermore, by adopting the technical solution of the first part, the mobile robot can, while no pose failure has occurred, determine the position coordinates of all target objects in the area where it is located, and then store the identifiers of the target objects and the position coordinates of the target objects correspondingly in the preset relation library.
Further, the second part is described in detail below.
When a pose failure occurs in the mobile robot, the position coordinates of at least two target objects detected by the mobile robot while it was in the second state can first be acquired, and the first relative positions between the mobile robot and the at least two target objects can be determined; the mobile robot can then calculate its own position coordinates according to the position coordinates of the at least two target objects and the first relative positions.
For a better understanding, acquiring the position coordinates of at least two target objects detected by the mobile robot while in the second state may be implemented in the following manner: identifying at least two target objects based on the image data collected by the image acquisition component, and matching the position coordinates of the at least two target objects from the preset relation library according to the identifiers of the at least two target objects, where the preset relation library stores the correspondence between identifiers of target objects and position coordinates of target objects.
In other words, the mobile robot captures images of the target area through the image acquisition component, determines at least two target objects from the captured images, and determines the position coordinates of these at least two target objects from the preset relation library. Suppose the mobile robot identifies target object A and target object B based on the image acquisition component; it can then determine from the preset relation library that the coordinates of target object A are (1, 2) and the coordinates of target object B are (5, 2).
Further, determining the first relative positions between the mobile robot and the at least two target objects may be implemented in the following manner: determining the distances between the mobile robot and the at least two target objects according to the target ranging method to obtain at least two first distances; determining first directional relationships between the mobile robot and the at least two objects based on the image data collected by the image acquisition component; and determining the first relative positions between the mobile robot and the at least two target objects according to the first directional relationships and the at least two first distances.
In other words, the mobile robot can first determine the distance of each target object from itself through the target ranging method, and then determine in which direction each target object lies based on the image data collected by the image acquisition component; once the directions and distances are known, the relative positions can be determined. For example, the target ranging method determines that target object A is two meters away from the mobile robot, the image data determines that target object A is due west of the mobile robot, and its relative position is therefore two meters due west; the target ranging method determines that target object B is two meters away from the mobile robot, the image data determines that target object B is due east of the mobile robot, and its relative position is therefore two meters due east.
After the position coordinates and the first relative positions of the at least two target objects have been determined, the position coordinates of the mobile robot can be calculated. For example, if the coordinates of target object A are (1, 2), the coordinates of target object B are (5, 2), target object A is two meters due west of the mobile robot, and target object B is two meters due east of the mobile robot, then the coordinates of the mobile robot are (3, 2).
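A minimal sketch of this back-calculation, again assuming for illustration that each relative position is expressed as a bearing and a distance in the global frame (in practice the robot's heading would also need to be estimated, as in the pose recovery described later):
```python
import math

def robot_position_from_objects(observations):
    # observations: list of ((ox, oy), bearing_rad, distance_m), where (ox, oy) are the
    # stored global coordinates of an object and bearing/distance describe where that
    # object lies relative to the robot, expressed in the global frame.
    xs, ys = [], []
    for (ox, oy), bearing, dist in observations:
        xs.append(ox - dist * math.cos(bearing))
        ys.append(oy - dist * math.sin(bearing))
    # Average the per-object estimates to smooth out ranging noise.
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Worked example: A at (1, 2) two meters due west of the robot,
# B at (5, 2) two meters due east of the robot.
print(robot_position_from_objects([
    ((1.0, 2.0), math.pi, 2.0),  # target object A, due west of the robot
    ((5.0, 2.0), 0.0, 2.0),      # target object B, due east of the robot
]))  # -> approximately (3.0, 2.0)
```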
Further, in an optional embodiment, the mobile robot has two states during travel: the first state is an abnormal state in which a pose failure has occurred, and the second state is a normal state in which no pose failure has occurred. In the second state, the mobile robot detects information about surrounding objects (label information and detection-box information) through an AI camera (equivalent to the above-mentioned image acquisition component) while traveling, determines the distance from each object to the robot through the principle of monocular ranging, thereby determines the relative position between the object and the robot, and then determines the object's global coordinates from that relative position (in the second state, the robot's global coordinates can be determined based on VSLAM). For example, when the robot travels to position A and detects a refrigerator, the AI camera can label the refrigerator, determine the distance from the refrigerator to the robot through the principle of monocular ranging, thereby determine the relative position between the refrigerator and the robot, and then determine the refrigerator's global coordinates from that relative position. In the second state, the robot can determine the global coordinates of multiple objects and label each of them (for example, labeling a refrigerator, a table, and so on).
In the first state, because a pose failure has occurred, VSLAM cannot locate the global coordinates of the mobile robot. At this point, at least two objects within the robot's field of view at the position where the pose failure occurred can be acquired, and the AI camera determines the labels of these two objects, so that the global coordinates corresponding to the two objects can be obtained from their labels (these global coordinates were determined in the second state and are accurate). The distances from the objects to the mobile robot are then determined through the principle of monocular ranging, thereby determining the relative positions between the objects and the mobile robot, and the global coordinates of the mobile robot are finally calculated from the relative positions of these two objects with respect to the mobile robot and the global coordinates of the two objects.
Obviously, the embodiments described above are only some, not all, of the embodiments of the present disclosure. For a better understanding of the positioning method for a mobile robot, the above process is described below with reference to an embodiment, which is not intended to limit the technical solutions of the embodiments of the present disclosure. Specifically:
In an optional embodiment, FIG. 3 is a flowchart (2) of a positioning method for a mobile robot according to an embodiment of the present disclosure. Based on the flowchart shown in FIG. 3, the technical solution provided by this optional embodiment of the present disclosure can be summarized as the following steps:
Step 1: when the sweeping robot (equivalent to the mobile robot in the above embodiments) is, at time T1 (equivalent to the first state in the above embodiments), in a state of being lifted or of wheel slipping, acquire the object detection information obtained at the sweeping robot's current position at time T2 (equivalent to the second state in the above embodiments).
Specifically, the AI camera (equivalent to the image acquisition component in the above embodiments) is arranged at the front of the sweeping robot, and its field of view covers the sweeping robot's direction of travel. During cleaning, the sweeping robot keeps the AI camera running in real time and feeds the captured images into a detection model to obtain detection information about target objects on the ground ahead, where the detection information includes label information and detection-box information.
Step 2: acquire the global map coordinate information of the target objects at time T2 (equivalent to the global coordinates in the above embodiments).
Specifically, for the detection information of the target object obtained in Step 1, the pose of the target object in the machine coordinate system can be obtained using the principle of monocular ranging (the relationship between projective transformation and rigid-body transformation). Then, combined with the pose information of the sweeping robot at its current position, the coordinate information of the target object on the global map is obtained. For example, when the sweeping robot detects dining table A ahead, it combines the detection-box information with the machine's current pose information to obtain the coordinate information of dining table A in the global map coordinate system, and stores it in the sweeping robot.
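A sketch of the rigid-body part of this step, namely mapping a point expressed in the machine coordinate system into the global map given the robot's pose, under the assumption of a planar pose (x, y, theta); the exact parameterization is not specified in the disclosure:
```python
import math

def machine_to_global(robot_pose, point_machine):
    # robot_pose = (x, y, theta): the robot's position and heading in the global frame.
    # point_machine = (px, py): a point (e.g. a detected object) in the machine frame,
    # as recovered by monocular ranging.
    x, y, theta = robot_pose
    px, py = point_machine
    gx = x + px * math.cos(theta) - py * math.sin(theta)
    gy = y + px * math.sin(theta) + py * math.cos(theta)
    return (gx, gy)
```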
Step 3: determine the pose information of the sweeping robot based on the global coordinate information of two or more target objects at the current position.
Specifically, the sweeping robot may be lifted or its wheels may slip during cleaning; lifting the sweeping robot causes the machine pose to be lost, and wheel slipping causes VSLAM positioning errors, which in turn make the VSLAM map insufficiently accurate. At the same moment, the AI camera captures detection information for two or more target objects. These target objects are objects that, before the pose failure occurred, had already been detected by the AI multiple times while the sweeping robot was traveling normally (with its pose available and accurate), and whose coordinate information on the global map had been recorded. In the implementation of the present disclosure, based on the image coordinates of the target objects at this moment and the above-mentioned principle of monocular ranging, the coordinate information of the target objects in the machine coordinate system (equivalent to the position coordinates in the above embodiments) is recovered, and the pose information of the machine at this moment is finally obtained from the coordinate-transformation relationship between the machine coordinate system, the global coordinate system, and the machine pose of the sweeping robot.
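As an illustration of how that back-calculation could look with exactly two objects, the planar rigid transform p_global = R(theta)·p_machine + t can be solved in closed form; the sketch below assumes noise-free, planar measurements and performs no least-squares refinement over more than two objects:
```python
import math

def recover_pose_from_two_objects(m1, m2, g1, g2):
    # m1, m2: the two objects' positions in the machine frame at the failure moment
    #         (from monocular ranging); g1, g2: their stored global-map positions.
    # The robot's heading is the rotation that aligns the machine-frame baseline
    # m1->m2 with the global-frame baseline g1->g2.
    theta = (math.atan2(g2[1] - g1[1], g2[0] - g1[0])
             - math.atan2(m2[1] - m1[1], m2[0] - m1[0]))
    # The robot's global position is the translation t = g1 - R(theta) * m1.
    tx = g1[0] - (m1[0] * math.cos(theta) - m1[1] * math.sin(theta))
    ty = g1[1] - (m1[0] * math.sin(theta) + m1[1] * math.cos(theta))
    return (tx, ty, theta)
```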
Step 4: based on the pose information from Step 3, update the pose when VSLAM positioning fails and correct erroneous relocalization, thereby avoiding an insufficiently accurate map.
Specifically, the VSLAM pose failure occurs when the machine is picked up and relocalized, and relocalization errors mostly occur while the wheels are slipping. The machine pose information calculated back from the two or more target objects detected by the AI as described above is used to fill in and correct the machine pose at this moment, thereby preventing the VSLAM map from becoming insufficiently accurate.
In addition, the above embodiments of the present disclosure use real-time AI detection to establish the semantic information and global map coordinate information of ground target objects. When conventional VSLAM positioning fails, AI recognition is used to match two or more target objects and obtain their prior global map coordinates, and the current machine pose is then filled in and corrected. This solves the problems of positioning failure and positioning error in conventional VSLAM, improves positioning and mapping accuracy, and avoids inaccurate mapping.
At the same time, the embodiments of the present disclosure may also directly use a deep neural network to achieve end-to-end machine pose prediction, or use a CNN to encode images, build a database containing image features and real-world poses, and then predict the relative pose by matching the most similar image in the database.
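A sketch of the retrieval step of that alternative (how the image embeddings are produced is left open here; any CNN encoder could be used, and the function and parameter names are assumptions):
```python
import numpy as np

def retrieve_pose(query_feature, database_features, database_poses):
    # query_feature: embedding of the current image; database_features: (N, D) array of
    # embeddings of previously seen images; database_poses: the N poses (x, y, theta)
    # recorded when those images were taken. Returns the pose of the most similar
    # database image as a coarse pose estimate.
    q = query_feature / (np.linalg.norm(query_feature) + 1e-12)
    db = database_features / (np.linalg.norm(database_features, axis=1, keepdims=True) + 1e-12)
    best = int(np.argmax(db @ q))  # cosine similarity
    return database_poses[best]
```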
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods of the various embodiments of the present disclosure.
This embodiment also provides a positioning device for a mobile robot, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
FIG. 4 is a structural block diagram of a positioning device for a mobile robot according to an embodiment of the present disclosure. As shown in FIG. 4, the device includes:
an acquisition module 42, configured to acquire, when it is detected that the mobile robot is in a first state, the position coordinates of at least two target objects detected by the mobile robot while it was in a second state, the first state being a state in which a pose failure has occurred in the mobile robot, and the second state being a state in which no pose failure has occurred in the mobile robot;
a determination module 44, configured to determine first relative positions between the mobile robot and the at least two target objects; and
a calculation module 46, configured to calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
Through the above modules, when a pose failure occurs in the mobile robot, the position coordinates of at least two target objects, recorded while no pose failure had occurred, are acquired; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and the position coordinates of the mobile robot can then be calculated from the first relative positions and the position coordinates of the target objects. This technical solution solves the problem that the position of the mobile robot cannot be determined when a pose failure occurs, so that the position of the mobile robot can still be determined even when a pose failure occurs.
Optionally, the acquisition module 42 is further configured to identify at least two target objects based on the image data collected by the image acquisition component, and to match the position coordinates of the at least two target objects from a preset relation library according to the identifiers of the at least two target objects, where the preset relation library stores the correspondence between identifiers of target objects and position coordinates of target objects.
Optionally, the determination module 44 is further configured to determine the distances between the mobile robot and the at least two target objects according to the target ranging method to obtain at least two first distances, determine first directional relationships between the mobile robot and the at least two objects based on the image data collected by the image acquisition component, and determine the first relative positions between the mobile robot and the at least two target objects according to the first directional relationships and the at least two first distances.
Optionally, the calculation module 46 is further configured to, when the mobile robot is in the second state, identify a target object based on the image data collected by the image acquisition component, determine the position coordinates of the target object, and store the identifier of the target object and the position coordinates of the target object correspondingly in the preset relation library.
Optionally, the calculation module 46 is further configured to determine a second relative position between the target object and the mobile robot, and to determine the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
Optionally, the calculation module 46 is further configured to determine the distance between the mobile robot and the target object according to the target ranging method to obtain a second distance, determine a second directional relationship between the mobile robot and the target object based on the image data collected by the image acquisition component, and determine the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
Optionally, the acquisition module 42 is further configured to detect that the mobile robot is in the first state in at least one of the following ways: detecting that a moving wheel of the mobile robot is spinning idly; and detecting that a moving wheel of the mobile robot is not in contact with the target plane.
It should be noted that each of the above modules may be implemented by software or hardware. For the latter, this may be achieved, but is not limited to, in the following ways: all of the above modules are located in the same processor; or the above modules are located in different processors in any combination.
The embodiments of the present disclosure also provide a computer-readable storage medium in which a computer program is stored, where the computer program is configured to execute the steps in any one of the above method embodiments when run.
Optionally, in this embodiment, the above-mentioned storage medium may be configured to store a computer program for performing the following steps:
S1: when it is detected that the mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot while it was in a second state, the first state being a state in which a pose failure has occurred in the mobile robot, and the second state being a state in which no pose failure has occurred in the mobile robot;
S2: determining first relative positions between the mobile robot and the at least two target objects;
S3: calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
Optionally, in this embodiment, the above-mentioned storage medium may include, but is not limited to, various media capable of storing computer programs, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
Optionally, the above electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the above processor and the input/output device is connected to the above processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps through a computer program:
S1: when it is detected that the mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot while it was in a second state, the first state being a state in which a pose failure has occurred in the mobile robot, and the second state being a state in which no pose failure has occurred in the mobile robot;
S2: determining first relative positions between the mobile robot and the at least two target objects;
S3: calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which will not be repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present disclosure may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; in some cases the steps shown or described may be performed in an order different from that described here, or they may be fabricated into individual integrated circuit modules, or multiple of the modules or steps may be fabricated into a single integrated circuit module. The present disclosure is thus not limited to any specific combination of hardware and software.
The above descriptions are only preferred embodiments of the present disclosure and are not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various modifications and changes. Any modification, equivalent replacement, improvement, or the like made within the principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (10)

  1. A positioning method for a mobile robot, characterized in that the method comprises:
    when it is detected that a mobile robot is in a first state, acquiring position coordinates of at least two target objects detected by the mobile robot while in a second state, wherein the first state is a state in which a pose failure has occurred in the mobile robot, and the second state is a state in which no pose failure has occurred in the mobile robot;
    determining first relative positions between the mobile robot and the at least two target objects; and
    calculating position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
  2. The positioning method for a mobile robot according to claim 1, wherein the acquiring position coordinates of at least two target objects detected by the mobile robot while in the second state comprises:
    identifying at least two target objects based on image data collected by an image acquisition component; and
    matching position coordinates of the at least two target objects from a preset relation library according to identifiers of the at least two target objects, wherein the preset relation library stores a correspondence between identifiers of target objects and position coordinates of target objects.
  3. The positioning method for a mobile robot according to claim 1, wherein the determining first relative positions between the mobile robot and the at least two target objects comprises:
    determining distances between the mobile robot and the at least two target objects according to a target ranging method to obtain at least two first distances;
    determining first directional relationships between the mobile robot and the at least two objects based on image data collected by an image acquisition component; and
    determining the first relative positions between the mobile robot and the at least two target objects according to the first directional relationships and the at least two first distances.
  4. The positioning method for a mobile robot according to claim 1, wherein the method further comprises:
    when the mobile robot is in the second state, identifying a target object based on image data collected by an image acquisition component;
    determining position coordinates of the target object; and
    storing an identifier of the target object and the position coordinates of the target object correspondingly in a preset relation library.
  5. The positioning method for a mobile robot according to claim 4, wherein the determining position coordinates of the target object comprises:
    determining a second relative position between the target object and the mobile robot; and
    determining the position coordinates of the target object according to the second relative position and position coordinates of the mobile robot.
  6. The positioning method for a mobile robot according to claim 5, wherein determining the second relative position between the target object and the mobile robot comprises:
    determining a distance between the mobile robot and the target object according to a target ranging method to obtain a second distance;
    determining a second directional relationship between the mobile robot and the target object based on image data collected by an image acquisition component; and
    determining the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
  7. The positioning method for a mobile robot according to claim 1, wherein the mobile robot is detected to be in the first state in at least one of the following ways:
    detecting that a moving wheel of the mobile robot is spinning idly; and
    detecting that a moving wheel of the mobile robot is not in contact with a target plane.
  8. A device for determining global coordinates, comprising:
    an acquisition module, configured to acquire, when it is detected that a mobile robot is in a first state, position coordinates of at least two target objects detected by the mobile robot while in a second state, wherein the first state is a state in which a pose failure has occurred in the mobile robot, and the second state is a state in which no pose failure has occurred in the mobile robot;
    a determination module, configured to determine first relative positions between the mobile robot and the at least two target objects; and
    a calculation module, configured to calculate position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
  9. A computer-readable storage medium, wherein a computer program is stored in the storage medium, and the computer program is configured to perform the method according to any one of claims 1 to 7 when run.
  10. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 7.
PCT/CN2022/113375 2021-09-23 2022-08-18 Positioning method and device for mobile robot, storage medium and electronic device WO2023045644A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111117669.5A CN113907645A (en) 2021-09-23 2021-09-23 Mobile robot positioning method and device, storage medium and electronic device
CN202111117669.5 2021-09-23

Publications (1)

Publication Number Publication Date
WO2023045644A1 true WO2023045644A1 (en) 2023-03-30

Family

ID=79236005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113375 WO2023045644A1 (en) 2021-09-23 2022-08-18 Positioning method and device for mobile robot, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN113907645A (en)
WO (1) WO2023045644A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934751A (en) * 2023-09-15 2023-10-24 深圳市信润富联数字科技有限公司 Acquisition method and device of high-precision point cloud, storage medium and electronic equipment

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113907645A (en) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Mobile robot positioning method and device, storage medium and electronic device
CN114519739A (en) * 2022-04-21 2022-05-20 深圳史河机器人科技有限公司 Direction positioning method and device based on recognition device and storage medium
CN116185046B (en) * 2023-04-27 2023-06-30 北京宸普豪新科技有限公司 Mobile robot positioning method, mobile robot and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090149994A1 (en) * 2007-12-11 2009-06-11 Samsung Electronics Co., Ltd. Method, medium, and apparatus for correcting pose of moving robot
CN105953798A (en) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 Determination method and apparatus for poses of mobile robot
US20170010100A1 (en) * 2015-07-09 2017-01-12 Panasonic Intellectual Property Corporation Of America Map production method, mobile robot, and map production system
CN109506641A (en) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 The pose loss detection and relocation system and robot of mobile robot
CN112256011A (en) * 2019-07-05 2021-01-22 苏州宝时得电动工具有限公司 Regression guiding method, regression guiding device, mobile robot, and storage medium
CN113907645A (en) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Mobile robot positioning method and device, storage medium and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3739417A4 (en) * 2018-06-08 2021-02-24 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Navigation method, navigation system, mobile control system, and mobile robot
CN109643127B (en) * 2018-11-19 2022-05-03 深圳阿科伯特机器人有限公司 Map construction, positioning, navigation and control method and system, and mobile robot
CN111136648B (en) * 2019-12-27 2021-08-27 深圳市优必选科技股份有限公司 Mobile robot positioning method and device and mobile robot
CN113126602B (en) * 2019-12-30 2023-07-14 南京景曜智能科技有限公司 Positioning method of mobile robot
CN111220148A (en) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 Mobile robot positioning method, system and device and mobile robot
CN112161618B (en) * 2020-09-14 2023-03-28 灵动科技(北京)有限公司 Storage robot positioning and map construction method, robot and storage medium
CN112686951A (en) * 2020-12-07 2021-04-20 深圳乐动机器人有限公司 Method, device, terminal and storage medium for determining robot position

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090149994A1 (en) * 2007-12-11 2009-06-11 Samsung Electronics Co., Ltd. Method, medium, and apparatus for correcting pose of moving robot
US20170010100A1 (en) * 2015-07-09 2017-01-12 Panasonic Intellectual Property Corporation Of America Map production method, mobile robot, and map production system
CN105953798A (en) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 Determination method and apparatus for poses of mobile robot
CN109506641A (en) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 The pose loss detection and relocation system and robot of mobile robot
CN112256011A (en) * 2019-07-05 2021-01-22 苏州宝时得电动工具有限公司 Regression guiding method, regression guiding device, mobile robot, and storage medium
CN113907645A (en) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Mobile robot positioning method and device, storage medium and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934751A (en) * 2023-09-15 2023-10-24 深圳市信润富联数字科技有限公司 Acquisition method and device of high-precision point cloud, storage medium and electronic equipment
CN116934751B (en) * 2023-09-15 2024-01-12 深圳市信润富联数字科技有限公司 Acquisition method and device of high-precision point cloud, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113907645A (en) 2022-01-11

Similar Documents

Publication Publication Date Title
WO2023045644A1 (en) Positioning method and device for mobile robot, storage medium and electronic device
US11204247B2 (en) Method for updating a map and mobile robot
CN109074085B (en) Autonomous positioning and map building method and device and robot
CN107025662B (en) Method, server, terminal and system for realizing augmented reality
CN112734852B (en) Robot mapping method and device and computing equipment
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
CN110806215A (en) Vehicle positioning method, device, equipment and storage medium
WO2022078513A1 (en) Positioning method and apparatus, self-moving device, and storage medium
US20200278450A1 (en) Three-dimensional point cloud generation method, position estimation method, three-dimensional point cloud generation device, and position estimation device
Iocchi et al. Self-localization in the RoboCup environment
CN105116886A (en) Robot autonomous walking method
WO2018207426A1 (en) Information processing device, information processing method, and program
CN111380515B (en) Positioning method and device, storage medium and electronic device
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN107025661A (en) A kind of method for realizing augmented reality, server, terminal and system
CN111856499B (en) Map construction method and device based on laser radar
WO2022222345A1 (en) Positioning correction method and apparatus for mobile robot, storage medium, and electronic apparatus
WO2022002149A1 (en) Initial localization method, visual navigation device, and warehousing system
Haugaard et al. Multi-view object pose estimation from correspondence distributions and epipolar geometry
CN113063421A (en) Navigation method and related device, mobile terminal and computer readable storage medium
CN111563934B (en) Monocular vision odometer scale determination method and device
CN112689234A (en) Indoor vehicle positioning method and device, computer equipment and storage medium
CN116295406A (en) Indoor three-dimensional positioning method and system
CN113190564A (en) Map updating system, method and device
WO2020037553A1 (en) Image processing method and device, and mobile device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE