WO2023045644A1 - Positioning method and device for mobile robot, storage medium and electronic device


Info

Publication number
WO2023045644A1
WO2023045644A1 (PCT/CN2022/113375)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
state
target
target object
position coordinates
Prior art date
Application number
PCT/CN2022/113375
Other languages
English (en)
Chinese (zh)
Inventor
王朕
郁顺昌
齐焱
Original Assignee
追觅创新科技(苏州)有限公司
Priority date
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司 filed Critical 追觅创新科技(苏州)有限公司
Publication of WO2023045644A1 publication Critical patent/WO2023045644A1/fr


Classifications

    • A: HUMAN NECESSITIES; A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL; A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02-A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A47L11/4011: Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • G: PHYSICS; G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY; G01C21/00: Navigation; navigational instruments
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/38: Electronic maps specially adapted for navigation; updating thereof; G01C21/3804: Creation or updating of map data
    • A47L2201/00: Robotic cleaning machines; A47L2201/04: Automatic control of the travelling movement; automatic obstacle detection

Definitions

  • the present disclosure relates to the communication field, and in particular, to a positioning method and device for a mobile robot, a storage medium, and an electronic device.
  • Robots need to interact with the environment and with users, and environmental perception is the most basic and critical link in that interaction process.
  • When the pose of the robot fails, past information must be used to recover an accurate machine pose.
  • Existing machine relocalization techniques are divided into methods based on traditional feature points, methods based on machine learning, and methods based on deep learning. Among them, relocalization based on traditional feature points uses feature-point matching and then derives the machine pose.
  • Throughout this disclosure, VSLAM refers to Visual Simultaneous Localization And Mapping.
  • the purpose of the present disclosure is to provide a positioning method and device, a storage medium and an electronic device for a mobile robot, so as to at least solve the problem in the related art that the position of the mobile robot cannot be determined when the pose of the mobile robot fails.
  • a positioning method for a mobile robot, including: when it is detected that the mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot in a second state, where the first state is a state in which the mobile robot has a pose failure, and the second state is a state in which the mobile robot has no pose failure; determining first relative positions between the mobile robot and the at least two target objects; and calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
  • the acquiring the position coordinates of at least two target objects detected by the mobile robot in the second state includes: identifying at least two target objects based on the image data collected by the image acquisition component; and matching, according to the identifications of the at least two target objects, the position coordinates of the at least two target objects from a preset relationship library, wherein the preset relationship library stores the correspondence between the identification of a target object and the position coordinates of that target object.
  • the determining the first relative position between the mobile robot and the at least two target objects includes: determining the distances between the mobile robot and the at least two target objects according to a target ranging method to obtain at least two first distances; determining a first directional relationship between the mobile robot and the at least two target objects based on the image data collected by the image acquisition component; and determining the first relative positions of the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.
  • the method further includes: when the mobile robot is in the second state, identifying the target object based on the image data collected by the image acquisition component; determining the position coordinates of the target object; and correspondingly storing the identifier of the target object and the position coordinates of the target object in a preset relationship library.
  • the determining the position coordinates of the target object includes: determining a second relative position between the target object and the mobile robot; and determining the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
  • determining the second relative position between the target object and the mobile robot includes: determining the distance between the mobile robot and the target object according to a target ranging method to obtain a second distance; determining a second directional relationship between the mobile robot and the target object based on the image data; and determining the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
  • it is detected that the mobile robot is in the first state through at least one of the following methods: detecting that the moving wheels of the mobile robot are idling; detecting that the moving wheels of the mobile robot are not in contact with the target plane.
  • a positioning device for a mobile robot, including: an acquisition module, configured to acquire the position coordinates of at least two target objects detected by the mobile robot in the second state when it is detected that the mobile robot is in the first state, where the first state is a state in which the mobile robot has a pose failure, and the second state is a state in which the mobile robot has no pose failure; a determination module, configured to determine first relative positions between the mobile robot and the at least two target objects; and a calculation module, configured to calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions.
  • a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when run.
  • an electronic device including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the positioning method for a mobile robot described in any one of the above items.
  • the position coordinates of at least two target objects are obtained when the mobile robot does not have a pose failure; at the same time, the first relative positions between the mobile robot and the at least two target objects are determined, and then the position coordinates of the mobile robot can be calculated according to the first relative positions and the position coordinates of the target objects.
  • FIG. 1 is a block diagram of the hardware structure of a mobile robot according to a positioning method for a mobile robot according to an embodiment of the present disclosure;
  • FIG. 2 is a flowchart (1) of a positioning method for a mobile robot according to an embodiment of the present disclosure;
  • FIG. 3 is a flowchart (2) of a positioning method for a mobile robot according to an embodiment of the present disclosure;
  • FIG. 4 is a structural block diagram of a positioning device for a mobile robot according to an embodiment of the present disclosure.
  • FIG. 1 is a block diagram of a hardware structure of a robot according to a positioning method for a mobile robot according to an embodiment of the present disclosure.
  • the mobile robot may include one or more processors 102 (only one is shown in Figure 1), which may include, but are not limited to, a processing device such as a microprocessor (Microprocessor Unit, MPU for short) or a programmable logic device (Programmable Logic Device, PLD for short), and a memory 104 for storing data; optionally, the above-mentioned mobile robot may also include a transmission device 106 and an input/output device 108 for communication functions.
  • the structure shown in Figure 1 is only illustrative, and it does not limit the structure of the above-mentioned mobile robot.
  • the mobile robot may also include more or fewer components than shown in Figure 1, or have a configuration that is functionally equivalent to, or different from, that shown in Figure 1.
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the positioning method for a mobile robot in the embodiments of the present disclosure; the processor 102 runs the computer program stored in the memory 104, thereby executing various functional applications and data processing, that is, realizing the above-mentioned method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include memory located remotely from the processor 102, and these remote memories may be connected to the mobile robot through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or transmit data via the network.
  • a specific example of the above-mentioned network may include a wireless network provided by a mobile robot's communication provider.
  • the transmission device 106 includes a network interface controller (NIC for short), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet in a wireless manner.
  • FIG. 2 is a flowchart (1) of a positioning method for a mobile robot according to an embodiment of the disclosure. As shown in FIG. 2 , the process includes the following steps :
  • Step S202: when it is detected that the mobile robot is in the first state, obtain the position coordinates of at least two target objects detected by the mobile robot in the second state; the first state is a state in which the mobile robot has a pose failure, and the second state is a state in which the mobile robot does not have a pose failure;
  • whether the mobile robot is in the first state is detected by at least one of the following methods: detecting whether the moving wheels of the mobile robot are idling; detecting whether the moving wheels of the mobile robot are in contact with the target plane.
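By way of illustration only (not part of the disclosed embodiments), the two detection conditions above can be combined into a simple check. The sensor readings, function name, and thresholds below are assumptions for the sketch:

```python
# Hypothetical first-state (pose-failure) detection sketch.
# Inputs are assumed to come from wheel encoders and a wheel-drop switch.

def is_first_state(wheel_speed: float, body_displacement: float,
                   wheel_on_ground: bool, dt: float = 0.1) -> bool:
    """Return True when the mobile robot is judged to be in the first state.

    Two checks mirror the text: (1) the moving wheels are idling, i.e. they
    spin while the body barely moves; (2) the wheels have lost contact with
    the target plane (e.g. the robot was lifted).
    """
    if not wheel_on_ground:            # wheel-drop switch triggered
        return True
    expected = wheel_speed * dt        # distance the encoders claim
    idling = wheel_speed > 0.05 and body_displacement < 0.2 * expected
    return idling
```

Real systems would filter these signals over time rather than decide from a single sample.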
  • when VSLAM cannot determine the position of the mobile robot, it is determined that the mobile robot is in the first state, that is, in a state where the pose is invalid.
  • Step S204: determine a first relative position between the mobile robot and the at least two target objects;
  • Step S206: calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative position.
  • the position coordinates of at least two target objects are obtained when the mobile robot does not have a pose failure; at the same time, the first relative positions of the mobile robot and the at least two target objects are determined, and then the position coordinates of the mobile robot can be calculated according to the first relative positions and the position coordinates of the target objects.
  • the technical solution of the embodiments of the present application can be divided into two parts along the time axis: in the first part, the mobile robot has not yet had a pose failure; in the second part, a pose failure occurs.
  • when the mobile robot is in the second state, the target object is identified based on the image data collected by the image acquisition component; the position coordinates of the target object are determined; and the identifier of the target object and the position coordinates of the target object are correspondingly stored in a preset relationship library.
  • when the mobile robot is in a non-failure state, it can collect image data during travel through its own image acquisition component, use an object recognition model to identify the image data and determine the target object present and its identification, further determine the position coordinates of the target object, and then store the identification of the target object and the position coordinates of the target object in the preset relationship library.
  • the above image data includes: pictures, videos and so on.
  • the above-mentioned target objects include: obstacles, furniture, etc.
  • the above-mentioned mobile robot can determine the position coordinates of the target object in the following manner: determine the second relative position between the target object and the mobile robot; and determine the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
  • the mobile robot can determine its own coordinates through VSLAM, and then the mobile robot only needs to determine the relative position corresponding to the target object to determine the position coordinates of the target object.
  • for example, if the coordinates of the mobile robot are (2, 2) and the target object is one meter due west of the mobile robot (its relative position), then the coordinates of the target object are (1, 2); here the positive direction of the X-axis of the coordinate system is due east, the positive direction of the Y-axis is due north, and the length of one unit in the coordinate system is one meter.
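The worked example above can be sketched as a small helper in Python. The bearing convention (0° = due east along +X, 90° = due north along +Y) mirrors the axis convention stated in the text; the function name is illustrative:

```python
import math

# Compute a target object's global coordinates from the robot's
# coordinates plus a relative position given as bearing + distance.

def object_coords(robot_xy, bearing_deg, distance_m):
    """bearing_deg: 0 = due east (+X), 90 = due north (+Y)."""
    x, y = robot_xy
    rad = math.radians(bearing_deg)
    return (x + distance_m * math.cos(rad), y + distance_m * math.sin(rad))
```

With the robot at (2, 2) and the object one meter due west (bearing 180°), the result is (1, 2), matching the example.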
  • the determination of the second relative position between the target object and the mobile robot may be achieved in the following manner: determine the distance between the mobile robot and the target object according to the target ranging method , to obtain the second distance; determine the second direction relationship between the mobile robot and the target object based on the image data collected by the image acquisition component; determine the relationship between the mobile robot and the target object according to the second direction relationship and the second distance The second relative position of the target object.
  • the mobile robot can first determine the distance between the target object and itself through the target ranging method, and then determine, according to the image data collected by the image acquisition component, the direction in which the target object lies; after determining the direction and distance, it can determine the relative position. For example, the target object is determined to be one meter away from the mobile robot through the target ranging method and is determined to be due west of the mobile robot through the image data, so the relative position is one meter due west. It should be noted that there are many methods of target ranging, including monocular ranging, binocular ranging, depth-sensor ranging, laser ranging, and so on.
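As a minimal sketch of one of the ranging methods listed above, monocular ranging can be approximated with the pinhole similar-triangles relation. The focal length and object height below are illustrative assumptions; real systems calibrate these values (and often intersect the detection box with the ground plane instead):

```python
# Monocular ranging sketch via the pinhole model: the projected height h
# of an object of real height H at distance Z satisfies h = f * H / Z,
# so Z = f * H / h.

def monocular_distance(focal_px: float, real_height_m: float,
                       pixel_height: float) -> float:
    return focal_px * real_height_m / pixel_height
```

For instance, with an assumed focal length of 600 px, a 0.5 m tall object spanning 300 px would be estimated at 1.0 m.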
  • the mobile robot can determine the position coordinates of all target objects in the area where it is located while no pose failure occurs, and then store the mapping between each target object's identification and its position coordinates in a preset relationship library.
  • the position coordinates of at least two target objects detected by the mobile robot in the second state can be obtained first, and the first relative positions between the mobile robot and the at least two target objects can be determined; the mobile robot can then calculate its own position coordinates according to the position coordinates of the at least two target objects and the first relative positions.
  • obtaining the position coordinates of at least two target objects detected by the mobile robot in the second state may be achieved in the following manner: identify at least two target objects based on the image data collected by the image acquisition component; according to the identifications of the at least two target objects, obtain their position coordinates by matching from the preset relationship library, wherein the preset relationship library stores the correspondence between a target object's identification and its position coordinates.
  • the mobile robot needs to collect images of the target area through the image collection component, determine at least two target objects from the collected pictures, and determine the position coordinates of the at least two target objects from the preset relationship library. If the mobile robot determines target object A and target object B according to the image acquisition component, it can determine from the preset relationship library that the coordinates of target object A are (1, 2) and the coordinates of target object B are (5, 2).
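A minimal sketch of the preset relationship library, assuming it is a simple in-memory mapping from identification to global coordinates (the names `store` and `lookup` are hypothetical; a real robot would persist this alongside its map):

```python
# Preset relationship library sketch: identification -> position coordinates,
# filled while the robot is in the second (non-failure) state.

relationship_library: dict[str, tuple[float, float]] = {}

def store(identification: str, coords: tuple[float, float]) -> None:
    relationship_library[identification] = coords

def lookup(identifications: list[str]) -> list[tuple[float, float]]:
    # Only objects recorded during the second state can be matched.
    return [relationship_library[i] for i in identifications
            if i in relationship_library]
```

Matching the example in the text, storing A at (1, 2) and B at (5, 2) lets a later query by identification recover both coordinate pairs.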
  • determining the first relative positions of the mobile robot and at least two target objects may be achieved in the following manner: determine the distances between the mobile robot and the at least two target objects according to the target ranging method to obtain at least two first distances; determine the first directional relationship between the mobile robot and the at least two target objects based on the image data collected by the image acquisition component; and determine the first relative positions between the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.
  • the mobile robot can first determine the distance between each target object and itself through the target ranging method, and then determine, according to the image data collected by the image acquisition component, the direction in which each target object lies; after determining the direction and distance, it can determine the relative position.
  • the target object A is determined to be two meters away from the mobile robot through the target distance measurement method and is determined to be due west of the mobile robot through the image data, so the relative position is two meters due west.
  • the target object B is determined to be two meters away from the mobile robot through the target ranging method, and the target object B is determined to be in the due east direction of the mobile robot through the image data, and then the relative position is two meters due east.
  • the position coordinates of the mobile robot can then be calculated; for example, if the coordinates of target object A are (1, 2) and the coordinates of target object B are (5, 2), with target object A two meters to the west of the mobile robot and target object B two meters to the east, then the coordinates of the mobile robot are (3, 2).
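The calculation above can be sketched as follows (illustrative only; the function name and offset representation are assumptions). Each landmark's global coordinates minus the robot-to-landmark offset yields one estimate of the robot's coordinates, and the estimates are averaged:

```python
# Recover the robot's coordinates from landmarks with known global
# coordinates, given each landmark's offset from the robot expressed in
# the global frame (direction + distance resolved to a vector).

def robot_coords(landmarks, offsets):
    """landmarks: [(x, y), ...] global coords.
    offsets: [(dx, dy), ...] with landmark = robot + offset,
    so robot = landmark - offset."""
    estimates = [(lx - dx, ly - dy)
                 for (lx, ly), (dx, dy) in zip(landmarks, offsets)]
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n,
            sum(e[1] for e in estimates) / n)
```

With A at (1, 2) two meters west of the robot (offset (-2, 0)) and B at (5, 2) two meters east (offset (2, 0)), both estimates agree and the result is (3, 2), matching the example; with noisy measurements the estimates would be fused rather than simply averaged.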
  • the mobile robot includes two states during the traveling process.
  • the first state is an abnormal state in which a pose failure occurs
  • the second state is a normal state in which a pose failure does not occur.
  • during travel, the mobile robot detects surrounding object information (label information and detection-frame information) through the AI camera (equivalent to the above-mentioned image acquisition component), determines the distance from the object to the robot through the principle of monocular ranging so as to determine the relative position between the object and the robot, and then determines the global coordinates of the object according to that relative position (in the second state, the global coordinates of the robot can be determined based on VSLAM). For example, when the robot travels to position A and detects a refrigerator, the AI camera can label the refrigerator, determine the distance from the refrigerator to the robot through the principle of monocular distance measurement, thereby determine the relative position between the refrigerator and the robot, and then determine the global coordinates of the refrigerator according to the relative position of the two.
  • VSLAM cannot locate the global coordinates of the mobile robot due to pose failure.
  • at the position where the pose failure occurs, the AI camera can determine at least two objects within the robot's field of vision and their tags, so that the global coordinates corresponding to the two objects can be obtained according to those tags (these global coordinates were determined in the second state and are accurate); the distance from each object to the mobile robot is then determined through the principle of monocular distance measurement, thereby determining the relative position between each object and the mobile robot, and the global coordinates of the mobile robot are finally calculated based on the relative positions of the two objects to the mobile robot and the global coordinates of the two objects.
  • FIG. 3 is a flowchart (2) of a positioning method for a mobile robot according to an embodiment of the present disclosure. Based on the flowchart shown in FIG. 3, the technical solution provided by this optional embodiment of the present disclosure can be summarized as the following steps:
  • Step 1: the sweeping robot (equivalent to the mobile robot in the above embodiments) acquires, at time T2 (equivalent to the second state in the above embodiments), the object detection information at its current location;
  • the AI camera (equivalent to the image acquisition component in the above embodiment) is arranged in front of the cleaning robot, and the AI camera's collection field of view is the forward direction of the cleaning robot.
  • the sweeping robot turns on the AI camera in real time, and the sweeping robot inputs the collected images into the detection model to obtain the detection information of the target object on the ground in front.
  • the detection information includes: label information and detection frame information.
  • Step 2: obtain the global map coordinate information of the target object at T2 (equivalent to the global coordinates in the above-mentioned embodiments);
  • the target object pose in the machine coordinate system can be obtained by using the principle of monocular distance measurement (the relationship between projection transformation and rigid-body transformation); then, combined with the pose information at the current position of the sweeping robot, the coordinate information of the target object under the global map is obtained.
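The second half of this step, converting a point from the machine (robot) coordinate system to global-map coordinates via a rigid-body transform with the robot's pose, can be sketched as follows (illustrative names; a planar SE(2) pose is assumed):

```python
import math

# Rigid-body transform: global = R(theta) * machine + t, where the
# robot's global pose is (x, y, theta) and +x in the machine frame
# points along the robot's heading.

def machine_to_global(robot_pose, point_machine):
    x, y, th = robot_pose
    px, py = point_machine
    gx = x + px * math.cos(th) - py * math.sin(th)
    gy = y + px * math.sin(th) + py * math.cos(th)
    return (gx, gy)
```

For example, a table detected 1 m straight ahead of a robot at pose (1, 1, π/2) would map to global coordinates (1, 2).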
  • the sweeping robot detects the dining table A in front of it, combines the detection frame information and the current pose information of the machine to obtain the coordinate information of the dining table A in the global map coordinate system, and stores it in the sweeping robot.
  • Step 3: based on the global coordinate information of two or more target objects at the current position, determine the pose information of the sweeping robot;
  • the AI camera captures the detection information of more than two target objects.
  • the target object is an object that has been detected by AI many times during the normal driving of the sweeping robot (the machine pose exists and is accurate) before the pose failure of the sweeping robot, and the coordinate information under the global map is given.
  • the coordinate information of the target object in the machine coordinate system (equivalent to the position coordinates in the above embodiments) is restored; then, according to the coordinate transformation relationship between the machine coordinate system and the global coordinate system and the known global coordinates of the target objects, the machine pose of the sweeping robot is solved, finally obtaining the pose information of the machine at this time.
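This reverse calculation can be sketched for the two-landmark case (illustrative only; exactly two noise-free correspondences are assumed, whereas a real implementation would use a least-squares fit over all detections). Given two objects' machine-frame coordinates and their known global coordinates, the rotation follows from the angle between the landmark-to-landmark segments in each frame, and the translation then falls out of either correspondence:

```python
import math

# Solve the robot pose (x, y, theta) from two point correspondences
# satisfying global_i = R(theta) * machine_i + t.

def solve_pose(machine_pts, global_pts):
    (m1, m2), (g1, g2) = machine_pts, global_pts
    # Rotation: angle of the segment between the two landmarks in each frame.
    ang_m = math.atan2(m2[1] - m1[1], m2[0] - m1[0])
    ang_g = math.atan2(g2[1] - g1[1], g2[0] - g1[0])
    theta = ang_g - ang_m
    # Translation: t = g1 - R(theta) * m1.
    c, s = math.cos(theta), math.sin(theta)
    tx = g1[0] - (c * m1[0] - s * m1[1])
    ty = g1[1] - (s * m1[0] + c * m1[1])
    return (tx, ty, theta)
```

Using the earlier example, objects seen at (-2, 0) and (2, 0) in the machine frame whose global coordinates are (1, 2) and (5, 2) yield the pose (3, 2, 0).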
  • Step 4: based on the pose information in Step 3, update the pose when the VSLAM positioning fails and correct the erroneous relocation, so as to avoid an inaccurate map.
  • the VSLAM pose failure occurs when the machine is lifted and relocated, and the relocation error mostly occurs during the wheel slipping process.
  • the machine pose information calculated in reverse is used to fill in and correct the machine pose at this time; in this way, inaccurate VSLAM mapping is avoided.
  • the above-mentioned embodiments of the present disclosure use AI real-time detection to establish the semantic information and global map coordinate information of ground target objects.
  • use AI to identify and match more than two target objects to obtain their a priori global map coordinates, and then fill in and correct the current machine pose; this solves the problems of traditional VSLAM positioning failure and positioning error, improves the accuracy of positioning and mapping, and avoids inaccurate maps.
  • the embodiments of the present disclosure can also directly use the deep neural network to realize end-to-end machine pose prediction.
  • use a CNN to encode images, construct a database containing image features and real-world poses, and then achieve relative pose prediction by matching the most similar images in the database.
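The retrieval step of this alternative can be sketched as follows. The CNN encoder itself is assumed to exist elsewhere and is not shown; here features are plain vectors, and the most similar database entry (by cosine similarity) supplies the pose prediction:

```python
import math

# Image-retrieval pose prediction sketch: match a query feature vector
# against a database of (feature, pose) pairs and return the pose of the
# most similar entry.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def predict_pose(query_feature, database):
    """database: list of (feature_vector, pose) pairs."""
    best = max(database, key=lambda entry: cosine(query_feature, entry[0]))
    return best[1]
```

In practice the matched pose would seed a relative-pose refinement rather than be used directly.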
  • the technical solution of the present disclosure, in essence or the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the various embodiments of the present disclosure.
  • a positioning device for a mobile robot is also provided, and the positioning device for a mobile robot is used to implement the above-mentioned embodiments and preferred implementation modes, and those that have already been described will not be repeated.
  • the term "module” may be a combination of software and/or hardware that realizes a predetermined function.
  • the devices described in the following embodiments are preferably implemented in software, but implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • Fig. 4 is a structural block diagram of a positioning device for a mobile robot according to an embodiment of the present disclosure. As shown in Fig. 4, the device includes:
  • the obtaining module 42 is used to obtain the position coordinates of at least two target objects detected by the mobile robot in the second state when it is detected that the mobile robot is in the first state; the first state is a state in which the mobile robot has a pose failure, and the second state is a state in which the mobile robot has no pose failure;
  • a determining module 44 configured to determine a first relative position between the mobile robot and the at least two target objects
  • a calculation module 46 configured to calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative position.
  • with this device, when a pose failure occurs in the mobile robot, the position coordinates of at least two target objects detected while the mobile robot had no pose failure are obtained; at the same time, the first relative position between the mobile robot and the at least two target objects is determined, and the position coordinates of the mobile robot can then be calculated from the first relative position and the position coordinates of the target objects.
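The calculation sketched above can be made concrete in code. The following Python fragment is an illustrative, simplified model and not part of the original disclosure: it assumes each first relative position is expressed as a measured distance plus an absolute bearing in the map frame, estimates the robot's coordinates from each landmark independently, and averages the estimates.

```python
import math

def estimate_robot_position(landmarks, bearings, distances):
    """Estimate the robot's map coordinates from at least two landmarks.

    landmarks: [(x, y), ...] map coordinates of the detected target objects
    bearings:  [theta, ...]  assumed absolute bearings (radians, map frame)
                             from the robot toward each object
    distances: [d, ...]      measured robot-to-object distances
    """
    estimates = []
    for (lx, ly), theta, d in zip(landmarks, bearings, distances):
        # The robot lies `d` behind the object along the bearing direction.
        estimates.append((lx - d * math.cos(theta), ly - d * math.sin(theta)))
    n = len(estimates)
    return (sum(x for x, _ in estimates) / n,
            sum(y for _, y in estimates) / n)
```

Averaging the per-landmark estimates is one simple way to combine the at least two measurements; a real implementation might instead solve a least-squares problem over all observations.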
  • the acquisition module 42 is further configured to identify at least two target objects based on the image data collected by the image acquisition component, and to obtain the position coordinates of each of the at least two target objects from a preset relationship library according to the identifications of the at least two target objects; wherein the preset relationship library stores the correspondence between the identification of a target object and the position coordinates of that target object.
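A minimal sketch of such a preset relationship library, assuming a simple in-memory mapping from an object identifier to its map coordinates. The identifiers and coordinates below are invented for illustration only:

```python
# Hypothetical preset relationship library: identifier -> map coordinates.
# The identifiers and coordinates are invented for illustration only.
landmark_library = {
    "charging_dock": (0.0, 0.0),
    "sofa": (3.2, 1.5),
    "table": (1.0, 4.0),
}

def lookup_landmarks(detected_ids, library):
    """Return the stored coordinates of every detected object with an entry."""
    return {oid: library[oid] for oid in detected_ids if oid in library}
```

Objects recognized in the image data but absent from the library are simply skipped; positioning then proceeds with the remaining known landmarks.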
  • the determination module 44 is further configured to determine the distances between the mobile robot and the at least two target objects according to the target distance measurement method to obtain at least two first distances; to determine, based on the image data collected by the image acquisition component, a first directional relationship between the mobile robot and the at least two target objects; and to determine the first relative position between the mobile robot and the at least two target objects according to the first directional relationship and the at least two first distances.
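Combining a first distance (from ranging) and a first directional relationship (from the image data) into a first relative position can be illustrated as a polar-to-Cartesian conversion. Modelling the directional relationship as a single bearing angle is an assumption for illustration, not mandated by the disclosure:

```python
import math

def first_relative_position(distance, bearing_rad):
    """Combine a first distance and a first directional relationship
    (here modelled as a single bearing angle) into a Cartesian offset
    from the robot to the object."""
    return (distance * math.cos(bearing_rad),
            distance * math.sin(bearing_rad))
```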
  • the calculation module 46 is further configured to identify the target object based on the image data collected by the image acquisition component when the mobile robot is in the second state, determine the position coordinates of the target object, and store the identifier of the target object and the position coordinates of the target object correspondingly in the preset relationship library.
  • the calculation module 46 is further configured to determine a second relative position between the target object and the mobile robot; determine the position coordinates of the target object according to the second relative position and the position coordinates of the mobile robot.
  • the calculation module 46 is further configured to determine the distance between the mobile robot and the target object according to the target ranging method to obtain a second distance; to determine, based on the image data collected by the image acquisition component, a second directional relationship between the mobile robot and the target object; and to determine the second relative position between the mobile robot and the target object according to the second directional relationship and the second distance.
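Putting the two steps together, the target object's map coordinates follow from the robot's own coordinates plus the measured offset. The sketch below assumes the second directional relationship is a bearing measured in the robot's frame, so it is first rotated by the robot's heading; the names and frame conventions are illustrative assumptions:

```python
import math

def object_position(robot_x, robot_y, robot_heading,
                    distance, bearing_in_robot_frame):
    """Map coordinates of a target object: robot position plus the measured
    (distance, direction) offset rotated from the robot frame into the map frame."""
    theta = robot_heading + bearing_in_robot_frame
    return (robot_x + distance * math.cos(theta),
            robot_y + distance * math.sin(theta))
```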
  • the acquisition module 42 is further configured to detect that the mobile robot is in the first state in at least one of the following ways: detecting that the moving wheels of the mobile robot are idling; detecting that the moving wheels of the mobile robot are not in contact with the target plane.
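The two detection conditions can be combined into one heuristic check. Everything in this sketch (the signal names and the 0.5 slip threshold) is an assumption made for illustration; the disclosure only specifies the two conditions themselves:

```python
def is_first_state(wheels_on_target_plane, commanded_wheel_speed,
                   measured_body_speed, slip_ratio=0.5):
    """Heuristic check for the first state (pose failure).

    True when the moving wheels are off the target plane, or when the
    wheels are driven but the robot body barely moves (idling / slip).
    Signal names and the 0.5 threshold are illustrative assumptions.
    """
    if not wheels_on_target_plane:
        return True
    if commanded_wheel_speed > 0 and (
            measured_body_speed / commanded_wheel_speed) < slip_ratio:
        return True
    return False
```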
  • the above-mentioned modules can be implemented in software or hardware. In the latter case, this can be achieved in, but is not limited to, the following ways: the above-mentioned modules are all located in the same processor; or the above-mentioned modules are distributed among different processors in any combination.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • the above-mentioned storage medium may be configured to store a computer program for performing the following steps:
  • when it is detected that the mobile robot is in the first state, acquire the position coordinates of at least two target objects detected by the mobile robot in the second state;
  • the first state is a state in which a pose failure has occurred in the mobile robot;
  • the second state is a state in which no pose failure has occurred in the mobile robot;
  • determine a first relative position between the mobile robot and the at least two target objects;
  • calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative position.
  • the above-mentioned storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • Embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to execute the following steps through a computer program:
  • when it is detected that the mobile robot is in the first state, acquire the position coordinates of at least two target objects detected by the mobile robot in the second state;
  • the first state is a state in which a pose failure has occurred in the mobile robot;
  • the second state is a state in which no pose failure has occurred in the mobile robot;
  • determine a first relative position between the mobile robot and the at least two target objects;
  • calculate the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative position.
  • each module or step of the present disclosure described above can be implemented by a general-purpose computing device; the modules or steps can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they can be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; in some cases, the steps shown or described can be performed in an order different from that shown here. Alternatively, they can be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them can be fabricated into a single integrated circuit module.
  • the present disclosure is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Positioning method and device for a mobile robot, storage medium, and electronic device. The method includes: when it is detected that a mobile robot is in a first state, acquiring the position coordinates of at least two target objects detected by the mobile robot in a second state, the first state being a state in which a pose failure occurs in the mobile robot, and the second state being a state in which no pose failure occurs in the mobile robot (S202); determining first relative positions between the mobile robot and the at least two target objects (S204); and calculating the position coordinates of the mobile robot according to the position coordinates of the at least two target objects and the first relative positions (S206). The method solves the problem in the related art that, when a pose failure occurs in a mobile robot, the position of the mobile robot cannot be determined.
PCT/CN2022/113375 2021-09-23 2022-08-18 Positioning method and device for mobile robot, storage medium and electronic device WO2023045644A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111117669.5A CN113907645A (zh) 2021-09-23 2021-09-23 Positioning method and device for mobile robot, storage medium and electronic device
CN202111117669.5 2021-09-23

Publications (1)

Publication Number Publication Date
WO2023045644A1 true WO2023045644A1 (fr) 2023-03-30

Family

ID=79236005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113375 WO2023045644A1 (fr) 2021-09-23 2022-08-18 Positioning method and device for mobile robot, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN113907645A (fr)
WO (1) WO2023045644A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934751A (zh) * 2023-09-15 2023-10-24 深圳市信润富联数字科技有限公司 High-precision point cloud acquisition method and device, storage medium, and electronic device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113907645A (zh) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Positioning method and device for mobile robot, storage medium and electronic device
CN114519739A (zh) * 2022-04-21 2022-05-20 深圳史河机器人科技有限公司 Direction positioning method and device based on a recognition device, and storage medium
CN116185046B (zh) * 2023-04-27 2023-06-30 北京宸普豪新科技有限公司 Positioning method for mobile robot, mobile robot, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090149994A1 (en) * 2007-12-11 2009-06-11 Samsung Electronics Co., Ltd. Method, medium, and apparatus for correcting pose of moving robot
CN105953798A (zh) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 Pose determination method and device for mobile robot
US20170010100A1 (en) * 2015-07-09 2017-01-12 Panasonic Intellectual Property Corporation Of America Map production method, mobile robot, and map production system
CN109506641A (zh) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 Pose loss detection and relocation system for mobile robot, and robot
CN112256011A (zh) * 2019-07-05 2021-01-22 苏州宝时得电动工具有限公司 Return guidance method, return guidance device, mobile robot, and storage medium
CN113907645A (zh) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Positioning method and device for mobile robot, storage medium and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3739417A4 (fr) * 2018-06-08 2021-02-24 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Navigation method and system, mobile control system, and mobile robot
CN109643127B (zh) * 2018-11-19 2022-05-03 深圳阿科伯特机器人有限公司 Map construction, positioning, navigation and control methods and systems, and mobile robot
CN111136648B (zh) * 2019-12-27 2021-08-27 深圳市优必选科技股份有限公司 Positioning method and positioning device for mobile robot, and mobile robot
CN113126602B (zh) * 2019-12-30 2023-07-14 南京景曜智能科技有限公司 Positioning method for mobile robot
CN111220148A (zh) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 Positioning method, system and device for mobile robot, and mobile robot
CN112161618B (zh) * 2020-09-14 2023-03-28 灵动科技(北京)有限公司 Warehouse robot positioning and map construction method, robot, and storage medium
CN112686951A (zh) * 2020-12-07 2021-04-20 深圳乐动机器人有限公司 Method, device, terminal and storage medium for determining robot position


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934751A (zh) * 2023-09-15 2023-10-24 深圳市信润富联数字科技有限公司 High-precision point cloud acquisition method and device, storage medium, and electronic device
CN116934751B (zh) * 2023-09-15 2024-01-12 深圳市信润富联数字科技有限公司 High-precision point cloud acquisition method and device, storage medium, and electronic device

Also Published As

Publication number Publication date
CN113907645A (zh) 2022-01-11

Similar Documents

Publication Publication Date Title
WO2023045644A1 (fr) Positioning method and device for mobile robot, storage medium and electronic device
US11204247B2 (en) Method for updating a map and mobile robot
CN109074085B (zh) 一种自主定位和地图建立方法、装置和机器人
CN107025662B (zh) 一种实现增强现实的方法、服务器、终端及系统
CN112734852B (zh) 一种机器人建图方法、装置及计算设备
CN110470333B (zh) 传感器参数的标定方法及装置、存储介质和电子装置
CN110806215A (zh) 车辆定位的方法、装置、设备及存储介质
WO2022078513A1 (fr) Procédé et appareil de positionnement, dispositif automoteur et support d'enregistrement
US20200278450A1 (en) Three-dimensional point cloud generation method, position estimation method, three-dimensional point cloud generation device, and position estimation device
Iocchi et al. Self-localization in the RoboCup environment
CN105116886A (zh) 一种机器人自主行走的方法
WO2018207426A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
CN111380515B (zh) 定位方法及装置、存储介质、电子装置
CN116222543B (zh) 用于机器人环境感知的多传感器融合地图构建方法及系统
CN107025661A (zh) 一种实现增强现实的方法、服务器、终端及系统
CN111856499B (zh) 基于激光雷达的地图构建方法和装置
WO2022222345A1 (fr) Procédé et appareil de correction de positionnement pour robot mobile, support de stockage et appareil électronique
WO2022002149A1 (fr) Procédé de localisation initiale, dispositif de navigation visuelle et système d'entreposage
Haugaard et al. Multi-view object pose estimation from correspondence distributions and epipolar geometry
CN113063421A (zh) 导航方法及相关装置、移动终端、计算机可读存储介质
CN111563934B (zh) 单目视觉里程计尺度确定方法和装置
CN112689234A (zh) 室内车辆定位方法、装置、计算机设备和存储介质
CN116295406A (zh) 一种室内三维定位方法及系统
CN113190564A (zh) 地图更新系统、方法及设备
WO2020037553A1 (fr) Procédé et dispositif de traitement d'image et dispositif mobile

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE