CN112697151B - Method, apparatus, and storage medium for determining initial point of mobile robot


Info

Publication number
CN112697151B
Authority
CN
China
Prior art keywords
mobile robot
coordinate system
position range
determining
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011554388.1A
Other languages
Chinese (zh)
Other versions
CN112697151A (en)
Inventor
李江勇
余永旺
何径舟
万小波
孟宪鹏
王康
董粤强
赵雨蒙
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011554388.1A
Publication of CN112697151A
Application granted
Publication of CN112697151B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Abstract

The application discloses a method, an apparatus, and a storage medium for determining an initial point of a mobile robot, and relates to the technical field of artificial intelligence. The specific implementation scheme is as follows: after the robot is started, a first position range of the robot in a world coordinate system is determined from the UWB positioning data of the mobile robot; a second position range of the mobile robot in a visual map coordinate system is determined from the first position range; visual repositioning is performed using second image information within the second position range and first image information around the mobile robot, yielding a third position range of the mobile robot in the visual map coordinate system; a fourth position range corresponding to the third position range is determined in the laser map coordinate system of the mobile robot; and the center point of the fourth position range is set as the initial point of the mobile robot. The initial point of the robot is thus determined automatically, so that the mobile robot can be started at any position.

Description

Method, apparatus, and storage medium for determining initial point of mobile robot
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a storage medium for determining an initial point of a mobile robot.
Background
With the continuous development of robot technology, robots are being applied ever more widely. When a mobile robot provides services, initial point position data must be supplied to it after it is started. In the related art, the starting position of the mobile robot is usually set by manual intervention, so that the mobile robot must be started at a fixed point.
Disclosure of Invention
Provided are a method, an apparatus, and a storage medium for determining an initial point of a mobile robot.
According to an aspect of the present application, there is provided a method of determining an initial point of a mobile robot, including: responding to a starting request of a mobile robot, acquiring ultra-wideband UWB positioning data of the mobile robot, and acquiring first image information around the mobile robot, wherein the first image information is acquired by the mobile robot; determining a first position range of the mobile robot in a world coordinate system according to the UWB positioning data; determining a second position range of the mobile robot under a visual map coordinate system according to the first position range; acquiring second image information in the second position range according to the visual map corresponding to the visual map coordinate system; performing visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot under the visual map coordinate system; and determining a fourth position range of the third position range under a laser map coordinate system of the mobile robot, and setting a center point of the fourth position range as an initial point of the mobile robot.
According to another aspect of the present application, there is provided an apparatus for determining an initial point of a mobile robot, including: a first acquisition module, configured to respond to a starting request of a mobile robot, acquire ultra-wideband UWB positioning data of the mobile robot, and acquire first image information around the mobile robot, the first image information being acquired by the mobile robot; a first determining module, configured to determine a first position range of the mobile robot in a world coordinate system according to the UWB positioning data; a second determining module, configured to determine a second position range of the mobile robot in a visual map coordinate system according to the first position range; a second acquisition module, configured to acquire second image information within the second position range according to the visual map corresponding to the visual map coordinate system; a visual repositioning module, configured to perform visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot in the visual map coordinate system; and an initial point setting module, configured to determine a fourth position range of the third position range in a laser map coordinate system of the mobile robot and to set the center point of the fourth position range as the initial point of the mobile robot.
According to another aspect of the present application, there is provided a mobile robot including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining an initiation point of a mobile robot of the present application.
According to another aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for determining an initial point of a mobile robot disclosed in the embodiments of the present application.
According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method for determining an initial point of a mobile robot disclosed in embodiments of the present application.

An embodiment of the above application has the following advantages or benefits: after the robot is started, a first position range of the robot in a world coordinate system is determined from the UWB positioning data of the mobile robot; a second position range of the mobile robot in a visual map coordinate system is then determined from the first position range; visual repositioning is performed using second image information within the second position range and first image information around the mobile robot, yielding a third position range of the mobile robot in the visual map coordinate system; a fourth position range of the third position range in the laser map coordinate system of the mobile robot is determined; and the center point of the fourth position range is set as the initial point of the mobile robot. Thus, after the robot is started, the initial point is set automatically by combining UWB positioning with visual repositioning that assists laser positioning, the need for manual intervention is reduced, and the mobile robot can conveniently be started at any position.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present application, nor are they intended to limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart illustrating a method for determining an initial point of a mobile robot according to a first embodiment of the present application;
fig. 2 is a detailed flowchart of performing visual repositioning according to first image information and second image information to obtain a third position range of the mobile robot in a visual map coordinate system, according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for determining an initial point of a mobile robot according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for determining an initial point of a mobile robot according to a third embodiment of the present application;
fig. 5 is a block diagram of a mobile robot for implementing the method for determining an initial point of a mobile robot according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to assist understanding, and these details are to be regarded as exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
A method, an apparatus, a mobile robot, and a storage medium for determining an initial point of a mobile robot according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a method for determining an initial point of a mobile robot according to a first embodiment of the present application.
As shown in fig. 1, the method for determining an initial point of a mobile robot may include:
step 101, responding to a starting request of the mobile robot, acquiring ultra-wideband UWB positioning data of the mobile robot, and acquiring first image information around the mobile robot, wherein the first image information is acquired by the mobile robot.
It should be noted that the execution subject of the method is an apparatus for determining an initial point of a mobile robot. The apparatus may be implemented by software and/or hardware, and may be configured in the mobile robot itself or in an electronic device that communicates with the mobile robot. The electronic device may include a terminal device and a server; the terminal device may include, but is not limited to, a computer, a smartphone, a portable device, and the like, and is not specifically limited in this embodiment.
It can be understood that, in different application scenarios, the start request of the mobile robot may be triggered in different ways: it may be triggered when the mobile robot is detected to reach its start time; a user may trigger a start control on the electronic device, which in turn starts the mobile robot communicating with the electronic device; or a trigger operation on a start control or button on the mobile robot itself may be received.
The vision acquisition module of the mobile robot may include a vision sensor.
In one embodiment of the present application, the visual sensor may include a camera.
In one embodiment of the application, after the robot is started, a vision acquisition module in the mobile robot captures images around the mobile robot to obtain the first image information. Correspondingly, an ultra-wideband (UWB) positioning module in the mobile robot acquires the UWB positioning data of the mobile robot. The first image information and the UWB positioning data are then transmitted to the apparatus for determining the initial point of the mobile robot, so that the apparatus obtains both.
And 102, determining a first position range of the mobile robot in a world coordinate system according to the UWB positioning data.
In one embodiment of the present application, one possible implementation of determining the first position range of the mobile robot in the world coordinate system from the UWB positioning data is as follows: the UWB base stations arranged in the target area where the mobile robot is located are identified, the position of each UWB base station in the world coordinate system is obtained, and the first position range of the robot in the world coordinate system is determined by combining these base-station positions with the UWB positioning data.
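The patent does not specify how the first position range is computed from the base-station positions and the UWB data. A minimal sketch, assuming the UWB data consists of range measurements to three base stations (all station coordinates, distances, and the `ranging_error` margin below are hypothetical), is linear trilateration, with the first position range taken as a disc around the estimate:

```python
import math

def trilaterate(stations, distances):
    """Estimate a 2D position from ranges to three UWB base stations.

    stations: [(x1, y1), (x2, y2), (x3, y3)] in world coordinates.
    distances: measured ranges to each station.
    Subtracting the circle equations pairwise yields a 2x2 linear
    system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = stations
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def first_position_range(stations, distances, ranging_error=0.3):
    """Return (center, radius): a disc covering the likely position,
    its radius set by an assumed UWB ranging error in meters."""
    return trilaterate(stations, distances), ranging_error

center, radius = first_position_range(
    [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
    [5.0, math.sqrt(65.0), math.sqrt(45.0)])
# center is approximately (3.0, 4.0)
```

Real UWB deployments typically use more than three anchors and a least-squares or Kalman-filter solver; the closed-form version above only illustrates the geometry.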
And 103, determining a second position range of the mobile robot in the visual map coordinate system according to the first position range.
The visual map coordinate system in this embodiment is a coordinate system established based on the acquisition of image information of a target area where the mobile robot is located by a visual acquisition module in the mobile robot.
The visual map coordinate system in the present embodiment corresponds to the visual map.
In one embodiment of the present application, in order to facilitate subsequent accurate determination of the initial point of the mobile robot, one possible implementation of determining the second position range of the mobile robot in the visual map coordinate system according to the first position range is as follows: a first coordinate conversion relationship between the world coordinate system and the visual map coordinate system is obtained, and the first position range is converted according to this relationship to obtain the second position range of the mobile robot in the visual map coordinate system.
In another embodiment of the present application, a corresponding conversion model may be obtained according to the world coordinate system and the visual map coordinate system, and the first position range is input into the conversion model to obtain a second position range of the mobile robot in the visual map coordinate system.
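The form of the first coordinate conversion relationship is not fixed by the patent. A common choice for two 2D map frames is a rigid SE(2) transform; the sketch below (rotation angle and translation are hypothetical calibration values) converts a circular position range between frames, where only the center needs mapping because a rigid transform preserves distances:

```python
import math

def se2_transform(point, theta, tx, ty):
    """Apply a 2D rigid transform: rotate by theta, then translate."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + tx, s * x + c * y + ty)

def convert_range(center, radius, theta, tx, ty):
    """Convert a circular position range between map frames.
    The radius is unchanged under a rigid transform."""
    return se2_transform(center, theta, tx, ty), radius

# Hypothetical calibration: the visual map frame is the world frame
# rotated by 90 degrees and shifted by (1, 2).
second_center, second_radius = convert_range(
    (1.0, 0.0), 0.5, math.pi / 2, 1.0, 2.0)
# second_center is approximately (1.0, 3.0); second_radius stays 0.5
```

If the two maps also differ in scale, a similarity transform (rigid transform plus a scale factor on both the center and the radius) would be used instead.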
And 104, acquiring second image information in a second position range according to the visual map corresponding to the visual map coordinate system.
In one embodiment of the present application, a visual map corresponding to a visual map coordinate system may be acquired, and then second image information within a second range of positions may be acquired from the visual map.
And 105, performing visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot under a visual map coordinate system.
It can be understood that, in different application scenarios, the implementation manner of performing the visual repositioning according to the first image information and the second image information to obtain the third position range of the mobile robot in the visual map coordinate system is different, for example, as follows:
as a possible implementation manner, the first image information and the second image information may be input into a visual positioning model, and a third position range of the mobile robot in the visual map coordinate system may be determined by the visual positioning model.
As another possible implementation manner, the first image information and the second image information may be combined to perform visual positioning based on a visual positioning algorithm, so as to obtain a third position range of the mobile robot in the visual map coordinate system.
In the following embodiments, other possible implementations of performing the visual repositioning according to the first image information and the second image information to obtain the third position range of the mobile robot in the visual map coordinate system are described.
The third position range in the present embodiment is included in the second position range.
And 106, determining a fourth position range of the third position range under the laser map coordinate system of the mobile robot, and setting the center point of the fourth position range as the initial point of the mobile robot.
In one embodiment of the application, in order to subsequently determine the initial point of the mobile robot accurately, the fourth position range of the robot in the laser map coordinate system must itself be determined accurately. One possible implementation of determining the fourth position range of the third position range in the laser map coordinate system of the mobile robot is as follows: a second coordinate conversion relationship between the visual map coordinate system and the laser map coordinate system is obtained, and the third position range is converted according to this relationship to obtain the fourth position range in the laser map coordinate system of the mobile robot.
Because visual positioning alone has limited accuracy, in this embodiment the third position range obtained by visual positioning is converted using the second coordinate conversion relationship between the visual map coordinate system and the laser map coordinate system to obtain the fourth position range in the laser map coordinate system of the mobile robot, which then allows the initial point of the mobile robot to be determined conveniently and accurately.
In another embodiment of the present application, the fourth position range of the third position range in the laser map coordinate system of the mobile robot may also be determined as follows: a conversion model corresponding to the laser map coordinate system and the visual map coordinate system is obtained, and the third position range is converted according to the conversion model to obtain the fourth position range in the laser map coordinate system of the mobile robot.
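Once the fourth position range is available in the laser map frame, step 106 takes its center point as the initial point. A minimal sketch, assuming the range is represented as a list of polygon vertices (the patent does not fix a representation), uses the vertex centroid as the center:

```python
def range_center(vertices):
    """Center of a position range given as polygon vertices.
    The vertex centroid is adequate for small, roughly convex ranges."""
    n = len(vertices)
    return (sum(x for x, _ in vertices) / n,
            sum(y for _, y in vertices) / n)

# A square fourth position range in the laser map frame:
initial_point = range_center(
    [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)])
# initial_point == (1.0, 1.0)
```

For a circular range, the center is simply the circle's center; for an irregular polygon, the area centroid would be a more robust choice than the vertex average.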
The laser map coordinate system is a coordinate system established based on environment data of a target area acquired by a laser sensor in the mobile robot. Wherein, the target area is the area where the mobile robot is located.
According to the method for determining the initial point of a mobile robot, after the robot is started, a first position range of the robot in a world coordinate system is determined from the UWB positioning data of the mobile robot; a second position range of the mobile robot in a visual map coordinate system is then determined from the first position range; visual repositioning is performed using second image information within the second position range and first image information around the mobile robot, yielding a third position range of the mobile robot in the visual map coordinate system; a fourth position range of the third position range in the laser map coordinate system of the mobile robot is determined; and the center point of the fourth position range is set as the initial point of the mobile robot. Thus, after the robot is started, the initial point is set automatically by combining UWB positioning with visual repositioning that assists laser positioning, the need for manual intervention is reduced, and the mobile robot can conveniently be started at any position.
Based on the foregoing embodiment, in an embodiment of the present application, in order to accurately determine the third position range of the mobile robot in the visual map coordinate system, as shown in fig. 2, the performing visual repositioning according to the first image information and the second image information to obtain the third position range of the mobile robot in the visual map coordinate system may include:
Step 201, feature point extraction is performed on the first image information to obtain first feature point information of the first image information.
In an embodiment of the present application, feature point extraction may be performed on the first image information through a feature point extraction model or a preset feature point extraction algorithm to obtain the first feature point information of the first image information.
Step 202, feature point extraction is performed on the second image information to obtain second feature point information of the second image information.
In an embodiment of the application, feature point extraction may be performed on the second image information through a feature point extraction model or a preset feature point extraction algorithm, so as to obtain second feature point information of the second image information.
And step 203, performing characteristic point matching on the first characteristic point information and the second characteristic point information.
The first feature point information may include a plurality of first feature points, and the second feature point information may include a plurality of second feature points.
In an embodiment of the present application, the plurality of first feature points may be matched against the plurality of second feature points as follows: an untraversed feature point is taken from the plurality of first feature points as the current feature point; the current feature point is then compared with each of the second feature points, and if the current feature point is the same as one of the second feature points, the current feature point is determined to match that second feature point; this continues until all of the first feature points have been traversed.
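The traversal above can be sketched as follows. The patent describes an exact comparison; practical feature matchers (e.g., for binary descriptors such as ORB's) instead accept the nearest descriptor within a distance threshold, which is the hypothetical variant shown here (all descriptors, coordinates, and the `max_dist` threshold are invented for illustration):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors (as integers)."""
    return bin(a ^ b).count("1")

def match_features(first_pts, second_pts, max_dist=10):
    """Greedy nearest-neighbor matching of feature descriptors.

    Each feature is (descriptor, (x, y)). Every first feature is
    compared against all second features, as in the traversal the
    text describes; a pair matches when the closest descriptor lies
    within max_dist.
    """
    matches = []
    for desc1, pos1 in first_pts:
        best = min(second_pts, key=lambda f: hamming(desc1, f[0]))
        if hamming(desc1, best[0]) <= max_dist:
            matches.append((pos1, best[1]))
    return matches

first = [(0b10110010, (1.0, 2.0)), (0b01010101, (3.0, 1.0))]
second = [(0b10110011, (5.2, 6.1)), (0b11110000, (7.0, 8.0))]
# 0b10110010 vs 0b10110011 differ in one bit -> a match;
# 0b01010101 is far from both second descriptors -> no match.
pairs = match_features(first, second, max_dist=2)
```

Each returned pair links a camera-image feature position to a visual-map feature position; the map-side positions of the matched pairs are what the next step uses to bound the robot's location.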
And 204, determining a third position range of the mobile robot in the visual map coordinate system according to the matched feature point information.
In an embodiment of the present application, in different application scenarios, according to the matched feature point information, the implementation manner of determining the third position range of the mobile robot in the visual map coordinate system is different, for example, as follows:
as an exemplary embodiment, one possible implementation manner of determining the third position range of the mobile robot in the visual map coordinate system according to the matched feature point information is as follows: acquiring an innermost closed region containing matched feature point information under a visual map coordinate system; and determining a third position range of the mobile robot under the coordinates of the visual map according to the innermost closed region.
In this embodiment, the innermost enclosed region may be used as a third position range of the mobile robot in the visual map coordinates.
In the embodiment, the third position range of the mobile robot on the visual map coordinate is accurately determined by combining the innermost closed region comprising the matched feature point information.
As another possible implementation, the third position range of the mobile robot in the visual map coordinate system may be determined from the matched feature point information as follows: the largest closed region that the matched feature point information can form in the visual map coordinate system is determined, and the third position range of the mobile robot in the visual map coordinate system is determined according to this largest closed region.
In this embodiment, the largest closed region may be used as the third position range of the mobile robot in the visual map coordinate system.
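Both region constructions start from the map-frame coordinates of the matched feature points. As a simplified stand-in for either closed region (the patent does not fix a concrete geometry), an axis-aligned bounding box of the matched points can serve as the third position range:

```python
def bounding_range(points):
    """Axis-aligned bounding box of matched feature-point coordinates,
    returned as ((min_x, min_y), (max_x, max_y))."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Hypothetical visual-map coordinates of three matched feature points:
matched = [(1.0, 1.0), (3.0, 2.0), (2.0, 5.0)]
third_range = bounding_range(matched)
# third_range == ((1.0, 1.0), (3.0, 5.0))
```

A convex hull of the matched points would give a tighter region than the bounding box, at the cost of a slightly more involved computation.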
On the basis of any of the above embodiments, after the center point of the fourth position range has been set as the initial point of the mobile robot, the mobile robot can be positioned and navigated while moving from the initial point by combining data collected by the laser sensor, the vision acquisition module, and the UWB module in the mobile robot.
In order to implement the foregoing embodiments, the present application further provides a device for determining an initial point of a mobile robot.
Fig. 3 is a schematic structural diagram of an apparatus for determining an initial point of a mobile robot according to a second embodiment of the present disclosure.
As shown in fig. 3, the apparatus 300 for determining an initial point of a mobile robot may include a first obtaining module 301, a first determining module 302, a second determining module 303, a second obtaining module 304, a visual repositioning module 305, and an initial point setting module 306, wherein:
the first obtaining module 301 is configured to, in response to a start request of the mobile robot, obtain ultra wideband UWB positioning data of the mobile robot, and obtain first image information around the mobile robot, where the first image information is acquired by the mobile robot.
A first determining module 302, configured to determine a first position range of the mobile robot in a world coordinate system according to the UWB positioning data.
And a second determining module 303, configured to determine a second position range of the mobile robot in the visual map coordinate system according to the first position range.
A second obtaining module 304, configured to obtain second image information in a second position range according to the visual map corresponding to the visual map coordinate system.
The visual repositioning module 305 is configured to perform visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot in the visual map coordinate system.
The initial point setting module 306 is configured to determine a fourth position range of the third position range in the laser map coordinate system of the mobile robot, and to set the center point of the fourth position range as the initial point of the mobile robot.
It should be noted that the explanation of the embodiments of the method for determining the initial point of a mobile robot also applies to this embodiment, and details are not repeated here.
After the robot is started, the apparatus for determining the initial point of a mobile robot determines a first position range of the robot in a world coordinate system from the UWB positioning data of the mobile robot; it then determines a second position range of the mobile robot in a visual map coordinate system from the first position range; performs visual repositioning using second image information within the second position range and first image information around the mobile robot to obtain a third position range of the mobile robot in the visual map coordinate system; determines a fourth position range of the third position range in the laser map coordinate system of the mobile robot; and sets the center point of the fourth position range as the initial point of the mobile robot. Thus, after the robot is started, the initial point is set automatically by combining UWB positioning with visual repositioning that assists laser positioning, the need for manual intervention is reduced, and the mobile robot can conveniently be started at any position.
In one embodiment of the present application, as shown in fig. 4, the apparatus 400 for determining an initiation point of a mobile robot may include: a first obtaining module 401, a first determining module 402, a second determining module 403, a second obtaining module 404, a visual repositioning module 405, and an initial point setting module 406, wherein the visual repositioning module 405 includes: a first feature extraction unit 4051, a second feature extraction unit 4052, a matching unit 4053, and a determination unit 4054.
The detailed description of the first obtaining module 401, the first determining module 402, the second determining module 403, the second obtaining module 404, the visual repositioning module 405 and the initial point setting module 406 can refer to the descriptions of the first obtaining module 301, the first determining module 302, the second determining module 303, the second obtaining module 304, the visual repositioning module 305 and the initial point setting module 306 in the embodiment shown in fig. 3, and will not be described here.
In an embodiment of the application, the first feature extraction unit 4051 is configured to perform feature point extraction on the first image information to obtain first feature point information of the first image information.
The second feature extraction unit 4052 is configured to perform feature point extraction on the second image information to obtain second feature point information of the second image information.
The matching unit 4053 is configured to perform feature point matching on the first feature point information and the second feature point information.
The determining unit 4054 is configured to determine a third position range of the mobile robot in the visual map coordinate system according to the matched feature point information.
In an embodiment of the application, the determining unit 4054 is specifically configured to: obtain the innermost closed region containing the matched feature point information in the visual map coordinate system; and determine the third position range of the mobile robot in the visual map coordinate system according to the innermost closed region.
In an embodiment of the application, as shown in fig. 4, the second determining module 403 includes:
a first obtaining unit 4031 for obtaining a first coordinate transformation relationship between the world coordinate system and the visual map coordinate system; and
a second converting unit 4032, configured to convert the first position range according to the first coordinate conversion relationship, so as to obtain a second position range of the mobile robot in the visual map coordinate system.
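As a sketch of what the first coordinate conversion might look like, assuming the calibration between the world frame and the visual-map frame is a known 2-D rigid transform (translation plus rotation, an assumption not stated in the patent), each corner of the first position range can be mapped into the visual-map frame:

```python
import math

def apply_transform(x, y, tx, ty, theta):
    """Map a point from the world frame into the visual-map frame using a
    2-D rigid transform: rotation by theta, then translation by (tx, ty)."""
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

def convert_range(corners, tx, ty, theta):
    """Convert each corner of a world-frame position range."""
    return [apply_transform(x, y, tx, ty, theta) for x, y in corners]

# Identity rotation with a 1 m offset: the range simply shifts.
print(convert_range([(0.0, 0.0), (2.0, 2.0)], 1.0, 1.0, 0.0))
# -> [(1.0, 1.0), (3.0, 3.0)]
```

Transforming the corners and re-bounding the result is sufficient here because a rigid transform of an axis-aligned box stays a bounded region; with a nonzero rotation the converted corners would be re-bounded the same way.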
In an embodiment of the application, the initial point setting module is specifically configured to: acquire a second coordinate conversion relationship between the visual map coordinate system and the laser map coordinate system; and convert the third position range according to the second coordinate conversion relationship to obtain the fourth position range of the mobile robot in the laser map coordinate system.
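The second conversion and the final step of the method, setting the centre of the fourth position range as the initial point, can be sketched together. The `to_laser` calibration function below is a hypothetical pure translation standing in for the visual-to-laser coordinate conversion relationship:

```python
def set_initial_point(third_range, to_laser):
    """Convert a visual-map position range into the laser-map frame via the
    point-mapping function `to_laser`, then return the centre of the
    resulting (fourth) range as the initial point."""
    corners = [to_laser(x, y) for x, y in third_range]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

# A pure-translation calibration of (+0.5, -0.5) as an illustration.
shift = lambda x, y: (x + 0.5, y - 0.5)
print(set_initial_point([(1.0, 1.0), (3.0, 3.0)], shift))   # -> (2.5, 1.5)
```

The centre point is then handed to the laser-based localizer as its starting pose estimate, which is what removes the need for a manually specified initial point.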
It should be noted that the foregoing explanation of the embodiment of the method for determining the initial point of the mobile robot is also applicable to the apparatus for determining the initial point of the mobile robot in this embodiment, and is not repeated here.
According to embodiments of the application, the application further provides a mobile robot and a readable storage medium.
Fig. 5 is a block diagram of a mobile robot for performing the method for determining an initial point of a mobile robot according to an embodiment of the present application. The components shown here, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 5, the mobile robot includes: one or more processors 501, a memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the mobile robot, including instructions stored in or on the memory to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple mobile robots may be connected, with each device providing some of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 501 is taken as an example in fig. 5.
The memory 502 is a non-transitory computer-readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for determining an initial point of a mobile robot provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method for determining an initial point of a mobile robot provided in the present application.
The memory 502, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the method for determining the initial point of the mobile robot in the embodiment of the present application (for example, the first obtaining module 301, the first determining module 302, the second determining module 303, the second obtaining module 304, the visual repositioning module 305, and the initial point setting module 306 shown in fig. 3). The processor 501 implements the method for determining the initial point of the mobile robot in the above-described method embodiments by running the non-transitory software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to use of the mobile robot, and the like. Further, the memory 502 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 502 may optionally include memory remotely located with respect to the processor 501, and such remote memory may be connected to the mobile robot via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The mobile robot may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503, and the output device 504 may be connected by a bus or other means; connection by a bus is taken as an example in fig. 5.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the mobile robot, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, and the like. The output device 504 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
According to the technical solution of the embodiment of the application, after the mobile robot is started, a first position range of the robot in a world coordinate system is determined by combining UWB positioning data of the mobile robot; a second position range of the mobile robot in a visual map coordinate system is then determined according to the first position range; visual repositioning is performed by combining second image information in the second position range with first image information around the mobile robot, so as to obtain a third position range of the mobile robot in the visual map coordinate system; a fourth position range of the third position range in a laser map coordinate system of the mobile robot is determined; and a center point of the fourth position range is set as the initial point of the mobile robot. Therefore, after start-up, the initial point is set automatically for the mobile robot by combining UWB positioning with visual repositioning to assist laser positioning, which reduces the need for manual intervention and allows the mobile robot to be started conveniently at any position.
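The first stage of this pipeline, deriving a coarse position range from UWB ranging against base stations at known world coordinates, can be pictured with a simple sketch. This is an illustration under assumptions, not the patent's method: real systems typically solve a least-squares trilateration, whereas the sketch below just intersects the bounding boxes of each anchor's range circle, and the anchor layout and readings are hypothetical:

```python
def uwb_position_range(anchors, ranges, margin=0.3):
    """Coarse first position range from UWB ranging. Each anchor at known
    world coordinates constrains the robot to a circle of radius `ranges[i]`;
    intersecting the circles' bounding boxes (padded by a measurement
    margin) yields a rectangular region ((lo_x, lo_y), (hi_x, hi_y))."""
    boxes = [((ax - r - margin, ay - r - margin),
              (ax + r + margin, ay + r + margin))
             for (ax, ay), r in zip(anchors, ranges)]
    lo = (max(b[0][0] for b in boxes), max(b[0][1] for b in boxes))
    hi = (min(b[1][0] for b in boxes), min(b[1][1] for b in boxes))
    return lo, hi

# Three hypothetical base stations and range readings (metres).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
readings = [7.0, 7.0, 7.0]
print(uwb_position_range(anchors, readings, margin=0.0))
# -> ((3.0, 3.0), (7.0, 7.0))
```

The resulting rectangle plays the role of the first position range: it is deliberately loose, since the subsequent visual repositioning and laser localization stages are what refine it to a usable initial point.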
It should be understood that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A method for determining an initial point of a mobile robot, comprising:
in response to a start-up request of a mobile robot, acquiring ultra-wideband (UWB) positioning data of the mobile robot, and acquiring first image information around the mobile robot, wherein the first image information is collected by the mobile robot;
determining a first position range of the mobile robot in a world coordinate system according to the UWB positioning data, wherein position information, in the world coordinate system, of each UWB base station arranged in a target area where the mobile robot is located is acquired, and the first position range of the mobile robot in the world coordinate system is determined by combining the position information of each UWB base station in the world coordinate system with the UWB positioning data;
determining a second position range of the mobile robot in a visual map coordinate system according to the first position range, wherein the visual map coordinate system is a coordinate system established based on image information of the target area where the mobile robot is located, acquired by a visual acquisition module in the mobile robot;
acquiring second image information in the second position range according to the visual map corresponding to the visual map coordinate system;
performing visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot under the visual map coordinate system; and
determining a fourth position range of the third position range under a laser map coordinate system of the mobile robot, and setting a center point of the fourth position range as an initial point of the mobile robot, wherein the laser map coordinate system is a coordinate system established based on environment data of a target area acquired by a laser sensor in the mobile robot, and the target area is an area where the mobile robot is located;
the performing the visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot under the visual map coordinate system includes:
performing feature point extraction on the first image information to obtain first feature point information of the first image information;
performing feature point extraction on the second image information to obtain second feature point information of the second image information;
performing feature point matching on the first feature point information and the second feature point information;
acquiring an innermost closed region containing the matched feature point information in the visual map coordinate system; and
determining the third position range of the mobile robot in the visual map coordinate system according to the innermost closed region.
2. The method of claim 1, wherein said determining a second range of positions of the mobile robot in a visual map coordinate system from the first range of positions comprises:
acquiring a first coordinate conversion relation between the world coordinate system and the visual map coordinate system; and
converting the first position range according to the first coordinate conversion relation to obtain the second position range of the mobile robot in the visual map coordinate system.
3. The method of claim 1, wherein determining the fourth position range of the third position range in the laser map coordinate system of the mobile robot comprises:
acquiring a second coordinate conversion relation between the visual map coordinate system and the laser map coordinate system; and
converting the third position range according to the second coordinate conversion relation to obtain the fourth position range of the mobile robot in the laser map coordinate system.
4. An apparatus for determining an initial point of a mobile robot, comprising:
the mobile robot system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for responding to a starting request of a mobile robot, acquiring ultra-wideband UWB positioning data of the mobile robot and acquiring first image information around the mobile robot, which is acquired by the mobile robot;
a first determining module, configured to determine a first position range of the mobile robot in a world coordinate system according to the UWB positioning data, wherein position information, in the world coordinate system, of each UWB base station arranged in a target area where the mobile robot is located is acquired, and the first position range of the mobile robot in the world coordinate system is determined by combining the position information of each UWB base station in the world coordinate system with the UWB positioning data;
a second determining module, configured to determine a second position range of the mobile robot in a visual map coordinate system according to the first position range, wherein the visual map coordinate system is a coordinate system established based on image information of the target area where the mobile robot is located, acquired by a visual acquisition module in the mobile robot;
a second acquisition module, configured to acquire second image information in the second position range according to the visual map corresponding to the visual map coordinate system;
a visual repositioning module, configured to perform visual repositioning according to the first image information and the second image information to obtain a third position range of the mobile robot in the visual map coordinate system; and
an initial point setting module, configured to determine a fourth position range of the third position range in a laser map coordinate system of the mobile robot, and set a center point of the fourth position range as an initial point of the mobile robot, where the laser map coordinate system is a coordinate system established based on environment data of a target area acquired by a laser sensor in the mobile robot, and the target area is an area where the mobile robot is located;
wherein the visual repositioning module comprises:
a first feature extraction unit, configured to perform feature point extraction on the first image information to obtain first feature point information of the first image information;
a second feature extraction unit, configured to perform feature point extraction on the second image information to obtain second feature point information of the second image information;
a matching unit, configured to perform feature point matching on the first feature point information and the second feature point information; and
a determining unit, configured to acquire an innermost closed region containing the matched feature point information in the visual map coordinate system, and determine the third position range of the mobile robot in the visual map coordinate system according to the innermost closed region.
5. The apparatus of claim 4, wherein the second determining module comprises:
a first acquisition unit, configured to acquire a first coordinate conversion relation between the world coordinate system and the visual map coordinate system; and
a second conversion unit, configured to convert the first position range according to the first coordinate conversion relation to obtain the second position range of the mobile robot in the visual map coordinate system.
6. The apparatus of claim 4, wherein the initial point setting module is specifically configured to:
acquiring a second coordinate conversion relation between the visual map coordinate system and the laser map coordinate system; and
converting the third position range according to the second coordinate conversion relation to obtain the fourth position range of the mobile robot in the laser map coordinate system.
7. A mobile robot, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3.
8. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-3.
CN202011554388.1A 2020-12-24 2020-12-24 Method, apparatus, and storage medium for determining initial point of mobile robot Active CN112697151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011554388.1A CN112697151B (en) 2020-12-24 2020-12-24 Method, apparatus, and storage medium for determining initial point of mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011554388.1A CN112697151B (en) 2020-12-24 2020-12-24 Method, apparatus, and storage medium for determining initial point of mobile robot

Publications (2)

Publication Number Publication Date
CN112697151A CN112697151A (en) 2021-04-23
CN112697151B true CN112697151B (en) 2023-02-21

Family

ID=75510031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011554388.1A Active CN112697151B (en) 2020-12-24 2020-12-24 Method, apparatus, and storage medium for determining initial point of mobile robot

Country Status (1)

Country Link
CN (1) CN112697151B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379850B (en) * 2021-06-30 2024-01-30 深圳银星智能集团股份有限公司 Mobile robot control method, device, mobile robot and storage medium
CN113960999A (en) * 2021-07-30 2022-01-21 珠海一微半导体股份有限公司 Mobile robot repositioning method, system and chip

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107677279A (en) * 2017-09-26 2018-02-09 上海思岚科技有限公司 It is a kind of to position the method and system for building figure
CN110377015A (en) * 2018-04-13 2019-10-25 北京三快在线科技有限公司 Robot localization method and robotic positioning device
CN110727265A (en) * 2018-06-28 2020-01-24 深圳市优必选科技有限公司 Robot repositioning method and device and storage device
CN111121754A (en) * 2019-12-31 2020-05-08 深圳市优必选科技股份有限公司 Mobile robot positioning navigation method and device, mobile robot and storage medium
KR102150588B1 (en) * 2019-02-28 2020-09-01 주식회사 마루시스 Apparatus for Golf Ball Collection using Autonomous Driving Robot


Also Published As

Publication number Publication date
CN112697151A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN107678647B (en) Virtual shooting subject control method and device, electronic equipment and storage medium
CN106062763B (en) Method and device for displaying application and picture and electronic equipment
CN112697151B (en) Method, apparatus, and storage medium for determining initial point of mobile robot
CN111694429A (en) Virtual object driving method and device, electronic equipment and readable storage
US11854237B2 (en) Human body identification method, electronic device and storage medium
CN111708366B (en) Robot, and method, apparatus and computer-readable storage medium for controlling movement of robot
CN111507355B (en) Character recognition method, device, equipment and storage medium
CN112241764A (en) Image recognition method and device, electronic equipment and storage medium
JP7320016B2 (en) Vehicle call command transmission method, device and electronic device
CN111695519B (en) Method, device, equipment and storage medium for positioning key point
CN109992111B (en) Augmented reality extension method and electronic device
CN111645521A (en) Control method and device for intelligent rearview mirror, electronic equipment and storage medium
CN111539347B (en) Method and device for detecting target
CN110866504B (en) Method, device and equipment for acquiring annotation data
CN104656878A (en) Method, device and system for recognizing gesture
CN113918015B (en) Interaction method and device for augmented reality
WO2021003692A1 (en) Algorithm configuration method, device, system, and movable platform
CN111166239A (en) Map processing method, device and equipment for cleaning robot and storage medium
CN112527110A (en) Non-contact interaction method and device, electronic equipment and medium
CN111814651A (en) Method, device and equipment for generating lane line
CN112102417A (en) Method and device for determining world coordinates and external reference calibration method for vehicle-road cooperative roadside camera
CN110837766B (en) Gesture recognition method, gesture processing method and device
CN111553283B (en) Method and device for generating model
CN112053280B (en) Panoramic map display method, device, equipment and storage medium
CN111966432B (en) Verification code processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant