CN113256715B - Positioning method and device for robot - Google Patents


Info

Publication number
CN113256715B
CN113256715B (application CN202010089211.2A)
Authority
CN
China
Prior art keywords
frame image
image
robot
view mark
current frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010089211.2A
Other languages
Chinese (zh)
Other versions
CN113256715A (en)
Inventor
曹正江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202010089211.2A
Publication of CN113256715A
Application granted
Publication of CN113256715B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the disclosure disclose a positioning method and a positioning device for a robot. One embodiment of the method comprises the following steps: in response to receiving a current frame image acquired by the robot, acquiring a reference frame image matched with the current frame image from a pre-established target map, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and the reference pose at which the robot acquired the reference frame image; determining a target view mark in the current frame image, and performing edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark; and determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image. In this embodiment, the current pose of the robot is determined using the edge image of the view mark in the image, which improves the accuracy of robot positioning.

Description

Positioning method and device for robot
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a positioning method and device of a robot.
Background
Visual SLAM is one of the current hotspots in SLAM research. Using visual SLAM (Simultaneous Localization And Mapping), a robot can build a map of its environment and localize itself within that map at the same time. In visual SLAM, the robot recognizes places it has already visited by comparing the image of the current scene with the images in the established map through visual loop detection. The robot is then positioned by means such as feature point matching, which eliminates the accumulated errors generated during localization and mapping.
In the related art, when illumination is constant and the image acquisition scene is unchanged or changes little, a map established by visual SLAM can be reused and the positioning result is accurate. However, in a dynamic scene with large illumination variation, the accuracy of the robot positioning result is poor.
Disclosure of Invention
The embodiment of the disclosure provides a positioning method and device of a robot.
In a first aspect, embodiments of the present disclosure provide a positioning method for a robot, the method including: in response to receiving a current frame image acquired by the robot, acquiring a reference frame image matched with the current frame image from a pre-established target map, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and the reference pose at which the robot acquired the reference frame image; determining a target view mark in the current frame image, and carrying out edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image comprises the target view mark; and determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In some embodiments, in response to receiving a current frame image acquired by the robot, acquiring a reference frame image matching the current frame image from a pre-established target map includes: in response to receiving the current frame image acquired by the robot, performing scene recognition on the current frame image; and determining, based on the scene recognition result, a reference frame image matching the current frame image from the pre-established target map.
In some embodiments, determining the target view mark in the current frame image includes: identifying at least one view mark in the current frame image; and for each view mark of the identified at least one view mark, in response to determining that the view mark is included in the reference frame image, determining the view mark to be a target view mark.
In some embodiments, determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image includes: determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image; and fitting the extracted edge image of the target view mark in the current frame image with the determined theoretical edge image by adopting a nonlinear least square method to obtain the current pose of the robot.
In some embodiments, the method further comprises: determining the sum of squared Euclidean distances between the extracted edge image of the target view mark in the current frame image and the theoretical edge image as the objective function of the nonlinear least square method; and determining the minimum value of the objective function so as to fit the extracted edge image of the target view mark in the current frame image and the theoretical edge image.
In some embodiments, the target map is established by: acquiring an environment image acquired by a robot and the pose of the robot; extracting edges of the environment image to obtain edge information of view marks in the environment image, wherein the edge information comprises pixel depth information and pixel coordinate information; processing the collected environment image by adopting a SLAM algorithm, and generating and storing a real-time map; and setting identification information for the view mark in the real-time map to obtain a target map, wherein the identification information comprises edge information and the pose of the robot.
In a second aspect, embodiments of the present disclosure provide a positioning device for a robot, the device including: an acquisition unit configured to acquire, in response to receiving a current frame image acquired by the robot, a reference frame image matched with the current frame image from a pre-established target map, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and the reference pose at which the robot acquired the reference frame image; an edge extraction unit configured to determine a target view mark in the current frame image, and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image includes the target view mark; and a determining unit configured to determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In some embodiments, the acquisition unit is further configured to: in response to receiving the current frame image acquired by the robot, perform scene recognition on the current frame image; and determine, based on the scene recognition result, a reference frame image matching the current frame image from the pre-established target map.
In some embodiments, the edge extraction unit is further configured to identify at least one view mark in the current frame image; and, for each view mark of the identified at least one view mark, in response to determining that the view mark is included in the reference frame image, determine the view mark to be a target view mark.
In some embodiments, the determining unit is further configured to: determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image; and fitting the extracted edge image of the target view mark in the current frame image with the determined theoretical edge image by adopting a nonlinear least square method to obtain the current pose of the robot.
In some embodiments, the determining unit is further configured to: determining the sum of squared Euclidean distances between the extracted edge image of the target view mark in the current frame image and the theoretical edge image as the objective function of the nonlinear least square method; and determining the minimum value of the objective function so as to fit the extracted edge image of the target view mark in the current frame image and the theoretical edge image.
In some embodiments, the target map is established by: acquiring an environment image acquired by a robot and the pose of the robot; extracting edges of the environment image to obtain edge information of view marks in the environment image, wherein the edge information comprises pixel depth information and pixel coordinate information; processing the collected environment image by adopting a SLAM algorithm, and generating and storing a real-time map; and setting identification information for the view mark in the real-time map to obtain a target map, wherein the identification information comprises edge information and the pose of the robot.
According to the positioning method and device for a robot provided by the embodiments of the present disclosure, in response to receiving the current frame image acquired by the robot, a reference frame image matched with the current frame image can be acquired from the pre-established target map; a target view mark is then determined in the current frame image, and edge extraction is performed on the target view mark in the current frame image to obtain the extracted edge image of the target view mark; finally, the current pose of the robot can be determined based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image. Because the current pose of the robot is determined using the edge image of the view mark in the image, the positioning of the robot is not affected by illumination and scene changes, and the accuracy of robot positioning is improved.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method of positioning a robot according to the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of a method of positioning a robot according to the present disclosure;
fig. 4 is a schematic view of an application scenario of a positioning method of a robot according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural view of one embodiment of a positioning device of a robot according to the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 of a positioning method of a robot or a positioning device of a robot to which embodiments of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include a robot 101, a network 102, and a server 103. The network 102 is a medium used to provide a communication link between the robot 101 and the server 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The robot 101 may interact with the server 103 through the network 102 to receive or send messages or the like. The robot 101 may be a robot applied in various fields, such as a floor sweeping robot, an AGV, etc., and the robot 101 may be provided with an image acquisition device, through which the robot can capture images during movement.
The server 103 may be a server providing various services, for example, a background server analyzing data such as a current frame image acquired by the robot 101, and the background server may feed back a processing result (for example, a determined current pose of the robot) to the robot.
It should be noted that, the positioning method of the robot provided by the embodiment of the present disclosure may be performed by the server 103. Accordingly, the positioning means of the robot may be provided in the server 103.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of robots, networks and servers in fig. 1 is merely illustrative. There may be any number of robots, networks, and servers as desired for implementation.
It should be noted that the robot 101 may also locate itself directly. After the robot 101 acquires the current frame image from its image acquisition device, it may acquire a reference frame image matching the current frame image from a pre-established target map, determine a target view mark in the current frame image, perform edge extraction on the view mark in the current frame image to obtain an extracted edge image of the target view mark, and determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image. In this case, the robot positioning method may be performed by the robot 101, and the robot positioning device may be provided in the robot 101. At this point, the server 103 and the network 102 may be absent from the exemplary system architecture 100.
With continued reference to fig. 2, a flow 200 of one embodiment of a method of positioning a robot according to the present disclosure is shown. The positioning method of the robot comprises the following steps:
Step 201, in response to receiving the current frame image acquired by the robot, acquiring a reference frame image matched with the current frame image from a pre-established target map.
In this embodiment, the execution body of the positioning method of the robot (for example, the server shown in fig. 1) may receive the current frame image sent by the robot through a wired connection manner or a wireless connection manner. Here, the current frame image may be an image obtained by performing image acquisition on the current position of the robot. After receiving the current frame image acquired by the robot, the executing body may match the acquired current frame image with each frame image in the pre-established target map in various manners, acquire an image matched with the current frame image therefrom, and determine the acquired image as a reference frame image of the current frame image. As an example, the above-described execution subject may calculate the distance between the current frame image and each frame image in the target map, and determine the image in the target map having the smallest distance from the current frame image as the reference frame image.
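Purely as an illustration of the "smallest distance" example above, the following Python sketch retrieves the map frame closest to the current frame. The map layout, the toy global descriptor, and the distance metric are assumptions made for this sketch, not choices fixed by the patent.

```python
import numpy as np

def global_descriptor(image_gray):
    """A deliberately simple global descriptor: a normalized intensity
    histogram. Any more discriminative image descriptor could be used."""
    hist, _ = np.histogram(image_gray, bins=64, range=(0, 256))
    hist = hist.astype(np.float64)
    return hist / (hist.sum() + 1e-12)

def find_reference_frame(current_gray, target_map):
    """target_map is assumed to be a list of dicts, each holding a stored
    frame ('image'), its view-mark identification info ('marks') and the
    reference pose ('pose'). Returns the entry whose descriptor distance
    to the current frame image is smallest."""
    cur = global_descriptor(current_gray)
    best_entry, best_dist = None, np.inf
    for entry in target_map:
        dist = np.linalg.norm(cur - global_descriptor(entry["image"]))
        if dist < best_dist:
            best_entry, best_dist = entry, dist
    return best_entry
```

In practice the scene-recognition variant described below would typically replace this toy descriptor comparison.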
The reference frame image may include at least one view mark and the identification information of the view mark. Here, a view mark may be an object with a distinctive marking contained in the image, such as a billboard or a traffic sign. The identification information of the view mark may include the edge information and the reference pose of the view mark. The edge information may include the image coordinate information of the edge image of the view mark in the reference frame image and the depth information of the edge of the view mark. The reference pose may be the pose of the robot at the time the reference frame image was acquired.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other wireless connection means now known or developed in the future.
In some optional implementations of this embodiment, the executing body may acquire the reference frame image matching the current frame image from the target map as follows: in response to receiving the current frame image acquired by the robot, performing scene recognition on the current frame image; and determining, based on the scene recognition result, a reference frame image matching the current frame image from the pre-established target map. In this implementation, the executing body may perform scene recognition on the current frame image in various ways, so that the scene of the current frame image can be determined. As an example, the executing body may use a bag-of-words model to perform scene recognition on the current frame image. It will be appreciated that the executing body may also perform scene recognition by other methods, which are not uniquely limited herein. The executing body may then search the pre-established target map for an image containing the identified scene, and that image is the reference frame image. Determining the reference frame image of the current frame image through scene recognition can improve the accuracy of the acquired reference frame image.
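As a hedged sketch of this scene-recognition variant, a minimal bag-of-words retrieval could look like the following; the ORB features, the vocabulary size, and the cosine scoring are assumptions rather than the patent's prescribed choices.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create(nfeatures=500)

def train_vocabulary(training_descriptors, num_words=200):
    """Build a visual vocabulary from ORB descriptors stacked over the map images."""
    return KMeans(n_clusters=num_words, n_init=10).fit(training_descriptors.astype(np.float32))

def bow_histogram(image_gray, vocabulary):
    """Quantize ORB descriptors against the vocabulary and return an
    L2-normalized bag-of-words histogram."""
    _, descriptors = orb.detectAndCompute(image_gray, None)
    if descriptors is None:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)

def recognize_scene(current_gray, map_frames, vocabulary):
    """map_frames: list of (frame_id, stored_bow_histogram).
    Returns the id of the map frame with the highest cosine similarity."""
    query = bow_histogram(current_gray, vocabulary)
    scores = [(frame_id, float(np.dot(query, hist))) for frame_id, hist in map_frames]
    return max(scores, key=lambda s: s[1])[0]
```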
In some optional implementations of this embodiment, the robot uses SLAM technology to concurrently locate itself and build a map of the environment in which it is located, and this map is the target map. Therefore, when the robot is somewhere in this environment and collects the current frame image, the reference frame image can be matched in the target map. Specifically, the target map may be obtained as follows: acquiring an environment image collected by the robot and the pose of the robot; performing edge extraction on the environment image to obtain edge information of the view marks in the environment image, wherein the edge information may include depth information and pixel coordinates; processing the collected environment images with a SLAM algorithm to generate and store a real-time map; and finally, setting identification information for the view marks in the real-time map to obtain the target map, wherein the identification information includes the edge information and the pose of the robot. It will be appreciated that the target map may also be a map created by other means, which is not limited here.
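One possible, purely illustrative layout for a single entry of such a target map, combining a SLAM keyframe with the identification information described above, is sketched below; the class and field names are assumptions introduced for this sketch.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ViewMarkInfo:
    """Identification information stored for one view mark."""
    edge_pixels: np.ndarray      # (N, 2) pixel coordinates of the edge image
    edge_depths: np.ndarray      # (N,) depth of each edge pixel, in meters
    label: str = ""              # e.g. an assumed identifier such as "billboard_03"

@dataclass
class MapKeyframe:
    """One keyframe of the target map produced while running SLAM."""
    image: np.ndarray                                  # stored environment image
    robot_pose: np.ndarray                             # 4x4 homogeneous reference pose of the robot
    view_marks: list = field(default_factory=list)     # list of ViewMarkInfo entries
```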
Step 202, determining a target view mark in the current frame image, and performing edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark.
In this embodiment, based on the current frame image received in step 201, the execution subject (e.g., the server shown in fig. 1) may determine the target view mark in the current frame image in various ways. For example, the target view mark may be determined by specifying one view mark among the view marks in the current frame image. The reference frame image also includes the target view mark. The execution subject may then perform edge extraction on the target view mark in the current frame image, so that an edge image of the target view mark in the current frame image can be obtained, and the obtained edge image is determined as the extracted edge image. As an example, the execution subject may perform edge detection on the current frame image by using a Sobel operator or the like, so that the boundary line between the target view mark and the background can be determined in the current frame image; the determined boundary line is the edge of the target view mark.
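A minimal sketch of the Sobel-based edge extraction mentioned above, assuming the target view mark has already been localized by a bounding box; the threshold and the region handling are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_mark_edges(image_gray, mark_bbox):
    """Run a Sobel-based edge detector inside the bounding box of the
    detected target view mark and return the edge pixel coordinates
    in full-image coordinates."""
    x, y, w, h = mark_bbox
    roi = image_gray[y:y + h, x:x + w]
    gx = cv2.Sobel(roi, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    edge_mask = magnitude > 0.3 * magnitude.max()      # illustrative threshold
    ys, xs = np.nonzero(edge_mask)
    return np.stack([xs + x, ys + y], axis=1)           # (N, 2) edge pixel coordinates
```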
It will be appreciated that the executing body may also perform edge extraction on all view marks in the current frame image, and then determine the edge of the target view mark from the edges of all view marks extracted in the current frame image. This is not uniquely limited herein.
In some optional implementations of this embodiment, the executing entity may determine the target view mark as follows: identifying at least one view mark in the current frame image; and for each view mark of the identified at least one view mark, in response to determining that the view mark is included in the reference frame image, determining the view mark to be a target view mark. With this implementation, the execution body can acquire all view marks commonly contained in the current frame image and the reference frame image, and each view mark so determined is a target view mark. Since this implementation determines all target view marks in the current frame image, and the number of target view marks is therefore larger, the accuracy of the determined current pose of the robot can be further improved.
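For illustration only, selecting the target view marks shared by both frames could be reduced to an intersection over detected mark identifiers; the identifiers themselves are an assumption of this sketch.

```python
def select_target_marks(current_mark_ids, reference_mark_ids):
    """Keep every view mark detected in the current frame image that is
    also recorded in the reference frame image; each kept mark is a
    target view mark."""
    reference_set = set(reference_mark_ids)
    return [mark_id for mark_id in current_mark_ids if mark_id in reference_set]
```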
Step 203, determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In this embodiment, based on the reference frame image obtained in step 201 and the extracted edge image of the target view mark in the current frame image obtained in step 202, the execution subject may process the identification information of the target view mark in the reference frame image (including the edge information and the reference pose) and the extracted edge image of the target view mark in the current frame image in various ways, so as to determine the current pose of the robot that collects the current frame image. As an example, the execution subject may match the edge of the target view mark in the reference frame image with the extracted edge image of the target view mark in the current frame image, and solve the relative pose of the reference pose with respect to the current pose by using the edge information in the identification information of the target view mark in the reference frame image, so that the current pose of the robot can be determined.
According to the method disclosed by the embodiment, the current frame image and the reference frame image are registered by utilizing the edge of the target view mark, so that the current pose of the robot is determined.
According to the method provided by the embodiments of the present disclosure, in response to receiving the current frame image acquired by the robot, a reference frame image matched with the current frame image can be acquired from the pre-established target map; a target view mark is then determined in the current frame image, and edge extraction is performed on the view mark in the current frame image to obtain the extracted edge image of the target view mark; finally, the current pose of the robot can be determined based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image. Because the current pose of the robot is determined using the edge image of the view mark in the image, the robot is not affected by illumination and scene changes, and the accuracy of robot positioning is improved.
With further reference to fig. 3, a flow 300 of yet another embodiment of a method of positioning a robot is shown. The flow 300 of the positioning method of the robot comprises the following steps:
Step 301, in response to receiving the current frame image acquired by the robot, acquiring a reference frame image matched with the current frame image from a pre-established target map.
Step 302, determining a target view mark in the current image frame, and performing edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark.
In this embodiment, the contents of steps 301 to 302 are similar to those of steps 201 to 202 in the above embodiment, and will not be described here again.
Step 303, determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image.
In this embodiment, based on the reference frame image and the current frame image acquired in step 301, the execution subject may determine a theoretical edge image of the target view flag in the current frame image. Specifically, the execution subject may convert and project the edge image of the target view mark in the reference frame image to the current frame image using the identification information of the target view mark in the reference frame image (including the edge information and the reference pose of the target view mark), so that the theoretical edge image of the target view mark in the current frame image may be obtained.
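A minimal sketch of this projection step, assuming a pinhole camera with intrinsic matrix K and a candidate relative pose (R, t) mapping points from the reference camera frame into the current camera frame; these symbols and the pinhole assumption are not fixed by the patent text.

```python
import numpy as np

def project_reference_edges(edge_pixels, edge_depths, K, R, t):
    """Back-project the stored edge pixels of the reference frame using
    their depths, transform them with the candidate relative pose, and
    project them into the current frame, yielding the theoretical edge
    image coordinates."""
    K_inv = np.linalg.inv(K)
    ones = np.ones((edge_pixels.shape[0], 1))
    pix_h = np.hstack([edge_pixels.astype(np.float64), ones])   # homogeneous pixel coordinates
    rays = K_inv @ pix_h.T                                       # 3xN normalized rays
    points_ref = rays * edge_depths                              # 3D points in the reference camera frame
    points_cur = R @ points_ref + t.reshape(3, 1)                # 3D points in the current camera frame
    proj = K @ points_cur
    proj = proj[:2] / proj[2]                                    # perspective division
    return proj.T                                                # (N, 2) theoretical edge pixels
```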
Step 304, fitting the extracted edge image of the target view mark in the current frame image and the determined theoretical edge image by adopting a nonlinear least square method to obtain the current pose of the robot.
In this embodiment, based on the theoretical edge image, in the current frame image, of the target view mark obtained in step 303, the executing body may use a nonlinear least square method to fit the extracted edge image of the target view mark in the current frame image and the determined theoretical edge image, so as to obtain the current pose of the robot. The nonlinear least square method minimizes the distances between the pixel points of the theoretical edge image of the target view mark in the current frame image and the pixels of the extracted edge image, so that the extracted edge image of the target view mark in the current frame image and the determined theoretical edge image are fitted.
In some optional implementations of this embodiment, the executing body may determine the sum of squared Euclidean distances between the extracted edge image of the target view mark in the current frame image and the theoretical edge image as the objective function of the nonlinear least square method. The execution subject may then minimize the objective function, so that the extracted edge image and the theoretical edge image of the target view mark in the current frame image are fitted.
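The fitting described in steps 303 and 304 could be prototyped roughly as follows, reusing project_reference_edges from the sketch after step 303. The pose parameterization (rotation vector plus translation) and the nearest-neighbour residuals are implementation assumptions, and scipy's least_squares is used only as a convenient stand-in for the nonlinear least square solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def fit_relative_pose(extracted_edges, edge_pixels_ref, edge_depths_ref, K):
    """Estimate the relative pose that best aligns the theoretical edge
    image with the extracted edge image by minimizing the sum of squared
    Euclidean distances between projected and extracted edge pixels."""
    tree = cKDTree(extracted_edges.astype(np.float64))

    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        # project_reference_edges is the helper sketched after step 303
        theoretical = project_reference_edges(edge_pixels_ref, edge_depths_ref, K, R, t)
        dists, _ = tree.query(theoretical)   # distance to nearest extracted edge pixel
        return dists                         # least_squares minimizes sum(dists**2)

    result = least_squares(residuals, x0=np.zeros(6))
    R_fit = Rotation.from_rotvec(result.x[:3]).as_matrix()
    t_fit = result.x[3:]
    return R_fit, t_fit
```

Because least_squares minimizes the sum of squared residuals, returning nearest-neighbour distances as residuals matches the objective function described above, namely the sum of squared Euclidean distances between the two edge images.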
With continued reference to fig. 4, fig. 4 is a schematic view of an application scenario of the positioning method of the robot according to the present embodiment. In the application scenario of fig. 4, after the robot acquires the current frame image c_n, the background server may receive the current frame image c_n and acquire, from a pre-established target map, the reference frame image c_r matched with the current frame image c_n. The background server may then determine the target view mark 401 in the current frame image, as shown in fig. 4, and perform edge extraction on the determined target view mark 401 to obtain the extracted edge image 402 of the target view mark, as shown in fig. 4. Using the identification information of the view mark in the reference frame image c_r (including the edge information of the edge image 403 of the target view mark and the reference pose), the background server may determine the theoretical edge image 404 of the target view mark in the current frame image c_n. Finally, the background server may use the nonlinear least square method to fit the extracted edge image 402 and the theoretical edge image 404 of the target view mark in the current frame image c_n, obtaining the rotation matrix R and the translation matrix P that minimize the nonlinear least square objective function, thereby determining the current pose of the robot. Here, the rotation matrix R and the translation matrix P are the rotation matrix and the translation matrix of the reference frame image c_r relative to the current frame image c_n.
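The final composition from the fitted relative pose to the robot's current pose is not written out in the text above; under the usual homogeneous-transform convention (the world frame w and the operator notation are assumptions of this note), it could be expressed as:

$$
T^{c_n}_{c_r} \;=\; \begin{bmatrix} R & P \\ 0 & 1 \end{bmatrix}, \qquad
T^{w}_{c_n} \;=\; T^{w}_{c_r}\,\bigl(T^{c_n}_{c_r}\bigr)^{-1},
$$

where $T^{w}_{c_r}$ is the stored reference pose and $T^{w}_{c_n}$ is the current pose of the robot.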
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the positioning method of the robot in this embodiment estimates the current pose of the robot by fitting, with the nonlinear least square method, the extracted edge image of the target view mark in the current frame image and the determined theoretical edge image. This improves the quality of the fit between the extracted edge image and the theoretical edge image, thereby further improving the accuracy of the obtained current pose of the robot.
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a positioning device of a robot, where the embodiment of the device corresponds to the embodiment of the method shown in fig. 2, and the device may be specifically applied to various electronic devices.
As shown in fig. 5, the positioning device 500 of the robot of the present embodiment includes: an acquisition unit 501, an edge extraction unit 502, and a determination unit 503. The acquisition unit 501 is configured to acquire, in response to receiving a current frame image acquired by the robot, a reference frame image matched with the current frame image from a pre-established target map, wherein the reference frame image includes at least one view mark and identification information of the view mark, and the identification information includes edge information of the view mark and the reference pose at which the robot acquired the reference frame image; the edge extraction unit 502 is configured to determine a target view mark in the current frame image, and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image includes the target view mark; the determination unit 503 is configured to determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In some optional implementations of the present embodiment, the acquisition unit 501 is further configured to: in response to receiving the current frame image acquired by the robot, perform scene recognition on the current frame image; and determine, based on the scene recognition result, a reference frame image matching the current frame image from the pre-established target map.
In some optional implementations of the present embodiment, the edge extraction unit 502 is further configured to identify at least one view mark in the current frame image; and, for each view mark of the identified at least one view mark, in response to determining that the view mark is included in the reference frame image, determine the view mark to be a target view mark.
In some optional implementations of the present embodiment, the determining unit 503 is further configured to: determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image; and fitting the extracted edge image of the target view mark in the current frame image with the determined theoretical edge image by adopting a nonlinear least square method to obtain the current pose of the robot.
In some optional implementations of the present embodiment, the determining unit 503 is further configured to: determining the sum of squared Euclidean distances between the extracted edge image of the target view mark in the current frame image and the theoretical edge image as the objective function of the nonlinear least square method; and determining the minimum value of the objective function so as to fit the extracted edge image of the target view mark in the current frame image and the theoretical edge image.
In some optional implementations of the present embodiment, the target map is established by: acquiring an environment image acquired by a robot and the pose of the robot; extracting edges of the environment image to obtain edge information of view marks in the environment image, wherein the edge information comprises pixel depth information and pixel coordinate information; processing the collected environment image by adopting a SLAM algorithm, and generating and storing a real-time map; and setting identification information for the view mark in the real-time map to obtain a target map, wherein the identification information comprises edge information and the pose of the robot.
The elements recited in the apparatus 500 correspond to the respective steps in the method described with reference to fig. 2. Thus, the operations and features described above with respect to the method are equally applicable to the apparatus 500 and the units contained therein, and are not described in detail herein.

Referring now to fig. 6, a schematic diagram of an electronic device (e.g., the server or robot of fig. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server illustrated in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure in any way.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a current frame image acquired by the robot, acquiring a reference frame image matched with the current frame image from a pre-established target map, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and a reference pose of the reference frame image acquired by the robot; determining a target view mark in a current image frame, and carrying out edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image comprises the target view mark; and determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit, an edge extraction unit, and a determination unit. The names of these units do not constitute a limitation of the unit itself in some cases, and for example, the acquisition unit may also be described as "a unit that acquires a reference frame image matching a current frame image from a pre-established target map in response to receiving the current frame image acquired by the robot".
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, technical solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method of positioning a robot, comprising:
in response to receiving a current frame image acquired by the robot, acquiring a reference frame image matched with the current frame image from a pre-established target map, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and a reference pose of the robot for acquiring the reference frame image;
determining a target view mark in the current frame image, and extracting edges of the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image comprises the target view mark;
determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image, including: and matching the edge information of the target view mark in the reference frame image with the extracted edge image of the target view mark in the current frame image, solving the relative pose of the reference pose relative to the current pose by utilizing the edge information of the target view mark in the reference frame image, and determining the current pose of the robot.
2. The method of claim 1, wherein the acquiring, in response to receiving the current frame image acquired by the robot, a reference frame image matching the current frame image from a pre-established target map comprises:
in response to receiving a current frame image acquired by the robot, performing scene recognition on the current frame image;
and determining a reference frame image matched with the current frame image from the pre-established target map based on the scene recognition result.
3. The method of claim 1, wherein the determining a target view mark in the current frame image comprises:
identifying at least one view flag in the current frame image;
for each view mark of the identified at least one view mark, in response to determining that the view mark is included in the reference frame image, determining the view mark to be the target view mark.
4. The method of claim 1, wherein the determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image comprises:
determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image;
and fitting the extracted edge image of the target view mark in the current frame image with the determined theoretical edge image by adopting a nonlinear least square method to obtain a rotation matrix and a translation matrix which enable the nonlinear least square objective function to be minimum, and determining the current pose of the robot.
5. The method of claim 4, wherein the method further comprises:
determining the square sum of Euclidean distances between the extracted edge image of the target view mark in the current frame image and the theoretical edge image as an objective function of a nonlinear least square method;
and determining the minimum value of the objective function so as to fit the extracted edge image of the objective view mark in the current frame image and the theoretical edge image.
6. The method according to one of claims 1 to 5, wherein the target map is established by:
acquiring an environment image acquired by the robot and a pose of the robot;
extracting edges of the environment image to obtain edge information of view marks in the environment image, wherein the edge information comprises pixel depth information and pixel coordinate information;
processing the collected environment image by adopting a SLAM algorithm, and generating and storing a real-time map;
and setting identification information for the view mark in the real-time map to obtain the target map, wherein the identification information comprises edge information and the pose of the robot.
7. A positioning device of a robot, comprising:
an acquisition unit configured to acquire a reference frame image matched with a current frame image from a pre-established target map in response to receiving the current frame image acquired by the robot, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and a reference pose of the robot for acquiring the reference frame image;
an edge extraction unit configured to determine a target view mark in the current frame image, and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image includes the target view mark;
a determining unit configured to determine a current pose of the robot based on identification information of a target view mark in the reference frame image and an extracted edge image of the target view mark in the current frame image;
wherein the determining unit is further configured to: and matching the edge information of the target view mark in the reference frame image with the extracted edge image of the target view mark in the current frame image, solving the relative pose of the reference pose relative to the current pose by utilizing the edge information of the target view mark in the reference frame image, and determining the current pose of the robot.
8. The apparatus of claim 7, wherein the determination unit is further configured to:
determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image;
and fitting the extracted edge image of the target view mark in the current frame image with the determined theoretical edge image by adopting a nonlinear least square method to obtain a rotation matrix and a translation matrix which enable the nonlinear least square objective function to be minimum, and determining the current pose of the robot.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-6.
CN202010089211.2A 2020-02-12 2020-02-12 Positioning method and device for robot Active CN113256715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010089211.2A CN113256715B (en) 2020-02-12 2020-02-12 Positioning method and device for robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010089211.2A CN113256715B (en) 2020-02-12 2020-02-12 Positioning method and device for robot

Publications (2)

Publication Number Publication Date
CN113256715A CN113256715A (en) 2021-08-13
CN113256715B (en) 2024-04-05

Family

ID=77220181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010089211.2A Active CN113256715B (en) 2020-02-12 2020-02-12 Positioning method and device for robot

Country Status (1)

Country Link
CN (1) CN113256715B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200496A (en) * 2014-09-01 2014-12-10 西北工业大学 High-precision detecting and locating method for rectangular identifiers on basis of least square vertical fitting of adjacent sides
GB201801399D0 (en) * 2017-12-13 2018-03-14 Xihua Univeristy Positioning method and apparatus
CN108283021A (en) * 2015-10-02 2018-07-13 X开发有限责任公司 Locating a robot in an environment using detected edges of a camera image from a camera of the robot and detected edges derived from a three-dimensional model of the environment
CN108665508A (en) * 2018-04-26 2018-10-16 腾讯科技(深圳)有限公司 A kind of positioning and map constructing method, device and storage medium immediately
CN108717710A (en) * 2018-05-18 2018-10-30 京东方科技集团股份有限公司 Localization method, apparatus and system under indoor environment
CN109410281A (en) * 2018-11-05 2019-03-01 珠海格力电器股份有限公司 A kind of position control method, device, storage medium and logistics system
WO2019140745A1 (en) * 2018-01-16 2019-07-25 广东省智能制造研究所 Robot positioning method and device
WO2019223463A1 (en) * 2018-05-22 2019-11-28 腾讯科技(深圳)有限公司 Image processing method and apparatus, storage medium, and computer device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8147503B2 (en) * 2007-09-30 2012-04-03 Intuitive Surgical Operations Inc. Methods of locating and tracking robotic instruments in robotic surgical systems
US9098905B2 (en) * 2010-03-12 2015-08-04 Google Inc. System and method for determining position of a device
JP6561511B2 (en) * 2014-03-20 2019-08-21 株式会社リコー Parallax value deriving device, moving body, robot, parallax value production deriving method, parallax value producing method and program

Also Published As

Publication number Publication date
CN113256715A (en) 2021-08-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant