CN114373010A - Method, device and medium for mapping loop closure correction - Google Patents

Method, device and medium for mapping loop closure correction

Info

Publication number
CN114373010A
CN114373010A (application number CN202111583636.XA)
Authority
CN
China
Prior art keywords
image
initial
point
loop
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111583636.XA
Other languages
Chinese (zh)
Inventor
张站朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority to CN202111583636.XA priority Critical patent/CN114373010A/en
Publication of CN114373010A publication Critical patent/CN114373010A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides a method, a device and a medium for mapping loop closure correction, wherein the method comprises the following steps: acquiring an initial point coordinate and an initial direction angle of a starting point and an initial end point coordinate of a preset end point; acquiring, according to the initial point coordinate and the initial end point coordinate, the corresponding physical distance between the starting point and the preset end point in a real space coordinate system; acquiring an in-loop image, and acquiring a key point coordinate and a key point direction angle of the in-loop image according to a preset positioning strategy and the initial point coordinate and initial direction angle of the starting point; acquiring a pose transformation matrix according to the key point coordinate, the key point direction angle and the initial end point coordinate of the preset end point; and performing pose correction on the in-loop image according to the pose transformation matrix to obtain a loop correction image. The method and the device can exploit the regular geometry of the environment to complete loop closure correction, effectively reduce mapping accuracy errors, and effectively improve positioning accuracy and stability during navigation.

Description

Method, device and medium for mapping loop closure correction
Technical Field
The embodiment of the application relates to the technical field of positioning, and in particular to a mapping loop closure correction method, device and storage medium.
Background
In visual SLAM, pose estimation is typically a recursive process: the pose of the current frame is solved from the pose of the previous frame, so the error in each pose is propagated from frame to frame and accumulates. One effective way to eliminate this accumulated error is loop closure detection. Loop closure detection determines whether the robot has returned to a previously visited position; if a loop is detected, the information is passed to the back end for optimization. A loop closure is a tighter and more accurate constraint than the incremental constraints the back end uses on its own, and such constraints yield a topologically consistent trajectory and map. If closed loops can be detected and optimized, the result becomes more accurate.
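For illustration only (this example is not part of the patent disclosure), the following minimal Python sketch shows why recursive pose estimation accumulates error: each frame-to-frame increment carries a small noise term, and composing many increments lets the position error grow with the length of the trajectory, which is exactly what loop closure is meant to correct.

```python
import math
import random

def compose(pose, delta):
    """Compose a 2D pose (x, y, yaw) with a relative motion (dx, dy, dyaw)."""
    x, y, yaw = pose
    dx, dy, dyaw = delta
    return (x + dx * math.cos(yaw) - dy * math.sin(yaw),
            y + dx * math.sin(yaw) + dy * math.cos(yaw),
            yaw + dyaw)

random.seed(0)
true_pose = (0.0, 0.0, 0.0)
est_pose = (0.0, 0.0, 0.0)
for _ in range(1000):                       # 1000 frames of straight-line motion
    true_step = (0.1, 0.0, 0.0)             # ground-truth increment: 10 cm forward
    noisy_step = (0.1 + random.gauss(0, 0.002), 0.0, random.gauss(0, 0.001))
    true_pose = compose(true_pose, true_step)
    est_pose = compose(est_pose, noisy_step)

err = math.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1])
print(f"accumulated position error after 100 m: {err:.2f} m")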
However, when detecting loops, if every previous frame is matched against the current frame and a sufficiently good match is treated as a loop closure, the amount of computation becomes too large and the matching too slow, and when no initial value is available an extremely large number of matches is required.
Disclosure of Invention
The embodiment of the application provides a method, a device and a medium for mapping loop closure correction, which can exploit the regular geometry of the environment to complete loop closure correction, effectively reduce mapping accuracy errors, and effectively improve positioning accuracy and stability during navigation.
In a first aspect, an embodiment of the present application provides a mapping loop closure correction method, where the method includes:
acquiring an initial point coordinate and an initial direction angle of an initial point and an initial end point coordinate of a preset end point;
acquiring the corresponding physical distance between the starting point and the preset end point in a real space coordinate system according to the initial point coordinate of the starting point and the initial end point coordinate of the preset end point;
acquiring the corresponding image distance of the starting point and the preset end point in an image coordinate system according to the conversion relation between a real space coordinate system and the image coordinate system;
acquiring an in-loop image, and acquiring a key point coordinate and a key point direction angle of the in-loop image according to a preset positioning strategy and an initial point coordinate and an initial direction angle of the initial point;
acquiring a pose transformation matrix according to the key point coordinates, the key point direction angles and the initial end point coordinates of the preset end point;
and performing pose correction on the in-loop image according to the pose transformation matrix to obtain a loop correction image.
In a second aspect, an embodiment of the present application further provides a mapping loop closure correction device, where the mapping loop closure correction device includes: an acquisition unit and a processing unit;
the acquisition unit is used for acquiring an initial point coordinate and an initial direction angle of an initial point and an initial end point coordinate of a preset end point;
the acquiring unit is further configured to acquire a corresponding physical distance between the starting point and the preset end point in a real space coordinate system according to the initial point coordinate of the starting point and the initial end point coordinate of the preset end point;
the processing unit is used for acquiring the corresponding image distance between the starting point and the preset end point in the image coordinate system according to the conversion relation between the real space coordinate system and the image coordinate system;
the processing unit is further configured to obtain an in-loop image, and obtain a key point coordinate and a key point direction angle of the in-loop image according to a preset positioning strategy and an initial point coordinate and an initial direction angle of the initial point;
the processing unit is further used for acquiring a pose transformation matrix according to the key point coordinates, the key point direction angles and the initial end point coordinates of the preset end point;
and the processing unit is also used for carrying out pose correction on the image in the loop according to the pose transformation matrix to obtain a loop correction image.
In a third aspect, an embodiment of the present application further provides a processing device, which includes a processor and a memory, where the memory stores a computer program, and when calling the computer program in the memory the processor executes any step of the mapping loop closure correction method according to the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, the instructions being suitable for being loaded by a processor to perform the steps of any of the mapping loop closure correction methods provided by the embodiments of the present application.
According to the method and device, the regular geometry of the environment can be exploited to complete loop closure correction, mapping accuracy errors can be effectively reduced, and positioning accuracy and stability during navigation can be effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of the mapping loop closure correction method of the present application;
FIG. 2a is a schematic diagram of map 1 in the mapping loop closure correction method of the present application;
FIG. 2b is a schematic diagram of map 2 in the mapping loop closure correction method of the present application;
FIG. 2c is a schematic diagram of map 3 in the mapping loop closure correction method of the present application;
FIG. 2d is a schematic diagram of map 4 in the mapping loop closure correction method of the present application;
FIG. 3 is a schematic structural diagram of the mapping loop closure correction device of the present application;
FIG. 4 is a schematic structural diagram of the processing device of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description that follows, specific embodiments of the present application are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. These steps and operations are therefore referred to at times as being computer-executed; they are computer-implemented operations on data processed by a computer processing unit in the form of electrical signals representing data in a structured form. These operations transform the data or maintain it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of the memory that have particular properties defined by the data format. However, while the principles of the application are described in the foregoing terms, this is not intended as a limitation, and those of ordinary skill in the art will appreciate that various of the steps and operations described below may also be implemented in hardware.
The principles of the present application may be employed in numerous other general-purpose or special-purpose computing, communication environments or configurations. Examples of well known computing systems, environments, and configurations that may be suitable for use with the application include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microcomputer-based systems, mainframe-based computers, and distributed computing environments that include any of the above systems or devices.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions.
Next, the mapping loop closure correction method provided by the present application is described.
Referring to FIG. 1, FIG. 1 shows a schematic flow chart of the mapping loop closure correction method, which is applied to the navigation of robots (such as cloud robots), autonomous driving terminals (such as autonomous smart vehicles), and the like. The method provided by the application specifically comprises the following steps:
101. and acquiring the initial point coordinate and the initial direction angle of the initial point and the initial end point coordinate of the preset end point.
In the embodiment of the present application, taking the case where the method is applied to a robot as an example, when the robot starts to move in a certain space, it may first detect and acquire the initial point coordinate and initial direction angle of the starting point. For example, let point A in FIGS. 2a-2d be the starting point, with initial point coordinate and initial direction angle (x, y, yaw). The robot then moves forward under manual or autonomous navigation control towards a preset end point (for example, target point B in FIGS. 2a-2d).
The robot's own sensors, namely a (single-line or multi-line) laser sensor, the chassis odometer, an IMU (inertial measurement unit) and a 3D depth camera (binocular/TOF/structured light), respectively acquire 2D/3D laser point clouds, 3D poses, visual images and other information about the perceived environment, and the robot fuses this information to build the map. FIGS. 2a-2d show four actual environments for robot mapping and the corresponding mapping results (the buildings have geometric symmetry and similarity). Map 1 in FIG. 2a is taken as the example for describing the technical implementation; map 2, map 3 and map 4 differ only in the geometric shape of the environment, including but not limited to rectangles, five-pointed stars, circles and ellipses, but in all of them the scanned path is not closed on the map.
Because the effective range of the robot's laser radar or depth vision camera is only ten-odd meters or a few tens of meters, a long passage is not necessarily covered in one pass, and continuous scanning combined with the continuous data of sensors such as the robot chassis odometer and the IMU is required to obtain a relatively accurate environment geometry. However, after the robot has travelled a certain distance, the odometer and the IMU accumulate errors that grow with the distance travelled, so that when the scanned map shows the segment from A' to B' in FIGS. 2a-2b (and likewise for map 1/map 2, map 3 and map 4), the robot has actually reached point B', not the exact point B of the real environment.
102. And acquiring the corresponding physical distance between the starting point and the preset end point in a real space coordinate system according to the initial point coordinate of the starting point and the initial end point coordinate of the preset end point.
In the embodiment of the application, the physical distance D1 between the starting point and the preset end point in the real space coordinate system can be obtained by manually measuring the distance between points A and B in the real environment.
103. And acquiring the corresponding image distance between the starting point and the preset end point in the image coordinate system according to the conversion relation between the real space coordinate system and the image coordinate system.
In the embodiment of the application, the image distance D1' between the starting point and the preset end point in the image coordinate system can be obtained by scaling the physical distance D1 with the conversion ratio between the real space coordinate system and the image coordinate system.
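As a minimal sketch of steps 102 and 103 (assuming, for illustration, a grid map with a fixed resolution in metres per pixel; the patent itself does not specify the map format), the physical distance D1 and its image-coordinate counterpart D1' could be computed as follows:

```python
import math

def physical_distance(a_xy, b_xy):
    """Euclidean distance between start point A and preset end point B in the real-world frame (metres)."""
    return math.hypot(b_xy[0] - a_xy[0], b_xy[1] - a_xy[1])

def image_distance(d1_metres, resolution_m_per_px):
    """Convert the physical distance D1 into the image (map) coordinate system using the map resolution."""
    return d1_metres / resolution_m_per_px

A = (0.0, 0.0)        # start point A, measured in the real environment
B = (42.0, 0.0)       # preset end point B, measured in the real environment
D1 = physical_distance(A, B)        # 42.0 m
D1_img = image_distance(D1, 0.05)   # with a 5 cm/px grid map: 840 px
print(D1, D1_img)
```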
104. And acquiring the in-loop image, and acquiring the key point coordinate and the key point direction angle of the in-loop image according to a preset positioning strategy and the initial point coordinate and the initial direction angle of the initial point.
In the embodiment of the present application, in order to implement loop closure correction, it is necessary to acquire the in-loop image recorded by the robot while moving from the starting point (e.g., point A in FIG. 2a) to the actual end point (e.g., point B' in FIG. 2a), and then, based on the acquired initial point coordinate and initial direction angle of the starting point and the preset positioning strategy, calculate the key point coordinate and key point direction angle of the key point B'. In this way, the key point coordinate and direction angle corresponding to the actual end point can be quickly determined from the acquired in-loop image.
In one embodiment, the step 104 includes:
acquiring the vertical coordinate of the initial point coordinate as the vertical coordinate of the key point coordinate of the image in the loop;
acquiring the abscissa of the initial point coordinate, calculating the difference between the abscissa of the initial point coordinate and the image distance to obtain a difference result, and taking the difference result as the abscissa of the key point coordinate of the image in the loop;
and acquiring an initial direction angle of the starting point as a key point direction angle of the image in the loop.
In the embodiment of the application, the Y coordinate of the keyframes near point B' (3-5 frames, at least one frame) is set to the Y coordinate of the starting point A, the X coordinate of the keyframes near point B' (at least one frame) is set to the X coordinate of A plus (-D1'), and the Yaw direction angle of point B' is set to the same value as that of the starting point A. The key point coordinate and key point direction angle of the key point B' can thus be obtained based on this positioning strategy.
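The following sketch illustrates this positioning strategy under the assumption that each keyframe pose is stored as (x, y, yaw) and that the corridor runs along the X axis as in map 1; the function and variable names are illustrative and do not come from the patent.

```python
def apply_positioning_strategy(start_pose, d1_img, keyframes_near_b):
    """Anchor the keyframes near B' using the start pose A and the image distance D1'.

    start_pose:        (x_A, y_A, yaw_A) of the starting point A in the map frame
    d1_img:            image-coordinate distance D1' between A and the preset end point B
    keyframes_near_b:  list of (x, y, yaw) poses of the 3-5 keyframes recorded near B'
    """
    x_a, y_a, yaw_a = start_pose
    corrected = []
    for _ in keyframes_near_b:
        corrected.append((
            x_a + (-d1_img),   # X of the keyframe: X coordinate of A plus (-D1')
            y_a,               # Y of the keyframe: same as the Y coordinate of A
            yaw_a,             # yaw of B': same as the yaw of the starting point A
        ))
    return corrected

key_poses = apply_positioning_strategy((900.0, 50.0, 0.0), 840.0,
                                       [(58.0, 47.5, 0.02), (59.1, 47.6, 0.01)])
print(key_poses)   # every keyframe near B' is pinned to (60.0, 50.0, 0.0)
```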
105. And acquiring a pose transformation matrix according to the key point coordinates, the key point direction angles and the initial terminal point coordinates of the preset terminal point.
In the embodiment of the application, after the key point coordinates, the key point direction angles and the initial endpoint coordinates of the preset endpoint are known, a pose transformation matrix can be calculated according to the transformation relation among coordinate systems so as to be used for subsequent loop correction.
In one embodiment, the step 105 comprises:
and calculating and acquiring a pose transformation matrix according to a transformation strategy that the multiplication of the second coordinate and the pose transformation matrix is equal to the first coordinate.
In the embodiment of the present application, if the pose transformation matrix is denoted as T, the first coordinate as P1 and the second coordinate as P2, then since P1 = T·P2, the pose transformation matrix can be obtained quickly from the transformation strategy that multiplying the second coordinate by the pose transformation matrix equals the first coordinate, thereby realizing the transformation from the real space coordinate system to the image coordinate system.
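A minimal sketch of this transformation strategy, assuming for illustration that the poses are 2D and represented as homogeneous SE(2) matrices (the patent does not fix the matrix representation): T is recovered from P1 = T·P2 by multiplying P1 on the right by the inverse of P2.

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 3x3 matrix for a 2D pose (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def pose_transform(p1, p2):
    """Return T such that T @ se2(*p2) == se2(*p1), i.e. P1 = T * P2."""
    return se2(*p1) @ np.linalg.inv(se2(*p2))

# P1: preset end point B (with the yaw fixed by the positioning strategy)
# P2: key point B' estimated during mapping
T = pose_transform((60.0, 50.0, 0.0), (58.2, 47.4, 0.05))
print(np.round(T, 3))
```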
106. And performing pose correction on the in-loop image according to the pose transformation matrix to obtain a loop correction image.
In the embodiment of the application, once the pose transformation matrix from the real space coordinate system to the image coordinate system is known, the pose of every frame of the in-loop image can be corrected to obtain the loop correction image. It can be seen that, by using the regular geometric characteristics of the environment (local symmetry or similarity, where the geometric rules can be accurately measured or calculated to obtain a closure point), a closed virtual-connection calibration is performed on the scanned, non-closed map based on the symmetry and similarity of the geometric features of the starting point and the end point, so that the closure point of the current scan is specified and loop closure correction is performed on the scanned map (such as a map constructed with SLAM technology).
In one embodiment, the step 106 includes:
and performing pose correction based on light beam adjustment on the image in the loop according to the pose conversion matrix to obtain a loop correction image.
In the embodiment of the present application, a bundle adjustment method is used when correcting the in-loop image. Bundle adjustment means that, for any three-dimensional point P in the scene, the rays that start at the optical center of the camera of each view and pass through the pixel corresponding to P in the image should all intersect at P; over all three-dimensional points, a large number of such ray bundles are formed. In practice, because of noise and other effects, the rays almost never converge exactly at one point, so during the solution the quantities to be estimated must be continuously adjusted so that the rays finally converge at the point P. Bundle adjustment uses different optimization methods depending on the application scenario; common choices include gradient descent, Newton's method and the Gauss-Newton method. The poses of all frames of the in-loop image are corrected based on the pose transformation matrix; specifically, all poses of the in-loop image are corrected with the Bundle Adjustment (BA) method, yielding a well-corrected loop correction image. For several similar non-closed regions within the same environment, the method can be applied iteratively to perform loop closure correction, i.e., the small loop is closed first and then the larger loop.
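As a simple illustration (an assumption about how the frame poses might be stored, not the patent's implementation), the pose transformation matrix can be propagated to every frame of the in-loop image as a coarse correction; a bundle-adjustment or pose-graph optimizer (e.g., Ceres, g2o or GTSAM) would then refine these corrected poses.

```python
import numpy as np

def correct_in_loop_poses(T, frame_poses):
    """Apply the loop-closure pose transformation matrix T to every frame pose.

    T:            3x3 homogeneous SE(2) correction (from step 105)
    frame_poses:  list of 3x3 homogeneous frame poses recorded between A and B'
    """
    return [T @ pose for pose in frame_poses]

frames = [np.eye(3),
          np.array([[1, 0, 5.0], [0, 1, 0.0], [0, 0, 1]])]
T = np.array([[1, 0, -1.8], [0, 1, 2.6], [0, 0, 1]])   # e.g. the T from step 105
for p in correct_in_loop_poses(T, frames):
    print(np.round(p, 2))

# In a full pipeline these coarsely corrected poses would serve as the initial
# values of the bundle-adjustment optimization rather than as the final result.
```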
In one embodiment, the step 106 is followed by:
and acquiring a target area corresponding to the loop back corrected image in the original map data, and replacing the target map data of the target area with the loop back corrected image to update the original map data.
In the embodiment of the application, after the robot has scanned the closed environment, the grid map built by scanning can be scaled to an appropriate ratio against the building blueprint of the environment (which can be a CAD drawing or a BIM model), and the map data is correspondingly corrected based on the building drawing. Specifically, the target area corresponding to the loop correction image in the original map data is obtained, and the target map data of the target area is then replaced with the loop correction image, thereby updating the original map data.
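A minimal sketch of this map update, assuming for illustration that the original map and the loop correction image are occupancy-grid arrays at the same resolution and that the target area is an axis-aligned rectangle; the patent does not prescribe a specific data format.

```python
import numpy as np

def update_map(original_map, loop_corrected, target_area):
    """Replace the target area of the original map with the loop correction image.

    original_map:    2D occupancy grid of the whole environment
    loop_corrected:  2D occupancy grid produced by the loop closure correction
    target_area:     (row0, col0) of where the corrected patch belongs in the original map
    """
    r0, c0 = target_area
    h, w = loop_corrected.shape
    updated = original_map.copy()
    updated[r0:r0 + h, c0:c0 + w] = loop_corrected
    return updated

original = np.zeros((200, 200), dtype=np.int8)
patch = np.ones((40, 120), dtype=np.int8)
new_map = update_map(original, patch, (80, 40))
print(new_map.sum())   # 40 * 120 cells replaced
```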
In order to better implement the method of the present application, the embodiments of the present application further provide a mapping loop closure correction device.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of the mapping loop closure correction device 20 of the present application, wherein the mapping loop closure correction device 20 specifically includes the following structure: an acquisition unit 201 and a processing unit 202.
The obtaining unit 201 is configured to obtain an initial point coordinate and an initial direction angle of a starting point, and an initial end point coordinate of a preset end point.
In the embodiment of the present application, taking the case where the mapping loop closure correction device 20 is applied to a robot as an example, when the robot starts to move in a certain space, it may first detect and acquire the initial point coordinate and initial direction angle of the starting point. For example, let point A in FIGS. 2a-2d be the starting point, with initial point coordinate and initial direction angle (x, y, yaw). The robot then moves forward under manual or autonomous navigation control towards a preset end point (for example, target point B in FIGS. 2a-2d).
The robot's own sensors, namely a (single-line or multi-line) laser sensor, the chassis odometer, an IMU (inertial measurement unit) and a 3D depth camera (binocular/TOF/structured light), respectively acquire 2D/3D laser point clouds, 3D poses, visual images and other information about the perceived environment, and the robot fuses this information to build the map. FIGS. 2a-2d show four actual environments for robot mapping and the corresponding mapping results (the buildings have geometric symmetry and similarity). Map 1 in FIG. 2a is taken as the example for describing the technical implementation; map 2, map 3 and map 4 differ only in the geometric shape of the environment, including but not limited to rectangles, five-pointed stars, circles and ellipses, but in all of them the scanned path is not closed on the map.
Because the effective range of the robot's laser radar or depth vision camera is only ten-odd meters or a few tens of meters, a long passage is not necessarily covered in one pass, and continuous scanning combined with the continuous data of sensors such as the robot chassis odometer and the IMU is required to obtain a relatively accurate environment geometry. However, after the robot has travelled a certain distance, the odometer and the IMU accumulate errors that grow with the distance travelled, so that when the scanned map shows the segment from A' to B' in FIGS. 2a-2b (and likewise for map 1/map 2, map 3 and map 4), the robot has actually reached point B', not the exact point B of the real environment.
The obtaining unit 201 is further configured to obtain, according to the initial point coordinate of the initial point and the initial end point coordinate of the preset end point, a corresponding physical distance between the initial point and the preset end point in a real space coordinate system.
In the embodiment of the application, the physical distance D1 between the starting point and the preset end point in the real space coordinate system can be obtained by manually measuring the distance between points A and B in the real environment.
The obtaining unit 201 is further configured to obtain, according to a conversion relationship between a real space coordinate system and an image coordinate system, an image distance between the starting point and the preset end point in the image coordinate system.
In the embodiment of the application, the image distance D1' between the starting point and the preset end point in the image coordinate system can be obtained by scaling the physical distance D1 with the conversion ratio between the real space coordinate system and the image coordinate system.
The processing unit 202 is configured to obtain an in-loop image, and obtain a key point coordinate and a key point direction angle of the in-loop image according to a preset positioning policy and an initial point coordinate and an initial direction angle of the initial point.
In the embodiment of the present application, in order to implement loop closure correction, it is necessary to acquire the in-loop image recorded by the robot while moving from the starting point (e.g., point A in FIG. 2a) to the actual end point (e.g., point B' in FIG. 2a), and then, based on the acquired initial point coordinate and initial direction angle of the starting point and the preset positioning strategy, calculate the key point coordinate and key point direction angle of the key point B'. In this way, the key point coordinate and direction angle corresponding to the actual end point can be quickly determined from the acquired in-loop image.
In an embodiment, the processing unit 202 is specifically configured to:
acquiring the vertical coordinate of the initial point coordinate as the vertical coordinate of the key point coordinate of the image in the loop;
acquiring the abscissa of the initial point coordinate, calculating the difference between the abscissa of the initial point coordinate and the image distance to obtain a difference result, and taking the difference result as the abscissa of the key point coordinate of the image in the loop;
and acquiring an initial direction angle of the starting point as a key point direction angle of the image in the loop.
In the embodiment of the application, the Y coordinate of the keyframes near point B' (3-5 frames, at least one frame) is set to the Y coordinate of the starting point A, the X coordinate of the keyframes near point B' (at least one frame) is set to the X coordinate of A plus (-D1'), and the Yaw direction angle of point B' is set to the same value as that of the starting point A. The key point coordinate and key point direction angle of the key point B' can thus be obtained based on this positioning strategy.
The processing unit 202 is further configured to acquire a pose transformation matrix according to the key point coordinates, the key point direction angles, and the initial end point coordinates of the preset end point.
In the embodiment of the application, after the key point coordinates, the key point direction angles and the initial endpoint coordinates of the preset endpoint are known, a pose transformation matrix can be calculated according to the transformation relation among coordinate systems so as to be used for subsequent loop correction.
In an embodiment, the processing unit 202 is further specifically configured to:
and calculating and acquiring a pose transformation matrix according to a transformation strategy that the multiplication of the second coordinate and the pose transformation matrix is equal to the first coordinate.
In the embodiment of the present application, if the pose transformation matrix is denoted as T, the first coordinate as P1 and the second coordinate as P2, then since P1 = T·P2, the pose transformation matrix can be obtained quickly from the transformation strategy that multiplying the second coordinate by the pose transformation matrix equals the first coordinate, thereby realizing the transformation from the real space coordinate system to the image coordinate system.
The processing unit 202 is further configured to perform pose correction on the loop-back image according to the pose transformation matrix, so as to obtain a loop-back corrected image.
In the embodiment of the application, once the pose transformation matrix from the real space coordinate system to the image coordinate system is known, the pose of every frame of the in-loop image can be corrected to obtain the loop correction image. Therefore, by using the regular geometric characteristics of the environment (local symmetry or similarity, where the geometric rules can be accurately measured or calculated to obtain a closure point), a closed virtual-connection calibration is performed on the scanned, non-closed map based on the symmetry and similarity of the geometric features of the starting point and the end point, so that the closure point of the current scan is specified and loop closure correction is performed on the scanned SLAM map.
In an embodiment, the processing unit 202 is further specifically configured to:
and performing pose correction based on light beam adjustment on the image in the loop according to the pose conversion matrix to obtain a loop correction image.
In the embodiment of the present application, a bundle adjustment method is used when correcting the in-loop image. Bundle adjustment means that, for any three-dimensional point P in the scene, the rays that start at the optical center of the camera of each view and pass through the pixel corresponding to P in the image should all intersect at P; over all three-dimensional points, a large number of such ray bundles are formed. In practice, because of noise and other effects, the rays almost never converge exactly at one point, so during the solution the quantities to be estimated must be continuously adjusted so that the rays finally converge at the point P. Bundle adjustment uses different optimization methods depending on the application scenario; common choices include gradient descent, Newton's method and the Gauss-Newton method. The poses of all frames of the in-loop image are corrected based on the pose transformation matrix; specifically, all poses of the in-loop image are corrected with the Bundle Adjustment (BA) method, yielding a well-corrected loop correction image. For several similar non-closed regions within the same environment, the method can be applied iteratively to perform loop closure correction, i.e., the small loop is closed first and then the larger loop.
In an embodiment, the processing unit 202 is further configured to:
and acquiring a target area corresponding to the loop back corrected image in the original map data, and replacing the target map data of the target area with the loop back corrected image to update the original map data.
In the embodiment of the application, after the robot has scanned the closed environment, the grid map built by scanning can be scaled to an appropriate ratio against the building blueprint of the environment (which can be a CAD drawing or a BIM model), and the map data is correspondingly corrected based on the building drawing. Specifically, the target area corresponding to the loop correction image in the original map data is obtained, and the target map data of the target area is then replaced with the loop correction image, thereby updating the original map data.
The present application further provides a processing device, and referring to fig. 4, fig. 4 shows a schematic structural diagram of the processing device of the present application, and specifically, the processing device of the present application includes a processor, and the processor is configured to implement the steps in the embodiment corresponding to fig. 1 when executing the computer program stored in the memory; alternatively, the processor is configured to implement the functions of the modules in the corresponding embodiment of fig. 3 when executing the computer program stored in the memory.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in a memory and executed by a processor to accomplish the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The processing device may include, but is not limited to, a processor, a memory. Those skilled in the art will appreciate that the illustration is merely an example of a processing device and is not meant to be limiting, and that more or fewer components than those illustrated may be included, or some components may be combined, or different components may be included, for example, the processing device may also include input output devices, network access devices, buses, etc., through which the processor, memory, input output devices, network access devices, etc., are connected.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the processing device and connects the various parts of the overall processing device through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and by invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, video data, etc.) created according to the use of the processing device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The display screen is used for displaying characters of at least one character type output by the input and output unit.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus, the processing device and the corresponding modules thereof described above may refer to the description in the embodiment corresponding to fig. 1, and are not described herein again in detail.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in the embodiment corresponding to fig. 1 in the present application, and specific operations may refer to the description in the embodiment corresponding to fig. 1, and are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the embodiment of the present application corresponding to fig. 1, the beneficial effects that can be achieved in the embodiment of the present application corresponding to fig. 1 can be achieved, and the detailed description is omitted here.
The mapping loop closure correction method, apparatus and storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A mapping loop closure correction method, the method comprising:
acquiring an initial point coordinate and an initial direction angle of an initial point and an initial end point coordinate of a preset end point;
acquiring the corresponding physical distance between the starting point and the preset end point in a real space coordinate system according to the initial point coordinate of the starting point and the initial end point coordinate of the preset end point;
acquiring the corresponding image distance of the starting point and the preset end point in an image coordinate system according to the conversion relation between a real space coordinate system and the image coordinate system;
acquiring an in-loop image, and acquiring a key point coordinate and a key point direction angle of the in-loop image according to a preset positioning strategy and an initial point coordinate and an initial direction angle of the initial point;
acquiring a pose transformation matrix according to the key point coordinates, the key point direction angles and the initial end point coordinates of the preset end point;
and performing pose correction on the in-loop image according to the pose transformation matrix to obtain a loop correction image.
2. The method of claim 1, wherein obtaining the key point coordinates and the key point direction angle of the image in the loop according to a preset positioning strategy and the initial point coordinates and the initial direction angle of the initial point comprises:
acquiring the vertical coordinate of the initial point coordinate as the vertical coordinate of the key point coordinate of the image in the loop;
acquiring the abscissa of the initial point coordinate, calculating the difference between the abscissa of the initial point coordinate and the image distance to obtain a difference result, and taking the difference result as the abscissa of the key point coordinate of the image in the loop;
and acquiring an initial direction angle of the starting point as a key point direction angle of the image in the loop.
3. The method according to claim 1, wherein the obtaining a pose transformation matrix according to the key point coordinates and key point direction angles and the initial end point coordinates of the preset end point comprises:
and calculating and acquiring a pose transformation matrix according to a transformation strategy that the multiplication of the second coordinate and the pose transformation matrix is equal to the first coordinate.
4. The method according to claim 1, wherein the pose correction of the in-loop image according to the pose transformation matrix to obtain a loop correction image comprises:
and performing pose correction based on light beam adjustment on the image in the loop according to the pose conversion matrix to obtain a loop correction image.
5. The method according to claim 1, wherein after the pose correction of the in-loop image according to the pose transformation matrix to obtain a loop corrected image, the method further comprises:
and acquiring a target area corresponding to the loop back corrected image in the original map data, and replacing the target map data of the target area with the loop back corrected image to update the original map data.
6. A mapping loop closure correction device, characterized in that the mapping loop closure correction device comprises: an acquisition unit and a processing unit;
the acquisition unit is used for acquiring an initial point coordinate and an initial direction angle of an initial point and an initial end point coordinate of a preset end point;
the acquiring unit is further configured to acquire a corresponding physical distance between the starting point and the preset end point in a real space coordinate system according to the initial point coordinate of the starting point and the initial end point coordinate of the preset end point;
the processing unit is used for acquiring the corresponding image distance between the starting point and the preset end point in the image coordinate system according to the conversion relation between the real space coordinate system and the image coordinate system;
the processing unit is further configured to obtain an in-loop image, and obtain a key point coordinate and a key point direction angle of the in-loop image according to a preset positioning strategy and an initial point coordinate and an initial direction angle of the initial point;
the processing unit is further used for acquiring a pose transformation matrix according to the key point coordinates, the key point direction angles and the initial end point coordinates of the preset end point;
and the processing unit is also used for carrying out pose correction on the image in the loop according to the pose transformation matrix to obtain a loop correction image.
7. The apparatus according to claim 6, wherein the processing unit is specifically configured to:
acquiring the vertical coordinate of the initial point coordinate as the vertical coordinate of the key point coordinate of the image in the loop;
acquiring the abscissa of the initial point coordinate, calculating the difference between the abscissa of the initial point coordinate and the image distance to obtain a difference result, and taking the difference result as the abscissa of the key point coordinate of the image in the loop;
and acquiring an initial direction angle of the starting point as a key point direction angle of the image in the loop.
8. The apparatus according to claim 6, wherein the processing unit is further specifically configured to:
and calculating and acquiring a pose transformation matrix according to a transformation strategy that the multiplication of the second coordinate and the pose transformation matrix is equal to the first coordinate.
9. A processing device comprising a processor and a memory, the memory having stored therein a computer program, the processor when calling the computer program in the memory performing the method of any of claims 1 to 5.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any of claims 1 to 5.
CN202111583636.XA 2021-12-22 2021-12-22 Method, device and medium for correcting image loop Pending CN114373010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111583636.XA CN114373010A (en) 2021-12-22 2021-12-22 Method, device and medium for correcting image loop

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111583636.XA CN114373010A (en) 2021-12-22 2021-12-22 Method, device and medium for correcting image loop

Publications (1)

Publication Number Publication Date
CN114373010A true CN114373010A (en) 2022-04-19

Family

ID=81139942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111583636.XA Pending CN114373010A (en) 2021-12-22 2021-12-22 Method, device and medium for correcting image loop

Country Status (1)

Country Link
CN (1) CN114373010A (en)

Similar Documents

Publication Publication Date Title
US11042762B2 (en) Sensor calibration method and device, computer device, medium, and vehicle
EP3627181A1 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
US9519968B2 (en) Calibrating visual sensors using homography operators
JP5248806B2 (en) Information processing apparatus and information processing method
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
US20140253679A1 (en) Depth measurement quality enhancement
CN111426312B (en) Method, device and equipment for updating positioning map and storage medium
CN111640180B (en) Three-dimensional reconstruction method and device and terminal equipment
US11209277B2 (en) Systems and methods for electronic mapping and localization within a facility
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
US20200082556A1 (en) Image processing apparatus, image processing program, and driving assistance system
CN109635639B (en) Method, device, equipment and storage medium for detecting position of traffic sign
CN114662587A (en) Three-dimensional target sensing method, device and system based on laser radar
CN116844124A (en) Three-dimensional object detection frame labeling method, three-dimensional object detection frame labeling device, electronic equipment and storage medium
CN114373010A (en) Method, device and medium for correcting image loop
CN115507840A (en) Grid map construction method, grid map construction device and electronic equipment
CN114577216A (en) Navigation map construction method and device, robot and storage medium
CN114882194A (en) Method and device for processing room point cloud data, electronic equipment and storage medium
CN116136408A (en) Indoor navigation method, server, device and terminal
CN113776517A (en) Map generation method, device, system, storage medium and electronic equipment
KR20210116161A (en) Heterogeneous sensors calibration method and apparatus using single checkerboard
CN111950420A (en) Obstacle avoidance method, device, equipment and storage medium
US9165208B1 (en) Robust ground-plane homography estimation using adaptive feature selection
CN116804551A (en) Mobile robot navigation map generation method, equipment, medium and mobile robot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

CB02 Change of applicant information