CN112837241A - Method and device for removing image-building ghost and storage medium - Google Patents


Info

Publication number
CN112837241A
CN112837241A
Authority
CN
China
Prior art keywords
point cloud
frame
ghost
frames
pose
Prior art date
Legal status
Pending
Application number
CN202110179342.4A
Other languages
Chinese (zh)
Inventor
贾魁
Current Assignee
Guizhou Jingbangda Supply Chain Technology Co ltd
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Guizhou Jingbangda Supply Chain Technology Co ltd
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Jingbangda Supply Chain Technology Co ltd, Beijing Jingdong Qianshi Technology Co Ltd filed Critical Guizhou Jingbangda Supply Chain Technology Co ltd
Priority to CN202110179342.4A priority Critical patent/CN112837241A/en
Publication of CN112837241A publication Critical patent/CN112837241A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device, and a storage medium for removing map-building ghosting, applied in the field of data processing. The method comprises the following steps: displaying a target track containing a ghost region; in response to a user's operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost region, displaying time point information of the associated frames; aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose; and performing ghost optimization on the target track according to the relative pose. With this method and device, map-building ghosting can be effectively removed.

Description

Method and device for removing image-building ghost and storage medium
Technical Field
The present application relates to the field of data processing, and in particular, to a method, an apparatus, and a storage medium for removing image ghosting.
Background
Autonomous driving relies on a high-precision map as a prior during driving, supporting real-time perception, localization, and route planning. Simultaneous Localization and Mapping (SLAM) based on data collected while driving is a low-cost way to produce such high-precision maps. Specifically, the autonomous vehicle scans roads and collects point cloud data with devices such as lidar, providing high-quality point cloud data for SLAM; in addition, the vehicle generates poses for SLAM from data collected by a wheel odometer, an integrated inertial navigation unit, and the like.
In implementing the present application, the inventors found at least the following problems in the above technology: when SLAM is performed with point cloud data, errors accumulate as the map is built; although closed-loop detection can currently remove most of the accumulated error, closing the loop is not guaranteed to succeed 100% of the time. In addition, pose-extrapolation matching can produce large matching errors in scenarios such as sharp bends. Both the accumulated error and the matching error cause ghosting.
Disclosure of Invention
The embodiment of the application provides a method and equipment for removing image ghosting and a storage medium, so as to improve the ghost removing effect.
In a first aspect, an embodiment of the present application provides a method for removing image ghosting, including: displaying a target track containing a ghost region; displaying time point information of an associated frame in response to an operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on a ghost region by a user; aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose; and carrying out ghost optimization on the target track according to the relative pose.
In a second aspect, an embodiment of the present application provides an image ghosting removing device, including:
the display module is used for displaying a target track containing a ghost image area; and displaying time point information of the associated frames in response to an operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost region by a user;
the processing module is used for aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose; and carrying out ghost optimization on the target track according to the relative pose.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method according to the first aspect is implemented.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on an electronic device, the electronic device is caused to execute the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which includes a computer program, when the computer program runs on an electronic device, causes the electronic device to execute the method according to the first aspect.
The method displays a target track containing a ghost region; in response to a user's operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost region, displays time point information of the associated frames; aligns the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose; and performs ghost optimization on the target track according to the relative pose. Because the relative pose is obtained by aligning the point cloud of the frame to be corrected to that of the reference frame, and the target track is then optimized with this relative pose, map-building ghosting is effectively removed.
These and other aspects of the present application will be more readily apparent from the following description of the embodiment(s).
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of the image-creating ghost removing method provided by the present application;
fig. 2 is a schematic flowchart of a method for removing image ghosts according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a target trajectory;
FIG. 4 is a schematic diagram of an interface for image ghosting removal according to an embodiment of the present disclosure;
FIG. 5 is an enlarged partial schematic view of the interface shown in FIG. 4;
FIG. 6 is a schematic view of an interface of an operation panel of the ghost image removing tool according to the embodiment of the present disclosure;
FIG. 7 is a schematic view of another interface of an operating panel of a ghost removal tool according to an embodiment of the present disclosure;
FIG. 8 is another schematic interface diagram for image ghosting removal provided by an embodiment of the present application;
fig. 9 is a schematic point cloud diagram of a pair of associated frames before point cloud alignment according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the point cloud of the associated frame shown in FIG. 9 after alignment of the point cloud;
fig. 11 is a schematic structural diagram of an image ghosting removing device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The above figures show specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate them to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as recited in the appended claims.
First, for ease of understanding, terms used in this application are explained:
Ghosting is an abnormal mapping artifact in a point cloud map: when a map is built from data collected about a real-world object by devices such as lidar and cameras, the resulting map contains multiple copies of that object; that is, one object that should appear once appears several times. For example, a point cloud map that renders a single utility pole as two utility poles exhibits this error.
Compared with a common electronic map, the high-precision map has higher precision and contains more target ground objects.
Point cloud data is a set of sample points on an object's surface obtained by a measuring instrument. Point cloud data obtained by laser measurement includes three-dimensional coordinates (XYZ) and laser reflection intensity; point cloud data obtained by photogrammetry includes three-dimensional coordinates (XYZ) and color information (RGB); combining the laser measurement and photogrammetry principles yields point cloud data with three-dimensional coordinates (XYZ), laser reflection intensity, and color information (RGB). For laser measurement, when a laser beam hits an object's surface, the reflected laser carries information such as direction and distance; as the beam scans along a trajectory, the reflected laser points are recorded. Because the scan is extremely fine, a very large number of laser points is obtained, forming dense point cloud data.
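As an illustrative sketch (the field names and sample values are assumptions for illustration, not taken from the patent), a fused point record combining the laser-measured and photogrammetric attributes described above can be modeled as:

```python
from dataclasses import dataclass

@dataclass
class FusedPoint:
    # Three-dimensional coordinates from laser measurement (meters).
    x: float
    y: float
    z: float
    # Laser reflection intensity (sensor-specific scale).
    intensity: float
    # Color information from photogrammetry, 0-255 per channel.
    r: int
    g: int
    b: int

# A tiny two-point "cloud" sampled from an object surface.
cloud = [
    FusedPoint(1.20, -0.45, 0.98, 37.0, 128, 64, 32),
    FusedPoint(1.21, -0.44, 0.99, 35.5, 130, 66, 33),
]
print(len(cloud), cloud[0].x)
```

A real point cloud file holds millions of such records, typically as packed binary rather than Python objects.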
Closed-loop detection, also known as loop detection, refers to the ability of an autonomous vehicle to recognize that it has returned to a previously visited scene, allowing the map to form a closed loop.
Registration refers to matching and coordinate transformation of two point cloud sets corresponding to overlapping scenes, aligning one scene to the other; ghosting can be eliminated in this process.
The pose file contains the pose corresponding to each frame of point cloud data; each frame corresponds to one pose, and the point cloud data at all poses are superimposed to form the whole point cloud map. In the pose file, each pose occupies one line, for example:
1589443722.909254,-0.005656,-0.000536,0.010376,0.999930,0.013034,1.739831,0.300687
where 1589443722.909254 is the time information used to look up the pose; -0.005656, -0.000536, 0.010376 are the x, y, z three-dimensional coordinates; and 0.999930, 0.013034, 1.739831, 0.300687 is the rotation information expressed as a quaternion.
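A small sanity check on the example pose line: a rotation quaternion must have unit norm, which lets one probe how the seven numeric fields group into rotation and translation (the grouping tested here is inferred from the numbers themselves and is an assumption, not something the patent specifies beyond its textual description):

```python
import math

line = "1589443722.909254,-0.005656,-0.000536,0.010376,0.999930,0.013034,1.739831,0.300687"
fields = [float(v) for v in line.split(",")]
timestamp, values = fields[0], fields[1:]

# A valid rotation quaternion has unit norm; compare the two
# candidate four-field groupings to see which satisfies that.
norm_first4 = math.sqrt(sum(v * v for v in values[0:4]))
norm_last4 = math.sqrt(sum(v * v for v in values[3:7]))
print(f"norm of fields 1-4: {norm_first4:.6f}")
print(f"norm of fields 4-7: {norm_last4:.6f}")
```

For this line, the four values immediately after the timestamp have unit norm, while the last four do not; such a check is a cheap guard against misreading a pose file's column layout.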
In closed-loop detection, as measuring instruments such as the camera move, errors accumulate in the computed poses and in the point positions obtained by triangulation and similar methods. The most effective way to remove this accumulated error is to find a closed loop and optimize all results against it.
At present, to reduce matching errors during SLAM, mapping methods mainly include Iterative Closest Point (ICP), Normal Distributions Transform (NDT), feature extraction, and the like. Because of characteristics of the acquisition process such as sharp turns and scenes with few features, mapping and extrapolation matching are difficult, so a single-pass mapping result can contain ghosting. Improving the robustness of the algorithm mitigates this ghosting, but cannot remove it 100% of the time.
In addition, closed-loop detection is hard to guarantee 100% correct, and accumulated error keeps growing as the map is built, which makes closing the loop very difficult. For the closed-loop problem, current mainstream methods include iterative closest point, normal distributions transform, closed-loop detection based on semantic information, Global Positioning System (GPS) information, and manual inspection and labeling of the collected data (for which no efficient open-source software is available). Specifically:
the iterative closest point and the normal distribution transformation are both performed by searching a certain distance radius R range, calculating the point distance matching score, and performing matching calculation analysis to determine whether a closed loop is detected. In the two methods, the distance radius R is not set to a fixed value well, accumulated errors are large sometimes, the distance radius R needs to be amplified, and the calculation amount after amplification is large; moreover, even if the matching calculates the corresponding relationship, the distance radius R is relatively large and the scene is repeated singly, and there may be a plurality of places with higher matching scores, and it is not reliable to distinguish whether a closed loop is detected or not according to the matching score threshold of the point distance.
Closed-loop detection can also be assisted by semantic information such as curbs, lane lines, buildings, and signboards. This approach strongly depends on the accuracy of the recognition results and works only when scene features are significant, so it has limitations.
GPS information is a common aid for closed-loop detection, but cities have many tall buildings; if the GPS signal is poor during driving, the GPS information cannot be used to assist closed-loop detection.
Manually inspecting and labeling the collected data so that a program can find the corresponding closed-loop detection information can, unlike the three approaches above, reliably detect closed loops, but its efficiency is lower.
Moreover, existing ghost-removal software is cumbersome to operate and not user friendly; it also only repairs the closed-loop detection problem and cannot remove ghosting inside a single pass.
In conclusion, SLAM cannot guarantee that pose-extrapolation matching is 100% correct, and large matching errors commonly occur in challenging scenes; for closed-loop detection, the current mainstream approaches are either hard to make 100% correct or time- and labor-consuming.
To solve these problems, the present application provides a method, a device, and a storage medium for removing map-building ghosting, in which ghosting in SLAM is corrected through human-computer interaction. The application can serve various services related to ghost removal.
Fig. 1 is a schematic view of an application scenario of the image ghosting removal method provided by the present application. As shown in fig. 1, in practical applications, after the client 11 performs synchronous positioning and mapping based on point cloud data and the like, the obtained point cloud map corresponding to the target track includes a ghost, and at this time, the user needs to correct the ghost through the client 11. A user determines a ghost image area in a target track displayed by a client terminal 11, and performs operation of connecting an associated frame on the ghost image area through input equipment such as a mouse and a keyboard or a touch mode, wherein the associated frame comprises a reference frame and a frame to be corrected; in response to the operation of the user, the client 11 displays the time point information of the associated frame, aligns the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to the preset point cloud alignment information to obtain a relative pose, and then performs ghost optimization on the target track according to the relative pose to remove ghosts. Then, the client 11 generates a high-precision map based on the point cloud data and pose data after the ghost is removed, and sends the high-precision map to the server 12 through the network, and the high-precision map is called from the server 12 by the autonomous vehicle 13 to assist autonomous driving.
It should be noted that fig. 1 is only a schematic diagram of an application scenario provided in the embodiment of the present application, and the embodiment of the present application does not limit the devices included in fig. 1, nor does it limit the positional relationship between the devices in fig. 1. For example, in the application scenario shown in fig. 1, the client 11 performs synchronous positioning and mapping, ghost image removal, and high-precision map generation as an example for explanation, and the operations may also be performed by different clients. For example, in the application scenario shown in fig. 1, a data storage device may be further included, and the data storage device may be an external memory with respect to the client 11 and the server 12, or may be an internal memory integrated in the client 11 or the server 12. The server 12 may be an independent server, or may be a service cluster or the like.
The technical solution of the present application will be described in detail below with reference to specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a schematic flowchart of a method for removing map-building ghosting according to an embodiment of the present application. The method is applied to a ghost removal device and can be implemented in software and/or hardware. Optionally, in the scenario shown in fig. 1, the device may be integrated in the client, for example as a chip or circuit in the client, or the device may be the client itself. The following description takes the client as the execution subject. A client refers to a computer such as a laptop, desktop, workstation, personal digital assistant (PDA), mainframe, or other suitable computer.
As shown in fig. 2, the image ghosting removing method includes the following steps:
and S201, displaying a target track containing a ghost area.
For example, fig. 3 shows a target trajectory containing a ghost region. The trajectory in fig. 3 is a SLAM result: an autonomous-vehicle driving trajectory whose outbound and return passes are offset by roughly 4 meters (shown as 3.970381 in fig. 3), i.e., a closed-loop ghost. Referring to fig. 3, the ghost region includes a reference track and a track to be corrected; taking the reference track as the baseline, the embodiment corrects the track to be corrected until it coincides with the reference track, thereby removing the ghost. The frames corresponding to the reference track are reference frames, and the frames corresponding to the track to be corrected are frames to be corrected. In fig. 3, ΔX = 0.337700 is the difference in X coordinates; ΔY = 3.955700 the difference in Y coordinates; ΔZ = 0.048200 the difference in Z coordinates; ΔXY = 3.970088 the difference in the XY plane; ΔXZ = 0.341122 the difference in the XZ plane; and ΔZY = 3.955994 the difference in the ZY plane.
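Assuming the plane differences shown in fig. 3 are the Euclidean norms of the corresponding axis-pair differences (an inference from the numbers, not something the patent states), they can be reproduced from the per-axis values:

```python
import math

# Per-axis differences read from fig. 3 (signs ignored).
dx, dy, dz = 0.337700, 3.955700, 0.048200

# Planar differences as Euclidean norms of the axis pairs.
d_xy = math.hypot(dx, dy)
d_xz = math.hypot(dx, dz)
d_zy = math.hypot(dz, dy)
print(f"XY {d_xy:.6f}  XZ {d_xz:.6f}  ZY {d_zy:.6f}")
```

The computed values agree with the figure's 3.970088, 0.341122, and 3.955994 to within rounding.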
In terms of input data, the embodiment requires the point cloud data and pose data obtained after SLAM; the point cloud data is stored in a point cloud file and the pose data in a pose file.
In this step, the point cloud data is loaded and the corresponding target track is displayed so that the user can inspect the ghost region. For example, the three-dimensional point cloud processing software CloudCompare provides basic point cloud loading and display functions, but the application is not limited thereto.
S202: in response to the user's operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost region, display time point information of the associated frames.
After identifying the ghost region of the target track, the user can connect a reference frame and a frame to be corrected in a pair of associated frames on the ghost region, that is, associate the reference frame with the frame to be corrected so that point cloud alignment can be performed in a targeted manner. The embodiments do not limit how the associated frames are connected; two examples follow:
in one example, a user positions a cursor at a point on a reference track through a mouse, and selects a frame corresponding to the point as a reference frame for association by clicking a left mouse button and the like; and then, positioning the cursor at one point on the track to be corrected by the user through the mouse, and selecting the frame corresponding to the point as the frame to be corrected for correlation through similar operations such as clicking a left mouse button, so as to realize the connection of the pair of correlated frames.
In another example, a user positions a cursor at a point on a reference track through a mouse, and selects a frame corresponding to the point as a reference frame for association by clicking a left mouse button and the like; and then, continuously dragging the cursor to a point on the track to be corrected by the user through a mouse, and selecting the frame corresponding to the point as the frame to be corrected for correlation by clicking a left button of the mouse and other similar operations, thereby realizing the connection of a pair of correlated frames.
After the user performs the above operation, the client responds by displaying the time point information of the associated frames, that is, the time point information of the reference frame and of the frame to be corrected. Since each frame occurs at a different time on the target track, the reference frame and/or the frame to be corrected can be distinguished by their time point information.
Optionally, the method may further include: in response to the user's operation of connecting the reference frame and the frame to be corrected in a pair of associated frames on the ghost region, displaying the optimization edge corresponding to the associated frames on the screen, as shown in fig. 4. Further, after displaying the time point information of the associated frames, the method may further include: in response to the user selecting the associated frames, displaying the region where they are located in the center of the screen, as shown in fig. 5.
S203: align the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information, obtaining a relative pose.
The preset point cloud alignment information may include at least one of: X, Y, or Z of the three-dimensional coordinates, laser reflection intensity, or the R, G, or B channel of the color information (RGB). For example, if only the X coordinates of the point cloud of the frame to be corrected differ from those of the reference frame, the preset alignment information contains only an X adjustment. Since the required adjustment is initially unknown to the user, it can start small and be tuned later, aligning the point cloud of the frame to be corrected to that of the reference frame and thereby obtaining the relative pose of the frame to be corrected with respect to the reference frame in the pair of associated frames.
In one possible implementation, the point clouds of the reference frame and of the frame to be corrected are displayed in different colors. For example, the point cloud of the frame to be corrected is shown in red and that of the reference frame in green; the client then adjusts the red point cloud according to the preset alignment information until it is aligned with the green point cloud, and automatically computes the relative pose between the two corresponding frames. Optionally, these relative poses can be stored as correction information for closed-loop correction or fore-aft registration error correction in a correction information file, such as a fix file.
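The relative pose between two frames can be expressed as the composition of the inverse reference pose with the pose of the frame to be corrected. The sketch below shows this in 2D with homogeneous 3x3 matrices to keep the arithmetic readable (the real system works with 3D poses and quaternions; all names and values here are illustrative assumptions):

```python
import math

def make_pose(theta, tx, ty):
    """2D rigid pose as a 3x3 homogeneous matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(t):
    """Closed-form inverse of a rigid transform: R^T, -R^T t."""
    c, s = t[0][0], t[1][0]
    tx, ty = t[0][2], t[1][2]
    return [[c, s, -(c * tx + s * ty)],
            [-s, c, -(-s * tx + c * ty)],
            [0.0, 0.0, 1.0]]

# World poses of the reference frame and the (drifted) frame to be corrected.
t_ref = make_pose(0.0, 10.0, 5.0)
t_cor = make_pose(0.05, 10.3, 8.9)  # ~4 m ghost offset along Y

# Relative pose of the frame to be corrected w.r.t. the reference frame;
# after manual alignment this is what would be saved to the fix file.
t_rel = matmul(invert(t_ref), t_cor)
print(round(t_rel[0][2], 3), round(t_rel[1][2], 3))
```

The translation part of `t_rel` is exactly the residual offset between the two passes, which is what the ghost optimization must remove.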
S204: perform ghost optimization on the target track according to the relative pose.
Performing ghost optimization on the target track with the relative pose obtained from point cloud alignment eliminates the ghost. SLAM is then performed on the ghost-optimized data, and the resulting high-precision map is used to assist autonomous driving.
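The patent does not specify how the relative pose is propagated along the trajectory. As a toy sketch under stated assumptions (the function name and the linear weighting are inventions for illustration; production systems typically solve a pose-graph optimization instead), a translation correction can be naively distributed over the poses between the trajectory start and the frame to be corrected:

```python
def distribute_correction(poses, fix_index, correction):
    """Naively spread a translation correction over the trajectory:
    pose i receives fraction i/fix_index of the correction, so the
    start stays fixed and the frame to be corrected moves fully onto
    the reference frame."""
    cx, cy = correction
    out = []
    for i, (x, y) in enumerate(poses):
        w = min(i / fix_index, 1.0)  # later frames get the full shift
        out.append((x + w * cx, y + w * cy))
    return out

# A straight trajectory whose last frame drifted 0.4 m in y.
traj = [(float(i), 0.0) for i in range(5)]
fixed = distribute_correction(traj, fix_index=4, correction=(0.0, -0.4))
print(fixed[0], fixed[4])
```

The first pose is untouched while the corrected frame absorbs the full relative-pose translation, with intermediate frames interpolated between the two.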
In the map-building ghost removal method provided by this embodiment, a target track containing a ghost region is displayed first, and time point information of the associated frames is displayed in response to the user's operation of connecting a reference frame and a frame to be corrected on the ghost region; the point cloud of the frame to be corrected is then aligned to that of the reference frame according to preset point cloud alignment information to obtain a relative pose; finally, ghost optimization is performed on the target track according to the relative pose. Because the relative pose is obtained by aligning the two point clouds and the track is optimized with it, map-building ghosting is effectively removed, including ghosting caused by single-pass matching errors and ghosting caused by difficult closed-loop detection.
On this basis, as a possible embodiment, the map-building ghost removal method provided by the application can be implemented on top of the CloudCompare open-source software. Building on CloudCompare's existing functions, a ghost-removal-tool plug-in is designed for closed-loop correction, and it can also correct fore-aft registration errors. For example, the operation panel of the ghost removal tool is shown in fig. 6. Referring to fig. 6, the panel contains three regions, labeled: correction point selection, point cloud alignment, and ghost removal, wherein:
1. Correction point selection
The area comprises a merge control, an add control, a delete control, a data clipping control, and a panel list. Specifically, the merge control is used for merging point cloud trajectories; the add control is used for adding a new pair of associated frames, where the connecting line between the reference frame and the frame to be corrected in an associated-frame pair is an optimized edge, i.e., the constraint-edge information used during point cloud alignment; the delete control is used for deleting an associated-frame pair; and the data clipping control is used for clipping the point cloud data according to the corresponding time point information, so as to retain the point cloud data corresponding to the associated frames. The time point information of each pair of associated frames added through the add control is displayed in the panel list, and an associated-frame pair selected in the panel list can be deleted through the delete control. The panel list contains three columns of data: the serial number of the associated-frame pair, the reference frame of the pair, and the frame to be corrected. Accordingly, displaying the time point information of the associated frames may include: displaying the time point information of the associated frames in the operation panel of the ghost removal tool, as illustrated in fig. 7.
2. Point cloud alignment
The area contains controls for rotating and/or translating the frame to be corrected, and the step sizes of the rotation and translation can be set.
The preset point cloud alignment information contains pose information set by the user in this area in real time. After selecting an associated-frame pair, the user sets the preset point cloud alignment information in this area. The client then aligns the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to the preset point cloud alignment information to obtain the relative pose. Specifically, the client executes the following steps until the point cloud of the frame to be corrected is aligned with the point cloud corresponding to the reference frame, and determines the correction rotation matrix at that moment to be the relative pose:
obtaining a correction rotation matrix according to preset point cloud alignment information;
multiplying the point cloud corresponding to the frame to be corrected by the correction rotation matrix to obtain a corrected point cloud;
displaying the corrected point cloud;
and if the corrected point cloud is not aligned with the point cloud corresponding to the reference frame, responding to the operation of updating the point cloud alignment information by the user, and taking the updated point cloud alignment information as new preset point cloud alignment information.
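The multiplication step above can be sketched as follows. This is a minimal illustration, not the plug-in's actual code: it assumes the correction transform is expressed as a 4x4 homogeneous matrix built from the user's rotation/translation settings, and all function and parameter names (`apply_correction`, `rotation_z`) are illustrative.

```python
import numpy as np

def apply_correction(points, correction):
    """Apply a 4x4 homogeneous correction transform to an N x 3 point cloud."""
    # Append a 1 to each point so rotation and translation apply in one multiply.
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # N x 4
    return (homo @ correction.T)[:, :3]

def rotation_z(theta, tx=0.0, ty=0.0, tz=0.0):
    """Build a 4x4 transform: rotation about z by `theta` radians plus a translation."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    T[:3, 3] = [tx, ty, tz]
    return T

# Each time the user adjusts the alignment, the displayed point cloud would be
# recomputed as apply_correction(original_points, current_correction).
```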
3. Ghost removal
The area comprises a "ghost optimization and output" control for performing ghost optimization and a virtual control for plane optimization; when the virtual control is in the selected state, plane constraint optimization is performed on each frame of data after the ghost optimization. Optionally, a storage address for the ghost-optimized data is preset for subsequent retrieval, viewing, or application. The storage address may be a storage medium inside the client, or a storage medium outside the client, such as a cloud server.
In addition, the operation panel also includes a close control for closing the ghost removal tool, as shown in the upper right corner of fig. 7.
The ghost removal tool is simple to operate and easy to understand, and the manual interaction process is efficient. As a plug-in developed on the open-source software CloudCompare, it can leverage CloudCompare's extensive open-source functionality while conveniently and rapidly removing ghosts from synchronous localization and mapping.
Next, a specific implementation of the image-building ghost removal method provided in the present application is described with reference to the use of the ghost removal tool.
In some embodiments, S201, displaying the target trajectory containing the ghost region, may include: when an instruction of the user clicking the merge control is detected, displaying candidate point cloud files; when an instruction of the user selecting target point cloud files among the candidate point cloud files is detected, obtaining a plurality of target point cloud files, each containing data corresponding to part of the target trajectory; and merging the trajectories of the plurality of target point cloud files and displaying the target trajectory. Specifically, when the user clicks the merge control, the client displays the file directory; after navigating to the directory where the point cloud files are located, the user selects the point cloud files to merge. Because the volume of data to be processed is large, the input of a typical operation consists of multiple point cloud files, so the trajectories need to be merged once to facilitate the subsequent selection of associated frames.
When selecting an associated-frame pair (correction points), the user first selects the frame to be corrected and then selects the reference frame. Illustratively, the user clicks the add control and then adds 5 pairs of associated frames according to the track points corresponding to the ghost regions, as shown in fig. 3 and fig. 4. Further, S202, displaying the time point information of the associated frames in response to the user's operation of connecting, on the ghost region, the reference frame and the frame to be corrected in a pair of associated frames, may include: when the operation of the user connecting the reference frame and the frame to be corrected in a pair of associated frames on the ghost region is detected, displaying a prompt asking whether to confirm loading; and if an instruction of the user confirming loading is detected, displaying the time point information of the associated frames. For example, when the user selects the second track point, the client prompts whether to confirm loading, as shown in fig. 8; after the user confirms, the client displays the time point information of the associated frames in the panel list, and an optimized edge is simultaneously displayed in the ghost region.
Optionally, if the user double-clicks one of the serial numbers, the client responds to the operation and automatically centers the optimized edge corresponding to that serial number on the screen.
When the user clicks the data clipping control, the client clips the data according to the selected associated frames, retaining the point cloud data corresponding to the associated frames and to the frames adjacent to them. Therefore, in some embodiments, before aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to the preset point cloud alignment information to obtain the relative pose, the method may further include: in response to a data clipping operation performed by the user on the operation panel of the ghost removal tool, clipping the point cloud data corresponding to the target trajectory, retaining the associated frames and the point cloud data corresponding to the N frames before and after each associated frame, and displaying the point clouds corresponding to one pair of associated frames on the screen, as shown in fig. 9. The value of N is determined according to actual requirements or historical experience.
If a plurality of pairs of associated frames are included, displaying the point clouds corresponding to one pair of associated frames on the screen includes: displaying the point clouds corresponding to the first pair of associated frames. In this case, the method may further include: after the point cloud corresponding to the frame to be corrected in the first pair is aligned to the point cloud corresponding to the reference frame in the first pair (as shown in fig. 10), in response to the user's operation on the operation panel to align the next group of associated frames, storing the relative pose corresponding to the first pair, displaying the point clouds corresponding to the second pair on the screen, and executing, for the second pair, the step of aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to the preset point cloud alignment information to obtain the relative pose, until the point cloud alignment of all pairs of associated frames is completed. Optionally, after the alignment of all pairs is completed, the client may display a pop-up prompt indicating that all alignments are complete.
Optionally, the method may further include: loading the point clouds corresponding to the pairs of associated frames through a progress bar program, where the loading order of the point clouds is determined according to the time points at which the associated-frame pairs were established. It can be understood that these time points are reflected in the serial numbers in the panel list: pairs established earlier are ranked higher and have smaller serial numbers, while pairs established later are ranked lower and have larger serial numbers.
In some embodiments, S204, performing ghost optimization on the target trajectory according to the relative pose, may include: acquiring a pose file, where the pose file contains the pose of each frame corresponding to the target trajectory; obtaining the pose change between adjacent frames according to the pose of each frame; and performing ghost optimization on the target trajectory according to the relative pose, the pose of each frame, and the pose changes. The ghost optimization includes closed-loop optimization and front-to-back matching optimization.
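The pose change between adjacent frames can be sketched as below, under the assumption (not stated in the patent) that the pose file stores each frame's pose as a 4x4 homogeneous world transform; the function name `adjacent_pose_changes` is illustrative.

```python
import numpy as np

def adjacent_pose_changes(poses):
    """Given a list of 4x4 world poses T_0..T_n, return the relative
    transform from each frame to the next: T_rel_i = inv(T_i) @ T_{i+1}."""
    return [np.linalg.inv(a) @ b for a, b in zip(poses, poses[1:])]
```

Each returned relative transform corresponds to one odometry edge of the subsequent pose-graph optimization.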
Optionally, acquiring the pose file may include: displaying candidate pose files in response to the user's ghost optimization operation on the operation panel of the ghost removal tool; and acquiring the pose file in response to the user's selection of a pose file among the candidates. For example, the user clicks the virtual control labeled "ghost optimization and output"; the ghost optimization operation here is a click operation, to which the client responds by popping up a dialog displaying the candidate pose files, and after the user makes a selection the client acquires the pose file.
Performing ghost optimization on the target trajectory according to the relative pose, the pose of each frame, and the pose changes includes: taking each pose change as an optimized edge, setting the weight of each optimized edge to a preset value, taking the pose of each frame as a G2O optimization node, and performing nonlinear optimization on the poses of the frames; and outputting the de-ghosted point cloud map data according to the optimized poses. The nonlinear optimization is performed with the G2O library: each pose is a G2O optimization node, each pose transformation between adjacent poses is an optimized edge, and the weight of each optimized edge is assigned a preset value, such as 1, during optimization, whereupon the nonlinear optimization can be completed.
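The pose-graph structure described above (nodes = poses, edges = pose changes plus the loop-closure relative pose, uniform weight 1) can be illustrated with a deliberately simplified 1-D version. The patent's actual optimization uses full 6-DoF poses in G2O; this sketch only demonstrates how an extra loop edge redistributes odometry error across the trajectory, with all names illustrative.

```python
import numpy as np

def optimize_1d(n, odometry, loops, anchor=0.0):
    """Solve for n scalar poses by linear least squares.

    odometry: list of deltas for edges i -> i+1 (the per-frame pose changes).
    loops:    list of (i, j, delta) loop-closure edges (the relative poses
              obtained by manual point cloud alignment).
    All edges share the same weight, mirroring the preset value of 1.
    """
    rows, rhs = [], []
    r = np.zeros(n); r[0] = 1.0          # anchor the first pose
    rows.append(r); rhs.append(anchor)
    for i, d in enumerate(odometry):     # consecutive-frame edges
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(r); rhs.append(d)
    for i, j, d in loops:                # loop-closure edges
        r = np.zeros(n); r[i] = -1.0; r[j] = 1.0
        rows.append(r); rhs.append(d)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x
```

With odometry deltas [1.0, 1.0] but a loop edge saying pose 2 should be only 1.8 ahead of pose 0, the solver spreads the 0.2 discrepancy over both odometry edges instead of leaving it all at the loop, which is exactly the effect that flattens the ghost.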
In some embodiments, after S204, performing ghost optimization on the target trajectory according to the relative pose, the method may further include: performing plane constraint optimization on each frame of data after the ghost optimization. Optionally, this step may comprise: extracting the ground from each frame of data after the ghost optimization; calculating the normal vector of each frame's data perpendicular to the ground; and forming a constraint edge between the (0,0,1) vector and each normal vector, so that each frame of data after the ghost optimization is leveled through the ground constraint.
That is to say, the image-building ghost removal method provided by the embodiment of the application supports plane constraint optimization: if the point cloud region is known in advance to be close to a plane, plane constraint optimization can be selected, bringing all point cloud data to the same height and preventing ground heave and drift. The method follows the plane constraint approach of the hdl_graph_slam open-source framework: the ground is extracted from each frame of the point cloud, the normal vector perpendicular to the ground is calculated, and a constraint edge is formed between the (0,0,1) vector and each ground normal vector, so that the ground is constrained and the data is leveled.
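The two ingredients of this plane constraint, estimating each frame's ground normal and measuring its deviation from (0,0,1), can be sketched as follows. This assumes a simple PCA-based normal estimate over already-extracted ground points; the real hdl_graph_slam pipeline uses its own ground extraction and edge types, and the names here are illustrative.

```python
import numpy as np

def ground_normal(points):
    """Estimate the unit normal of a roughly planar N x 3 point set via PCA:
    the right-singular vector with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n  # orient the normal toward +z

def plane_residual(normal):
    """Residual of the constraint edge against the (0, 0, 1) reference vector;
    driving this toward zero levels the frame."""
    return normal - np.array([0.0, 0.0, 1.0])
```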
It should be noted that each time an operation is performed, corresponding log information is printed in the log column for subsequent viewing.
Having introduced the front-end operation logic above, the following describes the computation logic of the client:
1) Correction point selection. In this process, the client reads all the selected point cloud files, merges them into one large trajectory file, and loads and displays the trajectory on the current interface. When the user clicks to select an associated frame at the front end, the client acquires the time point information of the associated frame and displays it in the panel list. When the data clipping operation is performed, the point cloud data corresponding to each pair of associated frames is recorded in the client's memory, and the point cloud data corresponding to the first pair is displayed on the screen.
2) Point cloud alignment. As the point cloud data is rotated and translated at the front end, the client multiplies the point cloud by the correction rotation matrix to achieve the rotation effect, adds to or subtracts from the x, y, and z coordinates to achieve the translation effect, and updates the result on the display interface. When the front end finishes aligning the two point clouds, the client accumulates the pose differences before and after alignment to obtain the relative pose, and records it.
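The accumulation step can be sketched as composing each incremental rotate/translate adjustment onto a running transform whose final value is the recorded relative pose. This assumes increments are expressed as 4x4 homogeneous matrices; the function name `accumulate` is illustrative.

```python
import numpy as np

def accumulate(increments):
    """Compose a sequence of incremental 4x4 adjustments into one
    overall relative pose."""
    total = np.eye(4)
    for T in increments:
        total = T @ total  # each new increment applies on top of the others
    return total
```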
3) Ghost removal. This operation is implemented with the nonlinear optimization library G2O. Back-end nonlinear optimization is performed on the poses, and the corrected point cloud map data is output according to the new, optimized poses.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 11 is a schematic structural diagram of an image-building ghost removal apparatus according to an embodiment of the present application. The embodiment of the application provides an apparatus for removing image-building ghosts, which can be integrated in a client. As shown in fig. 11, the image-building ghost removal apparatus 110 includes: a display module 111 and a processing module 112. Wherein:
a display module 111 for displaying a target trajectory including a ghost region; and displaying time point information of the associated frames in response to an operation of connecting a reference frame and a frame to be corrected in a pair of the associated frames by a user on the ghost region.
The processing module 112 is configured to align the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information, so as to obtain a relative pose; and carrying out ghost optimization on the target track according to the relative pose.
The apparatus provided in the embodiment of the present application may be used to execute the method in the embodiment shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
Optionally, the preset point cloud alignment information includes pose information set by a user on an operation panel of the ghost removing tool in real time.
Optionally, when the display module 111 is configured to display time point information of the associated frame, specifically configured to: in an operation panel of the ghost removing tool, time point information of the associated frame is displayed.
In some embodiments, the processing module 112 is specifically configured to, when configured to align the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to the preset point cloud alignment information to obtain the relative pose:
executing the following steps until the point cloud of the frame to be corrected is aligned with the point cloud corresponding to the reference frame, and determining that the correction rotation matrix is a relative pose when the point clouds are aligned:
obtaining a correction rotation matrix according to preset point cloud alignment information;
multiplying the point cloud corresponding to the frame to be corrected by the correction rotation matrix to obtain a corrected point cloud;
displaying the corrected point cloud;
and if the corrected point cloud is not aligned with the point cloud corresponding to the reference frame, responding to the operation of updating the point cloud alignment information by the user, and taking the updated point cloud alignment information as new preset point cloud alignment information.
In some embodiments, the processing module 112, when configured to perform ghost optimization on the object trajectory according to the relative pose, is specifically configured to: acquiring a pose file, wherein the pose file comprises poses of frames corresponding to the target track; obtaining the pose change between adjacent frames according to the pose of each frame; and carrying out ghost optimization on the target track according to the relative pose, the pose of each frame and the pose change.
In some embodiments, the processing module 112, when being configured to acquire the pose file, is specifically configured to: displaying candidate pose files in response to ghost optimization operation of a user on an operation panel of the ghost removal tool; and responding to the selection operation of the pose file in the candidate pose file selected by the user, and acquiring the pose file.
In some embodiments, the processing module 112, when configured to perform ghost optimization on the object trajectory based on the relative pose, the poses of the frames, and the pose changes, is specifically configured to: taking the pose change as an optimization edge, setting the weight of the optimization edge as a preset value, taking the pose of each frame as a G2O optimization node, and carrying out nonlinear optimization on the pose of each frame; and outputting the point cloud map data after ghost image removal according to the optimized pose.
In some embodiments, the processing module 112 may be further configured to: before aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to the preset point cloud alignment information to obtain the relative pose, performing data clipping on the point cloud data corresponding to the target track in response to data clipping operation of a user acting on an operation panel of the ghost removing tool, reserving the associated frame and the point cloud data corresponding to the N frames adjacent to the associated frame in the front and back directions, and triggering the display module 111 to display the point cloud corresponding to the associated frame in the screen.
In some embodiments, if a plurality of pairs of associated frames are included, the display module 111 is specifically configured to, when configured to display a point cloud corresponding to a pair of associated frames on a screen: and displaying the point cloud corresponding to the first pair of associated frames in the screen. The processing module 112 is further configured to: after point clouds corresponding to frames to be corrected in the first pair of associated frames are aligned to point clouds corresponding to reference frames in the first pair of associated frames, responding to the operation of performing point cloud alignment on the next group of associated frames on an operation panel by a user, storing the relative pose corresponding to the first pair of associated frames, displaying the point clouds corresponding to the second pair of associated frames in a screen, executing the step of aligning the point clouds corresponding to the frames to be corrected to the point clouds corresponding to the reference frames according to preset point cloud alignment information aiming at the point clouds corresponding to the second pair of associated frames, and obtaining the relative pose until the point cloud alignment of a plurality of pairs of associated frames is completed.
In some embodiments, the processing module 112 may be further configured to: and loading the point clouds corresponding to the multiple pairs of associated frames through a progress bar program, wherein the loading sequence of the point clouds corresponding to the multiple pairs of associated frames is determined according to the associated time points of the associated frames.
In some embodiments, the display module 111 may also be configured to: and responding to the operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost image area by a user, and displaying an optimized edge corresponding to the associated frame in the screen.
In some embodiments, the display module 111 may also be configured to: after the time point information of the associated frame is displayed, in response to an operation of a user selecting the associated frame, an area where the associated frame is located is displayed at the center of the screen.
In some embodiments, when the display module 111 is configured to display, in response to an operation of a user connecting a reference frame and a frame to be corrected in a pair of associated frames on a ghost region, time point information of the associated frames, specifically: when detecting the operation of a user for connecting a reference frame and a frame to be corrected in a pair of associated frames on a ghost image area, displaying information whether to confirm loading; and if the instruction that the user confirms loading is detected, displaying the time point information of the associated frame.
Optionally, a merge control for merging the traces is included in the operation panel of the ghost removal tool. At this time, when the display module is used for displaying the target trajectory including the ghost region, the display module is specifically configured to: when an instruction of clicking the merging control by a user is detected, displaying a candidate point cloud file; when an instruction of a user for selecting a target point cloud file in candidate point cloud files is detected, a plurality of target point cloud files are obtained, and the target point cloud files contain data corresponding to partial tracks in a target track; and merging the tracks of the plurality of target point cloud files and displaying the target tracks.
Further, the processing module 112 may be further configured to: and after carrying out ghost optimization on the target track according to the relative pose, carrying out plane constraint optimization on each frame of data after the ghost optimization.
Further, when the processing module 112 is configured to perform plane constraint optimization on the ghosting-optimized frame data, specifically: extracting the ground from each frame of data after the double image optimization; and calculating a normal vector of each frame data vertical to the ground, and using a constraint edge formed between the (0,0,1) vector and each normal vector to perform horizontal optimization on each frame data after the ghosting optimization through the constraint ground.
Optionally, an operation panel of the ghost removing tool includes a virtual control for plane optimization, and when the virtual control is of a selected attribute, plane constraint optimization is performed on each frame of data after ghost optimization.
Optionally, the operation panel of the ghost removal tool includes a delete control for deleting the associated frame and/or an add control for adding a new associated frame.
It should be noted that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the processing module may be a processing element separately set up, or may be implemented by being integrated in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and a function of the processing module may be called and executed by a processing element of the apparatus. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application occur, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device is for example a client as described above. As shown in fig. 12, the electronic device may include: a processor 81, a memory 82, a communication interface 83, and a system bus 84. The memory 82 and the communication interface 83 are connected to the processor 81 through the system bus 84 and complete communication with each other, the memory 82 is used for storing computer programs, the communication interface 83 is used for communicating with other devices, and the processor 81 is used for calling the computer programs in the memory to execute the scheme of the embodiment of the ghosting removal method as described above.
The system bus 84 mentioned in fig. 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus 84 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 83 is used to enable communication between the database access device and other devices (e.g., clients, read-write libraries, and read-only libraries).
The Memory 82 may include a Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor 81 may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), and the like; but also a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program runs on an electronic device, the electronic device is enabled to execute the method for removing image ghosts in any of the above method embodiments.
The embodiment of the application further provides a chip for running the instruction, and the chip is used for executing the image building ghost removing method in any method embodiment.
The present application further provides a computer program product, which includes a computer program stored in a computer-readable storage medium, from which the computer program can be read by at least one processor, and when the computer program is executed by the at least one processor, the at least one processor can implement the method for removing image ghosts as described in any of the above method embodiments.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division". "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application. In the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

1. A method for removing image-building ghosting, characterized by comprising:
displaying a target trajectory containing a ghost region;
in response to a user operation of connecting, on the ghost region, a reference frame and a frame to be corrected in a pair of associated frames, displaying time point information of the associated frames;
aligning a point cloud corresponding to the frame to be corrected to a point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose; and
performing ghost optimization on the target trajectory according to the relative pose.
2. The method of claim 1, wherein the preset point cloud alignment information comprises pose information set by a user in real time on an operation panel of a ghost removal tool.
3. The method according to claim 1, wherein the displaying the time point information of the associated frame comprises: displaying time point information of the associated frame in an operation panel of a ghost removal tool.
4. The method according to claim 1, wherein aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose comprises:
executing the following steps until the point cloud of the frame to be corrected is aligned with the point cloud corresponding to the reference frame, and determining the correction rotation matrix at the time of alignment as the relative pose:
obtaining a correction rotation matrix according to the preset point cloud alignment information;
multiplying the point cloud corresponding to the frame to be corrected by the correction rotation matrix to obtain a corrected point cloud;
displaying the corrected point cloud;
and if the corrected point cloud is not aligned with the point cloud corresponding to the reference frame, responding to the operation of updating the point cloud alignment information by a user, and taking the updated point cloud alignment information as new preset point cloud alignment information.
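As an illustrative sketch only (not part of the claimed subject matter), the correction step of claim 4, namely multiplying the point cloud of the frame to be corrected by a user-supplied correction rotation matrix, can be expressed as follows; the function and variable names are assumptions for illustration:

```python
import numpy as np

def apply_correction(points, correction):
    """Apply a 3x3 correction rotation matrix to an N x 3 point cloud."""
    # Rotating each point p is equivalent to the row-wise product points @ R^T.
    return points @ correction.T

# Example: a 90-degree rotation about the z-axis as the user's correction.
theta = np.pi / 2
correction = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
cloud = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
corrected = apply_correction(cloud, correction)
```

In an interactive tool, this multiplication would be re-applied each time the user updates the alignment information, and the corrected cloud redisplayed, until the operator judges the two clouds aligned.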
5. The method of claim 1, wherein the ghost optimization of the target trajectory according to the relative pose comprises:
acquiring a pose file, wherein the pose file comprises poses of frames corresponding to the target trajectory;
obtaining the pose change between adjacent frames according to the pose of each frame;
and performing ghost optimization on the target trajectory according to the relative pose, the pose of each frame, and the pose change.
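As an illustrative sketch (the names are assumptions, not from the patent), the pose change between adjacent frames described in claim 5 is conventionally the relative transform between consecutive 4x4 homogeneous poses:

```python
import numpy as np

def relative_pose(T_prev, T_next):
    """Pose change between adjacent frames: T_prev^{-1} * T_next."""
    return np.linalg.inv(T_prev) @ T_next

# Two frame poses as 4x4 homogeneous transforms (a pure translation here).
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, 3] = [1.0, 2.0, 0.0]  # frame 1 is translated by (1, 2, 0)
delta = relative_pose(T0, T1)  # translation part of delta is (1, 2, 0)
```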
6. The method according to claim 5, wherein the acquiring a pose file comprises:
displaying candidate pose files in response to ghost optimization operation of a user on an operation panel of the ghost removal tool;
and responding to the selection operation of the candidate pose file selected by the user, and acquiring the pose file.
7. The method of claim 5, wherein the ghost optimization of the target trajectory based on the relative pose, the poses of the frames, and the pose changes comprises:
taking the pose change as an optimization edge, setting the weight of the optimization edge as a preset value, taking the pose of each frame as a G2O optimization node, and performing nonlinear optimization on the pose of each frame;
and outputting the point cloud map data after ghost image removal according to the optimized pose.
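Claim 7 performs nonlinear optimization with the frame poses as G2O nodes and the pose changes as weighted edges. The following is a deliberately simplified one-dimensional stand-in (using scipy rather than G2O, with illustrative numbers) that shows how a heavily weighted loop-closure edge pulls an odometry chain toward consistency:

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified 1D pose graph: nodes are frame positions, edges are
# measured pose changes; a loop-closure edge ties frame 3 back to frame 0.
odometry = [1.0, 1.0, 1.0]  # measured change between adjacent frames
loop = (0, 3, 2.7)          # frames 0 and 3 should differ by 2.7
odo_weight, loop_weight = 1.0, 10.0

def residuals(x):
    res = [x[0]]  # anchor the first pose at the origin
    for i, d in enumerate(odometry):
        res.append(odo_weight * (x[i + 1] - x[i] - d))
    i, j, d = loop
    res.append(loop_weight * (x[j] - x[i] - d))
    return res

x0 = np.cumsum([0.0] + odometry)  # initial guess from odometry alone
optimized = least_squares(residuals, x0).x
```

Because the loop-closure edge carries the larger weight, the optimized end-to-end displacement lands close to 2.7 instead of the raw odometry sum of 3.0, which is the mechanism that collapses a ghosted trajectory.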
8. The method according to any one of claims 1 to 7, wherein before aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain the relative pose, the method further comprises:
in response to a data clipping operation performed by a user on an operation panel of the ghost removal tool, performing data clipping on the point cloud data corresponding to the target trajectory, retaining the point cloud data corresponding to the associated frames and to the N frames preceding and following them, and displaying the point clouds corresponding to a pair of associated frames on a screen.
9. The method of claim 8, wherein, when a plurality of pairs of associated frames are included, displaying the point clouds corresponding to a pair of associated frames on a screen comprises: displaying the point clouds corresponding to a first pair of associated frames on the screen;
the method further comprises: after the point cloud corresponding to the frame to be corrected in the first pair of associated frames is aligned to the point cloud corresponding to the reference frame in the first pair of associated frames, in response to an operation of the user performing point cloud alignment on a next pair of associated frames on the operation panel, storing the relative pose corresponding to the first pair of associated frames, displaying the point clouds corresponding to a second pair of associated frames on the screen, and executing, for the point clouds corresponding to the second pair of associated frames, the step of aligning the point cloud corresponding to the frame to be corrected to the point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose, until point cloud alignment of the plurality of pairs of associated frames is completed.
10. The method of claim 9, further comprising:
and loading the point clouds corresponding to the multiple pairs of associated frames through a progress bar program, wherein the loading sequence of the point clouds corresponding to the multiple pairs of associated frames is determined according to the associated time points of the associated frames.
11. The method of any one of claims 1 to 7, further comprising:
and in response to an operation of the user connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost region, displaying an optimization edge corresponding to the associated frames on a screen.
12. The method according to claim 11, wherein after displaying the time point information of the associated frame, further comprising:
and responding to the operation of selecting the associated frame by the user, and displaying the area where the associated frame is located in the center of the screen.
13. The method according to any one of claims 1 to 7, wherein the displaying, in response to a user operation of connecting a reference frame and a frame to be corrected in a pair of associated frames on the ghost region, time point information of the associated frames comprises:
when the operation of the user connecting the reference frame and the frame to be corrected in the pair of associated frames on the ghost region is detected, displaying a prompt asking whether to confirm loading;
and if an instruction of the user confirming loading is detected, displaying the time point information of the associated frames.
14. The method of any of claims 1 to 7, wherein an operation panel of the ghost removal tool includes a merge control for merging trajectories, and displaying the target trajectory containing the ghost region comprises:
when an instruction of the user clicking the merge control is detected, displaying candidate point cloud files;
when an instruction of the user selecting target point cloud files from the candidate point cloud files is detected, acquiring a plurality of target point cloud files, wherein each target point cloud file comprises data corresponding to a partial trajectory of the target trajectory;
and merging the trajectories of the plurality of target point cloud files and displaying the target trajectory.
15. The method of any of claims 1 to 7, further comprising, after the ghost optimization of the target trajectory according to the relative pose:
performing plane constraint optimization on each frame of data after the ghost optimization.
16. The method of claim 15, wherein the performing plane constraint optimization on each frame of data after the ghost optimization comprises:
extracting the ground from each frame of data after the ghost optimization;
and calculating, for each frame of data, a normal vector perpendicular to the ground, and forming a constraint edge between the (0, 0, 1) vector and each normal vector, so as to horizontally level each frame of data after the ghost optimization by constraining the ground.
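As an illustrative sketch of the plane constraint of claim 16 (the names and the closed-form leveling are assumptions; the claim itself forms constraint edges inside the graph optimization rather than applying a direct rotation), the ground normal can be estimated from ground points and rotated onto the (0, 0, 1) vector:

```python
import numpy as np

def ground_normal(points):
    """Estimate the unit normal of a roughly planar point set via SVD."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n if n[2] >= 0 else -n  # orient the normal upward

def align_to_vertical(n, up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix taking the unit normal n onto the up vector (Rodrigues)."""
    v = np.cross(n, up)
    c = np.dot(n, up)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Tilted ground patch lying on the plane z = 0.1 * x.
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]], dtype=float)
pts = np.column_stack([xy, 0.1 * xy[:, 0]])
R = align_to_vertical(ground_normal(pts))
leveled = pts @ R.T  # after leveling, the patch lies in the z = 0 plane
```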
17. The method of claim 15, wherein an operation panel of the ghost removal tool comprises a virtual control for plane optimization, and when the virtual control is in a selected state, plane constraint optimization is performed on each frame of data after the ghost optimization.
18. The method of any of claims 1 to 7, wherein an operating panel of a ghost removal tool contains a delete control for deleting the associated frame and an add control for adding a new associated frame.
19. A device for removing image-building ghosting, characterized by comprising:
a display module, configured to display a target trajectory containing a ghost region, and to display, in response to a user operation of connecting, on the ghost region, a reference frame and a frame to be corrected in a pair of associated frames, time point information of the associated frames;
a processing module, configured to align a point cloud corresponding to the frame to be corrected to a point cloud corresponding to the reference frame according to preset point cloud alignment information to obtain a relative pose, and to perform ghost optimization on the target trajectory according to the relative pose.
20. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 18 when executing the computer program.
21. A computer-readable storage medium, in which a computer program is stored which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 1 to 18.
22. A computer program product comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the electronic device to perform the method according to any of claims 1 to 18.
CN202110179342.4A 2021-02-09 2021-02-09 Method and device for removing image-building ghost and storage medium Pending CN112837241A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110179342.4A CN112837241A (en) 2021-02-09 2021-02-09 Method and device for removing image-building ghost and storage medium

Publications (1)

Publication Number Publication Date
CN112837241A (en)

Family

ID=75933216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110179342.4A Pending CN112837241A (en) 2021-02-09 2021-02-09 Method and device for removing image-building ghost and storage medium

Country Status (1)

Country Link
CN (1) CN112837241A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260988A (en) * 2015-09-09 2016-01-20 百度在线网络技术(北京)有限公司 High-precision map data processing method and high-precision map data processing device
CN108765487A (en) * 2018-06-04 2018-11-06 百度在线网络技术(北京)有限公司 Rebuild method, apparatus, equipment and the computer readable storage medium of three-dimensional scenic
WO2019122939A1 (en) * 2017-12-21 2019-06-27 University of Zagreb, Faculty of Electrical Engineering and Computing Interactive computer-implemented method, graphical user interface and computer program product for building a high-accuracy environment map
CN110533587A (en) * 2019-07-03 2019-12-03 浙江工业大学 A kind of SLAM method of view-based access control model prior information and map recovery
CN110766716A (en) * 2019-09-10 2020-02-07 中国科学院深圳先进技术研究院 Method and system for acquiring information of space unknown moving target
CN111179435A (en) * 2019-12-24 2020-05-19 Oppo广东移动通信有限公司 Augmented reality processing method, device and system, storage medium and electronic equipment
CN111177295A (en) * 2019-12-28 2020-05-19 深圳市优必选科技股份有限公司 Image-building ghost eliminating method and device, computer-readable storage medium and robot
CN111539999A (en) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Point cloud registration-based 3D map construction method and device, computer equipment and storage medium
CN111583369A (en) * 2020-04-21 2020-08-25 天津大学 Laser SLAM method based on facial line angular point feature extraction
CN111795704A (en) * 2020-06-30 2020-10-20 杭州海康机器人技术有限公司 Method and device for constructing visual point cloud map
CN112052300A (en) * 2020-08-05 2020-12-08 浙江大华技术股份有限公司 SLAM back-end processing method, device and computer readable storage medium
CN112100298A (en) * 2020-08-17 2020-12-18 深圳市优必选科技股份有限公司 Drawing establishing method and device, computer readable storage medium and robot
CN112258600A (en) * 2020-10-19 2021-01-22 浙江大学 Simultaneous positioning and map construction method based on vision and laser radar

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439644A (en) * 2022-08-19 2022-12-06 广东领慧建筑科技有限公司 Similar point cloud data alignment method
CN115439644B (en) * 2022-08-19 2023-08-08 广东领慧数字空间科技有限公司 Similar point cloud data alignment method
CN117132728A (en) * 2023-10-26 2023-11-28 毫末智行科技有限公司 Method and device for constructing map, electronic equipment and storage medium
CN117132728B (en) * 2023-10-26 2024-02-23 毫末智行科技有限公司 Method and device for constructing map, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN108369741B (en) Method and system for registration data
EP2399239B1 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
US11408740B2 (en) Map data update apparatus, map data update method, and computer readable medium
WO2022188094A1 (en) Point cloud matching method and apparatus, navigation method and device, positioning method, and laser radar
CN108931795B (en) Positioning equipment track optimization and boundary extraction method and device
US9020187B2 (en) Planar mapping and tracking for mobile devices
EP3112802B1 (en) Road feature measurement apparatus and road feature measuring method
JP6842039B2 (en) Camera position and orientation estimator, method and program
JP2015181042A (en) detection and tracking of moving objects
US20210090266A1 (en) Method and device for labeling point of interest
CN113916243A (en) Vehicle positioning method, device, equipment and storage medium for target scene area
CN112837241A (en) Method and device for removing image-building ghost and storage medium
CN110608746B (en) Method and device for determining the position of a motor vehicle
US20230260216A1 (en) Point cloud annotation device, method, and program
CN114089330B (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
JP2023533625A (en) High-definition map creation method, apparatus, device, and computer program
KR20210089602A (en) Method and device for controlling vehicle, and vehicle
WO2021212477A1 (en) Point cloud data correction method, and related device
JP2009080660A (en) Object region extraction processing program, object region extraction device, and object region extraction method
CN113012197A (en) Binocular vision odometer positioning method suitable for dynamic traffic scene
Zhang et al. Lidar-guided stereo matching with a spatial consistency constraint
CN114140592A (en) High-precision map generation method, device, equipment, medium and automatic driving vehicle
Shi et al. Fusion of a panoramic camera and 2D laser scanner data for constrained bundle adjustment in GPS-denied environments
Wang et al. Improving RGB-D SLAM accuracy in dynamic environments based on semantic and geometric constraints

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination