CN114565648A - Method, device and equipment for evaluating reconstructed parking space and storage medium

Method, device and equipment for evaluating reconstructed parking space and storage medium

Info

Publication number
CN114565648A
Authority
CN
China
Prior art keywords
point, cloud, reconstructed, parking space, parking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210186241.4A
Other languages
Chinese (zh)
Inventor
龙淇伟
赵明
刘余钱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202210186241.4A priority Critical patent/CN114565648A/en
Publication of CN114565648A publication Critical patent/CN114565648A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/337 - Image analysis; determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches (G - PHYSICS; G06 - COMPUTING, CALCULATING OR COUNTING; G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06F 18/22 - Pattern recognition; analysing; matching criteria, e.g. proximity measures (G06F - ELECTRIC DIGITAL DATA PROCESSING)
    • G06T 7/12 - Image analysis; segmentation and edge detection; edge-based segmentation
    • G06T 2207/10028 - Indexing scheme for image analysis or image enhancement; image acquisition modality: range image, depth image, 3D point clouds
    • G06T 2207/30264 - Indexing scheme for image analysis or image enhancement; subject of image: vehicle exterior, vicinity of vehicle, parking

Abstract

Embodiments of the present application provide a method, an apparatus, a device and a storage medium for evaluating a reconstructed parking space. The method includes: determining a reconstructed parking-space point cloud obtained by reconstructing preset parking spaces, and a ground-truth parking-space point cloud of the preset parking spaces; obtaining a first matching relationship between points of the reconstructed point cloud and points of the ground-truth point cloud that belong to the same preset parking space; determining, based on the first matching relationship, a second matching relationship between each point in the reconstructed point cloud and each point in the ground-truth point cloud; registering the reconstructed point cloud and the ground-truth point cloud based on the second matching relationship to obtain a registration result; and determining, based on the registration result, the reconstruction accuracy of the reconstructed point cloud.

Description

Method, device and equipment for evaluating reconstructed parking space and storage medium
Technical Field
The embodiments of the present application relate to the technical field of data processing and concern, but are not limited to, a method, an apparatus, a device and a storage medium for evaluating a reconstructed parking space.
Background
In a parking-lot scenario, if vision sensors are used for parking-space reconstruction, the reconstruction output of a Red-Green-Blue-Depth (RGB-D) camera cannot serve as the ground truth for evaluation because the scene is too large. If a lidar reconstruction result is used as the ground truth instead, very high accuracy is demanded of the lidar pipeline, building an accurate lidar reconstruction takes considerable manpower, and whether it is accurate often depends on experience, all of which makes practical evaluation difficult.
Disclosure of Invention
Embodiments of the present application provide a technical solution for evaluating a reconstructed parking space.
The technical solution of the embodiments of the present application is implemented as follows:
An embodiment of the present application provides a method for evaluating a reconstructed parking space, the method including:
determining a reconstructed parking-space point cloud obtained by reconstructing preset parking spaces, and a ground-truth parking-space point cloud of the preset parking spaces;
obtaining a first matching relationship between points of the reconstructed point cloud and points of the ground-truth point cloud that belong to the same preset parking space;
determining, based on the first matching relationship, a second matching relationship between each point in the reconstructed point cloud and each point in the ground-truth point cloud;
registering the reconstructed point cloud and the ground-truth point cloud based on the second matching relationship to obtain a registration result;
and determining, based on the registration result, the reconstruction accuracy of the reconstructed point cloud.
An embodiment of the present application provides an apparatus for evaluating a reconstructed parking space, the apparatus including:
a first determining module, configured to determine a reconstructed parking-space point cloud obtained by reconstructing preset parking spaces and a ground-truth parking-space point cloud of the preset parking spaces;
a first obtaining module, configured to obtain a first matching relationship between points of the reconstructed point cloud and points of the ground-truth point cloud that belong to the same preset parking space;
a second determining module, configured to determine, based on the first matching relationship, a second matching relationship between each point in the reconstructed point cloud and each point in the ground-truth point cloud;
a first registration module, configured to register the reconstructed point cloud and the ground-truth point cloud based on the second matching relationship to obtain a registration result;
and a third determining module, configured to determine, based on the registration result, the reconstruction accuracy of the reconstructed point cloud.
In some embodiments, the first determining module includes:
a first obtaining submodule, configured to obtain a ground-truth engineering image of the preset parking spaces;
a first conversion submodule, configured to convert the ground-truth engineering image into a binary image;
and a first extraction submodule, configured to extract contours from the binary image to obtain the ground-truth parking-space point cloud.
In some embodiments, the first extraction submodule includes:
a first scanning unit, configured to scan the pixels of the binary image in a preset scanning order;
a first determining unit, configured to determine, in response to scanning a candidate pixel whose gray value is not 0, the target type of the parking-space boundary to which the candidate pixel belongs;
and a first tracking unit, configured to perform boundary tracking in the binary image based on the target type to obtain the ground-truth parking-space point cloud.
In some embodiments, the first determining unit includes:
a first determining subunit, configured to determine, in the scan line on which the candidate pixel lies, the pixel preceding the candidate pixel and the pixel following it;
and a second determining subunit, configured to determine the target type of the parking-space boundary to which the candidate pixel belongs based on the gray values of the preceding and following pixels.
In some embodiments, the first tracking unit is further configured to: when the target type of the candidate pixel is the boundary-point type, determine, starting from the candidate pixel and following a first tracking direction, the first background pixel with gray value 0 in the neighborhood of the candidate pixel; starting from that first background pixel and following the first tracking direction, search the neighborhood of the candidate pixel for first foreground pixels whose gray value is not 0; following a second tracking direction, different from the first, and starting from the background pixel next to the first background pixel, search the neighborhood of the candidate pixel for second foreground pixels whose gray value is not 0; and determine the ground-truth parking-space point cloud based on the first and second foreground pixels.
In some embodiments, the first tracking unit is further configured to: determine a target boundary region of each parking space in the binary image based on the first and second foreground pixels; determine size information and position information of each parking space based on its target boundary region and its preset number; and obtain the ground-truth parking-space point cloud from the size and position information of each parking space.
In some embodiments, the apparatus further includes:
a fourth determining module, configured to determine the outer contour of each boundary region when at least two boundary regions are obtained for a parking space;
a fifth determining module, configured to determine the degree of overlap between the outer contours of the at least two boundary regions;
and a sixth determining module, configured to, in response to the overlap degree being greater than a preset threshold, take the boundary region whose outer contour has the largest area as the target boundary region of that parking space.
In some embodiments, the second determining module includes:
a first de-centroid submodule, configured to remove the centroid from the reconstructed point cloud and from the ground-truth point cloud, respectively, to obtain a de-centroided reconstructed point cloud and a de-centroided ground-truth point cloud;
a first search submodule, configured to search, for any point of the de-centroided ground-truth point cloud, the de-centroided reconstructed point cloud for the target point closest to that point;
and a first establishing submodule, configured to establish the second matching relationship between that point and the target point.
In some embodiments, the first registration module includes:
a first determining submodule, configured to determine, based on the second matching relationship, a first conversion parameter of the reconstructed point cloud relative to the ground-truth point cloud;
and a first registration submodule, configured to register the reconstructed point cloud and the ground-truth point cloud based on the first conversion parameter to obtain the registration result.
In some embodiments, the first conversion parameter includes a rotation parameter and a translation parameter, and the first determining submodule includes:
a first marking unit, configured to mark the parking spaces in the de-centroided reconstructed point cloud according to the second matching relationship to obtain a marked reconstructed point cloud;
a first fusion unit, configured to fuse the marked reconstructed point cloud with the de-centroided ground-truth point cloud to obtain a fusion result;
a first decomposition unit, configured to decompose the fusion result to obtain the rotation parameter;
and a second determining unit, configured to determine the translation parameter of the reconstructed point cloud relative to the ground-truth point cloud based on the rotation parameter, the centroid of the reconstructed point cloud and the centroid of the ground-truth point cloud.
In some embodiments, the second determining unit includes:
a first rotation subunit, configured to rotate the centroid of the reconstructed point cloud by the rotation parameter to obtain a rotated centroid;
a third determining subunit, configured to determine a displacement scale coefficient between the centroid of the ground-truth point cloud and the rotated centroid;
a first adjusting subunit, configured to adjust the rotated centroid by the displacement scale coefficient to obtain the adjusted centroid of the reconstructed point cloud;
and a fourth determining subunit, configured to determine the translation parameter from the difference between the centroid of the ground-truth point cloud and the adjusted centroid.
In some embodiments, the first registration submodule includes:
a first conversion unit, configured to rotate and translate the reconstructed point cloud by the rotation parameter and the translation parameter, respectively, to obtain a converted reconstructed point cloud;
a third determining unit, configured to determine a second conversion parameter between the converted reconstructed point cloud and the ground-truth point cloud;
a fourth determining unit, configured to determine, based on the second conversion parameter, the target reconstructed points and target ground-truth points of the converted reconstructed point cloud and the ground-truth point cloud that belong to the same preset parking space;
and a first matching unit, configured to match the target reconstructed points with the target ground-truth points to obtain the registration result.
In some embodiments, the third determining module includes:
a second determining submodule, configured to determine, based on the registration result, the overlap area between the parts of the reconstructed point cloud and the ground-truth point cloud that represent the same parking space;
a third determining submodule, configured to determine the overlap degree of that parking space between the two point clouds based on the overlap area and the ground-truth area of the space;
a fourth determining submodule, configured to determine the minimum distance between the reconstructed point cloud and the ground-truth point cloud for the same parking space;
and a fifth determining submodule, configured to determine the accuracy from the overlap degree and the minimum distance.
An embodiment of the present application provides a computer storage medium storing computer-executable instructions which, when executed, implement the above method for evaluating a reconstructed parking space.
An embodiment of the present application provides a computer device including a memory and a processor, the memory storing computer-executable instructions and the processor implementing the above method for evaluating a reconstructed parking space when executing the instructions stored in the memory.
Embodiments of the present application provide a method, an apparatus, a device and a storage medium for evaluating a reconstructed parking space. First, the reconstructed parking-space point cloud and the ground-truth parking-space point cloud of the preset parking spaces are determined; obtaining the ground-truth point cloud makes an effective evaluation of the reconstructed point cloud possible. Second, a first matching relationship is obtained between points of the two clouds that belong to the same preset parking space, achieving a coarse registration between them. Third, a second matching relationship between each point in the reconstructed point cloud and each point in the ground-truth point cloud is determined from the first matching relationship, and the two clouds are registered based on the second matching relationship to obtain a registration result; the two rounds of matching achieve an accurate registration, so the registration result is more precise. Finally, the accuracy with which the reconstructed point cloud reconstructs the preset parking spaces is determined from the registration result. The registration result obtained through this accurate registration thus allows the reconstruction accuracy to be evaluated with simple operations and high precision.
Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. The drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The drawings depict only some embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive further related drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an implementation flow of a reconstructed parking space evaluation method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of another implementation of the reconstructed parking space evaluation method according to the embodiment of the present application;
fig. 3 is a schematic flow chart of another implementation of the method for evaluating a reconstructed parking space according to the embodiment of the present application;
fig. 4 is a schematic view of an application scenario of the reconstructed parking space evaluation method provided in the embodiment of the present application;
fig. 5 is a schematic view of another application scenario of the reconstructed parking space evaluation method according to the embodiment of the present application;
fig. 6 is a schematic view of another application scenario of the method for evaluating a reconstructed parking space according to the embodiment of the present application;
fig. 7 is a schematic view of another application scenario of the reconstructed parking space evaluation method according to the embodiment of the present application;
fig. 8 is a schematic view of another application scenario of the reconstructed parking space evaluation method according to the embodiment of the present application;
fig. 9 is a schematic structural diagram of the apparatus for evaluating a reconstructed parking space according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the application are described in further detail below with reference to the accompanying drawings. The following examples illustrate the present application but are not intended to limit its scope.
In the following description, "some embodiments" describes a subset of all possible embodiments; it is understood, however, that "some embodiments" may denote the same subset or different subsets of all possible embodiments, and that embodiments may be combined with one another where no conflict arises.
In the following description, the terms "first/second/third" merely distinguish similar objects and do not imply a particular order; where permitted, the specific order or sequence may be interchanged, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein serves only to describe the embodiments of the present application and is not intended to limit the application.
Before the embodiments of the present application are described in further detail, the terms used in the embodiments are explained as follows.
(1) Raster scanning: when a raster-scan display shows a pattern, the electron beam sweeps along fixed scan lines in a predetermined order. The beam sweeps one horizontal line from the upper-left corner of the screen to the right, quickly flies back to a point slightly lower on the left, sweeps the second horizontal line, and continues along this fixed path and order until the last horizontal line, completing the scan of the entire screen. In the embodiments of the present application, raster scanning means scanning the pixels one by one in this manner.
(2) Neighborhood: in a digital image, neighborhoods are divided into 4-neighborhoods and 8-neighborhoods. The 4-neighborhood of a point (x, y) consists of the four points above, below, to the left and to the right of it; the 8-neighborhood adds the four diagonal points to the upper-left, upper-right, lower-left and lower-right. If point p is among the 8 points surrounding point q, p lies within the 8-neighborhood of q.
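To make the neighborhood definitions concrete, the following minimal Python sketch enumerates both neighborhoods (all names are illustrative, not from the patent):

```python
# Offsets are (dx, dy) displacements from a pixel (x, y); y grows downward.
N4_OFFSETS = [(0, -1), (0, 1), (-1, 0), (1, 0)]                  # up, down, left, right
N8_OFFSETS = N4_OFFSETS + [(-1, -1), (1, -1), (-1, 1), (1, 1)]   # plus the four diagonals

def neighbors(x, y, offsets, width, height):
    """Yield the in-bounds neighbor coordinates of pixel (x, y)."""
    for dx, dy in offsets:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            yield (nx, ny)

# p lies within the 8-neighborhood of q iff p is among q's 8 surrounding points:
q, p = (5, 5), (6, 6)
print(p in set(neighbors(q[0], q[1], N8_OFFSETS, 10, 10)))  # True
```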
An exemplary application of the device for evaluating reconstructed parking spaces provided by the embodiments of the present application is described below. The device may be implemented as various types of user terminals with an image-capturing function, such as notebook computers, tablet computers, desktop computers, cameras and mobile devices (e.g., personal digital assistants, dedicated messaging devices and portable game devices), or it may be implemented as a server. Exemplary applications are described below for the device implemented as a terminal or a server.
The method may be applied to a computer device, and the functions it implements may be realized by a processor of the computer device calling program code; the program code may be stored in a computer storage medium, so the computer device includes at least a processor and a storage medium.
An embodiment of the present application provides a method for evaluating a reconstructed parking space, shown in fig. 1 and described with reference to the steps shown there:
Step S101: determine a reconstructed parking-space point cloud obtained by reconstructing preset parking spaces, and a ground-truth parking-space point cloud of the preset parking spaces.
In some embodiments, the preset parking spaces include at least one parking space and may be multiple spaces of any garage: spaces of a parking lot in a public place, or spaces of a private garage, for example an underground residential garage, an open-air garage or a ground-level garage. The reconstructed point cloud of the preset parking spaces can be obtained by three-dimensionally reconstructing the parking spaces of the scene in which they are located; it reconstructs the size and position of the preset parking spaces. The ground-truth point cloud carries their true size and true position.
In some possible implementations, the reconstructed point cloud of the preset parking spaces is determined through three-dimensional reconstruction in computer vision, which may proceed as follows: first, acquire multiple frames of images covering the preset parking spaces and determine the parameters of the image acquisition device (e.g., a multi-view camera); second, determine the three-dimensional information of each parking space from the parallax of the multi-view camera and identify pixels in the frames, so that the pixels are matched with the three-dimensional information; third, perform three-dimensional reconstruction according to the matching result, obtaining the reconstructed point cloud of the preset parking spaces.
In some possible implementations, the three-dimensional reconstruction may also be achieved by estimating the pose of the camera serving as the image acquisition device (e.g., its position and viewing direction); the camera pose can be obtained by detecting and matching features across the images the camera captures, from which the three-dimensional point cloud is then derived.
In some possible implementations, the ground-truth point cloud of the preset parking spaces may be obtained from a Computer Aided Design (CAD) drawing of the parking spaces, which gives the accurate position of each space. In some possible implementations, the CAD drawing of the preset parking spaces is converted into a binary image, the binary image is scanned and contours are extracted, yielding a ground-truth parking-space point cloud that truly represents the preset parking spaces.
Step S102: obtain a first matching relationship between points of the reconstructed point cloud and points of the ground-truth point cloud that belong to the same preset parking space.
In some embodiments, step S102 may obtain a manually established first matching relationship: the user manually selects the reconstructed points and ground-truth points that belong to the same preset parking space and marks them to establish the relationship, for example by labelling both sets of points with the number of that parking space. Reconstructed and ground-truth points linked by the first matching relationship therefore represent the same preset parking space.
In some possible implementations, the first matching relationship may be established manually for at least one preset parking space. For example, for four different preset parking spaces, the points belonging to each of the four spaces are located in the reconstructed point cloud and in the ground-truth point cloud respectively, and a first matching relationship is established between the reconstructed and ground-truth points of the same space; this yields the first matching relationship between the reconstructed point cloud and the ground-truth point cloud.
Step S103: determine, based on the first matching relationship, a second matching relationship between each point in the reconstructed point cloud and each point in the ground-truth point cloud.
In some embodiments, after the first matching relationship between the two point clouds has been established manually for several preset parking spaces, then, for each point of the ground-truth point cloud, the point closest to it is searched for in the reconstructed point cloud, and a second matching relationship is established between the two. The second matching relationship links the points with the shortest distance between the reconstructed and ground-truth point clouds: if a second matching relationship exists between a point of the ground-truth point cloud and a point of the reconstructed point cloud, those two points are closest to each other.
In some possible implementations, a preliminary rotation of the reconstructed point cloud relative to the ground-truth point cloud can first be estimated from the first matching relationship; after the reconstructed point cloud has been adjusted according to this preliminary rotation, the target point closest to any given point of the ground-truth point cloud is determined in the adjusted reconstructed point cloud, and the second matching relationship is established between that point and the target point.
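A minimal sketch of this closest-point search, assuming the point clouds are given as N x 2 numpy arrays and indexing the reconstructed cloud with a KD-tree (function and variable names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def second_matching(truth_pts: np.ndarray, recon_pts: np.ndarray) -> np.ndarray:
    """For every point of the ground-truth cloud, return the index of the
    closest point (the 'target point') in the reconstructed cloud."""
    tree = cKDTree(recon_pts)               # index the reconstructed cloud once
    _, target_idx = tree.query(truth_pts)   # nearest neighbor per truth point
    return target_idx                       # target_idx[i] pairs with truth_pts[i]

truth = np.array([[0.0, 0.0], [1.0, 1.0]])
recon = np.array([[0.1, -0.1], [0.9, 1.2], [5.0, 5.0]])
print(second_matching(truth, recon))        # [0 1]
```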
In some possible implementations, the second matching relationship between each point in the reconstructed point cloud and each point in the ground-truth point cloud may be established by Iterative Closest Point (ICP) matching. The principle of ICP matching is as follows:
ICP matching is essentially an optimal registration method based on least squares. It repeatedly selects corresponding point pairs and computes the optimal rigid-body transformation until the convergence requirement for correct registration is met. The basic principle of ICP matching is: under certain constraints, find the nearest point pairs (p_i, q_i) between the reconstructed point cloud P and the ground-truth point cloud Q, then compute the optimal rotation R and translation T that minimize the error function E(R, T):

E(R, T) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - (R p_i + T) \right\|^2

where n is the number of matched point pairs, p_i is a point of the reconstructed point cloud P, q_i is the point of the ground-truth point cloud Q closest to p_i, R is the rotation matrix and T is the translation vector.
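A compact sketch of the ICP loop just described, recovering R and T with the usual de-centroid-then-SVD (Kabsch) step; this is the standard textbook formulation, not code taken from the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P: np.ndarray, Q: np.ndarray, iters: int = 50, tol: float = 1e-8):
    """Align reconstructed cloud P (n x d) to ground-truth cloud Q (m x d),
    minimizing E(R, T) = (1/n) * sum_i ||q_i - (R p_i + T)||^2."""
    d = P.shape[1]
    R, T = np.eye(d), np.zeros(d)
    tree, prev_err = cKDTree(Q), np.inf
    for _ in range(iters):
        dist, idx = tree.query(P @ R.T + T)   # re-select corresponding point pairs
        Qm = Q[idx]
        # De-centroid both sets, recover R from the SVD of the cross-covariance,
        # then T from the two centroids.
        cp, cq = P.mean(axis=0), Qm.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Qm - cq))
        D = np.eye(d)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ D @ U.T
        T = cq - R @ cp
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:         # converged: error stopped improving
            break
        prev_err = err
    return R, T
```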
Step S104: register the reconstructed point cloud and the ground-truth point cloud based on the second matching relationship to obtain a registration result.
In some embodiments, after the reconstructed point cloud of the preset parking spaces and the ground-truth point cloud have been obtained, the two clouds are first registered preliminarily in a semi-automatic manner to obtain the second matching relationship between them. Once the second matching relationship is determined, a first conversion parameter between the two clouds is derived from it, and the clouds are registered according to that parameter to obtain the registration result. After the second matching relationship has been determined, the reconstructed points and ground-truth points representing the same parking space are further put into one-to-one correspondence according to it, so the resulting registration is more accurate. Registering the two point clouds establishes a matching relationship between the reconstructed points and ground-truth points that represent the same preset parking space, marking them as such. The registration result is therefore a set of pairs of reconstructed and ground-truth parking-space points labelled as the same preset parking space, where each pair represents one preset parking space.
In some possible implementations, after the first matching relationship between the two point clouds has been determined, the clouds are registered again on that basis according to the second matching relationship, achieving an accurate registration between the reconstructed point cloud and the ground-truth point cloud.
Step S105: determine, based on the registration result, the accuracy with which the reconstructed point cloud reconstructs the preset parking spaces.
In some embodiments, the reconstructed points and ground-truth points representing the same parking space are obtained from the registration result between the two clouds. The reconstruction accuracy of the reconstructed point cloud is then determined from the degree of overlap between the points representing the same parking space in the two clouds, together with the closest distance between the reconstructed and ground-truth points of that space.
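One plausible way to compute the two quantities for a single parking space, assuming each space's reconstructed and ground-truth footprints are available as polygons (the use of shapely here, and how the two values are combined into a final score, are assumptions; the patent does not fix them):

```python
from shapely.geometry import Polygon

def space_metrics(recon_poly: Polygon, truth_poly: Polygon):
    """Return (overlap degree, minimum boundary distance) for one space,
    with overlap = intersection area over ground-truth area."""
    overlap = recon_poly.intersection(truth_poly).area / truth_poly.area
    min_dist = recon_poly.exterior.distance(truth_poly.exterior)
    return overlap, min_dist

truth = Polygon([(0, 0), (2.5, 0), (2.5, 5.0), (0, 5.0)])
recon = Polygon([(0.1, 0.1), (2.6, 0.1), (2.6, 5.1), (0.1, 5.1)])
print(space_metrics(recon, truth))  # (0.9408..., 0.0)
```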
In the embodiments of the present application, first, the reconstructed point cloud and the ground-truth point cloud of the preset parking spaces are obtained; acquiring the ground-truth point cloud makes an effective evaluation of the reconstructed point cloud possible. Second, a first conversion parameter between the reconstructed point cloud and the ground-truth point cloud is determined. Third, the two clouds are registered based on the first conversion parameter to obtain a registration result; determining the first conversion parameter achieves a coarse registration between the clouds, and registering them with that parameter on this basis makes the registration result more accurate. Finally, the accuracy with which the reconstructed point cloud reconstructs the preset parking spaces is determined from the registration result. The registration result obtained through accurate registration thus allows the reconstruction accuracy to be evaluated automatically, with simple operations and high precision.
In some embodiments, the ground-truth parking-space point cloud of the preset parking spaces is obtained from their CAD image; that is, "determining the ground-truth parking-space point cloud of the preset parking spaces" in step S101 may be implemented through the steps shown in fig. 2:
Step S201: obtain a ground-truth engineering image of the preset parking spaces.
In some embodiments, the ground-truth engineering image of the preset parking spaces may be a design drawing indicating their accurate structure, for example a CAD drawing of the parking spaces. It may also be a layer extracted from the CAD drawing that represents the true design structure of the preset parking spaces; such a layer of a CAD drawing may be a color image. Acquiring the ground-truth engineering image thus makes it possible to derive the ground-truth parking-space point cloud of the preset parking spaces.
Step S202: convert the ground-truth engineering image into a binary image.
In some embodiments, a binary image may be obtained by binarizing the ground-truth engineering image. The binary image represents the true contours of the preset parking spaces, so each parking space can be represented accurately.
In some possible implementations, the ground-truth engineering image is first converted to grayscale; the grayscale image is then binarized, separating the regions where the parking spaces lie from the background regions of the image. For example, a binary image is obtained by setting the pixel values of the parking-space boundary regions in the grayscale image to 1 and the pixel values of all other regions to 0.
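A minimal OpenCV sketch of this grayscale-then-threshold conversion (the file path and the threshold value 127 are assumptions; a real drawing may need a tuned or Otsu threshold):

```python
import cv2
import numpy as np

img = cv2.imread("parking_cad_layer.png")      # ground-truth engineering image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # color layer -> grayscale

# Binarize: parking-space boundary markings become 1, everything else 0.
_, binary = cv2.threshold(gray, 127, 1, cv2.THRESH_BINARY)
print(np.unique(binary))                       # [0 1]
```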
Step S203: extract contours from the binary image to obtain the ground-truth parking-space point cloud.
In some embodiments, the pixels of the binary image are scanned to determine which image regions have pixel value 1 and which have pixel value 0; the regions with pixel value 1 then yield the contour of each parking space and the ground-truth point cloud representing each space's size.
In some possible implementations, contour extraction from the binary image can be understood as scanning and collecting the pixels whose value is 1, which yields the contour of each parking space and thereby realizes the contour extraction.
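For orientation, the same extraction can also be done with OpenCV's built-in contour routine instead of the hand-rolled raster scan elaborated below; a sketch on a synthetic image, not the patent's implementation:

```python
import cv2
import numpy as np

binary = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(binary, (20, 20), (120, 60), color=1, thickness=2)  # synthetic boundary region

# RETR_CCOMP returns outer contours and inner (hole) contours, mirroring the
# outer/inner boundary-line distinction used in the following steps.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
truth_points = np.vstack([c.reshape(-1, 2) for c in contours])    # (N, 2) point set
print(truth_points.shape)
```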
Through steps S201 to S203 above, acquiring the ground-truth engineering image of the preset parking spaces allows a more accurate ground-truth parking-space point cloud to be extracted on its basis, and the process is simple to implement.
In some embodiments, boundary tracking of the binary image is realized by scanning it in a certain order, yielding the ground-truth point cloud; that is, step S203 may be implemented through the following steps S231 to S233 (not shown in the figures):
and S231, scanning any pixel point in the binary image according to a preset scanning sequence.
In some embodiments, the preset scanning order may be set according to the positions of the vertices of the binary image, where the preset scanning order is set to start from the vertex at the lower left corner of the binary image to the lower right corner, then turn to the first point at the right of the line above the lower right corner, scan the line from the first point at the right of the line above, and traverse the binary image line by line until reaching the vertex at the upper right corner of the binary image.
The preset scanning order may also be set autonomously, for example, setting is to traverse the binary image line by line from left to right from the upper left corner to right to left until the lower right corner vertex of the binary image is reached. Or, starting from the top left corner, traversing the binary image from left to right in sequence until reaching the top of the bottom right corner. Therefore, each pixel point in the binary image can be scanned by traversing and scanning the binary image.
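As a minimal loop, the bottom-up, left-to-right variant of these orders looks as follows (a sketch; the other orders differ only in the loop bounds):

```python
import numpy as np

binary = np.eye(4, dtype=np.uint8)  # toy binary image

h, w = binary.shape
# Traverse rows from the bottom row upward, each row from left to right,
# i.e. start near the lower-left vertex and end at the upper-right vertex.
for row in range(h - 1, -1, -1):
    for col in range(w):
        if binary[row, col] != 0:
            print("candidate pixel at", (row, col))
```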
Step S232: in response to scanning a candidate pixel whose gray value is not 0, determine the target type of the parking-space boundary to which the candidate pixel belongs.
In some embodiments, if during scanning the gray value of a pixel of the binary image is not 0, i.e., its gray value is 1, that pixel is taken as a candidate pixel; a candidate pixel with a non-0 gray value is a pixel on the boundary region of a parking space. "In response to scanning a candidate pixel whose gray value is not 0" covers both the first and the n-th time such a pixel is encountered in the scan of a row. The target type of the parking-space boundary to which the candidate pixel belongs is one of: a point on the outer boundary, a point on the inner boundary, or a point between the inner and outer boundaries of the parking-space boundary region. That is, for each scanned candidate pixel whose gray value is not 0, it is further analyzed whether the pixel lies on the inner or the outer boundary of the parking-space boundary it belongs to.
The target type of the parking-space boundary to which the candidate pixel belongs, i.e., the pixel's position within that boundary, can also be understood as its position within the boundary region of its parking space: the candidate pixel is a point on the outer boundary line, on the inner boundary line, or in the region between the two.
In some possible implementations, for a scanned candidate pixel whose gray value is not 0, the target type of its parking-space boundary is determined from the boundary types of the adjacent pixels in the same row; that is, step S232 may be implemented as follows:
First, in the scan line on which the candidate pixel lies, determine the pixel preceding the candidate pixel and the pixel following it.
In some embodiments, the scan line on which the candidate pixel lies is a row of pixels encountered while scanning the binary image row by row. That row contains all the pixels of the line, some before the candidate pixel and some after it; "before" and "after" follow the scanning order, i.e., a pixel scanned earlier than the candidate pixel is before it, and a pixel scanned later is after it. The preceding pixel is the pixel adjacent to the candidate pixel in the scan line and scanned just before it; the following pixel is the pixel adjacent to the candidate pixel and scanned just after it.
Second, determine the target type of the parking-space boundary to which the candidate pixel belongs based on the gray values of the preceding and following pixels.
In some embodiments, the gray value of the preceding pixel is 0 or 1, as is that of the following pixel. Combining the two gray values determines the approximate position of the candidate pixel within the boundary region of its parking space, i.e., the target type of its parking-space boundary.
In some possible implementations, if the gray values of both the preceding and the following pixel are 1, both neighbors lie on the boundary region of their parking space, which means the candidate pixel between them is not a point on either boundary line of its parking space but a point of the boundary region lying between the inner and outer boundary lines.
In some possible implementations, if the gray value of the preceding pixel is 1 and that of the following pixel is 0, the preceding pixel lies on the boundary region while the following pixel does not; since the candidate pixel itself lies on the boundary region, the candidate pixel and its preceding pixel are on the boundary region while its following pixel is not, and the candidate pixel can be determined to be a point on the inner boundary line of the parking-space boundary region.
In some possible implementations, if the gray value of the preceding pixel is 0 and that of the following pixel is 1, the preceding pixel does not lie on the boundary region while the following pixel does; since the candidate pixel itself lies on the boundary region, the candidate pixel and its following pixel are on the boundary region while its preceding pixel is not, and the candidate pixel can be determined to be a point on the outer boundary line of the parking-space boundary region. In this way, combining the gray values of the pixels before and after the candidate pixel accurately determines whether it is a point on a parking-space boundary line.
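A sketch of this preceding/following-pixel rule (treating out-of-row neighbors as background is an assumption for the edge case):

```python
def classify_candidate(row, col, binary):
    """Classify a candidate pixel (gray value 1) from its horizontal neighbors:
    preceding 0 -> outer boundary; following 0 -> inner boundary;
    both 1 -> between the inner and outer boundary lines."""
    prev_px = binary[row][col - 1] if col > 0 else 0
    next_px = binary[row][col + 1] if col + 1 < len(binary[row]) else 0
    if prev_px == 0:
        return "outer boundary"
    if next_px == 0:
        return "inner boundary"
    return "between inner and outer boundary"

img = [[0, 1, 1, 1, 0]]
print([classify_candidate(0, c, img) for c in (1, 2, 3)])
# ['outer boundary', 'between inner and outer boundary', 'inner boundary']
```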
Step S233: perform boundary tracking in the binary image based on the target type to obtain the ground-truth parking-space point cloud.
In some embodiments, according to the target type of the candidate pixel, boundary tracking is performed in the binary image starting from any point on the outer boundary line of the parking space to which the candidate pixel belongs; the complete boundary region of that parking space can thus be traced and output in the form of a ground-truth parking-space point cloud.
In some possible implementations, if the target type indicates that the candidate pixel is a point on a parking-space boundary, boundary tracking may proceed as follows:
First, with the candidate pixel as the center point and any background point of the binary image as the starting point, search the 4-neighborhood (or 8-neighborhood) of the candidate pixel clockwise (or counterclockwise) for pixels with gray value 1.
Second, mark the first adjacent point on the boundary line encountered with value 1 with a two-dimensional coordinate (e.g., (i1, j1)), and let another background point denote the point closest to (i1, j1) in the sequence scanned before it. Store the candidate pixel and the location of (i1, j1) for use in subsequent steps.
Then update the loop variables: starting from that other background point and proceeding clockwise, search the 4-neighborhood of (i1, j1) for a pixel with gray value 1 lying on the boundary line, and mark the first adjacent point encountered with value 1 with a two-dimensional coordinate (e.g., (i4, j4)).
Next, update the center point to (i4, j4), and update the starting background point to the background pixel closest to (i4, j4).
Finally, repeat this process until the found pixel with gray value 1 is the candidate pixel itself and its next boundary point is (i1, j1). In this way, all the boundary points are obtained.
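A minimal Moore-style boundary-following sketch in the spirit of this loop, sweeping the 8-neighborhood clockwise; it is an illustrative reconstruction, not the patent's exact procedure:

```python
import numpy as np

# 8-neighborhood in clockwise order, starting from "up"; offsets are (row, col).
CW = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def trace_boundary(binary: np.ndarray, start: tuple) -> list:
    """Follow the boundary of the foreground region containing `start`
    (a pixel of gray value 1) until the trace returns to `start`."""
    h, w = binary.shape
    boundary, cur, search_from = [start], start, 0
    while True:
        for k in range(8):
            d = (search_from + k) % 8
            r, c = cur[0] + CW[d][0], cur[1] + CW[d][1]
            if 0 <= r < h and 0 <= c < w and binary[r, c] == 1:
                cur = (r, c)
                # Resume just past the direction pointing back where we came
                # from: (d + 4 + 1) mod 8.
                search_from = (d + 5) % 8
                break
        if cur == start or len(boundary) > binary.size:
            return boundary
        boundary.append(cur)

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1
print(trace_boundary(img, (1, 1)))  # clockwise ring of the 3 x 3 block
```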
Scanning the binary image in steps S231 to S233 above determines the target type of the candidate pixel, and boundary tracking in the binary image then yields, with greater accuracy, the ground-truth point cloud representing the true size and position of the preset parking spaces.
In some embodiments, by searching clockwise or counterclockwise from a non-0 pixel for the pixels on the boundary of the binary image, a ground-truth parking-space point cloud carrying the parking-space numbers can be obtained; that is, step S233 may be implemented through the following steps S241 to S244 (not shown in the figures):
step S241, when the target type of the candidate pixel is the boundary point type, determining a first background pixel with a first gray value of 0 in a neighborhood of the candidate pixel according to a first tracking direction, with the candidate pixel as a starting point.
In some embodiments, the boundary point class includes points on the outer boundary and points on the inner boundary of the demarcated areas of the vehicle. And under the condition that the candidate pixel point is a point on the boundary line of the parking space, determining the neighborhood of the candidate pixel point by taking the candidate pixel point as a central point. For example, 4 neighborhoods or 8 neighborhoods of the candidate pixel points. The first tracking direction may be clockwise or counter-clockwise. Taking the first tracking direction as a clockwise direction as an example, taking the candidate pixel point as a starting point, and searching a pixel point with a gray value of 0 in the neighborhood of the candidate pixel point; and taking the searched pixel point with the first gray value of 0 as a first background point.
In one embodiment, if the candidate pixel (i, j) is a point on the outer boundary, the neighborhood of 4 for the candidate pixel (i, j) is determined. And (5) searching a background pixel point with the gray value of 0 in the 4 neighborhoods of the candidate pixel point (i, j) according to the clockwise tracking direction by using the candidate pixel point (i, j), and taking the first background pixel point (i2, j2) as the first background pixel point.
Step S242: starting from the first background pixel, search the neighborhood of the candidate pixel, following the first tracking direction, for first foreground pixels whose gray value is not 0.
In some embodiments, the first background pixel found above is taken as the starting point, and the neighborhood of the candidate pixel is searched for pixels whose gray value is not 0, i.e., pixels lying on the boundary region.
Taking the clockwise direction as the first tracking direction and (i2, j2) as the first background pixel, the pixels of the 4-neighborhood of the candidate pixel that lie on the boundary region, i.e., the first foreground pixels, are searched for; one or more first foreground pixels may be found.
Step S243: following a second tracking direction and starting from the background pixel next to the first background pixel, search the neighborhood of the candidate pixel for second foreground pixels whose gray value is not 0.
In some embodiments, the second tracking direction differs from the first; for example, the first tracking direction is clockwise and the second counterclockwise. The background pixel next to the first background pixel is the next pixel with value 0 encountered when moving forward in the second tracking direction from the first background pixel. Taking the counterclockwise direction as an example, starting from that next background pixel, the neighborhood of the candidate pixel is searched counterclockwise for pixels whose gray value is not 0, i.e., the second foreground pixels lying on the boundary region.
Step S244, determining the truth-value parking space point cloud based on the first foreground pixel points and the second foreground pixel points.
In some embodiments, the boundary area of each preset parking space can be determined by combining the first foreground pixel points searched according to the first tracking direction and the second foreground pixel points searched according to the second tracking direction; therefore, after the boundary region of the preset parking space is determined, the boundary region is combined with the pixel point with the pixel of 0 in the background region of the binary image, and the true value parking space point cloud of the preset parking space can be obtained.
In the above steps S241 to S244, with the candidate pixel as the center point, pixels whose gray value is not 0 are searched in its neighborhood along different tracking directions. In this way, the pixels lying on the boundary region of the same parking space as the candidate pixel can be tracked, and a truth-value parking space point cloud of higher accuracy can be established.
In some embodiments, by analyzing the contour of each parking space and its preset number, the point cloud coordinates of each parking space in the world coordinate system can be determined, yielding the truth-value parking space point cloud of the preset parking spaces; that is, step S243 may be implemented by the following steps:
firstly, determining a target boundary area of each parking space in the binary image based on the non-0 pixel points searched in the binary image.
In some embodiments, a non-0 pixel found in the binary image indicates that the pixel lies on the boundary region of a parking space, so the boundary region of each parking space in the binary image can be obtained by finding all non-0 pixels in the binary image. If several boundary regions are determined for one parking space, an optimal boundary region is selected from them as the target boundary region; the optimal boundary region is the one, among the several boundary regions, whose outer contour encloses the largest area.
In some possible implementations, after the boundary region of each parking space is preliminarily determined, the boundary region whose outer contour encloses the largest area is selected from the obtained boundary regions as the target boundary region; that is, the first step further includes:
firstly, under the condition that the number of the boundary areas of each parking space is at least two, determining the outer contour of each boundary area.
Here, the number of boundary regions of a parking space being at least two means that scanning the non-0 pixels in the binary image yields at least two boundary regions for that parking space. For each of the at least two boundary regions, the outer contour of the boundary region is determined, namely the outer contour of the parking space to which the boundary region belongs.
In a specific example, for any parking space, assuming a yellow rectangle marks the boundary region between parking spaces, the outer contour is the contour along which the parking space adjoins other parking spaces (or road edges).
Secondly, the overlapping degree between the outer contours corresponding to the at least two demarcation areas is determined.
Here, the overlapping degree between the outer contours corresponding to the at least two boundary regions is understood as the overlapping degree between the regions covered by those outer contours. For example, if the outer contours are all rectangular, the overlapping degree between them can be obtained by determining the overlap between the rectangles enclosed by the rectangular contours.
In some possible implementations, if the number of boundary regions is 3 or more, the overlapping degree between the outer contours of every two boundary regions is determined. For example, with 3 boundary regions, the following are determined respectively: the overlap between the outer contours of boundary regions 1 and 2, the overlap between the outer contours of boundary regions 2 and 3, and the overlap between the outer contours of boundary regions 1 and 3.
And finally, responding to the fact that the overlapping degree is larger than a preset threshold value, and taking a boundary area corresponding to the outer contour with the largest area as a target boundary area of each parking space.
Here, the overlapping degree being greater than the preset threshold means that the overlap between the outer contours of any two boundary regions exceeds the preset threshold. The threshold may be set based on empirical values; for example, it may require the overlap to be greater than or equal to 0.9.
If the number of boundary regions is 3 or more, whether the overlap of every pair of boundary regions exceeds the preset threshold is checked, and the pair or pairs whose overlap exceeds the threshold are screened out. By keeping the contour with the largest area and using its boundary region as the target boundary region of each parking space, duplicate boundaries of the same parking space can be reduced, as sketched below.
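A minimal sketch of this selection step, assuming axis-aligned rectangular outer contours given as (x0, y0, x1, y1) tuples; the overlap definition (intersection over the smaller rectangle) and the 0.9 threshold follow the description above, and all function names are illustrative:

```python
# Sketch: keep the boundary region whose outer contour encloses the largest
# area when outer contours overlap heavily. Rectangles are (x0, y0, x1, y1);
# the 0.9 threshold is the empirical value mentioned above.

def rect_area(r):
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def rect_overlap(a, b):
    """Overlap degree: intersection area over the smaller rectangle's area."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    smaller = min(rect_area(a), rect_area(b))
    return inter / smaller if smaller > 0 else 0.0

def target_boundary_region(contours, threshold=0.9):
    """Drop the smaller of any heavily overlapping pair; return the largest."""
    kept = list(contours)
    for a in contours:
        for b in contours:
            if a is not b and rect_overlap(a, b) > threshold:
                loser = a if rect_area(a) < rect_area(b) else b
                if loser in kept:
                    kept.remove(loser)
    return max(kept, key=rect_area)
```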
And secondly, determining the size information and the position information of each parking space based on the target boundary area of each parking space and the preset number of the target boundary area.
In some embodiments, the preset number of the target boundary region is the number marked in the CAD drawing for each parking space, designed according to the position of each parking space in the parking lot; for example, the parking spaces may be numbered sequentially from 1, starting at the top-left corner of the CAD drawing of the preset parking spaces and proceeding toward the bottom-right corner. The size information of each parking space is its length and width; the position information of each parking space is its position in the binary image, for example which row and which column it occupies among the parking spaces contained in the binary image. The size information of a parking space can be obtained by analyzing the area enclosed by the outer contour of its target boundary region and the side lengths of that outer contour. By reading the preset number of the target boundary region in the binary image, the row and column of the corresponding parking space among the parking spaces can be determined.
And thirdly, obtaining the truth-value parking space point cloud based on the size information and the position information of each parking space.
In some embodiments, using the size information and position information of each parking space, the parking space is converted from two-dimensional coordinates on the binary image into world coordinates, giving the coordinates of each parking space in the world coordinate system. In this way, the real coordinates of every parking space in the world coordinate system are obtained, yielding the truth-value parking space point cloud of the preset parking spaces comprising multiple parking spaces.
Through the first to third steps, the coordinates of each parking space in the world coordinate system and its size information can be obtained from the target boundary region of each parking space and its preset number, and the truth-value parking space point cloud is then obtained; in this way, the real point cloud coordinates of the preset parking spaces can be established.
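A minimal sketch of the pixel-to-world conversion in the third step, assuming a single isotropic scale (meters per pixel) and a known world origin; meters_per_pixel, world_origin, and all function names are assumed for illustration:

```python
import numpy as np

# Sketch: convert a parking space's center and size from binary-image pixel
# coordinates into world coordinates. meters_per_pixel and world_origin are
# assumed calibration values, not taken from the application.

def space_to_world(center_px, size_px, meters_per_pixel, world_origin=(0.0, 0.0)):
    cx, cy = center_px      # parking-space center in pixels
    w_px, h_px = size_px    # parking-space width/length in pixels
    center_w = (world_origin[0] + cx * meters_per_pixel,
                world_origin[1] + cy * meters_per_pixel)
    size_w = (w_px * meters_per_pixel, h_px * meters_per_pixel)
    return center_w, size_w

def corners_from_center(center_w, size_w):
    """Four corner points (simple axis-aligned case) contributed by one
    parking space to the truth-value parking space point cloud."""
    (cx, cy), (w, h) = center_w, size_w
    return np.array([[cx - w / 2, cy - h / 2],
                     [cx + w / 2, cy - h / 2],
                     [cx + w / 2, cy + h / 2],
                     [cx - w / 2, cy + h / 2]])
```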
In some embodiments, the reconstructed point cloud of the parking space and the true point cloud of the parking space are secondarily registered in a semi-automatic manner to obtain a first conversion parameter between the reconstructed point cloud of the parking space and the true point cloud of the parking space, so that the registration between the reconstructed point cloud of the parking space and the true point cloud of the parking space is conveniently realized through the first conversion parameter, that is, the step S104 may be realized through the steps shown in fig. 3:
step S301, determining a first conversion parameter of the reconstructed parking spot cloud relative to the true parking spot cloud based on the second matching relationship.
In some embodiments, after the second matching relationship between the reconstructed parking space point cloud and the truth-value parking space point cloud is established, the rotation and translation that fit the reconstructed point cloud onto the truth-value point cloud can be determined from the second matching relationship; that is, a rotation parameter indicating how much to rotate and a translation parameter indicating how much to translate can be determined. The rotation parameter and translation parameter of the reconstructed parking space point cloud relative to the truth-value parking space point cloud serve as the first conversion parameter; after rotating by the rotation parameter and translating by the translation parameter, the reconstructed parking space point cloud coincides with the truth-value parking space point cloud to the greatest possible extent.
In some embodiments, the first conversion parameter includes the rotation and translation of the reconstructed parking space point cloud relative to the truth-value parking space point cloud. By determining the centroids of the reconstructed and truth-value point clouds respectively, the closest point in the reconstructed point cloud to each point in the truth-value point cloud is found, establishing the second matching relationship between the closest point pairs of the two clouds. The rotation and translation between the two clouds can then be determined by building the W matrix from the matched point pairs; the W matrix is the matrix constructed for iterative-closest-point matching, and it encodes the conversion relationship between the two clouds, that is, the conversion between the reconstructed parking space point cloud and the truth-value parking space point cloud.
In some possible implementations, the closest point in the reconstructed parking space point cloud to each point in the truth-value parking space point cloud may be determined by iterative closest point matching: first, the reconstructed point cloud is adjusted by its centroid and the truth-value point cloud by its centroid; second, a mean-square-error function is constructed from a preset initial first conversion parameter, the adjusted truth-value point cloud, and the adjusted reconstructed point cloud; third, for each point in the adjusted truth-value point cloud, its closest point in the adjusted reconstructed point cloud is determined; finally, the first conversion parameter is determined from the closest point pairs by minimizing the mean-square-error function. In this way, the first conversion parameter between the reconstructed and truth-value parking space point clouds is obtained, which facilitates their registration.
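A minimal sketch of this iterative-closest-point loop, using a KD-tree for the nearest-point search; the function name, iteration count, and convergence tolerance are our own assumptions rather than values from the application:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_transform(truth, recon, iters=50, tol=1e-6):
    """Estimate rotation R and translation t aligning recon onto truth by
    alternating nearest-point matching with a closed-form SVD pose update."""
    dim = truth.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    prev_err = np.inf
    for _ in range(iters):
        moved = recon @ R.T + t
        # for each truth point, find its closest point in the moved cloud
        dists, idx = cKDTree(moved).query(truth)
        matched = recon[idx]
        # closed-form update from the matched pairs (centroid-removed SVD)
        mu_x, mu_p = truth.mean(axis=0), matched.mean(axis=0)
        W = (matched - mu_p).T @ (truth - mu_x)
        U, D, Vt = np.linalg.svd(W)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # keep a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_x - R @ mu_p
        err = dists.mean()             # stand-in for the mean-square error
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```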
Step S302, registering the reconstructed parking spot cloud and the true parking spot cloud based on the first conversion parameter to obtain a registration result.
In some embodiments, after the first conversion parameter is determined, according to the first conversion parameter, the reconstructed point cloud of the parking space and the true point cloud of the parking space that represent the same parking space are further associated one to one, and a matching relationship is established. Thus, through the steps S301 and S302, the current optimal first conversion parameter is determined according to the second matching relationship, and the reconstructed parking space point cloud and the true parking space point cloud can be better registered.
In some embodiments, the second matching relationship is established for the reconstructed parking space point cloud and the true parking space point cloud by performing a centroid removing operation on the reconstructed parking space point cloud and the true parking space point cloud, that is, the step S103 may be implemented by the following steps S131 to S133 (not shown in the figure):
and S131, respectively carrying out centroid removing operation on the reconstructed parking spot cloud and the true value parking spot cloud to obtain a centroid-removed reconstructed parking spot cloud and a centroid-removed true value parking spot cloud.
In some embodiments, first, a first centroid of the reconstructed point cloud and a second centroid of the truth point cloud are determined from the reconstructed point cloud and the truth point cloud that have established the first matching relationship.
Here, the reconstructed parking spot point cloud and the truth-value parking spot point cloud of the first matching relationship are established, that is, matching relationships have been marked for several preset parking spots in the reconstructed parking spot point cloud and the truth-value parking spot point cloud. In this way, in the reconstructed parking spot cloud and the truth-value parking spot cloud marked with the first matching relationship, a first mass center of the reconstructed parking spot cloud and a second mass center of the truth-value parking spot cloud are determined.
And secondly, adjusting the reconstructed parking spot cloud by adopting the first mass center to obtain a mass center removed reconstructed parking spot cloud.
And subtracting the first centroid from each point in the reconstructed parking space point cloud gives the centroid-removed reconstructed parking space point cloud; the resulting cloud has its centroid at the origin.
And thirdly, adjusting the true value parking space point cloud by adopting the second mass center to obtain a mass center removing true value parking space point cloud.
And subtracting the second centroid from each point in the truth-value parking space point cloud gives the centroid-removed truth-value parking space point cloud; likewise, the resulting cloud has its centroid at the origin.
And S132, for any point in the centroid-removed truth-value parking space point cloud, searching the centroid-removed reconstructed parking space point cloud for the target point closest to that point.
In some embodiments, for any point in the centroid-removed truth-value parking space point cloud, the point with the shortest Euclidean distance to it, that is, the target point, is searched in the centroid-removed reconstructed parking space point cloud in an iterative-closest-point manner. The target point is the closest point, and the two points represent the same position in the same preset parking space.
In a specific example, four parking spots A, B, C, and D are selected from the centroid-removed truth-value parking space point cloud. For any one of them (say, parking spot A), the reconstructed parking spot closest to parking spot A is searched in the centroid-removed reconstructed parking space point cloud, namely the target point of parking spot A.
Step S133, establishing the second matching relationship between the arbitrary point and the target point.
In some embodiments, each such closest point corresponds to a target point in the centroid-removed reconstructed parking space point cloud; the two points are marked with the second matching relationship to indicate that they represent the same point of the real preset parking space.
In steps S131 to S133, the centroids of the reconstructed and truth-value parking space point clouds are removed, and for any point in the centroid-removed truth-value point cloud the closest target point is searched in the centroid-removed reconstructed point cloud, establishing the second matching relationship between the target point and that point. On the basis of manual coarse matching, this achieves a further matching between the centroid-removed reconstructed and centroid-removed truth-value point clouds, so that most of their points can be approximately matched together.
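A compact sketch of steps S131 to S133, assuming the two clouds are NumPy arrays of shape N x 3; all names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def second_matching(recon, truth):
    """Remove each cloud's centroid, then match every truth-value point to
    its closest reconstructed point; returns (truth index, recon index)
    pairs forming the second matching relationship."""
    recon_c = recon - recon.mean(axis=0)   # centroid-removed reconstructed cloud
    truth_c = truth - truth.mean(axis=0)   # centroid-removed truth-value cloud
    _, idx = cKDTree(recon_c).query(truth_c)
    return np.stack([np.arange(len(truth_c)), idx], axis=1)
```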
In some embodiments, by analyzing the rotational relationship between the centroid-removed reconstructed parking space point cloud and the centroid-removed true parking space point cloud, the rotational parameter and the translational parameter of the reconstructed parking space point cloud relative to the true parking space point cloud can be resolved therefrom, that is, the step S301 can be implemented by the following steps S311 to S313 (not shown in the figure):
and S311, according to the second matching relation, marking the parking space in the centroid-removed reconstructed parking space point cloud to obtain a marked reconstructed parking space point cloud.
In some embodiments, according to the correspondence relationship between the centroid-removed reconstructed parking space point cloud and the centroid-removed true parking space point cloud, the true value position and the true value size information corresponding to each reconstructed parking space in the centroid-removed reconstructed parking space point cloud are marked, so that the marked reconstructed parking space point cloud is obtained. Therefore, the marked reconstructed parking space point cloud carries the true position and the true size information, and the reconstructed parking space is successfully matched with the centroid-removed true parking space point cloud.
And S312, fusing the marked reconstructed parking spot cloud and the centroid true value removed parking spot cloud to obtain a fusion result.
In some embodiments, the transpose of the centroid-removed reconstructed parking space point cloud is obtained; multiplying this transpose with the centroid-removed truth-value parking space point cloud and summing the products (that is, taking a matrix product of the stacked point arrays) yields the fusion result.
And step S313, decomposing the fusion result to obtain the rotation parameter.
In some embodiments, singular value decomposition of the fusion result yields two orthogonal matrices and a diagonal matrix. The dimension of the diagonal matrix can be determined from the coordinate dimension of the points in the truth-value parking space point cloud; for example, if the coordinates are three-dimensional, the diagonal matrix is three-dimensional. The values on the diagonal may be determined based on the second matching relationship.
Step S314, determining the translation parameter of the reconstructed parking space point cloud relative to the true value parking space point cloud based on the rotation parameter, the centroid of the reconstructed parking space point cloud and the centroid of the true value parking space point cloud.
In the above steps S311 to S314, the truth-value positions and sizes of the parking spaces in the centroid-removed reconstructed point cloud are marked according to the second matching relationship, so that most reconstructed parking spaces in the marked point cloud carry truth-value position and size information. On this basis, decomposing the fusion result of the marked reconstructed point cloud and the centroid-removed truth-value point cloud effectively yields the rotation and translation parameters of the reconstructed parking space point cloud relative to the truth-value parking space point cloud.
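The fusion of step S312 and the decomposition of step S313 can be sketched as follows, assuming matched, centroid-removed point arrays; names are illustrative:

```python
import numpy as np

def rotation_from_fusion(recon_c, truth_c):
    """recon_c, truth_c: matched, centroid-removed point arrays (N x 3).
    Fuse them into the W matrix and decompose it to get the rotation."""
    W = recon_c.T @ truth_c            # transpose, multiply, sum: the fusion
    U, D, Vt = np.linalg.svd(W)        # two orthogonal matrices + diagonal
    R = Vt.T @ U.T                     # rotation of recon relative to truth
    if np.linalg.det(R) < 0:           # keep a proper rotation, not a mirror
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R
```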
In some embodiments, a displacement scale coefficient between the point cloud centroids is combined on the basis of the rotation parameter, so that the obtained translation parameter is more accurate, that is, the step S314 may be implemented by:
and firstly, rotating the mass center of the reconstructed parking spot cloud based on the rotation parameters to obtain a rotated mass center.
In some embodiments, the rotation parameter may take the form of a rotation matrix; multiplying the rotation matrix by the centroid of the reconstructed parking space point cloud rotates that centroid, so the product of the rotation matrix and the centroid of the reconstructed point cloud can be used as the rotated centroid.
And secondly, determining the displacement scale coefficient between the centroid of the truth-value parking space point cloud and the rotated centroid.
In some embodiments, the displacement scale coefficient can be obtained by dividing the centroid of the truth-value parking space point cloud by the rotated centroid. The displacement scale coefficient represents the proportional relationship in displacement between the truth-value parking space point cloud and the reconstructed parking space point cloud.
And thirdly, scaling the rotated centroid by the displacement scale coefficient to obtain the adjusted centroid of the reconstructed parking space point cloud.
In some embodiments, multiplying the displacement scale coefficient by the rotated centroid gives the adjusted centroid of the reconstructed parking space point cloud.
And fourthly, determining the translation parameters based on the difference value between the mass center of the truth-value parking space point cloud and the adjusted mass center.
In some embodiments, the translation parameter of the reconstructed point cloud of parking space relative to the true point cloud of parking space may be obtained by determining a difference between the centroid of the true point cloud of parking space and the adjusted centroid.
Through the first to fourth steps, the displacement scale coefficient between the two point clouds is determined from the rotation parameter of the reconstructed parking space point cloud relative to the truth-value parking space point cloud, and based on this coefficient the translation parameter of the reconstructed point cloud relative to the truth-value point cloud can be resolved more accurately.
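A sketch of the first to fourth steps above; interpreting "dividing the centroids" as a ratio of norms is an assumption about the intended computation:

```python
import numpy as np

def translation_with_scale(R, mu_recon, mu_truth):
    """Rotate the reconstructed centroid, derive the displacement scale
    coefficient, rescale, and take the difference as the translation."""
    rotated = R @ mu_recon                   # first step: rotated centroid
    # second step: scale between the truth centroid and the rotated centroid;
    # a ratio of norms is one reasonable reading of "dividing the centroids"
    s = np.linalg.norm(mu_truth) / np.linalg.norm(rotated)
    adjusted = s * rotated                   # third step: adjusted centroid
    t = mu_truth - adjusted                  # fourth step: t = u_x - s R u_p
    return s, t
```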
In some embodiments, the first conversion parameter includes a rotation parameter and a translation parameter, and on the basis of obtaining the rotation parameter and the translation parameter, the reconstructed point cloud is converted, and the converted reconstructed point cloud and the true point cloud are further registered to obtain a registration result, that is, the step S302 may be implemented by the following steps S321 to S323 (not shown in the figure):
step S321, determining a second conversion parameter between the converted reconstructed parking spot cloud and the true parking spot cloud.
In some embodiments, after the reconstructed point cloud is converted by the first conversion parameter, the converted reconstructed point cloud is obtained. The implementation process of step S321 is similar to the implementation process of step S301, that is, according to the second matching relationship between the transformed and reconstructed parking space point cloud and the true parking space point cloud, the second transformation parameter between the two types of point clouds can be determined through the W matrix formed by the two types of point clouds. The second conversion parameters include a rotation parameter and a translation parameter.
Step S322, based on the second conversion parameter, determining a target reconstruction point and a target true value point which belong to the same preset parking space in the converted reconstruction parking space point cloud and the true value parking space point cloud.
In some embodiments, the converted reconstructed parking space point cloud is rotated and translated according to the second conversion parameter so that it fits together with the truth-value parking space point cloud; reconstruction points and truth-value points belonging to the same preset parking space can then be identified, giving the target reconstruction point and target truth-value point of each preset parking space. For a given preset parking space, the target reconstruction point is the point representing it in the converted reconstructed point cloud, and the target truth-value point is the point representing it in the truth-value point cloud.
And step S323, matching the target reconstruction point with the target true value point to obtain the registration result.
In some embodiments, matching the target reconstruction point with the target truth-value point may be understood as associating the target reconstruction point and target truth-value point of each preset parking space, or overlapping them so that the target reconstruction point covers the target truth-value point; the matched converted reconstructed parking space point cloud and truth-value parking space point cloud constitute the registration result.
Through the steps S321 to S323, on the basis of determining the rotation parameter and the translation parameter for the first time, the reconstructed point cloud is converted, and the converted reconstructed point cloud and the true value point cloud are registered again, so that a more effective registration result can be obtained.
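Putting steps S321 to S323 together, the second registration pass can be sketched as re-running the same estimation on the converted cloud (icp_transform is the sketch given earlier; names remain illustrative):

```python
import numpy as np

def two_pass_registration(recon, truth):
    """Estimate the first conversion parameter, apply it, then run a second
    pass on the converted cloud to refine the registration."""
    R1, t1 = icp_transform(truth, recon)       # first conversion parameter
    converted = recon @ R1.T + t1              # converted reconstructed cloud
    R2, t2 = icp_transform(truth, converted)   # second conversion parameter
    return converted @ R2.T + t2               # registration result
```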
In some embodiments, the overlapping degree between the true point cloud and the reconstructed point cloud and the minimum distance can be analyzed to reflect the quality of the reconstructed result, that is, the step S105 can be implemented by the following steps S151 to S154 (not shown in the figure):
and S151, determining the overlapping area of the reconstructed parking space point cloud and the truth value parking space point cloud which represent the same parking space based on the registration result.
Here, according to the registration result, the points belonging to the same parking space in the reconstructed parking space point cloud and the truth-value parking space point cloud are determined, and the overlapping area between the regions formed by those points is determined.
Step S152, determining the overlapping degree of the same parking space between the reconstructed parking space point cloud and the truth-value parking space point cloud based on the overlapping area and the truth-value area of the same parking space.
Here, the real-valued area of the same parking space is the area of the occupied area of the parking space in the binary image. In some possible implementation manners, the overlap area is divided by the true area of the same parking space to obtain a value, that is, the overlap degree between the reconstructed parking space point cloud and the true parking space point cloud is represented. The larger the overlapping degree is, the higher the similarity between the reconstructed parking space point cloud and the truth value parking space point cloud is, otherwise, the smaller the overlapping degree is, the lower the similarity between the reconstructed parking space point cloud and the truth value parking space point cloud is.
Step S153, in the same parking space, determining the minimum distance between the reconstructed parking space point cloud and the true parking space point cloud.
Here, for any parking space, after determining a true parking space point cloud belonging to the parking space and reconstructing the parking space point cloud, the minimum distance between the two types of point clouds is determined. The smaller the minimum distance is, the higher the similarity between the reconstructed parking space point cloud and the true parking space point cloud is, and otherwise, the larger the minimum distance is, the lower the similarity between the reconstructed parking space point cloud and the true parking space point cloud is.
Step S154, based on the overlapping degree and the minimum distance, the accuracy is determined.
Here, at least one of the overlapping degree and the minimum distance is used as an index for evaluating the reconstruction accuracy of the reconstructed point cloud of the parking space, and the overlapping degree and the minimum distance may be combined to jointly evaluate the reconstruction accuracy of the reconstructed point cloud of the parking space. For example, the higher the overlapping degree and the smaller the minimum distance, the higher the reconstruction accuracy of the reconstructed point cloud of the parking space. The lower the overlapping degree and the larger the minimum distance, the lower the reconstruction accuracy of the reconstructed parking spot cloud.
Through the steps S151 to S154, the overlapping degree and the minimum distance of the same parking space between the reconstructed parking space point cloud and the true parking space point cloud can be accurately obtained, and the accuracy of the reconstructed parking space point cloud can be effectively reflected through the two indexes.
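A minimal sketch of the two indices for one parking space, deliberately simplified to axis-aligned bounding rectangles for the area computation (the application's geometry may differ); names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def bbox_area(pts):
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return float(np.prod(hi - lo))

def space_metrics(recon_pts, truth_pts):
    """Overlap degree (overlap area / truth-value area) and the minimum
    point-to-point distance for one parking space."""
    lo = np.maximum(recon_pts.min(axis=0), truth_pts.min(axis=0))
    hi = np.minimum(recon_pts.max(axis=0), truth_pts.max(axis=0))
    overlap = float(np.prod(np.maximum(hi - lo, 0.0)))
    iou = overlap / bbox_area(truth_pts)
    min_dist = float(cKDTree(recon_pts).query(truth_pts)[0].min())
    return iou, min_dist
```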
An exemplary application of the embodiment of the present application in an actual application scenario will be described below, taking the evaluation of a three-dimensional reconstruction parking space based on a CAD drawing as an example.
An autonomous valet parking system enables a fully autonomous vehicle to drive itself, within a designated area, between the entrance or exit of a parking lot and a parking space. Realizing such a system depends on high-precision map reconstruction of the garage scene. In the related art, mainstream environment reconstruction schemes fall into two categories according to the sensors used: laser fusion and visual fusion. Both ultimately produce a point cloud with semantic information for the reconstructed parking garage. Whichever sensor the garage reconstruction scheme is based on, the accuracy of the reconstructed garage parking spaces needs to be evaluated.
A vision-based reconstruction scheme is usually evaluated against a lidar scheme that is more accurate than vision, or, at short range, against a three-dimensional reconstruction of the same scene by a depth camera. Such an approach treats the lidar (or other sensor) reconstruction as the true value and expects the visual reconstruction to approach the results of the more accurate sensor. However, this places no real constraint on how well the visual reconstruction actually captures the real scene.
For reconstruction schemes based on lidar, an evaluation data set of a virtual scene is often used to assess accuracy. Virtual data sets differ from real scenes in that much of the noise is absent, so the performance of an algorithm in a real, noisy scene can differ greatly from its performance in the virtual scene.
For a parking lot scene, if vision is used as the sensor for parking space reconstruction, the scene is too large for an RGB-D camera's reconstruction to serve as the evaluation true value; if the lidar reconstruction result is used as the true value, the lidar algorithm itself must be highly accurate, building such an accurate lidar pipeline requires substantial manpower, and whether it is accurate enough depends on experience, with no practical verification scheme. This makes actual evaluation very difficult.
Based on this, the embodiment of the application provides a reconstructed parking space evaluation method applicable to both visual and laser reconstruction schemes. First, a size representation of the garage parking spaces in the real world is obtained, constructing a representation of the garage in the real reconstruction-data acquisition scene. Second, the truth-value point cloud and the reconstructed point cloud of the parking spaces are registered; the registration must be as accurate as possible, avoiding mismatches. Finally, with the matching result known, the reconstructed parking space point cloud is evaluated, and several evaluation indices are designed to reflect the quality of the reconstruction result.
In some embodiments, the process of reconstructing the parking space assessment may be implemented by:
firstly, carrying out contour detection on a CAD graph of a preset parking space to obtain a true value parking space point cloud.
In some possible implementations, garage construction is typically carried out according to the precise structural specifications given by a CAD drawing, so the exact position of each parking space can be obtained from the CAD drawing. To obtain the parking space information from the CAD drawing, the parking spaces need to be extracted from it. Here, extraction uses a contour-extraction algorithm that converts the CAD drawing into a binary image. The procedure is as follows: suppose the input CAD drawing is F = {f(i, j)}, and the initial current boundary number is set to 1 (the frame of the CAD drawing F is regarded as the first boundary). The CAD drawing F is scanned by raster scanning; whenever the gray value of a pixel (i, j) is not 0, the following steps 1 to 4 are executed. Each time the scan reaches the start of a new row of the picture, the historical boundary number is reset to 1. The current boundary is denoted B, and the historical boundary is denoted B'.
Here, the CAD drawing of the parking spaces is shown as a drawing layer 401 in fig. 4, and positions, sizes, and the like of a plurality of parking spaces are shown in the drawing layer 401. The current boundary number of each parking space in the layer 401 is shown as an image 501 in fig. 5, and the image 501 is obtained by numbering each parking space in the layer 401, where the image 501 includes the numbers 0 to 196 of each parking space.
Step 1, judging the type of the boundary point of the current point.
Here, the current point may be a candidate pixel point in the above embodiment. Step 1 may be achieved by the following steps a to c:
step a: if the current point is an outer boundary starting point (i.e., the previous pixel has gray value 0), then the current boundary number NBD += 1, i2 = i, j2 = j - 1;
here, j-1 indicates a pixel point immediately previous to the j-th pixel point. The outer boundary starting point can be understood as an outer boundary of the parking space boundary.
step b: if the current point is a hole boundary starting point (i.e., the next pixel has gray value 0), then the current boundary number NBD += 1, i2 = i, j2 = j + 1;
here, j +1 denotes a pixel next to the jth pixel. The starting point of the hole boundary can be understood as the inner ring boundary of the parking space boundary.
Step c: otherwise, go to step 4.
Step 2, determining, according to the type of B', the outer-ring boundary of the parking space on which the current boundary B lies.
Step 3, starting to track the boundary from any starting point (i, j) on the boundary.
Here, step 3 may be implemented by the following steps a to d:
step a: with the candidate pixel (i, j) as the center and the first background pixel (i2, j2) as the starting point, search clockwise in the 4- (or 8-) neighborhood of (i, j) for a non-0 pixel. If one is found, the first foreground pixel (i1, j1) is the first non-0 pixel in the clockwise direction; otherwise set f(i, j) = -NBD and go to step 4;
step b: i2 = i1, j2 = j1, i3 = i, j3 = j;
step c: with the candidate pixel (i3, j3) as the center and the point after the background pixel (i2, j2) as the starting point, search counterclockwise in the 4- (or 8-) neighborhood of (i3, j3) for a non-0 pixel. If one is found, the second foreground pixel (i4, j4) is the first non-0 pixel in the counterclockwise direction;
step d: if the pixel (i3, j3 + 1) next to the candidate pixel is a 0-pixel that was already examined in step c above, then f(i3, j3) = -NBD; if (i3, j3 + 1) is not such an examined 0-pixel and f(i3, j3) = 1, then f(i3, j3) = NBD.
Step 4, if f(i, j) ≠ 1, set the historical boundary number to |f(i, j)|; then continue scanning from (i, j + 1), ending when the scan reaches the bottom-right corner of the picture.
Through steps 1 to 4, every contour line can be found, and the length and width directions of each parking space and the pixel position of its center point are determined from the four outer boundary points of each contour line. Meanwhile, to reduce duplicated boundaries, an overlap check between two rectangles is added: when the overlap exceeds a certain threshold, the two contours are regarded as one and the larger contour is kept. The actual length and width of a parking space and the real spatial position of its center point can then be determined from the real-world scale corresponding to each pixel. The point cloud coordinates of the real parking garage are thus established; a sketch using an off-the-shelf contour-extraction routine follows.
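In practice, the border-following procedure of steps 1 to 4 corresponds closely to the algorithm behind OpenCV's findContours, so a hedged sketch of the extraction might look as follows (the file name is a placeholder, and OpenCV 4.x return signatures are assumed):

```python
import cv2

# Sketch: extract parking-space rectangles from a binarized CAD drawing.
# cv2.findContours implements the same border-following idea as steps 1-4;
# "cad_layer.png" is a placeholder path.
img = cv2.imread("cad_layer.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

spaces = []
for c in contours:
    rect = cv2.minAreaRect(c)         # ((cx, cy), (width, length), angle)
    (cx, cy), (w, h), angle = rect
    if w * h > 0:                     # skip degenerate contours
        spaces.append(rect)
# a de-duplication pass over heavily overlapping rectangles, keeping the
# larger one, would mirror the repetition-degree check described above
```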
And secondly, combining coarse matching and fine matching to obtain an ideal matching result.
Here, the matching relationship between the truth-value point cloud and the reconstructed point cloud is manually checked with a visualization tool. As shown in fig. 6, the truth-value point cloud 601 and the reconstructed point cloud 602 are roughly matched by manually selecting 4 pairs of matching parking spaces. In fig. 6, truth-value parking spots 61 and 62 in the point cloud 601 are manually selected and roughly matched with reconstructed parking spots 63 and 64 in the reconstructed point cloud 602.
Based on the rough matching result shown in fig. 6, a similarity-transformation (Sim(3)) matching process can be performed through the following steps a to d:
step a: determine the centroids of the truth-value parking space point cloud and the reconstructed parking space point cloud respectively;
here, the centroid of the truth-value point cloud X is shown in formula (1):

u_x = (1/N_x) · Σ_{i=1..N_x} x_i (1)

In formula (1), N_x is the number of points in the truth-value parking space point cloud, and x_i represents any point in the truth-value parking space point cloud.
The centroid of the reconstructed point cloud P is shown in formula (2):

u_p = (1/N_p) · Σ_{i=1..N_p} p_i (2)
step b: subtracting the corresponding centroid from the original point cloud to obtain a point cloud with the centroid removed;
here, the centroid-removed point cloud X' of the truth-value parking space point cloud X is shown in formula (3):

X' = {x_i - u_x} = {x'_i} (3);
the centroid-removed point cloud P' of the reconstructed parking space point cloud P is shown in formula (4):

P' = {p_i - u_p} = {p'_i} (4);
step c: iteratively obtaining a closest point between the true value parking spot point cloud and the reconstructed parking spot point cloud, and establishing a matching relation;
on the basis of realizing rough matching through manual selection in fig. 6, through one iteration of the steps a to c, rough registration of the true value parking spot cloud and the reconstructed parking spot cloud is realized, and a rough registration result 701 in fig. 7 is obtained; the horizontal and vertical coordinates in the registration result 701 respectively indicate the dimensions of the CAD drawing.
Step d: the W matrix formed by the newly generated two point clouds is determined and SVD decomposition is performed.
Wherein the W matrix is shown in formula (5):

W = Σ_{i=1..N} p'_i x'_i^T (5)
In formula (5), singular value decomposition of W yields W = U D V^T, where U and V are orthogonal matrices and D is diagonal; the rotation of the reconstructed point cloud P relative to the truth-value point cloud X is then shown in formula (6):
R = V U^T (6);
the displacement of the reconstructed parking spot cloud P relative to the true parking spot cloud X is shown in the formula (7):
t = u_x - s R u_p (7);
In formula (7), s denotes the translational scale factor, that is, the displacement scale coefficient, described above, between the centroid of the truth-value point cloud and the rotated centroid of the reconstructed point cloud.
On the basis of the coarse registration result 701 in fig. 7, multiple iterations of steps a to d achieve the precise registration of the truth-value and reconstructed parking space point clouds, giving the precise registration result 801 shown in fig. 8; the horizontal and vertical axes in the registration result 801 indicate the dimensions of the CAD drawing. A consolidated sketch of one such iteration follows.
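The consolidated sketch below runs one iteration of steps a to d, mapping each line to formulas (1) to (7); the norm-ratio form of the scale s is an assumption consistent with the description above:

```python
import numpy as np
from scipy.spatial import cKDTree

def sim3_step(X, P):
    """One registration iteration over the two clouds, following
    formulas (1) to (7) with closest-point matching."""
    u_x, u_p = X.mean(axis=0), P.mean(axis=0)   # formulas (1) and (2)
    Xc, Pc = X - u_x, P - u_p                   # formulas (3) and (4)
    _, idx = cKDTree(Pc).query(Xc)              # step c: closest points
    W = Pc[idx].T @ Xc                          # formula (5)
    U, D, Vt = np.linalg.svd(W)
    R = Vt.T @ U.T                              # formula (6): R = V U^T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    s = np.linalg.norm(u_x) / np.linalg.norm(R @ u_p)  # assumed scale form
    t = u_x - s * (R @ u_p)                     # formula (7)
    return s, R, t
```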
And thirdly, determining the overlapping degree (IoU) of the parking spaces between the truth-value parking space point cloud and the reconstructed parking space point cloud, and the closest distance of the same parking space between the two point clouds.
Here, after the two point clouds are matched, the size of the overlapping area of each parking space can be determined. If the area of the overlapping part is S' and the area of the true parking space is S, the IoU is the ratio of S' to S.
In the embodiment of the application, first, the parking space point cloud of the garage as actually designed is obtained from the CAD drawing by contour detection. Then, a registration of the truth-value parking space point cloud and the reconstructed parking space point cloud is obtained through manual verification followed by one coarse matching pass and one fine matching pass of ICP (Iterative Closest Point), yielding the final matching result. Finally, a reconstruction quality report of the reconstructed parking spaces is produced from the matching result. In this way, the truth-value parking space point cloud of the parking garage can be obtained from the CAD drawing, the truth-value garage point cloud and the reconstructed garage point cloud can be registered effectively, and the overlap between the reconstructed garage and the truth-value garage can be evaluated. A semi-automatic evaluation process is thus realized: a quality report of the reconstructed parking spaces can be obtained quickly without heavy manual effort, and operation is simple.
The embodiment of the present application provides a rebuild parking stall evaluation device, and fig. 9 is the structural component schematic diagram of this application embodiment rebuild parking stall evaluation device, as shown in fig. 9, rebuild parking stall evaluation device 900 includes:
a first determining module 901, configured to determine a reconstructed parking spot cloud reconstructed for a preset parking spot and a true-value parking spot cloud of the preset parking spot;
a first obtaining module 902, configured to obtain a first matching relationship between the reconstructed parking spot cloud and the truth-value parking spot cloud, where the reconstructed parking spot cloud and the truth-value parking spot cloud belong to the same preset parking spot;
a second determining module 903, configured to determine, based on the first matching relationship, a second matching relationship between each point in the reconstructed parking spot point cloud and each point in the truth-value parking spot point cloud;
a first registration module 904, configured to register the reconstructed parking space point cloud and the truth-value parking space point cloud based on the second matching relationship, so as to obtain a registration result;
a third determining module 905, configured to determine a reconstruction accuracy of the reconstructed parking spot cloud based on the registration result.
In some embodiments, the first determining module 901 includes:
the first acquisition submodule is used for acquiring a true value engineering image of the preset parking space;
the first conversion submodule is used for converting the truth-value engineering image into a binary image;
and the first extraction submodule is used for carrying out contour extraction on the binary image to obtain the truth-value parking space point cloud.
In some embodiments, the first extraction sub-module comprises:
the first scanning unit is used for scanning any pixel point in the binary image according to a preset scanning sequence;
the first determining unit is used for responding to a candidate pixel point with a gray value not being 0 after scanning, and determining the target type of the parking space boundary to which the candidate pixel point belongs;
and the first tracking unit is used for carrying out boundary tracking in the binary image based on the target type to obtain the truth-value parking space point cloud.
In some embodiments, the first determining unit includes:
the first determining subunit is configured to determine, in the scanning line where the candidate pixel is located, a previous pixel of the candidate pixel and a subsequent pixel of the candidate pixel;
and the second determining subunit is used for determining the target type of the parking space boundary to which the candidate pixel point belongs based on the gray value of the previous pixel point and the gray value of the next pixel point.
In some embodiments, the first tracking unit is further configured to: under the condition that the target type of the candidate pixel point is a boundary point type, determining a first background pixel point with a first gray value of 0 in the neighborhood of the candidate pixel point according to a first tracking direction by taking the candidate pixel point as a starting point; with the first background pixel point as a starting point, searching a first foreground pixel point with a gray value not being 0 in the neighborhood of the candidate pixel point according to the first tracking direction; according to a second tracking direction, with a next background pixel point of the first background pixel points as a starting point, searching a second foreground pixel point with a gray value not being 0 in the neighborhood of the candidate pixel point; wherein the first tracking direction is different from the second tracking direction; and determining the truth-value parking space point cloud based on the first foreground pixel points and the second foreground pixel points.
In some embodiments, the first tracking unit is further configured to: determining a target boundary area of each parking space in the binary image based on the first foreground pixel points and the second foreground pixel points; determining size information and position information of each parking space based on the target boundary area of each parking space and a preset number of the target boundary area; and obtaining the truth-value parking spot cloud based on the size information and the position information of each parking spot.
In some embodiments, the apparatus further comprises:
the fourth determining module is used for determining the outer contour of each boundary area under the condition that the number of the boundary areas of each parking space is at least two;
a fifth determining module, configured to determine a degree of overlap between outer contours corresponding to the at least two interface regions;
and the sixth determining module is used for responding to the fact that the overlapping degree is larger than a preset threshold value, and taking the boundary area corresponding to the outer contour with the largest area as the target boundary area of each parking space.
In some embodiments, the second determining module 903 comprises:
the first centroid removing submodule is used for respectively performing centroid removing operation on the reconstructed parking spot cloud and the true value parking spot cloud to obtain a centroid removing reconstructed parking spot cloud and a centroid removing true value parking spot cloud;
the first searching submodule is used for searching, for any point in the centroid-removed truth-value parking space point cloud, the target point closest to that point in the centroid-removed reconstructed parking space point cloud;
and the first establishing sub-module is used for establishing the second matching relation between any point and the target point.
In some embodiments, the first registration module 904 comprises:
the first determining submodule is used for determining a first conversion parameter of the reconstructed parking spot cloud relative to the truth-value parking spot cloud based on the second matching relation;
and the first registration submodule is used for registering the reconstructed parking spot cloud and the true parking spot cloud based on the first conversion parameter to obtain the registration result.
In some embodiments, the first conversion parameter comprises a rotation parameter and a translation parameter, and the first determination submodule comprises:
the first marking unit is used for marking the parking places in the centroid-removed reconstructed parking place point cloud according to the second matching relation to obtain a marked reconstructed parking place point cloud;
the first fusion unit is used for fusing the marked reconstructed parking spot cloud and the centroid-removed true value parking spot cloud to obtain a fusion result;
the first decomposition unit is used for decomposing the fusion result to obtain the rotation parameter;
and the second determination unit is used for determining the translation parameter of the reconstructed parking spot cloud relative to the truth-value parking spot cloud based on the rotation parameter, the centroid of the reconstructed parking spot cloud and the centroid of the truth-value parking spot cloud.
In some embodiments, the second determining unit includes:
the first rotation subunit is used for rotating the mass center of the reconstructed parking spot cloud based on the rotation parameters to obtain a rotated mass center;
the third determining subunit is used for determining the displacement scale coefficient between the centroid of the truth-value parking space point cloud and the rotated centroid;
the first adjusting subunit is used for scaling the rotated centroid by the displacement scale coefficient to obtain the adjusted centroid of the reconstructed parking space point cloud;
and the fourth determining subunit is used for determining the translation parameter based on the difference value between the mass center of the truth-value parking space point cloud and the adjusted mass center.
In some embodiments, the first registration sub-module comprises:
the first conversion unit is used for respectively rotating and translating the reconstructed parking spot cloud based on the rotation parameter and the translation parameter to obtain a converted reconstructed parking spot cloud;
a third determining unit, configured to determine a second conversion parameter between the converted reconstructed parking spot cloud and the true parking spot cloud;
a fourth determining unit, configured to determine, based on the second conversion parameter, a target reconstruction point and a target true value point that belong to the same preset parking space in the converted reconstruction parking space point cloud and the true value parking space point cloud;
and the first matching unit is used for matching the target reconstruction point with the target true value point to obtain the registration result.
In some embodiments, the third determining module 905 includes:
the second determining submodule is used for determining the overlapping area of the reconstructed parking spot cloud and the truth-value parking spot cloud representing the same parking spot based on the registration result;
a third determining submodule, configured to determine, based on the overlapping area and the true value area of the same parking spot, an overlapping degree of the same parking spot between the reconstructed parking spot point cloud and the true value parking spot point cloud;
the fourth determining submodule is used for determining the minimum distance between the reconstructed parking spot cloud and the truth-value parking spot cloud in the same parking spot;
a fifth determination submodule configured to determine the accuracy based on the overlap and the minimum distance.
It should be noted that the above description of the embodiment of the apparatus, similar to the description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the above-mentioned reconstructed parking space evaluation method is implemented in the form of a software functional module, and is sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a hard disk drive, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
The embodiment of the application further provides a computer program product, the computer program product comprises computer executable instructions, and after the computer executable instructions are executed, the method for evaluating the reconstructed parking space provided by the embodiment of the application can be realized.
The embodiment of the application further provides a computer storage medium, wherein a computer executable instruction is stored on the computer storage medium, and when the computer executable instruction is executed by a processor, the method for evaluating the reconstructed parking space provided by the embodiment is implemented.
An embodiment of the present application provides a computer device. Fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 10, the computer device 1000 includes: a processor 1001, at least one communication bus, a communication interface 1002, at least one external communication interface, and a memory 1003. The communication interface 1002 is configured to enable connected communication between these components; it may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface. The processor 1001 is configured to execute a reconstructed parking space evaluation program in the memory, so as to implement the reconstructed parking space evaluation method provided in the foregoing embodiments.
The descriptions of the embodiments of the reconstructed parking space evaluation apparatus, the computer device, and the storage medium are similar to those of the method embodiments above, and they have technical descriptions and beneficial effects similar to those of the corresponding method embodiments; for reasons of space, these are not repeated here. For technical details not disclosed in the embodiments of the reconstructed parking space evaluation apparatus, the computer device, and the storage medium of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes above do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application. The serial numbers of the embodiments above are for description only and do not indicate the relative merits of the embodiments.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of an embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.

Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium, and when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.

Alternatively, if the integrated units above are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.

The above is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A reconstructed parking space evaluation method, characterized by comprising the following steps:
determining a reconstructed parking space point cloud reconstructed for a preset parking space and a true-value parking space point cloud of the preset parking space;
acquiring a first matching relationship between the reconstructed parking space point cloud and the true-value parking space point cloud, wherein the reconstructed parking space point cloud and the true-value parking space point cloud belong to the same preset parking space;
determining a second matching relationship between each point in the reconstructed parking space point cloud and each point in the true-value parking space point cloud based on the first matching relationship;
registering the reconstructed parking space point cloud and the true-value parking space point cloud based on the second matching relationship to obtain a registration result; and
determining the reconstruction accuracy of the reconstructed parking space point cloud based on the registration result.
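For illustration only, a minimal end-to-end sketch of this flow in Python, assuming 2-D point clouds as NumPy arrays and a single given parking space (so the space-level first matching is already resolved); the nearest-neighbour matching, SVD-based registration, and mean-residual accuracy are standard stand-ins for the steps detailed in the dependent claims:

```python
# Minimal sketch, assuming both clouds describe the same preset parking
# space and that accuracy is summarized as a mean registration residual,
# an illustrative choice rather than the patent's exact metric.
import numpy as np
from scipy.spatial import cKDTree

def evaluate(recon: np.ndarray, truth: np.ndarray) -> float:
    recon_c = recon - recon.mean(axis=0)        # de-centroid both clouds
    truth_c = truth - truth.mean(axis=0)
    _, idx = cKDTree(recon_c).query(truth_c)    # "second matching relationship"
    H = recon_c[idx].T @ truth_c                # cross-covariance of matches
    U, _, Vt = np.linalg.svd(H)                 # decompose to get the rotation
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = truth.mean(axis=0) - R @ recon.mean(axis=0)
    residual = np.linalg.norm(recon[idx] @ R.T + t - truth, axis=1)
    return float(residual.mean())               # lower residual, higher accuracy
```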
2. The method of claim 1, wherein the determining a true-value parking space point cloud of the preset parking space comprises:
acquiring a true-value engineering image of the preset parking space;
converting the true-value engineering image into a binary image; and
performing contour extraction on the binary image to obtain the true-value parking space point cloud.
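For illustration, a sketch of this step with OpenCV; the synthetic drawing and the threshold of 127 are assumptions standing in for the true-value engineering image:

```python
import cv2
import numpy as np

# Synthetic stand-in for the true-value engineering image of one space.
drawing = np.zeros((200, 300), np.uint8)
cv2.rectangle(drawing, (40, 50), (140, 180), 255, 2)

# Binarize, then extract contours; the stacked contour pixels form the
# true-value parking space point cloud.
_, binary = cv2.threshold(drawing, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
truth_cloud = np.vstack([c.reshape(-1, 2) for c in contours])
```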
3. The method of claim 2, wherein the performing contour extraction on the binary image to obtain the true-value parking space point cloud comprises:
scanning the pixel points in the binary image in a preset scanning order;
in response to scanning a candidate pixel point whose gray value is not 0, determining a target type of the parking space boundary to which the candidate pixel point belongs; and
performing boundary tracking in the binary image based on the target type to obtain the true-value parking space point cloud.
4. The method of claim 3, wherein the determining, in response to scanning a candidate pixel point whose gray value is not 0, a target type of the parking space boundary to which the candidate pixel point belongs comprises:
determining, in the scan line where the candidate pixel point is located, the pixel point preceding the candidate pixel point and the pixel point following it; and
determining the target type of the parking space boundary to which the candidate pixel point belongs based on the gray value of the preceding pixel point and the gray value of the following pixel point.
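A sketch of this neighbour test, assuming (as in Suzuki-Abe border following) that a background-to-foreground transition marks an outer boundary and a foreground-to-background transition marks a hole boundary; the type names are illustrative:

```python
import numpy as np

def boundary_type(scan_line: np.ndarray, col: int) -> str:
    # Gray values of the preceding and following pixels on the scan line;
    # pixels off the image edge are treated as background (an assumption).
    prev_val = scan_line[col - 1] if col > 0 else 0
    next_val = scan_line[col + 1] if col + 1 < scan_line.size else 0
    if scan_line[col] != 0 and prev_val == 0:
        return "outer"      # entered from background: outer boundary point
    if scan_line[col] != 0 and next_val == 0:
        return "hole"       # exits into background: hole boundary point
    return "interior"
```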
5. The method of claim 3 or 4, wherein the performing boundary tracking in the binary image based on the target type to obtain the true-value parking space point cloud comprises:
in a case where the target type of the candidate pixel point is a boundary point type, determining, with the candidate pixel point as a starting point, a first background pixel point with a gray value of 0 in the neighborhood of the candidate pixel point along a first tracking direction;
searching, with the first background pixel point as a starting point, for a first foreground pixel point with a gray value other than 0 in the neighborhood of the candidate pixel point along the first tracking direction;
searching, with the background pixel point following the first background pixel point as a starting point, for a second foreground pixel point with a gray value other than 0 in the neighborhood of the candidate pixel point along a second tracking direction, wherein the first tracking direction is different from the second tracking direction; and
determining the true-value parking space point cloud based on the first foreground pixel points and the second foreground pixel points.
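A compact Moore-neighbour border-following sketch in the spirit of this claim: from a boundary pixel, the 8-neighbourhood is scanned clockwise starting just past the last background pixel until the walk returns to its start. It assumes the pixel to the left of the start is background (true when the start was found by a raster scan) and omits Jacob's stopping criterion for brevity; it is not the patent's exact two-direction procedure:

```python
import numpy as np

OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
           (1, 0), (1, -1), (0, -1), (-1, -1)]   # clockwise, starting "up"

def trace_border(binary: np.ndarray, start: tuple) -> list:
    border, cur = [start], start
    prev = (start[0], start[1] - 1)              # background pixel left of start
    while True:
        d = OFFSETS.index((prev[0] - cur[0], prev[1] - cur[1]))
        for k in range(1, 9):
            dy, dx = OFFSETS[(d + k) % 8]
            ny, nx = cur[0] + dy, cur[1] + dx
            if 0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1] \
                    and binary[ny, nx]:
                pdy, pdx = OFFSETS[(d + k - 1) % 8]
                prev = (cur[0] + pdy, cur[1] + pdx)  # last background pixel seen
                cur = (ny, nx)
                break
        else:
            return border                        # isolated pixel, no neighbours
        if cur == start:
            return border                        # walk closed on itself
        border.append(cur)
```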
6. The method of claim 5, wherein the determining the true-value parking space point cloud based on the first foreground pixel points and the second foreground pixel points comprises:
determining a target boundary area of each parking space in the binary image based on the first foreground pixel points and the second foreground pixel points;
determining size information and position information of each parking space based on the target boundary area of each parking space and a preset number of the target boundary area; and
obtaining the true-value parking space point cloud based on the size information and the position information of each parking space.
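One plausible reading of the size-and-position step, assuming a minimum-area rotated rectangle as the geometric primitive (the claim does not fix one):

```python
import cv2
import numpy as np

def space_geometry(border_points: np.ndarray) -> dict:
    # border_points: (N, 2) array of one parking space's boundary pixels.
    (cx, cy), (w, h), angle = cv2.minAreaRect(border_points.astype(np.float32))
    return {"position": (cx, cy), "size": (w, h), "angle": angle}
```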
7. The method of claim 6, further comprising:
determining the outer contour of each boundary area in a case where the number of boundary areas of a parking space is at least two;
determining the degree of overlap between the outer contours corresponding to the at least two boundary areas; and
in response to the degree of overlap being greater than a preset threshold, taking the boundary area corresponding to the outer contour with the largest area as the target boundary area of the parking space.
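A sketch of this selection, assuming overlap is measured as an IoU over rasterized contours and that 0.5 is an illustrative threshold:

```python
import cv2
import numpy as np

def pick_target_region(contours, image_shape, iou_thresh=0.5):
    # Rasterize every candidate contour, then compare their joint IoU.
    masks = []
    for c in contours:
        m = np.zeros(image_shape, np.uint8)
        cv2.drawContours(m, [c], -1, 255, cv2.FILLED)
        masks.append(m.astype(bool))
    inter = np.logical_and.reduce(masks).sum()
    union = np.logical_or.reduce(masks).sum()
    if union > 0 and inter / union > iou_thresh:
        # Heavily overlapping candidates: keep the largest outer contour.
        return max(contours, key=cv2.contourArea)
    return None  # candidates do not overlap enough to merge
```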
8. The method of any one of claims 1 to 7, wherein the determining a second matching relationship between each point in the reconstructed parking space point cloud and each point in the true-value parking space point cloud based on the first matching relationship comprises:
performing a de-centroid operation on the reconstructed parking space point cloud and the true-value parking space point cloud respectively, to obtain a de-centroided reconstructed parking space point cloud and a de-centroided true-value parking space point cloud;
for any point in the de-centroided true-value parking space point cloud, searching the de-centroided reconstructed parking space point cloud for the target point closest to that point; and
establishing the second matching relationship between that point and the target point.
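For illustration, a sketch of this step using SciPy's KD-tree for the nearest-point search:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_matches(recon: np.ndarray, truth: np.ndarray):
    # De-centroid both clouds, then match each truth point to its nearest
    # reconstructed point; (idx, dists) encode the second matching relationship.
    recon_c = recon - recon.mean(axis=0)
    truth_c = truth - truth.mean(axis=0)
    dists, idx = cKDTree(recon_c).query(truth_c)
    return idx, dists
```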
9. The method of any one of claims 1 to 8, wherein the registering the reconstructed parking space point cloud and the true-value parking space point cloud based on the second matching relationship to obtain a registration result comprises:
determining a first conversion parameter of the reconstructed parking space point cloud relative to the true-value parking space point cloud based on the second matching relationship; and
registering the reconstructed parking space point cloud and the true-value parking space point cloud based on the first conversion parameter to obtain the registration result.
10. The method of claim 9, wherein the first conversion parameter comprises a rotation parameter and a translation parameter, and the determining a first conversion parameter of the reconstructed parking space point cloud relative to the true-value parking space point cloud based on the second matching relationship comprises:
marking the parking spaces in the de-centroided reconstructed parking space point cloud according to the second matching relationship, to obtain a marked reconstructed parking space point cloud;
fusing the marked reconstructed parking space point cloud and the de-centroided true-value parking space point cloud to obtain a fusion result;
decomposing the fusion result to obtain the rotation parameter; and
determining the translation parameter of the reconstructed parking space point cloud relative to the true-value parking space point cloud based on the rotation parameter, the centroid of the reconstructed parking space point cloud, and the centroid of the true-value parking space point cloud.
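Read as a standard Kabsch step (an assumption consistent with, but not quoted from, the claim), the "fusion result" is the cross-covariance of the matched de-centroided clouds, and its SVD yields the rotation parameter:

```python
import numpy as np

def estimate_rotation(recon_c: np.ndarray, truth_c: np.ndarray,
                      idx: np.ndarray) -> np.ndarray:
    # recon_c[idx] plays the role of the "marked" reconstructed cloud:
    # reconstructed points reordered to pair with the truth points.
    H = recon_c[idx].T @ truth_c            # fusion result (cross-covariance)
    U, _, Vt = np.linalg.svd(H)             # decomposition of the fusion result
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R                                # the rotation parameter
```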
11. The method of claim 10, wherein the determining the translation parameter of the reconstructed parking space point cloud relative to the true-value parking space point cloud based on the rotation parameter, the centroid of the reconstructed parking space point cloud, and the centroid of the true-value parking space point cloud comprises:
rotating the centroid of the reconstructed parking space point cloud based on the rotation parameter to obtain a rotated centroid;
determining a displacement scale coefficient between the centroid of the true-value parking space point cloud and the rotated centroid;
adjusting the rotated centroid based on the displacement scale coefficient to obtain the adjusted centroid of the reconstructed parking space point cloud; and
determining the translation parameter based on the difference between the centroid of the true-value parking space point cloud and the adjusted centroid.
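A sketch of this translation step in the spirit of the Umeyama similarity transform; estimating the displacement scale coefficient as a ratio of RMS radii is an assumption, not the patent's stated formula:

```python
import numpy as np

def estimate_translation(R: np.ndarray, recon: np.ndarray, truth: np.ndarray):
    mu_r, mu_t = recon.mean(axis=0), truth.mean(axis=0)
    # Displacement scale coefficient: ratio of the two clouds' RMS radii.
    s = np.sqrt(((truth - mu_t) ** 2).sum() / ((recon - mu_r) ** 2).sum())
    adjusted = s * (R @ mu_r)               # rotated, then scaled centroid
    t = mu_t - adjusted                     # the translation parameter
    return s, t
```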
12. The method of any one of claims 9 to 11, wherein the registering the reconstructed parking space point cloud and the true-value parking space point cloud based on the first conversion parameter to obtain the registration result comprises:
respectively rotating and translating the reconstructed parking space point cloud based on the rotation parameter and the translation parameter, to obtain a converted reconstructed parking space point cloud;
determining a second conversion parameter between the converted reconstructed parking space point cloud and the true-value parking space point cloud;
determining, based on the second conversion parameter, a target reconstruction point in the converted reconstructed parking space point cloud and a target true-value point in the true-value parking space point cloud that belong to the same preset parking space; and
matching the target reconstruction point with the target true-value point to obtain the registration result.
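A single-pass sketch of this step; iterating transform-then-rematch in ICP fashion is a natural extension the claim leaves open:

```python
import numpy as np
from scipy.spatial import cKDTree

def register(recon: np.ndarray, truth: np.ndarray,
             R: np.ndarray, t: np.ndarray, s: float = 1.0):
    converted = s * (recon @ R.T) + t             # converted reconstructed cloud
    dists, idx = cKDTree(converted).query(truth)  # re-associate nearest points
    return converted, idx, dists                  # the registration result
```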
13. The method of any one of claims 1 to 12, wherein the determining the reconstruction accuracy of the reconstructed parking space point cloud based on the registration result comprises:
determining, based on the registration result, the overlap area of the reconstructed parking space point cloud and the true-value parking space point cloud representing the same parking space;
determining the degree of overlap of the same parking space between the reconstructed parking space point cloud and the true-value parking space point cloud based on the overlap area and the true-value area of the same parking space;
determining the minimum distance between the reconstructed parking space point cloud and the true-value parking space point cloud within the same parking space; and
determining the reconstruction accuracy based on the degree of overlap and the minimum distance.
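A sketch of this accuracy measure with Shapely (an assumed geometry library, not named by the patent): the overlap degree is the intersection area over the true-value area, combined with a minimum-distance check using an illustrative tolerance:

```python
from shapely.geometry import Polygon

def space_accuracy(recon_corners, truth_corners, min_dist, dist_tol=0.1):
    # recon_corners / truth_corners: corner coordinates of the same space.
    truth_poly = Polygon(truth_corners)
    overlap_area = Polygon(recon_corners).intersection(truth_poly).area
    overlap_degree = overlap_area / truth_poly.area
    return overlap_degree, min_dist <= dist_tol
```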
14. A reconstructed parking space evaluation apparatus, characterized in that the apparatus comprises:
a first determining module, configured to determine a reconstructed parking space point cloud reconstructed for a preset parking space and a true-value parking space point cloud of the preset parking space;
a first acquiring module, configured to acquire a first matching relationship between the reconstructed parking space point cloud and the true-value parking space point cloud, wherein the reconstructed parking space point cloud and the true-value parking space point cloud belong to the same preset parking space;
a second determining module, configured to determine a second matching relationship between each point in the reconstructed parking space point cloud and each point in the true-value parking space point cloud based on the first matching relationship;
a first registration module, configured to register the reconstructed parking space point cloud and the true-value parking space point cloud based on the second matching relationship to obtain a registration result; and
a third determining module, configured to determine the reconstruction accuracy of the reconstructed parking space point cloud based on the registration result.
15. A computer storage medium, characterized in that computer-executable instructions are stored on the computer storage medium, and when executed, the computer-executable instructions implement the reconstructed parking space evaluation method of any one of claims 1 to 13.
16. A computer device, characterized in that the computer device comprises a memory and a processor, the memory stores computer-executable instructions, and the processor, when executing the computer-executable instructions on the memory, implements the reconstructed parking space evaluation method of any one of claims 1 to 13.
CN202210186241.4A 2022-02-28 2022-02-28 Method, device and equipment for evaluating reconstructed parking space and storage medium Pending CN114565648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210186241.4A CN114565648A (en) 2022-02-28 2022-02-28 Method, device and equipment for evaluating reconstructed parking space and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210186241.4A CN114565648A (en) 2022-02-28 2022-02-28 Method, device and equipment for evaluating reconstructed parking space and storage medium

Publications (1)

Publication Number Publication Date
CN114565648A true CN114565648A (en) 2022-05-31

Family

ID=81715584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210186241.4A Pending CN114565648A (en) 2022-02-28 2022-02-28 Method, device and equipment for evaluating reconstructed parking space and storage medium

Country Status (1)

Country Link
CN (1) CN114565648A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116029952A (en) * 2022-07-27 2023-04-28 荣耀终端有限公司 Point cloud evaluation method and related equipment thereof
CN116029952B (en) * 2022-07-27 2023-10-20 荣耀终端有限公司 Point cloud evaluation method and related equipment thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination