CN118279849A - Parking space detection method, electronic equipment and vehicle
- Publication number
- CN118279849A (application CN202311227885.4A)
- Authority
- CN
- China
- Prior art keywords
- line
- parking space
- image
- vehicle
- vehicle position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
A parking space detection method, electronic equipment and a vehicle are provided. The method comprises the following steps: acquiring a first image and a second image of a parking space; inputting the first image into a trained parking space detection model based on a deep learning network to obtain parking space characterization information; obtaining a first parking space centerline vector based on the parking space characterization information; obtaining a first parking space line and a second parking space line based on the second image; obtaining a second parking space centerline vector based on the first parking space line and the second parking space line; matching the first parking space centerline vector with the second parking space centerline vector; when the first parking space centerline vector and the second parking space centerline vector are successfully matched, obtaining fused parking space characterization information based on the parking space characterization information, the first parking space line and the second parking space line; and obtaining a parking space detection result based on the fused parking space characterization information. According to the scheme, fused parking space characterization information can be obtained, the stability and environmental adaptability of the algorithm are ensured, and the position accuracy and angle accuracy of detection are greatly improved.
Description
Technical Field
The application relates to the technical field of vehicles, in particular to a parking space detection method, electronic equipment and a vehicle.
Background
With the development and progress of technology, automatic parking systems are applied to more and more vehicles, and such systems can automatically detect parking spaces. In the related art, parking space detection methods either adapt poorly to the environment, so that the algorithm fails easily and its stability is poor, or suffer from limited annotation precision, which introduces deviations in the labeled position and angle of the parking space, so that the detected parking space line position and angle are of low accuracy.
Disclosure of Invention
This summary introduces a selection of concepts in simplified form that are further described in the detailed description. It is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
To address the existing problems, the application provides a parking space detection method, which comprises the following steps: acquiring a first image and a second image of a parking space; inputting the first image into a trained parking space detection model based on a deep learning network to obtain parking space characterization information; obtaining a first parking space centerline vector based on the parking space characterization information; obtaining a first parking space line and a second parking space line based on the second image, wherein the first parking space line is parallel to the second parking space line; obtaining a second parking space centerline vector based on the first parking space line and the second parking space line; matching the first parking space centerline vector with the second parking space centerline vector, and determining whether the first parking space centerline vector and the second parking space centerline vector are successfully matched; when the first parking space centerline vector and the second parking space centerline vector are successfully matched, obtaining fused parking space characterization information based on the parking space characterization information, the first parking space line and the second parking space line; and obtaining a parking space detection result based on the fused parking space characterization information.
Illustratively, the parking space characterization information includes an entry line first endpoint coordinate, an entry line second endpoint coordinate, and a dividing line angle.
Illustratively, the first parking space centerline vector is represented by a first centerline vector coordinate and a first centerline vector angle, and deriving the first parking space centerline vector based on the parking space characterization information comprises: averaging the entry line first endpoint coordinate and the entry line second endpoint coordinate to obtain the first centerline vector coordinate; the first centerline vector angle is equal to the dividing line angle.
Illustratively, deriving the first parking space line and the second parking space line based on the second image includes: obtaining a grayscale image based on the second image; performing a histogram equalization operation on the grayscale image to obtain an equalized grayscale image; performing noise reduction on the equalized grayscale image to obtain a noise-reduced grayscale image; converting the noise-reduced grayscale image into a binary image using an adaptive threshold method; performing a morphological opening operation on the binary image to obtain a noise-reduced binary image; skeletonizing the noise-reduced binary image to obtain a binary skeleton image; detecting lines in the binary skeleton image using Hough line detection to obtain potential parking space lines; and filtering the potential parking space lines using prior knowledge to obtain the first parking space line and the second parking space line.
Illustratively, obtaining the grayscale image based on the second image includes: converting the second image into an HSV image; and extracting the V-channel image of the HSV image as the grayscale image.
Illustratively, the first parking space line is represented by a first parking space line first endpoint coordinate and a first parking space line second endpoint coordinate; the second parking space line is represented by a second parking space line first endpoint coordinate and a second parking space line second endpoint coordinate.
Illustratively, the second parking space centerline vector is represented by a second centerline vector coordinate and a second centerline vector angle, and obtaining the second parking space centerline vector based on the first parking space line and the second parking space line includes: averaging the first parking space line first endpoint coordinate and the second parking space line first endpoint coordinate to obtain a second centerline first endpoint coordinate; averaging the first parking space line second endpoint coordinate and the second parking space line second endpoint coordinate to obtain a second centerline second endpoint coordinate; averaging the second centerline first endpoint coordinate and the second centerline second endpoint coordinate to obtain the second centerline vector coordinate; and obtaining the second centerline vector angle based on the second centerline first endpoint coordinate and the second centerline second endpoint coordinate.
Illustratively, matching the first parking space centerline vector with the second parking space centerline vector and determining whether they are successfully matched comprises: taking the absolute value of the difference between the first centerline vector angle and the second centerline vector angle to obtain a centerline included angle; computing the distance from the first centerline vector coordinate to the line carrying the second parking space centerline vector to obtain a centerline distance; and when the centerline included angle is not larger than a preset included angle and the centerline distance is not larger than a preset distance, determining that the first parking space centerline vector and the second parking space centerline vector are successfully matched.
The preset included angle is, for example, 30 degrees.
Illustratively, the fused parking space characterization information includes a first intersection coordinate, a second intersection coordinate, and a parking space angle, and obtaining the fused parking space characterization information based on the parking space characterization information, the first parking space line, and the second parking space line includes: obtaining an entry line based on the entry line first endpoint coordinate and the entry line second endpoint coordinate; obtaining the coordinate of the intersection of the first parking space line and the entry line, denoted the first intersection coordinate; obtaining the coordinate of the intersection of the second parking space line and the entry line, denoted the second intersection coordinate; the parking space angle is equal to the second centerline vector angle.
Illustratively, when the first parking space centerline vector and the second parking space centerline vector are not successfully matched, the parking space detection result is obtained based on the parking space characterization information alone.
Illustratively, training of the parking space detection model includes the following steps: constructing a data set, wherein the data set comprises a training set; inputting the sample images in the training set into a parking space detection model to be trained, which performs the following operations: extracting features from the sample image to obtain parking space feature images; obtaining parking space feature information based on the parking space feature images, and obtaining parking space characterization information of the parking space in the sample image based on the parking space feature information; computing a loss function based on the parking space feature information, and updating model parameters of the parking space detection model to be trained based on the loss function; and iterating these operations until the parking space detection model to be trained converges, to obtain the trained parking space detection model based on the deep learning network.
Illustratively, the parking space feature images comprise a parking space corner horizontal-axis coordinate feature image, a parking space corner vertical-axis coordinate feature image, an entry line horizontal-axis length feature image, an entry line vertical-axis length feature image, a dividing line sine value feature image and a dividing line cosine value feature image.
Illustratively, the parking space feature information includes parking space corner coordinates, an entry line length and a dividing line angle, and obtaining the parking space feature information based on the parking space feature images includes: obtaining the parking space corner coordinates based on the parking space corner horizontal-axis and vertical-axis coordinate feature images; obtaining the entry line length based on the entry line horizontal-axis and vertical-axis length feature images; and obtaining the dividing line angle based on the dividing line sine value feature image and the dividing line cosine value feature image.
Illustratively, the parking space feature images further include at least one of the following: a confidence feature image and an occupancy flag feature image.
Illustratively, when the parking space feature images include the confidence feature image, the parking space feature information further includes confidence information, and obtaining the parking space feature information based on the parking space feature images includes: obtaining the confidence information based on the confidence feature image; when the parking space feature images include the occupancy flag feature image, the parking space feature information further includes occupancy flag information, and obtaining the parking space feature information based on the parking space feature images includes: obtaining the occupancy flag information based on the occupancy flag feature image.
Another aspect of the present application provides an electronic device, including a processor and a memory, where the memory stores a computer program which, when executed by the processor, causes the processor to execute the above parking space detection method.
In yet another aspect, the present application provides a vehicle including the electronic device described above.
According to the parking space detection method, the electronic device and the vehicle, the first image is processed by the parking space detection model based on the deep learning network to obtain the parking space characterization information, from which the first parking space centerline vector is derived. This model has good generalization and environmental adaptability, but although a parking space detection result could be obtained from the parking space characterization information alone, its position accuracy and angle accuracy would be low. The first parking space line and the second parking space line are therefore obtained by a conventional parking space detection method, which yields higher position and angle accuracy, and the second parking space centerline vector is derived from them. The first parking space centerline vector is then matched with the second parking space centerline vector; when the matching succeeds, the parking space characterization information, the first parking space line and the second parking space line are fused to obtain the fused parking space characterization information, and the position and angle accuracy of the parking space detection result obtained from the fused characterization information is greatly improved. In addition, since the conventional parking space detection method is only used to obtain the first and second parking space lines, no complex post-processing matching and analysis is needed, which greatly reduces the computation of the conventional branch and simplifies its processing flow.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention, are incorporated in and constitute a part of this specification, and serve to illustrate the invention together with its embodiments without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
In the accompanying drawings:
fig. 1 shows a schematic flow chart of a parking space detection method according to an embodiment of the application.
Fig. 2 shows a schematic diagram of parking space characterization information according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of the fused parking space characterization information according to an embodiment of the present application.
Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, and not all, embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. Based on the embodiments of the invention described in this application, all other embodiments obtained by a person skilled in the art without inventive effort shall fall within the scope of the invention.
Next, a parking space detection method according to an embodiment of the present application will be described with reference to fig. 1 to 3, fig. 1 shows a schematic flowchart of the parking space detection method according to an embodiment of the present application, fig. 2 shows a schematic diagram of parking space characterization information according to an embodiment of the present application, and fig. 3 shows a schematic diagram of fused parking space characterization information according to an embodiment of the present application. As shown in fig. 1, the parking space detection method 100 may include the following steps:
In step S110, a first image and a second image of the parking space are acquired.
In step S120, the first image is input into a trained parking space detection model based on a deep learning network, so as to obtain parking space characterization information.
In step S130, a first parking space centerline vector is obtained based on the parking space characterization information.
In step S140, a first parking space line and a second parking space line are obtained based on the second image, and the first parking space line is parallel to the second parking space line.
In step S150, a second parking space centerline vector is obtained based on the first parking space line and the second parking space line.
In step S160, the first parking space centerline vector and the second parking space centerline vector are matched, and it is determined whether they are successfully matched.
In step S170, when the first parking space centerline vector and the second parking space centerline vector are successfully matched, fused parking space characterization information is obtained based on the parking space characterization information, the first parking space line and the second parking space line.
In step S180, a parking space detection result is obtained based on the fused parking space characterization information.
In the embodiment of the application, the first image and the second image of the parking space can be acquired through an Around View Monitor (AVM) system mounted on the vehicle. The AVM captures images through several (generally four) ultra-wide-angle fisheye lenses and applies a dedicated algorithm for distortion correction and stitching, forming a panoramic image of the vehicle's surroundings. For example, the first image and the second image may be images directly output by the AVM, i.e., they may be identical; alternatively, the first image or the second image may be obtained by secondary processing of the AVM output. For example, the field of view of the image output by the AVM is generally 10 m x 10 m or more, but the peripheral edges of the image are relatively blurred and exhibit some deformation and misalignment, so a cropped region 6 m wide and 9 m high may be taken as the first image or the second image. The first image is then input into the trained parking space detection model based on the deep learning network, which extracts the required information from the first image and outputs the parking space characterization information; the parking space characterization information characterizes a parking space, and the first parking space centerline vector can be obtained from it. Meanwhile, the first parking space line and the second parking space line can be obtained based on the second image, the first parking space line being parallel to the second parking space line, and the second parking space centerline vector is obtained from the two lines. For example, the second image can be processed by a conventional parking space detection method to obtain the first and second parking space lines. The first parking space line and the second parking space line are dividing lines that separate different parking spaces. Finally, the first parking space centerline vector is matched with the second parking space centerline vector; when they are successfully matched, fused parking space characterization information is obtained based on the parking space characterization information, the first parking space line and the second parking space line, and the parking space detection result is obtained based on the fused parking space characterization information.
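As a concrete illustration of the cropping step, the following is a minimal sketch assuming a bird's-eye AVM output held as a NumPy/OpenCV image array; the pixel scale `px_per_m` is a hypothetical parameter, since the patent specifies only the physical crop size, not the image resolution.

```python
def crop_avm(avm_img, px_per_m=51.2, width_m=6.0, height_m=9.0):
    """Center-crop a bird's-eye AVM image to a width_m x height_m region.
    px_per_m is an assumed scale; the patent gives only the physical
    crop size (6 m wide, 9 m high)."""
    h, w = avm_img.shape[:2]
    cw, ch = int(width_m * px_per_m), int(height_m * px_per_m)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    return avm_img[y0:y0 + ch, x0:x0 + cw]
```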
Therefore, the parking space detection method provided by the embodiment of the application processes the first image with the deep-learning-based parking space detection model to obtain the parking space characterization information and, from it, the first parking space centerline vector; although a detection result could be obtained from the characterization information alone, its position and angle accuracy would be low. The first and second parking space lines are therefore obtained by a conventional parking space detection method with higher position and angle accuracy, the second parking space centerline vector is derived from them, and the two centerline vectors are matched. When the matching succeeds, the parking space characterization information and the first and second parking space lines are fused, and the fused parking space characterization information greatly improves the position and angle accuracy of the resulting parking space detection result. In addition, since the conventional parking space detection method is only used to obtain the first and second parking space lines, no complex post-processing matching and analysis is needed, which greatly reduces the computation of the conventional branch and simplifies its processing flow.
In an embodiment of the present application, training of the deep-learning-based parking space detection model includes the following steps: constructing a data set, wherein the data set comprises a training set; inputting the sample images in the training set into a parking space detection model to be trained, which performs the following operations: extracting features from the sample image to obtain parking space feature images; obtaining parking space feature information based on the parking space feature images, and obtaining parking space characterization information of the parking space in the sample image based on the parking space feature information; computing a loss function based on the parking space feature information, and updating model parameters of the model based on the loss function; and iterating these operations until the model converges, yielding the trained parking space detection model based on the deep learning network. In this process, the model parameters are optimized over many iterations until they meet the corresponding requirements, for example until the loss function value of the parking space detection model stabilizes at a minimum. In addition, as the training set grows, the accuracy of the trained parking space detection model generally also increases. Illustratively, the data set may further include a validation set and a test set: the training set is the sample set used for model fitting, mainly for training the model parameters of the parking space detection model; the validation set is a sample set held out during training, usable for tuning hyper-parameters and preliminarily evaluating the model's capability; and the test set may be used to evaluate the final performance of the model. Illustratively, the resolution of the input image is 512x512, and the input image may be an RGB image or a grayscale image, preferably an RGB image. Illustratively, the parking space feature images are obtained by applying convolution operations to the input image, and their resolution is 16x16.
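For orientation, a minimal sketch of such a training loop in PyTorch follows. The optimizer, learning rate, and the exact loss form are assumptions: the patent states only that a loss function is computed from the parking space feature information and used to update the model parameters.

```python
import torch
import torch.nn.functional as F

def detection_loss(pred, target):
    # Assumed loss: L1 regression over the six geometric feature maps
    # (corner x/y, entry-line dx/dy, dividing-line sin/cos). The patent
    # does not specify the exact loss form.
    return F.l1_loss(pred, target)

def train(model, loader, epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, targets in loader:    # images: N x 3 x 512 x 512
            preds = model(images)         # feature maps: N x 6 x 16 x 16
            loss = detection_loss(preds, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```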
In the embodiment of the application, the parking space detection model based on the deep learning network may be a target recognition model such as YOLOv3, though the method is not limited thereto.
In the embodiment of the application, the parking space feature images comprise a parking space corner horizontal-axis coordinate feature image, a parking space corner vertical-axis coordinate feature image, an entry line horizontal-axis length feature image, an entry line vertical-axis length feature image, a dividing line sine value feature image and a dividing line cosine value feature image. Illustratively, the feature images are abstract representations of the input image, and different feature images can be used to represent different specific features of the input image.
In an embodiment of the present application, the parking space feature information includes parking space corner coordinates, an entry line length, and a dividing line angle, and obtaining the parking space feature information based on the parking space feature images includes: obtaining the parking space corner coordinates based on the parking space corner horizontal-axis and vertical-axis coordinate feature images; obtaining the entry line length based on the entry line horizontal-axis and vertical-axis length feature images; and obtaining the dividing line angle based on the dividing line sine value feature image and the dividing line cosine value feature image. Illustratively, the corner horizontal-axis coordinate can be read from the corner horizontal-axis coordinate feature image and the corner vertical-axis coordinate from the corner vertical-axis coordinate feature image, giving the parking space corner coordinates. As shown in fig. 2, the entry line AB comprises a first endpoint A and a second endpoint B; the parking space corner may be either the first endpoint A or the second endpoint B, and the description below takes the first endpoint A as the parking space corner. The entry line horizontal-axis length is obtained from the entry line horizontal-axis length feature image and the entry line vertical-axis length from the entry line vertical-axis length feature image; both are measured from the parking space corner A and may be negative, i.e., they carry direction information. For example, when the entry line horizontal-axis length is negative, the entry line extends from the parking space corner A in the negative horizontal-axis direction. Illustratively, the sine of the dividing line angle can be read from the dividing line sine value feature image and the cosine from the dividing line cosine value feature image; knowing both, the tangent is determined, and the dividing line angle is obtained through an arctangent function. As shown in fig. 2, AD' and BC' are both dividing lines, and only their angle, not their length, is needed. The coordinates above may be in an image coordinate system or a world coordinate system, but all coordinates should lie in one unified coordinate system. Illustratively, the parking space corner need not be a corner physically present in the image, and the entry line and dividing lines need not be solid or broken lines actually drawn in the image; they are abstract geometric representations.
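To make the decoding concrete, here is a minimal sketch of reading one grid cell of the feature maps; the map names (`corner_x`, `div_sin`, etc.) are hypothetical labels for the channels described above, not identifiers from the patent.

```python
import numpy as np

def decode_cell(maps, i, j):
    """Decode the parking space feature information at grid cell (i, j).
    `maps` is assumed to be a dict of 16x16 arrays, one per feature image."""
    corner = np.array([maps["corner_x"][i, j], maps["corner_y"][i, j]])
    # Signed entry-line lengths along the horizontal and vertical axes.
    entry = np.array([maps["entry_dx"][i, j], maps["entry_dy"][i, j]])
    # Recover the dividing-line angle from its sine and cosine.
    theta = np.arctan2(maps["div_sin"][i, j], maps["div_cos"][i, j])
    endpoint_b = corner + entry   # e.g. (6, 6) + (-2, -3) = (4, 3)
    return corner, endpoint_b, theta
```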
In an embodiment of the present application, the parking space feature images further include at least one of: a confidence feature image and an occupancy flag feature image. Illustratively, when the parking space feature images include the confidence feature image, the parking space feature information further includes confidence information, and obtaining the parking space feature information based on the parking space feature images includes: obtaining the confidence information based on the confidence feature image. When the parking space feature images include the occupancy flag feature image, the parking space feature information further includes occupancy flag information, obtained based on the occupancy flag feature image. Illustratively, the occupancy flag information reflects whether the parking space is occupied, e.g., whether there is an obstacle on the parking space. Alternatively, an availability flag feature image may be output in place of the occupancy flag feature image, and availability flag information obtained in place of the occupancy flag information; the availability flag information reflects whether the parking space is available, and both kinds of flag information reflect the usage state of the parking space.
In an embodiment of the application, the parking space characterization information includes the entry line first endpoint coordinate, the entry line second endpoint coordinate, and the dividing line angle. As shown in fig. 2, AB is the entry line and AD' and BC' are the dividing lines, where A is the entry line first endpoint, B is the entry line second endpoint, and A is the parking space corner. As described above, the parking space corner coordinates, the length of the entry line AB, and the angle of the dividing lines AD' and BC' are known, where the length of the entry line AB is represented by its horizontal-axis length and vertical-axis length. For example, if the coordinates of the parking space corner A are (6, 6) and the length of the entry line AB is represented as (-2, -3), then the coordinates of the second endpoint B of the entry line AB are (4, 3). The parking space characterization information is thus obtained: the entry line first endpoint A coordinate, the entry line second endpoint B coordinate, and the angle of the dividing lines AD' and BC', as shown in fig. 2. The parking space characterization information can therefore represent a parking space, i.e., a parking space detection result can be obtained from the parking space characterization information.
In an embodiment of the present application, the first parking space centerline vector is represented by a first centerline vector coordinate and a first centerline vector angle, and it is obtained based on the parking space characterization information by: averaging the entry line first endpoint coordinate and the entry line second endpoint coordinate to obtain the first centerline vector coordinate; the first centerline vector angle is equal to the dividing line angle. Illustratively, as shown in fig. 3, AB is the entry line, A is its first endpoint, B is its second endpoint, D is the midpoint of the entry line AB, and AD' and BC' are the dividing lines. The first centerline vector coordinate is the coordinate of a point through which the first parking space centerline vector passes; combined with the first centerline vector angle, it represents the vector. In this embodiment, the coordinate of D is the first centerline vector coordinate, and the vector emanating from D at the first centerline vector angle is the first parking space centerline vector, i.e., L0 in fig. 3. For example, let the coordinates of A be (x_A, y_A), the coordinates of B be (x_B, y_B), and the dividing line angle be theta; the parking space characterization information may then be represented as {x_A, y_A, x_B, y_B, theta}. Let the coordinates of D be (x_M, y_M); the first parking space centerline vector L0 may be represented as {x_M, y_M, theta_M}, where:
x_M = (x_A + x_B) / 2
y_M = (y_A + y_B) / 2
theta_M = theta
In an embodiment of the present application, obtaining the first parking space line and the second parking space line based on the second image includes: obtaining a grayscale image based on the second image; performing a histogram equalization operation on the grayscale image to obtain an equalized grayscale image; performing noise reduction on the equalized grayscale image to obtain a noise-reduced grayscale image; converting the noise-reduced grayscale image into a binary image using an adaptive threshold method; performing a morphological opening operation on the binary image to obtain a noise-reduced binary image; skeletonizing the noise-reduced binary image to obtain a binary skeleton image; detecting lines in the binary skeleton image using Hough line detection to obtain potential parking space lines; and filtering the potential parking space lines using prior knowledge to obtain the first parking space line and the second parking space line. Illustratively, when the acquired second image is an RGB image, it is converted into a grayscale image to reduce the processing difficulty of the algorithm. Illustratively, obtaining the grayscale image based on the second image includes: converting the second image into an HSV image and extracting the V-channel image as the grayscale image; converting to HSV first and taking the V channel enhances the contrast of the parking space lines as much as possible for the subsequent line detection. Illustratively, the histogram equalization operation enhances the contrast of the image and amplifies the brightness difference between the parking space lines and the background, which facilitates subsequent detection. Since histogram equalization inevitably amplifies image noise as well, the equalized grayscale image may be denoised with a 3x3 median filter to obtain the noise-reduced grayscale image, reducing the influence of noise on subsequent detection. Converting the noise-reduced grayscale image into a binary image with an adaptive threshold method further reduces the difficulty of the subsequent line detection. Illustratively, the opening operation processes the binary image by erosion followed by dilation, removing isolated outlier noise in the environment as far as possible, and yields the noise-reduced binary image. Because the parking space lines have a certain width, the noise-reduced binary image is skeletonized into a binary skeleton image so that each parking space line yields only one straight line in the subsequent Hough line detection. The potential parking space lines are then filtered using prior knowledge, which may include, for example, that parking space lines come in parallel pairs and that the spacing of the parallel lines conforms to parking space size requirements.
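The pipeline above maps naturally onto standard OpenCV calls. The following is a hedged sketch: all threshold and Hough parameters are illustrative assumptions (the patent fixes only the 3x3 median filter), and the skeletonization step uses `cv2.ximgproc.thinning`, which requires the opencv-contrib package.

```python
import cv2
import numpy as np

def detect_potential_lines(bgr):
    v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]   # V channel as grayscale
    eq = cv2.equalizeHist(v)                            # histogram equalization
    den = cv2.medianBlur(eq, 3)                         # 3x3 median noise reduction
    binary = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 15, -5)   # assumed params
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erode then dilate
    skeleton = cv2.ximgproc.thinning(opened)            # one-pixel-wide skeleton
    lines = cv2.HoughLinesP(skeleton, 1, np.pi / 180, threshold=50,
                            minLineLength=60, maxLineGap=10)    # assumed params
    return lines  # potential parking space lines, to be filtered with priors
```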
In the embodiment of the application, the first parking space line is represented by a first parking space line first endpoint coordinate and a first parking space line second endpoint coordinate; the second parking space line is represented by a second parking space line first endpoint coordinate and a second parking space line second endpoint coordinate. Note that the first parking space line is not to be understood as a line segment: its first and second endpoints are not endpoints in a strict sense but merely two points through which the line passes, whose coordinates serve only to represent the first parking space line. Similarly, the second parking space line is not a line segment: its first and second endpoints are merely two points through which it passes, and their coordinates serve only to represent the second parking space line.
In an embodiment of the present application, the second parking space centerline vector is represented by a second centerline vector coordinate and a second centerline vector angle, and obtaining it based on the first parking space line and the second parking space line includes: averaging the first parking space line first endpoint coordinate and the second parking space line first endpoint coordinate to obtain a second centerline first endpoint coordinate; averaging the first parking space line second endpoint coordinate and the second parking space line second endpoint coordinate to obtain a second centerline second endpoint coordinate; averaging the second centerline first endpoint coordinate and the second centerline second endpoint coordinate to obtain the second centerline vector coordinate; and obtaining the second centerline vector angle based on the second centerline first and second endpoint coordinates. For example, as shown in fig. 3, L1 is the first parking space line, L2 is the second parking space line, and L3 is the second parking space centerline vector. Let the first parking space line have first endpoint (x_1, y_1) and second endpoint (x_1', y_1'), and the second parking space line have first endpoint (x_2, y_2) and second endpoint (x_2', y_2'); then L1 may be represented as {x_1, y_1, x_1', y_1'} and L2 as {x_2, y_2, x_2', y_2'}. Let the second centerline first endpoint be (x_m, y_m) and the second centerline second endpoint be (x_m', y_m'); then:
x_m = (x_1 + x_2) / 2
y_m = (y_1 + y_2) / 2
x_m' = (x_1' + x_2') / 2
y_m' = (y_1' + y_2') / 2
Meanwhile, let the second centerline vector coordinate be (x_mc, y_mc) and the second centerline vector angle be theta_mc; the second parking space centerline vector L3 may then be represented as {x_mc, y_mc, theta_mc}, where:
x_mc = (x_m + x_m') / 2
y_mc = (y_m + y_m') / 2
theta_mc = arctan((y_m' - y_m) / (x_m' - x_m))
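These averaging steps amount to a few lines of code; a sketch under the endpoint convention above:

```python
import numpy as np

def second_centerline_vector(line1, line2):
    """line1 = (x_1, y_1, x_1', y_1'), line2 = (x_2, y_2, x_2', y_2').
    Returns the second parking space centerline vector {x_mc, y_mc, theta_mc}."""
    x1, y1, x1p, y1p = line1
    x2, y2, x2p, y2p = line2
    xm,  ym  = (x1 + x2) / 2,   (y1 + y2) / 2     # second centerline first endpoint
    xmp, ymp = (x1p + x2p) / 2, (y1p + y2p) / 2   # second centerline second endpoint
    xmc, ymc = (xm + xmp) / 2,  (ym + ymp) / 2    # centerline vector coordinate
    theta_mc = np.arctan2(ymp - ym, xmp - xm)     # centerline vector angle
    return xmc, ymc, theta_mc
```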
In an embodiment of the present application, matching the first parking space centerline vector with the second parking space centerline vector and determining whether they are successfully matched includes: taking the absolute value of the difference between the first centerline vector angle and the second centerline vector angle to obtain the centerline included angle; computing the distance from the first centerline vector coordinate to the line carrying the second parking space centerline vector to obtain the centerline distance; and when the centerline included angle is not larger than the preset included angle and the centerline distance is not larger than the preset distance, determining that the first parking space centerline vector and the second parking space centerline vector are successfully matched. As shown in fig. 3, the centerline included angle Δθ and the centerline distance Δd (the latter via the point-to-line distance formula) are:
Δθ = |theta_mc - theta_M|
Δd = |(x_M - x_mc) · sin(theta_mc) - (y_M - y_mc) · cos(theta_mc)|
Illustratively, the preset included angle is 30 degrees, and the preset distance is 30 cm in the world coordinate system.
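A sketch of the matching test follows, using the formulas above; the 30-degree and 30 cm thresholds are the example values given here, and world-coordinate units (meters) are assumed.

```python
import numpy as np

def vectors_match(v1, v2, max_angle=np.deg2rad(30.0), max_dist=0.30):
    """v1 = (x_M, y_M, theta_M), v2 = (x_mc, y_mc, theta_mc)."""
    xM, yM, thM = v1
    xmc, ymc, thmc = v2
    d_theta = abs(thmc - thM)   # centerline included angle
    # Point-to-line distance from (x_M, y_M) to the line carrying v2.
    d = abs((xM - xmc) * np.sin(thmc) - (yM - ymc) * np.cos(thmc))
    return d_theta <= max_angle and d <= max_dist
```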
In an embodiment of the present application, the fused parking space characterization information includes the first intersection coordinate, the second intersection coordinate, and the parking space angle, and obtaining it based on the parking space characterization information, the first parking space line, and the second parking space line includes: obtaining the entry line based on the entry line first endpoint coordinate and the entry line second endpoint coordinate; obtaining the coordinate of the intersection of the first parking space line and the entry line, denoted the first intersection coordinate; obtaining the coordinate of the intersection of the second parking space line and the entry line, denoted the second intersection coordinate; the parking space angle is equal to the second centerline vector angle. For example, as shown in fig. 3, the intersection of the first parking space line L1 and the entry line AB is E, whose coordinate (x_E, y_E) can be computed from the entry line AB and the first parking space line L1, and the intersection of the second parking space line L2 and the entry line AB is F, whose coordinate (x_F, y_F) can be computed from the entry line AB and the second parking space line L2. The fused parking space characterization information may then be represented as {x_E, y_E, x_F, y_F, theta_mc}; that is, compared with the parking space characterization information {x_A, y_A, x_B, y_B, theta}, the fusion replaces the coordinates of A and B with the coordinates of E and F and replaces the dividing line angle theta with the second centerline vector angle theta_mc, thereby improving the position and angle accuracy of parking space detection.
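The fusion itself reduces to two line intersections; here is a sketch using homogeneous coordinates, an implementation choice not dictated by the patent.

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection of the line through p1, p2 with the line through q1, q2."""
    l1 = np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])
    l2 = np.cross([q1[0], q1[1], 1.0], [q2[0], q2[1], 1.0])
    x = np.cross(l1, l2)
    return x[0] / x[2], x[1] / x[2]   # assumes the lines are not parallel

def fuse(A, B, line1_pts, line2_pts, theta_mc):
    """Fused characterization {x_E, y_E, x_F, y_F, theta_mc} from the entry
    line AB and the two parking space lines (each given by two points)."""
    E = line_intersection(A, B, *line1_pts)   # first intersection coordinate
    F = line_intersection(A, B, *line2_pts)   # second intersection coordinate
    return (*E, *F, theta_mc)
```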
In one example, when the first parking space centerline vector and the second parking space centerline vector are not successfully matched, the parking space detection result is obtained based on the parking space characterization information. When the centerline included angle is larger than the preset included angle or the centerline distance is larger than the preset distance, it is determined that the first parking space centerline vector and the second parking space centerline vector are not successfully matched, and the parking space detection result is obtained based on the parking space characterization information {x_A, y_A, x_B, y_B, theta}; in other words, only when the matching is unsuccessful is the parking space characterization information output by the deep-learning-based parking space detection model relied on alone.
The parking space detection method according to an embodiment of the present application has been described above by way of example. Based on the description, the method processes the first image with the deep-learning-based parking space detection model to obtain the parking space characterization information and derives the first parking space centerline vector; although a detection result could be obtained from the characterization information alone, its position and angle accuracy would be low. The first and second parking space lines are therefore obtained by the conventional parking space detection method with higher position and angle accuracy, the second parking space centerline vector is derived, and the two centerline vectors are matched; when the matching succeeds, the parking space characterization information and the first and second parking space lines are fused to obtain the fused parking space characterization information. In addition, since the conventional parking space detection method is only used to obtain the first and second parking space lines, no complex post-processing matching and analysis is needed, which greatly reduces the computation of the conventional branch and simplifies its processing flow. By way of example, the method combines the advantages of the conventional parking space detection method and the deep-learning-based parking space detection method, ensuring the stability and environmental adaptability of the algorithm while greatly improving the detected position and angle accuracy. In an exemplary embodiment, the method does not completely trust the parking space characterization information output by the deep-learning-based model: only when the first and second parking space centerline vectors are not successfully matched is the detection result obtained from the characterization information alone. By way of example, matching the first parking space centerline vector with the second parking space centerline vector involves little computation, and the algorithm is robust.
Next, an electronic device of the present application is described with reference to fig. 4. Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the application. As shown in fig. 4, the electronic device 400 includes a processor 410 and a memory 420, the memory 420 having stored thereon a computer program which, when executed by the processor 410, causes the processor 410 to perform the stall detection method described above.
The processor 410 may be, for example, a Central Processing Unit (CPU) or another form of processing unit having data processing and/or instruction execution capabilities. By way of example, the memory 420 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, and the like.
The embodiment of the application also provides a vehicle, which comprises the above electronic device. Illustratively, the vehicle may further include other constituent structures, such as a signal transmission system, to which embodiments of the present application are not limited.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, e.g., the division of elements is merely a logical function division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted, or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the invention and aid in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the invention. However, the method of the present invention should not be construed as reflecting the following intent: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in a parking space detection device according to embodiments of the present invention may be implemented in practice using a microprocessor or a digital signal processor (DSP). The present invention may also be implemented as a device program (e.g., a computer program or a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The above description is merely illustrative of embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed herein are covered by the protection scope of the present invention, which is subject to the protection scope of the claims.
Claims (18)
1. A parking space detection method, characterized by comprising the following steps:
acquiring a first image and a second image of a parking space;
inputting the first image into a trained parking space detection model based on a deep learning network to obtain parking space characterization information;
obtaining a first parking space centerline vector based on the parking space characterization information;
obtaining a first parking space line and a second parking space line based on the second image, wherein the first parking space line is parallel to the second parking space line;
obtaining a second parking space centerline vector based on the first parking space line and the second parking space line;
matching the first parking space centerline vector with the second parking space centerline vector, and determining whether the first parking space centerline vector and the second parking space centerline vector are successfully matched;
when the first parking space centerline vector is successfully matched with the second parking space centerline vector, obtaining fused parking space characterization information based on the parking space characterization information, the first parking space line and the second parking space line;
and obtaining a parking space detection result based on the fused parking space characterization information.
2. The method of claim 1, wherein the parking space characterization information includes an entry line first endpoint coordinate, an entry line second endpoint coordinate, and a split line angle.
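Illustratively, the characterization information of claim 2 can be held in a small record type. A minimal sketch; the field names and the pixel/degree conventions are assumptions made for the sketches that follow, not part of the claims:

```python
from dataclasses import dataclass

@dataclass
class ParkingSpaceCharacterization:
    """Fields recited in claim 2; pixel coordinates and degrees are assumed."""
    entry_line_p1: tuple[float, float]  # entry line first endpoint coordinate
    entry_line_p2: tuple[float, float]  # entry line second endpoint coordinate
    split_line_angle_deg: float         # split line angle
```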
3. The method of claim 2, wherein the first parking space centerline vector is represented by a first parking space centerline vector coordinate and a first parking space centerline vector angle, and the obtaining the first parking space centerline vector based on the parking space characterization information comprises:
averaging the entry line first endpoint coordinate and the entry line second endpoint coordinate to obtain the first parking space centerline vector coordinate;
wherein the first parking space centerline vector angle is equal to the split line angle.
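Illustratively, the step recited in claim 3 admits a minimal sketch as follows; the function name is illustrative, and coordinates are assumed to be pixel (x, y) pairs:

```python
import numpy as np

def first_centerline_vector(entry_p1, entry_p2, split_line_angle_deg):
    """Claim 3: the vector coordinate is the midpoint of the entry line
    endpoints; the vector angle is taken directly from the split line angle."""
    coord = (np.asarray(entry_p1, float) + np.asarray(entry_p2, float)) / 2.0
    return coord, split_line_angle_deg
```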
4. The method of claim 1, wherein the obtaining the first parking space line and the second parking space line based on the second image comprises:
obtaining a grayscale image based on the second image;
performing a histogram equalization operation on the grayscale image to obtain an equalized grayscale image;
performing noise reduction processing on the equalized grayscale image to obtain a noise-reduced grayscale image;
converting the noise-reduced grayscale image into a binary image using an adaptive thresholding method;
performing an opening operation on the binary image to obtain a noise-reduced binary image;
skeletonizing the noise-reduced binary image to obtain a binary skeleton image;
detecting lines in the binary skeleton image using Hough line detection to obtain potential parking space lines;
and filtering the potential parking space lines using prior knowledge to obtain the first parking space line and the second parking space line.
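Illustratively, the claim-4 pipeline maps onto standard OpenCV and scikit-image primitives. A minimal sketch, assuming an 8-bit grayscale input; all kernel sizes, threshold parameters, and Hough settings are illustrative assumptions, not values fixed by the patent:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize  # scikit-image for skeletonization

def detect_parking_space_line_candidates(gray):
    """One possible realization of the claim-4 steps; parameters are assumed."""
    equalized = cv2.equalizeHist(gray)                        # histogram equalization
    denoised = cv2.GaussianBlur(equalized, (5, 5), 0)         # one possible noise reduction
    binary = cv2.adaptiveThreshold(denoised, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)  # adaptive thresholding
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel) # opening suppresses speckle
    skeleton = skeletonize(opened > 0).astype(np.uint8) * 255 # one-pixel-wide skeleton
    # probabilistic Hough transform yields potential parking space line segments
    segments = cv2.HoughLinesP(skeleton, 1, np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=10)
    return segments  # still to be filtered with prior knowledge (length, spacing, parallelism)
```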
5. The method of claim 4, wherein the obtaining a grayscale image based on the second image comprises:
converting the second image into an HSV image;
and extracting the V-channel image of the HSV image as the grayscale image.
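Illustratively, the grayscale extraction of claim 5 reduces to a color-space conversion and a channel slice (assuming an OpenCV BGR input image):

```python
import cv2

def grayscale_from_v_channel(bgr_image):
    """Claim 5: use the HSV value (V) channel as the grayscale image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 2]  # V channel
```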
6. The method of claim 3, wherein:
the first parking space line is represented by a first parking space line first endpoint coordinate and a first parking space line second endpoint coordinate;
and the second parking space line is represented by a second parking space line first endpoint coordinate and a second parking space line second endpoint coordinate.
7. The method of claim 6, wherein the second parking space centerline vector is represented by a second parking space centerline vector coordinate and a second parking space centerline vector angle, and the obtaining the second parking space centerline vector based on the first parking space line and the second parking space line comprises:
averaging the first parking space line first endpoint coordinate and the second parking space line first endpoint coordinate to obtain a second centerline first endpoint coordinate;
averaging the first parking space line second endpoint coordinate and the second parking space line second endpoint coordinate to obtain a second centerline second endpoint coordinate;
averaging the second centerline first endpoint coordinate and the second centerline second endpoint coordinate to obtain the second parking space centerline vector coordinate;
and obtaining the second parking space centerline vector angle based on the second centerline first endpoint coordinate and the second centerline second endpoint coordinate.
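Illustratively, the three averaging steps of claim 7 can be sketched as follows; arctan2 is one natural way to obtain the vector angle from the two centerline endpoints, though the patent does not specify the formula:

```python
import numpy as np

def second_centerline_vector(l1_p1, l1_p2, l2_p1, l2_p2):
    """Claim 7: centerline between two parallel parking space lines."""
    c1 = (np.asarray(l1_p1, float) + np.asarray(l2_p1, float)) / 2.0  # first endpoints averaged
    c2 = (np.asarray(l1_p2, float) + np.asarray(l2_p2, float)) / 2.0  # second endpoints averaged
    coord = (c1 + c2) / 2.0                                           # centerline vector coordinate
    angle_deg = np.degrees(np.arctan2(c2[1] - c1[1], c2[0] - c1[0]))  # centerline vector angle
    return coord, angle_deg, (c1, c2)
```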
8. The method of claim 7, wherein the matching the first parking space centerline vector with the second parking space centerline vector and determining whether the first parking space centerline vector and the second parking space centerline vector are successfully matched comprises:
taking the absolute value of the difference between the first parking space centerline vector angle and the second parking space centerline vector angle to obtain a centerline included angle;
calculating the distance from the first parking space centerline vector coordinate to the second parking space centerline to obtain a centerline distance;
and when the centerline included angle is not larger than a preset included angle and the centerline distance is not larger than a preset distance, determining that the first parking space centerline vector and the second parking space centerline vector are successfully matched.
9. The method of claim 8, wherein the preset included angle is 30 degrees.
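Illustratively, the matching test of claims 8 and 9 is an angle gate combined with a point-to-line distance gate. A minimal sketch; the 30-degree preset included angle comes from claim 9, while the 50-pixel preset distance is an assumed value the patent does not fix:

```python
import numpy as np

def centerlines_match(coord1, angle1_deg, angle2_deg, c2_start, c2_end,
                      preset_angle_deg=30.0, preset_dist=50.0):
    """Claims 8-9 gating; preset_dist is an assumption."""
    included_angle = abs(angle1_deg - angle2_deg)
    p = np.asarray(coord1, float)
    a = np.asarray(c2_start, float)
    b = np.asarray(c2_end, float)
    # perpendicular distance from the first centerline vector coordinate
    # to the line through the second centerline's endpoints
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    dist = abs(cross) / np.linalg.norm(b - a)
    return included_angle <= preset_angle_deg and dist <= preset_dist
```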
10. The method of claim 7, wherein the fused parking space characterization information includes a first intersection coordinate, a second intersection coordinate, and a parking space angle, and the obtaining the fused parking space characterization information based on the parking space characterization information, the first parking space line and the second parking space line comprises:
obtaining an entry line based on the entry line first endpoint coordinate and the entry line second endpoint coordinate;
obtaining, based on the first parking space line and the entry line, the coordinate of their intersection, denoted as the first intersection coordinate;
obtaining, based on the second parking space line and the entry line, the coordinate of their intersection, denoted as the second intersection coordinate;
wherein the parking space angle is equal to the second parking space centerline vector angle.
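Illustratively, the fusion step of claim 10 reduces to intersecting the entry line with each parking space line. A minimal sketch of the line-intersection computation, using a standard parametric formulation rather than anything the patent prescribes:

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection of the infinite lines through (p1, p2) and (q1, q2);
    used here to fuse the entry line with a parking space line (claim 10)."""
    p1, p2, q1, q2 = (np.asarray(v, float) for v in (p1, p2, q1, q2))
    d1, d2 = p2 - p1, q2 - q1
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product of directions
    if abs(denom) < 1e-9:
        return None                         # lines are (nearly) parallel
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1                      # intersection coordinate
```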
11. The method of claim 1, wherein when the matching of the first parking space centerline vector with the second parking space centerline vector is unsuccessful, the parking space detection result is obtained based on the parking space characterization information.
12. The method according to claim 1, wherein the training of the parking space detection model comprises the following steps:
constructing a data set, wherein the data set comprises a training set;
inputting sample images in the training set into a parking space detection model to be trained, and performing the following operations with the parking space detection model to be trained:
extracting features of the sample image to obtain a parking space feature image;
obtaining parking space feature information based on the parking space feature image, and obtaining parking space characterization information of a parking space in the sample image based on the parking space feature information;
obtaining a loss function based on the parking space feature information, and updating model parameters of the parking space detection model to be trained based on the loss function;
and iterating the above operations until the parking space detection model to be trained converges, to obtain the trained parking space detection model based on the deep learning network.
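Illustratively, the training procedure of claim 12 is a conventional supervised loop. A schematic sketch assuming PyTorch; the framework, optimizer, and hyperparameters are assumptions, and `model` and `loss_fn` stand in for the deep-learning network and loss function the claim leaves unspecified:

```python
import torch

def train(model, loader, loss_fn, epochs=50, lr=1e-3):
    """Schematic training loop for the parking space detection model (claim 12)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                    # fixed epochs as a stand-in for convergence
        for images, targets in loader:
            feature_maps = model(images)           # parking space feature images
            loss = loss_fn(feature_maps, targets)  # loss from the feature information
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                       # update model parameters
    return model
```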
13. The method of claim 12, wherein the parking space feature image comprises a parking space corner horizontal-axis coordinate feature image, a parking space corner vertical-axis coordinate feature image, an entry line horizontal-axis length feature image, an entry line vertical-axis length feature image, a split line sine value feature image, and a split line cosine value feature image.
14. The method of claim 13, wherein the parking space feature information includes a parking space corner coordinate, an entry line length, and a split line angle, and the obtaining the parking space feature information based on the parking space feature image comprises:
obtaining the parking space corner coordinate based on the parking space corner horizontal-axis coordinate feature image and the parking space corner vertical-axis coordinate feature image;
obtaining the entry line length based on the entry line horizontal-axis length feature image and the entry line vertical-axis length feature image;
and obtaining the split line angle based on the split line sine value feature image and the split line cosine value feature image.
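Illustratively, decoding the split line angle from paired sine and cosine feature images is naturally done with arctan2, which preserves the full angular range. The entry-line-length decoding shown is one plausible reading (the two length maps as axis projections), not a detail the claim fixes:

```python
import numpy as np

def decode_split_line_angle(sin_map, cos_map, u, v):
    """Split line angle at feature cell (u, v), from the sine/cosine maps."""
    return float(np.degrees(np.arctan2(sin_map[v, u], cos_map[v, u])))

def decode_entry_line_length(dx_map, dy_map, u, v):
    """Entry line length from its horizontal- and vertical-axis length maps,
    read as axis projections (an assumed interpretation)."""
    return float(np.hypot(dx_map[v, u], dy_map[v, u]))
```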
15. The method of claim 13, wherein the parking space feature image further comprises at least one of:
a confidence feature image and an occupancy flag feature image.
16. The method of claim 15, wherein:
when the parking space feature image comprises the confidence feature image, the parking space feature information further comprises confidence information, and the obtaining the parking space feature information based on the parking space feature image comprises:
obtaining the confidence information based on the confidence feature image;
and when the parking space feature image comprises the occupancy flag feature image, the parking space feature information further comprises occupancy flag information, and the obtaining the parking space feature information based on the parking space feature image comprises:
obtaining the occupancy flag information based on the occupancy flag feature image.
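Illustratively, reading out the optional channels of claim 16 could look as follows; the sigmoid normalization of the raw confidence map and the 0.5 occupancy cutoff are assumptions, not claim limitations:

```python
import numpy as np

def decode_confidence(confidence_map, u, v):
    """Confidence information at feature cell (u, v); sigmoid normalization
    of the raw map value is an assumption."""
    return 1.0 / (1.0 + np.exp(-float(confidence_map[v, u])))

def decode_occupancy(occupancy_map, u, v, cutoff=0.5):
    """Occupancy flag information at feature cell (u, v); the 0.5 cutoff
    is an assumed binarization."""
    return decode_confidence(occupancy_map, u, v) >= cutoff
```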
17. An electronic device comprising a processor and a memory, the memory having stored thereon a computer program which, when executed by the processor, causes the processor to perform the parking space detection method of any one of claims 1-16.
18. A vehicle, characterized in that it comprises the electronic device of claim 17.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311227885.4A | 2023-09-21 | 2023-09-21 | Parking space detection method, electronic equipment and vehicle
Publications (1)

Publication Number | Publication Date
---|---
CN118279849A | 2024-07-02
Family
ID=91632735
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202311227885.4A (Pending) | Parking space detection method, electronic equipment and vehicle | 2023-09-21 | 2023-09-21

Country Status (1)

Country | Link
---|---
CN | CN118279849A
Similar Documents

Publication | Title
---|---
US11580647B1 | Global and local binary pattern image crack segmentation method based on robot vision
EP3321842B1 | Lane line recognition modeling method, apparatus, storage medium, and device, recognition method and apparatus, storage medium, and device
CN110781885A | Text detection method, device, medium and electronic equipment based on image processing
CN110390306B | Method for detecting right-angle parking space, vehicle and computer readable storage medium
CN111667470B | Industrial pipeline flaw detection inner wall detection method based on digital image
CN112598922B | Parking space detection method, device, equipment and storage medium
CN109583365A | Method for detecting lane lines is fitted based on imaging model constraint non-uniform B-spline curve
US20170316573A1 | Position measuring equipment
CN108305260A | Detection method, device and the equipment of angle point in a kind of image
CN111127498B | Canny edge detection method based on edge self-growth
CN113011285B | Lane line detection method and device, automatic driving vehicle and readable storage medium
CN116168028B | High-speed rail original image processing method and system based on edge filtering under low visibility
CN112861870A | Pointer instrument image correction method, system and storage medium
CN107808165B | Infrared image matching method based on SUSAN corner detection
CN111444911B | Training method and device of license plate recognition model and license plate recognition method and device
CN116740072A | Road surface defect detection method and system based on machine vision
CN107463939B | Image key straight line detection method
CN117612128B | Lane line generation method, device, computer equipment and storage medium
CN109978903B | Identification point identification method and device, electronic equipment and storage medium
CN112634141B | License plate correction method, device, equipment and medium
CN113012181A | Novel quasi-circular detection method based on Hough transformation
CN111428538B | Lane line extraction method, device and equipment
CN117333518A | Laser scanning image matching method, system and computer equipment
CN112767425A | Parking space detection method and device based on vision
CN118279849A | Parking space detection method, electronic equipment and vehicle
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination