CN115965934A - Parking space detection method and device - Google Patents


Info

Publication number
CN115965934A
Authority
CN
China
Prior art keywords
image
parking space
video stream
dynamic video
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211675403.7A
Other languages
Chinese (zh)
Inventor
Zhang Lei (张雷)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING BDK ELECTRONICS CO LTD
Original Assignee
BEIJING BDK ELECTRONICS CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING BDK ELECTRONICS CO LTD
Priority to CN202211675403.7A
Publication of CN115965934A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

A parking space detection method and device are disclosed. The method acquires dynamic video stream images from adjacent cameras in a parking lot; performs image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on those images; and then stitches and fuses the adjacent cameras' images to obtain a stitched, fused panoramic image of the parking lot. Parking space coordinates are drawn manually on the panoramic image, a parking space table mapping parking space numbers to parking space positions is constructed, and parking space images are cropped according to the table and preset rules. Features are extracted from the cropped parking space images and labeled to serve as the training and validation data set of a network prediction model, which is trained with a convolutional neural network. A machine vision algorithm detects whether a parking space's state has changed, and the network prediction model predicts the parking space states in the panoramic image. The invention reduces investment cost and improves parking space detection performance.

Description

Parking space detection method and device
Technical Field
The invention belongs to the technical field of parking space identification, and particularly relates to a parking space detection method and device.
Background
With social and economic development and rising living standards, people demand a higher quality of life, and car ownership, as a convenient way to travel, keeps growing. Parking facilities, constrained by cost and space, grow far more slowly, so parking spaces are scarce and drivers spend considerable time searching for them, increasing both parking time and cost.
Mainstream smart parking lot detection on the current market falls into two categories: detection based on magnetoresistive sensors and detection based on images. A magnetoresistive scheme must deploy a large number of sensor units to cover the lot, typically one unit per parking space, making installation cumbersome and costly. Traditional detection systems installed in outdoor lots also face constraints on power supply, communication, and wiring, which drive costs up and make later maintenance difficult. Researching an automatic parking space detection method for open lots that is inexpensive, convenient to use, and subject to few environmental limitations is therefore very important.
Disclosure of Invention
Therefore, the invention provides a parking space detection method and device, aiming to solve the problems of the conventional technology: complex hardware deployment, high cost, strong environmental sensitivity, and inconvenient use.
In order to achieve the above purpose, the invention provides the following technical scheme: a parking space detection method, comprising:
acquiring dynamic video stream images from adjacent cameras in a parking lot, performing image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on the adjacent cameras' dynamic video stream images, and then stitching and fusing those images to obtain a stitched, fused panoramic image of the parking lot;
manually drawing parking space coordinates on the panoramic image, constructing a parking space table that maps parking space numbers to parking space positions, and cropping parking space images according to the parking space table and preset rules;
extracting features from the cropped parking space images and labeling them to serve as the training and validation data set of a network prediction model, and training the network prediction model with a convolutional neural network;
and detecting whether a parking space's state has changed using a machine vision algorithm, and predicting the parking space states in the panoramic image with the network prediction model.
As a preferred scheme of the parking space detection method, during image preprocessing, a barrel transform is applied to the acquired dynamic video stream images of adjacent cameras;
during image feature extraction, feature points are extracted from the barrel-transformed dynamic video stream images of adjacent cameras using the SURF algorithm;
during image feature matching, the feature points extracted from the adjacent cameras' dynamic video stream images are matched in corresponding pairs, the pairwise matching using the RANSAC algorithm.
As a preferred scheme of the parking space detection method, during image registration, determining the overlapping area and overlapping position of the adjacent cameras' dynamic video stream images to be stitched includes:
a) randomly extracting several feature-match coordinates from the first camera's dynamic video stream image and computing a first perspective matrix from the extracted coordinates;
b) mapping all feature-match points of the second camera's dynamic video stream image into the coordinate space of the first camera's image through the first perspective matrix, and computing a first Euclidean distance to the actual coordinates of the first camera's matching points;
repeating steps a) and b) to obtain a second perspective matrix and the second Euclidean distance computed with it;
and comparing the first Euclidean distance with the second Euclidean distance, taking the perspective matrix with the smaller distance as the result.
As a preferred scheme of the parking space detection method, during image fusion, the second camera's dynamic video stream image is transformed with the perspective matrix obtained during image registration and mapped into the coordinate space of the first camera's image, yielding the stitched result of the two cameras' dynamic video stream images.
During image equalization, the brightness and hue of the stitched result obtained from image fusion are adjusted, and the result is equalized using a strategy of periodically updating the image mean and white balance.
As a preferred scheme of the parking space detection method, when a machine vision algorithm detects whether a parking space's state has changed, image edges are detected with the Sobel contour extraction algorithm, and edges are extracted from the extrema of the image's first derivative.
The present invention also provides a parking space detection apparatus, including:
an image acquisition module for acquiring dynamic video stream images from adjacent cameras in a parking lot;
an image processing module for performing image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on the adjacent cameras' dynamic video stream images, and then stitching and fusing those images to obtain a stitched, fused panoramic image of the parking lot;
a parking space processing module for manually drawing parking space coordinates on the panoramic image, constructing a parking space table that maps parking space numbers to parking space positions, and cropping parking space images according to the parking space table and preset rules;
a model training module for extracting features from the cropped parking space images and labeling them as the training and validation data set of a network prediction model, and training the network prediction model with a convolutional neural network;
and a model prediction module for detecting whether a parking space's state has changed using a machine vision algorithm, and predicting the parking space states in the panoramic image with the network prediction model.
As a preferred scheme of the parking space detection device, the image processing module includes an image preprocessing submodule for applying a barrel transform to the acquired dynamic video stream images of adjacent cameras;
the image processing module includes an image feature extraction submodule for extracting feature points from the barrel-transformed dynamic video stream images of adjacent cameras, the extraction using the SURF algorithm;
the image processing module includes an image feature matching submodule for matching the feature points extracted from the adjacent cameras' dynamic video stream images in corresponding pairs, the pairwise matching using the RANSAC algorithm.
As a preferred scheme of the parking space detection device, the image processing module includes an image registration submodule for determining the overlapping area and overlapping position of the adjacent cameras' dynamic video stream images to be stitched;
the strategy of the image registration submodule is:
a) randomly extract several feature-match coordinates from the first camera's dynamic video stream image and compute a first perspective matrix from the extracted coordinates;
b) map all feature-match points of the second camera's dynamic video stream image into the coordinate space of the first camera's image through the first perspective matrix, and compute a first Euclidean distance to the actual coordinates of the first camera's matching points;
repeat steps a) and b) to obtain a second perspective matrix and the second Euclidean distance computed with it;
compare the first Euclidean distance with the second Euclidean distance, taking the perspective matrix with the smaller distance as the result;
the image processing module includes an image fusion submodule for transforming the second camera's dynamic video stream image with the perspective matrix obtained from image registration, mapping it into the coordinate space of the first camera's image, and obtaining the stitched result of the two cameras' dynamic video stream images;
the image processing module includes an image equalization submodule for adjusting the brightness and hue of the stitched result obtained from image fusion, and equalizing it using a strategy of periodically updating the image mean and white balance.
In the model prediction module, when a machine vision algorithm detects whether a parking space's state has changed, image edges are detected with the Sobel contour extraction algorithm, and edges are extracted from the extrema of the image's first derivative.
In summary of the above solution, the method acquires dynamic video stream images from adjacent cameras in the parking lot; performs image preprocessing, feature extraction, feature matching, registration, fusion and equalization on them; and then stitches and fuses the adjacent cameras' images into a panoramic image of the lot. Parking space coordinates are drawn manually on the panorama, a parking space table mapping space numbers to positions is constructed, and parking space images are cropped by the table and preset rules. Features are extracted from the cropped images and labeled as the training and validation data set of a network prediction model trained with a convolutional neural network. A machine vision algorithm detects whether a space's state has changed, and the model predicts the space states in the panorama. The invention requires no complex hardware deployment: pre-installed surveillance cameras directly detect and analyze the lot's parking spaces, making full use of existing equipment and reducing investment cost; small targets can be detected across a wide scene, improving parking space detection performance.
Drawings
To illustrate the embodiments of the present invention or prior-art technical solutions more clearly, the drawings used in their description are briefly introduced below. The drawings in the following description are merely exemplary; those of ordinary skill in the art can derive other embodiments from them without inventive effort.
The structures, proportions, and sizes shown in this specification are used only to complement the disclosed content for the understanding of those skilled in the art; they do not limit the conditions under which the invention can be implemented and thus carry no essential technical significance. Any structural modification, change of proportion, or adjustment of size that does not affect the effects and objectives of the invention still falls within the scope of the invention.
Fig. 1 is a schematic flow chart of a parking space detection method according to an embodiment of the present invention;
fig. 2 is a flow chart of parking space state detection according to an embodiment of the present invention;
fig. 3 is a schematic view illustrating classification of parking space detection sample data according to an embodiment of the present invention;
fig. 4 is a diagram illustrating a parking space detection and identification effect according to an embodiment of the present invention;
fig. 5 is a schematic view of a parking space detection apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention is described through particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. The described embodiments are merely a part of the embodiments of the invention, not all of them, and are not intended to limit the invention. All other embodiments obtained by a person of ordinary skill in the art without creative effort, based on the embodiments herein, fall within the protection scope of the invention.
In the related art, single-camera parking space detection schemes deploy each camera independently: each camera's video images are transmitted, returned, and displayed separately, so no overall large-scene monitoring picture can be shown. The detection algorithm computes each camera's images separately, accounts for their overlapping parts, plans the parking spaces as a whole, and aggregates the analysis, which increases the server's computing burden and the software development cost while delivering a mediocre display.
In the related art, fisheye cameras recognize the surrounding parking-corner marking lines and recover a parking space's position from the corner positions; but the corners occupy only a small area, and their features are easily destroyed.
In the related art, four fisheye cameras form a bird's-eye-view vision system: Radon-transform line detection is applied to the omnidirectional bird's-eye view stitched from the fisheye images, and detected lines are intersected to locate the parking space corners. The method lacks prior information, and locally missing line segments can cause the algorithm to fail.
In the related art, a fast corner detection algorithm extracts a parking space's characteristic corner points, and after image preprocessing the RANSAC algorithm evaluates corner reliability from the size of the detected corners; the method only suits scenes with distinct corner features.
In the related art, character strings painted on each parking space are recognized by a text detector to identify the space; deploying the markings consumes substantial labor, and portability is poor.
In the related art, parking spaces are segmented at the pixel level and detected directly in the image; but most parking space pixels closely resemble ordinary ground, so relying directly on pixel features gives poor robustness.
Parking space detection algorithms based on traditional image processing rely mainly on changes in parking space pixels and hand-extracted features. They demand little computing power and can identify a space's state quickly, but perform poorly in complex scenes such as occlusion or blurred space lines; illumination and weather changes further complicate line-feature extraction, making accurate space identification and state judgment difficult. They also struggle to distinguish objects inside a space, such as shared bikes, people, construction waste, and debris.
The advent of GPUs greatly increased computing capability and made the big-data processing of neural networks tractable. Mainstream object detection algorithms are now built on convolutional neural networks (CNN); many improved variants of the original CNN algorithms target different detection environments, optimizing detection parameters and improving recognition accuracy. A neural network extracts features globally and automatically for end-to-end training, offers better robustness, and can perform parking-line detection, space-type detection, and space-state classification simultaneously. In the related art, CNNs detect parking space states with good tolerance of environmental influences such as illumination changes, shadows, and partial occlusion, and good algorithmic robustness; but during recognition the amount of usable feature information is small and the detection rate is low.
In view of this, to solve the prior art's problems of complex hardware deployment, high cost, strong environmental influence, and inconvenient use, the present application provides a parking space detection method and device. The specific content of the embodiments of the present application follows.
Referring to fig. 1 and fig. 2, an embodiment of the present invention provides a parking space detection method, including the following steps:
s1, acquiring dynamic video stream images of adjacent cameras in a parking place, performing image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on the dynamic video stream images of the adjacent cameras, and performing splicing and fusion on the dynamic video stream images of the adjacent cameras to obtain spliced and fused parking place panoramic images;
s2, manually drawing parking space coordinates on the panoramic image, constructing a parking space table of a mapping relation between parking space numbers and parking space positions, and cutting a parking space image according to a preset rule according to the parking space table;
s3, extracting features of the cut parking space images, labeling the features to serve as a network prediction model training and verification data set, and training a network prediction model by adopting a convolutional neural network;
and S4, detecting whether the parking space state changes by using a machine vision algorithm, and predicting the parking space state in the panoramic image by using the network prediction model.
In this embodiment, in step S1, to achieve panoramic visualization of the campus parking lot, scene segmentation and parking space state detection are performed on a panoramic image, so the dynamic video stream images of the lot's adjacent cameras are first stitched and fused. After the images to be stitched are acquired, they are preprocessed to reduce the distortion that perspective transformation introduces during stitching (especially multi-image stitching) and to improve the overall result. The SURF algorithm is chosen for image feature extraction, retaining the accuracy of SIFT features while running faster. The RANSAC method eliminates abnormal matching points to obtain the most suitable perspective transformation matrix, improving the robustness of the result. A heterogeneous CPU-GPU parallel architecture raises computing speed and keeps the stitching real-time. When two stitched images are fused, the optimal seam is computed, solving the ghosting and double images that moving objects produce in dynamic video. The overall tone of the stitched images is updated with a mean algorithm under a timed strategy, balancing the brightness and color of the two images and ensuring the panoramic video stitches smoothly in real time.
During image preprocessing, a barrel transform is applied to the two acquired dynamic video stream images of adjacent cameras to be stitched; its main purpose is to reduce the deformation the images undergo after perspective transformation and avoid distortion in the stitched result.
Specifically, the barrel transform improves the stitching because the overlap of the images to be stitched lies mainly in the edge regions: after the transform, the divergence of edge matches decreases, the vertical-coordinate differences of matching points shrink as far as possible, and post-perspective deformation is reduced.
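As an illustrative aid (the disclosure contains no code), the following Python/OpenCV sketch applies a mild radial barrel remap of the kind described above. The distortion model and the constant k are assumptions chosen for illustration, not parameters from the patent.

    import cv2
    import numpy as np

    def barrel_warp(img, k=-2e-7):
        """Apply a mild radial (barrel) remap. A negative k pinches the
        edge regions inward, so they stretch less under the later
        perspective transform. k is an illustrative tuning constant and
        must keep (1 + k * r2) positive over the whole image."""
        h, w = img.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        r2 = (xs - cx) ** 2 + (ys - cy) ** 2
        scale = 1.0 + k * r2                    # radial scaling factor
        map_x = cx + (xs - cx) / scale          # sample farther out at edges
        map_y = cy + (ys - cy) / scale
        return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)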
In this embodiment, to give the stitching good accuracy, robustness, and real-time performance, the SURF algorithm is used to extract the image feature points.
Specifically, SURF borrows the simplifying-approximation idea of the SIFT algorithm and approximates the Gaussian second-order differential templates in the determinant of Hessian (DoH), so filtering the image with a template requires only a few simple additions and subtractions, independent of the template's size. Results show the SURF algorithm runs about three times faster than SIFT with better overall performance. SURF feature point extraction and description comprise four main steps: detecting scale-space extrema, refining feature point positions, computing each feature point's description information, and generating the feature vector describing the point; the specific process is not repeated here.
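A minimal sketch of the SURF extraction step, assuming Python with opencv-contrib-python built with the non-free modules enabled (SURF is patented and lives in cv2.xfeatures2d, not the core cv2 namespace); the Hessian threshold is an illustrative value.

    import cv2

    def surf_features(img, hessian_threshold=400):
        """Detect SURF keypoints and compute their descriptors."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
        keypoints, descriptors = surf.detectAndCompute(gray, None)
        return keypoints, descriptors           # 64-dim descriptors by default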
In this embodiment, image feature matching pairs up the feature points extracted from the two adjacent cameras' dynamic video stream images to be stitched.
Specifically, SURF matching computes the Euclidean distance between feature point descriptors: for a descriptor p_i, the two neighbouring descriptors nearest in Euclidean distance, q_i' and q_i'', are found, and the ratio r of the distance between p_i and q_i' to the distance between p_i and q_i'' is computed. If r is below a specified threshold, the match succeeds and (p_i, q_i') is a matching point pair in the images; otherwise the match fails. The method is simple and fast but produces mismatches: the matched points are not one hundred percent correct. The perspective matrix required by subsequent image registration is computed from 4 selected pairs of feature points, and selecting incorrect pairs yields a wrong matrix. To improve the robustness of the result, incorrect matches must therefore be eliminated to obtain a correct perspective matrix, and RANSAC is used to screen out mismatched points during image registration. In particular, feature point matching is computationally heavy, so a heterogeneous CPU-GPU parallel structure is used, raising computing speed and keeping the whole stitching process real-time.
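A sketch of the nearest-neighbour ratio test just described, under the same Python/OpenCV assumptions; the 0.7 ratio threshold is illustrative.

    import cv2

    def ratio_match(desc1, desc2, ratio=0.7):
        """Keep a match only when the nearest neighbour is clearly
        closer than the second-nearest (Euclidean-distance ratio test)."""
        matcher = cv2.BFMatcher(cv2.NORM_L2)    # L2 suits float SURF descriptors
        knn = matcher.knnMatch(desc1, desc2, k=2)
        good = []
        for pair in knn:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        return good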
In this embodiment, image registration, determining the overlapping region and position between the images to be stitched, is the core of the whole image stitching.
Specifically, registration based on SURF feature points constructs the transformation matrix between the image sequences from the matching point pairs and thereby stitches the panoramic image. RANSAC screens mismatched points before registration. From the matched feature points, 4 distinct pairs of feature-match coordinates are extracted at random and used to compute a perspective matrix H1 (a 3x3 matrix); all feature-match points of the other image are mapped through H1 into the first image's coordinate space, and the Euclidean distance to the first image's actual matching coordinates is computed (to verify whether the computed H1 satisfies most of the feature matches). The process is repeated: another four random pairs of feature-match coordinates give a perspective matrix H2 and its Euclidean distance. Finally, the perspective matrix with the smallest Euclidean distance (the matrix satisfying the most feature matches, hence the best) is taken as the final result, while matching feature points with overly large Euclidean distances (points not satisfied by the matrices computed from the 4-pair samples) are removed. In particular, feature point extraction is computationally heavy, so a heterogeneous CPU-GPU parallel structure is used, raising computing speed and keeping the whole stitching process real-time.
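A sketch of the registration step using OpenCV's built-in RANSAC estimator: cv2.findHomography internally repeats the sample-4-pairs, fit-a-3x3-matrix, count-agreeing-points loop described above, so it stands in here for the manual H1/H2 comparison; the reprojection threshold is illustrative.

    import cv2
    import numpy as np

    def estimate_homography(kp1, kp2, matches, reproj_thresh=4.0):
        """Estimate H mapping the second image into the first image's
        coordinate space from ratio-test matches (query = image 1,
        train = image 2)."""
        src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
        return H, mask                          # mask flags the inlier matches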
In this embodiment, image fusion applies perspective matrix transformation to the second image using the perspective matrix H obtained from registration, mapping it into the first image's coordinate space; a simple translation then achieves seamless stitching with the first image.
Because an ordinary camera selects exposure parameters automatically, input images differ in brightness, and obvious light-dark changes appear at the two ends of the stitching seam. The seam therefore needs treatment during fusion. This embodiment adopts an optimal-seam algorithm: from the perspective matrix H obtained by registration it computes the overlap region of the first and second images, computes a difference image over that overlap, derives the intensity values of the matrix from it, and selects the path of minimum total intensity as the optimal seam. Fusing the images this way eliminates the ghosting and double images that moving objects produce in the overlapping part of the video.
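A minimal fusion sketch under the assumption that H maps the second camera's frame into the first camera's coordinate space; production code would blend along the computed optimal seam instead of overwriting pixels as this sketch does.

    import cv2

    def warp_and_paste(img1, img2, H):
        """Warp the second frame with H and paste the first frame over it."""
        h1, w1 = img1.shape[:2]
        h2, w2 = img2.shape[:2]
        canvas = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
        canvas[0:h1, 0:w1] = img1               # crude; seam blending omitted
        return canvas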
In this embodiment, the video images are equalized, that is, the brightness and hue of the stitched video surveillance images are adjusted. Long-term operation and different weather conditions affect image brightness and color, so the stitched images drift as well; the stitched video images are therefore equalized with an image-mean and white-balance method under a timed update strategy.
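A sketch of the equalization step: a gray-world white balance plus a brightness shift toward a target mean that a timer would refresh periodically; the function name and target value are illustrative assumptions.

    import numpy as np

    def equalize(img, target_mean=128.0):
        """Gray-world white balance, then pull brightness to target_mean."""
        img = img.astype(np.float32)
        channel_means = img.reshape(-1, 3).mean(axis=0)
        img *= channel_means.mean() / (channel_means + 1e-6)  # balance channels
        img += target_mean - img.mean()                       # brightness shift
        return np.clip(img, 0, 255).astype(np.uint8)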
In this embodiment, the campus parking lot is large: after stitching, the video covers more than 140 parking spaces, and the space types are complex, including near-view, far-view, forward, reverse, lateral, and occluded-area spaces, which greatly increases the difficulty of collecting, labeling, and training data for a deep network model.
Referring to fig. 3, since camera positions are fixed, and after comparison with methods such as the Hough transform that automatically extract parking space lines and filter out non-space-line parts, the invention adopts the more flexible approach of manually drawing parking space coordinates and constructing a parking space table mapping space numbers to positions. Parking space images are then cropped automatically according to the table and set rules, as sketched below. Features are extracted from the images and labeled to expand the network model's training and validation data sets, and a convolutional neural network continually trains the prediction model while network parameters are continually optimized and adjusted. During the lot's daily operation, machine vision algorithms such as image segmentation and edge detection are applied, the convolutional neural network model predicts the space states in the images, technologies such as multithreading and message middleware realize panoramic visual monitoring of the campus lot, and the positions of free spaces are fed back to the human-machine interface, so drivers and managers conveniently obtain real-time lot information.
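A sketch of the parking space table and rule-based cropping, assuming the table maps a space number to its hand-drawn quadrilateral on the panorama; the table entries, field layout, and output size are illustrative.

    import cv2
    import numpy as np

    # Hypothetical table: space number -> hand-drawn quadrilateral (pixels).
    SLOT_TABLE = {
        "A-01": np.float32([[120, 340], [210, 340], [214, 420], [118, 420]]),
        # ... one entry per parking space drawn on the panorama
    }

    def crop_slot(panorama, slot_id, out_size=(128, 128)):
        """Rectify one space's quadrilateral into a fixed-size patch."""
        quad = SLOT_TABLE[slot_id]
        w, h = out_size
        rect = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(quad, rect)
        return cv2.warpPerspective(panorama, M, out_size)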
In this embodiment, to improve parking space detection efficiency, the number of network model predictions must be minimized while keeping predictions accurate, so a machine vision algorithm first makes a preliminary check of whether a space's state has changed. This embodiment realizes image segmentation with a contour extraction algorithm: when a space is empty, its white marking lines contrast strongly with the surroundings, and after a vehicle parks, the body contour stands out from the surroundings, so an edge detection method segments the space image well. Edges are extracted from the extrema of the image's first derivative: where the image varies slowly, adjacent gray levels change little and the gradient magnitude is small enough to treat as 0; at image edges the gray level changes sharply, so the edges are segmented out, realizing segmentation of the vehicle contour.
Experimental comparison of multiple edge detection algorithms, including Canny, Sobel, Laplacian, Prewitt, and Roberts Cross, led this embodiment to adopt the Sobel algorithm, which segments the required contours while avoiding contour lines that are too thick or unclear. The Sobel algorithm is one of the most important algorithms in pixel-level image edge detection: it convolves the image with two 3x3 kernels, one horizontal and one vertical, to obtain approximate horizontal and vertical brightness differences, where Gx and Gy approximate the gray-level partial derivatives in the two directions. For each point, the gradient estimate

G = √(Gx² + Gy²)

is computed and compared with a set threshold to determine whether the point is a boundary point.
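A sketch of the Sobel step as described: horizontal and vertical 3x3 convolutions give Gx and Gy, and the gradient magnitude G = √(Gx² + Gy²) is thresholded to decide boundary points; the threshold value is illustrative.

    import cv2
    import numpy as np

    def sobel_edges(gray, threshold=80):
        """Binary edge map from the Sobel gradient magnitude."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal derivative
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical derivative
        magnitude = np.sqrt(gx ** 2 + gy ** 2)
        return (magnitude > threshold).astype(np.uint8) * 255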
Under real working conditions, unevenly distributed illumination and changing light intensity mean raw images often carry noise, so to improve the algorithm's robustness, images are preprocessed before parking space detection to remove environment-induced random noise, yielding regularized, denoised images. Delimited regions and simple algorithms reduce the computation load, and judgments remain possible when a camera shoots at an oblique angle. The accuracy of the machine vision algorithm is improved through the creative use of thresholds: setting different thresholds for cameras at different positions and for different spaces avoids the mutual interference of light, shadow, and parking spaces, improving judgment accuracy. To improve threshold accuracy further, this embodiment experimentally collected a large amount of data across different times, weather, illumination, environments, and seasons, computed and analyzed it, and built a mapping table from space number and space coordinates to thresholds under the various conditions; a corresponding hash table is built at system start-up. When machine vision algorithms such as edge detection analyze a space image, the hash table is queried by the current conditions (natural conditions such as weather and illumination can be obtained by calling internet services in real time) to determine the applicable threshold; the change in the space's state is judged against this instantaneous threshold, and if a space turns from free to occupied, the network prediction model further predicts whether a vehicle occupies it.
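A sketch of the condition-dependent threshold lookup, assuming a hash table keyed by camera, space, and condition built at system start-up; the keys, condition labels, and values are illustrative.

    # Hypothetical mapping built at start-up from the collected data.
    EDGE_THRESHOLDS = {
        ("cam-03", "A-01", "sunny"):  95,
        ("cam-03", "A-01", "cloudy"): 70,
        ("cam-03", "A-01", "night"):  45,
    }

    def slot_threshold(camera_id, slot_id, condition, default=80):
        """Look up the instantaneous edge threshold for one space."""
        return EDGE_THRESHOLDS.get((camera_id, slot_id, condition), default)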
For convolutional neural network model training and prediction, given the campus lot's complex parking space types, this embodiment classifies the spaces and builds separate network models for near-view, far-view, forward, reverse, lateral, and occluded-area spaces. Corresponding space images are cropped automatically per model according to set rules; features are extracted and labeled to expand each model's training and validation data sets, and the convolutional neural network continually trains the corresponding prediction models while network parameters are continually optimized and adjusted. Unlike a traditional binary classification network, the models predict not only whether a space is occupied but also abnormal occupation, accurately identifying shared bikes, people, construction waste, debris, and the like inside a space. For prediction, to raise efficiency further, this embodiment uses multithreading: different threads call the model corresponding to each parking space class, as sketched below.
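A sketch of the multithreaded dispatch: spaces are grouped by type and each group is scored by its own model on a worker thread. The model objects and their predict() interface are assumptions, not APIs from the disclosure.

    from concurrent.futures import ThreadPoolExecutor

    def predict_states(patches_by_type, models, max_workers=6):
        """patches_by_type: {'near': [(slot_id, patch), ...], ...}
        models: {'near': model, ...}, one CNN per parking space type."""
        def run(slot_type):
            model = models[slot_type]
            return [(sid, model.predict(patch))
                    for sid, patch in patches_by_type[slot_type]]

        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            groups = pool.map(run, patches_by_type)   # one task per space type
        return [result for group in groups for result in group]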
Referring to fig. 4, the technical scheme of this embodiment has been applied to a science-park parking lot at No. 107 Dongsi North Street, Dongcheng District, Beijing, where vehicles enter and exit frequently, entrance and exit passage is slow, and parking spaces are relatively scarce. The park's parking area is large, a single camera may cover more than 100 spaces, and the space types are complex, including near-view, far-view, forward, reverse, lateral, and occluded-area spaces, which greatly increases the difficulty of collecting, labeling, and training deep network model data. For space-state prediction, if a deep network model were used exclusively, testing shows that under a mainstream server configuration each space image takes about 30 milliseconds to predict, so each frame needs about 3000 milliseconds, and real-time performance cannot be guaranteed. With the delivered product, the lot's space occupancy is obtained in real time during daily operation, shortening the search for free spaces, improving the lot's throughput, and raising its intelligent management level overall.
In summary, the embodiments of the present invention acquire dynamic video stream images from adjacent cameras in the parking lot; perform image preprocessing, feature extraction, feature matching, registration, fusion and equalization on them; and then stitch and fuse the adjacent cameras' images into a panoramic image of the lot. Parking space coordinates are drawn manually on the panorama, a parking space table mapping space numbers to positions is constructed, and space images are cropped by preset rules. Features are extracted from the cropped images and labeled as the training and validation data set of a network prediction model trained with a convolutional neural network. A machine vision algorithm detects whether a space's state has changed, and the model predicts the space states in the panorama. Under this scheme a small number of cameras covers the whole lot; moreover, since campus lots already install surveillance cameras for security, the pre-installed cameras are used directly and, based on video stitching technology, the lot's spaces are detected and analyzed to judge whether each space is free or occupied, improving lot throughput while making full use of existing equipment and reducing investment cost. Built on convolutional neural networks (CNN), the scheme displays, models, and segments the full scene of the lot's images through machine vision and video stitching technology; the CNN fuses automatically extracted parking lot image features and learns autonomously, continually optimizing detection parameters, ensuring small-target detection in wide scenes, and further improving detection performance through multithreading and message middleware technology.
It should be noted that the method of the embodiments of the present disclosure may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may only perform one or more steps of the method of the embodiments of the present disclosure, and the devices may interact with each other to complete the method.
It should be noted that the above describes some embodiments of the disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Referring to fig. 5, based on the same inventive concept, corresponding to the parking space detection method according to any of the embodiments described above, the present application further provides a parking space detection apparatus, including:
the system comprises an image acquisition module 1, a storage module and a display module, wherein the image acquisition module is used for acquiring dynamic video stream images of cameras adjacent to a parking place;
the image processing module 2 is used for performing image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on the dynamic video stream images of the adjacent cameras, and then performing splicing and fusion on the dynamic video stream images of the adjacent cameras to obtain spliced and fused panoramic images of the parking places;
the parking space processing module 3 is used for manually drawing parking space coordinates on the panoramic image, constructing a parking space table of a mapping relation between a parking space number and a parking space position, and cutting a parking space image according to a preset rule according to the parking space table;
the model training module 4 is used for extracting features of the cut parking space images, labeling the extracted features as a network prediction model training and verification data set, and training the network prediction model by adopting a convolutional neural network;
and the model prediction module 5 is used for detecting whether the parking space state changes by using a machine vision algorithm and predicting the parking space state in the panoramic image by using the network prediction model.
In this embodiment, the image processing module 2 includes an image preprocessing submodule 21 for applying a barrel transform to the acquired dynamic video stream images of adjacent cameras;
the image processing module 2 includes an image feature extraction submodule 22 for extracting feature points from the barrel-transformed dynamic video stream images of adjacent cameras, the extraction using the SURF algorithm;
the image processing module 2 includes an image feature matching submodule 23 for matching the feature points extracted from the adjacent cameras' dynamic video stream images in corresponding pairs, the pairwise matching using the RANSAC algorithm.
In this embodiment, the image processing module 2 includes an image registration submodule 24 for determining the overlapping area and overlapping position of the adjacent cameras' dynamic video stream images to be stitched;
the strategy of the image registration submodule 24 is:
a) randomly extract several feature-match coordinates from the first camera's dynamic video stream image and compute a first perspective matrix from the extracted coordinates;
b) map all feature-match points of the second camera's dynamic video stream image into the coordinate space of the first camera's image through the first perspective matrix, and compute a first Euclidean distance to the actual coordinates of the first camera's matching points;
repeat steps a) and b) to obtain a second perspective matrix and the second Euclidean distance computed with it;
compare the first Euclidean distance with the second Euclidean distance, taking the perspective matrix with the smaller distance as the result;
the image processing module 2 includes an image fusion submodule 25 for transforming the second camera's dynamic video stream image with the perspective matrix obtained from image registration, mapping it into the coordinate space of the first camera's image, and obtaining the stitched result of the two cameras' dynamic video stream images;
the image processing module 2 includes an image equalization submodule 26 for adjusting the brightness and hue of the stitched result obtained from image fusion, and equalizing it using a strategy of periodically updating the image mean and white balance.
In this embodiment, when the model prediction module 5 uses a machine vision algorithm to detect whether a parking space's state has changed, image edges are detected with the Sobel contour extraction algorithm, and edges are extracted from the extrema of the image's first derivative.
It should be noted that, because the information interaction and execution processes between the modules of the above apparatus are based on the same concept as the method embodiments of the present application, their technical effects are the same as those of the method embodiments; for specifics, refer to the description of the method embodiments above, which is not repeated here.
Based on the same inventive concept, corresponding to the method of any embodiment described above, the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the program, the parking space detection method described in any embodiment above is implemented.
Fig. 6 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 1020 and called to be executed by the processor 1010.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component within the device (not shown) or may be external to the device to provide corresponding functionality. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (for example, USB, network cable, etc.), and can also realize communication in a wireless mode (for example, mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only the components necessary to implement the embodiments of the present disclosure, and need not include all of the components shown in the figures.
The electronic device of the above embodiment is used to implement the corresponding parking space detection method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, corresponding to any of the above-mentioned embodiment methods, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the parking space detection method according to any of the above-mentioned embodiments.
Computer-readable media of the present embodiments, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The computer instructions stored in the storage medium of the above embodiment are used to enable the computer to execute the parking space detection method according to any of the above embodiments, and have the beneficial effects of the corresponding method embodiments, which are not described herein again.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the spirit of the present application, features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the embodiments of the present application exist as described above; for brevity, they are not provided in detail.
In addition, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion and so as not to obscure the embodiments of the application. Further, devices may be shown in block diagram form in order to avoid obscuring the embodiments of the application, which also reflects the fact that the specifics of implementing such block diagram devices are highly dependent upon the platform within which the embodiments are to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that the embodiments can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, the discussed embodiments may be used with other memory architectures, such as dynamic RAM (DRAM).
The present embodiments are intended to embrace all such alternatives, modifications and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A parking space detection method is characterized by comprising the following steps:
acquiring dynamic video stream images of adjacent cameras in a parking place, performing image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on the dynamic video stream images of the adjacent cameras, and then splicing and fusing the dynamic video stream images of the adjacent cameras to obtain a spliced and fused panoramic image of the parking place;
manually drawing parking space coordinates on the panoramic image, constructing a parking space table of the mapping relation between parking space numbers and parking space positions, and cutting out parking space images according to the parking space table and preset rules;
extracting features of the cut parking space images, labeling the extracted features, using the labeled features as a network prediction model training and verification data set, and training a network prediction model by adopting a convolutional neural network;
and detecting whether the parking space state changes by using a machine vision algorithm, and predicting the parking space state in the panoramic image by using the network prediction model.
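The training step in claim 1 does not fix a network architecture. As a minimal sketch of what such a convolutional occupancy classifier could look like, assuming PyTorch, the snippet below uses illustrative layer sizes, a hypothetical SlotOccupancyNet class name, and an assumed 64x64 input crop; none of these specifics come from the claims.

```python
# Minimal sketch of a parking-space occupancy classifier, assuming PyTorch.
# The architecture, the 64x64 crop size and the two-class output are
# illustrative assumptions; the claims only require a convolutional network.
import torch
import torch.nn as nn

class SlotOccupancyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)  # occupied / vacant

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train_step(model, crops, labels, optimizer, criterion):
    """One optimization step on a batch of cropped parking space images."""
    optimizer.zero_grad()
    loss = criterion(model(crops), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```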
2. The parking space detection method according to claim 1, wherein in the image preprocessing process, barrel transformation is performed on the acquired dynamic video stream images of the adjacent cameras;
in the image feature extraction process, feature points are extracted from the barrel-transformed dynamic video stream images of the adjacent cameras, the feature point extraction using the SURF algorithm;
in the image feature matching process, the feature points extracted from the dynamic video stream images of the adjacent cameras are matched in pairwise correspondence, the pairwise correspondence matching using the RANSAC algorithm.
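For illustration, the preprocessing, SURF extraction and RANSAC-filtered matching of claim 2 could be sketched with OpenCV as below. Note that SURF ships only with the opencv-contrib build, and the camera_matrix and dist_coeffs arguments are assumed calibration values; the 0.7 ratio and the 5.0 reprojection threshold are conventional defaults, not values taken from the claims.

```python
# Sketch of claim 2's preprocessing, feature extraction and matching,
# assuming OpenCV. SURF requires the opencv-contrib package; camera_matrix
# and dist_coeffs are assumed calibration values for the camera lens.
import cv2
import numpy as np

def correct_barrel_distortion(img, camera_matrix, dist_coeffs):
    # Barrel distortion is captured by the radial terms of dist_coeffs.
    return cv2.undistort(img, camera_matrix, dist_coeffs)

def surf_features(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    return surf.detectAndCompute(gray, None)  # keypoints, descriptors

def match_features(kp1, des1, kp2, des2):
    # Pairwise correspondence via kNN matching with Lowe's ratio test,
    # then RANSAC rejects outlier correspondences while fitting a homography.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask
```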
3. The parking space detection method according to claim 2, wherein in the image registration process, determining the overlapping area and the overlapping position of the dynamic video stream images of the adjacent cameras to be spliced comprises:
a) Randomly extracting a plurality of feature matching coordinates from a dynamic video stream image of a first camera, and calculating a first perspective matrix by using the extracted feature matching coordinates;
b) Mapping all feature matching points of the dynamic video stream image of the second camera into the coordinate space of the dynamic video stream image of the first camera through the first perspective matrix, and computing a first Euclidean distance between the mapped points and the actual coordinates of the matching points in the dynamic video stream image of the first camera;
repeating the steps a) and b) to obtain a second perspective matrix and a second Euclidean distance obtained by using the second perspective matrix;
and comparing the first Euclidean distance with the second Euclidean distance, and taking the perspective matrix with the smaller Euclidean distance as the calculation result.
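Steps a) and b) amount to sampling four correspondences, fitting a candidate perspective matrix, and scoring it by its total Euclidean reprojection error. A self-contained sketch, assuming OpenCV and using dummy point arrays in place of real SURF matches:

```python
# Sketch of claim 3's registration loop. pts_cam1 / pts_cam2 are dummy
# matched coordinates standing in for real SURF matches.
import cv2
import numpy as np

def fit_candidate(pts_cam2, pts_cam1, rng):
    # Step a): four random correspondences determine one perspective matrix.
    idx = rng.choice(len(pts_cam1), size=4, replace=False)
    H = cv2.getPerspectiveTransform(pts_cam2[idx].astype(np.float32),
                                    pts_cam1[idx].astype(np.float32))
    # Step b): map all of camera 2's matched points into camera 1's
    # coordinate space and sum the Euclidean distances to the actual points.
    projected = cv2.perspectiveTransform(
        pts_cam2.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
    return H, np.linalg.norm(projected - pts_cam1, axis=1).sum()

rng = np.random.default_rng(0)
pts_cam1 = rng.uniform(0, 640, (50, 2))        # dummy matched coordinates
pts_cam2 = pts_cam1 + np.array([120.0, 4.0])   # pretend camera 2 is offset

H1, e1 = fit_candidate(pts_cam2, pts_cam1, rng)   # first candidate
H2, e2 = fit_candidate(pts_cam2, pts_cam1, rng)   # second candidate
best_H = H1 if e1 < e2 else H2                    # smaller distance wins
```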
4. The parking space detection method according to claim 3, wherein in the image fusion process, a perspective matrix obtained in the image registration process is used to perform perspective matrix transformation on the dynamic video stream image of the second camera, and the dynamic video stream image of the second camera is mapped into the coordinate space of the dynamic video stream image of the first camera, so as to obtain a spliced image result of the dynamic video stream image of the first camera and the dynamic video stream image of the second camera.
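The fusion of claim 4 is essentially a perspective warp of the second camera's frame into the first camera's coordinate space, followed by compositing. A minimal sketch, where the doubled canvas width is an assumption for a side-by-side camera pair:

```python
# Sketch of claim 4's fusion step, assuming OpenCV. H is the perspective
# matrix from registration; the doubled canvas width is an assumption.
import cv2

def fuse(img1, img2, H):
    h, w = img1.shape[:2]
    # Warp camera 2's frame into camera 1's coordinate space.
    panorama = cv2.warpPerspective(img2, H, (w * 2, h))
    panorama[0:h, 0:w] = img1  # camera 1 occupies the left of the canvas
    return panorama
```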
5. The parking space detection method according to claim 4, wherein in the image equalization process, the brightness and color tone of the spliced image result obtained in the image fusion process are adjusted, the spliced image result being equalized by a strategy of updating the image mean value and white balance at regular intervals.
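A sketch of such an equalization step, assuming a gray-world white balance; the 60-second update interval and the target mean of 128 are assumptions, since the claim only requires updating the image mean and white balance at regular intervals:

```python
# Sketch of claim 5's equalization: gray-world white balance plus brightness
# normalization, with the correction gains refreshed at a fixed interval.
# The 60 s interval and the target mean of 128 are assumptions.
import time
import numpy as np

TARGET_MEAN = 128.0
UPDATE_INTERVAL = 60.0   # seconds between gain updates
_last_update, _gain = 0.0, np.ones(3)

def equalize(panorama):
    global _last_update, _gain
    img = panorama.astype(np.float32)
    if time.monotonic() - _last_update > UPDATE_INTERVAL:
        channel_means = img.reshape(-1, 3).mean(axis=0)
        _gain = channel_means.mean() / channel_means   # gray-world white balance
        _gain *= TARGET_MEAN / img.mean()              # pull brightness to target
        _last_update = time.monotonic()
    return np.clip(img * _gain, 0, 255).astype(np.uint8)
```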
6. The parking space detection method according to claim 1, wherein in the process of detecting whether the parking space state changes by using a machine vision algorithm, the Sobel contour extraction algorithm is used for image edge detection, and edges are extracted from the extrema of the first derivative of the image.
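For illustration, the Sobel-based change detection of claim 6 could be sketched as below with OpenCV; comparing the total edge response of a parking space crop against a stored reference is one plausible reading of "detecting whether the parking space state changes", and the 0.15 threshold is an assumption:

```python
# Sketch of claim 6's change detection, assuming OpenCV. Sobel approximates
# the first derivative of the image; edges sit at its extrema. The 0.15
# relative threshold is an illustrative assumption.
import cv2
import numpy as np

def edge_map(slot_img):
    gray = cv2.cvtColor(slot_img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical derivative
    return cv2.magnitude(gx, gy)

def state_changed(current_crop, reference_crop, threshold=0.15):
    cur = edge_map(current_crop).sum()
    ref = edge_map(reference_crop).sum()
    return abs(cur - ref) / max(ref, 1e-6) > threshold
```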
7. A parking space detection device, comprising:
the image acquisition module is used for acquiring dynamic video stream images of adjacent cameras in a parking place;
the image processing module is used for carrying out image preprocessing, image feature extraction, image feature matching, image registration, image fusion and image equalization on the dynamic video stream images of the adjacent cameras, and then carrying out splicing and fusion on the dynamic video stream images of the adjacent cameras to obtain spliced and fused panoramic images of the parking places;
the parking space processing module is used for manually drawing parking space coordinates on the panoramic image, constructing a parking space table of a mapping relation between the parking space number and the parking space position, and cutting a parking space image according to a preset rule according to the parking space table;
the model training module is used for extracting features of the cut parking space images, labeling the extracted features, using the labeled features as a network prediction model training and verification data set, and training the network prediction model by adopting a convolutional neural network;
and the model prediction module is used for detecting whether the parking space state changes by utilizing a machine vision algorithm and predicting the parking space state in the panoramic image by using the network prediction model.
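A structural sketch of how the five modules of claim 7 could be composed, where every class and method name is a hypothetical placeholder; the claim defines only the modules' responsibilities:

```python
# Structural sketch of the five modules in claim 7. All names here are
# hypothetical placeholders; the claim specifies responsibilities, not APIs.
class ParkingSpaceDetector:
    def __init__(self, acquisition, processing, slots, trainer, predictor):
        self.acquisition = acquisition  # image acquisition module
        self.processing = processing    # splicing / fusion module
        self.slots = slots              # parking space table and cropping
        self.trainer = trainer          # model training module
        self.predictor = predictor      # change detection and prediction

    def run_once(self):
        frames = self.acquisition.grab_adjacent_frames()
        panorama = self.processing.stitch(frames)
        crops = self.slots.crop(panorama)
        # Only re-run the prediction model on slots whose state changed.
        changed = [c for c in crops if self.predictor.state_changed(c)]
        return {c.slot_id: self.predictor.predict(c) for c in changed}
```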
8. The parking space detection device according to claim 7, wherein the image processing module comprises an image preprocessing sub-module, and the image preprocessing sub-module is used for performing barrel transformation on the acquired dynamic video stream images of the adjacent cameras;
the image processing module comprises an image feature extraction submodule, wherein the image feature extraction submodule is used for extracting feature points from the barrel-transformed dynamic video stream images of the adjacent cameras, the feature point extraction using the SURF algorithm;
the image processing module comprises an image feature matching submodule, wherein the image feature matching submodule is used for matching the feature points extracted from the dynamic video stream images of the adjacent cameras in pairwise correspondence, the pairwise correspondence matching using the RANSAC algorithm.
9. The parking space detection device according to claim 8, wherein the image processing module comprises an image registration sub-module, and the image registration sub-module is configured to determine an overlapping area and an overlapping position of the dynamic video stream images of the adjacent cameras to be spliced;
the strategy of the image registration submodule is as follows:
a) Randomly extracting a plurality of feature matching coordinates from a dynamic video stream image of a first camera, and calculating a first perspective matrix by using the extracted feature matching coordinates;
b) Mapping all feature matching points of the dynamic video stream image of the second camera into the coordinate space of the dynamic video stream image of the first camera through the first perspective matrix, and computing a first Euclidean distance between the mapped points and the actual coordinates of the matching points in the dynamic video stream image of the first camera;
repeating the steps a) and b) to obtain a second perspective matrix and a second Euclidean distance obtained by using the second perspective matrix;
comparing the first Euclidean distance with the second Euclidean distance, and taking the perspective matrix with the smaller Euclidean distance as the calculation result;
the image processing module comprises an image fusion submodule, wherein the image fusion submodule is used for carrying out perspective matrix transformation on the dynamic video stream image of the second camera by utilizing a perspective matrix obtained by image registration, mapping the dynamic video stream image of the second camera into a coordinate space of the dynamic video stream image of the first camera and obtaining a splicing image result of the dynamic video stream image of the first camera and the dynamic video stream image of the second camera;
the image processing module comprises an image equalization submodule, and the image equalization submodule is used for adjusting the brightness and color tone of the spliced image result obtained by image fusion, the spliced image result being equalized by a strategy of updating the image mean value and white balance at regular intervals.
10. The parking space detection device according to claim 9, wherein in the model prediction module, when a machine vision algorithm is used to detect whether the parking space state changes, the Sobel contour extraction algorithm is used for image edge detection, and edges are extracted from the extrema of the first derivative of the image.
CN202211675403.7A 2022-12-26 2022-12-26 Parking space detection method and device Pending CN115965934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211675403.7A CN115965934A (en) 2022-12-26 2022-12-26 Parking space detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211675403.7A CN115965934A (en) 2022-12-26 2022-12-26 Parking space detection method and device

Publications (1)

Publication Number Publication Date
CN115965934A true CN115965934A (en) 2023-04-14

Family

ID=87361033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211675403.7A Pending CN115965934A (en) 2022-12-26 2022-12-26 Parking space detection method and device

Country Status (1)

Country Link
CN (1) CN115965934A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237854A (en) * 2023-10-31 2023-12-15 武汉无线飞翔科技有限公司 Parking space distribution method, system, equipment and storage medium based on video identification
CN117237854B (en) * 2023-10-31 2024-03-19 武汉无线飞翔科技有限公司 Parking space distribution method, system, equipment and storage medium based on video identification


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination