CN110688902B - Method and device for detecting vehicle area in parking space - Google Patents
- Publication number
- CN110688902B (application CN201910811564.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- vehicle
- rectangular frame
- detected
- area
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The embodiment of the invention provides a method and a device for detecting a vehicle area in a parking space, wherein the method comprises the following steps: acquiring a first video image of a predetermined parking area, and training a predetermined network model based on the first video image to obtain a vehicle detection model; detecting an image to be detected through the vehicle detection model, and determining whether a vehicle target exists in the image to be detected; if a vehicle target exists, determining the position information of a first rectangular frame of the vehicle area in the image to be detected, and determining the position information of each sideline of the parking space where the vehicle target is located in the image to be detected; and calculating the vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each sideline. By the method and the device, the background area within the detected rectangular frame is identified and removed, avoiding inaccurate vehicle area detection caused by background interference.
Description
Technical Field
The invention relates to the technical field of computer visual target detection, in particular to a method and a device for detecting a vehicle area in a parking space.
Background
Parking management based on high-level video has become an important subject in the construction and development of smart cities in recent years. In this parking management mode, images and video of vehicles and parking spaces are first acquired by a camera, and the vehicle information and vehicle behaviour are then analysed and understood through computer vision technology, so that roadside parking is monitored and managed. Vehicle detection based on video images is a basic step in high-level video parking management, and the accuracy of the detected vehicle area directly affects the accuracy of subsequent applications such as vehicle identification or vehicle behaviour analysis.
Early vehicle detection relied on manually extracted target features, such as the histogram of oriented gradients, with a machine learning algorithm, such as a support vector machine, used to detect and identify vehicles. This approach depends on manual experience, has limited feature expression capability, and is easily affected by complex scenes, so its accuracy cannot meet the requirements of such applications. With the development of deep learning in recent years, deep learning based on convolutional neural networks has achieved remarkable results in fields such as image recognition, image detection and image segmentation. Compared with traditional manual feature extraction, learning feature representations from data improves the generalization capability of the detection model.
However, whether manual feature extraction or convolutional-neural-network feature learning is used, the final output of vehicle detection represents the vehicle region as a rectangular frame. Since the outline of a vehicle is not rectangular, the rectangular frame necessarily contains background regions, and too much background affects the accuracy of subsequent vehicle identification, vehicle behaviour analysis and similar results. To further improve the accuracy of vehicle region positioning, the prior art trains mask-based vehicle segmentation, but this method requires pixel-level labelling of images, which greatly increases the cost and reduces the efficiency of manual annotation.
Disclosure of Invention
The embodiment of the invention provides a method and a device for detecting a vehicle area in a parking space, which can accurately detect the vehicle area in a parking lot scene.
In one aspect, an embodiment of the present invention provides a method for detecting a vehicle area in a parking space, including:
acquiring a first video image of a preset parking area, training a preset network model based on the first video image, and obtaining a vehicle detection model;
detecting the image to be detected through the vehicle detection model, and determining whether a vehicle target exists in the image to be detected;
if a vehicle target exists, determining the position information of a first rectangular frame of a vehicle area in the image to be detected, and determining the position information of each sideline of the parking space where the vehicle target is located in the image to be detected;
and calculating to obtain the vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each side line.
In another aspect, an embodiment of the present invention provides an apparatus for detecting a vehicle area in a parking space, including:
the training module is used for acquiring a first video image of a predetermined parking area, training a predetermined network model based on the first video image, and obtaining a vehicle detection model;
the detection and determination module is used for detecting the image to be detected through the vehicle detection model and determining whether a vehicle target exists in the image to be detected;
the first determining module is used for determining, if a vehicle target exists, the position information of a first rectangular frame of a vehicle area in the image to be detected, and determining the position information of each sideline of the parking space where the vehicle target is located in the image to be detected;
and the calculation module is used for calculating to obtain the vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each side line.
The technical scheme has the following beneficial effects: the vehicle detection model trained on a convolutional neural network can accurately perform vehicle detection on video images of the parking lot scene captured by the camera and accurately determine whether a vehicle target exists in the image to be detected, providing an important precondition for subsequently determining the vehicle area accurately. Meanwhile, the vehicle area in the image to be detected is calculated efficiently and accurately based on the sideline position information of the parking space where the vehicle target is located, avoiding inaccurate vehicle area detection caused by image content outside the vehicle area. This greatly improves the accuracy of vehicle area positioning in a parking lot scene and provides important technical support for improving urban traffic and parking management efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a method for detecting a vehicle region within a parking space according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first rectangular frame detected in an image to be detected according to a preferred embodiment of the present invention;
FIG. 3 is a schematic view of a parking space according to a preferred embodiment of the present invention;
FIG. 4 is a diagram illustrating a second rectangular frame detected according to a preferred embodiment of the present invention;
FIG. 5 is a diagram illustrating a fourth rectangular frame detected according to a preferred embodiment of the present invention;
FIG. 6 is a diagram illustrating a process of detecting a third rectangular frame according to a preferred embodiment of the present invention;
FIG. 7 is a diagram illustrating a process of detecting a fifth rectangular frame according to a preferred embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating the positions of a first rectangular frame, a third rectangular frame and a fifth rectangular frame in an image to be detected according to a preferred embodiment of the present invention;
FIG. 9 is a diagram illustrating a method for calculating a background area at the polygon vertices of a vehicle area in accordance with a preferred embodiment of the present invention;
FIG. 10 is a vehicle zone polygon formed by the common overlap area of the first rectangular frame, the third rectangular frame and the fifth rectangular frame in accordance with a preferred embodiment of the present invention;
FIG. 11 is a schematic view of a vehicle region with the background removed from the vehicle region polygon vertices in accordance with a preferred embodiment of the present invention;
fig. 12 is a schematic structural diagram of an apparatus for detecting a vehicle area in a parking space according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart illustrating a method for detecting a vehicle area in a parking space according to an embodiment of the present invention, including:
101. acquiring a first video image of a preset parking area, training a preset network model based on the first video image, and obtaining a vehicle detection model;
102. detecting the image to be detected through the vehicle detection model, and determining whether a vehicle target exists in the image to be detected;
103. if a vehicle target exists, determining the position information of a first rectangular frame of a vehicle area in the image to be detected, and determining the position information of each sideline of the parking space where the vehicle target is located in the image to be detected;
104. and calculating to obtain the vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each side line.
Further, the obtaining a first video image of a predetermined parking area, training a predetermined network model based on the first video image, and obtaining a vehicle detection model includes:
labeling a vehicle rectangular frame label of each image frame in the first video image;
and training a predetermined network model through a gradient descent algorithm based on the labeled first video image and each vehicle rectangular frame label to obtain a vehicle detection model.
Further, before the step of determining the position information of each sideline of the parking space where the vehicle target is located in the image to be detected, the method comprises the following steps:
acquiring a second video image of the predetermined parking area, determining each parking space in the predetermined parking area according to the second video image, and determining the coordinates of each corner of each parking space;
determining the position information of each sideline of each parking space according to the coordinates of the corners of each parking space;
determining the sidelines on the long sides of the parking space as a first sideline and a second sideline, and the sidelines on the wide sides as a third sideline and a fourth sideline, the sidelines being ordered anticlockwise, starting from the first sideline, as the first sideline, the fourth sideline, the second sideline and the third sideline;
and when the sideline of the long side of the parking space is horizontal in the image frame, determining the first sideline and the second sideline of the parking space sequentially along the positive direction of the longitudinal axis of the plane coordinates; otherwise, determining them sequentially along the positive direction of the transverse axis of the plane coordinates.
Further, the step of calculating a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge line includes:
rotating an image to be detected by a first rotation angle along a first rotation direction to obtain a first rotation image, wherein a first side line or a second side line of a parking space in the first rotation image is in a horizontal position, and a long side of the parking space is sequentially a first side line and a second side line along the positive direction of a longitudinal axis of a plane coordinate;
detecting a first rotating image through the vehicle detection model to obtain a second rectangular frame;
and rotating the second rectangular frame by the first rotation angle along the direction opposite to the first rotation direction to obtain a third rectangular frame.
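The back-rotation in this step is plain planar geometry. As a minimal sketch (the function names and the choice of rotating about the image centre are my own assumptions, not specified in the patent), the four corners of the second rectangular frame detected in the rotated image can be rotated by the opposite angle to recover the third rectangular frame in original-image coordinates:

```python
import math

def rotate_point(pt, center, angle_deg):
    """Rotate a point counter-clockwise around `center` by `angle_deg` degrees."""
    a = math.radians(angle_deg)
    x, y = pt[0] - center[0], pt[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))

def map_box_back(box, center, first_rotation_deg):
    """Map an axis-aligned box (x1, y1, x2, y2) detected in the rotated image
    back into the original image: rotate its four corners by the angle
    opposite to the first rotation. The result (the third rectangular frame)
    is a rotated rectangle, returned as a list of four corner points."""
    x1, y1, x2, y2 = box
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    return [rotate_point(c, center, -first_rotation_deg) for c in corners]
```

The fifth rectangular frame of the next implementation is obtained the same way, using the second rotation angle and direction.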
Further, the step of calculating a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge line includes:
rotating the image to be detected by a second rotation angle along a second rotation direction to obtain a second rotation image, wherein a third edge or a fourth edge of the parking space in the second rotation image is in a horizontal position, and the broadside of the parking space is a fourth side line and a third side line in sequence along the positive direction of the longitudinal axis of the plane coordinate;
detecting a second rotation image through the vehicle detection model to obtain a fourth rectangular frame;
and rotating the fourth rectangular frame by a second rotation angle along the direction opposite to the second rotation direction to obtain a fifth rectangular frame.
Further, the step of calculating a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge line includes:
mapping the third rectangular frame and the fifth rectangular frame to an image to be detected so as to determine the positions of the third rectangular frame and the fifth rectangular frame in the image to be detected;
determining a vehicle area polygon formed by a common overlapping area of the first rectangular frame, the third rectangular frame and the fifth rectangular frame in an image to be detected;
calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon;
and removing the image content of the background area to obtain a vehicle area in the image to be detected.
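The common overlapping area of the first, third and fifth rectangular frames is a polygon intersection. The patent does not name an algorithm for this step; a standard choice (a sketch under that assumption) is Sutherland–Hodgman clipping, applied once per additional frame:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman clipping: intersect `subject` with the convex
    polygon `clip` (both as lists of (x, y) vertices, counter-clockwise)."""
    def inside(p, a, b):
        # left-of test against the CCW clip edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        # intersection of segment p-q with the infinite line through a-b
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        d = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / d
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    output = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        inp, output = output, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j+1) % len(inp)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

def polygon_area(poly):
    """Shoelace area of a simple polygon."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i+1) % len(poly)]
        s += x1*y2 - x2*y1
    return abs(s) / 2.0
```

With the three frames expressed as corner lists, the vehicle area polygon is `clip_polygon(clip_polygon(frame1, frame3), frame5)`.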
Further, the calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon specifically includes:
calculating and determining an inscribed circle at each vertex of the vehicle region polygon based on the vehicle region polygon;
determining a background area formed by each inscribed circle and the vehicle area polygon according to the inscribed circle at each vertex of the determined vehicle area polygon;
removing the image content of the background area in the image to be detected;
and converting each vertex angle of the polygon of the vehicle area into a fillet according to the radian of the inscribed circle to obtain the vehicle area in the image to be detected.
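The geometry of the inscribed circle at a vertex follows from the interior angle there: the centre lies on the angle bisector at distance r/sin(θ/2) from the vertex, and the background cut off between the corner and the arc is the tangent kite minus the circular sector. A hedged sketch (function names are illustrative, and the patent does not state how the radius r is chosen):

```python
import math

def _interior_angle(a, v, b):
    """Interior angle at vertex v between edges v->a and v->b, in radians."""
    ax, ay = a[0]-v[0], a[1]-v[1]
    bx, by = b[0]-v[0], b[1]-v[1]
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    return math.acos((ax*bx + ay*by) / (na*nb))

def corner_circle_center(a, v, b, r):
    """Centre of the circle of radius r inscribed at vertex v, tangent
    to the two adjacent polygon edges."""
    ax, ay = a[0]-v[0], a[1]-v[1]
    bx, by = b[0]-v[0], b[1]-v[1]
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    ua, ub = (ax/na, ay/na), (bx/nb, by/nb)
    bis = (ua[0]+ub[0], ua[1]+ub[1])
    nbis = math.hypot(*bis)
    bis = (bis[0]/nbis, bis[1]/nbis)          # unit angle-bisector direction
    d = r / math.sin(_interior_angle(a, v, b) / 2)
    return (v[0] + d*bis[0], v[1] + d*bis[1])

def corner_background_area(a, v, b, r):
    """Background area removed at vertex v when the corner is rounded:
    kite between the tangent points and v, minus the circular sector."""
    theta = _interior_angle(a, v, b)
    kite = r*r / math.tan(theta/2)            # two tangent right triangles
    sector = r*r * (math.pi - theta) / 2      # sector between tangent points
    return kite - sector
```

For a right-angled vertex with r = 1 this gives the familiar rounded-corner cut area 1 − π/4.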
As shown in fig. 12, a schematic structural diagram of an apparatus for detecting a vehicle area in a parking space includes:
the training module 121 is configured to acquire a first video image of a predetermined parking area, train a predetermined network model based on the first video image, and obtain a vehicle detection model;
the detection and determination module 122 is configured to detect an image to be detected through the vehicle detection model, and determine whether a vehicle target exists in the image to be detected;
the first determining module 123 is configured to determine, if a vehicle target exists, the position information of a first rectangular frame of the vehicle area in the image to be detected, and to determine the position information of each sideline of the parking space where the vehicle target is located in the image to be detected;
and the calculating module 124 is configured to calculate a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge line.
Further, the training module includes:
the marking unit is used for marking a rectangular frame label of the vehicle of each image frame in the first video image;
and the training unit is used for training a predetermined network model through a gradient descent algorithm based on the labeled first video image and each vehicle rectangular frame label to obtain a vehicle detection model.
Further, comprising:
the acquisition and determination module is used for acquiring a second video image of the predetermined parking area, determining each parking space in the predetermined parking area according to the second video image, and determining the coordinates of each corner of each parking space;
the second determining module is used for determining the position information of each sideline of each parking space according to the coordinates of the corners of each parking space;
determining the sidelines on the long sides of the parking space as a first sideline and a second sideline, and the sidelines on the wide sides as a third sideline and a fourth sideline, the sidelines being ordered anticlockwise, starting from the first sideline, as the first sideline, the fourth sideline, the second sideline and the third sideline;
and when the sideline of the long side of the parking space is horizontal in the image frame, determining the first sideline and the second sideline of the parking space sequentially along the positive direction of the longitudinal axis of the plane coordinates; otherwise, determining them sequentially along the positive direction of the transverse axis of the plane coordinates.
Further, the calculation module includes:
the first rotating unit is used for rotating the image to be detected by a first rotating angle along a first rotating direction to obtain a first rotating image, wherein a first side line or a second side line of a parking space in the first rotating image is in a horizontal position, and the long side of the parking space is sequentially a first side line and a second side line along the positive direction of a longitudinal axis of a plane coordinate;
the first detection unit is used for detecting a first rotating image through the vehicle detection model to obtain a second rectangular frame;
and the second rotating unit is used for rotating the second rectangular frame by the first rotating angle along the direction opposite to the first rotating direction to obtain a third rectangular frame.
Further, the calculation module includes:
the third rotating unit is used for rotating the image to be detected by a second rotating angle along a second rotating direction to obtain a second rotating image, wherein a third edge line or a fourth edge line of the parking space in the second rotating image is in a horizontal position, and the wide side of the parking space is a fourth edge line and a third edge line in sequence along the positive direction of the longitudinal axis of the plane coordinate;
the second detection unit is used for detecting a second rotation image through the vehicle detection model to obtain a fourth rectangular frame;
and the fourth rotating unit is used for rotating the fourth rectangular frame by a second rotating angle along the direction opposite to the second rotating direction to obtain a fifth rectangular frame.
Further, the calculation module includes:
the mapping unit is used for mapping the third rectangular frame and the fifth rectangular frame to an image to be detected so as to determine the positions of the third rectangular frame and the fifth rectangular frame in the image to be detected;
a determining unit, configured to determine, in an image to be detected, a vehicle region polygon formed by a common overlapping region of the first rectangular frame, the third rectangular frame, and the fifth rectangular frame;
the calculating unit is used for calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon;
and the removing unit is used for removing the image content of the background area to obtain the vehicle area in the image to be detected.
Further, the calculating unit is specifically configured to:
calculate and determine an inscribed circle at each vertex of the vehicle area polygon based on the vehicle area polygon;
determine, from the inscribed circle at each vertex of the vehicle area polygon, a background area formed between each inscribed circle and the vehicle area polygon;
remove the image content of the background area in the image to be detected;
and convert each vertex angle of the vehicle area polygon into a fillet according to the arc of the inscribed circle, to obtain the vehicle area in the image to be detected.
The technical scheme of the embodiment of the invention has the following beneficial effects: the vehicle detection model trained on a convolutional neural network can accurately perform vehicle detection on video images of the parking lot scene captured by the camera and accurately determine whether a vehicle target exists in the image to be detected, providing an important precondition for subsequently determining the vehicle area accurately. Meanwhile, the vehicle area in the image to be detected is calculated efficiently and accurately based on the sideline position information of the parking space where the vehicle target is located, avoiding inaccurate vehicle area detection caused by image content outside the vehicle area. This greatly improves the accuracy of vehicle area positioning in a parking lot scene and provides important technical support for improving urban traffic and parking management efficiency.
The above technical solutions of the embodiments of the present invention are described in detail below with reference to application examples:
the application example of the invention aims to accurately detect the vehicle area in the scene of the parking lot.
As shown in fig. 1, for example, in a vehicle area detection system, a first video image of a predetermined parking area is captured by a camera, and a predetermined network model, such as a MobileNet-SSD model, i.e. an SSD (Single Shot MultiBox Detector) target detection model using the lightweight convolutional neural network MobileNet as its backbone network, is trained based on the first video image to obtain a vehicle detection model. The image to be detected is then detected through the vehicle detection model to determine whether a vehicle target exists in it. If a vehicle target exists in the image to be detected, the position information of a first rectangular frame, such as rectangular frame A, of the vehicle area in the image to be detected is determined, together with the position information of each sideline of the parking space where the vehicle target is located; the vehicle area in the image to be detected is then calculated based on the position information of the first rectangular frame A and the position information of each sideline of the parking space. A schematic diagram of the first rectangular frame A in the image to be detected is shown in fig. 2.
It should be noted that, as those skilled in the art will understand, the MobileNet-SSD model is a classic target detection model in deep learning. It densely and uniformly samples preselected frames of different scales and aspect ratios at different positions of the picture, extracts an image feature map with a lightweight deep convolutional neural network such as MobileNet, and then classifies the preselected frames and regresses their positions to obtain the accurate position of the target. The embodiment of the invention takes the high-level video scene as an example to realize vehicle detection.
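The overall flow of steps 101 to 104 can be sketched as follows. This is a minimal illustration only: `detect_vehicle_region`, the stub detector and the dict layout are my own names, and the trained MobileNet-SSD model is abstracted as any callable returning boxes.

```python
def detect_vehicle_region(image, detector, space_sidelines):
    """Sketch of the flow: run the trained vehicle detection model on the
    image to be detected; if a vehicle target exists, return the first
    rectangular frame (frame A) together with the sideline position
    information of the parking space, for the downstream vehicle-area
    calculation; otherwise return None.

    `detector` is any callable mapping an image to a list of
    (x1, y1, x2, y2) boxes."""
    boxes = detector(image)
    if not boxes:                      # no vehicle target in the image
        return None
    return {"frame_a": boxes[0], "sidelines": space_sidelines}
```

Downstream, frame A is combined with the sideline positions (and the rotated detections of the later implementations) to compute the final vehicle area.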
In a possible implementation manner, the obtaining a first video image of a predetermined parking area in step 101, training a predetermined network model based on the first video image, and obtaining a vehicle detection model includes: labeling a vehicle rectangular frame label of each image frame in the first video image; and training a predetermined network model through a gradient descent algorithm based on the labeled first video image and each vehicle rectangular frame label to obtain a vehicle detection model.
For example, in a vehicle area detection system, a first video image of the predetermined parking area is captured by a camera, and a vehicle rectangular frame label is manually annotated for each image frame in the first video image. The rectangular frame label of the vehicle target serves as the supervision label when the MobileNet-SSD vehicle detection model is subsequently trained. For example, each image frame x_i in the acquired first video image is annotated to obtain its corresponding rectangular frame label y_i, and a vehicle-detection training database <X, Y> is then built from the annotated image frames, where X = (x_1, x_2, ..., x_n), Y = (y_1, y_2, ..., y_n), and n is the total number of image frames. Each image frame in the training database <X, Y> is resized to 512 pixels x 512 pixels, 300 pixels x 300 pixels and 256 pixels x 256 pixels respectively, and the converted frames are input into the MobileNet-SSD network, with the annotated rectangular frame labels Y used for supervision. The vehicle detection model is iteratively updated with a batch stochastic gradient descent algorithm; during training, the total number of iterations is set to 120,000 and the initial learning rate to 0.001. The optimal MobileNet-SSD vehicle detection model is finally obtained and exported, and is used for subsequent vehicle detection on video images of the parking lot to obtain the rectangular frame of the vehicle area. The learning rate, an important hyper-parameter in supervised learning and deep learning, determines whether and when the objective function can converge to a local minimum.
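The batch stochastic gradient descent loop described above can be illustrated in miniature. The real model is a MobileNet-SSD trained for 120,000 iterations; here a one-dimensional linear model w*x stands in (an assumption purely for illustration) so that the iterative update with a fixed initial learning rate is visible:

```python
import random

def sgd_train(data, lr=0.001, iterations=5000, batch_size=4):
    """Toy batch stochastic-gradient-descent loop: at each iteration,
    sample a mini-batch, compute the gradient of the mean squared error,
    and update the single parameter w with learning rate `lr`.
    `data` is a list of (input, target) pairs."""
    w = 0.0
    for _ in range(iterations):
        batch = random.sample(data, min(batch_size, len(data)))
        # gradient of the mean squared error over the mini-batch
        grad = sum(2 * (w * x - t) * x for x, t in batch) / len(batch)
        w -= lr * grad
    return w

random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 9)]  # targets from true slope 3
w = sgd_train(data)                          # converges towards 3.0
```

The same update rule, applied to the millions of weights of the detection network with the annotated rectangular frames as supervision, is what the patent's training step performs.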
It should be noted that, in the embodiment of the present invention, in order to increase the speed of vehicle detection, the lightweight network MobileNet replaces the VGG backbone network (a convolutional neural network proposed by researchers of the Visual Geometry Group of Oxford University and Google DeepMind) of the original VGG-SSD model to extract image features. Eight convolutional layers are added after the last convolutional layer conv13 of MobileNet, and six convolutional layers, conv11, conv13, conv14_2, conv15_2, conv16_2 and conv17_2, are extracted as features for detection. Finally, two parallel convolutional layers, conv11_mbox_loc and conv11_mbox_conf, are added for regressing the position of the detected vehicle and for the confidence of whether a vehicle is detected, respectively.
Through this embodiment, the vehicle detection model is obtained by training a convolutional neural network, which greatly improves both the speed and the accuracy of vehicle detection and provides an important precondition for subsequently and accurately detecting vehicle areas in the video images of the parking lot scene captured by the camera. Moreover, the collected video images need only simple annotation, rather than precise manual labeling, to train the vehicle detection model, which greatly reduces the cost of vehicle area detection.
In a possible implementation manner, before the step of determining the position information of each side line of the parking space where the vehicle target is located in the image to be detected in step 103, the method includes: acquiring a second video image of the predetermined parking area, determining each parking space in the predetermined parking area according to the second video image, and determining the coordinates of each corner of each parking space; determining the position information of each side line of each parking space according to the coordinates of each corner of each parking space; and, proceeding counterclockwise from the first side line, the side lines of each parking space are, in order, the first side line, the fourth side line, the second side line and the third side line.
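The side lines can be read directly off the corner coordinates. The sketch below derives the four edges in the counterclockwise order a, d, b, c described above; it assumes the corners are supplied counterclockwise starting at the corner where side lines a and c meet, which is an illustrative convention rather than something the patent specifies.

```python
# Hypothetical helper: side lines a, d, b, c of one parking space from its
# four corner coordinates. Corner ordering convention is an assumption.

def space_edges(corners):
    """corners: [(x0, y0), ..., (x3, y3)], counterclockwise.
    Returns each side line as a pair of endpoints, keyed by its name."""
    p0, p1, p2, p3 = corners
    return {
        "a": (p0, p1),  # first side line (long side)
        "d": (p1, p2),  # fourth side line (wide side)
        "b": (p2, p3),  # second side line (long side)
        "c": (p3, p0),  # third side line (wide side)
    }
```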
When the side line of the long side of the parking space is in a horizontal position in the image frame, the first side line and the second side line of the parking space in the image frame are determined in sequence along the positive direction of the longitudinal axis of the plane coordinate; otherwise, the first side line and the second side line of the parking space in the image frame are determined in sequence along the positive direction of the transverse axis of the plane coordinate.
For example, in a vehicle area detection system, a second video image of the predetermined parking area is acquired by a high-mounted camera, each parking space in the predetermined parking area is determined according to the second video image, and the coordinates of each corner of each parking space are determined; the position information of each side line of each parking space is then determined from these corner coordinates. The side lines of the long sides of the parking space are determined as a first side line, such as a, and a second side line, such as b; the side lines of the wide sides are determined as a third side line, such as c, and a fourth side line, such as d. Starting from a and proceeding counterclockwise, the side lines of the parking space are, in order, a, d, b and c, as shown in fig. 3. When the side line of the long side of the parking space is in a horizontal position in the image frame where the parking space is located, side lines a and b of the parking space in the image frame are sequentially determined along the positive direction of the longitudinal axis of the plane coordinate; otherwise, they are sequentially determined along the positive direction of the transverse axis of the plane coordinate.
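The ordering rule in the last sentence can be formalized as below: if the long side is (approximately) horizontal, a and b are ordered along the positive longitudinal (y) axis, otherwise along the positive transverse (x) axis. The helper and its tolerance for "horizontal" are assumed details, not prescribed by the embodiment.

```python
# Assumed formalization of the a/b ordering rule for the two long sides.

def order_long_sides(edge1, edge2):
    """Each edge is ((x1, y1), (x2, y2)). Returns (a, b) in order."""
    def is_horizontal(edge):
        (x1, y1), (x2, y2) = edge
        return abs(y2 - y1) < abs(x2 - x1)  # mostly along the x axis

    def midpoint(edge):
        (x1, y1), (x2, y2) = edge
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    if is_horizontal(edge1):
        key = lambda e: midpoint(e)[1]  # order along the longitudinal axis
    else:
        key = lambda e: midpoint(e)[0]  # order along the transverse axis
    a, b = sorted([edge1, edge2], key=key)
    return a, b
```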
Through this embodiment, the position of each side line of each parking space is determined, which provides the necessary calculation data for subsequently determining the position information of each side line of the parking space where the vehicle target is located in the image to be detected, and thereby greatly improves the accuracy of subsequent vehicle area detection.
In a possible implementation manner, the step 104 of calculating a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge includes: rotating an image to be detected by a first rotation angle along a first rotation direction to obtain a first rotation image, wherein a first side line or a second side line of a parking space in the first rotation image is in a horizontal position, and a long side of the parking space is sequentially a first side line and a second side line along the positive direction of a longitudinal axis of a plane coordinate; detecting a first rotating image through the vehicle detection model to obtain a second rectangular frame; and rotating the second rectangular frame by the first rotation angle along the direction opposite to the first rotation direction to obtain a third rectangular frame.
For example, in a vehicle area detection system, a first video image in a predetermined parking area is captured by a camera, and a predetermined network model is trained based on the first video image to obtain a vehicle detection model. The image to be detected is then detected through the vehicle detection model to determine whether a vehicle target exists in it. If a vehicle target exists, the position information of a first rectangular frame A of the vehicle area in the image to be detected is determined, together with the position information of each side line of the parking space where the vehicle target is located, the first, second, third and fourth side lines of the parking space being a, b, c and d respectively. The image to be detected is rotated by a first rotation angle, such as α degrees, along a first rotation direction to obtain a first rotation image, in which side line a or b of the parking space is in a horizontal position and the long sides of the parking space along the positive direction of the longitudinal axis of the plane coordinate are side lines a and b in sequence; the first rotation image is detected through the vehicle detection model to obtain a second rectangular frame, such as rectangular frame B, as shown in fig. 4; the second rectangular frame B is then rotated by α degrees in the direction opposite to the first rotation direction, resulting in a third rectangular frame, such as rectangular frame B', as shown in fig. 6.
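The rotate/detect/rotate-back steps rest on a plain point rotation: the corners of the detected frame B are rotated by -α to obtain frame B' in the original image coordinates. A minimal sketch is given below; the choice of pivot (here the origin, in practice typically the image center) is an assumption.

```python
# Sketch of the 2D rotation used to map a detected frame back into the
# unrotated image. Pivot choice is an assumption.
import math

def rotate_points(points, angle_deg, pivot=(0.0, 0.0)):
    """Rotate points counterclockwise by angle_deg about pivot."""
    rad = math.radians(angle_deg)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    px, py = pivot
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        out.append((px + dx * cos_a - dy * sin_a,
                    py + dx * sin_a + dy * cos_a))
    return out
```

Rotating the second frame's corners by -α (i.e. the opposite direction) and then by +α returns them exactly, which is why the third frame coincides with the second when α is 0.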
In the embodiment of the present invention, if the first rotation direction is a clockwise direction, the direction opposite to the first rotation direction is a counterclockwise direction; similarly, if the first rotation direction is a counterclockwise direction, the direction opposite to it is a clockwise direction. It should be noted that the angle range of the first rotation angle is [0°, 90°), and if the first side line or the second side line of the parking space where the vehicle target is located is already in a horizontal position in the image to be detected, the first rotation angle is 0 degrees, in which case the third rectangular frame is identical to the second rectangular frame.
In a possible implementation manner, the step 104 of calculating a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge includes: rotating the image to be detected by a second rotation angle along a second rotation direction to obtain a second rotation image, wherein a third edge or a fourth edge of the parking space in the second rotation image is in a horizontal position, and the broadside of the parking space is a fourth side line and a third side line in sequence along the positive direction of the longitudinal axis of the plane coordinate; detecting a second rotation image through the vehicle detection model to obtain a fourth rectangular frame; and rotating the fourth rectangular frame by a second rotation angle along the direction opposite to the second rotation direction to obtain a fifth rectangular frame.
Continuing the example above, in the vehicle area detection system, the image to be detected is rotated by a second rotation angle, such as β degrees, along a second rotation direction to obtain a second rotation image, in which side line c or d of the parking space is in a horizontal position and the broadsides of the parking space along the positive direction of the longitudinal axis of the plane coordinate are side lines d and c in sequence; the second rotation image is detected through the vehicle detection model to obtain a fourth rectangular frame, such as rectangular frame C, as shown in fig. 5; the fourth rectangular frame C is then rotated by β degrees in the direction opposite to the second rotation direction, resulting in a fifth rectangular frame, such as rectangular frame C', as shown in fig. 7. In the embodiment of the present invention, if the second rotation direction is a clockwise direction, the direction opposite to the second rotation direction is a counterclockwise direction; similarly, if the second rotation direction is a counterclockwise direction, the direction opposite to it is a clockwise direction. It should be noted that the angle range of the second rotation angle is [0°, 90°), and if the third side line or the fourth side line of the parking space where the vehicle target is located is already in a horizontal position in the image to be detected, the second rotation angle is 0 degrees, in which case the fifth rectangular frame is identical to the fourth rectangular frame.
In a possible implementation manner, the step 104 of calculating a vehicle region in the image to be detected based on the position information of the first rectangular frame and the position information of each edge includes: mapping the third rectangular frame and the fifth rectangular frame to an image to be detected so as to determine the positions of the third rectangular frame and the fifth rectangular frame in the image to be detected; determining a vehicle area polygon formed by a common overlapping area of the first rectangular frame, the third rectangular frame and the fifth rectangular frame in an image to be detected; calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon; and removing the image content of the background area to obtain a vehicle area in the image to be detected.
For example, as described above, in the vehicle area detection system, when side line a or b of the parking space is not in a horizontal position in the image to be detected, the third rectangular frame B' and the fifth rectangular frame C' are mapped to the image to be detected so as to determine their positions in it, as shown in fig. 8; in the image to be detected, the vehicle area polygon formed by the common overlapping area of the first rectangular frame A, the third rectangular frame B' and the fifth rectangular frame C' is determined, as shown in fig. 10; the background area at each vertex of the vehicle area polygon is calculated and determined based on the vehicle area polygon; and the image content of the background area is then removed to obtain the vehicle area in the image to be detected.
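The "common overlapping area" of the three rectangular frames is the intersection of convex polygons. The patent does not prescribe an algorithm for it; one standard way to compute it is Sutherland-Hodgman clipping, sketched below for two convex polygons (apply it twice to intersect all three frames).

```python
# Sutherland-Hodgman clipping of convex polygon `subject` against convex
# polygon `clip`, both given as counterclockwise point lists. Offered as
# a sketch of how the overlap polygon could be computed.

def clip_polygon(subject, clip):
    def inside(p, a, b):
        # p lies left of (or on) the directed edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # intersection of segment pq with the infinite line through a, b
        x1, y1 = p; x2, y2 = q; x3, y3 = a; x4, y4 = b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j + 1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output
```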
Through this embodiment, the vehicle region polygon obtained by mapping provides the necessary precondition for subsequently and accurately separating the vehicle region from the background region, which greatly improves both detection efficiency and accuracy.
In a possible implementation manner, the step 104 of calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon specifically includes: calculating and determining an inscribed circle at each vertex of the vehicle region polygon based on the vehicle region polygon; determining a background area formed by each inscribed circle and the vehicle area polygon according to the inscribed circle at each vertex of the determined vehicle area polygon; removing the image content of the background area in the image to be detected; and converting each vertex angle of the polygon of the vehicle area into a fillet according to the radian of the inscribed circle to obtain the vehicle area in the image to be detected.
For example, as described above, in the image to be detected, the vehicle area polygon formed by the common overlapping area of the first rectangular frame A, the third rectangular frame B' and the fifth rectangular frame C' is determined. Taking the polygon vertex I formed where the first rectangular frame A and the third rectangular frame B' overlap as an example, as shown in fig. 9, points M and N are taken on the edges of the first rectangular frame A and the third rectangular frame B' respectively, such that the length of segment IM equals one sixth of the length of the corresponding edge of the first rectangular frame A, and the length of segment IN equals one sixth of the length of the corresponding edge of the third rectangular frame B'. Segment MN is then drawn, and a point O is taken on the perpendicular bisector l of segment MN such that OM = ON = MN; the inscribed circle at polygon vertex I is then drawn with O as its center and OM as its radius. The area enclosed by segments IM and IN and arc MN is the background area to be removed in the embodiment of the present invention. The redundant background area at each vertex of the polygon is removed in the same manner to obtain the final vehicle detection area, as shown in fig. 11.
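The construction of the circle at vertex I reduces to elementary geometry: with OM = ON = MN, triangle OMN is equilateral, so O sits at distance MN·√3/2 from the midpoint of MN along its perpendicular bisector, on the side away from I. A sketch under those assumptions:

```python
# Geometry sketch of the vertex circle: given vertex I and the points
# M, N already placed on its two edges, find center O and radius with
# OM = ON = MN, choosing O on the opposite side of MN from I.
import math

def fillet_circle(I, M, N):
    """Return (O, radius) for the circle described above."""
    mn = math.dist(M, N)
    mid = ((M[0] + N[0]) / 2, (M[1] + N[1]) / 2)
    # unit normal to MN
    nx, ny = N[1] - M[1], M[0] - N[0]
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    h = mn * math.sqrt(3) / 2  # equilateral-triangle height
    for sign in (1.0, -1.0):
        O = (mid[0] + sign * h * nx, mid[1] + sign * h * ny)
        # keep the candidate on the far side of line MN from vertex I
        side_O = (N[0] - M[0]) * (O[1] - M[1]) - (N[1] - M[1]) * (O[0] - M[0])
        side_I = (N[0] - M[0]) * (I[1] - M[1]) - (N[1] - M[1]) * (I[0] - M[0])
        if side_O * side_I < 0:
            return O, mn
    raise ValueError("degenerate configuration: I lies on line MN")
```

The background region at I is then bounded by IM, IN and the arc of this circle between M and N.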
Through the embodiment, the non-vehicle background area in the image to be detected can be removed, so that the vehicle area can be determined quickly and accurately, and the detection precision of the vehicle area is greatly improved.
The embodiment of the present invention provides a device for detecting a vehicle area in a parking space, which can implement the method embodiment provided above, and for specific function implementation, reference is made to the description in the method embodiment, which is not repeated herein.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. Additionally, any connection is properly termed a computer-readable medium; thus, software transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wirelessly, e.g., by infrared, radio, or microwave, is included. Disk and disc, as used herein, include compact discs, laser discs, optical discs, DVDs, floppy disks and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A method of detecting a vehicle region within a parking space, comprising:
acquiring a first video image of a preset parking area, training a preset network model based on the first video image, and obtaining a vehicle detection model;
detecting the image to be detected through the vehicle detection model, and determining whether a vehicle target exists in the image to be detected;
if the vehicle target exists, determining the position information of a first rectangular frame of a vehicle area in the image to be detected, and determining the position information of each sideline of the parking space where the vehicle target is located in the image to be detected;
calculating to obtain a vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each side line;
based on the position information of the first rectangular frame and the position information of each side line, calculating to obtain a vehicle area in the image to be detected, and the method comprises the following steps:
rotating an image to be detected by a first rotation angle along a first rotation direction to obtain a first rotation image, wherein a first side line or a second side line of a parking space in the first rotation image is in a horizontal position, and a long side of the parking space is sequentially a first side line and a second side line along the positive direction of a longitudinal axis of a plane coordinate;
detecting a first rotating image through the vehicle detection model to obtain a second rectangular frame;
rotating the second rectangular frame by the first rotation angle along the direction opposite to the first rotation direction to obtain a third rectangular frame;
rotating the image to be detected by a second rotation angle along a second rotation direction to obtain a second rotation image, wherein a third edge or a fourth edge of the parking space in the second rotation image is in a horizontal position, and the broadside of the parking space is a fourth side line and a third side line in sequence along the positive direction of the longitudinal axis of the plane coordinate;
detecting a second rotation image through the vehicle detection model to obtain a fourth rectangular frame;
and rotating the fourth rectangular frame by a second rotation angle along the direction opposite to the second rotation direction to obtain a fifth rectangular frame.
2. The method of claim 1, wherein the obtaining a first video image of a predetermined parking area, training a predetermined network model based on the first video image, and obtaining a vehicle detection model comprises:
labeling a vehicle rectangular frame label of each image frame in the first video image;
and training a predetermined network model through a gradient descent algorithm based on the labeled first video image and each vehicle rectangular frame label to obtain a vehicle detection model.
3. The method according to claim 1 or 2, wherein the step of determining the position information of each sideline of the parking space in which the vehicle target is located in the image to be detected is preceded by the steps of:
acquiring a second video image of the preset parking area, determining each parking space in the preset parking area according to the second video image, and determining the coordinates of each corner of each parking space;
determining position information of each sideline of each parking space according to the coordinates of each corner of each parking space;
determining the sidelines of the long sides of the parking spaces as a first sideline and a second sideline, determining the sidelines of the wide sides of the parking spaces as a third sideline and a fourth sideline, and sequentially using the first sideline as a starting point, the fourth sideline, the second sideline and the third sideline according to the anticlockwise direction;
and when the side line of the long side of the parking space is in a horizontal position in the image frame, sequentially determining the first side line and the second side line of the parking space in the image frame along the positive direction of the longitudinal axis of the plane coordinate; if not, sequentially determining the first side line and the second side line of the parking space in the image frame along the positive direction of the transverse axis of the plane coordinate.
4. The method according to claim 3, wherein the calculating a vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each edge line comprises:
mapping the third rectangular frame and the fifth rectangular frame to an image to be detected so as to determine the positions of the third rectangular frame and the fifth rectangular frame in the image to be detected;
determining a vehicle area polygon formed by a common overlapping area of the first rectangular frame, the third rectangular frame and the fifth rectangular frame in an image to be detected;
calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon;
and removing the image content of the background area to obtain a vehicle area in the image to be detected.
5. The method according to claim 4, wherein the calculating and determining the background area at each vertex of the vehicle area polygon based on the vehicle area polygon specifically comprises:
calculating and determining an inscribed circle at each vertex of the vehicle region polygon based on the vehicle region polygon;
determining a background area formed by each inscribed circle and the vehicle area polygon according to the inscribed circle at each vertex of the determined vehicle area polygon;
removing the image content of the background area in the image to be detected;
and converting each vertex angle of the polygon of the vehicle area into a fillet according to the radian of the inscribed circle to obtain the vehicle area in the image to be detected.
6. An apparatus for detecting a vehicle area within a parking space, comprising:
the system comprises a training module, a vehicle detection module and a vehicle monitoring module, wherein the training module is used for acquiring a first video image of a preset parking area, training a preset network model based on the first video image and obtaining a vehicle detection model;
the detection and determination module is used for detecting the image to be detected through the vehicle detection model and determining whether a vehicle target exists in the image to be detected;
the first determining module is used for determining the position information of a first rectangular frame of a vehicle area in the image to be detected and determining the position information of each sideline of a parking space where a vehicle target is located in the image to be detected if the first rectangular frame exists;
the calculation module is used for calculating and obtaining a vehicle area in the image to be detected based on the position information of the first rectangular frame and the position information of each side line, and the calculation module comprises:
the first rotating unit is used for rotating the image to be detected by a first rotating angle along a first rotating direction to obtain a first rotating image, wherein a first side line or a second side line of a parking space in the first rotating image is in a horizontal position, and the long side of the parking space is sequentially a first side line and a second side line along the positive direction of a longitudinal axis of a plane coordinate;
the first detection unit is used for detecting a first rotating image through the vehicle detection model to obtain a second rectangular frame;
the second rotating unit is used for rotating the second rectangular frame by the first rotating angle along the direction opposite to the first rotating direction to obtain a third rectangular frame;
the third rotating unit is used for rotating the image to be detected by a second rotating angle along a second rotating direction to obtain a second rotating image, wherein a third edge line or a fourth edge line of the parking space in the second rotating image is in a horizontal position, and the wide side of the parking space is a fourth edge line and a third edge line in sequence along the positive direction of the longitudinal axis of the plane coordinate;
the second detection unit is used for detecting a second rotation image through the vehicle detection model to obtain a fourth rectangular frame;
and the fourth rotating unit is used for rotating the fourth rectangular frame by a second rotating angle along the direction opposite to the second rotating direction to obtain a fifth rectangular frame.
7. The apparatus of claim 6, wherein the training module comprises:
the marking unit is used for marking a rectangular frame label of the vehicle of each image frame in the first video image;
and the training unit is used for training a predetermined network model through a gradient descent algorithm based on the labeled first video image and each vehicle rectangular frame label to obtain a vehicle detection model.
8. The apparatus of claim 6 or 7, comprising:
the acquisition and determination module is used for acquiring a second video image of the preset parking area, determining each parking space in the preset parking area according to the second video image and determining the coordinates of each corner of each parking space;
the second determining module is used for determining the position information of each sideline of each parking space according to the coordinates of each corner of each parking space;
determining the sidelines of the long sides of the parking spaces as a first sideline and a second sideline, determining the sidelines of the wide sides of the parking spaces as a third sideline and a fourth sideline, and sequentially using the first sideline as a starting point, the fourth sideline, the second sideline and the third sideline according to the anticlockwise direction;
and when the side line of the long side of the parking space is in a horizontal position in the image frame, sequentially determining the first side line and the second side line of the parking space in the image frame along the positive direction of the longitudinal axis of the plane coordinate; if not, sequentially determining the first side line and the second side line of the parking space in the image frame along the positive direction of the transverse axis of the plane coordinate.
9. The apparatus of claim 8, wherein the computing module comprises:
the mapping unit is used for mapping the third rectangular frame and the fifth rectangular frame to an image to be detected so as to determine the positions of the third rectangular frame and the fifth rectangular frame in the image to be detected;
a determining unit, configured to determine, in an image to be detected, a vehicle region polygon formed by a common overlapping region of the first rectangular frame, the third rectangular frame, and the fifth rectangular frame;
the calculating unit is used for calculating and determining a background area at each vertex of the vehicle area polygon based on the vehicle area polygon;
and the removing unit is used for removing the image content of the background area to obtain the vehicle area in the image to be detected.
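The common overlapping region of the rectangular frames in claim 9 is an intersection of convex polygons, which can be computed by clipping one polygon against another. The following is a minimal Sutherland-Hodgman sketch, not the patent's implementation; chaining it over the first, third and fifth rectangular frames yields the vehicle region polygon:

```python
def clip(subject, clip_poly):
    """Sutherland-Hodgman: clip `subject` against convex `clip_poly`
    (both lists of (x, y) vertices in counterclockwise order)."""
    def inside(p, a, b):
        # p lies left of (or on) the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p1, p2, a, b):
        # intersection of segment p1->p2 with the infinite line through a, b
        x1, y1 = p1; x2, y2 = p2; x3, y3 = a; x4, y4 = b
        denom = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / denom
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))

    output = subject
    for i in range(len(clip_poly)):
        a, b = clip_poly[i], clip_poly[(i + 1) % len(clip_poly)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j + 1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

def polygon_area(poly):
    """Shoelace area of a simple polygon."""
    n = len(poly)
    s = sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
            for i in range(n))
    return abs(s) / 2
```

Usage for three frames would be `clip(clip(frame1, frame3), frame5)`, since intersection of convex polygons is associative.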
10. The apparatus of claim 9, wherein the calculation unit is specifically configured to:
calculate and determine an inscribed circle at each vertex of the vehicle region polygon based on the vehicle region polygon;
determine the background area formed between each inscribed circle and the vehicle area polygon according to the inscribed circle determined at each vertex;
remove the image content of the background area in the image to be detected;
and round each vertex angle of the vehicle area polygon into a fillet according to the arc of its inscribed circle to obtain the vehicle area in the image to be detected.
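The inscribed circle at a polygon vertex in claim 10 is the circle tangent to the two edges meeting there. A sketch of computing its center and tangent points for a chosen radius follows; the radius choice and the function name are assumptions for illustration:

```python
import math

def corner_fillet(a, b, c, r):
    """Circle of radius r inscribed in the corner at vertex b (tangent to
    edges b->a and b->c); returns (center, tangent_on_ba, tangent_on_bc)."""
    def unit(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        n = math.hypot(dx, dy)
        return (dx / n, dy / n)

    u, v = unit(b, a), unit(b, c)
    # interior angle at the vertex
    cos_t = max(-1.0, min(1.0, u[0]*v[0] + u[1]*v[1]))
    theta = math.acos(cos_t)
    d = r / math.tan(theta / 2)          # tangent-point distance along each edge
    bis = (u[0] + v[0], u[1] + v[1])     # angle-bisector direction
    n = math.hypot(*bis)
    k = r / math.sin(theta / 2)          # center distance along the bisector
    center = (b[0] + bis[0] / n * k, b[1] + bis[1] / n * k)
    t1 = (b[0] + u[0] * d, b[1] + u[1] * d)
    t2 = (b[0] + v[0] * d, b[1] + v[1] * d)
    return center, t1, t2
```

Replacing the polygon boundary between the two tangent points with the circle's arc rounds that vertex into a fillet, as the claim describes.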
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910811564.6A CN110688902B (en) | 2019-08-30 | 2019-08-30 | Method and device for detecting vehicle area in parking space |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110688902A CN110688902A (en) | 2020-01-14 |
CN110688902B true CN110688902B (en) | 2022-02-11 |
Family
ID=69107684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910811564.6A Active CN110688902B (en) | 2019-08-30 | 2019-08-30 | Method and device for detecting vehicle area in parking space |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110688902B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111709286B (en) * | 2020-05-14 | 2023-10-17 | 深圳市金溢科技股份有限公司 | Vehicle sorting and ETC transaction method, storage medium, industrial personal computer equipment and ETC system |
CN112800873A (en) * | 2021-01-14 | 2021-05-14 | 知行汽车科技(苏州)有限公司 | Method, device and system for determining target direction angle and storage medium |
CN113706608B (en) * | 2021-08-20 | 2023-11-28 | 云往(上海)智能科技有限公司 | Pose detection device and method of target object in preset area and electronic equipment |
CN114255584B (en) * | 2021-12-20 | 2023-04-07 | 济南博观智能科技有限公司 | Positioning method and system for parking vehicle, storage medium and electronic equipment |
CN115050005B (en) * | 2022-06-17 | 2024-04-05 | 北京精英路通科技有限公司 | Target detection method and detection device for high-level video intelligent parking scene |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105913685A (en) * | 2016-06-25 | 2016-08-31 | 上海大学 | Video surveillance-based carport recognition and intelligent guide method |
CN108256554A (en) * | 2017-12-20 | 2018-07-06 | 深圳市金溢科技股份有限公司 | Vehicle reverse stopping judgment method, server and system based on deep learning |
CN108805184A (en) * | 2018-05-28 | 2018-11-13 | 广州英卓电子科技有限公司 | A kind of fixed space, image-recognizing method and system on vehicle |
CN109800696A (en) * | 2019-01-09 | 2019-05-24 | 深圳中兴网信科技有限公司 | Monitoring method, system and the computer readable storage medium of target vehicle |
CN110097776A (en) * | 2018-01-30 | 2019-08-06 | 杭州海康威视数字技术股份有限公司 | A kind of method for detecting parking stalls, monitor camera and monitor terminal |
CN110163107A (en) * | 2019-04-22 | 2019-08-23 | 智慧互通科技有限公司 | A kind of method and device based on video frame identification Roadside Parking behavior |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9171213B2 (en) * | 2013-03-15 | 2015-10-27 | Xerox Corporation | Two-dimensional and three-dimensional sliding window-based methods and systems for detecting vehicles |
Also Published As
Publication number | Publication date |
---|---|
CN110688902A (en) | 2020-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110688902B (en) | Method and device for detecting vehicle area in parking space | |
CN110827247B (en) | Label identification method and device | |
Zhang et al. | Semi-automatic road tracking by template matching and distance transformation in urban areas | |
CN111931627A (en) | Vehicle re-identification method and device based on multi-mode information fusion | |
CN110619279B (en) | Road traffic sign instance segmentation method based on tracking | |
CN109087510A (en) | traffic monitoring method and device | |
CN110659601B (en) | Depth full convolution network remote sensing image dense vehicle detection method based on central point | |
CN114049356B (en) | Method, device and system for detecting structure apparent crack | |
CN110647886A (en) | Interest point marking method and device, computer equipment and storage medium | |
WO2023231991A1 (en) | Traffic signal lamp sensing method and apparatus, and device and storage medium | |
CN112836699A (en) | Long-time multi-target tracking-based berth entrance and exit event analysis method | |
CN112634368A (en) | Method and device for generating space and OR graph model of scene target and electronic equipment | |
CN112465854A (en) | Unmanned aerial vehicle tracking method based on anchor-free detection algorithm | |
CN114677501A (en) | License plate detection method based on two-dimensional Gaussian bounding box overlapping degree measurement | |
CN112907626A (en) | Moving object extraction method based on satellite time-exceeding phase data multi-source information | |
CN112699711B (en) | Lane line detection method and device, storage medium and electronic equipment | |
CN112329886A (en) | Double-license plate recognition method, model training method, device, equipment and storage medium | |
Zhang et al. | Image-based approach for parking-spot detection with occlusion handling | |
CN114332814A (en) | Parking frame identification method and device, electronic equipment and storage medium | |
CN112597996B (en) | Method for detecting traffic sign significance in natural scene based on task driving | |
CN113392837A (en) | License plate recognition method and device based on deep learning | |
CN117612128A (en) | Lane line generation method, device, computer equipment and storage medium | |
CN118230279A (en) | Traffic signal lamp identification method, equipment, medium and vehicle based on twin network | |
CN117789160A (en) | Multi-mode fusion target detection method and system based on cluster optimization | |
CN117152949A (en) | Traffic event identification method and system based on unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
CB02 | Change of applicant information |
Address after: 075000 building 10, phase I, Zhangjiakou Airport Economic and Technological Development Zone, Zhangjiakou City, Hebei Province
Applicant after: Smart intercommunication Technology Co.,Ltd.
Address before: 075000 building 10, phase I, Zhangjiakou Airport Economic and Technological Development Zone, Zhangjiakou City, Hebei Province
Applicant before: INTELLIGENT INTERCONNECTION TECHNOLOGIES Co.,Ltd.
GR01 | Patent grant | ||
GR01 | Patent grant |