CN110163107B - Method and device for recognizing roadside parking behavior based on video frames - Google Patents
- Publication number
- CN110163107B (application CN201910323229.1A)
- Authority
- CN
- China
- Prior art keywords
- video frame
- vehicle
- video
- determining
- parking space
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/14—Traffic control systems for road vehicles indicating individual free spaces in parking areas
- G08G1/145—Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
- G08G1/148—Management of a network of parking areas
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention provides a method and a device for identifying roadside parking behavior based on video frames. The method comprises: acquiring a plurality of consecutive video frames captured by a video device; drawing a parking space area in a first video frame and determining the coordinate information of the parking space in that area, where the first video frame is any captured video frame; determining a second video frame, namely the next video frame adjacent to the first video frame, and performing a difference calculation on the first and second video frames based on the drawn parking space area; judging whether the calculation result satisfies a predetermined detection rule and, if so, detecting the vehicle positions in the first and second video frames through a vehicle training model based on the coordinate information of the parking space; and determining the roadside parking behavior of the vehicle based on the detection result. With this method and device, automatic management of roadside parking can be accomplished without recognizing license plate information.
Description
Technical Field
The invention relates to the technical field of intelligent parking management, and in particular to a method and a device for recognizing roadside parking behavior based on video frames.
Background
Roadside parking management is parking management that uses the spaces on both sides of a trafficked road. With the rapid development of urban economies and the continuous rise in people's living standards, the number of motor vehicles in cities has grown rapidly, and for various historical and practical reasons most cities face a shortage, or even a severe shortage, of parking spaces. Roadside parking management has therefore become an important part of urban parking management and is receiving wide attention from governments and the public. However, because roadside parking is open parking, managing it presents many difficulties: on one hand, management by manual patrol is inefficient and costly; on the other hand, although some lots have installed devices such as geomagnetic sensors and electronic timers, their effect is not ideal, with problems such as difficult construction, troublesome operation, and strong sensitivity to environmental conditions. The related industries have therefore focused their attention on high-level (high-mounted) video as a means of roadside parking management.
Although roadside parking management based on high-level video does have advantages (the equipment is not easily damaged once installed, the captured video is comprehensive and clear, and no on-site manual operation is needed), entry and exit events and license plate information still have to be recorded by manually reviewing the video. How to automatically capture vehicle parking behaviors such as entry and exit has therefore become a pressing problem for many practitioners.
Disclosure of Invention
An embodiment of the invention provides a method and a device for identifying roadside parking behavior based on video frames, which automatically identify the roadside parking behavior of a vehicle from video information and can accomplish automatic management of roadside parking without identifying license plate information.
In one aspect, an embodiment of the present invention provides a method for identifying roadside parking behavior based on video frames, including:
acquiring a plurality of continuous video frames acquired by video equipment, wherein the video equipment is used for acquiring image information of a roadside parking area;
drawing a parking space area in a first video frame, and determining coordinate information of a parking space in the parking space area, wherein the first video frame is any collected video frame;
determining a second video frame, and performing differential calculation on the first video frame and the second video frame based on the drawn parking space area, wherein the second video frame is a next video frame adjacent to the first video frame;
judging whether the calculation result meets a preset detection rule, and if so, detecting the vehicle positions in the first video frame and the second video frame through a vehicle training model based on the coordinate information of the parking space;
and determining roadside parking behavior of the vehicle based on the detection result.
In another aspect, an embodiment of the present invention provides a device for identifying roadside parking behavior based on video frames, including:
a first acquisition module, configured to acquire a plurality of consecutive video frames captured by a video device, wherein the video device is used for acquiring image information of a roadside parking area;
a drawing module, configured to draw a parking space area in a first video frame and determine the coordinate information of a parking space in the parking space area, wherein the first video frame is any collected video frame;
the differential calculation module is used for determining a second video frame, and performing differential calculation on the first video frame and the second video frame based on the drawn parking space area, wherein the second video frame is a next video frame adjacent to the first video frame;
the detection module is used for judging whether the calculation result meets a preset detection rule, and if so, detecting the vehicle positions in the first video frame and the second video frame through a vehicle training model based on the coordinate information of the parking space;
and the determining module is used for determining the roadside parking behavior of the vehicle based on the detection result.
The above technical solution has the following beneficial effects: based on the drawn parking space coordinate information and parking space area, each video frame captured by the video device can be analyzed and judged accurately and efficiently, and the roadside parking behavior of a vehicle in the frames is identified automatically from the detection result. Automatic management of roadside parking is thus accomplished without identifying license plate information, providing important technical support for improving the efficiency of urban traffic and parking management; further, the efficiency of roadside parking management is greatly improved, its cost is reduced, and the user experience is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for identifying roadside parking behavior based on video frames in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a high-level video device according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a parking space area in any video frame according to a preferred embodiment of the present invention;
FIG. 4 is a diagram illustrating the division of a first unit block in any video frame according to a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of a vehicle entering and exiting a parking space in the video frames according to a preferred embodiment of the present invention;
FIG. 6 is a schematic illustration of the movement of a vehicle within a parking area in accordance with a preferred embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an apparatus for identifying roadside parking behavior based on video frames in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, the method for identifying roadside parking behavior based on video frames in an embodiment of the present invention includes:
101. acquiring a plurality of continuous video frames acquired by video equipment, wherein the video equipment is used for acquiring image information of a roadside parking area;
102. drawing a parking space area in a first video frame, and determining coordinate information of a parking space in the parking space area, wherein the first video frame is any collected video frame;
103. determining a second video frame, and performing differential calculation on the first video frame and the second video frame based on the drawn parking space area, wherein the second video frame is a next video frame adjacent to the first video frame;
104. judging whether the calculation result meets a preset detection rule, and if so, detecting the vehicle positions in the first video frame and the second video frame through a vehicle training model based on the coordinate information of the parking space;
105. and determining roadside parking behavior of the vehicle based on the detection result.
Further, the determining a second video frame and performing a difference calculation on the first and second video frames based on the drawn parking space area specifically includes:
dividing the images of the first and second video frames into a plurality of first unit blocks of a predetermined size, wherein each first unit block occupies the same position in either video frame;
determining, based on the drawn parking space area, each first unit block that overlaps the parking space area as a second unit block;
calculating the pixel average value of each second unit block;
matching the second unit blocks one-to-one by identical position in the first and second video frames;
and calculating the difference of the pixel average values of each matched pair of second unit blocks.
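The block-difference steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the grayscale-array layout, function name, and parameters are assumptions, and partial blocks at the image border are simply skipped.

```python
import numpy as np

def block_diffs(frame_a, frame_b, block_mask, a=16):
    """Per-block mean-pixel differences between two frames.

    frame_a, frame_b: 2-D grayscale arrays of identical shape.
    block_mask: one boolean per block (row-major order), True where
                the block overlaps the drawn parking-space area.
    a: side length of each square unit block in pixels.
    """
    h, w = frame_a.shape
    diffs = []
    idx = 0
    # Cut both frames into a-by-a blocks; blocks occupy the same
    # positions in either frame, so pairs match up by index.
    for y in range(0, h - h % a, a):
        for x in range(0, w - w % a, a):
            if block_mask[idx]:
                mean_a = frame_a[y:y + a, x:x + a].mean()
                mean_b = frame_b[y:y + a, x:x + a].mean()
                diffs.append(abs(mean_a - mean_b))
            idx += 1
    return diffs
```

Only the masked ("second") unit blocks contribute a difference value, matching the text's rule of discarding blocks outside the parking-space area.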
Further, the predetermined detection rule includes:
judging whether each calculated difference value is greater than a preset difference threshold;
counting the number of difference values greater than the preset difference threshold, and judging whether that number is greater than a preset number threshold;
wherein the judging whether the calculation result meets the predetermined detection rule specifically includes:
if the number is greater than the preset number threshold, determining that the calculation result meets the predetermined detection rule;
and if the number is not greater than the preset number threshold, determining that the calculation result does not meet the predetermined detection rule.
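The rule above reduces to a count-and-compare; a minimal sketch (the function and parameter names are assumptions, and a count exactly equal to the threshold is treated here as not satisfying the rule):

```python
def meets_detection_rule(diffs, diff_threshold, count_threshold):
    # Count blocks whose mean-pixel difference exceeds the threshold;
    # the rule is met when strictly more blocks changed than allowed.
    changed = sum(1 for e in diffs if e > diff_threshold)
    return changed > count_threshold
```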
Further, if it is determined that the calculation result does not meet the predetermined detection rule, the method includes:
step a, taking the second video frame that does not meet the predetermined detection rule as the re-determined first video frame, and re-determining a second video frame, wherein the re-determined second video frame is the next video frame adjacent to the re-determined first video frame;
performing the difference calculation on the re-determined first and second video frames, and judging whether the calculation result meets the predetermined detection rule;
and if not, jumping back to step a until the calculation result meets the predetermined detection rule.
Further, before the step of detecting the vehicle positions in the first video frame and the second video frame through the vehicle training model based on the coordinate information of the parking space, the method includes:
acquiring a plurality of collected vehicle image samples in the roadside parking area;
and labeling the plurality of vehicle image samples and training on them through a deep-learning method based on a convolutional neural network to obtain the vehicle training model.
Further, after the step of detecting the vehicle positions in the first video frame and the second video frame through the vehicle training model based on the coordinate information of the parking space, the method includes:
comparing the coordinates of the vehicle position and of the parking space position in the detection result to obtain the centroid of the vehicle's rectangular area;
and identifying the vehicles whose centroid lies within a parking space, and recording the information of those vehicles into the detection result.
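The centroid test can be sketched as below. The ray-casting polygon test is one common way to decide whether the centroid lies inside the drawn parking-space polygon; the patent does not specify the exact test, and all names here are illustrative assumptions.

```python
def centroid(box):
    # box = (x1, y1, x2, y2), the detected vehicle rectangle.
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def point_in_polygon(pt, poly):
    # Ray casting: count edge crossings of a horizontal ray from pt;
    # an odd count means pt is inside the polygon.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A vehicle is then recorded as "in the space" exactly when `point_in_polygon(centroid(box), space_polygon)` is true.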
Further, the determining the roadside parking behavior of the vehicle based on the detection result includes:
determining, based on the detection result, whether the centroid situations of the vehicle in the first video frame and the second video frame are consistent;
if they are consistent, determining that the vehicle has no roadside parking behavior;
if they are not consistent, determining that the vehicle in the first and second video frames has roadside parking behavior;
wherein the centroid situation of the vehicle is either the centroid lying within the parking space or the centroid not lying within the parking space.
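The comparison above can be sketched as a small state check. Distinguishing entry from exit by the direction of the change is a natural reading of the text (the figures show both cases) but is an assumption, as are the names used here.

```python
def classify_event(in_space_prev, in_space_next):
    """Compare the centroid situation across two adjacent frames.

    in_space_prev / in_space_next: True if the vehicle centroid lies
    within the parking space in the earlier / later frame.
    Returns None when the situations are consistent (no parking
    behavior), otherwise "entry" or "exit".
    """
    if in_space_prev == in_space_next:
        return None
    return "entry" if in_space_next else "exit"
```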
Optionally, after the step of determining that the vehicle in the first and second video frames has no roadside parking behavior, the method further includes:
jumping back to step a until the calculation of every video frame is completed.
Optionally, after the step of determining that the vehicle in the first and second video frames has roadside parking behavior, the method further includes:
step m, taking the second video frame in which the roadside parking behavior occurs as the re-determined first video frame, and re-determining a second video frame, wherein the re-determined second video frame is the next video frame adjacent to the re-determined first video frame;
detecting the vehicle positions in the re-determined first and second video frames through the vehicle training model;
and, based on the detection result, if the centroid situation of the vehicle in the re-determined first video frame is inconsistent with that in the re-determined second video frame, jumping back to step m, until the two centroid situations are consistent.
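Step m amounts to advancing through frame pairs until the centroid situation stabilizes. A minimal sketch, with an assumed `detect_in_space` callback standing in for the model-based detection:

```python
def track_until_settled(frames, detect_in_space):
    """Advance frame pairs while the centroid situation keeps changing.

    frames: the sampled video frames, in order.
    detect_in_space: callable returning True if the vehicle centroid
                     is within the parking space in a given frame.
    Returns the index of the first frame of the settled pair.
    """
    i = 0
    while i + 1 < len(frames):
        prev = detect_in_space(frames[i])
        nxt = detect_in_space(frames[i + 1])
        if prev == nxt:   # situations consistent: motion has settled
            break
        i += 1            # still changing: shift the pair forward
    return i
```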
Fig. 7 shows a schematic structural diagram of an apparatus for identifying roadside parking behavior based on video frames; the apparatus includes:
a first acquisition module 71, configured to acquire a plurality of consecutive video frames captured by a video device, where the video device is used to capture image information of a roadside parking area;
a drawing module 72, configured to draw a parking space area in a first video frame and determine the coordinate information of a parking space in that area, where the first video frame is any captured video frame;
a difference calculation module 73, configured to determine a second video frame and perform the difference calculation on the first and second video frames based on the drawn parking space area, where the second video frame is the next video frame adjacent to the first video frame;
a detection module 74, configured to judge whether the calculation result meets the predetermined detection rule and, if so, detect the vehicle positions in the first and second video frames through the vehicle training model based on the coordinate information of the parking space;
and a determining module 75, configured to determine the roadside parking behavior of the vehicle based on the detection result.
Further, the difference calculation module is specifically configured to:
divide the images of the first and second video frames into a plurality of first unit blocks of a predetermined size, where each first unit block occupies the same position in either video frame;
determine, based on the drawn parking space area, each first unit block that overlaps the parking space area as a second unit block;
calculate the pixel average value of each second unit block;
match the second unit blocks one-to-one by identical position in the first and second video frames;
and calculate the difference of the pixel average values of each matched pair of second unit blocks.
Further, the predetermined detection rule includes:
judging whether each calculated difference value is greater than a preset difference threshold;
counting the number of difference values greater than the preset difference threshold, and judging whether that number is greater than a preset number threshold;
wherein the detection module is specifically configured to:
determine that the calculation result meets the predetermined detection rule if the number is greater than the preset number threshold;
and determine that the calculation result does not meet the predetermined detection rule if the number is not greater than the preset number threshold.
Further, if the detection module determines that the calculation result does not meet the predetermined detection rule, the device includes:
a first re-determination unit, configured to take the second video frame that does not meet the predetermined detection rule as the re-determined first video frame and re-determine a second video frame, where the re-determined second video frame is the next video frame adjacent to the re-determined first video frame;
a difference calculation unit, configured to perform the difference calculation on the re-determined first and second video frames and judge whether the calculation result meets the predetermined detection rule;
and a first jump unit, configured to jump back to the first re-determination unit if the calculation result does not meet the predetermined detection rule.
Further, the device includes:
a second acquisition module, configured to acquire a plurality of vehicle image samples collected in the roadside parking area;
and a training module, configured to label the plurality of vehicle image samples and train on them through a deep-learning method based on a convolutional neural network to obtain the vehicle training model.
Further, the device includes:
a comparison module, configured to compare the coordinates of the vehicle position and of the parking space position in the detection result to obtain the centroid of the vehicle's rectangular area;
and a calculation module, configured to identify the vehicles whose centroid lies within a parking space and record the information of those vehicles into the detection result.
Further, the determining module includes:
a first determination unit, configured to determine, based on the detection result, whether the centroid situations of the vehicle in the first video frame and the second video frame are consistent;
a second determination unit, configured to determine that the vehicle in the first and second video frames has no roadside parking behavior if they are consistent;
a third determination unit, configured to determine that the vehicle in the first and second video frames has roadside parking behavior if they are not consistent;
wherein the centroid situation of the vehicle is either the centroid lying within the parking space or the centroid not lying within the parking space.
Optionally, the second determination unit is further configured to jump back to the first re-determination unit until the calculation of every video frame is completed.
Optionally, the third determination unit further includes:
a second re-determination unit, configured to take the second video frame in which the roadside parking behavior occurs as the re-determined first video frame and re-determine a second video frame, where the re-determined second video frame is the next video frame adjacent to the re-determined first video frame;
a detection unit, configured to detect the vehicle positions in the re-determined first and second video frames through the vehicle training model;
and a second jump unit, configured to jump back to the second re-determination unit if, based on the detection result, the centroid situation of the vehicle in the re-determined first video frame is inconsistent with that in the re-determined second video frame, until the two centroid situations are consistent.
The technical solution of the embodiments of the present invention has the following beneficial effects: based on the drawn parking space coordinate information and parking space area, each video frame captured by the video device can be analyzed and judged accurately and efficiently, and the roadside parking behavior of a vehicle in the frames is identified automatically from the detection result. Automatic management of roadside parking is thus accomplished without identifying license plate information, providing important technical support for improving the efficiency of urban traffic and parking management; further, the efficiency of roadside parking management is greatly improved, its cost is reduced, and the user experience is improved.
The above technical solutions of the embodiments of the present invention are described in detail below with reference to application examples:
the application example of the invention aims to automatically identify the roadside parking behavior of the vehicle through the video information, and realizes the automatic management of roadside parking without identifying license plate information.
For example, on a road section where image information of a roadside parking area is captured by high-level video, a parking management system A acquires a plurality of consecutive video frames captured by the video device; determines any captured frame, say the i-th frame, as the first video frame, draws the parking space area in it, and determines the coordinate information of the parking space in that area; determines the next adjacent frame, the (i+1)-th frame, as the second video frame and performs the difference calculation on the i-th and (i+1)-th frames based on the drawn parking space area; and judges whether the calculation result meets the predetermined detection rule, and if so, detects the vehicle positions in the i-th and (i+1)-th frames through the vehicle training model based on the coordinate information of the parking space.
It should be noted that fig. 2 is a schematic diagram of high-level video capture in this embodiment. Existing cameras generally capture dozens of frames per second, which is too dense: on one hand, transmitting every image to the back end for processing cannot meet real-time performance requirements; on the other hand, over such a short interval the change between adjacent frames is negligible. Because the embodiment needs to detect changes of vehicles in the parking spaces between frames, the continuous video stream is sampled, and all video frames detected in the embodiment are sampled frames. The "consecutive video frames" in the embodiment may be frames taken at a predetermined time interval, for example every 5 seconds.
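The sampling described above can be sketched as a simple stride over the dense capture stream; the function name and the assumption that the camera frame rate is known are illustrative, not taken from the patent.

```python
def sample_frames(frames, fps, interval_s=5):
    # Keep one frame every `interval_s` seconds from a dense capture
    # stream, so adjacent sampled frames can show a real change.
    step = max(1, int(fps * interval_s))
    return frames[::step]
```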
In a possible implementation, the determining a second video frame in step 103 and performing the difference calculation on the first and second video frames based on the drawn parking space area specifically includes: dividing the images of the first and second video frames into a plurality of first unit blocks of a predetermined size, where each first unit block occupies the same position in either video frame; determining, based on the drawn parking space area, each first unit block that overlaps the parking space area as a second unit block; calculating the pixel average value of each second unit block; matching the second unit blocks one-to-one by identical position in the first and second video frames; and calculating the difference of the pixel average values of each matched pair of second unit blocks.
Wherein the predetermined detection rule comprises: judging whether each calculated difference value is greater than a preset difference threshold; counting the number of difference values greater than the preset difference threshold, and judging whether that number is greater than a preset number threshold.
Wherein the judging whether the calculation result meets the predetermined detection rule specifically includes: if the number is greater than the preset number threshold, determining that the calculation result meets the predetermined detection rule; and if the number is not greater than the preset number threshold, determining that it does not.
For example, in parking management system A, if the first video frame is the i-th frame, the second video frame is determined to be the (i+1)-th frame. The images of the first and second video frames are then divided into a plurality of first unit blocks of a predetermined size. For example, each captured frame in the embodiment measures 1920 px (width) by 1080 px (height), where px denotes pixels. Starting from the upper-left corner point (0, 0) of the i-th and (i+1)-th frames, lines are drawn along the horizontal and vertical directions at a spacing of a, cutting the whole image of each frame into small square blocks of side length a, where a is generally 16 px; each first unit block occupies the same position in either video frame. Based on the drawn parking space area, each first unit block that overlaps it is determined as a second unit block; fig. 3 shows the drawn parking space area in a video frame. As shown in fig. 4, whether each unit block overlaps the parking space area drawn in fig. 3 (that is, whether it lies on a parking space) is calculated, and only the unit blocks having an overlapping area with the parking space area are retained; if n unit blocks lie on the parking space area, the number of second unit blocks is n. The pixel average value of each second unit block is calculated by the following formula one:
m_k = (1 / a^2) * Σ_{(i,j) ∈ S_k} m_(i,j)    (formula one)
wherein a is the side length of a small block, m_(i,j) is the pixel value at point (i, j), and S_k is the region of the k-th small block;
then the one-to-one corresponding second unit blocks, at the same positions in the first video frame (currently the i-th frame) and the second video frame (currently the (i+1)-th frame), are matched, and the difference of the pixel average values of each matched pair is calculated by the following formula two:
E_k = | m_k^(i+1) - m_k^(i) |    (formula two)
wherein E_k is the difference of the mean values of the corresponding k-th small blocks in two adjacent frames, and m_k^(i) is the average pixel value of the k-th small block on the i-th frame.
Then, according to the calculation result, it is judged whether each calculated difference value is larger than a predetermined difference threshold; the number of difference values larger than the predetermined difference threshold is counted, and it is judged whether this number is larger than a predetermined number threshold. If so, it is determined that the calculation result satisfies the predetermined detection rule; otherwise, it is determined that the calculation result does not satisfy the predetermined detection rule.
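The block-wise differencing and thresholding described above can be sketched as follows. This is a minimal illustration only, assuming grayscale frames as NumPy arrays and a boolean mask marking which blocks overlap the drawn parking area; the names `block_means` and `frames_differ` are illustrative and do not come from the patent:

```python
import numpy as np

def block_means(frame, a=16):
    """Average pixel value of every a-by-a block (formula one)."""
    h, w = frame.shape
    # Trim to a multiple of the block size, then reshape into blocks.
    blocks = frame[:h - h % a, :w - w % a].reshape(h // a, a, w // a, a)
    return blocks.mean(axis=(1, 3))          # shape: (h // a, w // a)

def frames_differ(frame_i, frame_j, in_parking, a=16,
                  diff_threshold=10.0, count_threshold=3):
    """Predetermined detection rule: count the blocks inside the parking
    area whose mean-value difference (formula two) exceeds the difference
    threshold, and compare that count against the number threshold."""
    e = np.abs(block_means(frame_i, a) - block_means(frame_j, a))
    return int((e[in_parking] > diff_threshold).sum()) > count_threshold
```

With two identical frames the count of differing blocks is zero, so the rule is not satisfied; a frame pair where at least one reserved block changes by more than the difference threshold can satisfy it, depending on the chosen number threshold.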
Through this embodiment, the vehicle information in the parking area in two consecutive adjacent video frames can be determined rapidly and accurately, and whether a vehicle has exhibited roadside parking behavior in those two frames can be determined accurately through difference calculation, greatly improving the efficiency of roadside parking behavior detection.
In a possible embodiment, if it is determined that the calculation result does not satisfy the predetermined detection rule, the method includes: step a, taking a second video frame which does not meet a preset detection rule as a redetermined first video frame, and redetermining the second video frame, wherein the redetermined second video frame is a next video frame adjacent to the redetermined first video frame; carrying out differential calculation on the redetermined first video frame and the redetermined second video frame, and judging whether the calculation result meets a preset detection rule or not; and if not, skipping to execute the step a until the calculation result meets the preset detection rule.
For example, in the parking management system a, the parking area in the video frame is drawn in advance. Any frame image in the consecutive video frames is selected, and the parking space area in that frame image is drawn: with a certain vertex (x0, y0) of the parking space as the starting point, a polygon is drawn along the boundary of the parking space, recording each vertex (x1, y1), (x2, y2), (x3, y3) in turn until a closed polygon is formed; this closed polygon is the drawn parking area. The video device is used for acquiring image information of the roadside parking area, and a plurality of continuous video frames collected by the video device are obtained. The collected ith video frame is determined as the first video frame, the parking space area is drawn in it, and the coordinate information of the parking spaces in the parking space area is determined. The (i+1)th video frame is determined as the second video frame, and differential calculation is performed on the ith and (i+1)th video frames based on the drawn parking space area. It is then judged whether the calculation result satisfies the predetermined detection rule; if not, step a is executed: the second video frame that does not satisfy the predetermined detection rule, i.e. the current (i+1)th video frame, is taken as the re-determined first video frame, and a new second video frame is determined, namely the next video frame adjacent to the re-determined first video frame, i.e. the (i+2)th video frame. Differential calculation is performed on the re-determined first video frame (the current (i+1)th video frame) and the re-determined second video frame (the current (i+2)th video frame), and it is judged whether the calculation result satisfies the predetermined detection rule; if not, step a is executed again, until the calculation result satisfies the predetermined detection rule.
Through this embodiment, the vehicle information in the parking area can be determined rapidly and accurately across two or more adjacent video frames, and, through differential calculation, whether a vehicle has exhibited roadside parking behavior in these frames can be determined accurately; this not only greatly improves the efficiency of roadside parking behavior detection but also further improves its accuracy.
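The frame-advancing loop of step a can be sketched as follows. This is an illustrative outline only; `differs` stands for the differential calculation plus detection rule described above and is assumed here, not defined by the patent:

```python
def find_change(frames, differs):
    """Scan consecutive frame pairs; return the index i of the first
    pair (i, i+1) whose differential calculation satisfies the
    predetermined detection rule, or None if no pair does."""
    i = 0
    while i + 1 < len(frames):
        if differs(frames[i], frames[i + 1]):   # detection rule satisfied
            return i
        i += 1                                   # step a: advance by one frame
    return None
```

The loop embodies the re-determination: each second video frame that fails the rule becomes the new first video frame, and its adjacent successor becomes the new second video frame.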
In a possible embodiment, before the step of detecting the vehicle positions in the first video frame and the second video frame by the vehicle training model based on the coordinate information of the parking space, the method includes: acquiring a plurality of collected vehicle image samples in the roadside parking area; and labeling and training the plurality of vehicle image samples by a deep learning method based on a convolutional neural network to obtain a vehicle training model.
For example, in parking management system a, a plurality of vehicle image samples in a roadside parking area captured by a high-level video device are pre-acquired; and marking and training the plurality of vehicle image samples by a deep learning method based on a convolutional neural network to obtain a vehicle training model.
It should be noted that, as can be understood by those skilled in the art, a Convolutional Neural Network (CNN) is a kind of feedforward neural network that includes convolution calculation and has a deep structure, and is one of the representative algorithms of deep learning. In this embodiment, the specific steps of labeling and training the plurality of vehicle image samples by the convolutional-neural-network-based deep learning method are not repeated.
In a possible embodiment, after the step of detecting the vehicle positions in the first video frame and the second video frame by the vehicle training model based on the coordinate information of the parking space, the method includes: comparing the coordinates of the vehicle positions and the parking space positions in the detection result to obtain the centroid of each vehicle's rectangular area; and computing the vehicles whose centroid is within a parking space, and recording the information of those vehicles into the detection result.
For example, in the parking management system a, the vehicle training model is used to detect the vehicle positions in the first video frame (currently the ith video frame) and the second video frame (currently the (i+1)th video frame) to obtain a detection result; the vehicle positions in the detection result are then compared with the parking space positions to obtain the centroid of each vehicle's rectangular area, i.e. the central point of the rectangle; the vehicles whose centroid lies within a parking space are computed, and the information of those vehicles is recorded into the detection result.
Through this embodiment, parking behaviors such as a vehicle entering or leaving the parking area can be judged from the increase or decrease of vehicles in the parking spaces across video frames, without recognizing license plate information, which greatly improves the efficiency of roadside parking behavior detection.
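The centroid-in-parking-space check can be sketched as follows, using the centroid of the detected rectangular area and a standard ray-casting point-in-polygon test against the closed polygon drawn along the parking-space boundary. The helper names are illustrative assumptions, not from the patent:

```python
def centroid(box):
    """Central point of a vehicle's rectangular detection area,
    given as (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def in_polygon(point, polygon):
    """Ray-casting test: is the point inside the closed polygon
    drawn along the parking-space boundary?"""
    x, y = point
    inside = False
    n = len(polygon)
    for k in range(n):
        x1, y1 = polygon[k]
        x2, y2 = polygon[(k + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A vehicle whose bounding-box centroid satisfies `in_polygon` would be recorded into the detection result as occupying that parking space.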
In a possible embodiment, step 105, determining the roadside parking behavior of the vehicle based on the detection result, includes: determining, based on the detection result, whether the centroid situations of the vehicle in the first video frame and the second video frame are consistent; if they are consistent, determining that the vehicle has no roadside parking behavior; if they are inconsistent, determining that the vehicle in the first video frame and the second video frame has roadside parking behavior; wherein the centroid situation of the vehicle is either that the centroid is within the parking space or that the centroid is not within the parking space.
After the step of determining that the vehicle in the first video frame and the second video frame has no roadside parking behavior, the method further comprises: skipping to execute step a until the calculation of each video frame is completed.
After the step of determining that the vehicle in the first video frame and the second video frame has roadside parking behavior, the method further comprises: step m, taking the second video frame in which the roadside parking behavior occurred as the re-determined first video frame, and re-determining the second video frame, wherein the re-determined second video frame is the next video frame adjacent to the re-determined first video frame; detecting the vehicle positions in the re-determined first video frame and the re-determined second video frame through the vehicle training model; and, based on the detection result, if the centroid situation of the vehicle in the re-determined first video frame is inconsistent with that in the re-determined second video frame, skipping to execute step m until the two centroid situations are consistent.
For example, in the parking management system a, as described above, based on the detection result, it is determined whether the centroid situation of the vehicle is consistent between the first video frame (currently the ith video frame) and the second video frame (currently the (i+1)th video frame); the centroid situation of a vehicle is either that the centroid is within the parking space or that it is not. If the two frames are consistent, it is determined that no roadside parking behavior has occurred, and step a is executed again until the calculation of each video frame is completed. If they are inconsistent, it is determined that roadside parking behavior has occurred between the first and second video frames, as shown in fig. 5, and step m is then executed: the second video frame in which the roadside parking behavior occurred, e.g. the (i+1)th video frame, is taken as the re-determined first video frame, and the next adjacent video frame, e.g. the (i+2)th video frame, is taken as the re-determined second video frame; the vehicle positions in both are detected through the vehicle training model; based on the detection result, if the centroid situations of the vehicle in the two re-determined frames are inconsistent, step m is executed again until they are consistent. As shown in fig. 6, the vehicle may be moving within the parking area without actually entering or leaving a space; after a period of change, the vehicle is considered to have reached a stable state, which serves as the basis for the final judgment. If there is no change in the vehicles in the parking space between the jth and (j+1)th video frames, the change in the vehicles in the parking space between the ith and jth video frames is compared as the final result of the parking behavior: if both frames have a vehicle in the parking space, or both have none, the final result is that no parking behavior such as entering or leaving the space has occurred; if there is a vehicle in the space in the ith frame but none in the jth frame, it is judged that a vehicle has left the space; if there is no vehicle in the ith frame but there is one in the jth frame, it is judged that a vehicle has entered the space. The process then returns to step a to calculate the difference of each second unit block for the next pair of frames, namely the (j+1)th and (j+2)th frames.
Through this embodiment, a vehicle that exhibits parking behavior across two consecutive frames can be further examined over the adjacent subsequent frames until it is detected to be in a stable state, so that the various parking behaviors of vehicles can all be detected comprehensively and the detection accuracy is further improved; likewise, a vehicle that does not exhibit parking behavior across two consecutive frames can also be further examined over subsequent frames, so that the roadside parking behavior of a vehicle is confirmed from the information of multiple consecutive frames, greatly improving detection accuracy.
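The stable-state comparison above can be sketched as follows. This is illustrative only; the per-frame boolean `occupancy` list stands for the result of the centroid-in-parking-space check per frame, an assumed input rather than part of the patent:

```python
def parking_event(occupancy, start):
    """Given per-frame occupancy of a parking space and the index
    `start` of the first frame of a detected change, advance (step m)
    until two adjacent frames agree, then compare frame `start` with
    the stable frame j to classify the parking behavior."""
    j = start + 1
    while j + 1 < len(occupancy) and occupancy[j] != occupancy[j + 1]:
        j += 1                                   # not yet in a stable state
    before, after = occupancy[start], occupancy[j]
    if before == after:
        return "no parking behavior"
    return "vehicle left space" if before else "vehicle entered space"
```

For instance, an occupancy sequence that flips once and then stays constant classifies as a vehicle entering or leaving, while a transient flicker that returns to the initial state classifies as no parking behavior.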
The embodiment of the invention provides a device for identifying roadside parking behaviors based on video frames, which can realize the method embodiment provided above, and for specific function realization, please refer to the description in the method embodiment, and further description is omitted here.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a non-exclusive "or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of instructions or data structures and which can be read by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Additionally, any connection is properly termed a computer-readable medium; thus, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wirelessly, e.g., by infrared, radio, or microwave, those media are included in the definition. Disk and disc, as used herein, include compact discs, laser discs, optical discs, DVDs, floppy disks and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included within computer-readable media.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (16)
1. A method for recognizing roadside parking behavior based on video frames is characterized by comprising the following steps:
acquiring a plurality of continuous video frames acquired by video equipment, wherein the video equipment is used for acquiring image information of a roadside parking area;
drawing a parking space area in a first video frame, and determining coordinate information of a parking space in the parking space area, wherein the first video frame is any collected video frame;
determining a second video frame, and performing differential calculation on the first video frame and the second video frame based on the drawn parking space area, wherein the second video frame is a next video frame adjacent to the first video frame;
judging whether the calculation result meets a predetermined detection rule, and if so, detecting the vehicle positions in the first video frame and the second video frame through a vehicle training model based on the coordinate information of the parking space;
the predetermined detection rule includes:
judging whether each calculated difference value is larger than a preset difference value threshold value or not;
counting the number of the difference values which are greater than the predetermined difference threshold, and judging whether the number is greater than a predetermined number threshold;
wherein, the judging whether the calculation result meets the predetermined detection rule specifically includes:
if the number is larger than a preset number threshold value, determining that the calculation result meets a preset detection rule;
if the number is smaller than a preset number threshold value, determining that the calculation result does not meet a preset detection rule;
and determining roadside parking behavior of the vehicle based on the detection result.
2. The method according to claim 1, wherein the determining the second video frame and performing a differential calculation on the first video frame and the second video frame based on the drawn parking space area specifically include:
dividing images of a first video frame and a second video frame into a plurality of first unit blocks in unit blocks with a preset size, wherein the position of each first unit block in any video frame is the same;
determining each first unit block including the parking area as a second unit block based on the drawn parking area;
respectively calculating the pixel average value of each second unit block;
determining a one-to-one corresponding second unit block with the same position in the first video frame and the position in the second video frame; and respectively calculating the difference value of the pixel average value of each one-to-one corresponding second unit block.
3. The method of claim 1, wherein if it is determined that the calculation result does not satisfy the predetermined detection rule, the method comprises:
step a, taking a second video frame which does not meet a preset detection rule as a redetermined first video frame, and redetermining the second video frame, wherein the redetermined second video frame is a next video frame adjacent to the redetermined first video frame;
carrying out differential calculation on the redetermined first video frame and the redetermined second video frame, and judging whether the calculation result meets a preset detection rule or not;
and if not, skipping to execute the step a until the calculation result meets the preset detection rule.
4. The method according to any one of claims 1-3, wherein before the step of detecting the vehicle positions in the first video frame and the second video frame by the vehicle training model based on the coordinate information of the parking space, the method comprises:
acquiring a plurality of collected vehicle image samples in the roadside parking area;
and marking and training the plurality of vehicle image samples by a deep learning method based on a convolutional neural network to obtain a vehicle training model.
5. The method of claim 4, wherein after the step of detecting the vehicle positions in the first video frame and the second video frame by the vehicle training model based on the coordinate information of the parking space, the method comprises:
comparing the coordinates of the vehicle position and the parking space position in the detection result to obtain the mass center of the rectangular area of the vehicle;
and calculating the vehicle with the center of mass in the parking space, and recording the information of the vehicle with the center of mass in the parking space into the detection result.
6. The method of claim 5, wherein determining roadside parking behavior of the vehicle based on the detection results comprises:
determining whether the centroid situations of the vehicle in the first video frame and the second video frame are consistent based on the detection result;
if the first video frame and the second video frame are consistent, determining that the vehicle does not have roadside parking behaviors;
if the first video frame and the second video frame are not consistent, determining that the vehicle in the first video frame and the second video frame has roadside parking behavior;
wherein the centroid situation of the vehicle includes any one of the centroid being within the parking space and the centroid not being within the parking space.
7. The method of claim 6, wherein after the step of determining that the vehicle in the first video frame and the second video frame has no roadside parking behavior, the method comprises:
and c, skipping to execute the step a until the calculation of each video frame is completed.
8. The method of claim 6, wherein after the step of determining that the vehicle in the first video frame and the second video frame has roadside parking behavior, the method further comprises:
step m, taking the second video frame with the roadside parking behavior as the redetermined first video frame, and redetermining the second video frame, wherein the redetermined second video frame is a next video frame adjacent to the redetermined first video frame;
detecting the vehicle positions in the re-determined first video frame and the re-determined second video frame through a vehicle training model;
and based on the detection result, if the centroid situation of the vehicle in the re-determined first video frame is inconsistent with the centroid situation of the vehicle in the re-determined second video frame, skipping to execute step m until the centroid situation of the vehicle in the re-determined first video frame is consistent with the centroid situation of the vehicle in the re-determined second video frame.
9. An apparatus for recognizing roadside parking behavior based on video frames, characterized by comprising:
a first acquisition module, configured to acquire a plurality of continuous video frames collected by a video device, wherein the video device is used for acquiring image information of a roadside parking area;
a drawing module, configured to draw a parking space area in a first video frame and determine the coordinate information of the parking spaces in the parking space area, wherein the first video frame is any collected video frame;
a differential calculation module, configured to determine a second video frame and perform differential calculation on the first video frame and the second video frame based on the drawn parking space area, wherein the second video frame is the next video frame adjacent to the first video frame;
a detection module, configured to judge whether the calculation result meets a predetermined detection rule, and if so, detect the vehicle positions in the first video frame and the second video frame through a vehicle training model based on the coordinate information of the parking space;
the predetermined detection rule includes:
judging whether each calculated difference value is larger than a preset difference value threshold value or not;
counting the number of the difference values which are greater than the predetermined difference threshold, and judging whether the number is greater than a predetermined number threshold;
wherein the detection module is specifically configured to:
if the number is greater than the predetermined number threshold, determine that the calculation result satisfies the predetermined detection rule;
if the number is smaller than the predetermined number threshold, determine that the calculation result does not satisfy the predetermined detection rule;
and a determining module, configured to determine the roadside parking behavior of the vehicle based on the detection result.
10. The apparatus of claim 9, wherein the differential calculation module is specifically configured to:
Dividing images of a first video frame and a second video frame into a plurality of first unit blocks in unit blocks with a preset size, wherein the position of each first unit block in any video frame is the same;
determining each first unit block including the parking area as a second unit block based on the drawn parking area;
respectively calculating the pixel average value of each second unit block;
determining a one-to-one corresponding second unit block with the same position in the first video frame and the position in the second video frame;
and respectively calculating the difference value of the pixel average value of each one-to-one corresponding second unit block.
11. The apparatus of claim 9, wherein if the detection module determines that the calculation result does not satisfy the predetermined detection rule, the apparatus further comprises:
a first re-determination unit configured to take a second video frame that does not satisfy a predetermined detection rule as a re-determined first video frame, and re-determine the second video frame, where the re-determined second video frame is a subsequent video frame adjacent to the re-determined first video frame;
the difference calculation unit is used for carrying out difference calculation on the redetermined first video frame and the redetermined second video frame and judging whether the calculation result meets a preset detection rule or not;
and the first skipping unit is used for skipping to execute the first re-determining unit if the calculation result does not meet the preset detection rule.
12. The apparatus according to any one of claims 9-11, comprising:
the second acquisition module is used for acquiring a plurality of vehicle image samples in the collected roadside parking area;
and the training module is used for marking and training the plurality of vehicle image samples through a deep learning method based on a convolutional neural network to obtain a vehicle training model.
13. The apparatus of claim 12, comprising:
the comparison module is used for comparing the vehicle position coordinates in the detection result with the parking space coordinates, and obtaining the centroid of the vehicle's rectangular area;
and the calculation module is used for determining each vehicle whose centroid lies within the parking space, and recording the information of that vehicle into the detection result.
14. The apparatus of claim 13, wherein the determining module comprises:
a first determining unit for determining, based on the detection result, whether the centroid situation of the vehicle is consistent between the first video frame and the second video frame;
the second determining unit is used for determining that the vehicle has no roadside parking behavior in the first video frame and the second video frame if the two are consistent;
the third determining unit is used for determining that the vehicle has roadside parking behavior in the first video frame and the second video frame if the two are inconsistent;
wherein the centroid situation of the vehicle includes any one of the centroid being within the parking space and the centroid not being within the parking space.
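The centroid test of claims 13–14 can be illustrated with a short sketch. Axis-aligned rectangles for both the detected vehicle box and the parking space are assumptions for illustration; an actual drawn parking area may be an arbitrary polygon:

```python
def vehicle_centroid(box):
    """Centroid of a detected vehicle's rectangular area (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


def centroid_in_space(box, space):
    """True if the vehicle centroid lies inside the parking space rectangle."""
    cx, cy = vehicle_centroid(box)
    sx1, sy1, sx2, sy2 = space
    return sx1 <= cx <= sx2 and sy1 <= cy <= sy2


def parking_state_changed(boxes_frame1, boxes_frame2, space):
    """Claim-14-style consistency check: parking behavior is flagged when
    the centroid-in-space situation differs between the two frames."""
    in1 = any(centroid_in_space(b, space) for b in boxes_frame1)
    in2 = any(centroid_in_space(b, space) for b in boxes_frame2)
    return in1 != in2
```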
15. The apparatus of claim 14, wherein the second determining unit is further configured to skip to execute the first determining unit until the computation of each video frame is completed.
16. The apparatus of claim 14, wherein the third determining unit further comprises:
the second re-determination unit is used for taking the second video frame in which roadside parking behavior occurs as the re-determined first video frame, and re-determining the second video frame, where the re-determined second video frame is the subsequent video frame adjacent to the re-determined first video frame;
the detection unit is used for detecting the vehicle positions in the re-determined first video frame and the re-determined second video frame through the vehicle training model;
and the second skipping unit is used for skipping to execute the second re-determination unit if, based on the detection result, the centroid situation of the vehicle in the re-determined first video frame is inconsistent with that in the re-determined second video frame, until the two centroid situations are consistent.
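The re-determination loop of claim 16 amounts to sliding over adjacent frame pairs until the centroid situation stabilizes. A minimal sketch, assuming pre-computed per-frame vehicle boxes and an axis-aligned parking-space rectangle (both assumptions for illustration):

```python
def _in_space(box, space):
    # hypothetical rectangle test: centroid of (x1, y1, x2, y2) inside space
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    sx1, sy1, sx2, sy2 = space
    return sx1 <= cx <= sx2 and sy1 <= cy <= sy2


def track_until_consistent(detections, space, start=0):
    """Claim-16-style loop: while the centroid situation differs between a
    frame and its successor (a vehicle entering or leaving the space), take
    the successor as the new first frame; stop once two adjacent frames
    agree.  detections[i] holds the vehicle boxes detected in frame i."""
    i = start
    while i + 1 < len(detections):
        cur = any(_in_space(b, space) for b in detections[i])
        nxt = any(_in_space(b, space) for b in detections[i + 1])
        if cur == nxt:
            break
        i += 1  # re-determined first frame := previous second frame
    return i
```

The returned index marks the first frame whose centroid situation matches its successor's, i.e. the point at which the parking (or leaving) event has settled.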
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910323229.1A CN110163107B (en) | 2019-04-22 | 2019-04-22 | Method and device for recognizing roadside parking behavior based on video frames |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110163107A CN110163107A (en) | 2019-08-23 |
CN110163107B true CN110163107B (en) | 2021-06-29 |
Family
ID=67639800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910323229.1A Active CN110163107B (en) | 2019-04-22 | 2019-04-22 | Method and device for recognizing roadside parking behavior based on video frames |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163107B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110517506B (en) * | 2019-08-26 | 2021-10-12 | 重庆同枥信息技术有限公司 | Method, device and storage medium for detecting parking event based on traffic video image |
CN110688902B (en) * | 2019-08-30 | 2022-02-11 | 智慧互通科技股份有限公司 | Method and device for detecting vehicle area in parking space |
CN111178185A (en) * | 2019-12-17 | 2020-05-19 | 北京智芯原动科技有限公司 | High-level roadside parking detection method and device based on video |
CN111292353B (en) * | 2020-01-21 | 2023-12-19 | 成都恒创新星科技有限公司 | Parking state change identification method |
CN111476169B (en) * | 2020-04-08 | 2023-11-07 | 智慧互通科技股份有限公司 | Complex scene road side parking behavior identification method based on video frame |
CN111739043B (en) * | 2020-04-13 | 2023-08-08 | 北京京东叁佰陆拾度电子商务有限公司 | Parking space drawing method, device, equipment and storage medium |
CN111739335B (en) * | 2020-04-26 | 2021-06-25 | 智慧互通科技股份有限公司 | Parking detection method and device based on visual difference |
CN112766206B (en) * | 2021-01-28 | 2024-05-28 | 深圳市捷顺科技实业股份有限公司 | High-order video vehicle detection method and device, electronic equipment and storage medium |
CN113012467B (en) * | 2021-02-23 | 2022-04-29 | 中国联合网络通信集团有限公司 | Parking control method and device |
CN113052141A (en) * | 2021-04-26 | 2021-06-29 | 超级视线科技有限公司 | Method and device for detecting parking position of vehicle |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4178154B2 (en) * | 2005-08-30 | 2008-11-12 | 松下電器産業株式会社 | Parking position search support device, method and program |
CN102637360B (en) * | 2012-04-01 | 2014-07-16 | 长安大学 | Video-based road parking event detection method |
CN103325259B (en) * | 2013-07-09 | 2015-12-09 | 西安电子科技大学 | A kind of parking offense detection method based on multi-core parallel concurrent |
CN106384532A (en) * | 2015-07-31 | 2017-02-08 | 富士通株式会社 | Video data analysis method and apparatus thereof, and parking space monitoring system |
CN105513371B (en) * | 2016-01-15 | 2017-12-22 | 昆明理工大学 | A kind of highway parking offense detection method based on Density Estimator |
CN106204643A (en) * | 2016-07-01 | 2016-12-07 | 湖南源信光电科技有限公司 | Multi-object tracking method based on multiple features combining Yu Mean Shift algorithm |
CN106504580A (en) * | 2016-12-07 | 2017-03-15 | 深圳市捷顺科技实业股份有限公司 | A kind of method for detecting parking stalls and device |
CN107404653B (en) * | 2017-05-23 | 2019-10-18 | 南京邮电大学 | A kind of Parking rapid detection method of HEVC code stream |
- 2019-04-22 CN CN201910323229.1A patent/CN110163107B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163107B (en) | Method and device for recognizing roadside parking behavior based on video frames | |
EP3806064B1 (en) | Method and apparatus for detecting parking space usage condition, electronic device, and storage medium | |
CN111739335B (en) | Parking detection method and device based on visual difference | |
CN111339994B (en) | Method and device for judging temporary illegal parking | |
CN110491168B (en) | Method and device for detecting vehicle parking state based on wheel landing position | |
CN111476169B (en) | Complex scene road side parking behavior identification method based on video frame | |
CN109241938B (en) | Road congestion detection method and terminal | |
CN108765975B (en) | Roadside vertical parking lot management system and method | |
CN113055823B (en) | Method and device for managing shared bicycle based on road side parking | |
CN111898491B (en) | Identification method and device for reverse driving of vehicle and electronic equipment | |
CN113205692B (en) | Automatic identification method for road side parking position abnormal change | |
CN113205689B (en) | Multi-dimension-based roadside parking admission event judgment method and system | |
CN113033479B (en) | Berth event identification method and system based on multilayer perception | |
WO2023179416A1 (en) | Method and apparatus for determining entry and exit of vehicle into and out of parking space, device, and storage medium | |
CN111951601B (en) | Method and device for identifying parking positions of distribution vehicles | |
CN113205691A (en) | Method and device for identifying vehicle position | |
CN112861773A (en) | Multi-level-based berthing state detection method and system | |
CN111931673B (en) | Method and device for checking vehicle detection information based on vision difference | |
CN110880205B (en) | Parking charging method and device | |
CN112836699A (en) | Long-time multi-target tracking-based berth entrance and exit event analysis method | |
CN113450575B (en) | Management method and device for roadside parking | |
CN112766222B (en) | Method and device for assisting in identifying vehicle behavior based on berth line | |
CN113052141A (en) | Method and device for detecting parking position of vehicle | |
CN113449624B (en) | Method and device for determining vehicle behavior based on pedestrian re-identification | |
CN114463990A (en) | High-order video vehicle and license plate detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
CB02 | Change of applicant information | | Address after: 075000 ten building, phase 1, airport economic and Technological Development Zone, Zhangjiakou, Hebei; Applicant after: Smart intercommunication Technology Co.,Ltd. Address before: 075000 ten building, phase 1, airport economic and Technological Development Zone, Zhangjiakou, Hebei; Applicant before: INTELLIGENT INTERCONNECTION TECHNOLOGIES Co.,Ltd. |
GR01 | Patent grant | | |