CN111476169B - Complex scene road side parking behavior identification method based on video frame - Google Patents


Info

Publication number
CN111476169B
Authority
CN
China
Prior art keywords
vehicle
video frames
parking
parking space
coordinate information
Prior art date
Legal status
Active
Application number
CN202010270386.3A
Other languages
Chinese (zh)
Other versions
CN111476169A (en
Inventor
闫军
杨怀恒
Current Assignee
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202010270386.3A priority Critical patent/CN111476169B/en
Publication of CN111476169A publication Critical patent/CN111476169A/en
Priority to PCT/CN2020/132029 priority patent/WO2021203717A1/en
Application granted granted Critical
Publication of CN111476169B publication Critical patent/CN111476169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/14Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a video-frame-based method for identifying roadside parking behavior in complex scenes. The method draws a parking-space region in the view of the video device, detects vehicles in consecutive video frames, and compares the differences of the vehicles in the parking-space region across those frames to preliminarily identify vehicles that may exhibit parking behavior. It then detects auxiliary targets (accessories) of each changed vehicle and combines the differences of the vehicle and its auxiliary targets across consecutive frames to judge the roadside parking behavior of the vehicle in complex scenes. Because the method is based on processing consecutive image frames, it is computationally efficient, well suited to urban roadside parking scenes, and significant for improving both the recognition accuracy of urban roadside parking behavior and the automation level of parking management systems.

Description

Complex scene road side parking behavior identification method based on video frame
Technical Field
The application relates to the field of automatic control of road side parking, in particular to a method for identifying road side parking behaviors in complex scenes based on video frames.
Background
Roadside parking uses sites on both sides of public roads for parking management and is an important component of urban administration. With the rapid development of urban economies and the continuous improvement of living standards, the number of urban motor vehicles has grown rapidly, and for various historical and practical reasons most cities face a shortage, or even a severe shortage, of motor-vehicle parking spaces. Roadside parking management has therefore become an important link in urban parking management and has drawn wide attention from governments and the public; at present it is mainly carried out with high-position video.
In recent years, roadside parking systems based on high-position video have been deployed at scale in many cities, thanks to advantages such as resistance to damage after installation, comprehensive and clear video capture, and no need for on-site human operation. As an important part of smart-city construction, automated roadside parking management genuinely remedies the drawbacks of the original approach, under which roadside parking could only be managed inefficiently by manual patrol, at high cost and under poor working conditions.
However, roadside parking based on high-position video depends on video frames for parking behaviors such as automatically capturing vehicles entering and leaving and automatically locating and capturing clear license plates, so an efficient processing algorithm is an essential part of the whole system. Although some cities have begun large-scale deployment, the accuracy and real-time performance of the processing algorithm remain a critical link that determines the success and effectiveness of the whole system. In particular, because of constraints imposed by installation conditions and the surrounding environment, the video processing algorithm finds it difficult to judge parking behavior accurately in complex scenes such as severe vehicle overlap, blurred imaging caused by weather, insufficient illumination of the parking area at night, and occlusion by obstacles such as the leaves of urban greenery. How to use a smarter and more robust algorithm to accurately and automatically judge vehicle entry and exit in such difficult application environments has become one of the challenges facing industry practitioners.
In view of the above, the application provides a video-frame-based method for identifying roadside parking behavior in complex scenes, to overcome the defects of the prior art.
Disclosure of Invention
The aim of the application is to provide a video-frame-based method for identifying roadside parking behavior in complex scenes. Based on consecutive frames of high-position video in complex roadside parking scenes, the method uses an intelligent image processing algorithm to automatically identify vehicle entry and exit behavior in the video, providing technical support for automated roadside parking management in complex scenes and for improving the efficiency of smart-city traffic and parking management.
In order to achieve the above object, the present application provides a method for identifying parking behavior on a road side of a complex scene based on video frames, comprising:
acquiring a plurality of continuous video frames acquired by video equipment;
drawing a parking space area in any video frame in the plurality of video frames, and determining coordinate information of the parking space area;
detecting a plurality of continuous video frames to obtain coordinate information of a vehicle in the video frames;
comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to acquire the coordinate information of all the vehicles in the parking space;
judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames is changed or not;
if the vehicle coordinate information in the two adjacent video frames changes, detecting whether the coordinates of the vehicle accessories in the two video frames change or not to obtain a detection result;
based on the detection result, a vehicle parking behavior is determined.
Further preferably, drawing the parking-space region in any one of the plurality of video frames and determining its coordinate information specifically comprises:
selecting a vertex of the parking-space region as the coordinate starting point, drawing a polygon along the parking-space boundary to form a closed polygon boundary, and determining the coordinates of each vertex of the polygon.
Further preferably, before the coordinate information of all vehicles in the parking space is obtained, if the coordinates of a vehicle in the video frame are not within the parking-space region, the coordinate information of that vehicle is deleted.
Further preferably, obtaining the coordinate information of all vehicles in the parking space specifically comprises:
selecting coordinate information of a vehicle boundary frame;
determining whether the coordinates of the midpoint of the lower boundary of the vehicle bounding box are within the parking space region,
if yes, confirming that the vehicle enters the parking space area, and acquiring the coordinate information of the vehicle.
Further preferably, if the vehicle coordinate information in two adjacent video frames changes, the changed vehicle region blocks are cut out of the two adjacent video frames.
Further preferably, changes of the vehicle accessories within the vehicle region blocks of the two adjacent video frames are detected, and the parking behavior of the vehicle is judged from the result of the accessory change.
Further preferably, before the vehicle coordinates in the plurality of consecutive video frames are detected, a vehicle training model is built, which specifically comprises:
collecting in advance a number of vehicle sample pictures of the parking area in video frames, and labeling and training them through convolutional-neural-network-based deep learning to obtain the vehicle training model.
Further preferably, the method further comprises building an accessory training model before the coordinates of the vehicle accessories are detected, which comprises:
collecting in advance a number of vehicle sample pictures of the parking area in video frames, and labeling and training the vehicle accessories through convolutional-neural-network-based deep learning to obtain the vehicle accessory training model.
An identification device for video-frame-based complex-scene roadside parking behavior, the device comprising:
the acquisition module is used for acquiring a plurality of continuous video frames acquired by the video equipment;
the drawing module is used for drawing a parking space area in any video frame in the plurality of video frames and determining coordinate information of the parking space area;
the first detection module is used for detecting coordinate information of the vehicle in a plurality of continuous video frames;
the comparison module is used for comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to obtain the coordinate information of all the vehicles in the parking space;
the judging module is used for judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames is changed or not;
the second detection module is used for detecting whether the coordinates of the vehicle accessories in the two video frames change or not if the vehicle coordinate information in the two adjacent video frames change, so as to obtain a detection result;
and the determining module is used for determining the parking behavior of the vehicle based on the detection result.
The drawing module is specifically configured to select a vertex of the parking-space region as the coordinate starting point, draw a polygon along the parking-space boundary to form a closed polygon boundary, and determine the coordinates of each vertex of the polygon.
Further preferably, the device further comprises a deleting module, wherein the deleting module is used for deleting the coordinate information of the vehicle, which is not in the parking space area, in the video frame.
The comparison module is specifically configured to select the coordinate information of the vehicle bounding box;
determining whether the coordinates of the midpoint of the lower boundary of the vehicle bounding box are within the parking space region,
if yes, confirming that the vehicle enters the parking space area, and acquiring the coordinate information of the vehicle.
Further preferably, the device further comprises a cutting module, and the cutting module is used for cutting the changed vehicle region blocks in the two adjacent video frames if the vehicle coordinate information in the two adjacent video frames is changed.
Further preferably, the second detection module is specifically configured to detect a change in the vehicle appendage in the vehicle zone block in two adjacent video frames.
Further preferably, the device further comprises a training module, the training module being configured to build a vehicle training model, which specifically comprises:
collecting in advance a number of vehicle sample pictures of the parking area in video frames, and labeling and training them through convolutional-neural-network-based deep learning to obtain the vehicle training model.
Further preferably, the training module is further configured to build an accessory training model, which comprises:
collecting in advance a number of vehicle sample pictures of the parking area in video frames, and labeling and training the vehicle accessories through convolutional-neural-network-based deep learning to obtain the vehicle accessory training model.
In the application, an accessory of the vehicle refers to any attachment fixed to the vehicle that can characterize the vehicle. In this application the rear-view mirrors and tail lights are used as representatives, but the identification method covers, without being limited to, these accessories.
In the application, video frames refer to consecutive image frames captured at fixed time intervals by high-mounted cameras installed above roadside parking areas.
The beneficial effects of the application are as follows: the roadside parking behavior of a vehicle in a complex scene is judged by comparing the differences of the vehicle and its auxiliary targets across consecutive video frames. This image-based processing of consecutive video frames is simple in principle and computationally efficient, is well suited to urban roadside parking scenes, and is significant for improving the recognition accuracy of urban roadside parking behavior and the automation level of parking management systems.
Drawings
FIG. 1 is a flow chart of a method for identifying parking behavior on a road side of a complex scene based on video frames according to the application;
FIG. 2 is a schematic drawing of a parking space region in a video frame of the present application;
FIG. 3 is a schematic illustration of a vehicle in a parking space according to the present application;
FIG. 4 is a schematic diagram of the video-frame-based complex-scene roadside parking behavior recognition device;
FIG. 5 is a schematic diagram of the device in a further embodiment.
Detailed Description
The technical scheme of the application is further described in detail through the drawings and the embodiments.
Depicted in fig. 1 is a flow chart of the video-frame-based roadside parking behavior recognition method. Specifically, the method comprises:
101. Acquiring a plurality of consecutive video frames captured by the video device. Specifically, as shown in fig. 2, the video device captures an image; the device acquiring the video frames may be one or more of a bullet (gun-type) camera or a dome (ball-type) camera, which ensures the accuracy of image acquisition. The coordinate information of all vehicles is obtained from the acquired video frames;
102. drawing a parking space region in any one of the plurality of video frames, determining coordinate information of the parking space region,
Fig. 2 is a schematic drawing of the parking-space region in a video frame. Specifically, a vertex of the parking-space region is selected as the coordinate starting point A(a0, b0); a polygon is drawn along the parking-space boundary, recording each vertex B(a1, b1), C(a2, b2), D(a3, b3); finally, a closed parking-space polygon boundary ABCD is formed. The polygon boundary of the parking space is stored and loaded when the system starts.
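As a sketch only (function and file names are hypothetical, not from the patent), the drawing and storing of the parking-space polygon so that it can be reloaded at system start-up might look like:

```python
import json
import os
import tempfile

def save_space_boundary(path, vertices):
    """Persist the closed parking-space polygon (e.g. boundary ABCD)
    so the system can reload it when it starts."""
    if len(vertices) < 3:
        raise ValueError("a parking-space boundary needs at least 3 vertices")
    with open(path, "w") as f:
        json.dump({"vertices": vertices}, f)

def load_space_boundary(path):
    """Reload the stored polygon as a list of (x, y) vertex tuples."""
    with open(path) as f:
        return [tuple(v) for v in json.load(f)["vertices"]]

# Vertices recorded in order from the chosen start point A(a0, b0).
boundary = [(100, 400), (300, 400), (320, 500), (90, 510)]  # A, B, C, D
path = os.path.join(tempfile.gettempdir(), "space_01.json")
save_space_boundary(path, boundary)
assert load_space_boundary(path) == boundary
```

In a deployed system the same file would simply be read once at start-up, as the text describes.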
103. Detecting a plurality of continuous video frames to obtain coordinate information of a vehicle in the video frames, and particularly adopting a neural network-based deep learning method to detect the vehicle in the video frames by using a loaded depth model to obtain the coordinate information of the vehicle in the video frames;
104. comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to acquire the coordinate information of all the vehicles in the parking space;
the method for acquiring the coordinate information of the vehicle in the parking space specifically comprises the following steps of; selecting coordinate information of a vehicle boundary frame, judging whether the coordinate of the middle point of the lower boundary of the vehicle boundary frame is in a parking space area,
if yes, confirming that the vehicle enters the parking space area, and acquiring the coordinate information of the vehicle. Specifically, the detected vehicle boundary frame is rectangular, and whether the midpoint of the lower boundary of the boundary frame corresponding to the vehicle is in a parking space area is judged according to the coordinate information of the boundary frame and the parking space coordinate information; if the point is in the parking space area, determining that the vehicle is driven into the parking space.
Specifically, let the four vertices of the quadrilateral be A(x0, y0), B(x1, y0), C(x0, y1), D(x1, y1), with x0 < x1 and y0 < y1. The midpoint of the lower boundary of the vehicle bounding box is then O((x0 + x1)/2, y1). If O ∈ P_ABCD, where P_ABCD is the quadrilateral with vertices A, B, C, D, the vehicle is in the parking space; otherwise, the vehicle is not in the parking space.
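A minimal sketch of this test, using the standard ray-casting point-in-polygon algorithm (the function names are illustrative; the patent does not specify an algorithm, only the membership test O ∈ P_ABCD):

```python
def in_polygon(pt, poly):
    """Ray-casting point-in-polygon test for the parking-space
    boundary P_ABCD given as a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def vehicle_in_space(box, space_poly):
    """box = (x0, y0, x1, y1) with x0 < x1, y0 < y1 in image
    coordinates (y grows downward).  Tests the midpoint of the
    lower boundary, O = ((x0 + x1) / 2, y1)."""
    x0, y0, x1, y1 = box
    return in_polygon(((x0 + x1) / 2, y1), space_poly)

space = [(100, 400), (300, 400), (300, 500), (100, 500)]
assert vehicle_in_space((150, 300, 250, 450), space)      # O = (200, 450): inside
assert not vehicle_in_space((150, 200, 250, 350), space)  # O = (200, 350): outside
```

Because only the lower-boundary midpoint is tested, a vehicle whose body overhangs the space but whose wheels sit outside it is correctly excluded, which matches the deletion rule in the text.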
If the coordinates of a vehicle in the video frame are not within the parking-space region, the coordinate information of that vehicle is deleted.
105. Judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames is changed or not;
As shown in fig. 3, which illustrates a change of the vehicles in the parking spaces, two adjacent video frames, the i-th and the (i+1)-th, are selected, and the vehicles detected in the parking spaces in the two frames are compared. If the vehicle coordinates are unchanged between the two frames, detection continues: the vehicles in the parking spaces in the (i+2)-th frame are detected and the change between the (i+1)-th and (i+2)-th frames is compared, and so on, until a change of the vehicles in the parking spaces of two adjacent frames is detected.
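The scan over adjacent frames can be sketched as follows (the per-frame detection results are represented here as a simple mapping from a hypothetical space id to a bounding box; a real system would obtain these from the detection model and would typically tolerate small coordinate jitter, e.g. via an IoU threshold, rather than requiring exact equality):

```python
def first_change(per_frame_boxes):
    """per_frame_boxes[i] maps space id -> vehicle bounding box
    (or None if the space is empty) for frame i.  Returns
    (i, space_id) for the first pair of adjacent frames whose
    in-space detections differ, or None if no change is found."""
    for i in range(len(per_frame_boxes) - 1):
        cur, nxt = per_frame_boxes[i], per_frame_boxes[i + 1]
        for space_id in cur.keys() | nxt.keys():
            if cur.get(space_id) != nxt.get(space_id):
                return i, space_id
    return None

frames = [
    {"P1": (150, 300, 250, 450)},   # frame i:   vehicle present in P1
    {"P1": (150, 300, 250, 450)},   # frame i+1: unchanged, keep scanning
    {"P1": None},                   # frame i+2: vehicle gone -> change
]
assert first_change(frames) == (1, "P1")
assert first_change([{"P1": None}, {"P1": None}]) is None
```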
106. Based on the detection result, a vehicle parking behavior is determined.
Specifically, if the vehicles in the parking spaces change between the i-th and (i+1)-th frames of the consecutive video frames, the vehicle regions of the changed vehicles are cut out of the two current frames according to the coordinate information in the detected vehicle information, yielding the vehicle region blocks of those vehicles in the current video frames.
For example, suppose that between the i-th and (i+1)-th frames of the consecutive video frames the changed vehicle is C1: it appears in the i-th frame and disappears in the (i+1)-th frame. Based on the pixel coordinates in the detected C1 vehicle information, the C1 vehicle region is cut out of both the i-th and (i+1)-th frames to obtain the C1 vehicle region blocks. Assume the vertices of the C1 detection box in the video frame image are a(x0, y0), b(x1, y0), c(x0, y1), d(x1, y1), as shown in fig. 3, with x0 < x1 and y0 < y1. Then the region spanning abscissa x0 to x1 and ordinate y0 to y1 is cut out of the i-th and (i+1)-th frames, yielding image blocks B1 and B2 of the C1 vehicle with width x1 - x0 + 1 pixels and height y1 - y0 + 1 pixels. The vehicle accessories, such as the tail lights and rear-view mirrors, are then preferentially detected in image blocks B1 and B2. For example, the tail-light information detected in B1 is recorded as W1 and the rear-view-mirror information as H1; the tail-light information detected in B2 is recorded as W2 and the rear-view-mirror information as H2.
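The cut itself follows directly from the width and height formulas. A sketch using a plain nested list as a stand-in for the image array (a real system would slice a NumPy/OpenCV array the same way):

```python
def crop(img, box):
    """Cut a vehicle region block from an image.  The detection box
    vertices are a(x0, y0), b(x1, y0), c(x0, y1), d(x1, y1) with
    x0 < x1 and y0 < y1; img is row-major, i.e. img[y][x].  The
    result has width x1 - x0 + 1 and height y1 - y0 + 1 pixels."""
    x0, y0, x1, y1 = box
    return [row[x0:x1 + 1] for row in img[y0:y1 + 1]]

# 6x6 toy "image" whose pixel value encodes its position (10*y + x).
img = [[10 * y + x for x in range(6)] for y in range(6)]
block = crop(img, (1, 2, 3, 4))
assert len(block) == 4 - 2 + 1     # height: y1 - y0 + 1 = 3
assert len(block[0]) == 3 - 1 + 1  # width:  x1 - x0 + 1 = 3
assert block[0] == [21, 22, 23]    # top row of the cut region
```

Cropping both frames with the same box guarantees B1 and B2 cover the same image region, so accessory detections in the two blocks are directly comparable.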
Further, the roadside parking behavior of the vehicle is judged by comparing the differences of the vehicle and its auxiliary targets in adjacent video frames. The accessories of the changed vehicle in the two frames are compared, and the departure of the vehicle is judged from the disappearance of the vehicle and its accessories from the parking-space region in the video frame.
If the change trend of the accessories is the same as that of the vehicle, the vehicle is judged to have left; if the two trends differ, further video frames are collected and the judgment is repeated until the parking behavior is determined.
In the example above, the C1 vehicle exists in the i-th frame and disappears in the (i+1)-th frame; the tail-light information detected in the vehicle region blocks B1 and B2 of the adjacent frames likewise exists in W1 and disappears in W2, and the rear-view-mirror information behaves similarly. The change trend of the auxiliary targets therefore matches that of the vehicle, and the departure of the C1 vehicle is confirmed. If the change of the vehicle accessories does not match the change trend of the vehicle, the C1 vehicle may have been falsely detected or missed in some video frames, and detection must be repeated until the parking behavior is confirmed.
In the above embodiment, specifically, adjacent video frames are detected continuously. If no vehicle is in the parking-space region in the i-th frame but a vehicle appears there in the (i+1)-th frame, an accessory of the vehicle, such as a rear-view mirror or tail light, is further selected for detection. If the accessory's change trend across the i-th and (i+1)-th frames is from absent to present, consistent with the vehicle's change trend, the entry of the vehicle is confirmed;
specifically, adjacent video frames are detected continuously. If a vehicle is in the parking-space region in the i-th frame and disappears in the (i+1)-th frame, an accessory of the vehicle, such as a rear-view mirror or tail light, is further selected for detection. If the accessory's change trend across the i-th and (i+1)-th frames is from present to absent, consistent with the vehicle's change trend, the exit of the vehicle is confirmed. The same vehicle training model and the same algorithm are used to judge both entry and exit behavior.
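The entry/exit decision from these two paragraphs can be sketched as a small state comparison (the function name and return labels are illustrative, not from the patent; presence is reduced to booleans for clarity):

```python
def judge_parking(vehicle_i, vehicle_j, accessories_i, accessories_j):
    """Judge parking behavior from the presence trend of the vehicle
    and of its accessories (e.g. tail lights, rear-view mirrors)
    between frame i and frame i+1.  True/False marks presence or
    absence; the accessory lists hold one flag per accessory type."""
    if vehicle_i == vehicle_j:
        return "none"                       # no vehicle change between frames
    acc_trend = (any(accessories_i), any(accessories_j))
    if (vehicle_i, vehicle_j) == acc_trend == (True, False):
        return "exit"                       # present -> absent, trends agree
    if (vehicle_i, vehicle_j) == acc_trend == (False, True):
        return "entry"                      # absent -> present, trends agree
    return "redetect"                       # trends disagree: possible false
                                            # or missed detection, re-collect

# C1 example from the text: vehicle, tail light and mirror all disappear.
assert judge_parking(True, False, [True, True], [False, False]) == "exit"
# Vehicle seems to vanish but a mirror is still detected -> re-detect.
assert judge_parking(True, False, [True, True], [False, True]) == "redetect"
# Empty space gains a vehicle and its accessories -> entry.
assert judge_parking(False, True, [False, False], [True, True]) == "entry"
```

The "redetect" branch reflects the text's handling of false or missed detections: the decision is deferred until accessory and vehicle trends agree.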
In the application, the method further comprises building a vehicle training model before the vehicle coordinates are detected, which specifically comprises:
collecting in advance a number of vehicle sample pictures of the parking area in video frames, and labeling and training them through convolutional-neural-network-based deep learning to obtain the vehicle training model.
The method further comprises building an accessory training model before the coordinates of the vehicle accessories are detected, which comprises:
collecting in advance a number of vehicle accessory samples of the parking area in video frames, and labeling and training them through convolutional-neural-network-based deep learning to obtain the vehicle accessory training model.
The application further discloses a recognition device for executing the video-frame-based complex-scene roadside parking behavior recognition method. Specifically, fig. 4 is a schematic diagram of the device, which comprises:
an acquisition module 1001, configured to acquire a plurality of continuous video frames acquired by a video device;
a drawing module 1002, configured to draw a parking space area in any one of the video frames, and determine coordinate information of the parking space area;
a first detection module 1003 for detecting coordinate information of a vehicle in a plurality of consecutive video frames;
a comparison module 1004, configured to compare the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking-space region one by one, to obtain the coordinate information of all vehicles in the parking space;
the judging module 1007 is used for judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames is changed or not;
a second detection module 1006, configured to detect, if the vehicle coordinate information in two adjacent video frames changes, whether the coordinates of the vehicle accessories in the two frames change, to obtain a detection result;
a determining module 1005 is configured to determine a parking behavior of the vehicle based on the detection result.
In an optimized embodiment, the drawing module is specifically configured to select a vertex of the parking space area as a coordinate starting point, draw a polygon along a boundary of the parking space, form a closed polygon boundary, and determine coordinates of each vertex of the polygon.
As shown in fig. 5, in an optimized embodiment, the apparatus further comprises a deletion module 1008,
and the deleting module is used for deleting the coordinate information of the vehicle which is not in the parking space area in the video frame.
In an optimized embodiment, the comparison module is specifically configured to select coordinate information of a vehicle bounding box;
determining whether the coordinates of the midpoint of the lower boundary of the vehicle bounding box are within the parking space region,
if yes, confirming that the vehicle enters the parking space area, and acquiring the coordinate information of the vehicle.
In an optimized embodiment, the apparatus further includes a cutting module 1009, where if the vehicle coordinate information in the two adjacent video frames changes, the cutting module is configured to cut the changed vehicle region blocks in the two adjacent video frames.
In an optimized embodiment, the second detection module is specifically configured to detect a change in a vehicle appendage in the vehicle region block in two adjacent video frames.
In an optimized implementation manner, the apparatus further comprises a training module, configured to build a vehicle training model; building the training model specifically comprises:
acquiring in advance a plurality of vehicle sample pictures of the parking area in the video frames, labeling them, and training a convolutional neural network on the labeled pictures through deep learning to obtain the vehicle training model.
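The patent does not specify the annotation format used when labeling the vehicle sample pictures; one plausible per-image record, with hypothetical field names, is:

```python
# Hypothetical annotation record for one vehicle sample picture; the field
# names ("image", "boxes", "labels") are illustrative, not from the patent.
sample = {
    "image": "frame_000123.jpg",
    "boxes": [(152, 310, 390, 470)],   # (x1, y1, x2, y2) vehicle boxes
    "labels": ["vehicle"],
}

def validate_sample(s):
    """Basic sanity checks before the record is fed to detector training."""
    assert len(s["boxes"]) == len(s["labels"])
    for x1, y1, x2, y2 in s["boxes"]:
        assert x1 < x2 and y1 < y2
    return True
```

A record like this maps directly onto common detection-training pipelines (one box and class label per annotated object), and the same structure could carry appendage labels such as doors or trunk lids for the second model.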
In an optimized embodiment, the training module is further configured to build an appendage training model, where building the appendage training model comprises:
acquiring in advance a plurality of vehicle sample pictures of the parking area in the video frames, labeling the vehicle appendages, and training a convolutional neural network on them through deep learning to obtain the vehicle appendage training model.
The flow of the video-frame-based complex scene roadside parking behavior recognition method is further described below with reference to the embodiments.
In this application, the adjacent video frames are collected as consecutive frames sampled at an interval of 5 s; the interval is not limited in actual collection, and is chosen so that changes of the vehicles in the parking spaces can be conveniently detected between frames.
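Sampling adjacent frames at a fixed interval reduces to keeping every `fps × interval`-th frame of the video stream; a small sketch (the function name and parameters are assumptions for illustration):

```python
def sample_frame_indices(fps, duration_s, interval_s=5.0):
    """Frame indices kept when sampling one frame every interval_s seconds
    from a video with the given frame rate (fps) and duration in seconds."""
    step = max(1, round(fps * interval_s))
    total = int(fps * duration_s)
    return list(range(0, total, step))
```

For a 25 fps camera this keeps every 125th frame, so consecutive entries in the returned list are the "adjacent video frames" compared by the method; a shorter `interval_s` trades more computation for finer-grained detection of parking events.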
Before the parking behavior is judged, a number of vehicle samples in the parking environment must be collected in advance by the image acquisition device, the vehicles and their accessories must be labeled, and the following models must be trained and loaded when the system starts: a pre-trained first convolutional neural network model for detecting vehicle information, and a pre-trained second convolutional neural network model for detecting the vehicle accessories. To increase the accuracy of vehicle detection, the vehicle detection model is loaded into the system at start-up.
During vehicle detection, the algorithm evaluates the state of the vehicle in the video frames mainly through a difference ratio and judges whether the position of the vehicle changes between adjacent video frames. If the position does not change, detection continues until a change occurs. The same detection scheme is applied when the vehicle accessories are selected, which ensures the accuracy of the judgment of the actual parking event.
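The patent does not define the "difference ratio" precisely; one common choice for deciding whether a detected vehicle's position changed between adjacent frames is the intersection-over-union (IoU) of its bounding boxes, sketched here with an assumed change threshold:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes: a plausible
    'difference ratio' between a vehicle's positions in adjacent frames."""
    ix1 = max(box_a[0], box_b[0]); iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2]); iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def position_changed(box_prev, box_curr, iou_threshold=0.9):
    """True if the overlap drops below the (assumed) stability threshold."""
    return iou(box_prev, box_curr) < iou_threshold
```

Under this reading, an IoU near 1.0 means the vehicle is stationary and detection simply continues; when the ratio drops below the threshold, the accessory-change check on the cropped region blocks is triggered.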
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure.
Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising" as that term is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a non-exclusive "or".
Those skilled in the art will also recognize the various illustrative logical blocks described in the examples of the present application.
The various illustrative logical blocks or units described in the embodiments of the application may be implemented or performed with a general purpose processor, a digital signal processor, an application specific integrated circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be stored in RAM memory, flash memory, or any other form of storage medium known in the art. In an example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a user terminal. In the alternative, the processor and the storage medium may reside as distinct components in a user terminal.
In one or more exemplary designs, the above-described functions of embodiments of the present application may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer readable media includes both computer storage media and communication media that facilitate transfer of computer programs from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general purpose or special purpose computer, or a general purpose or special purpose processor. Further, any connection is properly termed a computer-readable medium; for example, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, these are also included in the definition of computer-readable medium.
The foregoing description of the embodiments is provided to illustrate the general principles of the application and is not meant to limit the scope of the application to the particular embodiments; any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the application are intended to be included within the scope of the application.

Claims (10)

1. A method for identifying parking behaviors on road sides of complex scenes based on video frames is characterized by comprising the following steps:
acquiring a plurality of continuous video frames acquired by video equipment;
drawing a parking space area in any video frame in the plurality of video frames, and determining coordinate information of the parking space area;
detecting a plurality of continuous video frames to obtain coordinate information of a vehicle in the video frames;
comparing the coordinate information of the vehicle in the video frame with the drawn coordinate information of the parking space area one by one, selecting the coordinate information of a vehicle boundary frame, judging whether the coordinate of the middle point of the lower boundary of the vehicle boundary frame is in the parking space area, if so, confirming that the vehicle is driven into the parking space area, and obtaining the coordinate information of the vehicle;
judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames is changed or not;
if the vehicle coordinate information in the two adjacent video frames changes, cutting the changed vehicle region blocks in the two adjacent video frames, and detecting whether the coordinates of the vehicle accessories in the vehicle region blocks in the two adjacent video frames change or not to obtain a change result of the vehicle accessories;
based on the change results, a vehicle parking behavior is determined.
2. The method for identifying a parking behavior on a complex scene roadside based on video frames according to claim 1, wherein the steps of drawing a parking space area in any one of the plurality of video frames and determining coordinate information of the parking space area comprise:
and selecting a certain vertex of the parking space area as a coordinate starting point, drawing a polygon along the boundary of the parking space to form a closed polygon boundary, and determining the coordinate of each vertex of the polygon.
3. The method for identifying a parking behavior on a road side in a complex scene based on video frames according to claim 1, wherein the step of acquiring the coordinate information of all vehicles located in the parking space further comprises: if the coordinate information of a certain vehicle in the video frame is not located in the parking space region, deleting the coordinate information of that vehicle.
4. A method of identifying a complex scene roadside parking behavior based on video frames according to any one of claims 1 to 3, further comprising building a vehicle training model before detecting vehicle coordinates in a continuous plurality of video frames, the building training model specifically comprising:
and acquiring a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training by deep learning a plurality of vehicle sample pictures based on a convolutional neural network to obtain a vehicle training model.
5. A complex scene roadside parking behavior recognition method based on video frames according to any one of claims 1 to 3, characterized in that the detection of vehicle accessory coordinates is preceded by an establishment of an accessory training model comprising:
and acquiring a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training by deep learning of the vehicle appendages based on a convolutional neural network to obtain a vehicle appendage training model.
6. An apparatus for identifying complex scene roadside parking behavior based on video frames, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a plurality of continuous video frames acquired by the video equipment;
the drawing module is used for drawing a parking space area in any video frame in the plurality of video frames and determining coordinate information of the parking space area;
the first detection module is used for detecting a plurality of continuous video frames to obtain the coordinate information of the vehicle in the video frames;
the comparison module is used for comparing the coordinate information of the vehicle in the video frame with the drawn coordinate information of the parking space area one by one, selecting the coordinate information of the vehicle boundary frame, judging whether the coordinate of the middle point of the lower boundary of the vehicle boundary frame is in the parking space area, and if so, confirming that the vehicle is driven into the parking space area and obtaining the coordinate information of the vehicle;
the judging module is used for judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames is changed or not;
the second detection module is used for cutting the changed vehicle region blocks in the two adjacent video frames if the vehicle coordinate information in the two adjacent video frames is changed, and detecting whether the coordinates of the vehicle accessories in the vehicle region blocks in the two adjacent video frames are changed or not to obtain a change result of the vehicle accessories;
and the determining module is used for determining the parking behavior of the vehicle based on the change result.
7. The device for identifying a parking behavior on a road side of a complex scene based on a video frame according to claim 6, wherein the drawing module is specifically configured to select a vertex of a parking space area as a coordinate starting point, draw a polygon along a boundary of the parking space, form a closed polygon boundary, and determine coordinates of each vertex of the polygon.
8. The apparatus for identifying a parking behavior on a complex scene roadside based on a video frame according to claim 6, wherein the apparatus further comprises a deletion module for deleting coordinate information of the vehicle not in a parking space area in the video frame.
9. The apparatus for identifying a parking behavior on a complex scene roadside based on video frames according to any one of claims 6 to 8, wherein the apparatus further comprises a training module for building a training model of a vehicle, the building the training model specifically comprising:
and acquiring a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training by deep learning a plurality of vehicle sample pictures based on a convolutional neural network to obtain a vehicle training model.
10. The apparatus for identifying parking behavior on a complex scene roadside based on video frames according to claim 9, wherein said training module is further configured to build an appendage training model, the building of the appendage training model comprising:
and acquiring a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training by deep learning of the vehicle appendages based on a convolutional neural network to obtain a vehicle appendage training model.
CN202010270386.3A 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame Active CN111476169B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010270386.3A CN111476169B (en) 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame
PCT/CN2020/132029 WO2021203717A1 (en) 2020-04-08 2020-11-27 Method for recognizing roadside parking behavior in complex scenario on basis of video frames

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010270386.3A CN111476169B (en) 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame

Publications (2)

Publication Number Publication Date
CN111476169A CN111476169A (en) 2020-07-31
CN111476169B true CN111476169B (en) 2023-11-07

Family

ID=71750083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010270386.3A Active CN111476169B (en) 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame

Country Status (2)

Country Link
CN (1) CN111476169B (en)
WO (1) WO2021203717A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476169B (en) * 2020-04-08 2023-11-07 智慧互通科技股份有限公司 Complex scene road side parking behavior identification method based on video frame
CN113205692B (en) * 2021-04-29 2023-01-24 超级视线科技有限公司 Automatic identification method for road side parking position abnormal change
CN113450575B (en) * 2021-05-31 2022-04-19 超级视线科技有限公司 Management method and device for roadside parking
CN113570857A (en) * 2021-07-19 2021-10-29 超级视线科技有限公司 Roadside parking berth reservation method and system based on high-level video
CN113706919A (en) * 2021-08-20 2021-11-26 云往(上海)智能科技有限公司 Roadside parking space judgment method and intelligent parking system
CN114155619A (en) * 2021-12-09 2022-03-08 济南博观智能科技有限公司 Method, device, medium and system for automatically monitoring parking space
CN114530056B (en) * 2022-02-15 2023-05-02 超级视线科技有限公司 Parking management method and system based on positioning information and image information
CN115116262B (en) * 2022-04-07 2023-07-07 江西中天智能装备股份有限公司 Parking limit monitoring system based on image recognition
CN115050005B (en) * 2022-06-17 2024-04-05 北京精英路通科技有限公司 Target detection method and detection device for high-level video intelligent parking scene

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093194A (en) * 2013-01-07 2013-05-08 信帧电子技术(北京)有限公司 Breach of regulation vehicle detection method and device based on videos
CN105551264A (en) * 2015-12-25 2016-05-04 中国科学院上海高等研究院 Speed detection method based on license plate characteristic matching
CN106558068A (en) * 2016-10-10 2017-04-05 广东技术师范学院 A kind of visual tracking method and system towards intelligent transportation application
CN109389064A (en) * 2018-09-27 2019-02-26 东软睿驰汽车技术(沈阳)有限公司 A kind of vehicle characteristics acquisition methods and device
CN110163107A (en) * 2019-04-22 2019-08-23 智慧互通科技有限公司 A kind of method and device based on video frame identification Roadside Parking behavior
CN110163985A (en) * 2019-06-20 2019-08-23 广西云高智能停车设备有限公司 A kind of curb parking management charge system and charging method based on the identification of vehicle face
CN110287955A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target area determines model training method, device and computer readable storage medium
CN110322702A (en) * 2019-07-08 2019-10-11 中原工学院 A kind of Vehicular intelligent speed-measuring method based on Binocular Stereo Vision System

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9741249B2 (en) * 2011-08-16 2017-08-22 Conduent Business Services, Llc Automated processing method for bus crossing enforcement
CN103778786B (en) * 2013-12-17 2016-04-27 东莞中国科学院云计算产业技术创新与育成中心 A kind of break in traffic rules and regulations detection method based on remarkable vehicle part model
CN105869182B (en) * 2016-06-17 2018-10-09 北京精英智通科技股份有限公司 A kind of parking stall condition detection method and system
CN111476169B (en) * 2020-04-08 2023-11-07 智慧互通科技股份有限公司 Complex scene road side parking behavior identification method based on video frame

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093194A (en) * 2013-01-07 2013-05-08 信帧电子技术(北京)有限公司 Breach of regulation vehicle detection method and device based on videos
CN105551264A (en) * 2015-12-25 2016-05-04 中国科学院上海高等研究院 Speed detection method based on license plate characteristic matching
CN106558068A (en) * 2016-10-10 2017-04-05 广东技术师范学院 A kind of visual tracking method and system towards intelligent transportation application
CN109389064A (en) * 2018-09-27 2019-02-26 东软睿驰汽车技术(沈阳)有限公司 A kind of vehicle characteristics acquisition methods and device
CN110163107A (en) * 2019-04-22 2019-08-23 智慧互通科技有限公司 A kind of method and device based on video frame identification Roadside Parking behavior
CN110287955A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target area determines model training method, device and computer readable storage medium
CN110163985A (en) * 2019-06-20 2019-08-23 广西云高智能停车设备有限公司 A kind of curb parking management charge system and charging method based on the identification of vehicle face
CN110322702A (en) * 2019-07-08 2019-10-11 中原工学院 A kind of Vehicular intelligent speed-measuring method based on Binocular Stereo Vision System

Also Published As

Publication number Publication date
CN111476169A (en) 2020-07-31
WO2021203717A1 (en) 2021-10-14

Similar Documents

Publication Publication Date Title
CN111476169B (en) Complex scene road side parking behavior identification method based on video frame
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN105913685A (en) Video surveillance-based carport recognition and intelligent guide method
CN110163107B (en) Method and device for recognizing roadside parking behavior based on video frames
CN101656023B (en) Management method of indoor car park in video monitor mode
CN111739335B (en) Parking detection method and device based on visual difference
CN110491168B (en) Method and device for detecting vehicle parking state based on wheel landing position
CN111339994B (en) Method and device for judging temporary illegal parking
CN111405196B (en) Vehicle management method and system based on video splicing
CN111340710B (en) Method and system for acquiring vehicle information based on image stitching
CN113066306B (en) Management method and device for roadside parking
WO2023179697A1 (en) Object tracking method and apparatus, device, and storage medium
CN111931673B (en) Method and device for checking vehicle detection information based on vision difference
CN114934467B (en) Parking space barrier control method, parking space barrier system and medium
CN115116012A (en) Method and system for detecting parking state of vehicle parking space based on target detection algorithm
CN114694095A (en) Method, device, equipment and storage medium for determining parking position of vehicle
CN115965934A (en) Parking space detection method and device
CN111723805B (en) Method and related device for identifying foreground region of signal lamp
CN116824549B (en) Target detection method and device based on multi-detection network fusion and vehicle
CN112560814A (en) Method for identifying vehicles entering and exiting parking spaces
CN112785610A (en) Lane line semantic segmentation method fusing low-level features
CN107564031A (en) Urban transportation scene foreground target detection method based on feedback background extracting
CN114758318A (en) Method for detecting parking stall at any angle based on panoramic view
Bachtiar et al. Parking management by means of computer vision
KR20210035360A (en) License Plate Recognition Method and Apparatus for roads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 075000 ten building, phase 1, airport economic and Technological Development Zone, Zhangjiakou, Hebei

Applicant after: Smart intercommunication Technology Co.,Ltd.

Address before: 075000 ten building, phase 1, airport economic and Technological Development Zone, Zhangjiakou, Hebei

Applicant before: INTELLIGENT INTER CONNECTION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant