CN111476169A - Complex scene roadside parking behavior identification method based on video frames - Google Patents

Complex scene roadside parking behavior identification method based on video frames

Info

Publication number
CN111476169A
CN111476169A (application CN202010270386.3A)
Authority
CN
China
Prior art keywords
vehicle
video frames
coordinate information
parking space
parking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010270386.3A
Other languages
Chinese (zh)
Other versions
CN111476169B (en)
Inventor
闫军
杨怀恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intelligent Interconnection Technologies Co ltd
Original Assignee
Intelligent Interconnection Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intelligent Interconnection Technologies Co ltd filed Critical Intelligent Interconnection Technologies Co ltd
Priority to CN202010270386.3A
Publication of CN111476169A
Priority to PCT/CN2020/132029 (WO2021203717A1)
Application granted
Publication of CN111476169B
Active legal status (current)
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a video-frame-based method for recognizing roadside parking behavior in complex scenes. The identification method is based on processing continuous video frames of images; it is computationally efficient, is suitable for urban roadside parking scenes, and is of great significance for improving the recognition accuracy of urban roadside parking behavior and the automation level of parking management systems.

Description

Complex scene roadside parking behavior identification method based on video frames
Technical Field
The invention relates to the field of automatic control of roadside parking, in particular to a complex scene roadside parking behavior identification method based on video frames.
Background
Roadside parking is parking management carried out on the ground alongside the roads a vehicle travels, and it is an important component of city management. With rapid urban economic development and the continuous rise in living standards, the number of urban motor vehicles has grown quickly, and for various historical and practical reasons most cities face a shortage, or severe shortage, of motor-vehicle parking spaces. Roadside parking management has therefore become an important link in urban parking management and has drawn wide attention from governments and the public; at present, roadside parking is managed mainly by means of high-position video.
In recent years, roadside parking systems based on high-position video have begun to be deployed at scale in many cities, owing to their advantages: the equipment is not easily damaged once installed, the captured video is comprehensive and clear, and no on-site operators are required. As an important part of smart-city construction, the automatic roadside parking management system overcomes the drawbacks of prior-art roadside parking, which could rely only on inefficient manual patrol, with high costs for enterprises and poor working conditions.
However, roadside parking based on high-position video requires the video frames to support automatic capture of parking behaviors such as entry and exit, as well as automatic localization and capture of clear license plates, so an efficient processing algorithm is an important part of the whole system. Although some cities have begun large-scale deployment, the accuracy and real-time performance of the processing algorithm remain a key link and determine the success and effectiveness of the whole system. In particular, because of limits imposed by installation conditions and the surrounding environment, complex scenes in the collected video, such as severe vehicle overlap, unclear imaging due to weather, blurry imaging from insufficient illumination of the parking area at night, and occlusion by obstacles such as the leaves of urban greenery, make it harder for the system's video processing algorithm to judge parking behavior accurately. How to use a more intelligent and more robust algorithm to judge vehicle entry and exit behavior accurately and automatically in a difficult, complex application environment has become one of the hard problems for practitioners in the industry.
On this basis, the invention provides a video-frame-based method for identifying roadside parking behavior in complex scenes, so as to overcome the defects of the prior art.
Disclosure of Invention
The invention aims to provide a video-frame-based method for identifying roadside parking behavior in complex scenes which, based on continuous frames of high-position video in complex roadside parking scenes, uses an intelligent image-processing algorithm to automatically identify vehicle entry and exit behavior in the video, providing technical support for automatic roadside parking management in complex scenes and for improving the traffic-control and parking-management efficiency of smart cities.
In order to achieve the above object, the present invention provides a method for identifying roadside parking behavior in a complex scene based on video frames, comprising:
acquiring a plurality of continuous video frames acquired by video equipment;
drawing a parking space area in any one of the plurality of video frames, and determining coordinate information of the parking space area;
detecting a plurality of continuous video frames to obtain coordinate information of the vehicle in the video frames;
comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to obtain the coordinate information of all the vehicles in the parking space;
judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames changes or not;
if the vehicle coordinate information changes between the two adjacent video frames, detecting whether the coordinates of the vehicle's attachments change between the two frames to obtain a detection result;
based on the detection result, the vehicle parking behavior is determined.
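The claimed steps can be sketched as a simple per-frame loop. The following is a minimal illustrative sketch only: all function and variable names are my own, detection is stubbed with precomputed bounding boxes rather than a real deep model, the parking space is simplified to an axis-aligned rectangle, and the attachment-verification step is omitted.

```python
# Minimal sketch of the claimed pipeline (hypothetical names; detections
# are stubbed as precomputed bounding boxes instead of a CNN detector).

def lower_midpoint(box):
    # box = (x0, y0, x1, y1), image coordinates with y growing downward
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, y1)

def in_space(point, space):
    # space is an axis-aligned stand-in for the drawn parking polygon
    sx0, sy0, sx1, sy1 = space
    px, py = point
    return sx0 <= px <= sx1 and sy0 <= py <= sy1

def vehicles_in_space(detections, space):
    # keep only vehicles whose lower-boundary midpoint lies in the space
    return [b for b in detections if in_space(lower_midpoint(b), space)]

def parking_events(frames, space):
    # frames: list of per-frame detection lists; returns entry/exit events
    events = []
    prev = vehicles_in_space(frames[0], space)
    for cur_dets in frames[1:]:
        cur = vehicles_in_space(cur_dets, space)
        if len(cur) > len(prev):
            events.append("entry")
        elif len(cur) < len(prev):
            events.append("exit")
        prev = cur
    return events

space = (0, 0, 100, 50)
frames = [
    [],                       # frame i: space empty
    [(10, 10, 40, 45)],       # frame i+1: a vehicle appears in the space
    [(10, 10, 40, 45)],       # frame i+2: unchanged
    [],                       # frame i+3: vehicle gone
]
print(parking_events(frames, space))  # → ['entry', 'exit']
```

The count-based comparison here stands in for the per-vehicle coordinate comparison of the method; the attachment-consistency check of the later steps would gate each event before it is emitted.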
In a further optimization, drawing a parking space area in any one of the plurality of video frames and determining the coordinate information of the parking space area specifically includes:
selecting a certain vertex of the parking space area as a coordinate starting point, drawing a polygon along the boundary of the parking space to form a closed polygon boundary, and determining the coordinate of each vertex of the polygon.
In a further optimization, before obtaining the coordinate information of all vehicles in the parking space, the method further comprises: if a vehicle's coordinates in the video frame are not within the parking space area, deleting that vehicle's coordinate information.
In a further optimization, obtaining the coordinate information of all vehicles in the parking space specifically includes:
selecting coordinate information of a vehicle boundary frame;
judging whether the midpoint coordinate of the lower boundary of the vehicle boundary frame is in the parking space area or not,
if so, confirming that the vehicle enters the parking space area, and acquiring the vehicle coordinate information.
In a further optimization, if the vehicle coordinate information changes between two adjacent video frames, the changed vehicle region blocks are cut out of the two adjacent video frames.
In a further optimization, the change of the vehicle attachments within the vehicle region block is detected across the two adjacent video frames, and the vehicle parking behavior is judged from the attachment change result.
In a further optimization, before detecting vehicle coordinates in the plurality of continuous video frames, the method further comprises establishing a vehicle training model; establishing the training model specifically includes:
collecting in advance a plurality of vehicle sample pictures of the parking area from video frames, and performing annotation training on them through convolutional-neural-network-based deep learning to obtain the vehicle training model.
In a further optimization, before detecting the vehicle attachment coordinates, the method further comprises establishing an attachment training model; establishing the attachment training model includes:
collecting in advance a plurality of vehicle sample pictures of the parking area from video frames, and performing annotation training on the vehicle attachments through convolutional-neural-network-based deep learning to obtain the vehicle attachment training model.
A device for identifying roadside parking behavior in complex scenes based on video frames comprises:
an acquisition module, configured to acquire a plurality of continuous video frames collected by video equipment;
a drawing module, configured to draw a parking space area in any one of the plurality of video frames and determine the coordinate information of the parking space area;
the first detection module is used for detecting the coordinate information of the vehicle in a plurality of continuous video frames;
the comparison module is used for comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to obtain the coordinate information of all the vehicles in the parking space;
the judging module is used for judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames changes or not;
the second detection module is used for detecting whether the coordinates of the vehicle accessories in the two adjacent video frames change or not if the vehicle coordinate information in the two adjacent video frames changes, so that a detection result is obtained;
and the determining module is used for determining the parking behavior of the vehicle based on the detection result.
In a further optimization, the drawing module is specifically configured to select a vertex of the parking space area as the coordinate starting point, draw a polygon along the boundary of the parking space to form a closed polygon boundary, and determine the coordinates of each vertex of the polygon.
In a further optimization, the device further comprises a deleting module, wherein the deleting module is used for deleting the coordinate information of the vehicle which is not in the parking space area in the video frame.
Further optimizing, the comparison module is specifically used for selecting coordinate information of the vehicle boundary frame;
judging whether the midpoint coordinate of the lower boundary of the vehicle boundary frame is in the parking space area or not,
if so, confirming that the vehicle enters the parking space area, and acquiring the vehicle coordinate information.
In a further optimization, the device further comprises a cutting module; if the vehicle coordinate information changes between two adjacent video frames, the cutting module is used to cut the changed vehicle region blocks out of the two frames.
In a further optimization, the second detection module is specifically configured to detect a change of a vehicle appendage in the vehicle area block in two adjacent video frames.
In a further optimization, the device further comprises a training module, where the training module is used to establish a vehicle training model; establishing the training model specifically includes:
the method comprises the steps of collecting a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training through deep learning of a plurality of vehicle sample pictures based on a convolutional neural network to obtain a vehicle training model.
In a further optimization, the training module is further configured to build an adjunct training model, where building the adjunct training model includes:
the method comprises the steps of collecting a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training on the vehicle attachment through deep learning based on a convolutional neural network to obtain a vehicle attachment training model.
In the invention, a vehicle attachment refers to any accessory attached to the vehicle that can represent its characteristics. In this description the rear-view mirrors and tail lights are used as representatives, but the identification method covers attachments including, but not limited to, these.
In the invention, the video frames are continuous image frames collected at fixed time intervals by a high-position camera installed above the roadside parking area.
The beneficial effects of the invention are as follows: with the method and the device, the roadside parking behavior of a vehicle in complex scenes is judged by comparing the differences between the vehicle and its attachment targets across continuous video frames. Based on the processing of continuous image frames, the approach has a simple principle and high computational efficiency, is suitable for urban roadside parking scenes, and is of great significance for improving the recognition accuracy of urban roadside parking behavior and the automation level of parking management systems.
Drawings
FIG. 1 is a flow chart of a complex scene roadside parking behavior identification method based on video frames according to the present invention;
FIG. 2 is a schematic drawing of a parking space area in a video frame according to the present invention;
FIG. 3 is a schematic illustration of a vehicle change within a parking space of the present invention;
FIG. 4 is a schematic diagram of the complex scene roadside parking behavior recognition device based on video frames according to the present invention;
Fig. 5 is a schematic diagram of another embodiment of the complex scene roadside parking behavior recognition device based on video frames.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Fig. 1 is a flowchart of the video-frame-based roadside parking behavior identification method. Specifically, the method includes:
101. A plurality of continuous video frames collected by video equipment are acquired. Specifically, fig. 2 shows a schematic diagram of image collection by the video equipment. The equipment collecting the video frames may be one or more bullet (box-type) cameras or dome cameras, so as to ensure the accuracy of the collected images, and the coordinate information of all vehicles is obtained from the collected video frames;
102. A parking space area is drawn in any one of the plurality of video frames, and the coordinate information of the parking space area is determined.
Fig. 2 is a schematic drawing of a parking space area in a video frame. Specifically, one vertex of the parking space area is selected as the coordinate starting point A(a0, b0); a polygon is drawn along the parking space boundary, and each further vertex B(a1, b1), C(a2, b2), D(a3, b3) of the polygon is recorded; finally a closed parking-space polygon boundary ABCD is formed, which is stored and loaded when the system starts.
103. The plurality of continuous video frames are detected to obtain the coordinate information of the vehicles in the video frames. Specifically, a neural-network-based deep learning method is adopted: the loaded deep model detects the vehicles in the video frames and obtains their coordinate information;
104. comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to obtain the coordinate information of all the vehicles in the parking space;
Acquiring the vehicle coordinate information in the parking space specifically includes: selecting the coordinate information of the vehicle bounding box and judging whether the midpoint coordinate of the lower boundary of the vehicle bounding box lies in the parking space area;
if so, confirming that the vehicle has entered the parking space area and acquiring the vehicle coordinate information. Specifically, the detected vehicle bounding box is rectangular, and whether the midpoint of the lower boundary of the vehicle's bounding box lies in the parking space area is judged from the coordinate information of the bounding box and the parking space coordinates; if the midpoint is in the parking space area, it is determined that a vehicle has driven into the parking space.
Specifically, the four vertices of the quadrilateral are A(x0, y0), B(x1, y0), C(x0, y1), D(x1, y1), with x0 < x1 and y0 < y1. The midpoint of the lower boundary of the vehicle bounding box is then O((x0 + x1)/2, y1). If O ∈ P_ABCD, where P_ABCD denotes the polygon with vertices A, B, C, D, the vehicle is located in the parking space; otherwise, the vehicle is not in the parking space.
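The membership test O ∈ P_ABCD can be implemented with a standard ray-casting point-in-polygon routine. A minimal sketch follows; the function names and the example polygon are my own, and the routine handles general (possibly non-rectangular) drawn parking-space polygons.

```python
def point_in_polygon(pt, poly):
    # Standard ray-casting test: cast a horizontal ray from pt and count
    # crossings with polygon edges; an odd count means the point is inside.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def lower_midpoint(bbox):
    # bbox corners as in the text: A(x0,y0), B(x1,y0), C(x0,y1), D(x1,y1)
    x0, y0, x1, y1 = bbox
    return ((x0 + x1) / 2.0, y1)

# Parking-space polygon ABCD (vertices in drawing order) and a vehicle box
space = [(0, 0), (100, 0), (110, 60), (-10, 60)]
vehicle = (20, 10, 60, 40)
print(point_in_polygon(lower_midpoint(vehicle), space))  # → True
```

For the axis-aligned rectangle of the text, this reduces to the simple comparison x0 ≤ (x0 + x1)/2 ≤ x1 and y ≤ y1, but the ray-casting form also covers the polygon boundaries drawn in step 102.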
And if the coordinates of a certain vehicle in the video frame are not in the parking space area, deleting the coordinate information of the vehicle.
105. Judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames changes or not;
As shown in fig. 3, a schematic diagram of a vehicle changing within a parking space, two adjacent video frames are selected as the i-th and (i+1)-th frames, and the vehicles detected in the parking space in the two frames are compared. If the vehicle coordinates are unchanged between the two frames, detection continues with the vehicles in the parking space in the (i+2)-th frame, comparing the vehicle changes between the (i+1)-th and (i+2)-th frames, and so on, until a vehicle change between two adjacent frames is detected.
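This scan over successive frame pairs can be written as a simple loop that advances until the parking space contents change. A sketch with hypothetical names follows; the per-frame vehicle sets here stand in for the detector output restricted to the parking space.

```python
def first_change(per_frame_vehicles):
    # per_frame_vehicles: list of frozensets of vehicle boxes inside the
    # parking space, one per video frame. Returns the index i of the first
    # frame pair (i, i+1) whose contents differ, or None if nothing changes.
    for i in range(len(per_frame_vehicles) - 1):
        if per_frame_vehicles[i] != per_frame_vehicles[i + 1]:
            return i
    return None

frames = [
    frozenset({(10, 10, 40, 45)}),  # frame i
    frozenset({(10, 10, 40, 45)}),  # frame i+1: unchanged, keep scanning
    frozenset(),                    # frame i+2: vehicle disappeared
]
print(first_change(frames))  # → 1 (change between frames 1 and 2)
```

A production system would compare per-vehicle coordinates with some tolerance or box-matching rather than exact set equality; exact equality is used here only to keep the sketch short.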
106. Based on the detection result, the vehicle parking behavior is determined.
Specifically, if the vehicles in the parking space change between the i-th and (i+1)-th frames of the continuous video, the region of each changed vehicle is cut from the two current frames, based on the coordinate information acquired in the vehicle information to be detected, to obtain the changed vehicle's region blocks in the current video frames.
For example, suppose that between the i-th and (i+1)-th frames of the continuous video the changed vehicle is C1: it is present in the i-th frame and disappears in the (i+1)-th frame. Based on the pixel coordinate information in the acquired C1 vehicle information, the vehicle region of C1 is cut from the i-th and (i+1)-th frames of the current continuous video, yielding C1's vehicle region blocks in the two frames. For example, assume the pixel-coordinate vertices of the C1 detection box in the video frame image are a(x0, y0), b(x1, y0), c(x0, y1), d(x1, y1), as shown in fig. 3, where x0 < x1 and y0 < y1. Then from each of the i-th and (i+1)-th frames the region from abscissa x0 to x1 and ordinate y0 to y1 is cut out as image blocks B1 and B2 of the C1 vehicle, with width x1 - x0 + 1 pixels and height y1 - y0 + 1 pixels. The vehicle attachments are preferably selected within the image blocks; in the above example, the attachments in B1 and B2, such as tail lights and rear-view mirrors, are detected. For example, the tail-light information detected in B1 is recorded as W1 and the rear-view-mirror information as H1; the tail-light information detected in B2 is recorded as W2 and the rear-view-mirror information as H2.
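The crop of the changed vehicle's region block, with the inclusive width x1 - x0 + 1 and height y1 - y0 + 1 described above, can be sketched on a plain nested-list image; this is illustrative only, and a real system would crop the decoded video frame (e.g. a pixel array) instead.

```python
def crop(image, x0, y0, x1, y1):
    # image: rows of pixel values; returns the inclusive sub-block whose
    # width is x1 - x0 + 1 and height is y1 - y0 + 1, as in the text.
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]

# A 6x8 synthetic "frame" whose pixel value encodes (row, col)
frame = [[(r, c) for c in range(8)] for r in range(6)]
block = crop(frame, 2, 1, 5, 4)   # x0=2, y0=1, x1=5, y1=4
print(len(block), len(block[0]))  # → 4 4 (height 4, width 4)
```

The same crop applied to both adjacent frames yields the blocks B1 and B2 in which the attachments are then detected.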
Furthermore, the roadside parking behavior of the vehicle is judged by comparing the differences between the vehicle and its attachment targets across adjacent video frames. The attachments of the changed vehicle are compared between the earlier and later frames, and the departure behavior of the vehicle is judged from the disappearance of the vehicle and its attachments from the parking space area in the video frames.
If the attachments change with the same trend as the vehicle, the vehicle is judged to have left the space; if the attachment trend differs from the vehicle trend, video frames are collected and examined again until the parking behavior is determined.
In the i-th and (i+1)-th frames of the example above, if the C1 vehicle exists in the i-th frame and disappears in the (i+1)-th frame, and the tail-light information detected in the vehicle region blocks B1 and B2 of the adjacent frames likewise goes from present (W1) to absent (W2), with the rear-view-mirror information behaving similarly, then the attachment change trend is considered to match the vehicle change trend, and the departure behavior of the C1 vehicle is confirmed. If the change in the vehicle attachments does not match the change trend of the vehicle, the C1 vehicle may have been falsely detected or missed in some video frames, and detection must be repeated until the parking behavior is confirmed.
In the above embodiment, specifically, adjacent video frames are detected continuously. If there is no vehicle in the parking space area in the i-th frame and a vehicle appears there in the (i+1)-th frame, an attachment of the vehicle, such as a rear-view mirror or tail light, is further selected for detection; if the attachment likewise goes from absent to present between the two frames, consistent with the vehicle's change trend, the vehicle is confirmed to have entry behavior;
Similarly, adjacent video frames are detected continuously. If a vehicle is in the parking space area in the i-th frame and has disappeared in the (i+1)-th frame, an attachment of the vehicle, such as a rear-view mirror or tail light, is further selected for detection; if the attachment goes from present to absent between the two frames, consistent with the vehicle's change trend, the vehicle is confirmed to have departure behavior. The same vehicle training model and the same algorithm are used for judging both entry and departure behavior.
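The consistency rule in the two paragraphs above, under which entry or exit is confirmed only when the attachments' presence changes in the same direction as the vehicle's, can be sketched as follows. Names are hypothetical, and boolean presence flags stand in for the detector's per-frame output.

```python
def judge_behavior(vehicle_i, vehicle_i1, attachments_i, attachments_i1):
    # Presence flags for the vehicle and its attachments (e.g. tail light,
    # rear-view mirror) in frames i and i+1. Returns "entry", "exit", or
    # "redetect" when the attachment and vehicle trends disagree.
    vehicle_trend = (vehicle_i, vehicle_i1)
    attach_trend = (attachments_i, attachments_i1)
    if attach_trend != vehicle_trend:
        return "redetect"          # possible false or missed detection
    if vehicle_trend == (False, True):
        return "entry"             # absent -> present
    if vehicle_trend == (True, False):
        return "exit"              # present -> absent
    return "no_change"

print(judge_behavior(True, False, True, False))   # → exit
print(judge_behavior(False, True, False, True))   # → entry
print(judge_behavior(True, False, True, True))    # → redetect
```

In practice one would aggregate several attachment classes (tail lights, mirrors) into the attachment flags rather than a single boolean; the single flag keeps the decision rule visible.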
In the invention, before detecting the vehicle coordinates, a vehicle training model is established, wherein the establishment of the training model specifically comprises the following steps:
and carrying out annotation training on a plurality of vehicle sample pictures in a pre-collected parking area in a video frame through deep learning of a convolutional neural network to obtain a vehicle training model.
Before the detection of the coordinates of the vehicle accessory, establishing an accessory training model, wherein the establishing of the accessory training model comprises the following steps:
and performing labeling training on a plurality of vehicle sample pictures in a pre-collected parking area in a video frame through deep learning of a convolutional neural network based on a plurality of accessory samples of the vehicle to obtain a vehicle accessory training model.
The invention also discloses a recognition device for executing the video-frame-based complex scene roadside parking behavior identification method. Specifically, as shown in fig. 4, a schematic diagram of the device, the device comprises:
an obtaining module 1001 configured to obtain a plurality of consecutive video frames acquired by a video device;
a drawing module 1002, configured to draw a parking space area in any video frame of the multiple video frames, and determine coordinate information of the parking space area;
a first detecting module 1003, configured to detect coordinate information of a vehicle in a plurality of consecutive video frames;
the comparison module 1004 is used for comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to acquire the coordinate information of all the vehicles in the parking space;
the judging module 1007 is used for judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames changes;
a second detecting module 1006, configured to detect whether coordinates of the vehicle appendage in the two adjacent video frames change if the vehicle coordinate information in the two adjacent video frames changes, so as to obtain a detection result;
a determining module 1005, configured to determine a parking behavior of the vehicle based on the detection result.
In an optimized implementation manner, the drawing module is specifically configured to select a certain vertex of the parking space area as a coordinate starting point, draw a polygon along a boundary of the parking space, form a closed polygon boundary, and determine a coordinate of each vertex of the polygon.
As shown in fig. 5, a flow chart of another embodiment of the apparatus, in an optimized implementation, the apparatus further includes a deletion module 1008,
and the deleting module is used for deleting the coordinate information of vehicles in the video frame that are not in the parking space area.
In an optimized implementation mode, the comparison module is specifically configured to select coordinate information of a vehicle bounding box;
judging whether the midpoint coordinate of the lower boundary of the vehicle boundary frame is in the parking space area or not,
if so, confirming that the vehicle enters the parking space area, and acquiring the vehicle coordinate information.
In an optimized embodiment, the apparatus further includes a cutting module 1009, and if the vehicle coordinate information in the two adjacent video frames changes, the cutting module is configured to cut the changed vehicle region block in the two adjacent video frames.
In an optimized embodiment, the second detection module is specifically configured to detect a change in vehicle attachment in the vehicle area block in two adjacent video frames.
In an optimized implementation manner, the apparatus further includes a training module, where the training module is used to establish a vehicle training model; establishing the training model specifically includes:
the method comprises the steps of collecting a plurality of vehicle sample pictures in a parking area in a video frame in advance, and performing labeling training through deep learning of a plurality of vehicle sample pictures based on a convolutional neural network to obtain a vehicle training model.
In a preferred embodiment, the training module is further configured to establish an accessory training model, and establishing the accessory training model includes:
collecting in advance a plurality of vehicle sample pictures of the parking area in video frames, labeling the vehicle accessories, and performing deep learning training based on a convolutional neural network to obtain the vehicle accessory training model.
The flow of the video-frame-based method for recognizing roadside parking behavior in complex scenes of the present invention is further described with reference to the above embodiments.
In the invention, the adjacent video frames collected as samples are continuous frames at an interval of 5 s, although the interval is not limited in actual collection; this sampling makes it convenient to detect changes of the vehicles in the parking spaces between frames.
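The 5-second sampling described above can be sketched as follows; the function name and the frame-rate values are illustrative assumptions, not taken from the patent.

```python
def sample_frame_indices(total_frames, fps, interval_s=5.0):
    """Indices of the frames to compare, spaced interval_s seconds apart.
    Adjacent sampled frames are then checked pairwise for vehicle changes."""
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))
```

For example, a 4 fps stream sampled every 5 s yields one frame per 20 captured frames; the interval can be tuned freely in actual collection.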
Before parking behavior is judged, a plurality of vehicle samples in the parking environment need to be collected in advance through an image acquisition device, the vehicles and their accessories are labeled, and the corresponding models are trained; the trained models are loaded when the system starts, which improves the accuracy of vehicle detection.
In vehicle detection, the state of the vehicle in the video frames is detected mainly through a difference comparison algorithm, which judges whether the position of the vehicle changes between adjacent video frames. If the position does not change, detection continues until a change occurs. The same detection mode is used for the vehicle accessories, which ensures the accuracy of judging the actual parking event.
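The difference comparison over the cropped vehicle region block can be sketched as a mean absolute grayscale difference. This is a simplified illustration, not the patent's algorithm; the threshold value is a hypothetical tuning parameter.

```python
def crop(frame, bbox):
    """frame: 2-D list of grayscale pixel values;
    bbox = (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in frame[y0:y1]]

def region_changed(block_a, block_b, threshold=10.0):
    """Compare two same-sized region blocks by mean absolute grayscale
    difference; report a change when the difference exceeds the threshold."""
    total = 0
    count = 0
    for row_a, row_b in zip(block_a, block_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return (total / count) > threshold
```

In practice the same comparison would run on the cropped vehicle block and again on the accessory regions (e.g. doors) within it, so that a parking event is only confirmed when both agree.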
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure.
Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
Those of skill in the art will also appreciate the various illustrative logical blocks set forth in the embodiments of the present invention.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be stored in RAM memory, flash memory, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described in connection with the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (16)

1. A method for recognizing roadside parking behavior in complex scenes based on video frames, characterized in that
the method comprises the following steps:
acquiring a plurality of continuous video frames acquired by video equipment;
drawing a parking space area in any one of the plurality of video frames, and determining coordinate information of the parking space area;
detecting a plurality of continuous video frames to obtain coordinate information of the vehicle in the video frames;
comparing the coordinate information of the vehicles in the video frame with the drawn coordinate information of the parking space area one by one to obtain the coordinate information of all the vehicles in the parking space;
judging whether the coordinate information of any vehicle in the parking space in two adjacent video frames changes or not;
if the vehicle coordinate information in the two adjacent video frames changes, detecting whether the coordinates of the vehicle accessories in the two video frames change, so as to obtain a detection result;
based on the detection result, the vehicle parking behavior is determined.
2. The method for recognizing the roadside parking behavior of the complex scene based on the video frame as claimed in claim 1, wherein the step of drawing a parking space area in any one of the plurality of video frames and determining the coordinate information of the parking space area specifically comprises:
selecting a vertex of the parking space area as the coordinate starting point, drawing a polygon along the boundary of the parking space to form a closed polygon boundary, and determining the coordinates of each vertex of the polygon.
3. The method for recognizing roadside parking behavior in complex scenes based on video frames as claimed in claim 1, further comprising: before acquiring the coordinate information of all vehicles located in the parking space, deleting the coordinate information of any vehicle in the video frame whose coordinates are not within the parking space area.
4. The method for recognizing roadside parking behavior in complex scenes based on video frames as claimed in claim 3, wherein acquiring the coordinate information of all vehicles located in the parking space specifically comprises:
selecting coordinate information of a vehicle boundary frame;
judging whether the midpoint coordinate of the lower boundary of the vehicle boundary frame is in the parking space area or not,
if so, confirming that the vehicle enters the parking space area, and acquiring the vehicle coordinate information.
5. The method as claimed in claim 1, wherein if the coordinate information of the vehicle in the two adjacent video frames changes, the changed vehicle region blocks in the two adjacent video frames are cropped.
6. The method for recognizing roadside parking behavior in complex scenes based on video frames as claimed in claim 5,
wherein changes of the vehicle accessories in the vehicle region block in two adjacent video frames are detected, and the vehicle parking behavior is judged according to the change result of the vehicle accessories.
7. The video frame-based complex scene roadside parking behavior recognition method as claimed in any one of claims 1 to 5,
wherein before detecting the vehicle coordinates in the plurality of continuous video frames, the method further comprises establishing a vehicle training model, which specifically comprises:
collecting in advance a plurality of vehicle sample pictures of the parking area in video frames, labeling the vehicles, and performing deep learning training based on a convolutional neural network to obtain the vehicle training model.
8. The video frame-based complex scene roadside parking behavior recognition method as claimed in any one of claims 1 to 5,
wherein before detecting the coordinates of the vehicle accessories, the method further comprises establishing an accessory training model, which comprises:
collecting in advance a plurality of vehicle sample pictures of the parking area in video frames, labeling the vehicle accessories, and performing deep learning training based on a convolutional neural network to obtain the vehicle accessory training model.
9. An apparatus for recognizing roadside parking behavior in complex scenes based on video frames, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a plurality of continuous video frames collected by a video device;
a drawing module, configured to draw a parking space area in any one of the plurality of video frames and determine coordinate information of the parking space area;
a first detection module, configured to detect the plurality of continuous video frames to obtain coordinate information of the vehicles in the video frames;
a comparison module, configured to compare the coordinate information of the vehicles in the video frames with the drawn coordinate information of the parking space area one by one to obtain the coordinate information of all vehicles located in the parking space;
a judging module, configured to judge whether the coordinate information of any vehicle in the parking space changes between two adjacent video frames;
a second detection module, configured to, if the vehicle coordinate information in the two adjacent video frames changes, detect whether the coordinates of the vehicle accessories in the two video frames change, so as to obtain a detection result; and
a determining module, configured to determine the parking behavior of the vehicle based on the detection result.
10. The apparatus for recognizing roadside parking behavior in complex scenes based on video frames as claimed in claim 9, wherein:
the drawing module is specifically configured to select a vertex of the parking space area as the coordinate starting point, draw a polygon along the boundary of the parking space to form a closed polygon boundary, and determine the coordinates of each vertex of the polygon.
11. The apparatus for recognizing roadside parking behavior in complex scenes based on video frames according to claim 10, wherein the apparatus further comprises a deletion module,
and the deletion module is configured to delete the coordinate information of any vehicle in the video frame that is not within the parking space area.
12. The apparatus for recognizing roadside parking behavior in complex scenes based on video frames according to claim 11, wherein the comparison module is specifically configured to: select the coordinate information of a vehicle bounding box;
judge whether the midpoint of the lower boundary of the vehicle bounding box lies within the parking space area;
and if so, confirm that the vehicle has entered the parking space area and acquire the vehicle coordinate information.
13. The apparatus as claimed in claim 12, wherein the apparatus further includes a cropping module, and if the coordinate information of the vehicle in the two adjacent video frames changes, the cropping module is configured to crop the changed vehicle region blocks from the two adjacent video frames.
14. The apparatus for recognizing roadside parking behavior in complex scenes based on video frames as claimed in claim 13,
wherein the second detection module is specifically configured to detect changes of the vehicle accessories in the vehicle region block in two adjacent video frames.
15. The apparatus for recognizing roadside parking behavior in complex scenes based on video frames according to any one of claims 9 to 14,
wherein the apparatus further comprises a training module configured to establish a vehicle training model, and establishing the training model specifically comprises:
collecting in advance a plurality of vehicle sample pictures of the parking area in video frames, labeling the vehicles, and performing deep learning training based on a convolutional neural network to obtain the vehicle training model.
16. The apparatus for recognizing roadside parking behavior in complex scenes based on video frames according to any one of claims 9 to 14,
wherein the training module is further configured to establish an accessory training model, and establishing the accessory training model comprises:
collecting in advance a plurality of vehicle sample pictures of the parking area in video frames, labeling the vehicle accessories, and performing deep learning training based on a convolutional neural network to obtain the vehicle accessory training model.
CN202010270386.3A 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame Active CN111476169B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010270386.3A CN111476169B (en) 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame
PCT/CN2020/132029 WO2021203717A1 (en) 2020-04-08 2020-11-27 Method for recognizing roadside parking behavior in complex scenario on basis of video frames

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010270386.3A CN111476169B (en) 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame

Publications (2)

Publication Number Publication Date
CN111476169A true CN111476169A (en) 2020-07-31
CN111476169B CN111476169B (en) 2023-11-07

Family

ID=71750083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010270386.3A Active CN111476169B (en) 2020-04-08 2020-04-08 Complex scene road side parking behavior identification method based on video frame

Country Status (2)

Country Link
CN (1) CN111476169B (en)
WO (1) WO2021203717A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205692A (en) * 2021-04-29 2021-08-03 超级视线科技有限公司 Automatic identification method for road side parking position abnormal change
CN113450575A (en) * 2021-05-31 2021-09-28 超级视线科技有限公司 Management method and device for roadside parking
WO2021203717A1 (en) * 2020-04-08 2021-10-14 智慧互通科技有限公司 Method for recognizing roadside parking behavior in complex scenario on basis of video frames
CN113570857A (en) * 2021-07-19 2021-10-29 超级视线科技有限公司 Roadside parking berth reservation method and system based on high-level video
CN113706919A (en) * 2021-08-20 2021-11-26 云往(上海)智能科技有限公司 Roadside parking space judgment method and intelligent parking system
CN114155619A (en) * 2021-12-09 2022-03-08 济南博观智能科技有限公司 Method, device, medium and system for automatically monitoring parking space
CN115050005A (en) * 2022-06-17 2022-09-13 北京精英路通科技有限公司 Target detection method and detection device for high-level video intelligent parking scene

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114530056B (en) * 2022-02-15 2023-05-02 超级视线科技有限公司 Parking management method and system based on positioning information and image information
CN115116262B (en) * 2022-04-07 2023-07-07 江西中天智能装备股份有限公司 Parking limit monitoring system based on image recognition

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130044219A1 (en) * 2011-08-16 2013-02-21 Xerox Corporation Automated processing method for bus crossing enforcement
CN103093194A (en) * 2013-01-07 2013-05-08 信帧电子技术(北京)有限公司 Breach of regulation vehicle detection method and device based on videos
US20160034778A1 (en) * 2013-12-17 2016-02-04 Cloud Computing Center Chinese Academy Of Sciences Method for detecting traffic violation
CN105551264A (en) * 2015-12-25 2016-05-04 中国科学院上海高等研究院 Speed detection method based on license plate characteristic matching
CN106558068A (en) * 2016-10-10 2017-04-05 广东技术师范学院 A kind of visual tracking method and system towards intelligent transportation application
CN109389064A (en) * 2018-09-27 2019-02-26 东软睿驰汽车技术(沈阳)有限公司 A kind of vehicle characteristics acquisition methods and device
CN110163107A (en) * 2019-04-22 2019-08-23 智慧互通科技有限公司 A kind of method and device based on video frame identification Roadside Parking behavior
CN110163985A (en) * 2019-06-20 2019-08-23 广西云高智能停车设备有限公司 A kind of curb parking management charge system and charging method based on the identification of vehicle face
CN110287955A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target area determines model training method, device and computer readable storage medium
CN110322702A (en) * 2019-07-08 2019-10-11 中原工学院 A kind of Vehicular intelligent speed-measuring method based on Binocular Stereo Vision System

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869182B (en) * 2016-06-17 2018-10-09 北京精英智通科技股份有限公司 A kind of parking stall condition detection method and system
CN111476169B (en) * 2020-04-08 2023-11-07 智慧互通科技股份有限公司 Complex scene road side parking behavior identification method based on video frame


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021203717A1 (en) * 2020-04-08 2021-10-14 智慧互通科技有限公司 Method for recognizing roadside parking behavior in complex scenario on basis of video frames
CN113205692A (en) * 2021-04-29 2021-08-03 超级视线科技有限公司 Automatic identification method for road side parking position abnormal change
CN113450575A (en) * 2021-05-31 2021-09-28 超级视线科技有限公司 Management method and device for roadside parking
CN113570857A (en) * 2021-07-19 2021-10-29 超级视线科技有限公司 Roadside parking berth reservation method and system based on high-level video
CN113706919A (en) * 2021-08-20 2021-11-26 云往(上海)智能科技有限公司 Roadside parking space judgment method and intelligent parking system
CN114155619A (en) * 2021-12-09 2022-03-08 济南博观智能科技有限公司 Method, device, medium and system for automatically monitoring parking space
CN115050005A (en) * 2022-06-17 2022-09-13 北京精英路通科技有限公司 Target detection method and detection device for high-level video intelligent parking scene
CN115050005B (en) * 2022-06-17 2024-04-05 北京精英路通科技有限公司 Target detection method and detection device for high-level video intelligent parking scene

Also Published As

Publication number Publication date
CN111476169B (en) 2023-11-07
WO2021203717A1 (en) 2021-10-14

Similar Documents

Publication Publication Date Title
CN111476169B (en) Complex scene road side parking behavior identification method based on video frame
CN105913685A (en) Video surveillance-based carport recognition and intelligent guide method
CN110163107B (en) Method and device for recognizing roadside parking behavior based on video frames
CN111739335B (en) Parking detection method and device based on visual difference
CN107609491A (en) A kind of vehicle peccancy parking detection method based on convolutional neural networks
CN111339994B (en) Method and device for judging temporary illegal parking
CN104933409A (en) Parking space identification method based on point and line features of panoramic image
CN102142194B (en) Video detection method and system
CN112258668A (en) Method for detecting roadside vehicle parking behavior based on high-position camera
CN111292353B (en) Parking state change identification method
CN115116012B (en) Method and system for detecting parking state of vehicle parking space based on target detection algorithm
CN113205692B (en) Automatic identification method for road side parking position abnormal change
CN114781479A (en) Traffic incident detection method and device
WO2023179416A1 (en) Method and apparatus for determining entry and exit of vehicle into and out of parking space, device, and storage medium
CN112861773A (en) Multi-level-based berthing state detection method and system
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN113380069A (en) Street lamp-based roadside parking system and method thereof
CN112560814A (en) Method for identifying vehicles entering and exiting parking spaces
CN109671291B (en) Panoramic monitoring method based on intelligent sensor
Song et al. Vision-based parking space detection: A mask R-CNN approach
CN107564031A (en) Urban transportation scene foreground target detection method based on feedback background extracting
CN111105619A (en) Method and device for judging road side reverse parking
CN113901961B (en) Parking space detection method, device, equipment and storage medium
CN115131986A (en) Intelligent management method and system for closed parking lot
CN114758318A (en) Method for detecting parking stall at any angle based on panoramic view

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 075000 ten building, phase 1, airport economic and Technological Development Zone, Zhangjiakou, Hebei

Applicant after: Smart intercommunication Technology Co.,Ltd.

Address before: 075000 ten building, phase 1, airport economic and Technological Development Zone, Zhangjiakou, Hebei

Applicant before: INTELLIGENT INTER CONNECTION TECHNOLOGY Co.,Ltd.

GR01 Patent grant