CN109410628B - Method and system for detecting state of in-road berth and data processing device thereof - Google Patents

Publication number
CN109410628B
CN109410628B
Authority
CN
China
Prior art keywords
detection result
state
parking space
information
vehicle
Prior art date
Legal status
Active
Application number
CN201710697651.4A
Other languages
Chinese (zh)
Other versions
CN109410628A (en)
Inventor
Hu Yong (胡勇)
Yang Geng (杨耿)
Chen Xiaodan (陈晓丹)
Wu Jikui (吴继葵)
Zhuang Tianhai (庄天海)
He Xiaochuan (何小川)
He Haibo (何海波)
Current Assignee
Shenzhen Genvict Technology Co Ltd
Original Assignee
Shenzhen Genvict Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Genvict Technology Co Ltd
Priority to CN201710697651.4A
Publication of CN109410628A
Application granted
Publication of CN109410628B
Status: Active

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/14: Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G1/145: Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
    • G08G1/147: Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas, where the parking area is within an open public zone, e.g. city centre
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/017: Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175: Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method and a system for detecting the state of in-road berths, and a data processing device thereof. The method for detecting the state of an in-road berth comprises the following steps: acquiring first detection result information in real time from the vehicle position coordinates received from a roadside base station; acquiring second detection result information in real time from the berth image information received from a camera; and fusing the first detection result information and the second detection result information to determine the state of each berth. Implementing the technical scheme of the invention improves the recognition rate of the berth state. Moreover, the method can provide a basis for violation evidence collection and law enforcement without manual on-site attendance, yielding social and economic benefits.

Description

Method and system for detecting state of in-road berth and data processing device thereof
Technical Field
The present invention relates to the field of Intelligent Transportation Systems (ITS), and in particular to a method and a system for detecting the state of an in-road berth, and a data processing device thereof.
Background
Traditional in-road parking detection technologies, whether parking meters, handheld POS machines, geomagnetic detection or radio-frequency detection, share a common shortcoming: they cannot automatically store evidence and therefore require manual management. Each attendant can manage only a few parking spaces, so labor costs are high, cash leakage is large, and missed inspections are frequent. Although video detection has been promoted, video behavior recognition is strongly affected by factors such as occlusion, dynamic backgrounds, viewing angle and illumination changes, so the recognition rate of single-camera video vehicle detection still needs improvement.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method, a system and a data processing device for detecting the state of an in-road berth that can automatically collect evidence in real time and achieve a high recognition rate of the berth state, overcoming the above-mentioned defects of the prior art, namely the inability to collect evidence and the low recognition rate.
The technical scheme adopted by the invention to solve this technical problem is as follows: a method for detecting the state of an in-road berth is constructed, comprising the following steps:
acquiring first detection result information in real time according to the vehicle position coordinates received from the roadside base station;
acquiring second detection result information in real time according to the berth image information received from the camera;
and performing fusion processing on the first detection result information and the second detection result information to determine the state of each berth.
Preferably, the step of fusing the first detection result information and the second detection result information to determine the state of each berth includes:
S31, if it is determined from the first detection result information that a first vehicle enters a first parking space, the state of the first parking space is transferred from an idle state to a tagged-vehicle entry state;
S32, if it is determined from the current first detection result information or the current second detection result information that the first vehicle exits the first parking space, the state of the first parking space is transferred to the idle state, and a parking record is generated; or,
if it is determined from the current first detection result information that the first vehicle drives into the first parking space again, the state of the first parking space is kept unchanged; or,
if it is determined from the current first detection result information that a second vehicle drives into the first parking space, the state of the first parking space is kept unchanged, and the driving-out time of the first vehicle is generated; or,
if it is determined from the current second detection result information, within a first time, that a vehicle enters the first parking space, the state of the first parking space is kept unchanged, and the current first detection result information is associated with the current second detection result information; or,
if it is determined from the second detection result information that a vehicle enters the first parking space, and the interval between the entry time in the second detection result information and the entry time in the first detection result information is greater than the first time, the state of the first parking space is kept unchanged.
Preferably, the step of fusing the first detection result information and the second detection result information to determine the state of each berth includes:
S33, if it is determined from the second detection result information that the first vehicle drives into the first parking space, the state of the first parking space is transferred from the idle state to an intermediate state;
S34, if it is determined from the current second detection result information, within a second time, that the first vehicle has driven out of the first parking space, the state of the first parking space is transferred to the idle state, and a parking record is generated; or,
if it is determined from the current first detection result information, within the second time, that the first vehicle drives into the first parking space, the state of the first parking space is transferred to the tagged-vehicle entry state; or,
if it is determined from the current first detection result information that no vehicle drives into the first parking space within the second time, the state of the first parking space is transferred to a non-tagged-vehicle entry state.
Preferably, when the state of the first berth is the non-tagged-vehicle entry state, the method further includes:
S35, if it is determined from the current second detection result information, within a third time, that the first vehicle exits the first parking space, the state of the first parking space is transferred to the idle state, and a parking record is generated; or,
if it is determined from the current second detection result information, within the third time, that a second vehicle drives into the first parking space, the state of the first parking space is kept unchanged; or,
if it is determined from the current first detection result information, within the third time, that a vehicle enters the first parking space, the state of the first parking space is transferred to the tagged-vehicle entry state.
Preferably, the step of acquiring the second detection result information in real time includes:
acquiring the second detection result information in real time by performing at least one of the following processes on the berth image information:
filter matching processing; and/or
pattern-recognition-based processing; and/or
deep-learning-based processing.
Preferably, if the berth image information includes first image information from a first camera and second image information from a second camera, the step of performing the filter matching processing includes:
S201, if a third detection result is obtained from the first image information and the third detection result changes, obtaining a fourth detection result from the second image information within a fourth time;
S202, if the fourth detection result is the same as the third detection result, taking the third detection result as the second detection result;
S203, if the fourth detection result is different from the third detection result, determining the second detection result from the third detection result and the fourth detection result.
Preferably, step S203 includes:
S2031, judging whether the current fourth detection result has changed; if yes, executing step S2032; if not, executing step S2033;
S2032, taking the fourth detection result as the second detection result, and then ending;
S2033, judging whether the third detection result obtained from the current first image information changes again within a fifth time; if yes, executing step S2034; if not, executing step S2036;
S2034, judging whether the current third detection result is the same as the fourth detection result; if yes, executing step S2035;
S2035, taking the fourth detection result as the second detection result, and then ending;
S2036, judging whether the number of times the third detection result has remained unchanged exceeds a preset number; if yes, executing step S2037; if not, executing step S2033;
S2037, counting the number of changes of the third detection result within a sixth time and obtaining the average change time, then judging whether the number of changes is greater than a first preset value and whether the average change time is smaller than a second preset value; if yes, executing step S2038; if not, executing step S2039, wherein the sixth time is greater than the fourth time and the fifth time;
S2038, taking the fourth detection result as the second detection result, and then ending;
S2039, taking the third detection result as the second detection result.
The present invention also constructs a data processing device of a system for detecting the state of an in-road berth, comprising:
a first acquisition module for acquiring first detection result information in real time from the vehicle position coordinates received from the roadside base station;
a second acquisition module for acquiring second detection result information in real time from the berth image information received from the camera;
and a fusion processing module for fusing the first detection result information and the second detection result information to determine the state of each berth.
The invention also constructs a data processing device of the in-road berth state detection system, comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to execute the computer program stored in the memory to implement the above method.
The invention also constructs a system for detecting the state of an in-road berth, comprising:
a data acquisition device arranged in an in-road berthing area, comprising a roadside base station and a camera, wherein the roadside base station communicates with an electronic tag on a vehicle to acquire the position coordinates of the vehicle, the camera captures berth image information, and the in-road berthing area lies within both the coverage of the roadside base station and the coverage of the camera;
a network transmission device for transmitting the acquired vehicle position coordinates and berth image information;
the data processing device described above;
and a business application device for generating corresponding metered-billing information and notification reminders according to the state of each berth.
By implementing the technical scheme of the invention, radio-frequency detection information (vehicle position coordinates) is received from the roadside base station and video detection information (berth image information) is received from the camera; the two streams are processed separately to obtain two detection results, which are then fused, improving the recognition rate of the berth state. Moreover, the video detection information serves as evidence of parking violations and can support violation evidence collection and law enforcement without manual on-site attendance, yielding social and economic benefits.
Drawings
In order to illustrate the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; other drawings may be derived from them by a person skilled in the art without inventive effort. In the drawings:
FIG. 1 is a logical structure diagram of a first embodiment of the system for detecting the state of an in-road berth according to the present invention;
FIG. 2 is a flowchart of a first embodiment of the method for detecting the state of an in-road berth according to the present invention;
FIG. 3 is a schematic diagram of the berth state transitions of the present invention;
FIG. 4 is a schematic diagram of the multi-site camera layout of the present invention;
FIG. 5 is a logical structure diagram of a first embodiment of the data processing device of the system for detecting the state of an in-road berth according to the present invention;
FIG. 6 is a logical structure diagram of a second embodiment of the data processing device of the system for detecting the state of an in-road berth according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the recent adoption of dual-base-station recognition at road checkpoints and on highways, this application applies dual-base recognition technology to in-road parking detection, which not only improves the parking space recognition rate but also enables unattended, remote management. Dual-base recognition combines RFID radio-frequency identification with traffic-camera image recognition, i.e., radio-frequency identification plus video behavior recognition, so that recognition accuracy is ensured by a double guarantee.
The in-road parking dual-base recognition system consists of UWB (ultra-wideband) base stations and berth video detectors deployed at the roadside. It detects the parking state of each in-road berth (whether a vehicle is parked, whether it is parked correctly, and when it enters or leaves) and reads the tag information of the parked vehicle (including owner and vehicle information). The dual-base detection system transmits event information to a cloud backend, which analyzes, fuses and converts the detection data to obtain a final result and forwards it to the application cloud; the application cloud then triggers charging for legal parking, triggers inspection and monitoring for illegal parking, and stores snapshot images as evidence.
In combination with the logical structure diagram of the in-road berth state detection system shown in fig. 1, the system includes a data acquisition device 10, a network transmission device 20, a data processing device 30 and a business application device 40.
The data acquisition device 10 is arranged in the in-road berthing area and comprises an industrial personal computer, a plurality of roadside base stations and a plurality of cameras. In this embodiment, the roadside base stations are all UWB base stations, one serving as the master base station and the others as slave base stations, and the in-road berthing area lies within both the coverage of the UWB base stations and the coverage of the cameras. A UWB base station obtains the position coordinates of a vehicle by communicating with the electronic tag on the vehicle, providing information on whether the vehicle enters or exits a given parking space. The cameras capture berth image information, providing information on whether a vehicle drives into or out of a given berth, whether a parking violation occurs, whether the berth is occupied by a non-vehicle, and so on.
The network transmission device 20 is responsible for securely transmitting data between the data acquisition device 10 and the data processing device 30 in real time, for example transmitting the vehicle position coordinates and berth image information acquired by the data acquisition device 10 to the data processing device 30; it may be WiFi, 3G/4G/5G, a wired network, etc.
The data processing device 30 is the core of the overall business process. Following the principle of low coupling between system modules, the data acquisition device 10 processes data as little as possible and sends the raw detection data to the data processing device 30, which performs the corresponding processing according to business requirements. The data processing device 30 consists of middleware and a database. The core idea of the middleware design is modularity: coupling between modules is reduced, module reusability is improved, and each business module can be deployed in a distributed manner according to business requirements. In addition, the data processing middleware adopts a distributed message queue and an event-driven framework; loosely coupled modules exchange event messages and cooperate through this message passing. All parking-business data are generated and handled in the middleware. The berth state (i.e., the vehicle parking event) is the core of the whole business, and since the dual base stations are responsible only for detection, data fusion analysis is the key task of the data processing middleware.
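The event-driven, loosely coupled module cooperation described above can be sketched with a minimal in-process event bus. This is an illustrative stand-in for the distributed message queue the patent mentions, not its actual middleware; all names and topics here are assumptions.

```python
from collections import defaultdict, deque

class EventBus:
    """Minimal in-process stand-in for a distributed message queue:
    modules subscribe to topics and cooperate only via event messages,
    keeping them loosely coupled."""
    def __init__(self):
        self.handlers = defaultdict(list)  # topic -> list of handlers
        self.queue = deque()               # pending (topic, payload) events

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        self.queue.append((topic, payload))

    def dispatch(self):
        # deliver queued events to every subscriber of their topic
        while self.queue:
            topic, payload = self.queue.popleft()
            for handler in self.handlers[topic]:
                handler(payload)

# e.g. a fusion module subscribes to both detector topics
results = []
bus = EventBus()
bus.subscribe("rf.detection", results.append)
bus.subscribe("video.detection", results.append)
bus.publish("rf.detection", {"berth": 1, "event": "entry"})
bus.dispatch()
```

The RF and video acquisition modules would publish to their own topics, and the fusion module consumes both streams without either side knowing about the other.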
The business application device 40 interacts with the data processing device 30; it may actively fetch result data from the data processing device 30 or passively receive message data pushed by it, and is mainly used to generate the corresponding metered-billing information and notification reminders according to the state of each berth.
Fig. 2 is a flowchart of a first embodiment of the method for detecting the state of an in-road berth according to the present invention. The method of this embodiment includes:
s10, acquiring first detection result information in real time according to the vehicle position coordinates received from the road side base station;
In this step, the data processing device receives the vehicle position coordinates from the UWB base station through the network transmission device and obtains the first detection result information by analyzing them; vehicle entry and exit information for each parking space can then be determined from the first detection result information, which may further include vehicle identification information.
S20, acquiring second detection result information in real time according to the berth image information received from the camera;
In this step, the data processing device receives the berth image information captured by the camera through the network transmission device and obtains the second detection result information by analyzing and processing it; the second detection result information includes whether a vehicle has driven into or out of the parking space, whether a parking violation has occurred, whether the berth is occupied by a non-vehicle, and so on.
S30, fusing the first detection result information and the second detection result information to determine the state of each berth.
In this step, after acquiring the first detection result information and the second detection result information, the data processing device matches the two and selects the better-supported result, so that the berth state can be determined more comprehensively and accurately, improving the recognition rate of berth detection. The berth states are divided into four types: an idle state, a tagged-vehicle entry state, a non-tagged-vehicle entry state, and a video vehicle-presence state (intermediate state).
In traditional video berth snapshotting, a single camera covers one berth, and cameras at different orientations and angles have different recognition accuracy at different times, so the video detection result may be incomplete or inaccurate. This application adopts a multi-site camera layout: each camera covers several berths and each berth is captured by two cameras, whose data are finally matched and fused, improving recognition accuracy and ensuring the completeness of the snapshot images.
Preferably, referring to fig. 4, the cameras include dome cameras and bullet cameras, which can rotate 360 degrees and have adjustable focal length. The mounting height should be as high as possible; however, because large trees in the on-road scene limit the installation height and tree occlusion must be avoided, the cameras are mounted between 3 and 4 meters. Each camera covers two berths on the same side and four berths on the opposite side, with the installation angle chosen so that all berths are completely covered; for example, the horizontal viewing angle of a dome camera is about 15 degrees and that of a bullet camera about 80 degrees. Different credibility values are assigned to the snapshot images of different berths according to the distance between the camera and the berth.
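The patent states only that a snapshot's credibility depends on the camera-berth distance, without giving a formula. A minimal sketch of one plausible weighting, under the assumption of a simple linear falloff (both the function and its parameters are illustrative, not from the patent):

```python
def credibility(distance_m, max_range_m=20.0):
    """Hypothetical credibility weight for a berth snapshot:
    nearer berths get higher confidence, falling linearly to zero
    at an assumed maximum useful range."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m
```

Such weights could then bias the fusion step toward the camera with the better view of a contested berth.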
Regarding step S20, preferably, after receiving the real-time berth image information, the data processing device in this system may perform at least one of the following processes to obtain the second detection result information in real time: filter matching processing, pattern-recognition-based processing, and deep-learning-based processing. Wherein:
and the filtering matching processing is that when more than two cameras exist, after whether vehicles exist on the berth is identified according to the image data of each camera, multi-source fusion matching is carried out on more than two identification results.
Deep-learning-based processing divides the image data captured by the camera into two types: a small single-berth image and a large berth scene image. A Caffe-based deep learning framework is built in advance, and a probability judgment of vehicle presence is performed on the small berth image. The large berth scene image may contain images of up to six berths; edge detection and deep-learning-based violation detection (straddling berths, reverse parking, incomplete entry into the berth, etc.) are performed on it, and the results for the several berths are correlated.
Pattern-recognition-based processing comprises template matching and/or particle detection. Template matching trains a mathematical model on a sufficient number of in-road scene snapshot samples; the captured berth image is digitized, the pixel value of each point is taken in turn and substituted into the model, yielding the probability that a vehicle is present. Particle detection builds on template matching and provides a finer-grained auxiliary judgment: the berth detection image is divided into four particles (top, bottom, left, right), plus the whole image, and the recognition result is represented by a 5-bit binary number xxxxx (each x is 1 or 0), where the highest bit indicates whether a vehicle exists in the whole detection image (when this first bit judges no vehicle, the last four bits are used for the judgment) and the remaining four bits indicate vehicle presence in the top, bottom, left and right particles. Whether a vehicle is present is then further judged from the proportion of particles containing a vehicle (i.e., a vehicle is present if more than 2 or 3 of the last four bits are set).
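The 5-bit particle encoding and vote described above can be sketched as follows. This is an illustrative reading of the text, assuming each sub-detection is already a 0/1 flag; the vote threshold of 3 is one of the values the text mentions (">2 or 3").

```python
def particle_decision(whole, top, bottom, left, right, threshold=3):
    """Encode the five sub-detections as the 5-bit number xxxxx
    (MSB = whole image, then top, bottom, left, right). If the
    whole-image bit already says "vehicle", accept it; otherwise
    fall back to a vote over the four particle bits."""
    code = (whole << 4) | (top << 3) | (bottom << 2) | (left << 1) | right
    if whole:
        return True, code
    votes = top + bottom + left + right
    return votes >= threshold, code
```

For example, a whole-image hit decides immediately, while three of four particle hits also count as a vehicle when the whole-image detector missed.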
In an embodiment, note first that if the berth image information includes first image information from the first camera and second image information from the second camera, the data processing device processes them separately to obtain a corresponding third detection result and fourth detection result. In theory, the state of the same berth in the third and fourth detection results should be consistent. However, because of pedestrians, leaves, or differing viewing angles on the road, the two results for the same berth may well disagree; in that case the third and fourth detection results must undergo filter matching processing, which includes the following steps:
s201, if a third detection result is obtained according to the first image information and the third detection result changes, a fourth detection result is obtained according to the second image information within a fourth time;
in this step, the first image information and the second image information received by the data processing apparatus are updated in real time, and therefore, the third detection result and the fourth detection result are also updated in real time. If the current third detection result is found to be changed from the previous third detection result at a certain time, the detection result changed means that the current detection result is changed from the previous detection result at the previous time, for example, the vehicle-presence state is changed into the vehicle-absence state, or the vehicle-absence state is changed into the vehicle-presence state, at this time, it can be determined whether the current fourth detection result is changed from the previous fourth detection result within a fourth time (for example, 30 seconds) to obtain the current fourth detection result.
S202, if the fourth detection result is the same as the third detection result, taking the third detection result as a second detection result;
in this step, regardless of whether the fourth detection result changes, as long as the current fourth detection result is the same as the third detection result, the third detection result can be used as the second detection result, and then the final parking position state judgment is performed according to the first detection result and the second detection result.
And S203, if the fourth detection result is different from the third detection result, determining a second detection result according to the third detection result and the fourth detection result.
Further, step S203 includes:
s2031, judging whether the current fourth detection result changes; if yes, executing step S2032; if not, executing step S2033;
step S2032, taking the fourth detection result as a second detection result, and then ending;
step S2033, judging whether a third detection result obtained according to the current first image information changes again in the fifth time, if so, executing step S2034; if not, executing step S2036; the fifth time is, for example, 30 seconds;
step S2034, judging whether the current third detection result is the same as the fourth detection result, if so, executing step S2035;
s2035, taking the fourth detection result as a second detection result, and then ending;
step S2036, judging whether the times of the third detection result which is not changed are more than the preset times, if so, executing step S2037; if not, executing step S2033;
step S2037, counting the change times of the third detection result in the sixth time, obtaining average change time, judging whether the change times are larger than a first preset value or not, judging whether the average change time is smaller than a second preset value or not, and if yes, executing step S2038; if not, executing step S2039, wherein the sixth time is greater than the fourth time and the fifth time, and the sixth time is, for example, 10 minutes;
step S2038, taking the fourth detection result as a second detection result, and then ending;
and S2039, taking the third detection result as a second detection result.
Regarding step S30, in a preferred embodiment, in combination with fig. 4, the method may specifically include:
s31, if it is determined that the first vehicle enters the first parking space according to the first detection result information, the state of the first parking space is transferred from an idle state to a tag vehicle entering state;
in this step, if the initial state of a certain parking space (taking the first parking space as an example) is the idle state, where the coordinate range of the first parking space is pre-stored, and when the roadside unit or the camera detects that the vehicle stops in the coordinate range, it is determined that the vehicle stops in the first parking space, since the first detection result information is obtained in real time according to the vehicle position coordinates transmitted by the UWB base station in real time, if it is determined that a certain vehicle (taking the first vehicle as an example) enters the first parking space at a certain time according to the current first detection result information, at this time, the state of the first parking space may be shifted to the tag vehicle entering state. After this step, if the latest first detection result information and the latest second detection result information are continuously obtained in real time, there are five cases in step S32:
step S32:
the method comprises the following steps that 1, if it is determined that a first vehicle drives out of a first parking space according to current first detection result information or current second detection result information, the state of the first parking space is transferred to an idle state, and a primary parking record is generated;
if the first vehicle is determined to drive into the first berth again according to the current first detection result information, keeping the state of the first berth unchanged;
if it is determined that a second vehicle enters the first parking space according to the current first detection result information, keeping the state of the first parking space unchanged, and generating the exit time of the first vehicle;
if the fact that the vehicle enters the first parking space is determined according to the current second detection result information in the first time, the state of the first parking space is kept unchanged, and the current first detection result information is associated with the current second detection result information, namely the fact that the vehicle entering the second detection result is the first vehicle can be determined;
and 5, if the vehicle is determined to drive into the first berth according to the second detection result information, and the interval between the driving-in time in the second detection result information and the driving-in time in the first detection result information is greater than the first time, keeping the state of the first berth unchanged. That is, although it is determined that the first vehicle enters the first parking space based on the first detection result information and it is determined that the vehicle enters the first parking space based on the second detection result information, the time is out, so that the first detection result information and the second detection result information are not associated, that is, the entering vehicle in the second detection result is not considered as the first vehicle.
Regarding step S30, in another preferred embodiment, in combination with fig. 4, the method may specifically include:
s33, if it is determined that the first vehicle drives into the first parking space according to the second detection result information, the state of the first parking space is transferred from an idle state to an intermediate state;
in this step, if the initial state of a certain parking space (taking the first parking space as an example) is the idle state, since the second detection result information is obtained in real time according to the parking space image information sent by the camera in real time, if it is determined that a certain vehicle (taking the first vehicle as an example) enters the first parking space according to the current second detection result information at a certain moment, the state of the first parking space can be transferred from the idle state to the intermediate state at this moment. After this step, if the latest first detection result information and the latest second detection result information are continuously obtained in real time, there are three cases in step S34:
step S34:
the method comprises the following steps that 1, if it is determined that a first vehicle exits from a first parking space according to current second detection result information in a second time, the state of the first parking space is transferred to an idle state, and a primary parking record is generated;
if the first vehicle is determined to drive into the first berth according to the current first detection result information within the second time, the state of the first berth is transferred to the state of driving into the tag vehicle; or the like, or, alternatively,
and 3, if no vehicle is determined to drive into the first parking space according to the current first detection result information within the second time, the state of the first parking space is transferred to a non-label vehicle driving state.
Further, when the first berth is in a non-tag vehicle entrance state, the method further comprises the following steps:
s35, if it is determined that the first vehicle exits the first parking space according to the current second detection result information within the third time, the state of the first parking space is transferred to an idle state, and a primary parking record is generated; or the like, or, alternatively,
if the second vehicle is determined to drive into the first parking space according to the current second detection result information within the third time, keeping the state of the first parking space unchanged; or the like, or, alternatively,
and if the fact that the vehicle enters the first parking space is determined according to the current first detection result information within the third time, the state of the first parking space is transferred to a tag vehicle entering state.
According to the embodiment, the data processing device performs fusion processing on the reported data, a multithreading mode is adopted to decouple the data processing process from the processed result pushing process, the data processing thread performs fusion processing on the data and assigns a pushing identification bit, and the pushing thread performs pushing processing only on the data of which the identification bit is a specific value in the processing result.
Fig. 5 is a logical structure diagram of a first embodiment of the data processing apparatus of the system for detecting the status of an in-road berth according to the present invention, the data processing apparatus of this embodiment includes a first obtaining module 10, a second obtaining module 20 and a fusion processing module 30, wherein the first obtaining module 10 is configured to obtain first detection result information in real time according to the vehicle position coordinates received from the UWB base station; the second obtaining module 20 is configured to obtain second detection result information in real time according to the berth image information received from the camera; the fusion processing module 30 is configured to perform fusion processing on the first detection result information and the second detection result information to determine a state of each berth.
Fig. 6 is a logical structure diagram of a second embodiment of the data processing apparatus of the system for detecting a status of an in-circuit berth according to the present invention, the data processing apparatus of this embodiment includes a memory 40 and a processor 50, the memory 40 stores a computer program, and the processor 50 is configured to execute the computer program stored in the memory 40 and implement the method in the above embodiments.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (8)

1. A method for detecting the state of an in-road berth is characterized by comprising the following steps:
acquiring first detection result information in real time according to the vehicle position coordinates received from the roadside base station;
according to the berth image information received from the camera, filtering and matching the berth image information to obtain second detection result information in real time;
performing fusion processing on the first detection result information and the second detection result information to determine the state of each berth;
further, if the berth image information includes first image information from a first camera and second image information from a second camera, the step of performing filter matching processing includes:
s201, if a third detection result is obtained according to the first image information and the current third detection result changes, a fourth detection result is obtained according to the second image information within a fourth time;
s202, if the fourth detection result is the same as the third detection result, taking the third detection result as a second detection result;
s203, if the fourth detection result is different from the third detection result, determining a second detection result according to the third detection result and the fourth detection result;
the step S203 includes:
s2031, judging whether the current fourth detection result changes; if yes, executing step S2032; if not, executing step S2033;
step S2032, taking the fourth detection result as a second detection result, and then ending;
step S2033, judging whether a third detection result obtained according to the current first image information changes again in the fifth time, if so, executing step S2034; if not, executing step S2036;
step S2034, judging whether the current third detection result is the same as the fourth detection result, if so, executing step S2035;
s2035, taking the fourth detection result as a second detection result, and then ending;
step S2036, judging whether the times of the third detection result which is not changed are more than the preset times, if so, executing step S2037; if not, executing step S2033;
step S2037, counting the change times of the third detection result in the sixth time, obtaining average change time, judging whether the change times are larger than a first preset value or not, judging whether the average change time is smaller than a second preset value or not, and if yes, executing step S2038; if not, executing step S2039, wherein the sixth time is greater than the fourth time and the fifth time;
step S2038, taking the fourth detection result as a second detection result, and then ending;
and S2039, taking the third detection result as a second detection result.
2. The method according to claim 1, wherein the step of fusing the first detection result information and the second detection result information to determine the state of each berth includes:
s31, if it is determined that the first vehicle enters the first parking space according to the first detection result information, the state of the first parking space is transferred from an idle state to a tag vehicle entering state;
s32, if it is determined that the first vehicle drives out of the first parking space according to the current first detection result information or the current second detection result information, the state of the first parking space is transferred to an idle state, and a primary parking record is generated; or the like, or, alternatively,
if the first vehicle is determined to drive into the first parking space again according to the current first detection result information, keeping the state of the first parking space unchanged;
if it is determined that a second vehicle drives into the first parking space according to the current first detection result information, keeping the state of the first parking space unchanged, and generating the driving-out time of the first vehicle; or the like, or, alternatively,
if the fact that a vehicle enters the first parking space is determined according to the current second detection result information in the first time, the state of the first parking space is kept unchanged, and the current first detection result information is associated with the current second detection result information; or
And if it is determined that the vehicle enters the first parking space according to the second detection result information, and the interval between the entering time in the second detection result information and the entering time in the first detection result information is greater than the first time, keeping the state of the first parking space unchanged.
3. The method according to claim 1, wherein the step of fusing the first detection result information and the second detection result information to determine the state of each berth includes:
s33, if it is determined that the first vehicle drives into the first parking space according to the second detection result information, the state of the first parking space is transferred from an idle state to an intermediate state;
s34, if the first vehicle is determined to be driven out of the first parking space according to the current second detection result information in the second time, the state of the first parking space is transferred to an idle state, and a primary parking record is generated; or the like, or, alternatively,
if the first vehicle is determined to drive into the first parking space according to the current first detection result information within the second time, the state of the first parking space is transferred to a tag vehicle driving state; or the like, or, alternatively,
and if no vehicle is determined to drive into the first parking space according to the current first detection result information within the second time, the state of the first parking space is transferred to a non-label vehicle driving state.
4. The method for detecting the state of an on-road berth according to claim 3, wherein when the state of the first berth is a non-tag vehicle drive-in state, the method further comprises:
s35, if it is determined that the first vehicle exits the first parking space according to the current second detection result information within the third time, the state of the first parking space is transferred to an idle state, and a primary parking record is generated; or the like, or, alternatively,
if the second vehicle is determined to drive into the first parking space according to the current second detection result information within the third time, keeping the state of the first parking space unchanged; or the like, or, alternatively,
and if the fact that the vehicle enters the first parking space is determined according to the current first detection result information within the third time, the state of the first parking space is transferred to a tag vehicle entering state.
5. The method for detecting the state of an in-road berth according to any one of claims 1 to 3, wherein the step of acquiring the second detection result information in real time comprises:
acquiring second detection result information in real time by performing at least one of the following processes:
performing pattern recognition-based processing on the berth image information; and/or the presence of a gas in the gas,
and performing deep learning-based processing on the berthage image information.
6. A data processing apparatus of a system for detecting a state of an in-road berth, comprising:
the first acquisition module is used for acquiring first detection result information in real time according to the vehicle position coordinates received from the road side base station;
the second acquisition module is used for acquiring second detection result information in real time by performing filtering matching processing on the berth image information according to the berth image information received from the camera;
the fusion processing module is used for performing fusion processing on the first detection result information and the second detection result information to determine the state of each berth;
moreover, if the berth image information includes first image information from a first camera and second image information from a second camera, the second obtaining module performs filter matching processing according to the following mode:
s201, if a third detection result is obtained according to the first image information and the current third detection result changes, a fourth detection result is obtained according to the second image information within a fourth time;
s202, if the fourth detection result is the same as the third detection result, taking the third detection result as a second detection result;
s203, if the fourth detection result is different from the third detection result, determining a second detection result according to the third detection result and the fourth detection result;
the step S203 includes:
s2031, judging whether the current fourth detection result changes; if yes, executing step S2032; if not, executing step S2033;
step S2032, taking the fourth detection result as a second detection result, and then ending;
step S2033, judging whether a third detection result obtained according to the current first image information changes again in the fifth time, if so, executing step S2034; if not, executing step S2036;
step S2034, judging whether the current third detection result is the same as the fourth detection result, if so, executing step S2035;
s2035, taking the fourth detection result as a second detection result, and then ending;
step S2036, judging whether the times of the third detection result which is not changed are more than the preset times, if so, executing step S2037; if not, executing step S2033;
step S2037, counting the change times of the third detection result in the sixth time, obtaining average change time, judging whether the change times are larger than a first preset value or not, judging whether the average change time is smaller than a second preset value or not, and if yes, executing step S2038; if not, executing step S2039, wherein the sixth time is greater than the fourth time and the fifth time;
step S2038, taking the fourth detection result as a second detection result, and then ending;
and S2039, taking the third detection result as a second detection result.
7. A data processing apparatus of a system for detecting a status of an in-way berth, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor is configured to execute the computer program stored in the memory and to implement the method of any one of claims 1 to 5.
8. A system for detecting the status of an in-road berth, comprising:
the system comprises a data acquisition device arranged in an on-road berthing area, a data acquisition device and a control device, wherein the data acquisition device comprises a road side base station and a camera, the road side base station is used for communicating with an electronic tag on a vehicle to acquire the position coordinate of the vehicle, the camera is used for shooting berthing image information, and the on-road berthing area is respectively in the coverage range of the road side base station and the coverage range of the camera;
the network transmission device is used for sending the acquired vehicle position coordinates and the acquired parking position image information;
the data processing apparatus of claim 6 or 7;
and the business application device is used for generating corresponding timing consumption information and notification reminding information according to the state of each berth.
CN201710697651.4A 2017-08-15 2017-08-15 Method and system for detecting state of in-road berth and data processing device thereof Active CN109410628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710697651.4A CN109410628B (en) 2017-08-15 2017-08-15 Method and system for detecting state of in-road berth and data processing device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710697651.4A CN109410628B (en) 2017-08-15 2017-08-15 Method and system for detecting state of in-road berth and data processing device thereof

Publications (2)

Publication Number Publication Date
CN109410628A CN109410628A (en) 2019-03-01
CN109410628B true CN109410628B (en) 2021-10-19

Family

ID=65454144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710697651.4A Active CN109410628B (en) 2017-08-15 2017-08-15 Method and system for detecting state of in-road berth and data processing device thereof

Country Status (1)

Country Link
CN (1) CN109410628B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047314A (en) * 2019-03-22 2019-07-23 西安艾润物联网技术服务有限责任公司 Parking space information modification method, apparatus and system
CN110096975B (en) * 2019-04-17 2021-04-09 北京筑梦园科技有限公司 Parking space state identification method, equipment and system
CN112150851B (en) * 2019-06-26 2022-06-03 杭州海康威视数字技术股份有限公司 Testing method and device for geomagnetic detector
CN110930718A (en) * 2019-11-22 2020-03-27 北京精英路通科技有限公司 Parking system
CN112991811B (en) * 2021-02-24 2022-06-28 泰斗微电子科技有限公司 Parking space occupation state detection method, server and computer readable storage medium
CN113506467A (en) * 2021-07-07 2021-10-15 北京筑梦园科技有限公司 Parking space state information processing method and device and parking management system
CN114267090A (en) * 2021-12-22 2022-04-01 无锡加视诚智能科技有限公司 Parking system and data fusion use method
CN114241807A (en) * 2021-12-29 2022-03-25 高新兴智联科技有限公司 In-road parking acquisition system, method and equipment based on automobile electronic identification

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908250A (en) * 2010-07-15 2010-12-08 东南大学 Full RFID (Radio Frequency Identification Device) license plate based parkinglay-by intelligent monitoring system and method
CN102915638A (en) * 2012-10-07 2013-02-06 复旦大学 Surveillance video-based intelligent parking lot management system
CN103426327A (en) * 2013-06-27 2013-12-04 深圳市捷顺科技实业股份有限公司 Parking space management system and monitoring method
CN104134368A (en) * 2014-07-01 2014-11-05 公安部道路交通安全研究中心 Dynamic processing system and method for in-road parking information
CN104809909A (en) * 2015-03-05 2015-07-29 桑田智能工程技术(上海)有限公司 Cell empty carport identification system based on RFID signal intensity and assistant camera video
CN105225278A (en) * 2014-06-18 2016-01-06 深圳市金溢科技股份有限公司 A kind of road-surface concrete charge management method and system
CN106096554A (en) * 2016-06-13 2016-11-09 北京精英智通科技股份有限公司 Decision method and system are blocked in a kind of parking stall
CN106205136A (en) * 2014-12-31 2016-12-07 深圳市金溢科技股份有限公司 Vehicle positioning system based on UWB and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120086558A1 (en) * 2010-10-08 2012-04-12 Federal Signal Corporation Lane Position Detection Arrangement Using Radio Frequency Identification
CN106097762B (en) * 2016-08-04 2017-07-07 浙江志诚软件有限公司 A kind of vehicle information acquisition system and management system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908250A (en) * 2010-07-15 2010-12-08 东南大学 Full RFID (Radio Frequency Identification Device) license plate based parkinglay-by intelligent monitoring system and method
CN102915638A (en) * 2012-10-07 2013-02-06 复旦大学 Surveillance video-based intelligent parking lot management system
CN103426327A (en) * 2013-06-27 2013-12-04 深圳市捷顺科技实业股份有限公司 Parking space management system and monitoring method
CN105225278A (en) * 2014-06-18 2016-01-06 深圳市金溢科技股份有限公司 A kind of road-surface concrete charge management method and system
CN104134368A (en) * 2014-07-01 2014-11-05 公安部道路交通安全研究中心 Dynamic processing system and method for in-road parking information
CN106205136A (en) * 2014-12-31 2016-12-07 深圳市金溢科技股份有限公司 Vehicle positioning system based on UWB and method
CN104809909A (en) * 2015-03-05 2015-07-29 桑田智能工程技术(上海)有限公司 Cell empty carport identification system based on RFID signal intensity and assistant camera video
CN106096554A (en) * 2016-06-13 2016-11-09 北京精英智通科技股份有限公司 Decision method and system are blocked in a kind of parking stall

Also Published As

Publication number Publication date
CN109410628A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109410628B (en) Method and system for detecting state of in-road berth and data processing device thereof
CN108983806B (en) Method and system for generating area detection and air route planning data and aircraft
CN104838426B (en) The method, apparatus and equipment of mobile asset for identification
CN108710827B (en) A kind of micro- police service inspection in community and information automatic analysis system and method
CN101692313A (en) Portable vehicle recognition device base on embedded platform
US8031084B2 (en) Method and system for infraction detection based on vehicle traffic flow data
KR101626377B1 (en) A system for detecting car being violated parking and stopping of based on big date using CCTV camera and black box vehicle
Choosri et al. IoT-RFID testbed for supporting traffic light control
CN112447041A (en) Method and device for identifying operation behavior of vehicle and computing equipment
CN110930715B (en) Method and system for identifying red light running of non-motor vehicle and violation processing platform
CN107871398A (en) A kind of method and system that traffic lights identification is carried out by drive recorder
CN111784444A (en) Shared bicycle management method and system
CN112381014A (en) Illegal parking vehicle detection and management method and system based on urban road
CN111404874A (en) Taxi suspect vehicle discrimination analysis system architecture
KR20070029329A (en) Detection method and system of signal violation vehicle, children, pet and vehicle wanted by police
Chandra et al. Smart parking management system: An integration of RFID, ALPR, and WSN
CN105070061A (en) Evidence-obtaining inspection method and system for vehicle peccancy
JPH0830892A (en) Traffic monitoring system and automobile with car number recognition device
CN103794051A (en) Cloned vehicle detecting system and corresponding detecting method based on parking information
CN112489487A (en) Street parking system
CN112201044A (en) Road violation vehicle identification method and system, storage medium and terminal
KR101686851B1 (en) Integrated control system using cctv camera
CN113158852B (en) Traffic gate monitoring system based on face and non-motor vehicle cooperative identification
Pan et al. Identifying Vehicles Dynamically on Freeway CCTV Images through the YOLO Deep Learning Model.
CN113345245A (en) Evidence obtaining system for non-motor vehicle violation snapshot and working method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant