CN111178291A - Parking payment system and parking payment method


Info

Publication number
CN111178291A
Authority
CN
China
Prior art keywords
license plate
image
vehicle
parking
payment
Prior art date
Legal status
Granted
Application number
CN201911415697.8A
Other languages
Chinese (zh)
Other versions
CN111178291B (en)
Inventor
冯彦刚
Current Assignee
Beijing Zhumengyuan Technology Co ltd
Original Assignee
Beijing Zhumengyuan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhumengyuan Technology Co ltd
Priority to CN201911415697.8A
Publication of CN111178291A
Application granted
Publication of CN111178291B
Legal status: Active

Classifications

    • G07B15/063 — Arrangements for road pricing or congestion charging of vehicles or vehicle users, e.g. automatic toll systems, using wireless information transmission between the vehicle and a fixed station
    • G06Q20/085 — Payment architectures involving remote charge determination or related payment systems
    • G06T5/40 — Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G06T7/136 — Image analysis; Segmentation; Edge detection involving thresholding
    • G06T7/155 — Image analysis; Segmentation; Edge detection involving morphological operators
    • G06V20/54 — Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06V20/625 — License plates

Abstract

The invention discloses a parking payment system in which a vehicle owner terminal is used to register and log in to a platform account on a parking platform and to open the ETC payment function for a designated license plate number. The charging system comprises a license plate recognition camera and a control server. The license plate recognition camera monitors the motion state of vehicles on the lane; when a vehicle is found entering the parking area, the camera captures the vehicle head, obtains an image of the vehicle, performs image recognition on the vehicle and the license plate, and generates a license plate number which is sent to the control server. The control server calculates the parking duration from the vehicle's residence time, generates a payment order for the parking fee, and sends the license plate number and the payment order to the parking platform for ETC payment. The parking platform matches the license plate number to a platform account, judges whether the platform account corresponding to the license plate number has opened the ETC payment function, and judges whether the license plate number is on an ETC blacklist.

Description

Parking payment system and parking payment method
Technical Field
The invention relates to the technical field of ETC payment, in particular to a parking payment system and a payment method thereof.
Background
ETC payment was originally a non-stop payment mode used for highway tolling. Given the efficiency and convenience of non-stop payment, ETC payment is constantly expanding to other scenes, for example parking fee payment and refuelling fee payment. At present, ETC payment is introduced into parking lots in the following two modes:
(1) ETC antennas (RSU) are arranged at the entrance and the exit of the parking lot,
the method comprises the following steps that a vehicle entrance ETC antenna (RSU) interacts with an ETC electronic tag (OBU) on a vehicle, information such as equipment ID and license plate numbers is obtained, and vehicle entrance time is recorded;
the method comprises the following steps that a vehicle departure ETC antenna (RSU) interacts with an ETC electronic tag (OBU) on a vehicle, information such as equipment ID and license plate numbers is obtained, vehicle departure time is recorded, and meanwhile, a parking lot management system counts charging and deducts fees;
(2) the entrance of the parking lot is provided with a camera for license plate recognition, and the exit is provided with an ETC antenna (RSU)
When a vehicle enters the parking lot, the camera identifies the license plate number and records the time of entering the parking lot;
when the vehicle leaves, the exit ETC antenna (RSU) interacts with the ETC electronic tag (OBU) on the vehicle, the license plate number is obtained, the departure time is recorded, and the parking lot management system calculates the charge and deducts the fee.
In summary, there are currently two preconditions for using ETC payment:
1) an ETC antenna (RSU), i.e., an RSU (for reading an ETC electronic tag or OBU mounted on a vehicle) must be mounted at a place (exit) where the vehicle charges;
2) an ETC electronic tag (OBU) is mounted on a vehicle.
Taking parking lot charging as an example, the current ETC payment mode is as follows:
1) the vehicle arrives at the parking lot exit;
2) an outlet camera identifies the license plate number of the vehicle, and an outlet ETC antenna (RSU) reads information such as an OBU acquisition equipment ID;
3) the parking lot control server generates a payment order;
4) the parking lot control server controls an ETC antenna (RSU) to carry out fee deduction operation on the OBU according to the payment order;
5) the parking lot control server uploads a deduction result to an ETC clearing system according to a deduction interface returned by an ETC antenna (RSU);
6) and the ETC clearing system carries out fee deduction operation according to the fee deduction result.
The current ETC payment mode requires an ETC antenna (RSU) to be installed in the service scene (such as a parking lot exit), and the price of an ETC antenna (RSU) is very high, which seriously hinders the popularization of ETC payment in scenes other than highway tolling. Moreover, the authorization card of the ETC antenna (RSU) must belong to the same entity as the ETC clearing system, which also limits the universality of ETC payment. The ETC payment process of the prior art is shown in FIG. 1.
In addition, installing an ETC antenna in an on-road scene causes false identification: vehicles that are not parking, i.e. passing vehicles, are identified. The cost of installing the ETC antenna is high, and on-road charging easily leads to fee evasion and arrearage.
Vehicle identification and license plate identification are key technologies in a charging system and the basis of other research directions. Vehicle identification is also an important foundation of license plate identification and a prerequisite for subsequent license plate recognition; by identifying moving vehicles, important data about them can be acquired. License plate recognition detects and further processes the license plate number of a monitored vehicle. The technology relies heavily on image processing, pattern recognition and computer vision, which are currently important and popular technologies; it extracts the license plate number by analysing and processing vehicle videos or images collected by acquisition equipment, and plays an especially important role in parking lot charging management. At present, however, effective algorithms and means for vehicle identification and license plate identification are lacking.
Disclosure of Invention
The present invention is directed to a parking payment system and a parking payment method thereof to solve the above problems.
In order to achieve the purpose, the invention provides the following technical scheme:
a parking payment system comprises a charging system, a vehicle owner terminal, a parking platform and an ETC clearing system,
registering and logging in a platform account and opening an ETC payment function of a specified license plate number on a parking platform through a vehicle owner terminal;
the charging system comprises a license plate recognition camera and a control server,
the license plate recognition camera monitors the motion state of the vehicle on the lane, the license plate recognition camera captures the vehicle head when the vehicle is found to enter a parking area, the vehicle image is obtained, the image recognition is carried out on the vehicle and the license plate, and a license plate number is generated and sent to the control server;
the control server calculates parking time according to the vehicle residence time and generates a payment order of parking fee, and the control server sends the license plate number and the payment order to the parking platform for ETC payment;
and the parking platform is matched with the platform account according to the license plate number, judges whether the platform account corresponding to the license plate number opens the ETC payment function or not, and judges whether the license plate number is in an ETC blacklist or not.
The parking platform sends the license plate number and the payment order information to the ETC clearing system to carry out fee deduction operation, and the parking platform sends the information that ETC payment is successful to the control server.
The ETC clearing system deducts the corresponding fee from the vehicle owner's ETC account or the bank card associated with the ETC tag and transfers it to the toll collector's account according to the business order, and stores vehicle information including the license plate number, vehicle owner information, ETC electronic tag information and ETC-associated account information.
The charging system also comprises a channel gate, when a license plate recognition camera at the entrance of the parking lot generates video triggering of a snapshot vehicle head, the fact that a vehicle arrives at the entrance of the parking lot is judged, vehicle and license plate image recognition is started, a vehicle image is obtained and is subjected to image recognition, a license plate number is generated and sent to a control server, and the channel gate is controlled to be opened after the license plate number passes verification;
when the video triggering of the vehicle head is captured by the license plate recognition camera at the exit of the parking lot, the vehicle is judged to arrive at the exit of the parking lot, license plate image recognition is started, the vehicle image is obtained, image recognition is carried out on the vehicle and the license plate, a license plate number is generated and sent to the control server, and after the license plate number passes the verification, the gate of the control channel is opened.
A parking payment method comprising the steps of:
step 1, a vehicle owner registers on a parking platform through a vehicle owner terminal, logs in a platform account and opens an ETC payment function of a specified license plate number;
step 2, when the vehicle is about to enter a parking area, a license plate recognition camera acquires images of the vehicle and the license plate, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to a control server;
step 3, when the vehicle is about to leave the parking area, the license plate recognition camera acquires a vehicle image, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to the control server, the control server generates a parking fee payment order according to the entering and leaving information, and sends the license plate number and the parking fee payment order to the parking platform;
step 4, the parking platform matches the platform account according to the license plate number, judges whether the platform account corresponding to the license plate number opens the ETC payment function, and if not, sends ETC payment failure information to the control server; if the license plate number is in the ETC blacklist, the parking platform judges whether the license plate number is in the ETC blacklist, and if the license plate number is in the ETC blacklist, ETC payment failure information is sent to the control server; if not, entering the next step;
step 5, the parking platform sends the license plate number and the payment order information to the ETC clearing system for the fee deduction operation, and the parking platform sends the information that ETC payment has succeeded to the control server;
and 6, the parking platform sends a fee deduction success signal to the control server.
The parking area is a parking lot, when a license plate recognition camera at the entrance of the parking lot generates video triggering of a snapshot vehicle head, the fact that a vehicle arrives at the entrance of the parking lot is judged, vehicle and license plate image recognition is started, a target object is shot by the license plate recognition camera and converted into an image signal, then the image signal is transmitted to an image processing part, and after the image signal passes verification, a channel gate is controlled to be opened;
when the license plate recognition camera at the exit of the parking lot is triggered by capturing the video of the vehicle head, judging that the vehicle reaches the exit of the parking lot, starting the vehicle and license plate image recognition, shooting a target object by the license plate recognition camera, converting the target object into an image signal, transmitting the image signal to the image processing part, and controlling the opening of the channel gate after the target object passes the verification.
The parking platform maintains the ETC blacklist through the ETC clearing system, and the toll payment process is completed automatically without any operation by the vehicle owner.
The invention has the following beneficial effects:
1. the invention exploits the fact that the ETC clearing system stores a unique correspondence between the license plate number and the vehicle owner's ETC account. In the corresponding service scene, the service system only needs to acquire the vehicle's license plate number to complete the ETC payment operation from the license plate number and the payment order data. The scheme performs the ETC payment operation directly through the clearing system of the road network center of the traffic department and can be used nationwide;
2. the invention can be applied to many fields; for example, merchants such as supermarkets, gas stations and vehicle maintenance shops can use ETC directly to collect the corresponding supermarket, refuelling or maintenance fees simply by connecting to the clearing system. Payment can also be realized through the mobile phone number, the unique identification on which ETC payment relies being the license plate number, the mobile phone number, or both.
3. The invention redesigns the ETC payment mode by utilizing the unique corresponding relation between the license plate number stored in the clearing system and the ETC associated account and the capability of the road network center of the traffic department of clearing all ETC electronic label devices.
4. According to the invention, ETC antenna (RSU) equipment is not required to be installed in a business scene, license plate payment is directly adopted, the cost is reduced, and the promotion of ETC payment in a non-high-speed fee scene is facilitated; ETC payment is directly butted with a road network center of a traffic department, and the technical scheme can be ensured to be universal nationwide.
5. The invention avoids the problem of false identification (identifying the vehicle which is not parked, namely the vehicle which passes by) caused by the installation of the ETC antenna in the scene in the road.
6. The invention saves the cost for installing the ETC antenna, and is beneficial to reducing the cost and popularizing the ETC parking in a large range.
7. On-road parking spaces are open, so fee evasion and arrearage occur easily; with license-plate-based ETC payment the fee can be deducted immediately after the vehicle leaves the parking space, which reduces the probability of fee evasion and arrearage and ensures the payment rate.
8. The method uses the PBAS algorithm and the BLOB matching method to match and identify the vehicle target, realizes the quick and accurate identification of the moving vehicle image, and thus provides an effective basis for the acquisition of the license plate;
9. the method effectively eliminates noise and reserves the original information of the license plate image by using the Robert operator to carry out edge detection; judging whether the license plate exists in the image by using an SVM (support vector machine), so that the processing speed and accuracy of the license plate image are improved; the license plate is positioned by combining rough positioning and precise positioning, so that the uncertainty of the license plate position caused by the color and the form diversification of the license plate, the uncertainty of the suspension position, serious contamination, light, environmental factors and the like is avoided; the Hough transformation is used for correcting the inclined license plate, so that the accuracy and the robustness of the detection and correction of the inclined license plate are improved; the recognition of the license plate characters is realized by combining two matching modes of template matching and neural network matching, and the accuracy of the license plate character recognition is effectively ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of a prior art charging system;
FIG. 2 is a schematic view of the charging system of the present invention;
FIG. 3 is a flow chart of a method of payment for a charge of the present invention;
FIG. 4 is a schematic view of the charging system for the on-road berth of the present invention;
FIG. 5 is a schematic view of the parking lot berth charging system of the present invention;
FIG. 6 is a vehicle identification flow diagram of the present invention;
FIG. 7 is a flow chart of license plate recognition according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A parking payment system comprises a charging system, a vehicle owner terminal, a parking platform and an ETC clearing system,
registering and logging in a platform account and opening an ETC payment function of a specified license plate number on a parking platform through a vehicle owner terminal;
the charging system comprises a license plate recognition camera and a control server,
the license plate recognition camera monitors the motion state of the vehicle on the lane, when the vehicle is found to enter a parking area, the license plate recognition camera captures the vehicle head, obtains a vehicle image, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to the control server;
the control server calculates parking time according to the vehicle residence time and generates a payment order of parking fee, and the control server sends the license plate number and the payment order to the parking platform for ETC payment;
the parking platform matches the license plate number with the platform account, judges whether the platform account corresponding to the license plate number opens an ETC payment function, and judges whether the license plate number is in an ETC blacklist;
the parking platform sends the license plate number and the payment order information to the ETC clearing system for the fee deduction operation, and the parking platform sends the information that ETC payment has succeeded to the control server;
the ETC clearing system deducts the corresponding fee from the vehicle owner's ETC account or the bank card associated with the ETC tag and transfers it to the toll collector's account according to the business order, and the ETC clearing system stores vehicle information including the license plate number, vehicle owner information, ETC electronic tag information and ETC-associated account information.
Each issuer of ETC electronic tags has its own clearing system, but each issuer can only perform clearing operations on the ETC electronic tag devices it has issued, whereas the clearing system of the road network center of the traffic department can perform clearing operations on all ETC electronic tag devices.
The parking area is the on-road parking space, and the parking duration acquisition mode and the license plate number acquisition mode of the on-road parking space comprise the following two modes:
1) vehicle detector + hand-held PDA mode
The vehicle detector detects the times at which a vehicle enters and leaves the on-road parking space, from which the parking duration is calculated; the vehicle detector only detects whether a vehicle occupies the space and cannot identify its license plate number.
Wherein, vehicle detector includes: a geomagnetic detector, an ultrasonic detector, a radar detector, a laser detector, or a combination of any two thereof.
When a vehicle enters a parking berth, a toll collector can automatically identify and register the license plate number of the vehicle by using a license plate identification camera of the handheld PDA device during parking, and the handheld PDA device directly sends an identification result to a remote control server through a 4G card;
the calculation of the parking time of the vehicle and the recognition of the license plate number can be completed through the cooperation of the two operations.
2) License plate recognition camera mode
When the vehicle enters and leaves the on-road berth, the license plate recognition camera automatically detects the state that the vehicle enters or leaves the berth and recognizes the license plate number of the vehicle;
The license plate recognition camera is usually fixed on a post, and one camera covers 1 to 10 parking spaces. Common license plate recognition cameras include: the high-position camera (installed at a height of about 6 meters), the middle-position camera (installed at a height of about 3 meters), the video pile (a post of about 1 meter), and the low-position camera (installed on the curb stone).
The charging system for on-road parking spaces has no local server; the license plate recognition camera sends the recognition result directly to a remote control server via an internet-of-things card, a 4G card or an optical fiber. The control server is a cloud server, and the payment order is generated by the cloud server.
The charging system also comprises a channel gate, when a license plate recognition camera at the entrance of the parking lot generates video triggering of a snapshot vehicle head, the fact that a vehicle arrives at the entrance of the parking lot is judged, vehicle and license plate image recognition is started, a vehicle image is obtained and is subjected to image recognition, a license plate number is generated and sent to a control server, and the channel gate is controlled to be opened after the license plate number passes verification;
when the video triggering of the vehicle head is captured by the license plate recognition camera at the exit of the parking lot, the vehicle is judged to arrive at the exit of the parking lot, license plate image recognition is started, the vehicle image is obtained, image recognition is carried out on the vehicle and the license plate, a license plate number is generated and sent to the control server, and after the license plate number passes the verification, the gate of the control channel is opened.
When video triggering of a snapshot vehicle head occurs, judging that a vehicle arrives at an entrance of a parking lot, starting vehicle and license plate image recognition, starting a computer output instruction, transmitting a current vehicle and license plate image to the computer in real time by using an image acquisition card, shooting a target object by using a camera and converting the target object into an image signal under the condition that a light source provides illumination, finally transmitting the image signal to an image processing part by using the image acquisition card, and controlling a channel gate to be opened after passing verification;
when the video trigger of capturing the vehicle tail occurs, it is judged that the vehicle has arrived at the exit of the parking lot and license plate image recognition is started; the computer outputs a start instruction, the image acquisition card transmits the current vehicle image to the computer in real time, the camera shoots the target object and converts it into an image signal under illumination provided by the light source, and finally the image signal is transmitted to the image processing part through the image acquisition card; after verification is passed, the channel gate is controlled to open.
The control server is arranged in a local off-road closed parking lot, the license plate recognition camera sends a recognition result to the control server, and the control server generates a payment order.
The control server is a cloud server arranged in a remote mode, the license plate recognition camera directly sends a recognition result to the cloud server, and the cloud server generates a payment order (the mode has the defect that the payment order cannot be normally charged if a network has a problem).
A parking payment method comprising the steps of:
step 1, a vehicle owner registers on a parking platform through a vehicle owner terminal, logs in a platform account and opens an ETC payment function of a specified license plate number;
step 2, when the vehicle is about to enter a parking area, a license plate recognition camera acquires images of the vehicle and the license plate, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to a control server;
step 3, when the vehicle is about to leave the parking area, the license plate recognition camera acquires a vehicle image, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to the control server, the control server generates a parking fee payment order according to the entering and leaving information, and sends the license plate number and the parking fee payment order to the parking platform;
step 4, the parking platform matches the platform account according to the license plate number, judges whether the platform account corresponding to the license plate number opens the ETC payment function, and if not, sends ETC payment failure information to the control server; if the license plate number is in the ETC blacklist, the parking platform judges whether the license plate number is in the ETC blacklist, and if the license plate number is in the ETC blacklist, ETC payment failure information is sent to the control server; if not, entering the next step;
step 5, the parking platform sends the license plate number and the payment order information to the ETC clearing system for the fee deduction operation, and the parking platform sends the information that ETC payment has succeeded to the control server;
and 6, the parking platform sends a fee deduction success signal to the control server.
The parking area is a parking lot, when a license plate recognition camera at the entrance of the parking lot generates video triggering of a snapshot vehicle head, the fact that a vehicle arrives at the entrance of the parking lot is judged, vehicle and license plate image recognition is started, a target object is shot by the license plate recognition camera and converted into an image signal, then the image signal is transmitted to an image processing part, and after the image signal passes verification, a channel gate is controlled to be opened;
when the license plate recognition camera at the exit of the parking lot is triggered by capturing the video of the vehicle head, judging that the vehicle reaches the exit of the parking lot, starting the vehicle and license plate image recognition, shooting a target object by the license plate recognition camera, converting the target object into an image signal, transmitting the image signal to the image processing part, and controlling the opening of the channel gate after the target object passes the verification.
The parking platform maintains the ETC blacklist through the ETC clearing system, and the toll payment process is completed automatically without any operation by the vehicle owner.
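To make the decision flow of steps 4 to 6 concrete, the following is a minimal Python sketch of the parking platform's order handling. All names (accounts, blacklist, clearing_system, control_server and their methods) are illustrative assumptions, not interfaces defined by the patent.

```python
# Hypothetical sketch of the parking platform decision flow in steps 4-6.
# All names (accounts, blacklist, clearing_system, control_server, notify,
# deduct) are assumptions used only to illustrate the flow described above.

def handle_payment_order(plate_no, order, accounts, blacklist,
                         clearing_system, control_server):
    account = accounts.get(plate_no)            # step 4: match platform account by plate number
    if account is None or not account.get("etc_enabled"):
        control_server.notify(plate_no, "ETC payment failed: ETC not enabled")
        return False
    if plate_no in blacklist:                   # step 4: ETC blacklist check
        control_server.notify(plate_no, "ETC payment failed: plate on ETC blacklist")
        return False
    clearing_system.deduct(plate_no, order)     # step 5: clearing system performs the fee deduction
    control_server.notify(plate_no, "ETC payment succeeded")   # steps 5-6: success signal
    return True
```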
The specific process of vehicle image recognition is as follows:
vehicle image recognition improves the stability and adaptability of vehicle target recognition.
step 1.1, carrying out video array,
processing the video to obtain a video array;
step 1.2, the original image is preprocessed,
a classic Gaussian filter algorithm is used to filter noise from the video array images. While filtering noise, the Gaussian filter also blurs the video array image to a certain extent; the degree of blurring is controlled by the size of the Gaussian filter window.
Some noise interference such as Gaussian noise and some stains in the image exist in the video image acquired by the camera, the noise on the image is possibly enhanced in the image-related processing processes such as acquisition, transmission and processing, the video needs to be preprocessed, and the reduction of the influence of the noise and the filtering of the noise are important links in the image preprocessing technology.
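As an illustration of step 1.2, a minimal OpenCV sketch of the Gaussian pre-filtering follows; the window size of 5 is an assumed example value, chosen only to show how the blur/noise trade-off is controlled.

```python
import cv2

def preprocess_frame(frame, ksize=5):
    # A larger window filters more noise but also blurs the frame more,
    # matching the window-size trade-off described in step 1.2.
    return cv2.GaussianBlur(frame, (ksize, ksize), 0)  # sigma derived from ksize when set to 0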
Step 1.3, detecting the vehicle target,
the PBAS algorithm is adopted for vehicle target detection, and the background and foreground images of the vehicle are preprocessed.
Step 1.3.1, a background model is established,
the background modeling method of the PBAS algorithm takes the gradient amplitude and the pixel size in the previous N frames of images of the video array image as a background model;
step 1.3.2, foreground detection is carried out through segmentation decision,
a decision is made as to whether a given pixel belongs to the foreground or the background. In the PBAS algorithm the background model B(x_i) is made up of background pixel values,
B(x_i) = {B_1(x_i), ..., B_k(x_i), ..., B_N(x_i)}.
If the average difference between the pixel value B_k(x_i) of pixel point x_i in the current (k-th) frame and the background pixel values of the other N−1 frames is less than a predetermined threshold T(x_i), then the pixel point x_i belongs to the background;
step 1.3.3, updating the background template,
the sample data to be replaced is determined randomly, the data samples in the neighborhood of the pixel are updated in a randomly selected manner, the update rate changes and adjusts automatically, and the neighborhood sample data set is updated with the neighborhood's new pixel value rather than the pixel's own new value.
The background is constantly changed and is easily influenced by illumination change, leaves, shadows and some moving background objects, so that updating of the background template is necessary
Step 1.3.4, judging the updating of the threshold value,
the threshold R(x_i) should change with the frequency of change of the background region: the more complex the background, the higher the threshold; the simpler the background, the lower the threshold. The estimate of the background change is
Figure BDA0002351129640000131
Threshold value T (x)i) A dynamic self-adaptive change is carried out,
Figure BDA0002351129640000132
where T_inc/dec is a fixed parameter value;
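A minimal numpy sketch of the per-pixel decision of steps 1.3.1-1.3.2 follows. It keeps N background samples per pixel and marks a pixel as background when its average difference from the samples stays below the threshold; the sample-replacement and threshold-update rules of steps 1.3.3-1.3.4 are not reproduced, and all parameter handling is an assumption.

```python
import numpy as np

def pbas_like_foreground(frame_gray, samples, threshold):
    """Per-pixel segmentation decision in the spirit of steps 1.3.1-1.3.2.

    frame_gray : (H, W)    current frame as float32
    samples    : (N, H, W) background model B(x_i) = {B_1(x_i), ..., B_N(x_i)}
    threshold  : (H, W)    per-pixel decision threshold
    """
    diff = np.abs(samples - frame_gray[None, :, :])   # difference to each background sample
    mean_diff = diff.mean(axis=0)                     # average difference over the N samples
    return mean_diff >= threshold                     # below the threshold -> background point
```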
step 1.4, foreground object preprocessing
A median filtering algorithm is executed on the foreground target image: within a sliding window of variable width, the gray values of the image pixels are sorted according to a given rule, and the gray value of the original central pixel is replaced by the median of the pixels in the window.
The filtering effect of the median filtering algorithm is shown in that random noise is inhibited and filtered, edges can be effectively protected from being blurred, and effective edge information in the image is kept as far as possible. Median filtering may also be used to filter out spike data information. The reason that the algorithm is effective is that the filtered data retains the change trend of the original image, and meanwhile, the influence of spike pulse image data on subsequent analysis is removed.
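Step 1.4 can be realized directly with OpenCV's median filter; a short sketch (the kernel size of 5 is an assumed value):

```python
import cv2

def clean_foreground(mask_u8, ksize=5):
    # Median filtering suppresses random and spike noise in the foreground
    # image while preserving edges, as described in step 1.4.
    return cv2.medianBlur(mask_u8, ksize)
```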
Step 1.5, identifying a vehicle target;
after the foreground and background objects have been extracted, the vehicle object is identified. The identification algorithm uses the BLOB (binary large object) matching method: a binary image is obtained by the vehicle detection algorithm, the BLOBs of moving vehicles are extracted by treating identical pixels as a connected domain, and the center, area, position information and moments of each BLOB are computed. The vehicle is identified by contour recognition, and when the vehicle reaches the target region of interest, the recognized vehicle image is stored locally. The specific steps are as follows:
step 1.5.1, performing a linear transformation on the binary image, where the formula of the linear transformation may be expressed as dst(x, y, k) = scale × src(x, y, k) + shift, where dst(x, y, k) represents the target pixel value of the k-th frame, src(x, y, k) the source pixel value, scale the slope, and shift the intercept;
(1) when scale is more than 1, increasing the contrast between pixels in the image;
the pixel values in the image are increased after the linear transformation image processing, so that the overall image display effect of the processed image is enhanced compared with that of the original image.
(2) When scale is 1, adjusting the brightness of the image;
(3) when 0 < scale < 1, the effect is just opposite to that of scale > 1, and the contrast of each pixel in the image and the overall display effect are weakened;
(4) when scale < 0, lighter areas of the original image become darker and darker areas become lighter after processing.
Step 1.5.2, morphological operations are used to filter noise from the linearly transformed image and to retain the connected domain that best satisfies the conditions; that is, image components including boundary, convex hull and skeleton information are extracted from the image, and an opening operation (erosion followed by dilation) is used to eliminate fine noise and smooth the image boundary.
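A sketch of the linear transformation of step 1.5.1 and the opening operation of step 1.5.2 is given below; the values of scale, shift and the 3×3 structuring element are assumed examples.

```python
import cv2
import numpy as np

def enhance_and_open(binary_img, scale=1.5, shift=10):
    # Step 1.5.1: dst = scale * src + shift (scale > 1 raises the contrast).
    dst = cv2.convertScaleAbs(binary_img, alpha=scale, beta=shift)
    # Step 1.5.2: opening (erosion followed by dilation) removes fine noise
    # and smooths the boundary of the remaining connected domains.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(dst, cv2.MORPH_OPEN, kernel)
```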
Step 1.5.3, extracting BLOB features, numbering BLOBs,
BLOB features include: the area of the region and the location of the centroid, wherein,
(1) the area is the number of pixels in the BLOB region. The extracted binary image is scanned, and the size of the target region is marked and measured by its area, denoted S(·), i.e. the number of pixels in the region:
S(R_i) = Σ_{(x, y) ∈ R_i} f(x, y),
where R_i(x, y) is the i-th BLOB region and f(x, y) is the binary image function;
(2) the centroid position: the centroid of the target region is obtained and processed by the matching algorithm to give the historical position of the vehicle target and its position after identification.
Let the centroid position of R_i(x, y) be (x_0, y_0); then
x_0 = M_10 / M_00,  y_0 = M_01 / M_00,
where M_10, M_01 and M_00 are moments of the BLOB block;
and step 1.5.4, matching and identifying the vehicle object by using the BLOB characteristics, marking the numbered BLOB blocks as track groups when the numbered BLOB blocks are matched with the BLOB characteristics of the adjacent images, and numbering track group areas, thereby realizing vehicle object identification.
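A sketch of the BLOB feature extraction and matching of steps 1.5.3-1.5.4 using connected-component analysis; the minimum-area filter and the simple nearest-centroid matching are assumptions made purely for illustration.

```python
import cv2
import numpy as np

def extract_blobs(binary_img, min_area=500):
    """Return (area, centroid) features of the BLOBs, as in step 1.5.3."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_img)
    blobs = []
    for i in range(1, n):                          # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]          # S(R_i): pixel count of the region
        if area >= min_area:
            blobs.append({"area": area, "centroid": tuple(centroids[i])})
    return blobs

def match_blobs(prev_blobs, curr_blobs, max_dist=50.0):
    """Step 1.5.4: match BLOBs between adjacent frames by nearest centroid."""
    tracks = []
    for p in prev_blobs:
        dists = [np.hypot(c["centroid"][0] - p["centroid"][0],
                          c["centroid"][1] - p["centroid"][1]) for c in curr_blobs]
        if dists and min(dists) <= max_dist:
            tracks.append((p, curr_blobs[int(np.argmin(dists))]))
    return tracks
```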
License plate recognition is carried out as follows:
step 2.1, preprocessing a license plate picture;
step 2.1.1, carrying out image graying treatment,
because an image obtained directly from the image acquisition equipment has high colour resolution and a large amount of information, it occupies a very large amount of storage and slows the CPU during image processing; to facilitate subsequent image analysis, the colour image is therefore usually converted into a grayscale image to speed up processing.
Graying makes the three components red (R), green (G) and blue (B) equal. A weighted average method is used: according to the different proportions of R, G and B, suitable weights are chosen and the weighted average of the three values is taken, i.e.
f(x, y) = W_r · R(x, y) + W_g · G(x, y) + W_b · B(x, y),
where f(x, y) is the grayscale image function, W_r is the weight of the R component R(x, y) at point (x, y), W_g is the weight of the G component G(x, y) at point (x, y), and W_b is the weight of the B component B(x, y) at point (x, y).
The human eye has different sensitivities to red, green and blue: the sensitivity to green is highest, so W_g is given the largest weight, W_g = 0.59; the sensitivity to blue is lowest, so W_b is given the smallest weight, W_b = 0.11; and the sensitivity to red is intermediate, so W_r = 0.30. The grayscale image obtained with these weights gives the best result;
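The weighted-average graying of step 2.1.1 in a short numpy sketch, using the weights 0.30/0.59/0.11 given above:

```python
import numpy as np

def to_gray(img_rgb):
    # f(x, y) = Wr*R + Wg*G + Wb*B with Wr = 0.30, Wg = 0.59, Wb = 0.11,
    # the weights chosen in step 2.1.1.
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return (0.30 * r + 0.59 * g + 0.11 * b).astype(np.uint8)
```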
step 2.1.2, the image is enhanced,
using histogram equalization to enhance the image, wherein the histogram equalization treatment is to uniformly disperse the values of the areas with concentrated gray values in the original gray image into the whole gray range, and redistribute the pixel values of the gray image to ensure that the pixel values in a certain range are uniformly distributed;
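The enhancement of step 2.1.2 corresponds directly to OpenCV's histogram equalization; a one-call sketch:

```python
import cv2

def enhance_gray(gray):
    # Histogram equalization spreads concentrated gray values over the full
    # range so that pixel values are more uniformly distributed (step 2.1.2).
    return cv2.equalizeHist(gray)
```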
step 2.1.3, the binary operation,
image binarization sets the gray level of each pixel to black or white so that the whole image becomes a black-and-white image. By choosing a suitable threshold, the 256-level grayscale image is converted into a binary image that still reflects the overall or local characteristics of the image. The pixel values of the binary image are only 0 and 255, without the multi-level values of a colour image; after binarization, subsequent processing becomes much simpler and the amount of data to process and compress becomes smaller. In a practical system this greatly reduces the cost of image processing while keeping the processing speed high and the amount of information large.
The license plate image has two parts of foreground characters and background license plate, when the light scattering on the license plate is uniform, the background and the characters are separated by setting a global threshold value by using a global threshold value method,
when light scattering on the license plate is uneven, the license plate binaryzation cannot be set by using a global threshold value, image binaryzation is carried out by using a local threshold value method, and a background and characters are separated by setting a local threshold value by using the local threshold value method.
Wherein, the original gray image function is represented as f (x, y), the binarized image function is represented as g (x, y), T is a local threshold used for distinguishing an object from a background, and the mathematical expression process of binarization is represented as follows:
Figure BDA0002351129640000171
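A sketch of the global/local thresholding of step 2.1.3. Otsu's method stands in for choosing the global threshold and OpenCV's adaptive threshold for the local-threshold branch; both are assumptions about how the threshold T would be obtained in practice.

```python
import cv2

def binarize_plate(gray, uniform_illumination=True):
    if uniform_illumination:
        # Global threshold: g(x, y) = 255 if f(x, y) >= T else 0.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        # Local threshold: T is computed per neighbourhood when the light
        # on the plate is uneven (block size 31 is an assumed value).
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, 31, 5)
    return binary
```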
step 2.1.4, edge detection,
edge detection is a key means of image processing, which is an effective way to achieve separation of background and foreground, finding digital areas from a complex background. Conventional edge detectors extract edge information from the high frequency (noise) components of an image. Morphology is another important image processing theory that effectively eliminates noise and preserves the original information of the image.
Using a Robert operator to carry out edge detection, wherein the Roberts operator is an approximate method for solving gradient by taking difference in any pair of mutually vertical directions as a difference, and replacing a gradient value by adopting the difference between two adjacent pixel values in a diagonal direction;
let the grayscale image function f(x, y) be an input with integer pixel coordinates. Its gradient is defined as
∇f(x, y) = (∂f/∂x, ∂f/∂y), with magnitude |∇f(x, y)| = sqrt((∂f/∂x)² + (∂f/∂y)²).
The differentiation above is approximated by the following differences, where (x+1, y+1) is the pixel in the diagonal direction of (x, y):
∂f/∂x ≈ f(x+1, y+1) − f(x, y),  ∂f/∂y ≈ f(x+1, y) − f(x, y+1).
Simplification gives
R(x, y) ≈ |f(x+1, y+1) − f(x, y)| + |f(x+1, y) − f(x, y+1)|.
The template form of the two difference formulas is:
d_1 = [ 1 0 ; 0 −1 ],  d_2 = [ 0 1 ; −1 0 ].
the detection effect of the Roberts operator on the oblique edge is lower than that of the vertical edge and the horizontal edge, meanwhile, the edge is accurately positioned, but the Roberts operator has the best effect on an image with steep low noise, the inclination effect is not ideal, missing detection is easily caused, and some false edges are generated. Therefore, the operator is suitable for image segmentation with obvious edges and less noise.
Step 2.2, judging the license plate,
after the license plate picture is preprocessed, whether a license plate exists in the picture needs to be judged, and the license plate picture is realized by adopting an SVM (support vector machine), and the specific process is as follows:
step 2.2.1, generating training data,
the vehicle picture source of the training data is from the internet and local shooting, and the screened vehicle picture is sent to a license plate positioning algorithm to generate a rectangular picture block;
step 2.2.2, classification,
the rectangular image blocks are divided into license plate image blocks and non-license plate image blocks, labels are printed on the rectangular image blocks, 70% of the rectangular image blocks form training data, and the rest 30% of the rectangular image blocks form test data;
step 2.2.3, training,
training the SVM by 70% of training data to obtain a control template;
step 2.2.4, testing,
comparing the comparison template with 30% of test data, if the recognition rate is higher than 90%, forming a reference template, otherwise, returning to the step 2.2.1, and newly training;
step 2.2.5, judging,
comparing the preprocessed license plate image with a reference template, judging whether the license plate image is the license plate image, if not, taking the next video array image, judging again, and if so, entering the next step;
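Steps 2.2.1-2.2.5 can be sketched with scikit-learn's SVC. The 70/30 split and the 90% acceptance criterion follow the text; using raw flattened pixel blocks as features is an assumption made only for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_plate_classifier(patches, labels):
    """patches: equally sized grayscale rectangles; labels: 1 = plate, 0 = non-plate."""
    X = np.array([p.flatten() for p in patches], dtype=np.float32) / 255.0
    X_train, X_test, y_train, y_test = train_test_split(X, labels,
                                                        train_size=0.7, random_state=0)
    clf = SVC(kernel="rbf")
    clf.fit(X_train, y_train)                  # step 2.2.3: train the SVM
    accuracy = clf.score(X_test, y_test)       # step 2.2.4: test on the held-out 30%
    if accuracy < 0.9:
        raise RuntimeError("recognition rate below 90%; collect new data and retrain (step 2.2.1)")
    return clf                                 # reference classifier used for the judgment in step 2.2.5
```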
step 2.3, positioning the license plate,
license plate positioning is an indispensable and very critical step of license plate recognition; its performance directly determines the accuracy of the license plate recognition system and also influences its speed, and it is a classical problem. Because Chinese license plates vary in colour and form, there are many difficulties in locating the plate: uncertainty of the mounting position, severe contamination of the plate, lighting and environmental factors, etc.
Step 2.3.1, roughly positioning the license plate,
the original grayscale image function f (x, y) has the following characteristics: the contrast between the grey background of the license plate and the alphabetic characters is large; the horizontal gray scale in the license plate area changes frequently; the license plate is hung on the bottom of the vehicle, and the position is the bottom of the whole image.
According to the above feature, a first order difference operation in the horizontal direction is used, which can make a region having frequent gray level changes prominent. The first order difference is:
f'(x,y)=f(x,y)-f(x,y+1)
where x = 1, 2, ..., n and y = 1, 2, ..., m; m and n are the height and width of the image. The difference image is binarized, which makes the license plate region more prominent and removes most of the background interference;
a downward horizontal scan is then performed. Because the license plate consists of 7 characters, the number of edge points in each row of the plate region is typically greater than 14 in the horizontal direction. Based on the characteristics of Chinese license plates and test results, the edge-point threshold is set to 15; a qualifying license plate band is found, and the corresponding sub-image of the original grayscale image is cut out in the horizontal direction to obtain a preliminary license plate positioning image;
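A sketch of the coarse positioning of step 2.3.1: the horizontal first-order difference is binarized and rows with at least 15 edge points are kept as the candidate plate band. The binarization threshold on the difference image is an assumed value.

```python
import numpy as np

def coarse_locate_rows(gray, diff_thresh=30, min_edge_points=15):
    # Horizontal first-order difference: f'(x, y) = f(x, y) - f(x, y+1).
    diff = np.abs(gray[:, :-1].astype(np.int16) - gray[:, 1:].astype(np.int16))
    edges = diff > diff_thresh                     # binarize the difference image
    edge_counts = edges.sum(axis=1)                # edge points per row
    rows = np.where(edge_counts >= min_edge_points)[0]
    if rows.size == 0:
        return None
    return rows.min(), rows.max()                  # vertical extent of the candidate plate band
```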
step 2.3.2, the license plate is accurately positioned,
after the coarse localization, the license plate region usually includes the vehicle boundary, and the fine localization is required based on the coarse localization.
At present there are four types of Chinese vehicle license plates: white characters on a blue background, black characters on a yellow background, black or red characters on a white background, and white characters on a black background. Because the background colour covers most of the plate area, the license plate position can be located accurately according to the inherent colour characteristics of the plate.
In fact, the most frequent plate type is white characters on a blue background, followed by black characters on a yellow background, black or red characters on a white background, and white characters on a black background. Therefore, accurate positioning first looks for white-on-blue plates; if no blue region satisfies the area requirement, the yellow-background black-character, white-background black-character and black-background white-character plate regions are located in order of decreasing frequency.
The detailed process of accurate positioning is as follows:
step 2.3.2.1, generating binary gray scale image m (x, y) according to the binarized image function g (x, y),
Figure BDA0002351129640000211
step 2.3.2.2, identifying candidate regions: each candidate region is identified, and the regions are then processed in descending order of size, starting from the largest, entering step 2.3.2.3.
step 2.3.2.3, determining geometric structural features: the aspect ratio R of the selected candidate region is calculated; if R ∈ [2, 5], go to step 2.3.2.4, otherwise take the next candidate region;
step 2.3.2.4, determining the statistical jump-frequency feature: the average jump frequency L of the gray values of the selected candidate region in the horizontal direction is calculated; if L ∈ [5, 15], the license plate position is the selected candidate region and the background colour of the plate is the colour represented by that gray value; otherwise return to step 2.3.2.3 and process the next candidate region.
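The geometric and jump-frequency checks of steps 2.3.2.3-2.3.2.4 in sketch form; the candidate region is assumed to be passed in as a binary sub-image, and the accepted ranges [2, 5] and [5, 15] follow the text.

```python
import numpy as np

def is_plate_candidate(region_binary, aspect_ratio):
    # Step 2.3.2.3: the width/height ratio of a Chinese plate lies roughly in [2, 5].
    if not (2.0 <= aspect_ratio <= 5.0):
        return False
    # Step 2.3.2.4: the average number of gray-value jumps per row in the
    # horizontal direction should lie in [5, 15] for a 7-character plate.
    jumps = np.abs(np.diff(region_binary.astype(np.int16), axis=1)) > 0
    avg_jumps = jumps.sum(axis=1).mean()
    return 5.0 <= avg_jumps <= 15.0
```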
Step 2.4, correcting the license plate;
whether the license plate is inclined is detected; an inclined plate directly affects license plate segmentation and the final recognition accuracy, so the image must be rotation-corrected.
For oblique license plate images, we sometimes have to leave the oblique images with some background impurities in order to ensure the integrity of the license plate. The quality of pictures shot under different environmental conditions is different, some pictures have high background brightness, some pictures have darkness and no light, and some pictures are also interfered by noise, so that the uncertain factors directly influence the processing effect of the pictures and cause interference on various designed algorithms.
The Hough transform method is adopted for tilt correction, which improves the accuracy and robustness of tilted-plate detection and correction. The steps of plate tilt detection and correction are as follows:
step 2.4.1, starting from column i (i = 1, 2, ..., n) of the image and searching from top to bottom until the first point with value "1" is found; mark it as (x_i, y_i);
step 2.4.2, set i = i + 1 and repeat step 2.4.1 until i = n, the width of the license plate image, so that a plate of width n yields n points, giving a set of line equations:
y = p·x + q,
where p is the slope of the line and q its intercept, with p in the range [−20, 20]. A Hough transform is applied to the n points; in parameter space the largest subset of collinear points is found from the accumulator values, and the slope of the line through that subset is taken as the slope p_s of the upper boundary. Similarly, the slope p_x of the lower boundary is obtained, and their average is taken:
p_0 = (p_s + p_x) / 2.
The license plate is then tilt-corrected according to the determined slope p_0;
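A sketch of the tilt detection and correction of step 2.4 using OpenCV's Hough transform. Estimating the angle from the strongest detected line and rotating about the image centre is one possible realization under these assumptions, not necessarily the exact procedure of the patent.

```python
import cv2
import numpy as np

def deskew_plate(plate_gray, edge_img):
    lines = cv2.HoughLines(edge_img, 1, np.pi / 180, threshold=60)
    if lines is None:
        return plate_gray                              # nothing detected, keep the image
    rho, theta = lines[0][0]                           # strongest line (e.g. a plate boundary)
    angle_deg = np.degrees(theta) - 90.0               # deviation from the horizontal
    h, w = plate_gray.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(plate_gray, rot, (w, h))     # tilt correction by the estimated angle
```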
step 2.5, character segmentation;
after correction a regular license plate image is obtained; single characters must be segmented and normalized so that their size equals that of the characters in the template library. The license plate image is bounded by its frame; because the frame is rectangular and its pixels are 1 in both the horizontal and vertical directions, the number of bright points there is far larger than in other parts of the image, and these bright points are used as the basis for accurately segmenting single characters;
the method comprises the following concrete steps:
step 2.5.1: firstly, each column of the binary license plate image is summed and the result is stored in an array C[ ];
step 2.5.2: judging the position of the first character, where C [ j ] is not 0 and j is used as the left boundary of the first character;
step 2.5.3: continuing to search the column with C [ j ] being 0 to the right as the right boundary of the first character;
step 2.5.4: and the process is circulated for 7 times until all characters are segmented.
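Steps 2.5.1-2.5.4 in sketch form: the binary plate is projected column-wise and character boundaries are read off where the projection changes between zero and non-zero.

```python
import numpy as np

def segment_characters(plate_binary, expected=7):
    col_sum = (plate_binary > 0).sum(axis=0)        # step 2.5.1: accumulate each column into C[]
    chars, j = [], 0
    w = plate_binary.shape[1]
    while j < w and len(chars) < expected:          # step 2.5.4: loop until all characters are cut
        while j < w and col_sum[j] == 0:            # skip gaps between characters
            j += 1
        left = j                                    # step 2.5.2: first non-zero column = left boundary
        while j < w and col_sum[j] != 0:
            j += 1
        if j > left:                                # step 2.5.3: next zero column = right boundary
            chars.append(plate_binary[:, left:j])
    return chars
```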
Step 2.6, the characters are normalized,
the character normalization is to convert the character image obtained after the segmentation into a picture with uniform size,
Assume that the size of a single character picture before normalization is m_0 × n_0 and that it is normalized to m × n. The compression ratio of the image in the x-axis direction is
f_x = m / m_0,
and the compression ratio in the y-axis direction is
f_y = n / n_0.
The relationship between the original point (x, y) and the point (x', y') in the new image is:
x' = f_x · x,
y' = f_y · y,
and the gray value at point (x', y') in the new image is
f'(x', y') = f(x'/f_x, y'/f_y).
In the original image, the point (x'/f_x, y'/f_y) falls between the following four points:
P1: ([x'/f_x], [y'/f_y]), P2: ([x'/f_x]+1, [y'/f_y]), P3: ([x'/f_x], [y'/f_y]+1), P4: ([x'/f_x]+1, [y'/f_y]+1),
where [·] denotes the rounding (floor) operation.
Then the process of the first step is carried out,
Figure BDA0002351129640000231
Δ x, Δ y represent the amount of positional change between f (x, y) and f ' (x ', y ').
Step 2.7, character recognition;
character recognition is performed by the template matching method: the single character is normalized so that its size equals that of the template, and the character is subtracted from the template character; where the pixels are equal the corresponding bit is necessarily 0, so the character template most similar to the character is determined by the number of 0s in the result, and that character is taken as the matched character and output as the recognition result.
The most important advantage of the template matching method is that the realization is simple, and especially under the condition that the segmented character images are regular, the recognition rate is high.
The method comprises the following specific steps:
step 2.7.1, establishing an automatic recognition code table, where the code table is: 0-9, A-Z, and the Chinese province abbreviation characters (Su, Yu, Shan, Lu, Jing, Min, Liao, Zhe, Yue, etc.);
2.7.2, because the first digit of the license plate is a Chinese character, the part is directly matched from the part of the Chinese character;
2.7.3, matching letters between the third fixed positions A-Z of the license plate in a circulating way in the A-Z;
step 2.7.4, the third to seventh digits are letters or numbers, and need to match with all the letter and number templates one by one:
and step 2.7.5, outputting a corresponding result after 7 characters are matched.
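The matching loop of step 2.7 can be sketched as follows; the template dictionaries and the Chinese-character matcher (hanzi_matcher, letter_tpl, alnum_tpl) are assumed to be prepared in advance and are named here only for illustration. The Chinese-character matcher is described afterwards as a neural network.

import numpy as np

def match_character(char_img, templates):
    """templates: dict mapping a label to a binary template of the same size."""
    best_label, best_zeros = None, -1
    for label, tpl in templates.items():
        diff = np.abs(char_img.astype(int) - tpl.astype(int))
        zeros = int(np.count_nonzero(diff == 0))   # equal pixels give 0 in the difference
        if zeros > best_zeros:
            best_label, best_zeros = label, zeros
    return best_label

def recognize_plate(chars, hanzi_matcher, letter_tpl, alnum_tpl):
    result = [hanzi_matcher(chars[0])]                           # step 2.7.2: Chinese character
    result.append(match_character(chars[1], letter_tpl))         # step 2.7.3: letter A-Z
    result += [match_character(c, alnum_tpl) for c in chars[2:7]]  # step 2.7.4: letters/digits
    return "".join(result)                                       # step 2.7.5: output the result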
The Chinese-character matching in step 2.7.2 uses neural network matching.
All layers contain trainable parameters (weights); the input is a 32 × 32 pixel image, larger than the largest character in the database (at most 20 × 20 pixels centred in a 28 × 28 field), so that the receptive field of the last convolutional layer forms a 20 × 20 region in the centre of the 32 × 32 input;
in the following, a convolutional layer is labelled Cx and a sub-sampling layer is labelled Sx, where x is the layer index;
each layer (convolutional or down-sampling) has several feature maps with trainable coefficients; each feature map is responsible for extracting one feature of the input and consists of multiple neurons;
layer C1 is a convolutional layer with 6 feature maps of size 28 × 28; each neural unit in each feature map is connected to a 5 × 5 neighbourhood in the input layer, giving maps of size 28 × 28; C1 has 6 filters, and the number of trainable parameters of C1 is:
(5×5+1)×6=156,
the number of connections to the input layer is:
156×(28×28)=122304,
the S2 layer is a sub-sampling layer with 6 signatures of size 14 × 14, each neuron in each signature is connected to a 2 × 2 neighborhood in the corresponding signature in C1, the S2 layer has 12 trainable parameters and 5580 connections;
between the convolutional layer and the sub-sampling layer, each unit of S2 sums four inputs from the corresponding C1 feature map, multiplies the sum by a trainable coefficient, adds a trainable bias, and passes the result through an activation function to S2; each feature map of S2 therefore has half the rows and half the columns of C1, i.e. 1/4 of its size;
the C3 layer is a convolutional layer with 16 feature maps of size 10 × 10; each cell of each feature map is connected to several 5 × 5 neighbourhoods at identical positions in a subset of the S2 feature maps; C3 has 1516 trainable parameters and 151600 connections; convolving the previous layer with a 5 × 5 kernel yields a feature map of 10 × 10 neurons, and since C3 has 16 different convolution kernels there are 16 feature maps;
each C3 feature map is connected to a combination of S2 feature maps: convolving the S2 layer with the 5 × 5 kernels yields the 16 feature maps;
the S4 level is a sub-sampling level with 16 feature maps of size 5 × 5, each cell in each feature map is connected to a 2 × 2 neighborhood in the corresponding feature map in the C3 level, the S4 level has 32 trainable parameters, and the number of trainable connections between the S4 level and the previous level is 2000;
the C5 layer is a convolutional layer with 240 feature maps, unlike the connection from the S2 layer to the C3 layer, each cell of the C5 layer is connected to 5 × 5 neighborhoods on all feature maps of S4, the feature map sizes of the S4 layer and the C5 layer are both 5 × 5, and the connection between them is complete;
the F6 layer has 84 units and is fully connected to the previous layer (C5); F6 computes the dot product between its input vector and its weight vector and adds a bias;
the weighted sum of unit i, denoted αi, is then passed through a sigmoid() activation function to produce the state xi of unit i:
xi=sigmoid(αi),
the compression function tanh () is a scaled-down hyperbolic tangent function:
F(αi)=A×tanh(S×αi),
where A is the amplitude of the function and S determines its slope at the origin; F(αi) is an odd function with horizontal asymptotes at +A and -A;
the output layer consists of Euclidean radial basis function (RBF) units, one unit per character class, 72 units in total, each with 84 inputs. The output of each RBF unit yi is calculated as follows:
yi = Σj (xj - wij)²,
where wij are the parameters of the RBF unit,
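An illustrative PyTorch sketch of a LeNet-style network with the layer sizes listed above (C1-F6 and a 72-way output) follows; the pooling layers and the final linear layer are simplifications of the sub-sampling and RBF output described in the text, and the whole block is an assumed implementation, not the patented code.

import torch
import torch.nn as nn

class HanziLeNet(nn.Module):
    def __init__(self, num_classes=72):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # C1: 6 feature maps of 28x28 from the 32x32 input
            nn.Tanh(),
            nn.AvgPool2d(2),                   # S2: 6 maps of 14x14
            nn.Conv2d(6, 16, kernel_size=5),   # C3: 16 maps of 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                   # S4: 16 maps of 5x5
            nn.Conv2d(16, 240, kernel_size=5), # C5: 240 maps of 1x1 (effectively fully connected)
            nn.Tanh(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(240, 84),                # F6: 84 units
            nn.Tanh(),
            nn.Linear(84, num_classes),        # stand-in for the 72 RBF output units
        )

    def forward(self, x):                      # x: (batch, 1, 32, 32) grayscale characters
        return self.classifier(self.features(x))

# Example: scores = HanziLeNet()(torch.randn(1, 1, 32, 32))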
The letter matching in step 2.7.3 and the number matching in step 2.7.4 use template matching.
A part of the search image is selected as the template. The search sub-image is defined as Si,j(m, n), where (m, n) are the coordinates of a pixel in the search image, and the template is defined as T(mt, nt), where (mt, nt) are the coordinates of a pixel in the template. The centre (or origin) of the template T(mt, nt) is moved to each point (i, j) of the search image, and the similarity between Si,j(m, n) and T(mt, nt) is calculated over the whole area spanned by the template.
The search range is: 1 < i < W - M, 1 < j < H - N,
wherein W and H are the width and height of the license plate image.
By comparing the similarity of T(mt, nt) and Si,j(m, n), the template matching process is completed, giving the matching degree D(i, j):
D(i, j) = ΣmΣn [Si,j(m, n) - T(m, n)]²,
M and N are the width and height of the template;
Expanding gives:
D(i, j) = ΣΣ Si,j(m, n)² - 2 ΣΣ Si,j(m, n)·T(m, n) + ΣΣ T(m, n)²,
When the value of the degree of matching D (i, j) is minimum, the target is found.
The template matching algorithm is computationally simple and is suitable for quickly searching for and matching simple characters.
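For illustration, the matching degree D(i, j) can be computed with the following sketch, which scans the search range and keeps the position with the minimal sum of squared differences; cv2.matchTemplate with the TM_SQDIFF method computes the same quantity far more efficiently.

import numpy as np

def best_match(search_img, template):
    H, W = search_img.shape
    N, M = template.shape                       # template height N, width M
    best, best_pos = None, None
    for i in range(H - N + 1):
        for j in range(W - M + 1):
            sub = search_img[i:i + N, j:j + M].astype(np.float64)
            d = np.sum((sub - template) ** 2)   # D(i, j)
            if best is None or d < best:
                best, best_pos = d, (i, j)
    return best_pos, best                        # target found where D(i, j) is minimal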
The above-described embodiment merely represents one embodiment of the present invention and is not to be construed as limiting its scope. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (15)

1. A parking payment system comprises a charging system, a vehicle owner terminal, a parking platform and an ETC clearing system,
registering, logging in a platform account and opening an ETC payment function of a specified license plate number on the parking platform through the vehicle owner terminal;
the charging system comprises a license plate recognition camera and a control server,
the license plate recognition camera monitors the motion state of the vehicle on the lane, the license plate recognition camera captures the vehicle head when the vehicle is about to enter a parking area, a vehicle image is obtained, image recognition is carried out on the vehicle and the license plate, a license plate number is generated, and the license plate number is sent to the control server;
the control server calculates parking time according to the vehicle residence time and generates a payment order of parking fee, and the control server sends the license plate number and the payment order to the parking platform for ETC payment;
and the parking platform is matched with the platform account according to the license plate number, judges whether the platform account corresponding to the license plate number opens an ETC payment function or not, and judges whether the license plate number is in an ETC blacklist or not.
2. The parking payment system as claimed in claim 1, wherein: the parking platform sends the license plate number and the payment order information to the ETC clearing system for the fee deduction operation, and the parking platform sends the information that ETC payment has succeeded to the control server.
3. A parking payment system as recited in claim 1, wherein: the ETC clearing system deducts the fee due to the toll-collecting party's account from the vehicle owner's ETC account or the bank card bound to ETC according to the business order, and the ETC clearing system stores vehicle information including the license plate number, owner information, ETC electronic tag information and ETC-bound account information.
4. A parking payment system as recited in claim 1, wherein: the parking area is an on-road parking space, and the license plate number and parking duration of the on-road parking space are acquired in the following two modes:
1) a vehicle detector detects that a vehicle has entered a berth; the license plate recognition camera is the camera of a handheld PDA device, with which a toll collector automatically recognizes and registers the license plate number of the vehicle when it parks, while the vehicle detector records the times at which the vehicle enters and leaves the on-road berth so that the parking duration of the vehicle can be calculated;
2) when the vehicle enters and leaves the on-road berth, the license plate recognition camera automatically recognizes the license plate number of the vehicle and detects the time when the vehicle enters and leaves the on-road berth.
5. The parking payment system of claim 4, wherein: the vehicle detector in mode 1) comprises: a geomagnetic detector, an ultrasonic detector, a radar detector, a laser detector, or a combination of any two of them.
6. The parking payment system of claim 4, wherein: in mode 2), each license plate recognition camera covers 1 to 10 berths, and the license plate recognition camera comprises: a high-position camera mounted at a height of 6 meters, a mid-position camera mounted at a height of 3 meters, a video pile 1 meter high, or a low-position camera mounted on the curbstone.
7. The parking payment system of claim 4, wherein: the charging system for on-road berths has no local server; the vehicle detector sends its detection result directly to a remote control server through an Internet-of-Things card, the handheld PDA device sends its recognition result directly to the remote control server through a 4G card, and the license plate recognition camera sends its recognition result directly to the remote control server through an Internet-of-Things card, a 4G card or optical fibre; the control server is a cloud server, and the cloud server generates the payment order.
8. A parking payment system as recited in claim 1, wherein: the parking area is a parking lot and the charging system further comprises a channel gate; when the license plate recognition camera at the entrance of the parking lot is video-triggered by capturing a vehicle head, it is judged that the vehicle has arrived at the entrance of the parking lot, vehicle and license plate image recognition is started, a vehicle image is acquired and recognized, a license plate number is generated and sent to the control server, and the channel gate is controlled to open after verification;
when the license plate recognition camera at the exit of the parking lot is video-triggered by capturing a vehicle head, it is judged that the vehicle has arrived at the exit of the parking lot, license plate image recognition is started, a vehicle image is acquired, the vehicle and license plate are recognized, a license plate number is generated and sent to the control server, and the channel gate is controlled to open after verification.
9. A parking payment system as recited in claim 8, wherein: the control server is deployed locally at the closed off-street parking lot, the license plate recognition camera sends the recognition result to the control server, and the control server generates the payment order.
10. A parking payment system as recited in claim 9, wherein: the control server is a cloud server arranged in a remote place, the license plate recognition camera directly sends a recognition result to the cloud server, and the cloud server generates a payment order.
11. A parking payment method using the parking payment system of any one of claims 1-10, comprising the steps of:
step 1, a vehicle owner registers on the parking platform through the vehicle owner terminal, logs in the platform account and opens an ETC payment function of a specified license plate number;
step 2, when a vehicle is about to enter the parking area, the license plate recognition camera acquires images of the vehicle and the license plate, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to the control server;
step 3, when the vehicle is about to leave the parking area, the license plate recognition camera acquires a vehicle image, performs image recognition on the vehicle and the license plate, generates a license plate number and sends the license plate number to the control server, the control server generates a parking fee payment order according to the entering and leaving information, and sends the license plate number and the parking fee payment order to the parking platform;
step 4, the parking platform matches with the platform account according to the license plate number;
step 5, the parking platform sends the license plate number and the payment order information to the ETC clearing system for fee deduction, and the parking platform sends the information that ETC payment is successful to the control server;
and 6, the parking platform sends a fee deduction success signal to the control server.
12. The parking payment method according to claim 11, wherein step 4 further comprises judging whether the platform account corresponding to the license plate number has the ETC payment function opened; if not, ETC payment failure information is sent to the control server; the parking platform also judges whether the license plate number is in the ETC blacklist, and if so, ETC payment failure information is sent to the control server; otherwise the next step is entered.
13. The parking payment method as recited in claim 11, wherein: the parking platform maintains the ETC blacklist through the ETC clearing system, and the parking payment process is completed automatically without any operation by the vehicle owner.
14. The parking payment method as recited in claim 11, wherein: the specific process of vehicle image recognition in step 2 is as follows:
step 1.1, building the video array:
the video is processed to obtain a video array;
step 1.2, the original image is preprocessed,
a classic Gaussian filtering algorithm is used to remove noise from the video array images; the Gaussian filter blurs the image to a certain extent while filtering the noise, and the degree of blurring is controlled by the size of the Gaussian filter window;
step 1.3, detecting the vehicle target,
vehicle target detection uses the PBAS algorithm, and the background and foreground images of the vehicle are preprocessed as follows:
step 1.3.1, establishing a background model,
the background modeling method of the PBAS algorithm takes the gradient magnitudes and pixel values in the previous N frames of the video array as the background model;
step 1.3.2, foreground detection is carried out through segmentation decision,
to decide whether a given pixel belongs to the foreground or to the background, the PBAS algorithm uses a background model B(xi) made up of background pixel values,
B(xi)={B1(xi),...,Bk(xi),...,BN(xi)},
for the pixel xi of the current frame k, with pixel value Bk(xi): if the average difference between this value and the background pixel values of the other N-1 frames is less than a predetermined threshold T(xi), then the pixel xi belongs to the background;
step 1.3.3, updating the background template,
the sample data to be replaced are determined at random, the data samples in the neighbourhood of the pixel are updated by random selection, the update rate is changed and adjusted automatically, and the sample data set of the neighbourhood is updated with the new pixel value of that neighbourhood;
step 1.3.4, judging the updating of the threshold value,
the threshold R(xi) should change with how frequently the background region changes: the more complex the background, the higher the threshold, and the simpler the background, the lower the threshold. The change of the background is estimated as the average of the minimal distances between the pixel and its background samples,
d̄min(xi) = (1/N) Σk dmin,k(xi),
the threshold T(xi) is adapted dynamically:
T(xi) = T(xi) + Tinc/dec / d̄min(xi) when xi is foreground, and T(xi) = T(xi) - Tinc/dec / d̄min(xi) when xi is background,
wherein Tinc/dec is a fixed parameter value;
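A simplified, purely illustrative Python sketch of this kind of per-pixel, sample-based background test is given below; the number of samples, the distance test and the update probability are assumptions and do not reproduce the exact PBAS update rules of steps 1.3.1-1.3.4.

import numpy as np

class SampleBackgroundModel:
    def __init__(self, first_frame, n_samples=20, threshold=30, update_prob=1/16):
        # keep N copies of the first frame as the initial background sample set
        self.samples = np.repeat(first_frame[None, ...], n_samples, axis=0).astype(np.int16)
        self.threshold = threshold
        self.update_prob = update_prob

    def apply(self, frame):
        frame = frame.astype(np.int16)
        dist = np.abs(self.samples - frame[None, ...])       # distance to every stored sample
        matches = (dist < self.threshold).sum(axis=0)
        foreground = matches < 2                              # too few close samples -> foreground
        # random, conservative update of the background samples (background pixels only)
        update = (~foreground) & (np.random.rand(*frame.shape) < self.update_prob)
        idx = np.random.randint(0, self.samples.shape[0])
        self.samples[idx][update] = frame[update]
        return foreground.astype(np.uint8) * 255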
step 1.4, foreground object pretreatment
A median filtering algorithm is applied to the foreground target image: within a sliding window of variable width, the gray values of the image pixels are sorted according to a fixed rule, and the gray value of the original central pixel is replaced by the median computed from those pixels;
step 1.5, identifying a vehicle target;
after the foreground and background have been extracted for the vehicle target, the vehicle target is identified. The identification algorithm uses BLOB matching: a binary image is obtained by the vehicle detection algorithm, the BLOBs of moving vehicles are extracted by treating identical connected pixels as a connected domain, and the centre, area, position information and moments of each BLOB are calculated; the vehicle is identified by contour recognition, and when the vehicle reaches the target region of interest the image of the recognized vehicle is stored locally. Specifically:
step 1.5.1, performing linear transformation on the binary image, where a formula of the linear transformation may be expressed as dst (x, y, k) ═ scale × src (x, y, k) + shift, where dst (x, y, k) represents a target pixel value of a k-th frame, src (x, y, k) represents a source pixel value, scale represents a slope, and shift represents an intercept;
(1) when scale is more than 1, increasing the contrast between pixels in the image;
the pixel values in the image are increased after the linear transformation image processing, so that the overall image display effect of the processed image is enhanced compared with that of the original image.
(2) When scale is 1, adjusting the brightness of the image;
(3) when 0 < scale < 1, the effect is just opposite to that of scale > 1, and the contrast of each pixel in the image and the overall display effect are weakened;
(4) when scale < 0, the effect is inverted: areas that are brighter in the original image become darker after processing, and darker areas become brighter.
Step 1.5.2, morphological operations are used to filter noise from the linearly transformed image and to keep the connected domain that best satisfies the conditions; that is, image components including boundary, convex hull and skeleton information are extracted, an opening operation (erosion followed by dilation) is applied, tiny noise in the image is eliminated, and the image boundary is smoothed;
step 1.5.3, extracting BLOB features, numbering BLOBs,
BLOB features include: the area of the region and the location of the centroid, wherein,
(1) the area is the number of pixels in the blob region: the extracted binary image is scanned, the target region is marked, and its size is measured by the area S(), i.e. the number of pixels in the blob region:
S(Ri) = Σ(x,y)∈Ri f(x, y),
Ri(x, y) is the ith BLOB block region, and f (x, y) is a binary function;
(2) the centroid position: the centroid of the target region is obtained, and the centroid data are processed by the identification algorithm to obtain the historical position of the vehicle target and its position after identification;
if the centroid position of Ri(x, y) is (x0, y0), then
x0 = M10/M00, y0 = M01/M00,
wherein M10, M01 and M00 are moments of the BLOB block;
step 1.5.4, the vehicle object is matched and identified using the BLOB features: when a numbered BLOB block matches the BLOB features in adjacent images it is marked as belonging to a track group, and the track group regions are numbered, thereby realizing vehicle object identification.
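For illustration only, BLOB extraction with the area and centroid features described in steps 1.5.2-1.5.4 could be sketched with OpenCV connected components as follows; the minimum-area threshold is an assumed value.

import cv2
import numpy as np

def extract_vehicle_blobs(binary_mask, min_area=800):
    # morphological opening removes small noise and smooths the boundary (step 1.5.2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    cleaned = cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    blobs = []
    for i in range(1, n):                              # label 0 is the background
        area = int(stats[i, cv2.CC_STAT_AREA])         # S(): number of pixels in the blob
        if area >= min_area:
            x = int(stats[i, cv2.CC_STAT_LEFT]); y = int(stats[i, cv2.CC_STAT_TOP])
            w = int(stats[i, cv2.CC_STAT_WIDTH]); h = int(stats[i, cv2.CC_STAT_HEIGHT])
            blobs.append({"id": i, "area": area,
                          "centroid": tuple(centroids[i]),   # (x0, y0) = (M10/M00, M01/M00)
                          "bbox": (x, y, w, h)})
    return blobs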
15. The parking payment method as recited in claim 11, wherein: the specific process of license plate image recognition in step 2 is as follows:
step 2.1, preprocessing a license plate picture;
step 2.1.1, carrying out image graying treatment,
graying makes the three components red (R), green (G) and blue (B) equal; the weighted average method is used: according to the different weights of R, G and B, suitable weights are chosen and the weighted average of the three values is taken, i.e.
f(x, y) = Wr·R(x, y) + Wg·G(x, y) + Wb·B(x, y),
where f(x, y) is the grayscale image function, Wr is the weight of the R component R(x, y) at point (x, y), Wg is the weight of the G component G(x, y) at point (x, y), and Wb is the weight of the B component B(x, y) at point (x, y);
since the human eye has different sensitivities to red, green and blue: sensitivity to green is the highest, so G has the largest weight, Wg = 0.59; sensitivity to blue is the lowest, so B has the smallest weight, Wb = 0.11; and sensitivity to red lies in between, so Wr = 0.30; the gray image obtained with these weights gives the best effect;
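A minimal sketch of the weighted-average graying with Wr = 0.30, Wg = 0.59, Wb = 0.11 (assuming an OpenCV-style BGR input) is:

import numpy as np

def to_gray(bgr_image):
    b = bgr_image[:, :, 0].astype(np.float32)
    g = bgr_image[:, :, 1].astype(np.float32)
    r = bgr_image[:, :, 2].astype(np.float32)
    gray = 0.30 * r + 0.59 * g + 0.11 * b      # f(x, y) = Wr*R + Wg*G + Wb*B
    return gray.astype(np.uint8)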
step 2.1.2, the image is enhanced,
using histogram equalization to enhance the image, wherein the histogram equalization treatment is to uniformly disperse the values of the areas with concentrated gray values in the original gray image into the whole gray range, and redistribute the pixel values of the gray image to ensure that the pixel values are uniformly distributed;
step 2.1.3, the binary operation,
the license plate image consists of two parts, the foreground characters and the background plate; when the light on the plate is evenly distributed, the background and the characters are separated by setting a single threshold with the global thresholding method,
wherein the original gray image function is denoted f(x, y), the binarized image function is denoted g(x, y), and T is the global threshold used to distinguish the object from the background; the binarization is expressed mathematically as:
g(x, y) = 1 when f(x, y) ≥ T, and g(x, y) = 0 when f(x, y) < T,
step 2.1.4, edge detection,
edge detection uses the Roberts operator, which approximates the gradient by taking differences along a pair of mutually perpendicular directions, replacing the gradient value by the difference between two diagonally adjacent pixel values;
let the gray scale image function f (x, y) be an input with integer pixel coordinates, whose gradient is defined as:
G[f(x, y)] = [(∂f/∂x)² + (∂f/∂y)²]^(1/2),
the above differentiation process is approximated by the following difference, where (x +1, y +1) is a pixel point in the oblique direction of (x, y), then
G[f(x, y)] ≈ {[f(x, y) - f(x+1, y+1)]² + [f(x+1, y) - f(x, y+1)]²}^(1/2),
The simplification results in:
gx(x, y) = f(x, y) - f(x+1, y+1),
gy(x, y) = f(x+1, y) - f(x, y+1),
the template form of the two formulas is as follows:
w1 = [1 0; 0 -1], w2 = [0 1; -1 0],
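An illustrative sketch of the Roberts cross operator with the two templates above, using assumed OpenCV filtering calls:

import cv2
import numpy as np

def roberts_edges(gray):
    k1 = np.array([[1, 0], [0, -1]], dtype=np.float32)
    k2 = np.array([[0, 1], [-1, 0]], dtype=np.float32)
    gx = cv2.filter2D(gray.astype(np.float32), -1, k1)   # f(x, y) - f(x+1, y+1)
    gy = cv2.filter2D(gray.astype(np.float32), -1, k2)   # f(x+1, y) - f(x, y+1)
    return cv2.convertScaleAbs(np.abs(gx) + np.abs(gy))  # approximate gradient magnitude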
step 2.2, judging the license plate,
after the license plate picture has been preprocessed, it must be judged whether a license plate is present in the picture; this is realized with an SVM (support vector machine), and the specific process is as follows:
step 2.2.1, generating training data,
the vehicle pictures used as training data come from the Internet and from local shooting; the screened vehicle pictures are fed to the license plate location algorithm to generate rectangular image blocks;
step 2.2.2, classification,
the rectangular image blocks are divided into license plate image blocks and non-license-plate image blocks and are labelled; 70% of them form the training data and the remaining 30% form the test data;
step 2.2.3, training,
the SVM is trained with the 70% of training data to obtain a comparison template;
step 2.2.4, testing,
the comparison template is evaluated against the 30% of test data; if the recognition rate is higher than 90%, it becomes the reference template, otherwise the process returns to step 2.2.1 and training is repeated;
step 2.2.5, judging,
the preprocessed license plate image is compared with the reference template to judge whether it is a license plate image; if not, the next video array image is taken and judged again; if so, the next step is entered;
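A sketch of the plate/non-plate judgment of steps 2.2.1-2.2.5 is given below; the feature extraction (resize and flatten) and the scikit-learn SVM are assumptions used only to illustrate the 70/30 split and the 90% accuracy gate.

import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def to_feature(patch):
    return cv2.resize(patch, (64, 16)).flatten() / 255.0

def train_plate_judge(patches, labels):            # labels: 1 = plate block, 0 = non-plate block
    X = np.array([to_feature(p) for p in patches])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, stratify=y)
    svm = SVC(kernel="rbf").fit(X_tr, y_tr)         # step 2.2.3: train the comparison template
    if svm.score(X_te, y_te) < 0.90:                # step 2.2.4: recognition-rate gate
        raise RuntimeError("recognition rate below 90%, collect more data and retrain")
    return svm

def is_plate(svm, patch):                           # step 2.2.5: judge a preprocessed image
    return bool(svm.predict(to_feature(patch)[None, :])[0])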
step 2.3, positioning the license plate,
step 2.3.1, roughly positioning the license plate,
the original grayscale image function f(x, y) has the following characteristics: the contrast between the plate background and the alphanumeric characters is large; the gray level in the license plate region changes frequently in the horizontal direction; and the license plate is mounted at the bottom of the vehicle, so its position is in the lower part of the whole image.
Based on these features, a first-order difference operation in the horizontal direction is used, which makes regions with frequent gray-level changes prominent. The first-order difference is:
f'(x,y)=f(x,y)-f(x,y+1)
wherein x = 1, 2, ..., n and y = 1, 2, ..., m, where m and n are the height and width of the image; the result is binarized, which makes the license plate region more prominent and removes most of the background interference;
a downward horizontal scan is then performed: since the license plate consists of 7 characters, the number of edge points in each row of the plate region is typically greater than 14. Based on the characteristics of Chinese license plates and on test results, the edge-point threshold is set to 15; a qualifying license plate band is found, and a sub-image of the original grayscale image is cut in the horizontal direction to obtain a coarse license plate location image;
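The coarse positioning of step 2.3.1 can be sketched as follows; the edge binarization threshold is an assumed value, while the row edge-point threshold of 15 follows the text.

import numpy as np

def coarse_locate(gray, edge_thresh=40, row_points=15):
    diff = np.abs(gray[:, :-1].astype(int) - gray[:, 1:].astype(int))  # f(x, y) - f(x, y+1)
    edges = (diff > edge_thresh).astype(np.uint8)                      # binarized edge map
    counts = edges.sum(axis=1)                                         # edge points per row
    rows = np.flatnonzero(counts >= row_points)
    if rows.size == 0:
        return None
    top, bottom = rows.min(), rows.max()
    return gray[top:bottom + 1, :]          # horizontal sub-image containing the plate band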
step 2.3.2, the license plate is accurately positioned,
the detailed process of accurate positioning is as follows:
step 2.3.2.1, generating binary gray scale image m (x, y) according to the binarized image function g (x, y),
[the defining formula for m(x, y) is given in the original as an image and is not reproduced in the text]
step 2.3.2.2, identifying candidate regions: the candidate regions are identified and then processed in descending order of size, starting from the largest, entering step 2.3.2.3.
Step 2.3.2.3, determining the geometric structural feature: the aspect ratio R of the selected candidate region is calculated; if R ∈ [2, 5], enter step 2.3.2.4, otherwise take the next candidate region;
step 2.3.2.4, determining the statistical jump-frequency feature: the average jump frequency L of the gray values of the selected candidate region in the horizontal direction is calculated; if L ∈ [5, 15], the license plate position is the selected candidate region and the plate background colour is the colour represented by the gray value; otherwise, return to step 2.3.2.3 and process the next candidate region.
Step 2.4, correcting the license plate;
the method for correcting the inclination by the Hough transformation comprises the following steps of:
step 2.4.1, starting from column i (i = 1, 2, ..., n) of the image and searching from top to bottom until the first point with value "1" is found; this point is marked (xi, yi);
step 2.4.2, let i = i + 1 and repeat step 2.4.1 until i = n, the width of the license plate image; for a plate of width n there are n such points, which give a family of line equations:
y=px+q,
wherein p is the slope of the line and q is its intercept, with p restricted to the range [-20, 20]. A Hough transform is applied to the n points, and the accumulator array in parameter space identifies the largest subset of collinear points; the slope of the line fitted to this subset is taken as the slope ps of the upper boundary. The slope px of the lower boundary is obtained in the same way, and the two are averaged:
p0 = (ps + px)/2,
and the license plate is tilt-corrected according to the determined slope p0;
step 2.5, character segmentation;
after correction a regular license plate image is obtained; the single characters must then be segmented and normalized so that their size matches the characters in the template library. The license plate frame is taken as the boundary: because the frame is rectangular, the rows and columns along the frame contain far more bright (value-1) pixels than any other part of the image, and these bright-pixel counts are used as the basis for accurately segmenting the single characters;
the method comprises the following concrete steps:
step 2.5.1: first, the columns of the binary license plate image are accumulated and the result is stored in an array C[ ];
step 2.5.2: the position of the first character is determined: the first column j where C[j] is not 0 is taken as the left boundary of the first character;
step 2.5.3: the search continues to the right until a column with C[j] equal to 0 is found, which is taken as the right boundary of the first character;
step 2.5.4: the process is repeated 7 times until all characters are segmented;
step 2.6, the characters are normalized,
the character normalization is to convert the character image obtained after the segmentation into a picture with uniform size,
assume that the size of a single character picture before normalization is m0 × n0 and that it is normalized to m × n; the compression ratio of the image in the x-axis direction is
fx = m/m0,
and the compression ratio in the y-axis direction is
fy = n/n0,
The relationship between the original point (x, y) and the point (x ', y') in the new graph is:
x'=fx×x,
y'=fy×y,
the gray value at point (x ', y') in the new map is then:
f'(x', y') = f(x'/fx, y'/fy),
in the original image, the point (x'/fx, y'/fy) falls between the following four points:
P1:([x'/fx],[y'/fy]),P2:([x'/fx]+1,[y'/fy]),P3:([x'/fx],[y'/fy]+1),P4:([x'/fx]+1,[y'/fy]+1),
wherein [ ] denotes taking the integer part (rounding down).
Then,
Δx = x'/fx - [x'/fx], Δy = y'/fy - [y'/fy],
f'(x', y') = (1-Δx)(1-Δy)·f(P1) + Δx(1-Δy)·f(P2) + (1-Δx)Δy·f(P3) + Δx·Δy·f(P4),
Δ x, Δ y represent the amount of positional change between f (x, y) and f ' (x ', y ').
Step 2.7, character recognition;
template matching is used for character recognition: the single character is normalized so that its size is the same as the template, and it is subtracted from each template character; wherever the pixels are equal, the corresponding bit of the difference is necessarily 0. The character template that is most similar is judged from the number of 0s in the result, and the matched character is taken as the recognition output.
The most important advantage of the template matching method is its simple implementation; in particular, when the segmented character images are regular, the recognition rate is high.
The method comprises the following specific steps:
step 2.7.1, an automatic identification code table is established, consisting of: 0-9, A-Z, and the provincial abbreviation Chinese characters (Su, Yu, Shaan, Jin, Lu, Jing, Min, Liao, Zhe, Yue, and so on);
step 2.7.2, because the first digit of the license plate is a Chinese character, this position is matched directly against the Chinese-character part of the code table;
step 2.7.3, the second digit of the license plate is fixed as a letter, so it is matched cyclically against A-Z;
step 2.7.4, the third to seventh digits are letters or numbers and are matched one by one against all letter and number templates;
step 2.7.5, after all 7 characters have been matched, the corresponding result is output.
The Chinese-character matching in step 2.7.2 uses neural network matching.
All layers contain trainable parameters (weights); the input is a 32 × 32 pixel image, larger than the largest character in the database (at most 20 × 20 pixels centred in a 28 × 28 field), so that the receptive field of the last convolutional layer forms a 20 × 20 region in the centre of the 32 × 32 input;
in the following, a convolutional layer is labelled Cx and a sub-sampling layer is labelled Sx, where x is the layer index;
each layer (convolutional or down-sampling) has several feature maps with trainable coefficients; each feature map is responsible for extracting one feature of the input and consists of multiple neurons;
layer C1 is a convolutional layer with 6 feature maps of size 28 × 28; each neural unit in each feature map is connected to a 5 × 5 neighbourhood in the input layer, giving maps of size 28 × 28; C1 has 6 filters, and the number of trainable parameters of C1 is:
(5×5+1)×6=156,
the number of connections to the input layer is:
156×(28×28)=122304,
the S2 layer is a sub-sampling layer with 6 signatures of size 14 × 14, each neuron in each signature is connected to a 2 × 2 neighborhood in the corresponding signature in C1, the S2 layer has 12 trainable parameters and 5580 connections;
between the convolutional layer and the sub-sampling layer, each unit of S2 sums four inputs from the corresponding C1 feature map, multiplies the sum by a trainable coefficient, adds a trainable bias, and passes the result through an activation function to S2; each feature map of S2 therefore has half the rows and half the columns of C1, i.e. 1/4 of its size;
the C3 layer is a convolutional layer with 16 feature maps of size 10 × 10; each cell of each feature map is connected to several 5 × 5 neighbourhoods at identical positions in a subset of the S2 feature maps; C3 has 1516 trainable parameters and 151600 connections; convolving the previous layer with a 5 × 5 kernel yields a feature map of 10 × 10 neurons, and since C3 has 16 different convolution kernels there are 16 feature maps;
each C3 feature map is connected to a combination of S2 feature maps: convolving the S2 layer with the 5 × 5 kernels yields the 16 feature maps;
the S4 level is a sub-sampling level with 16 feature maps of size 5 × 5, each cell in each feature map is connected to a 2 × 2 neighborhood in the corresponding feature map in the C3 level, the S4 level has 32 trainable parameters, and the number of trainable connections between the S4 level and the previous level is 2000;
the C5 layer is a convolutional layer with 240 feature maps, unlike the connection from the S2 layer to the C3 layer, each cell of the C5 layer is connected to 5 × 5 neighborhoods on all feature maps of S4, the feature map sizes of the S4 layer and the C5 layer are both 5 × 5, and the connection between them is complete;
the F6 layer has 84 units and is fully connected to the previous layer (C5); F6 computes the dot product between its input vector and its weight vector and adds a bias;
the weighted sum of unit i, denoted αi, is then passed through a sigmoid() activation function to produce the state xi of unit i:
xi=sigmoid(αi),
the compression function tanh () is a scaled-down hyperbolic tangent function:
F(αi)=A×tanh(S×αi),
where A is the amplitude of the function and S determines its slope at the origin; F(αi) is an odd function with horizontal asymptotes at +A and -A;
the output layer consists of Euclidean radial basis function (RBF) units, one unit per character class, 72 units in total, each with 84 inputs. The output of each RBF unit yi is calculated as follows:
yi = Σj (xj - wij)²,
where wij are the parameters of the RBF unit,
The letter matching in step 2.7.3 and the number matching in step 2.7.4 use template matching.
A part of the search image is selected as the template. The search sub-image is defined as Si,j(m, n), where (m, n) are the coordinates of a pixel in the search image, and the template is defined as T(mt, nt), where (mt, nt) are the coordinates of a pixel in the template. The centre (or origin) of the template T(mt, nt) is moved to each point (i, j) of the search image, and the similarity between Si,j(m, n) and T(mt, nt) is calculated over the whole area spanned by the template.
The search range is: 1 < i < W - M, 1 < j < H - N,
wherein W and H are the width and height of the license plate image.
By comparing the similarity of T(mt, nt) and Si,j(m, n), the template matching process is completed, giving the matching degree D(i, j):
D(i, j) = ΣmΣn [Si,j(m, n) - T(m, n)]²,
M and N are the width and height of the template;
Expanding gives:
D(i, j) = ΣΣ Si,j(m, n)² - 2 ΣΣ Si,j(m, n)·T(m, n) + ΣΣ T(m, n)²,
when the value of the degree of matching D (i, j) is minimum, the target is found.
CN201911415697.8A 2019-12-31 2019-12-31 Parking payment system and parking payment method Active CN111178291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911415697.8A CN111178291B (en) 2019-12-31 2019-12-31 Parking payment system and parking payment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911415697.8A CN111178291B (en) 2019-12-31 2019-12-31 Parking payment system and parking payment method

Publications (2)

Publication Number Publication Date
CN111178291A true CN111178291A (en) 2020-05-19
CN111178291B CN111178291B (en) 2021-01-12

Family

ID=70652349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911415697.8A Active CN111178291B (en) 2019-12-31 2019-12-31 Parking payment system and parking payment method

Country Status (1)

Country Link
CN (1) CN111178291B (en)

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004084121A1 (en) * 2003-03-17 2004-09-30 Fujitsu Limited Car identifying method and device
US20110164823A1 (en) * 2007-09-05 2011-07-07 ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE of Daejeon,Republic of Korea Video object extraction apparatus and method
CN102231242A (en) * 2011-06-13 2011-11-02 黄卫 Intelligent management system and method for roadside parking
WO2014110629A1 (en) * 2013-01-17 2014-07-24 Sensen Networks Pty Ltd Automated vehicle recognition
CN103426311A (en) * 2013-06-27 2013-12-04 深圳市捷顺科技实业股份有限公司 Handset and vehicle management system
US20160275481A1 (en) * 2013-08-13 2016-09-22 Neology, Inc. Systems and methods for managing an account
CN104766384A (en) * 2015-04-25 2015-07-08 吕曦轩 No-parking network charging system based on vehicle license plate recognition technology
CN105447917A (en) * 2015-12-11 2016-03-30 四川长虹电器股份有限公司 Highway self-service toll collection system based on image identification technology
CN205540964U (en) * 2016-04-07 2016-08-31 浙江同兴技术股份有限公司 Full video identification's parking charging system
CN106600722A (en) * 2016-11-14 2017-04-26 南京积图网络科技有限公司 Toll service device, method and system
CN106485801A (en) * 2016-12-30 2017-03-08 伟龙金溢科技(深圳)有限公司 Vehicles management method based on ETC and Car license recognition, device and its system
CN106952352A (en) * 2017-03-23 2017-07-14 腾讯科技(深圳)有限公司 A kind of non-stop charging method and vehicle toll collection system
CN107610252A (en) * 2017-08-16 2018-01-19 齐鲁交通信息有限公司 Freeway toll mobile-payment system and method with Car license recognition is applied based on mobile terminal
CN108198256A (en) * 2017-12-29 2018-06-22 北京工业大学 Trackside intelligent parking toll collection system based on ETC technologies
CN108053504A (en) * 2018-01-22 2018-05-18 智慧互通科技有限公司 A kind of parking lot access management system and method based on pay this extra mode
CN108550194A (en) * 2018-02-23 2018-09-18 北京是捷科技有限公司 A kind of curb parking charging method and system based on ETC
CN108665569A (en) * 2018-05-14 2018-10-16 佛山市洁宇信息科技有限公司 A kind of vehicle non-parking charge payment system and its method
CN208548054U (en) * 2018-08-07 2019-02-26 沈阳静态交通投资建设管理有限公司 Road-surface concrete field identification system
CN109271904A (en) * 2018-09-03 2019-01-25 东南大学 A kind of black smoke vehicle detection method based on pixel adaptivenon-uniform sampling and Bayesian model
CN109544694A (en) * 2018-11-16 2019-03-29 重庆邮电大学 A kind of augmented reality system actual situation hybrid modeling method based on deep learning
CN209281637U (en) * 2019-02-18 2019-08-20 河北省交通规划设计院 Merge the LTE-V bus or train route cooperative device of ETC technology
CN109948474A (en) * 2019-03-04 2019-06-28 成都理工大学 AI thermal imaging all-weather intelligent monitoring method
CN109903582A (en) * 2019-03-15 2019-06-18 北京筑梦园科技有限公司 A kind of curb parking management method, storage medium, processor and system
CN110310378A (en) * 2019-07-24 2019-10-08 广东艾科智泊科技股份有限公司 A kind of open type parking ground parking charge method and system based on double mirror
CN110415367A (en) * 2019-07-31 2019-11-05 中国工商银行股份有限公司 Vehicle mobile-payment system and method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Yin Chuan: "Design and Implementation of a License Plate Recognition System Based on the Android System", China Master's Theses Full-text Database, Information Science and Technology *
Li Jianhua: "Research and Implementation of a License Plate Recognition Method Based on Image Processing Technology", China Master's Theses Full-text Database, Information Science and Technology *
Shen Xiaojun et al.: ""Internet+" Smart Parking Mode Supporting Urban Traffic Planning and Development", Proceedings of the 2019 Conference on Urban Development and Planning *
Wang Jing: "Research on License Plate Recognition Technology Based on Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
Fan Fuhai et al.: "Research on the Framework of an On-road Parking Management System Based on Video Piles", Urban Transport of China *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111710053A (en) * 2020-05-16 2020-09-25 山东高速信息工程有限公司 ETC-based toll collection device and method for vehicle-related places
CN111951600B (en) * 2020-07-20 2022-02-22 领翌技术(横琴)有限公司 Parking space and vehicle identification information automatic matching method and parking system
CN111951600A (en) * 2020-07-20 2020-11-17 领翌技术(横琴)有限公司 Parking space and vehicle identification information automatic matching method and parking system
CN111882752A (en) * 2020-07-23 2020-11-03 支付宝(杭州)信息技术有限公司 Payment method, payment system and business system
WO2022017180A1 (en) * 2020-07-23 2022-01-27 支付宝(杭州)信息技术有限公司 Payment method, payment system, and service system
US11544970B2 (en) 2020-07-23 2023-01-03 Alipay (Hangzhou) Information Technology Co., Ltd. Payment methods, payment systems and service systems
CN112070081A (en) * 2020-08-20 2020-12-11 广州杰赛科技股份有限公司 Intelligent license plate recognition method based on high-definition video
CN112070081B (en) * 2020-08-20 2024-01-09 广州杰赛科技股份有限公司 Intelligent license plate recognition method based on high-definition video
CN111915750A (en) * 2020-08-26 2020-11-10 云南昆船数码科技有限公司 Roadside parking charging system and charging method thereof
CN111915751B (en) * 2020-09-10 2022-06-03 广州德中科技有限公司 Roadside parking charging method, device, equipment and storage medium
CN111915751A (en) * 2020-09-10 2020-11-10 广州德中科技有限公司 Roadside parking charging method, device, equipment and storage medium
CN112348979A (en) * 2020-09-16 2021-02-09 深圳市顺易通信息科技有限公司 ETC (electronic toll collection) non-inductive parking fee payment method and related equipment
CN112687109A (en) * 2020-11-19 2021-04-20 泰州锐比特智能科技有限公司 Lane change reminding system and method applying cloud storage
CN112907832A (en) * 2021-01-19 2021-06-04 浙江大华技术股份有限公司 Processing method of refueling event, video processing device and storage medium
CN112991809A (en) * 2021-02-03 2021-06-18 齐娟 Urban level digital parking management system
US20220292836A1 (en) * 2021-03-10 2022-09-15 FlashParking, Inc. Method and system for vehicle authentication
CN114882642A (en) * 2021-03-26 2022-08-09 北京华夏易通行科技有限公司 ETC charging and fee deducting technology
CN113365224B (en) * 2021-06-03 2022-08-05 星觅(上海)科技有限公司 Method, device and equipment for detecting vehicle passing abnormity and storage medium
CN113365224A (en) * 2021-06-03 2021-09-07 星觅(上海)科技有限公司 Method, device and equipment for detecting vehicle passing abnormity and storage medium
CN113487752A (en) * 2021-07-01 2021-10-08 西安建筑科技大学 Unattended parking system
CN113689628A (en) * 2021-07-08 2021-11-23 北京北大千方科技有限公司 ETC technology-based refueling payment method, system, device, electronic equipment and medium
CN114495302A (en) * 2021-12-27 2022-05-13 深圳市小马控股有限公司 Vehicle fee deduction method and device, computer equipment and storage medium
CN114283525A (en) * 2022-01-05 2022-04-05 青岛特来电新能源科技有限公司 ETC-based vehicle charging fee payment method, system and storage medium
CN117496607A (en) * 2023-11-07 2024-02-02 武汉无线飞翔科技有限公司 ETC (electronic toll collection) -based intelligent parking lot management method and system

Also Published As

Publication number Publication date
CN111178291B (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN111178291B (en) Parking payment system and parking payment method
Panahi et al. Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications
Jain et al. Deep automatic license plate recognition system
Al-Ghaili et al. Vertical-edge-based car-license-plate detection method
CN110969160B (en) License plate image correction and recognition method and system based on deep learning
Comelli et al. Optical recognition of motor vehicle license plates
KR101756849B1 (en) Parking control and management system for on-street parking lot
Puranic et al. Vehicle number plate recognition system: a literature review and implementation using template matching
Yoon et al. Blob extraction based character segmentation method for automatic license plate recognition system
CN110197166B (en) Vehicle body loading state recognition device and method based on image recognition
Tian et al. A two-stage character segmentation method for Chinese license plate
Arora et al. Automatic number plate recognition system using optical character recognition
Devi et al. An Efficient Hybrid Technique for Automatic License Plate Recognitions
Nguwi et al. Number plate recognition in noisy image
CN109447060A (en) A kind of license plate intelligent identifying system and method for highway
CN112085018A (en) License plate recognition system based on neural network
KR101784764B1 (en) The highway toll payment method for electric vehicle
Prajapati et al. A Review Paper on Automatic Number Plate Recognition using Machine Learning: An In-Depth Analysis of Machine Learning Techniques in Automatic Number Plate Recognition: Opportunities and Limitations
He et al. Combining global and local features for detection of license plates in video
Mohammad et al. An Efficient Method for Vehicle theft and Parking rule Violators Detection using Automatic Number Plate Recognition
Fernandez et al. Raspberry Pi based ANPR for Smart Access
Nguyen et al. Real-time license plate localization based on a new scale and rotation invariant texture descriptor
Pu et al. A robust and real-time approach for license plate detection
Mani et al. An Efficient Method for License Plate Detection and Recognition using OCR
Mathews et al. Improved computer vision-based framework for electronic toll collection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant