CN113496563B - Intelligent gate management method, device and equipment - Google Patents

Intelligent gate management method, device and equipment

Info

Publication number: CN113496563B
Authority
CN
China
Prior art keywords: container, vehicle, image, target, determining
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010198327.XA
Other languages: Chinese (zh)
Other versions: CN113496563A
Inventors: 田野, 聂方正
Current Assignee: Hangzhou Hikvision Digital Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority: CN202010198327.XA
Publication of CN113496563A; application granted; publication of CN113496563B
Legal status: Active; anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides an intelligent gate management method, device and equipment, wherein the method comprises the following steps: acquiring a target container image and a first acquisition time of the target container image; acquiring a target vehicle image and a second acquisition time of the target vehicle image; if it is determined, according to the first acquisition time and the second acquisition time, that a container vehicle has entered the intelligent gate, determining a container identifier of the container vehicle according to the target container image and determining a license plate identifier of the container vehicle according to the target vehicle image; and if the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate. The technical solution of the application improves the recognition accuracy of the container identifier and the license plate identifier, and the recognition can be completed quickly and efficiently.

Description

Intelligent gate management method, device and equipment
Technical Field
The application relates to the technical field of intelligent transportation, and in particular to an intelligent gate management method, device and equipment.
Background
In recent years, with the popularization of EDI (Electronic Data Interchange) technology and the maturation of container-number recognition OCR (Optical Character Recognition) technology and vehicle-number recognition RFID (Radio Frequency Identification) technology, smart gates have become more and more widely used. A smart gate simplifies infrastructure setup and improves the recognition accuracy of container vehicles, thereby improving gate efficiency and greatly saving cost.
In the related art, a smart gate may deploy a plurality of correlation devices (e.g., infrared correlation, i.e., through-beam, devices); for example, a smart gate may deploy 4 pairs of infrared correlation devices. The infrared correlation devices use infrared through-beam technology to detect the arrival time and departure time of a container vehicle, thereby realizing identification of the container vehicle.
However, this approach requires deploying multiple pairs of infrared correlation devices: the setup and testing process of the infrared correlation devices is complex, environment setup and testing is time-consuming, and later fault troubleshooting and maintenance are complicated.
Disclosure of Invention
The application provides a management method of an intelligent gate, which comprises the following steps:
acquiring a target container image and a first acquisition time of the target container image;
acquiring a target vehicle image and a second acquisition time of the target vehicle image;
if it is determined, according to the first acquisition time and the second acquisition time, that a container vehicle has entered the intelligent gate, determining a container identifier of the container vehicle according to the target container image, and determining a license plate identifier of the container vehicle according to the target vehicle image;
and if the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate.
The application provides an intelligent gate management device, the device comprising:
an acquisition module, configured to acquire a target container image and a first acquisition time of the target container image, and to acquire a target vehicle image and a second acquisition time of the target vehicle image;
a determining module, configured to determine a container identifier of the container vehicle according to the target container image and determine a license plate identifier of the container vehicle according to the target vehicle image, if it is determined according to the first acquisition time and the second acquisition time that a container vehicle has entered the intelligent gate;
and a control module, configured to allow the container vehicle to pass through the intelligent gate if the container vehicle is determined to be legal according to the container identifier and the license plate identifier.
The application provides a terminal device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to perform the steps of:
acquiring a target container image and a first acquisition time of the target container image;
acquiring a target vehicle image and a second acquisition time of the target vehicle image;
if it is determined, according to the first acquisition time and the second acquisition time, that a container vehicle has entered the intelligent gate, determining a container identifier of the container vehicle according to the target container image, and determining a license plate identifier of the container vehicle according to the target vehicle image;
and if the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate.
According to the above technical solution, in the embodiments of the present application no infrared correlation devices need to be deployed at the intelligent gate: whether a container vehicle has entered the intelligent gate can be determined from the first acquisition time of the target container image and the second acquisition time of the target vehicle image, so that identification of the container vehicle is achieved and recognition accuracy is improved, while avoiding problems such as a complex setup and testing process, time-consuming environment setup and testing, and complicated later fault troubleshooting and maintenance. The container identifier of the container vehicle can be determined from the target container image and the license plate identifier from the target vehicle image, which improves the recognition accuracy of both identifiers, allows identification to be completed quickly and efficiently, and relieves congestion. With this approach, the recognition accuracy for license plate identification, container identification, box type and the like reaches more than 98%. The scheme is simple to erect, and the capital cost is reduced by about two thirds compared with the existing infrared correlation scheme. The stability and environmental adaptability of the scheme (with respect to limits on site size, flatness, wind resistance and the like) are correspondingly improved, faults are easy to locate, the failure rate is low, and construction and maintenance costs are low.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments of the present application or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a schematic diagram of a smart gate management system in one embodiment of the present application;
FIG. 2 is a flow chart of a method of intelligent gate management in one embodiment of the present application;
FIG. 3 is a flow chart of a method of intelligent gate management in another embodiment of the present application;
FIG. 4 is a block diagram of a smart gate management device in one embodiment of the present application;
FIG. 5 is a block diagram of a terminal device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
Before describing the technical scheme of the present application, concepts related to the present application are described:
neural network: machine learning is a way to implement artificial intelligence to study how computers simulate or implement learning behavior of humans to obtain new knowledge or skills, reorganizing existing knowledge structures to continuously improve their own performance. Deep learning belongs to a subclass of machine learning, while neural networks are implementations of deep learning. The neural network may include, but is not limited to: convolutional neural networks (abbreviated as CNN), cyclic neural networks (abbreviated as RNN), fully connected networks, and the like. Structural elements of the neural network may include, but are not limited to: the convolutional layer (Conv), pooling layer (Pool), excitation layer, full-link layer (FC), etc., are not limited thereto.
In the convolutional layer, image features are enhanced by performing a convolution operation on the image with a convolution kernel. The convolutional layer performs the convolution operation over a spatial range using a convolution kernel, which may be a matrix of size m×n; the input of the convolutional layer is convolved with the convolution kernel to obtain the output of the convolutional layer. The convolution operation is essentially a filtering process in which the pixel value f(x, y) at a point (x, y) on the image is convolved with the convolution kernel w(x, y). For example, given a 4×4 convolution kernel containing 16 values whose sizes can be configured as desired, sliding over the image in steps of the 4×4 window yields a number of 4×4 sliding windows; convolving the 4×4 kernel with each sliding window produces a set of convolution features, which are the output of the convolutional layer and are provided to the pooling layer.
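For illustration only, the following Python sketch (using NumPy) shows the sliding-window convolution described above; the 8×8 test image, the averaging kernel, and the function name are assumptions introduced here, not values from this description.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image (stride 1, no padding) and compute one
    weighted sum per window, producing the convolution features."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for y in range(out_h):
        for x in range(out_w):
            window = image[y:y + kh, x:x + kw]   # 4x4 sliding window
            out[y, x] = np.sum(window * kernel)  # one convolution feature
    return out

# Arbitrary example inputs: an 8x8 "image" and a 4x4 kernel with 16 configurable values.
image = np.arange(64, dtype=np.float32).reshape(8, 8)
kernel = np.ones((4, 4), dtype=np.float32) / 16.0  # simple averaging kernel
features = conv2d_valid(image, kernel)             # 5x5 feature map fed to the pooling layer
print(features.shape)  # (5, 5)
```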
The pooling layer is essentially a down-sampling process: by taking the maximum value, the minimum value, the average value, or the like over the convolution features (i.e., the output of the convolutional layer), the amount of computation can be reduced while feature invariance is maintained. In the pooling layer, the image can be sub-sampled by exploiting the local correlation of the image, which reduces the amount of data to be processed while retaining the useful information in the image.
In the excitation layer, the features output by the pooling layer can be mapped with an activation function (e.g., a nonlinear function) to introduce nonlinearity, so that the neural network enhances its expressive power through nonlinear combinations. The activation function of the excitation layer may include, but is not limited to, the ReLU (Rectified Linear Unit) function. Taking the ReLU function as an example: among all the features output by the pooling layer, the ReLU function sets features less than 0 to 0 and leaves features greater than 0 unchanged.
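Similarly, a minimal NumPy sketch of the pooling and excitation (ReLU) operations described above, with arbitrary example values; the 2×2 window size and the example feature map are assumptions for illustration.

```python
import numpy as np

def max_pool2d(features: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling: keep the strongest response per window,
    reducing the amount of data while preserving feature invariance."""
    h, w = features.shape
    h, w = h - h % size, w - w % size  # drop rows/cols that do not fill a window
    pooled = features[:h, :w].reshape(h // size, size, w // size, size)
    return pooled.max(axis=(1, 3))

def relu(features: np.ndarray) -> np.ndarray:
    """ReLU activation: values below 0 become 0, values above 0 are kept."""
    return np.maximum(features, 0.0)

# Arbitrary 4x4 feature map standing in for the convolution-layer output.
conv_out = np.array([[-1.0,  2.0,  0.5, -3.0],
                     [ 4.0, -2.0,  1.0,  0.0],
                     [-0.5,  3.0, -1.0,  2.5],
                     [ 1.5, -4.0,  0.5,  1.0]])
print(relu(max_pool2d(conv_out)))  # 2x2 map after pooling and activation
```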
The fully connected layer fully connects all the features input to it, thereby obtaining a feature vector, which may include a plurality of features.
In practical applications, the neural network may be constructed by combining one or more convolution layers, one or more pooling layers, one or more excitation layers, and one or more fully-connected layers according to different requirements.
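As a hedged illustration of how such layers might be combined, the following PyTorch sketch stacks convolution, excitation, pooling, and fully connected layers; the layer sizes, channel counts, and the number of output classes are assumptions and are not specified by this description.

```python
import torch
import torch.nn as nn

class SmallRecognitionNet(nn.Module):
    """Illustrative stack of Conv / ReLU / Pool / FC layers; all sizes are assumed."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # excitation layer
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # feature vector
            nn.Linear(32 * 16 * 16, num_classes),        # fully connected layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

net = SmallRecognitionNet()
dummy = torch.randn(1, 3, 64, 64)  # one 64x64 RGB image (assumed input size)
print(net(dummy).shape)            # torch.Size([1, 10])
```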
Training process of the neural network: training data may be used to train the various neural network parameters within the network, such as convolutional layer parameters (e.g., convolution kernel parameters), pooling layer parameters, excitation layer parameters, fully connected layer parameters, and the like, which are not limited herein; all neural network parameters within the network may be trained. By training these parameters, the neural network can fit the mapping relationship between input and output.
Use of the neural network: input data can be provided to the neural network, which processes the input data (for example, using its trained parameters) to obtain output data; the input data and the output data satisfy the input-output mapping fitted by the neural network.
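A minimal, self-contained sketch of this training-then-use procedure is shown below; the network structure, loss function, optimizer, and placeholder data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Assumed toy network; the real gate network structure is not specified here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),               # 10 output classes, assumed
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: fit the mapping between input and output using labeled training images.
images = torch.randn(8, 3, 64, 64)            # placeholder training batch
labels = torch.randint(0, 10, (8,))           # placeholder labels
for _ in range(5):                            # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Use: feed input data to the trained network; input and output satisfy the fitted mapping.
with torch.no_grad():
    probabilities = torch.softmax(model(torch.randn(1, 3, 64, 64)), dim=1)
    confidence, predicted = probabilities.max(dim=1)
```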
Smart gate: by simplifying the infrastructure to be erected and adopting multi-frame recognition and a deep learning algorithm, the recognition accuracy of container vehicles is improved, gate efficiency can be improved, and cost is greatly saved.
In the related art, a smart gate may deploy a plurality of pairs of infrared correlation devices; for example, 4 pairs. The infrared correlation devices can detect the arrival time and departure time of a container vehicle using infrared through-beam technology, thereby realizing identification of the container vehicle. However, this approach requires deploying multiple pairs of infrared correlation devices: their setup and testing process is complex, environment setup and testing is time-consuming, and later fault troubleshooting and maintenance are complicated.
In view of the above, in the embodiments of the present application there is no need to deploy infrared correlation devices at the smart gate; whether a container vehicle has entered the smart gate can still be determined, realizing identification of the container vehicle. Because no infrared correlation devices need to be deployed at the smart gate, problems such as a complex setup and testing process, time-consuming environment setup and testing, and complicated later fault troubleshooting and maintenance can be avoided.
The following describes the technical solution of the embodiment of the present application in conjunction with a specific application scenario.
Referring to fig. 1, a schematic diagram of an intelligent gate management system is shown. The management system may include, but is not limited to, an intelligent detection unit, a license plate detection unit, and a terminal device. The intelligent detection unit may also be referred to as the intelligent gate intelligence unit and may consist of a capture camera and a fill light. The license plate detection unit may also be referred to as the license plate recognition capture unit and may likewise consist of a capture camera and a fill light.
For example, the above-mentioned capture camera is a device with an image acquisition function and may have a built-in deep learning algorithm; how the deep learning algorithm is used for processing is described in the subsequent embodiments.
For example, a container vehicle may include a vehicle and a container carried on the vehicle. The unique identification of the container may be referred to as the container identifier, i.e., the container identifier is used to distinguish different containers. The unique identification of the vehicle may be referred to as the license plate identifier, i.e., the license plate identifier is used to distinguish different vehicles.
In this embodiment, the intelligent detection unit is used to collect images of the container; for convenience of distinction, the images collected by the intelligent detection unit are referred to herein as container images, which generally include the container identifier. The license plate detection unit is used to collect images of the vehicle; for convenience of distinction, the images collected by the license plate detection unit are referred to as vehicle images, which generally include the license plate identifier.
When the intelligent detection unit collects container images, the capture camera collects the container images and the fill light provides supplementary lighting; when the license plate detection unit collects vehicle images, the capture camera collects the vehicle images and the fill light provides supplementary lighting. The image collection process is not limited herein.
For example, the number of the smart detection units may be at least two, in fig. 1, 4 smart detection units are taken as an example, and in practical application, the number of the smart detection units may be more or less.
In fig. 1, the 4 smart detection units are a smart detection unit a, a smart detection unit B, a smart detection unit C, and a smart detection unit D, respectively. The intelligent detection unit A is used for collecting the container image in the right direction of the container vehicle, the intelligent detection unit B is used for collecting the container image in the left direction of the container vehicle, the intelligent detection unit C is used for collecting the container image (also called as a front container image) in the front direction (namely the head direction) of the container vehicle, and the intelligent detection unit D is used for collecting the container image (also called as a rear container image) in the rear direction (namely the tail direction) of the container vehicle.
In practical application, the intelligent detection unit D can be deployed, and at least one intelligent detection unit of the intelligent detection unit A, the intelligent detection unit B and the intelligent detection unit C can be additionally deployed on the basis of deploying the intelligent detection unit D. For example, only the smart detection unit C and the smart detection unit D may be deployed, or only the smart detection unit a and the smart detection unit D may be deployed, or only the smart detection unit B and the smart detection unit D may be deployed, although the above manner is just a few examples, and is not limited thereto.
For convenience of description, in the subsequent process, the intelligent detection unit a, the intelligent detection unit B, the intelligent detection unit C, and the intelligent detection unit D are deployed as examples, and the implementation process of other cases is similar.
In summary, the intelligent detection unit a collects a container image of the right direction of the container vehicle, where the container image includes the container identifier of the right direction of the container vehicle. The intelligent detection unit B acquires a container image of the left direction of the container vehicle, wherein the container image comprises a container identifier of the left direction of the container vehicle. The intelligent detection unit C acquires a container image of the front side direction of the container vehicle, the container image including a container identification (the container identification may also be referred to as a front-box identification) of the front side direction of the container vehicle. The intelligent detection unit D acquires a container image of the rear side direction of the container vehicle, which includes a container identification (which may also be referred to as a post-box identification) of the rear side direction of the container vehicle.
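For reference, the correspondence between detection units, capture directions, and the identifiers they produce (as described above) can be summarized in a small lookup table; this is only an illustrative sketch of the deployment, not code from the patent.

```python
# Capture direction and identifier produced by each unit, as described above.
DETECTION_UNITS = {
    "A": {"kind": "container", "direction": "right", "identifier": "container id (right side)"},
    "B": {"kind": "container", "direction": "left",  "identifier": "container id (left side)"},
    "C": {"kind": "container", "direction": "front", "identifier": "front-box id"},
    "D": {"kind": "container", "direction": "rear",  "identifier": "post-box id"},
    "E": {"kind": "license_plate", "direction": "front", "identifier": "license plate id"},
}
```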
For example, the number of license plate detection units may be at least one, in fig. 1, taking 1 license plate detection unit as an example, and in practical application, the number of license plate detection units may be further increased.
In fig. 1, the license plate detection unit may be a license plate detection unit E for acquiring a vehicle image in a front side direction (i.e., a head direction) of the container vehicle, or the license plate detection unit E for acquiring a vehicle image in a rear side direction (i.e., a tail direction) of the container vehicle, or the license plate detection unit E for acquiring a vehicle image in a front side direction and a vehicle image in a rear side direction of the container vehicle.
The vehicle image for the front-side direction of the container vehicle may include the license plate identifier of the front-side direction, and the vehicle image for the rear-side direction may include the license plate identifier of the rear-side direction. For convenience of description, in the following embodiments the license plate detection unit E is described as collecting a vehicle image in the front-side direction of the container vehicle, the vehicle image including the license plate identifier of the front-side direction.
As shown in fig. 1, the intelligent detection unit C is directed toward the head of the container vehicle, and thus, can collect a container image in the front side direction of the container vehicle. The intelligent detection unit D is directed toward the rear of the container vehicle, and thus can collect the container image in the rear direction of the container vehicle.
The intelligent detection unit B is arranged on the left side of the gantry and the intelligent detection unit A on the right side, relative to the driving direction. The intelligent detection unit B faces the left side of the container vehicle and the intelligent detection unit A faces the right side, so that unit B can collect container images in the left-side direction of the container vehicle and unit A can collect container images in the right-side direction.
Referring to fig. 1, distances and heights between related entities are shown by way of example only, and may be arbitrarily configured in relation to the deployment of an actual scenario, without limitation.
For example, the height of the gantry may be any value, in fig. 1, 5.6 meters (m) being an example. The distance between the smart detection unit a and the smart detection unit B may be any value, for example 9 meters in fig. 1. The distance between the two uprights can be of any value, in fig. 1, for example 4-6 metres. X is the pole extension distance, X can be any value, Y is the distance from the pole to the tail parking line, Y can be any value, and in FIG. 1, taking the sum of X and Y as an example, 5 meters, the tail parking line is the position of the tail when the container vehicle is parked. The distance between the license plate detection unit E and the tail parking line may be any value, for example 11 meters in fig. 1. The height of the license plate detection unit E may be any value, and in fig. 1, 1.6 meters is taken as an example.
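The example layout values from fig. 1 can be collected into a simple configuration; as stated above, all of these values are examples only and may be configured arbitrarily for the actual site (X and Y are not individually fixed; only their sum of 5 meters is given as an example).

```python
# Example deployment parameters taken from fig. 1; every value may be any other value in practice.
GATE_LAYOUT = {
    "gantry_height_m": 5.6,
    "distance_unit_A_to_unit_B_m": 9.0,
    "distance_between_uprights_m": (4.0, 6.0),   # given as a range
    "pole_extension_plus_tail_line_m": 5.0,      # X + Y (individual X and Y unspecified)
    "plate_unit_E_to_tail_parking_line_m": 11.0,
    "plate_unit_E_height_m": 1.6,
}
```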
Based on the above application scenario, an embodiment of the present application provides a method for managing an intelligent gate, and referring to fig. 2, a flow chart of the method is shown, and the method is applied to a terminal device, where the method includes:
step 201, a target container image and a first acquisition time of the target container image are acquired.
For example, the terminal device may acquire the first video stream from the intelligent detection unit, acquire a plurality of frames of container images (i.e., the plurality of frames of container images collected by the intelligent detection unit) from the first video stream, and determine the target container image from the plurality of frames of container images. Further, assuming that the terminal device determines the target container image at time a, the first acquisition time of the target container image may be time a.
For example, when determining the target container image from the multi-frame container image, the terminal device may determine the confidence level of the container identifier in each frame of container image through the neural network, and determine the target container image from the multi-frame container image according to the confidence level of the container identifier in each frame of container image. For example, a container image with the highest confidence of the container identification is determined as the target container image. Alternatively, the confidence of the container identifications in part of the container images (such as the odd frame images in all the container images or the even frame images in all the container images) is determined through the neural network, and the target container image is determined from the multi-frame container images according to the confidence of the container identifications. For example, a container image with the highest confidence of the container identification is determined as the target container image.
For example, based on the confidence level of each container identifier, if the maximum confidence level is greater than a preset threshold, the terminal device may determine the container image with the maximum confidence level of the container identifier as the target container image. Or if the maximum confidence coefficient is not greater than the preset threshold value, the terminal equipment determines that all the container images are not target container images, namely the container identification cannot be obtained from the container images.
The preset threshold value can be configured according to experience, and when the confidence coefficient is larger than the preset threshold value, the container identification in the container image is clear, and the accurate container identification can be obtained from the container image.
Of course, the above manner is merely an example, and is not limited thereto, as long as the target container image can be determined from among the plurality of frame container images, for example, the nth frame container image in the plurality of frame container images may be directly determined as the target container image, and N is an arbitrary value, such as half of the total number of container images.
In one possible implementation manner, a deep learning algorithm may be preconfigured in the terminal device, and the terminal device may determine the confidence level of the container identifier in each frame of container image through the deep learning algorithm, and determine the container image with the maximum confidence level of the container identifier as the target container image.
As a specific implementation manner of the deep learning algorithm, the terminal device may input each frame of container image to the neural network, so that the neural network determines the confidence level of the container identifier in each frame of container image, and outputs the confidence level of the container identifier in each frame of container image.
Based on the above, the terminal device can learn the confidence of the container identification in each frame of container image, and determine the container image with the highest confidence of the container identification as the target container image.
Of course, the implementation of determining the target container image based on the neural network is only one example of the application, and other manners of determining the target container image may be used, which is not limited thereto. For convenience of description, a description will be given hereinafter taking a case of determining a target container image based on a neural network as an example.
In the training process of the neural network, a large number of training images can be acquired, and the acquisition process of the training images is not limited. For each training image, the container identification and the confidence of the container identification may be included, e.g., the container identification is abc, and the confidence of the container identification is 100%. After the training images are input into the neural network, the training images can be used for training the parameters of the neural network in the neural network, and the training process is not limited. In the parameter training process, feature vectors of training images can be extracted, and a mapping relation among the feature vectors, the container identifications and the confidence degrees of the container identifications is established.
The training process of the neural network can be completed by the terminal equipment, namely the terminal equipment executes the training process of the neural network to obtain the neural network after training. Alternatively, the training process of the neural network may be completed by the back-end server, that is, the back-end server performs the training process of the neural network to obtain the neural network that has completed training, and deploys the neural network that has completed training to each terminal device (that is, the terminal devices of different smart gates), so that the terminal devices obtain the neural network that has completed training.
In the use of the neural network, after the terminal device obtains the multi-frame container images, the container images can be input into the neural network. The neural network processes each frame of container image using its neural network parameters; in the processing, the feature vector of each frame of container image can be extracted, and the container identifier corresponding to the feature vector and the confidence of the container identifier can be obtained; the processing process is not limited herein. In summary, the neural network may obtain the confidence of the container identifier in each frame of container image and output the confidence of the container identifier in each frame of container image.
In summary, the terminal device may obtain the confidence level of the container identifier in each frame of container image from the neural network, and determine the container image with the highest confidence level of the container identifier as the target container image.
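A minimal sketch of this frame-selection rule (highest container-identifier confidence, accepted only if it exceeds the preset threshold) is shown below; the neural network is represented by a placeholder callable, and the threshold value of 0.9 is an assumption, since the description only states that the threshold is configured empirically.

```python
from typing import Callable, Optional, Sequence, Tuple

def select_target_image(
    frames: Sequence,                                  # decoded container (or vehicle) images
    recognize: Callable[[object], Tuple[str, float]],  # e.g. the trained neural network
    confidence_threshold: float = 0.9,                 # assumed value; configured empirically
) -> Optional[Tuple[int, str, float]]:
    """Return (frame index, identifier, confidence) for the frame whose identifier
    confidence is highest; return None if even the best confidence does not exceed
    the preset threshold (i.e. no usable target image in this video stream)."""
    best: Optional[Tuple[int, str, float]] = None
    for index, frame in enumerate(frames):
        identifier, confidence = recognize(frame)
        if best is None or confidence > best[2]:
            best = (index, identifier, confidence)
    if best is None or best[2] <= confidence_threshold:
        return None
    return best
```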
The implementation process of step 201 is described below in connection with a specific application scenario.
Referring to fig. 1, during the running of the container vehicle, the intelligent detection unit C may periodically collect container images of the front-side direction of the container vehicle, that is, collect a plurality of frames of container images, and transmit a first video stream containing these container images to the terminal device. After obtaining the first video stream, the terminal device can obtain the plurality of frames of container images from it, determine the confidence of the container identifier in each frame of container image through the neural network, determine the target container image c1 from the plurality of frames of container images according to the confidence (the target container image c1 being a container image of the front-side direction of the container vehicle collected by the intelligent detection unit C), and determine the first acquisition time c2 of the target container image c1.
For example, the trigger condition of the intelligent detection unit C may be that the container identifier of the front-side direction of the container vehicle is located at a trigger line; the trigger line represents a specified position in the picture and may be configured empirically.
In the process of periodically collecting container images, the intelligent detection unit C determines, for each frame of container image, whether the container identifier in that frame is located at the trigger line. If not, the frame does not need to be transmitted to the terminal device; if so, M consecutive frames of container images, starting from that frame, are transmitted to the terminal device, that is, the first video stream transmitted to the terminal device by the intelligent detection unit C includes the M frames of container images. The value of M may be configured empirically and is not limited herein.
When the container identification in the front side direction of the container vehicle is located on the trigger line, the container identification in the container image collected by the intelligent detection unit C is clear, the container identification can be accurately identified from the container image, and the identification accuracy of the container identification is high.
In order to determine whether the container identifier in the container image is located at the trigger line, a deep learning algorithm may be pre-configured in the intelligent detection unit C, and the intelligent detection unit C determines whether the container identifier in the container image is located at the trigger line through the deep learning algorithm. As an implementation of the deep learning algorithm, the intelligent detection unit C may input the container image to the neural network, so that the neural network determines whether the container identification in the container image is already located at the trigger line. Of course, the neural network-based implementation is merely an example, and other ways of determining whether a container identification in a container image is already located at a trigger line may be used, without limitation. For convenience of description, a neural network-based implementation will be exemplified later.
In the training process of the neural network, a large number of training images can be acquired, and for each positive sample, the container identification in the training image is located on the trigger line, and the label value is a first value. For each negative sample, the container identification in the training image is not located at the trigger line and the tag value is a second value. After the training images are input into the neural network, the training images can be utilized to train the parameters of each neural network in the neural network, and in the parameter training process, the feature vectors of the training images can be extracted, and the mapping relation between the feature vectors and the label values is established. The tag value may be a first value indicating that the container identification is located in the trigger line or a second value indicating that the container identification is not located in the trigger line.
In the using process of the neural network, after the intelligent detection unit C obtains the container image, the container image can be input into the neural network. The neural network processes the container image by utilizing the parameters of the neural network, can extract the feature vector of the container image in the processing process, and obtains the label value corresponding to the feature vector by utilizing the mapping relation. If the tag value is the first value, determining that the container identifier is located in the trigger line, and if the tag value is the second value, determining that the container identifier is not located in the trigger line.
In summary, the neural network may determine whether the container identifier in the container image is located on the trigger line, and output the determination result. Further, the intelligent detection unit C may acquire the determination result from the neural network, and then learn whether the container identifier in the container image is located on the trigger line.
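For illustration, a possible sketch of the capture-side behavior described above: each periodically collected frame is checked against the trigger line (here a placeholder callable standing in for the neural network), and once triggered, M consecutive frames are collected for transmission to the terminal device. The value of M and the function names are assumptions.

```python
from typing import Callable, Iterable, List

def frames_to_transmit(
    frames: Iterable,                 # periodically captured container images
    id_on_trigger_line: Callable,     # e.g. neural network: frame -> bool
    m: int = 25,                      # M, configured empirically (assumed value)
) -> List:
    """Once a frame shows the container identifier on the trigger line, collect that
    frame and the following M-1 frames; these form the video stream sent to the terminal."""
    selected: List = []
    triggered = False
    for frame in frames:
        if not triggered and id_on_trigger_line(frame):
            triggered = True
        if triggered:
            selected.append(frame)
            if len(selected) == m:
                break
    return selected
```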
Referring to fig. 1, the implementation manners of the intelligent detection unit A, the intelligent detection unit B and the intelligent detection unit D are the same as that of the intelligent detection unit C, and are not repeated here. The terminal device may determine the target container image a1 from the multi-frame container images acquired by the intelligent detection unit A and determine the first acquisition time a2 of the target container image a1. The terminal device may determine the target container image b1 from the multi-frame container images acquired by the intelligent detection unit B and determine the first acquisition time b2 of the target container image b1. The terminal device may determine the target container image d1 from the multi-frame container images acquired by the intelligent detection unit D and determine the first acquisition time d2 of the target container image d1.
Illustratively, the trigger line configured for the intelligent detection unit C is near the middle of the picture of the intelligent detection unit C; this trigger line may be a horizontal line and may be configured empirically. The trigger line configured for the intelligent detection unit D is near the middle of the picture of the intelligent detection unit D; it may likewise be a horizontal line and may be configured empirically. In addition, the trigger line configured for the intelligent detection unit A is near the right side of the picture of the intelligent detection unit A (e.g., at the right third or the middle position); this trigger line may be a vertical line indicating a position where the container identifier is normally free from distortion, and may be configured empirically. Further, the trigger line configured for the intelligent detection unit B is near the left side of the picture of the intelligent detection unit B (e.g., at the left third or the middle position); this trigger line may also be a vertical line indicating a position where the container identifier is normally free from distortion, and may be configured empirically.
Step 202, acquiring a target vehicle image and a second acquisition time of the target vehicle image.
The terminal device may obtain a second video stream from the license plate detection unit, and obtain a plurality of frames of vehicle images (i.e., a plurality of frames of vehicle images collected by the license plate detection unit) according to the second video stream, and the terminal device may determine the target vehicle image from the plurality of frames of vehicle images. Further, assuming that the terminal device determines the target vehicle image at the time B, the second acquisition time of the target vehicle image may be the time B.
For example, when determining the target vehicle image from the multiple frames of vehicle images, the terminal device may determine the confidence level of the license plate identifier in each frame of vehicle image through the neural network; and determining the target vehicle image from the multi-frame vehicle image according to the confidence coefficient of the license plate identification in each frame of vehicle image. For example, a vehicle image with the highest confidence of the license plate identification is determined as the target vehicle image. Or determining the confidence of license plate identifications in partial vehicle images (such as odd frame images in all vehicle images or even frame images in all vehicle images) through a neural network, and determining the target vehicle image from multiple frames of vehicle images according to the confidence of the license plate identifications. For example, a vehicle image with the highest confidence of the license plate identification is determined as the target vehicle image. Of course, the above manner is merely an example, and is not limited thereto, as long as the target vehicle image can be determined from among the plurality of frame vehicle images, for example, the nth frame vehicle image among the plurality of frame vehicle images is directly determined as the target vehicle image, and N is an arbitrary value, such as half of the total number of vehicle images.
In one possible implementation manner, a deep learning algorithm may be preconfigured in the terminal device, and the terminal device may determine the confidence level of the license plate identifier in each frame of vehicle image through the deep learning algorithm, and determine the vehicle image with the largest confidence level of the license plate identifier as the target vehicle image. The neural network is used as an implementation mode of the deep learning algorithm, and the terminal equipment can input each frame of vehicle image into the neural network so that the neural network can determine the confidence coefficient of the license plate identification in each frame of vehicle image and output the confidence coefficient of the license plate identification in each frame of vehicle image. Based on the above, the terminal device can acquire the confidence coefficient of the license plate identifier in each frame of vehicle image, and determine the vehicle image with the maximum confidence coefficient of the license plate identifier as the target vehicle image.
The process of determining the confidence coefficient of the license plate identifier by the terminal device through the neural network can be referred to as determining the confidence coefficient of the container identifier by the terminal device through the neural network, and will not be described herein. By way of example, based on the neural network after training, the vehicle license plate, the vehicle body color, the vehicle model, the vehicle brand and the like can be accurately detected, and finally the confidence of the license plate identification in the vehicle image is determined. For the training process of the neural network and the use process of the neural network, refer to step 201, which is not described herein.
The implementation process of step 202 is described below in connection with a specific application scenario.
Referring to fig. 1, during the running of the container vehicle, the license plate detection unit E periodically collects vehicle images, that is, collects a plurality of frames of vehicle images, and sends a second video stream containing these vehicle images to the terminal device. After obtaining the second video stream, the terminal device obtains the plurality of frames of vehicle images from it, determines the confidence of the license plate identifier in each frame of vehicle image through the neural network, determines the target vehicle image from the plurality of frames of vehicle images according to the confidence, and determines the second acquisition time of the target vehicle image.
For example, the triggering mode of the license plate detection unit E may be that the license plate identifier in the front side direction of the container vehicle is located on a triggering line, and the triggering line represents a designated position in the screen and may be configured empirically.
In the process of periodically collecting vehicle images, the license plate detection unit E judges whether license plate identifications in each frame of collected vehicle images are located in a trigger line or not. If not, the frame of vehicle image does not need to be transmitted to the terminal device, and if so, the continuous L frames of vehicle images are transmitted to the terminal device from the frame of vehicle image, that is, the license plate detection unit E transmits the second video stream to the terminal device, including the L frames of vehicle images. The value of L may be empirically configured, and is not limited thereto.
When the license plate identifier in the front side direction of the container vehicle is located on the trigger line, the license plate identifier in the vehicle image acquired by the license plate detection unit E is clear, the terminal equipment can accurately identify the license plate identifier from the vehicle image, and the identification accuracy of the license plate identifier is high.
In order to determine whether the license plate identifier in the vehicle image is located in the trigger line, a deep learning algorithm may be configured in the license plate detection unit E, and whether the license plate identifier in the vehicle image is located in the trigger line may be determined through the deep learning algorithm. For example, the license plate detection unit E inputs the vehicle image to the neural network, and the neural network determines whether the license plate identifier in the vehicle image is located on the trigger line, and the specific manner is referred to step 201, which is not described herein.
For example, the trigger line configured for the license plate detection unit E is near the middle of the picture of the license plate detection unit E; the trigger line may be a horizontal line and may be configured empirically.
Step 203: if it is determined, according to the first acquisition time of the target container image and the second acquisition time of the target vehicle image, that a container vehicle has entered the intelligent gate, determine the container identifier of the container vehicle according to the target container image and determine the license plate identifier of the container vehicle according to the target vehicle image.
For example, the target container image may include a first target container image for a preset direction of the container vehicle, and the preset direction may include at least one of the front-side direction, the left-side direction, and the right-side direction. On this basis, the target cut-off time of container vehicle detection can be determined from the first acquisition time of the first target container image and the second acquisition time of the target vehicle image. If a second target container image is acquired before the target cut-off time, it is determined that the container vehicle has entered the intelligent gate; the second target container image is a target container image for the rear-side direction of the container vehicle.
Illustratively, the target cut-off time refers to the cut-off time by which the second target container image in the rear-side direction of the container vehicle should be acquired; that is, the second target container image should be acquired before the target cut-off time. Based on this, if a second target container image is acquired before the target cut-off time, it indicates that the container vehicle has entered the intelligent gate. If no second target container image has been acquired by the target cut-off time, it indicates that the container vehicle has not entered the intelligent gate.
For example, referring to the above embodiment, the terminal device may determine the target container image a1, the target container image b1, the target container image c1, the target container image d1, and the target vehicle image.
The first target container image may include a target container image a1, a target container image b1, and a target container image c1, and the second target container image may include a target container image d1.
Based on this, the terminal device determines the target cut-off time of container vehicle detection from the first acquisition time a2 of the target container image a1, the first acquisition time b2 of the target container image b1, the first acquisition time c2 of the target container image c1, and the second acquisition time of the target vehicle image. If the target container image d1 is acquired before the target cut-off time, it is determined that the container vehicle has entered the smart gate.
For example, if the terminal device only acquires some of the target container images a1, b1 and c1, the target cut-off time may be determined using the first acquisition times of the acquired target container images and the second acquisition time of the target vehicle image. For example, if only the target container image a1 and the target container image b1 are acquired and the target container image c1 is not acquired, the terminal device determines the target cut-off time from the first acquisition time a2 of the target container image a1, the first acquisition time b2 of the target container image b1, and the second acquisition time of the target vehicle image.
For convenience of description, in the following embodiments, description will be given taking an example in which the terminal device can acquire the target container image a1, the target container image b1, and the target container image c 1.
In one possible implementation, the target cut-off time may be determined in the following manners:
Mode 1: determine a delay duration according to the first acquisition time of the first target container image, and determine the target cut-off time of container vehicle detection according to the delay duration and the second acquisition time.
For example, the terminal device determines the delay duration according to the first acquisition time a2 of the target container image a1, the first acquisition time b2 of the target container image b1, and the first acquisition time c2 of the target container image c1, and determines the target cut-off time according to the delay duration and the second acquisition time of the target vehicle image.
For example, a functional relationship between the delay duration and the first acquisition times may be preconfigured; the functional relationship may be configured empirically by a user or learned by a machine learning algorithm, which is not limited herein. For example, the functional relationship may be Y = A*X1 + B*X2 + C*X3, which is merely an example and is not limiting, as long as the delay duration is related to the first acquisition times.
Y represents the delay duration, X1 represents the first acquisition time of the target container image a1 and A its weight value, X2 represents the first acquisition time of the target container image b1 and B its weight value, and X3 represents the first acquisition time of the target container image c1 and C its weight value. The weight values A, B and C may be configured empirically by a user or learned by a machine learning algorithm, without limitation, as long as they are known values.
In summary, the first acquisition times a2, b2 and c2 may be substituted for X1, X2 and X3 in the above functional relationship to obtain the delay duration Y. Of course, when the functional relationship changes, the delay duration is likewise determined from the first acquisition times a2, b2 and c2 according to the specific functional expression, which is not repeated here.
After obtaining the delay duration, the terminal device may determine the target cut-off time according to the delay duration and the second acquisition time; for example, the target cut-off time is the sum of the delay duration and the second acquisition time.
Illustratively, the delay duration may be understood as follows: counting from the second acquisition time at which the target vehicle image is acquired, the second target container image in the rear-side direction of the container vehicle should be acquired within the delay duration. Combined with the definition of the target cut-off time (i.e., the cut-off time by which the second target container image should be acquired), the sum of the delay duration and the second acquisition time is the target cut-off time.
Based on this, counting from the second acquisition time, if the second target container image is acquired within the delay duration, it indicates that the container vehicle has entered the intelligent gate; if the second target container image has not been acquired after the delay duration has elapsed, the container vehicle has not entered the intelligent gate.
And 2, determining a time delay time according to the first acquisition time and the second acquisition time of the first target container image, and determining the target interception time of container vehicle detection according to the time delay time and the second acquisition time.
For example, the terminal device determines a delay time according to the first acquisition time a2 of the target container image a1, the first acquisition time b2 of the target container image b1, the first acquisition time c2 and the second acquisition time of the target container image c1, and determines a target cut-off time according to the delay time and the second acquisition time.
For example, a functional relationship between the delay time and the first acquisition time and the second acquisition time may be preconfigured, and the functional relationship may be empirically configured by a user or may be learned through a machine learning algorithm, which is not limited. For example, an example of a functional relationship may be y=a×1+b×2+c×3+d×x4.
Y represents a time delay period, X1 represents a first acquisition time of the target container image a1, A represents a weight value of X1, X2 represents a first acquisition time of the target container image B1, B represents a weight value of X2, X3 represents a first acquisition time of the target container image C1, C represents a weight value of X3, X4 represents a second acquisition time of the target vehicle image, and D represents a weight value of X4. The weight A, the weight B, the weight C and the weight D can be configured according to experience by a user or can be learned by a machine learning algorithm.
In summary, the first acquisition time a2, the first acquisition time b2, the first acquisition time c2 and the second acquisition time may be substituted for X1, X2, X3 and X4 in the above functional relationship to obtain the delay duration Y.
After obtaining the delay duration, the terminal device may determine a target interception time according to the delay duration and the second acquisition time; for example, the target interception time is the sum of the delay duration and the second acquisition time.
Of course, the above-described modes 1 and 2 are merely examples, and are not limited thereto, as long as the target cut-off time can be determined from the first acquisition time and the second acquisition time of the first target container image.
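As a concrete illustration of modes 1 and 2, the sketch below computes the delay duration as a weighted sum and the target cut-off time as the sum of the delay duration and the second acquisition time. It assumes acquisition times expressed in seconds relative to a common reference, and the weight values shown are illustrative placeholders rather than values taken from this description.

```python
# Minimal sketch of modes 1 and 2; weights A, B, C, D are assumed to be known
# values (configured empirically or learned), the defaults here are illustrative only.

def delay_duration_mode1(x1, x2, x3, a=0.2, b=0.2, c=0.2):
    """Mode 1: Y = A*X1 + B*X2 + C*X3, using only the first acquisition times."""
    return a * x1 + b * x2 + c * x3


def delay_duration_mode2(x1, x2, x3, x4, a=0.2, b=0.2, c=0.2, d=0.2):
    """Mode 2: Y = A*X1 + B*X2 + C*X3 + D*X4, also using the second acquisition time."""
    return a * x1 + b * x2 + c * x3 + d * x4


def target_cutoff_time(delay_duration, second_acquisition_time):
    """Target cut-off time = second acquisition time + delay duration."""
    return second_acquisition_time + delay_duration
```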
In one possible implementation, based on the target cut-off time, there may be several cases:
case 1: if a second target container image is acquired before the target intercept time, determining that the container vehicle enters the intelligent gate. For example, the terminal device acquires a first target container image (e.g., a target container image a1, a target container image b1, a part or all of a target container image c 1) for the container vehicle a, and acquires a target vehicle image for the container vehicle a, determines a target cut-off time based on a first acquisition time and a second acquisition time of the first target container image, and if a second target container image (i.e., a target container image d 1) for the container vehicle a is acquired before the target cut-off time, the terminal device determines that the container vehicle a enters the smart gate.
Case 2: if a second target container image for the container vehicle is not acquired when the target cut-off time is reached, it is determined that the container vehicle does not enter the smart gate. For example, the terminal device acquires the first target container image and the target vehicle image for the container vehicle A and determines the target cut-off time based on the first acquisition time and the second acquisition time of the first target container image; if the second target container image for the container vehicle A has still not been acquired when the target cut-off time is reached, the terminal device determines that the container vehicle A does not enter the smart gate, discards the first target container image and the target vehicle image related to the container vehicle A, and ends the identification process for the container vehicle A.
Case 3: if, before the target cut-off time is reached, a second target container image for the container vehicle is not acquired but a first target container image for another container vehicle is acquired, it is determined that the container vehicle does not enter the smart gate. For example, the terminal device acquires the first target container image and the target vehicle image for the container vehicle A and determines the target cut-off time based on the first acquisition time and the second acquisition time of the first target container image; if, before the target cut-off time is reached, the second target container image for the container vehicle A has not been acquired but a first target container image for the container vehicle B has been acquired, the terminal device determines that the container vehicle A does not enter the smart gate, discards the first target container image and the target vehicle image related to the container vehicle A, and ends the identification process for the container vehicle A. Since a first target container image for the container vehicle B has been acquired, the identification process for the container vehicle B is started; the specific identification process is not described here again.
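The three cases can be summarized as a small decision function. The sketch below is illustrative only: the event timestamps are assumed to be passed in as optional values, and the function returns None while no decision can be made yet.

```python
def check_entry(cutoff_time, rear_image_time=None,
                other_vehicle_front_image_time=None, current_time=None):
    """Decide whether the container vehicle entered the smart gate.

    Returns True (entered), False (did not enter) or None (keep waiting).
    """
    # Case 1: the rear-side (second) target container image arrived before the cut-off time.
    if rear_image_time is not None and rear_image_time < cutoff_time:
        return True
    # Case 3: a first target container image for another container vehicle arrived instead.
    if other_vehicle_front_image_time is not None:
        return False
    # Case 2: the cut-off time has been reached without the rear-side image.
    if current_time is not None and current_time >= cutoff_time:
        return False
    return None
```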
The following describes, in combination with a specific application scenario, the process of determining whether a container vehicle enters the intelligent gate according to the first acquisition time and the second acquisition time. In this application scenario, it is assumed that the acquisition order of the target container images and the target vehicle image is: the target container image c1 in the front-side direction of the vehicle, the target container image a1 in the right-side direction of the vehicle, the target vehicle image, the target container image b1 in the left-side direction of the vehicle, and the target container image d1 in the rear-side direction of the vehicle. Of course, this order is merely illustrative and not limiting. Based on this order, the implementation flow may be as shown in fig. 3:
Step 2031, it is determined whether or not the target container image c1 is acquired. If so, a first acquisition time c2 of the target container image c1 is recorded and step 2032 is executed. If not, step 2032 is performed directly.
Step 2032, it is determined whether or not the target container image a1 is acquired. If so, a first acquisition time a2 of the target container image a1 is recorded and step 2033 is executed. If not, step 2033 is executed directly.
Step 2033, it is determined whether or not the target vehicle image is acquired. If so, a second acquisition time of the target vehicle image is recorded and step 2034 is executed. If not, step 2034 is executed directly.
Step 2034, it is determined whether or not the target container image b1 is acquired. If so, a first acquisition time b2 of the target container image b1 is recorded and step 2035 is executed. If not, step 2035 is performed directly.
Step 2035, it is determined whether or not the target container image d1 is acquired. If so, the first acquisition time d2 of the target container image d1 is recorded, it is determined that the container vehicle enters the intelligent gate (i.e., the entry of the container vehicle into the intelligent gate is successfully detected), and the process ends. If not, step 2036 is executed.
Step 2036, it is determined whether or not a target container image c1 for another container vehicle is acquired. If so, it is determined that the container vehicle does not enter the intelligent gate (i.e., it is successfully detected that the container vehicle does not enter the intelligent gate), the process ends, and detection for the other container vehicle is performed. If not, step 2037 is executed.
Step 2037, it is determined whether the current time has reached the target cut-off time. If so, it is determined that the container vehicle does not enter the intelligent gate (i.e., it is successfully detected that the container vehicle does not enter the intelligent gate), and the process ends. If not, the flow returns to step 2035 and continues to determine whether the target container image d1 is acquired.
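The flow of steps 2031 to 2037 can be sketched as a small polling loop. The event queue, its drain() method and the compute_cutoff callback below are hypothetical helpers introduced only for illustration; they stand in for the intelligent detection unit, the license plate detection unit and the delay-duration calculation described above.

```python
import time

def detect_entry(event_queue, compute_cutoff, poll_interval=0.1):
    """Return True if the container vehicle enters the smart gate, else False.

    `event_queue.drain()` is assumed to yield (kind, timestamp) tuples in
    arrival order, with kinds such as "c1", "a1", "vehicle", "b1", "d1" for
    this vehicle and "other_c1" for another vehicle's front-side image.
    """
    times = {}
    while True:
        for kind, t in event_queue.drain():      # steps 2031-2035: record acquisition times
            times[kind] = t
        if "d1" in times:                        # step 2035: rear-side container image acquired
            return True
        if "other_c1" in times:                  # step 2036: another container vehicle appeared
            return False
        if "vehicle" in times:                   # step 2037: check the target cut-off time
            if time.time() >= compute_cutoff(times):
                return False
        time.sleep(poll_interval)
```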
In summary, the terminal device may determine whether a container vehicle enters the smart gate, and if the container vehicle enters the smart gate, the terminal device may determine a container identifier of the container vehicle according to the target container image, and determine a license plate identifier of the container vehicle according to the target vehicle image.
In one possible embodiment, to determine the container identification of the container vehicle, the following may be used: the terminal device inputs the target container image to the neural network, so that the neural network determines the container identification of the container vehicle according to the feature vector of the target container image. To determine the license plate identification of a container vehicle, the following means may be employed: the terminal equipment inputs the target vehicle image into the neural network so that the neural network can determine the license plate identification of the container vehicle according to the feature vector of the target vehicle image.
For example, a deep learning algorithm may be preconfigured in the terminal device, and the terminal device may determine the container identifier in the target container image through the deep learning algorithm, and determine the license plate identifier in the target vehicle image through the deep learning algorithm. For example, the terminal device may input the target container image to the neural network, such that the neural network determines the container identification in the target container image. The terminal device may input the target vehicle image to the neural network, so that the neural network determines the license plate identifier in the target vehicle image.
During the training of the neural network, a large number of training images are used to train the parameters of the neural network; this training process is not limited here. The trained neural network may embody the mapping relationship between feature vectors and container identifiers and the mapping relationship between feature vectors and license plate identifiers.
During the use of the neural network, after the terminal device obtains the target container image, the target container image can be input to the neural network. The neural network may extract a feature vector of the target container image, obtain the container identifier corresponding to the feature vector, and output the container identifier, so that the terminal device obtains the container identifier, that is, the container identifier of the container vehicle. After obtaining the target vehicle image, the terminal device may input the target vehicle image to the neural network. The neural network may extract the feature vector of the target vehicle image, obtain the license plate identifier corresponding to the feature vector, and output the license plate identifier, so that the terminal device obtains the license plate identifier, that is, the license plate identifier of the container vehicle.
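A minimal sketch of this recognition step is shown below. The description does not fix a particular network architecture, so the backbone/decoder split and the names used here are assumptions for illustration; any trained model that maps an image to a character sequence and a confidence would fit.

```python
import numpy as np

class RecognitionNetwork:
    """Illustrative wrapper around a trained neural network that maps an image
    to a text identifier (container identifier or license plate identifier)."""

    def __init__(self, backbone, decoder):
        self.backbone = backbone   # callable: image -> feature vector
        self.decoder = decoder     # callable: feature vector -> (text, confidence)

    def recognize(self, image: np.ndarray):
        features = self.backbone(image)           # feature vector of the input image
        text, confidence = self.decoder(features)  # identifier decoded from the feature vector
        return text, confidence

# Usage sketch: one network instance for container identifiers, one for license plates.
# container_id, _ = container_net.recognize(target_container_image)
# plate_id, _ = plate_net.recognize(target_vehicle_image)
```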
Step 204: if it is determined, according to the container identifier and the license plate identifier, that the container vehicle is a legal container vehicle, the container vehicle is allowed to pass through the intelligent gate. If it is determined, according to the container identifier and the license plate identifier, that the container vehicle is an illegal container vehicle, the container vehicle is prohibited from passing through the intelligent gate.
In one possible implementation, the terminal device may send the container identifier and the license plate identifier to a third-party platform so that the third-party platform performs validity detection on the container vehicle according to the container identifier and the license plate identifier. For example, if a data item corresponding to the container identifier and the license plate identifier exists in the database of the third-party platform (the data item includes information about the container vehicle, information about the vehicle owner, and the like; the content of the data item is not limited) and the container vehicle is recorded as legal in that data item, the third-party platform detects that the container vehicle is legal and sends a legal instruction to the terminal device. Alternatively, if a data item corresponding to the container identifier and the license plate identifier exists in the database of the third-party platform and the container vehicle is recorded as illegal in that data item, the third-party platform detects that the container vehicle is illegal and sends an illegal instruction to the terminal device. Alternatively, if no data item corresponding to the container identifier and the license plate identifier exists in the database of the third-party platform, the third-party platform detects that the container vehicle is illegal and sends an illegal instruction to the terminal device. Of course, the above are only examples of detection modes and are not limited thereto.
If the terminal device receives a legal instruction returned by the third-party platform, it determines that the container vehicle is a legal container vehicle and allows the container vehicle to pass through the intelligent gate, for example, by controlling the barrier arm to lift so that the vehicle can pass.
If the terminal device receives an illegal instruction returned by the third-party platform, it determines that the container vehicle is an illegal container vehicle and prohibits the container vehicle from passing through the intelligent gate, so that the vehicle can be handled by relevant personnel.
In another possible implementation, the terminal device itself may perform validity detection on the container vehicle according to the container identifier and the license plate identifier. For example, if a data item corresponding to the container identifier and the license plate identifier exists in the database and the container vehicle is recorded as legal in that data item, the container vehicle is detected as legal and is allowed to pass through the intelligent gate. Alternatively, if a data item corresponding to the container identifier and the license plate identifier exists in the database and the container vehicle is recorded as illegal in that data item, the container vehicle is detected as illegal and is prohibited from passing through the intelligent gate. Alternatively, if no data item corresponding to the container identifier and the license plate identifier exists in the database, the container vehicle is detected as illegal and is prohibited from passing through the intelligent gate. Of course, the above manner is merely an example and is not limited thereto.
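As an illustration of the locally performed validity check, the sketch below assumes a hypothetical lookup table keyed by (container identifier, license plate identifier) and a gate object with barrier-control methods; none of these names come from the description itself.

```python
def is_legal_vehicle(container_id: str, plate_id: str, database: dict) -> bool:
    """Return True only if a matching data item exists and is marked legal."""
    record = database.get((container_id, plate_id))
    if record is None:          # no matching data item: treated as illegal
        return False
    return bool(record.get("legal", False))

def handle_vehicle(container_id, plate_id, database, gate):
    if is_legal_vehicle(container_id, plate_id, database):
        gate.raise_barrier()    # legal container vehicle: allow passage
    else:
        gate.alert_staff()      # illegal container vehicle: prohibit passage, notify personnel
```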
The terminal device may also send information such as the first acquisition time and the second acquisition time to the third party platform, so that the third party platform records the first acquisition time and the second acquisition time. For example, the third party platform records the first acquisition time a2, the first acquisition time b2, the first acquisition time c2, the first acquisition time d2, the second acquisition time, and the like. The first acquisition time c2 may represent a time when the container vehicle enters the smart gate, and the first acquisition time d2 may represent a time when the container vehicle exits the smart gate.
In the above embodiment, the terminal device needs to determine the target container image from the multi-frame container images, which is described below. Assuming that the multi-frame container images comprise container image 1 to container image 10, the terminal device determines the confidence coefficient 1 of the container identification in the container image 1; if the confidence coefficient 1 is greater than a preset threshold, the container image 1 is taken as the optimal frame image, and if the confidence coefficient 1 is not greater than the preset threshold, the container image 1 is not taken as the optimal frame image. The terminal device may then determine the confidence coefficient 2 of the container identification in the container image 2. If the confidence coefficient 2 is not greater than the preset threshold, the processing of the container image 2 ends, the confidence coefficient 3 of the container identification in the container image 3 continues to be determined, and so on.
If the confidence coefficient 2 is larger than the preset threshold value, judging whether an optimal frame image exists currently, if the optimal frame image does not exist, taking the container image 2 as the optimal frame image, ending the processing of the container image 2, continuing to determine the confidence coefficient 3 of the container identification in the container image 3, and the like.
If the optimal frame image exists, the confidence coefficient 2 is compared with the confidence coefficient of the optimal frame image, if the confidence coefficient 2 is larger than the confidence coefficient of the optimal frame image, the container image 2 is updated to be the optimal frame image, the processing of the container image 2 is finished, the confidence coefficient 3 of the container identification in the container image 3 is continuously determined, and the like. If the confidence coefficient 2 is smaller than the confidence coefficient of the optimal frame image, the optimal frame image is kept unchanged, the processing of the container image 2 is ended, the confidence coefficient 3 of the container identification in the container image 3 is continuously determined, and the like.
Based on the above processing manner, after the container images 1 to 10 are processed, an optimal frame image can be obtained, and the terminal device can use the optimal frame image as the target container image. Similarly, the terminal device may also determine the target vehicle image from the multi-frame vehicle images in the above manner.
In summary, the terminal device can determine a target container image only when the confidence coefficient of at least one container image is greater than the preset threshold; if the confidence coefficients of all the container images are not greater than the preset threshold, no target container image exists among the container images, that is, the terminal device does not acquire a target container image.
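The optimal-frame selection therefore amounts to keeping, among the frames whose identifier confidence exceeds the preset threshold, the one with the highest confidence. The threshold value and the score callback in the sketch below are illustrative assumptions.

```python
def select_target_image(images, score, threshold=0.8):
    """Return the frame with the highest identifier confidence above the
    preset threshold, or None if no frame exceeds the threshold."""
    best_image, best_conf = None, threshold
    for image in images:
        conf = score(image)                      # confidence of the identifier in this frame
        if conf > best_conf:                     # first frame above the threshold, or better
            best_image, best_conf = image, conf  # than the current optimal frame image
    return best_image
```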
In the above embodiment, the terminal device may acquire the container identifier r1 from the target container image a1, the container identifier r2 from the target container image b1, the container identifier r3 from the target container image c1, and the container identifier r4 from the target container image d1. If the container identifier r1, the container identifier r2, the container identifier r3 and the container identifier r4 are all the same, the identification of the container identifier is accurate. If they are not all the same, the identification of the container identifier is inaccurate, or the container vehicle carries multiple container identifiers, that is, the container vehicle is an abnormal vehicle, and the user needs to be prompted to handle the exception.
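A simple consistency check over the identifiers recognized from the four target container images might look like the following sketch; the handling of missing results is an assumption, since the description only discusses the case where all four identifiers are available.

```python
def check_container_ids(ids):
    """ids: container identifiers recognized from target container images a1, b1, c1, d1."""
    recognized = [i for i in ids if i]     # ignore directions without a recognition result
    if len(set(recognized)) <= 1:
        return "consistent"                # identification is considered accurate
    return "abnormal"                      # differing identifiers: prompt the user to handle the exception
```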
According to the above technical solution, in the embodiments of the present application, no infrared correlation device needs to be deployed at the intelligent gate: whether a container vehicle enters the intelligent gate can be determined according to the first acquisition time of the target container image and the second acquisition time of the target vehicle image, so that recognition of the container vehicle is realized and the recognition accuracy is improved, while problems such as a complex construction and commissioning process, time-consuming environment setup and testing, and complicated later fault investigation and maintenance can be avoided. The container identifier of the container vehicle can be determined according to the target container image and the license plate identifier can be determined according to the target vehicle image, which improves the recognition accuracy of the container identifier and the license plate identifier, allows the recognition to be completed quickly and efficiently, and relieves congestion. This approach improves the recognition accuracy, with the recognition accuracy for license plates, container identifiers, container types and the like reaching more than 98%. The solution is simple to erect, and its capital cost is reduced by about two thirds compared with the existing infrared correlation solution. The stability and environmental adaptability of the solution (with respect to limitations such as site size, flatness and wind resistance) are correspondingly improved, faults are easy to locate, the failure rate is low, and the construction and maintenance costs are low.
Based on the same application concept as the above method, the embodiment of the present application further provides a device for managing an intelligent gate, as shown in fig. 4, which is a structure diagram of the device, where the device may include:
an acquisition module 41, configured to acquire a target container image and a first acquisition time of the target container image; and acquiring a target vehicle image and a second acquisition time of the target vehicle image;
a determining module 42, configured to determine, if it is determined that the container vehicle enters the smart gate according to the first acquisition time and the second acquisition time, a container identifier of the container vehicle according to the target container image, and determine a license plate identifier of the container vehicle according to the target vehicle image;
and the control module 43 is configured to allow the container vehicle to pass through the smart gate if it is determined that the container vehicle is a legal container vehicle according to the container identifier and the license plate identifier.
The target container image comprises a first target container image of a preset direction for the container vehicle, wherein the preset direction comprises at least one direction of a front side direction, a left side direction and a right side direction; the determining module 42 is specifically configured to, when determining that the container vehicle enters the smart gate according to the first acquisition time and the second acquisition time: determining a target interception time of container vehicle detection according to the first acquisition time and the second acquisition time of the first target container image; if a second target container image is acquired before the target interception time, determining that the container vehicle enters the smart gate; wherein the second target container image is a target container image for the rear-side direction of the container vehicle.
When determining the target interception time of container vehicle detection according to the first acquisition time and the second acquisition time of the first target container image, the determining module 42 is specifically configured to:
determining a delay duration according to the first acquisition time of the first target container image, or determining a delay duration according to the first acquisition time and the second acquisition time of the first target container image;
and determining the target interception time according to the delay duration and the second acquisition time.
The determining module 42 is further configured to: if a second target container image for the container vehicle is not acquired when the target interception moment is reached, determining that the container vehicle does not enter a smart gate; or if the second target container image for the container vehicle is not acquired and the first target container image for another container vehicle is acquired, determining that the container vehicle does not enter the smart gate.
The acquiring module 41 is specifically configured to, when acquiring the image of the target container: acquiring a first video stream from an intelligent detection unit, and acquiring multi-frame container images according to the first video stream; determining the confidence coefficient of the container identification in each frame of container image through a neural network; determining a target container image from the multi-frame container images according to the confidence level of the container identification in each frame of container image;
The acquiring module 41 is specifically configured to, when acquiring the target vehicle image: acquiring a second video stream from a license plate detection unit, acquiring a plurality of frames of vehicle images according to the second video stream, and determining the confidence coefficient of license plate identification in each frame of vehicle image through the neural network; and determining a target vehicle image from the multiple frames of vehicle images according to the confidence coefficient of the license plate identification in each frame of vehicle image.
The obtaining module 41 is specifically configured to, when determining the target container image from the multiple frames of container images, according to the confidence level of the container identifier in each frame of container image: determining a container image with the maximum confidence coefficient of the container identification as the target container image; the obtaining module 41 is specifically configured to, when determining the target vehicle image from the multiple frames of vehicle images according to the confidence level of the license plate identifier in each frame of vehicle image: and determining the vehicle image with the maximum confidence coefficient of the license plate identification as the target vehicle image.
The determining module 42 is specifically configured to, when determining the container identifier of the container vehicle from the target container image: inputting the target container image into a neural network, so that the neural network determines the container identification of the container vehicle according to the feature vector of the target container image;
The determining module 42 is specifically configured to, when determining the license plate identifier of the container vehicle according to the target vehicle image: and inputting the target vehicle image into a neural network, so that the neural network determines the license plate identification of the container vehicle according to the feature vector of the target vehicle image.
The control module 43 is specifically configured to, when determining that the container vehicle is a legal container vehicle according to the container identifier and the license plate identifier: the container identification and the license plate identification are sent to a third party platform, so that the third party platform can perform validity detection on the container vehicle according to the container identification and the license plate identification; if a legal instruction is received, determining that the container vehicle is a legal container vehicle; and the legal instruction is sent when the third party platform detects that the container vehicle is legal.
The control module 43 is further configured to: and sending the first acquisition time and the second acquisition time to the third party platform so that the third party platform records the first acquisition time and the second acquisition time.
Based on the same application concept as the above method, a terminal device is further provided in the embodiment of the present application, and from a hardware level, a schematic diagram of a hardware architecture of the terminal device may be shown in fig. 5. The terminal device may include: a processor 51 and a machine-readable storage medium 52, the machine-readable storage medium 52 storing machine-executable instructions executable by the processor 51; the processor 51 is configured to execute machine executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 51 is configured to execute machine executable instructions to implement the steps of:
acquiring a target container image and a first acquisition moment of the target container image;
acquiring a target vehicle image and a second acquisition time of the target vehicle image;
if the fact that the container vehicle enters the intelligent gate is determined according to the first acquisition time and the second acquisition time, determining a container identifier of the container vehicle according to the target container image, and determining a license plate identifier of the container vehicle according to the target vehicle image;
and if the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where the machine-readable storage medium stores a number of computer instructions, where the computer instructions can implement the method disclosed in the above example of the present application when executed by a processor.
For example, the computer instructions, when executed by a processor, can implement the steps of:
Acquiring a target container image and a first acquisition moment of the target container image;
acquiring a target vehicle image and a second acquisition time of the target vehicle image;
if the fact that the container vehicle enters the intelligent gate is determined according to the first acquisition time and the second acquisition time, determining a container identifier of the container vehicle according to the target container image, and determining a license plate identifier of the container vehicle according to the target vehicle image;
and if the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate.
By way of example, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, a machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., hard drive), a solid state drive, any type of storage disk (e.g., optical disk, DVD, etc.), or a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (9)

1. A method for managing intelligent gates, the method comprising:
acquiring a target container image and a first acquisition moment of the target container image;
acquiring a target vehicle image and a second acquisition time of the target vehicle image;
if the fact that the container vehicle enters the intelligent gate is determined according to the first acquisition time and the second acquisition time, determining a container identifier of the container vehicle according to the target container image, and determining a license plate identifier of the container vehicle according to the target vehicle image;
If the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate;
wherein the target container image comprises a first target container image for a preset direction of the container vehicle; the preset direction comprises at least one direction of a front side direction, a left side direction and a right side direction;
the determining that the container vehicle enters the intelligent gate according to the first acquisition time and the second acquisition time comprises the following steps: determining a delay duration according to the first acquisition time, or determining a delay duration according to the first acquisition time and the second acquisition time; determining a target interception moment of container vehicle detection according to the delay duration and the second acquisition time; if a second target container image for the container vehicle in the rear-side direction is acquired before the target interception moment, determining that the container vehicle enters a smart gate.
2. The method according to claim 1, wherein the method further comprises:
if a second target container image for the container vehicle is not acquired when the target interception moment is reached, determining that the container vehicle does not enter a smart gate; or,
if a second target container image for the container vehicle is not acquired and a first target container image for another container vehicle is acquired, determining that the container vehicle does not enter a smart gate.
3. A method according to claim 1 or 2, characterized in that,
the acquiring the target container image includes: acquiring a first video stream from an intelligent detection unit, and acquiring multi-frame container images according to the first video stream; determining the confidence coefficient of the container identification in each frame of container image through a neural network; determining a target container image from the multi-frame container images according to the confidence level of the container identification in each frame of container image;
the acquiring the target vehicle image includes: acquiring a second video stream from a license plate detection unit, acquiring a plurality of frames of vehicle images according to the second video stream, and determining the confidence coefficient of license plate identification in each frame of vehicle image through the neural network; and determining a target vehicle image from the multiple frames of vehicle images according to the confidence coefficient of the license plate identification in each frame of vehicle image.
4. The method of claim 3, wherein,
the determining a target container image from the multi-frame container images according to the confidence coefficient of the container identification in each frame of container image comprises the following steps:
determining a container image with the maximum confidence coefficient of the container identification as the target container image;
the determining the target vehicle image from the multi-frame vehicle image according to the confidence coefficient of the license plate identification in each frame of vehicle image comprises the following steps:
and determining the vehicle image with the maximum confidence coefficient of the license plate identification as the target vehicle image.
5. The method of claim 1, wherein,
the determining the container identification of the container vehicle according to the target container image comprises the following steps:
inputting the target container image into a neural network, so that the neural network determines the container identification of the container vehicle according to the feature vector of the target container image;
the determining the license plate identification of the container vehicle according to the target vehicle image comprises the following steps:
and inputting the target vehicle image into a neural network, so that the neural network determines the license plate identification of the container vehicle according to the feature vector of the target vehicle image.
6. The method of claim 1, wherein said determining that the container vehicle is a legitimate container vehicle based on the container identification and the license plate identification comprises:
the container identification and the license plate identification are sent to a third party platform, so that the third party platform carries out validity detection on the container vehicle according to the container identification and the license plate identification;
if a legal instruction is received, determining that the container vehicle is a legal container vehicle; and the legal instruction is sent when the third party platform detects that the container vehicle is legal.
7. The method of claim 6, wherein the method further comprises:
and sending the first acquisition time and the second acquisition time to the third party platform so that the third party platform records the first acquisition time and the second acquisition time.
8. A management device for intelligent gates, the device comprising:
the acquisition module is used for acquiring a target container image and a first acquisition moment of the target container image; and acquiring a target vehicle image and a second acquisition time of the target vehicle image;
The determining module is used for determining the container identification of the container vehicle according to the target container image and determining the license plate identification of the container vehicle according to the target vehicle image if the container vehicle is determined to enter the intelligent gate according to the first acquiring time and the second acquiring time;
the control module is used for allowing the container vehicle to pass through the intelligent gate if the container vehicle is determined to be legal according to the container identifier and the license plate identifier;
wherein the target container image comprises a first target container image for a preset direction of the container vehicle; the preset direction comprises at least one direction of a front side direction, a left side direction and a right side direction;
the determining module is specifically configured to, when determining that the container vehicle enters the intelligent gate according to the first acquisition time and the second acquisition time: determining a delay duration according to the first acquisition time, or determining a delay duration according to the first acquisition time and the second acquisition time; determining a target interception moment of container vehicle detection according to the delay duration and the second acquisition time; if a second target container image for the container vehicle in the rear-side direction is acquired before the target interception moment, determining that the container vehicle enters a smart gate.
9. A terminal device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to perform the steps of:
acquiring a target container image and a first acquisition moment of the target container image;
acquiring a target vehicle image and a second acquisition time of the target vehicle image;
if the fact that the container vehicle enters the intelligent gate is determined according to the first acquisition time and the second acquisition time, determining a container identifier of the container vehicle according to the target container image, and determining a license plate identifier of the container vehicle according to the target vehicle image;
if the container vehicle is determined to be legal according to the container identifier and the license plate identifier, allowing the container vehicle to pass through the intelligent gate;
wherein the target container image comprises a first target container image for a preset direction of the container vehicle; the preset direction comprises at least one direction of a front side direction, a left side direction and a right side direction;
The determining that the container vehicle enters the intelligent gate according to the first acquisition time and the second acquisition time comprises the following steps: determining a delay duration according to the first acquisition time, or determining a delay duration according to the first acquisition time and the second acquisition time; determining a target interception moment of container vehicle detection according to the delay duration and the second acquisition time; if a second target container image for the container vehicle in the rear-side direction is acquired before the target interception moment, determining that the container vehicle enters a smart gate.