CN113034378A - Method for distinguishing electric automobile from fuel automobile - Google Patents

Method for distinguishing electric automobile from fuel automobile

Info

Publication number
CN113034378A
Authority
CN
China
Prior art keywords
image
thermal infrared
frame
vehicle
infrared image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011630190.7A
Other languages
Chinese (zh)
Other versions
CN113034378B
Inventor
史文中
张英俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute HKPU
Original Assignee
Shenzhen Research Institute HKPU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute HKPU
Priority to CN202011630190.7A
Publication of CN113034378A
Application granted
Publication of CN113034378B
Legal status: Active
Anticipated expiration

Classifications

    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/09: Supervised learning
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10048: Infrared image
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for distinguishing an electric automobile from a fuel automobile, comprising the following steps: acquiring a thermal infrared image of a target vehicle; preprocessing the thermal infrared image, where the preprocessing comprises image enhancement processing and lane shading processing, to obtain a processed thermal infrared image; and inputting the processed thermal infrared image into a trained neural network model, which outputs the vehicle type of the target vehicle, either electric vehicle or fuel vehicle. The invention exploits the difference between electric and fuel automobiles on thermal infrared images and uses a neural network model to identify vehicles, achieving high accuracy and good target detection results.

Description

Method for distinguishing electric automobile from fuel automobile
Technical Field
The invention relates to the technical field of remote sensing, in particular to a method for distinguishing an electric automobile from a fuel automobile.
Background
With growing environmental awareness and the development of new-energy automobiles, the number of electric automobiles on roads has increased remarkably. Knowing the number and proportion of electric and fuel automobiles on the road helps traffic management, environmental protection, and other departments grasp the situation and make decisions.
For discriminating electric automobiles, the prior art can make a rough judgment from visible-light features, for example by querying vehicle information through the color of the license plate or by recognizing the license plate, but such methods have large defects and uncertainty, and their discrimination accuracy is low.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the above defects in the prior art, the invention aims to solve the technical problem that the accuracy of distinguishing electric automobiles from fuel automobiles is low, by providing a method for distinguishing an electric automobile from a fuel automobile.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a method of distinguishing between an electric vehicle and a fuel-powered vehicle, comprising the steps of:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the pre-processing comprises: image enhancement processing and lane shading processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
In the method for distinguishing the electric automobile from the fuel automobile, the preprocessing of the thermal infrared image to obtain a processed thermal infrared image comprises the following steps:
performing image enhancement processing on the thermal infrared image by adopting a histogram equalization algorithm to obtain an image-enhanced thermal infrared image;
and cutting the thermal infrared image subjected to image enhancement according to the lane range to obtain a processed thermal infrared image.
The method for distinguishing the electric automobile from the fuel automobile, wherein the thermal infrared image is subjected to image enhancement processing by adopting a histogram equalization algorithm to obtain an image-enhanced thermal infrared image, comprises the following steps:
determining the number of pixels of each gray level in the thermal infrared image;
determining a cumulative distribution function of the thermal infrared image according to the number of pixels of each gray level in the thermal infrared image;
and obtaining the thermal infrared image after image enhancement according to the cumulative distribution function and the thermal infrared image.
The method for distinguishing the electric automobile from the fuel automobile is characterized in that the trained neural network model is obtained by training through the following steps:
acquiring an original thermal infrared image of a vehicle and an original visible light image of the vehicle;
preprocessing the original thermal infrared image to obtain a processed original thermal infrared image;
marking the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a marking file;
and training a neural network model according to the processed original thermal infrared image and the label file to obtain the trained neural network model.
The method for distinguishing the electric vehicle from the fuel vehicle, wherein the acquiring of the original thermal infrared image of the vehicle and the original visible light image of the vehicle comprises:
acquiring an original thermal infrared image and an original visible light image of a vehicle on a road by adopting an unmanned aerial vehicle carrying a thermal infrared sensor; the thermal infrared sensor faces the traveling direction of the vehicle and is tilted downward so as to face the tail of the vehicle.
In the method for distinguishing the electric automobile from the fuel automobile, the label file contains the final label information of each frame of image in the processed original thermal infrared image;
labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a label file, comprising:
marking a first frame image and a last frame image of the processed original thermal infrared image after the vehicle enters a view field by adopting a square frame according to the original visible light image to obtain marking information of the first frame image and marking information of the last frame image; wherein the heat flow at the tail of the vehicle in the original thermal infrared image is positioned in the square frame;
and obtaining final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image and each frame of image in the processed original thermal infrared image so as to obtain a labeling file.
The method for distinguishing the electric automobile from the fuel automobile is characterized in that the labeling information comprises: the target center coordinate is the center coordinate of the box, the height of the target is the height of the box, and the width of the target is the width of the box;
the obtaining of the final annotation information of each frame of image according to the annotation information of the first frame of image, the annotation information of the last frame of image, and each frame of image in the processed original thermal infrared image to obtain an annotation file includes:
obtaining a confidence map of a next frame image corresponding to the first frame image of the vehicle by adopting a space-time context model from the first frame image, and taking a point with the maximum confidence coefficient in the confidence map of the next frame image as a forward target center coordinate of the next frame image until obtaining the forward target center coordinate of each frame image in the processed original thermal infrared image;
the step of obtaining, starting from the last frame of image, reverse annotation information of a previous frame of image corresponding to the last frame of image of the vehicle by using a space-time context model until obtaining the reverse annotation information of each frame of image in the processed original thermal infrared image includes:
obtaining a confidence map of a previous frame image corresponding to the last frame image of the vehicle by adopting a space-time context model from the last frame image, and taking a point with the maximum confidence coefficient in the confidence map of the previous frame image as a reverse target center coordinate of the previous frame image until obtaining the reverse target center coordinate of each frame image in the processed original thermal infrared image;
and for each frame of image in the processed original thermal infrared image, obtaining the height and width of the target in the frame of image according to the forward target center coordinate, the reverse target center coordinate, the labeling information of the first frame of image and the labeling information of the last frame of image of the frame of image, thereby obtaining the final labeling information of each frame of image in the processed original thermal infrared image.
The method for distinguishing the electric vehicle from the fuel vehicle, wherein the neural network model comprises: SSD model and Yolov5 model.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of any of the methods described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the method of any of the above.
Advantageous effects: the invention exploits the difference between electric automobiles and fuel automobiles on thermal infrared images and uses a neural network model to identify vehicles, achieving high accuracy and good target detection results.
Drawings
FIG. 1 is a flow chart of a method for distinguishing between an electric vehicle and a fuel-powered vehicle according to an embodiment of the present invention.
Fig. 2 shows the relationship between the height of the target and the moving distance (in pixels) of the target in the image, in the method for distinguishing the electric vehicle from the fuel vehicle provided by the embodiment of the invention.
Fig. 3 shows the relationship between the width of the target and the moving distance (in pixels) of the target in the image, in the method for distinguishing the electric vehicle from the fuel vehicle provided by the embodiment of the invention.
Fig. 4 is a schematic diagram of a forward tracking process using space-time context algorithm (STC) in a method for distinguishing an electric vehicle from a fuel vehicle according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a space-time context algorithm (STC) back-tracking procedure used in a method for distinguishing between electric vehicles and fuel-powered vehicles according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a tracking process of a semi-automatic labeling method provided in an embodiment of the present invention.
FIG. 7 is a loss diagram of a training process on an SSD algorithm for distinguishing electric vehicle and fuel vehicle data sets provided by an embodiment of the invention.
FIG. 8 is a graph illustrating the loss of the training process on the Yolov5 algorithm for distinguishing electric vehicle and fuel vehicle data sets provided by the embodiment of the present invention.
Fig. 9 is a first schematic diagram of a model test effect obtained by training an SSD algorithm according to the method for distinguishing an electric vehicle from a fuel vehicle provided in the embodiment of the present invention.
Fig. 10 is a second schematic diagram of a model test effect obtained by training an SSD algorithm according to the method for distinguishing an electric vehicle from a fuel vehicle provided in the embodiment of the present invention.
FIG. 11 is a first schematic diagram of a model test effect obtained by training a Yolov5 algorithm by using the method for distinguishing electric vehicles from fuel-powered vehicles provided by the embodiment of the invention.
FIG. 12 is a second schematic diagram of a model test effect obtained by training a Yolov5 algorithm by the method for distinguishing electric vehicles from fuel vehicles according to the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1-12, the present invention provides some embodiments of a method for distinguishing between an electric vehicle and a fuel-powered vehicle.
For discriminating electric automobiles, the prior art can make a rough judgment from visible-light features, for example by querying vehicle information through the license plate color or by license plate recognition, but such methods have large defects and uncertainty. First, there are differences between countries/regions: some do not issue special license plates for new-energy/electric vehicles. Even where special plates exist, both pure electric vehicles and hybrid vehicles carry green plates (pure electric vehicles carry green plates beginning with "D", hybrid vehicles carry green plates beginning with "F"), and relying on the license plate alone has the following problems: 1) license plate information cannot be acquired at night or on poorly lit road sections; 2) an electric automobile may also carry an ordinary blue plate, causing misclassification; 3) no useful information can be obtained from missing plates, fake plates, or special-purpose license plates; 4) for green plates, recognizing the first letter "D" or "F" (electric/hybrid) is difficult and demands high sensor resolution and favorable environmental conditions.
As shown in fig. 1, a method for distinguishing an electric vehicle from a fuel vehicle according to the present invention includes the following steps:
and S100, acquiring a thermal infrared image of the target vehicle.
Specifically, an unmanned aerial vehicle carrying a thermal infrared sensor acquires thermal infrared images of target vehicles on the road in an oblique manner (the angle between the main optical axis of photography and the direction of gravity is about 65 degrees). The sensor is tilted downward toward the traveling direction of the target vehicle, so that the tail of the target vehicle is imaged. The images are stored as single pictures or as video; a video needs to be cut into single images, as sketched below. Of course, other methods may also be used to collect the thermal infrared image of the target vehicle, for example installing a thermal infrared sensor over the road and combining it with the road "skynet" surveillance system.
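Where the capture format is video, a minimal frame-extraction sketch with OpenCV might look as follows; file paths and naming are illustrative, not part of the embodiment.

```python
# Sketch: splitting a thermal infrared video into single frames, as the
# acquisition step above requires. Paths and naming are illustrative.
import cv2
import os

def video_to_frames(video_path: str, out_dir: str) -> int:
    """Save every frame of `video_path` as a PNG in `out_dir`; return frame count."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{count:06d}.png"), frame)
        count += 1
    cap.release()
    return count
```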
S200, preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the pre-processing comprises: image enhancement processing and lane shading processing.
Specifically, after the thermal infrared image is obtained, it is preprocessed to obtain a processed thermal infrared image; preprocessing is necessary because the road environment is complicated and variable. The purpose of image enhancement is to improve the visual effect of the image, or to convert it into a form better suited to human observation and machine recognition, by technical means. The basic image enhancement methods include gray-level conversion, gray-level equalization, pseudo-color enhancement, smoothing, sharpening, filtering and the like.
In the present invention, a histogram equalization algorithm is adopted to perform image enhancement, and step S200 specifically includes:
and step S210, carrying out image enhancement processing on the thermal infrared image by adopting a histogram equalization algorithm to obtain the thermal infrared image after image enhancement.
Specifically, histogram equalization is a simple and effective image enhancement technique that changes the gray level of each pixel by reshaping the histogram of the image; it is mainly used to enhance the contrast of images with a small dynamic range. The gray distribution of the original image may be concentrated in a narrow interval, making the image insufficiently sharp. Equalization widens the gray values held by many pixels (the gray values that dominate the picture) and merges the gray values held by few pixels, increasing contrast, making the image clearer, and thereby achieving enhancement.
Step S210 specifically includes:
and step S211, determining the number of pixels of each gray level in the thermal infrared image.
Step S212, determining a cumulative distribution function of the thermal infrared image according to the number of pixels of each gray level in the thermal infrared image.
And S213, obtaining the thermal infrared image after image enhancement according to the cumulative distribution function and the thermal infrared image.
Specifically, the thermal infrared image is enhanced with the histogram equalization formulas below. The algorithm comprises the following steps:
(1) Count the number of pixels n_i of each gray level i in the thermal infrared image, 0 ≤ i < L, where L is the number of gray levels (for an 8-bit image L = 256, so the maximum gray level is 255). The count n_i is obtained for every gray level in the image.
(2) The occurrence probability of pixels with gray level i in the thermal infrared image is p_x(i) = p(x = i) = n_i / n, where n is the total number of pixels in the image; the probability is simply the ratio of the number of pixels of the ith gray level to the total number of pixels. The occurrence probability of each gray level in the image is obtained in this way.
(3) The cumulative distribution function of p_x is:

cdf_x(i) = Σ_{j=0}^{i} p_x(j)

that is, the occurrence probabilities of the gray levels are accumulated starting from gray level 0.
(4) The histogram equalization formula is:

h(v) = round( (cdf(v) − cdf_min) / (M × N − cdf_min) × (L − 1) )

where cdf(v) = (M × N) × cdf_x(v) is the cumulative pixel count of gray value v in the thermal infrared image, cdf_min is its minimum nonzero value, M and N are the length and width of the thermal infrared image, L is the number of gray levels, and h(v) is the gray value that v maps to in the image-enhanced thermal infrared image.
The histogram of the thermal infrared image output from the thermal infrared sensor is converted into a more uniformly distributed form to enhance the overall contrast of the image.
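A minimal sketch of steps (1) to (4) follows, assuming an 8-bit numpy input; a production pipeline could equally call cv2.equalizeHist.

```python
# Sketch of histogram-equalization steps (1)-(4) above for an 8-bit
# thermal infrared image (L = 256). The cdf here is normalized by the
# pixel count, so the denominator M*N - cdf_min becomes 1 - cdf_min.
import numpy as np

def equalize_histogram(img: np.ndarray, L: int = 256) -> np.ndarray:
    # (1) pixel count n_i of each gray level i
    hist = np.bincount(img.ravel(), minlength=L)
    # (2)-(3) occurrence probabilities and their cumulative distribution
    cdf = np.cumsum(hist) / img.size
    cdf_min = cdf[cdf > 0].min()
    # (4) h(v) = round((cdf(v) - cdf_min) / (1 - cdf_min) * (L - 1))
    lut = np.round((cdf - cdf_min) / (1 - cdf_min) * (L - 1))
    return np.clip(lut, 0, L - 1).astype(img.dtype)[img]
```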
And S220, cutting the thermal infrared image after the image enhancement according to the lane range to obtain a processed thermal infrared image.
When the thermal infrared sensor captures images, the frames contain vehicles traveling in both directions on the road. Because electric and fuel vehicles differ on thermal infrared images mainly at the vehicle tail, while the difference at the vehicle head is uncertain, only thermal infrared images of vehicles in one direction are needed; the opposite lane is therefore cropped out (lane shading) so that oncoming vehicles do not interfere with the discrimination.
Since the thermal infrared sensor images obliquely, it commonly captures vehicles on the opposite road (imaged head-on) in addition to the vehicles on the desired road, which interferes with the detection performance of models trained on the data set, so the opposite lane must be cropped out of the image. In practice one side or one corner of the image usually needs to be cropped. The specific method, sketched below, is: for a batch of images sharing the same cropping range, read the image data in batch, set the cropping range manually as required, and set the pixel values of the part to be cropped to 0.
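A minimal sketch of this batch masking follows; the rectangle coordinates are illustrative and would be chosen manually per flight.

```python
# Sketch of the lane-shading step: zero out the pixels of the opposite
# lane for a batch of images sharing one cropping rectangle.
# The rectangle coordinates below are illustrative.
import glob
import cv2

def mask_opposite_lane(in_glob: str, x0: int, y0: int, x1: int, y1: int) -> None:
    for path in glob.glob(in_glob):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        img[y0:y1, x0:x1] = 0  # pixel values of the cropped part set to 0
        cv2.imwrite(path, img)

# e.g. mask_opposite_lane("frames/*.png", x0=0, y0=0, x1=320, y1=512)
```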
Step S300, inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
Specifically, after the processed thermal infrared image is obtained, the processed thermal infrared image is input into the trained neural network model, the processed thermal infrared image is processed through the trained neural network model, and the vehicle type corresponding to the target vehicle is output, so that whether the target vehicle is an electric vehicle or a fuel vehicle is distinguished.
The neural network model may be an SSD (Single Shot MultiBox Detector) model or a Yolov5 model. "Single shot" indicates that the SSD algorithm is a one-stage method, and "MultiBox" indicates that SSD performs multi-box prediction. A minimal inference sketch follows.
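As an illustration only, the following sketch assumes a YOLOv5 checkpoint fine-tuned on the two classes and loaded through the ultralytics/yolov5 hub interface; the checkpoint path, class names, and confidence threshold are assumptions, not taken from the embodiment.

```python
# Illustrative inference sketch on a preprocessed (enhanced, lane-masked)
# thermal infrared frame. Checkpoint path and threshold are assumed.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="ev_vs_fuel.pt")
model.conf = 0.25                       # detection confidence threshold (assumed)

results = model("processed_frame.png")  # single preprocessed frame
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: {conf:.2f} at {xyxy}")
```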
The trained neural network model is obtained by training through the following steps:
step A100, obtaining an original thermal infrared image of a vehicle and an original visible light image of the vehicle.
Specifically, training data is acquired prior to training, and a raw thermal infrared image of the vehicle and a raw visible light image of the vehicle are acquired.
Specifically, step a100 includes:
a110, acquiring an original thermal infrared image and an original visible light image of a vehicle on a road by adopting an unmanned aerial vehicle carrying a thermal infrared sensor; wherein, the infrared sensor is arranged in a downward inclination manner.
It should be noted that the platform used to acquire the original thermal infrared images and original visible light images does not restrict the training and application of the model: for example, the model can be trained on data acquired by the unmanned aerial vehicle and then applied to data acquired by a thermal infrared sensor installed over the road; conversely, it can be trained on data from road-mounted thermal infrared sensors while drone-acquired data are used in application.
Step A200, preprocessing the original thermal infrared image to obtain a processed original thermal infrared image. Of course, the pre-treatment includes: image enhancement processing and lane shading processing.
Specifically, after the original thermal infrared image is obtained, the original thermal infrared image is preprocessed to obtain a processed original thermal infrared image. The specific preprocessing process may be the same as step S200.
And A300, marking the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a marking file.
When the vehicle is marked, manual marking can be adopted, and certainly, a semi-automatic marking mode can be adopted to obtain a marking file.
The original thermal infrared image and the original visible light image are considered together to distinguish the electric automobiles from the fuel automobiles in the original thermal infrared image.
During labeling, whether a specific automobile in the data set is an electric automobile or a fuel automobile is determined as follows. The original visible light image is acquired at the same time as the original thermal infrared image, so the vehicle's features on both can be compared. First, observe whether the vehicle has a tail heat flow in the thermal infrared image (a wake-like region behind the vehicle tail formed on the thermal infrared image by the high-temperature exhaust gas discharged from the exhaust system); then observe whether the vehicle tail is in a high-temperature state, together with the characteristics of the visible image (license plate color, brand, etc.).
Step (1): judge whether the automobile shows a tail heat flow and an obvious high-temperature area on the original thermal infrared image. If it does, the automobile is judged to be a fuel automobile regardless of the license plate color; if it does not, proceed to the next step;
in step (2): if the automobile does not have tail heat flow and an obvious high-temperature area on the original infrared thermal image, judging whether the license plate is a new energy license plate (green), if so, judging the license plate to be an electric automobile, and if not, entering the next step;
in step (3): if the automobile has no tail heat flow and obvious high-temperature area on the infrared thermal image, and the license plate is not a new energy license plate, the image characteristics of the front and back sequences of the target need to be analyzed: a) if the target of the whole sequence has no tail heat flow and no obvious high-temperature area, the target can be judged as an electric automobile; b) If the front sequence and the back sequence accord with the characteristics of the fuel automobile, the fuel automobile is judged.
It is worth noting that many hybrid vehicles on current roads clearly behave like fuel vehicles when imaged, yet carry new-energy license plates of the same color as electric vehicles; under the rule above they belong to the fuel class (a vehicle with a tail heat flow and an obvious high-temperature area on the thermal infrared image is judged to be a fuel vehicle regardless of plate color). For a hybrid vehicle, only the power system visible in the imaging is considered: if it shows a tail heat flow and an obvious high-temperature region on the thermal infrared image, it is labeled a fuel vehicle regardless of whether it is currently running on electricity or fuel. In this way every automobile on the road is classified as either an electric automobile or a fuel automobile, with no mixing of the two; the decision rule is sketched below.
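A sketch of this three-step rule as a function follows; the boolean inputs stand for the annotator's observations, and the parameter and label names are illustrative, not part of the embodiment.

```python
# Sketch of the three-step labeling rule above. The flags encode the
# annotator's observations; names and return labels are illustrative.
def label_vehicle(has_thermal_signature: bool,   # tail heat flow + obvious high-temperature area
                  has_green_plate: bool,         # new-energy license plate in the visible image
                  sequence_shows_fuel_traits: bool) -> str:
    # (1) a thermal signature means fuel vehicle regardless of plate color
    #     (this also assigns hybrids with green plates to the fuel class)
    if has_thermal_signature:
        return "fuel"
    # (2) no thermal signature + new-energy (green) plate: electric
    if has_green_plate:
        return "electric"
    # (3) otherwise inspect the whole image sequence of the target
    return "fuel" if sequence_shows_fuel_traits else "electric"
```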
When semi-automatic labeling is adopted, a space-time context (STC) target tracking algorithm assists the labeling of electric- and fuel-automobile targets. The STC algorithm is based on a Bayesian framework: it establishes a spatio-temporal relationship between the target object and its local context, models the statistical correlation between the low-level features (image intensity and position) of the target and of its surrounding region, and obtains the optimal target position by computing a confidence map and maximizing the target position likelihood function.
A labeling file is then generated containing four parameters: the coordinates (x, y) of the target's upper-left corner, the target width, and the target height.
As a preferred example of the invention, when a vehicle target is labeled, a specific vehicle yields a continuous sequence of (frame) images from entering the field of view to leaving it; the appearance of the target does not change abruptly between adjacent frames, and its position does not change greatly. A semi-automatic high-precision labeling method is therefore provided that introduces the space-time context method from the target tracking field. The method learns the space-time context model online from the appearance model of the tracked target and the spatial relation between the target and its surrounding context, and expresses the target center coordinate in the next frame through the maximum confidence value. When the target appearance changes little, the tracking of the target center position is accurate, but the tracked target scale is not. The method is therefore introduced into the automatic labeling of the training set in a way that improves the precision of both the center position and the scale: the first and last frames of a target's continuous sequence are labeled by hand, a bidirectional target tracking algorithm tracks forward and backward, the target position is computed by combining the forward and reverse tracking processes, and the target scale is computed from the target's vertical position in the image.
Specifically, step a300 includes:
step A310, labeling a first frame image and a last frame image after the vehicle enters a view field in the processed original thermal infrared image by adopting a square frame to obtain labeling information of the first frame image and labeling information of the last frame image; and the heat flow of the tail part of the vehicle in the original thermal infrared image is positioned in the box.
Specifically, the labeling information includes: the target center coordinate is the center coordinate of the box, the height of the target is the height of the box, and the width of the target is the width of the box.
Specifically, the first frame image and the last frame image of the target, from entering the field of view to leaving it, are labeled with a rectangle parallel to the coordinate axes. When a fuel automobile is labeled, the principle of containing the automobile tail heat flow is followed; if the target is an electric automobile (no tail heat flow), a space similar to the tail heat flow of a fuel automobile still needs to be reserved.
Step A320, obtaining the final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image and each frame of image in the processed original thermal infrared image, so as to obtain a labeling file.
Specifically, according to the labeling information of the first frame image, the labeling information of the last frame image, and each frame image in the processed original thermal infrared image, a space-time context model is adopted to obtain the final labeling information of each frame image, so as to obtain a labeling file.
Specifically, step a320 includes:
step A321, starting from the first frame image, obtaining a confidence map of the vehicle in a next frame image corresponding to the first frame image by adopting a space-time context model, and taking a point with the maximum confidence coefficient in the confidence map of the next frame image as a forward target center coordinate of the next frame image until obtaining the forward target center coordinate of each frame image in the processed original thermal infrared image.
Specifically, starting from the target in the first frame, a confidence map of the target in the next frame is obtained by using a space-time context model, and a point with the maximum confidence is found, wherein the point is the target center coordinate of the next frame of the image, so that the target center coordinates of the rest frames are obtained in an iteration mode.
Step A322, from the last frame of image, obtaining a confidence map of the vehicle in a previous frame of image corresponding to the last frame of image by adopting a space-time context model, and taking a point with the maximum confidence coefficient in the confidence map of the previous frame of image as a reverse target center coordinate of the previous frame of image until obtaining the reverse target center coordinate of each frame of image in the processed original thermal infrared image.
Specifically, a confidence map of the vehicle in a previous frame image corresponding to the last frame image is obtained by adopting a space-time context model from the last frame image, and a point with the maximum confidence coefficient in the confidence map of the previous frame image is used as a reverse target center coordinate of the previous frame image until a reverse target center coordinate of each frame image in the processed original thermal infrared image is obtained;
in the step a323, in the reverse tracking process, for each frame of image in the processed original thermal infrared image, the height of the target and the width of the target in the frame of image are obtained according to the forward target center coordinate and the reverse target center coordinate of the frame of image, the label information of the first frame of image, and the label information of the last frame of image, so as to obtain the final label information of each frame of image in the processed original thermal infrared image.
The scale (the height and the width of the target) is updated according to the distance the target has moved in the scene. For each frame of image in the processed original thermal infrared image, the forward scale Scale_forward(n) (the forward height and forward width of the target) is obtained from the frame's forward and reverse target center coordinates and the labeling information of the first frame image. The reverse scale Scale_reverse(n) (the reverse height and reverse width of the target) is obtained from the frame's forward and reverse target center coordinates and the labeling information of the last frame image. The final target center coordinate Loc(n) of the frame (the abscissa and ordinate of the target center) is obtained from the frame's forward and reverse target center coordinates. Finally, the final scale Scale(n) (the final height and final width of the target) is obtained from the labeling information of the first frame image, the labeling information of the last frame image, and the final target center coordinate, yielding the final labeling information of each frame of image in the processed original thermal infrared image.
As shown in fig. 2 and fig. 3, the height and the width of the target are approximately linear in the distance (in pixels) the target has moved in the image; the error is small enough that the relationship is treated as linear, so the target scale in the forward tracking process can be updated as:

Scale_forward(n) = Scale_forward(1) + (Scale_reverse(1) − Scale_forward(1)) × (y_forward(n) − y_forward(1)) / (y_reverse(1) − y_forward(1))

where Scale_forward(n) is the target scale of the nth frame in the forward tracking process, Scale_reverse(1) is the target scale of frame 1 in the backward tracking process (the hand-labeled last frame), y_forward(n) is the ordinate of the target center of the nth frame in the forward tracking process, and y_reverse(1) is the ordinate of the target center of frame 1 in the backward tracking process (i.e., the last frame of the forward trace).
Similarly, starting from the last frame of the sequence, the confidence map of the target in the previous frame is obtained with the space-time context model, and the position with the maximum confidence is the target center coordinate of that frame. The target scale in the backward tracking process is updated as:

Scale_reverse(n) = Scale_reverse(1) + (Scale_forward(1) − Scale_reverse(1)) × (y_reverse(n) − y_reverse(1)) / (y_forward(1) − y_reverse(1))

where Scale_reverse(n) is the target scale of the nth frame in the backward tracking process, Scale_forward(1) is the target scale of frame 1 in the forward tracking process (the hand-labeled first frame), Scale_reverse(1) is the target scale of frame 1 in the backward tracking process, y_forward(1) is the ordinate of the target center of frame 1 in the forward tracking process, and y_reverse(n) is the ordinate of the target center of the nth frame in the backward tracking process.
The final position of the target is a weighted average of the forward and backward processes, with weights determined by the number of frames separating the current frame from the first and last frames:

Loc(n) = ((k − n) / (k − 1)) × Loc_forward(n) + ((n − 1) / (k − 1)) × Loc_reverse(k − n)

where Loc(n) is the target center coordinate of the nth frame in the final result, Loc_forward(n) is the target center coordinate of the nth frame in the forward tracking process, Loc_reverse(k − n) is the target center coordinate of the (k − n)th frame in the backward tracking process (i.e., the nth frame of the forward tracking process), and k is the total number of frames in the tracking sequence.
Since the scale of the target changes approximately linearly with the position of the target in the image, and the scales of the first and last frames are known, the final scale of the target is obtained from the final target position:

Scale(n) = Scale_forward(1) + (Scale_reverse(1) − Scale_forward(1)) × (y(n) − y_forward(1)) / (y_reverse(1) − y_forward(1))

where Scale(n) is the scale of the target in the nth frame of the final result, y(n) is the ordinate of the final target center Loc(n), Scale_reverse(1) is the target scale of frame 1 in the backward tracking process, Scale_forward(1) is the target scale of frame 1 in the forward tracking process, y_forward(1) is the ordinate of the target center of frame 1 in the forward tracking process, and y_reverse(1) is the ordinate of the target center of frame 1 in the backward tracking process (i.e., the last frame of the forward trace).
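Under the stated linearity assumption, the bidirectional fusion can be sketched as follows; array names are illustrative, and the forward/backward center tracks are assumed to come from the STC passes above.

```python
# Sketch of the bidirectional fusion: centers are a frame-weighted average
# of the forward and backward STC tracks; scales are interpolated linearly
# in the image ordinate between the hand-labeled first and last frames.
import numpy as np

def fuse_tracks(loc_f: np.ndarray,        # (k, 2) forward-tracked centers (x, y)
                loc_r: np.ndarray,        # (k, 2) backward-tracked centers, last frame first
                scale_first: np.ndarray,  # (w, h) of the hand-labeled first frame
                scale_last: np.ndarray):  # (w, h) of the hand-labeled last frame
    k = len(loc_f)
    n = np.arange(1, k + 1)
    w_f = (k - n) / (k - 1)               # forward weight: 1 at frame 1, 0 at frame k
    loc = w_f[:, None] * loc_f + (1 - w_f)[:, None] * loc_r[::-1]
    # scale varies approximately linearly with the ordinate y (figs. 2-3);
    # assumes the target actually moves vertically across the sequence
    y1, yk = loc[0, 1], loc[-1, 1]
    t = (loc[:, 1] - y1) / (yk - y1)
    scale = (1 - t)[:, None] * scale_first + t[:, None] * scale_last
    return loc, scale
```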
And A400, training a neural network model according to the processed original thermal infrared image and the label file to obtain the trained neural network model.
The prepared data set is trained with a neural network model (which may be a deep convolutional neural network model). To demonstrate the effectiveness of the data set, it was trained with two target detection algorithms, the classic SSD and the recent Yolov5, and tested on a test set. In the SSD experiments the parameters were: batch size 8, learning rate 2e-4, weight decay 5e-4, 60000 iterations, learning rate decay steps (30000, 45000, 60000), for about 170 epochs of training. In the Yolov5 experiments the four models Yolov5s, Yolov5m, Yolov5l, and Yolov5x were used; their results differ little, so taking the Yolov5s model with the fewest network parameters as an example, the parameters were: batch size 32, image size 640, confidence threshold 1e-3, IOU threshold for the non-maximum suppression process 0.65, for 400 epochs of training. The SSD schedule is sketched below.
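As a rough illustration of how the quoted SSD schedule maps onto a training loop, the following is a minimal PyTorch sketch; the model and data pipeline are placeholders, and the decay factor gamma = 0.1 is an assumption (the embodiment does not state it).

```python
# Sketch of the SSD training schedule quoted above: batch size 8,
# learning rate 2e-4, weight decay 5e-4, 60000 iterations, learning-rate
# decay at steps 30000/45000/60000. Placeholder model; gamma is assumed.
import torch

model = torch.nn.Conv2d(1, 8, 3)  # placeholder for the SSD network
optimizer = torch.optim.SGD(model.parameters(), lr=2e-4, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30000, 45000, 60000], gamma=0.1)

for step in range(60000):
    # loss = detection_loss(...)  # combined loss, see the loss formulas below
    # loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```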
Specifically, the loss function of the neural network model includes: location loss, confidence loss, and overall loss; wherein the overall loss is a function of the location loss and the confidence loss.
In order to illustrate the effect of the method for distinguishing the electric vehicle from the fuel vehicle according to the embodiment of the present invention, fig. 4 shows the effect of forward tracking with the space-time context algorithm (STC), fig. 5 shows the effect of reverse tracking with STC, and fig. 6 shows the effect of tracking and labeling with the algorithm of the embodiment. FIG. 7 shows the position loss, confidence loss, and total loss during training of the data set on the SSD algorithm, where the total loss is a weighted sum of the confidence loss and the position loss:

L(x, c, l, g) = L_conf(x, c) + a × L_loc(x, l, g)

where L_conf(x, c) is the confidence loss and L_loc(x, l, g) is the position loss. The position loss is the L2 loss between the predicted box and the ground-truth box:

L_loc(x, l, g) = Σ_{i∈Pos} Σ_j x_ij × ||l_i − g_j||²

where x_ij is an indicator parameter: x_ij = 1 indicates that the ith prior box matches the jth ground truth; l_i is the predicted position of the bounding box corresponding to the ith prior box, and g_j is the position parameter of the jth ground truth. The confidence loss is a multi-class logistic (softmax) loss:

L_conf(x, c) = − Σ_{i∈Pos} x_ij × log(ĉ_i) − Σ_{i∈Neg} log(ĉ_i^0)

where ĉ_i is the confidence of the ith prior box and x_ij is the indicator parameter defined above; with the multi-class logistic loss the coefficient a is typically set to 0.06. FIG. 8 shows the change of the GIoU loss function during training of the data set on the Yolov5 algorithm. Fig. 9 shows a result of the SSD algorithm on the test set: the lowest fuel car has not completely entered the field of view and is therefore not labeled in the data set, yet the SSD algorithm recognizes it as a fuel car, and all other targets are correctly recognized. Fig. 10 shows another result of the SSD algorithm on the test set: the top fuel car has not completely left the field of view, is likewise not labeled in the data set, and is also identified as a fuel car. Fig. 11 shows a result of the Yolov5 algorithm on the test set in which all targets are correctly identified and the vehicle tail at the top is not falsely detected; fig. 12 shows a further Yolov5 result in which all targets are correctly identified and the target entering the field of view at the bottom is not falsely detected.
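The combined loss above can be wired up directly; below is a minimal PyTorch sketch, assuming the prior-box matching (the indicator x_ij) has already been reduced to a boolean positive mask and per-box class targets, and omitting SSD's hard-negative mining. All names are illustrative.

```python
# Minimal sketch of the total loss L = L_conf + a*L_loc described above,
# with the L2 position loss the text specifies and a multi-class logistic
# (softmax cross-entropy) confidence loss; a = 0.06 per the text.
# Assumes matching is precomputed (pos_mask); hard-negative mining omitted.
import torch
import torch.nn.functional as F

def detection_loss(pred_loc, gt_loc, pred_logits, gt_cls, pos_mask, a=0.06):
    # position loss: L2 between matched predicted boxes and ground truth
    l_loc = ((pred_loc[pos_mask] - gt_loc[pos_mask]) ** 2).sum()
    # confidence loss over all prior boxes (class 0 = background)
    l_conf = F.cross_entropy(pred_logits, gt_cls, reduction="sum")
    return l_conf + a * l_loc
```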
Comparing the accuracy of the box selection in fig. 4, fig. 5 and fig. 6 shows that the embodiment of the present invention achieves good accuracy and can be used for labeling a data set. Comparing the target detection results of fig. 9 and fig. 10 with those of fig. 11 and fig. 12 shows that the data set of the embodiment gives good results with both the SSD and Yolov5 algorithms; misdetections of vehicles entering and leaving the field of view occur with the SSD algorithm, but those detections are in fact true, they are simply not labeled in the data set.
Table 1 further gives the quantitative detection accuracy of the two deep convolutional neural networks on the data set: 1. average precision of the fuel automobile (the average precision over fuel-automobile Recall from 0 to 1); 2. average precision of the electric automobile (the average precision over electric-automobile Recall from 0 to 1); 3. fuel automobile average precision after removing the interfering targets entering and leaving the field of view; 4. mAP (the mean of the fuel-automobile and electric-automobile average precisions).
From the target detection results in Table 1, the method for distinguishing the electric vehicle from the fuel vehicle provided by the embodiment reaches or approaches an mAP of 0.99 with both algorithms. In the SSD experiment, fuel-automobile targets entering and leaving the field of view act as interference, since such targets are not labeled in the data set; after removing these interfering targets, the mAP reaches 0.9866 with the SSD algorithm, and an mAP exceeding 0.99 is obtained with the Yolov5 algorithm. In conclusion, the method for distinguishing the electric automobile from the fuel automobile provided by the embodiment of the invention has good discrimination accuracy.
TABLE 1 results of target detection
(Table 1 is reproduced as an image in the original publication; its numerical values are summarized in the preceding paragraph.)
Compared with the existing techniques for identifying electric and fuel automobiles, the embodiment of the invention mainly considers the thermal infrared image characteristics of electric and fuel automobiles and the data acquisition and processing flow:
1) On the basis of a comprehensive analysis of existing vehicle identification techniques, a scheme is proposed for distinguishing electric automobiles from fuel automobiles without depending on visible light images: it simultaneously exploits the difference between electric and fuel automobiles in the thermal infrared band and the performance advantage of deep convolutional neural networks in image feature learning, and reserves a space at the vehicle tail, where the thermal infrared image difference is largest, to increase identification accuracy.
2) A semi-automatic labeling method is proposed that integrates target tracking techniques and photogrammetric theory: the first and last frames of an image sequence are labeled by hand, a traditional target tracking algorithm tracks forward and backward to obtain the target position, and the target scale is then obtained according to the target's vertical position in the image to generate the labeling file.
Through the two points, the embodiment of the invention can obtain a better target detection result and has greater practicability.
Based on the method for distinguishing the electric automobile from the fuel automobile in any embodiment, the invention further provides an embodiment of computer equipment.
The computer equipment comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to realize the following steps:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the pre-processing comprises: image enhancement processing and lane shading processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
Based on the method for distinguishing the electric automobile from the fuel automobile in any embodiment, the invention further provides an embodiment of a computer readable storage medium.
The computer-readable storage medium of the present invention has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the pre-processing comprises: image enhancement processing and lane shading processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method of distinguishing between electric vehicles and fuel-powered vehicles, comprising the steps of:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the pre-processing comprises: image enhancement processing and lane shading processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
2. The method of claim 1, wherein the preprocessing the thermal infrared image to obtain a processed thermal infrared image comprises:
performing image enhancement processing on the thermal infrared image by adopting a histogram equalization algorithm to obtain an image-enhanced thermal infrared image;
and cutting the thermal infrared image subjected to image enhancement according to the lane range to obtain a processed thermal infrared image.
3. The method for distinguishing the electric vehicle from the fuel vehicle according to claim 2, wherein the image enhancement processing is performed on the thermal infrared image by using a histogram equalization algorithm to obtain an image-enhanced thermal infrared image, and the method comprises the following steps:
determining the number of pixels of each gray level in the thermal infrared image;
determining a cumulative distribution function of the thermal infrared image according to the number of pixels of each gray level in the thermal infrared image;
and obtaining the thermal infrared image after image enhancement according to the cumulative distribution function and the thermal infrared image.
4. The method according to claim 1, wherein the trained neural network model is obtained by the following training steps:
acquiring an original thermal infrared image of a vehicle and an original visible light image of the vehicle;
preprocessing the original thermal infrared image to obtain a processed original thermal infrared image;
labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a label file; and
training a neural network model on the processed original thermal infrared image and the label file to obtain the trained neural network model.
5. The method according to claim 4, wherein acquiring the original thermal infrared image of the vehicle and the original visible light image of the vehicle comprises:
acquiring the original thermal infrared image and the original visible light image of a vehicle on a road using an unmanned aerial vehicle carrying a thermal infrared sensor; wherein the thermal infrared sensor faces the direction of travel of the vehicle and is tilted downward so as to view the rear of the vehicle.
6. The method according to claim 4, wherein the label file contains the final label information for each frame image of the processed original thermal infrared image;
and labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain the label file comprises:
with reference to the original visible light image, labeling with a bounding box the first frame image and the last frame image after the vehicle enters the field of view in the processed original thermal infrared image, to obtain the label information of the first frame image and the label information of the last frame image; wherein the heat flow at the tail of the vehicle in the original thermal infrared image lies inside the bounding box; and
obtaining the final label information of each frame image from the label information of the first frame image, the label information of the last frame image, and each frame image of the processed original thermal infrared image, so as to obtain the label file.
7. The method according to claim 6, wherein the label information comprises a target center coordinate, a target height, and a target width, the target center coordinate being the center coordinate of the bounding box, the target height being the height of the box, and the target width being the width of the box;
and obtaining the final label information of each frame image from the label information of the first frame image, the label information of the last frame image, and each frame image of the processed original thermal infrared image comprises:
starting from the first frame image, using a spatio-temporal context model to obtain a confidence map for the next frame image, and taking the point of maximum confidence in that map as the forward target center coordinate of the next frame image, until the forward target center coordinate of every frame image of the processed original thermal infrared image is obtained;
starting from the last frame image, using a spatio-temporal context model to obtain a confidence map for the preceding frame image, and taking the point of maximum confidence in that map as the reverse target center coordinate of the preceding frame image, until the reverse target center coordinate of every frame image of the processed original thermal infrared image is obtained; and
for each frame image of the processed original thermal infrared image, obtaining the height and width of the target in that frame from its forward and reverse target center coordinates, the label information of the first frame image, and the label information of the last frame image, thereby obtaining the final label information of each frame image.
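The bidirectional propagation of claims 6 and 7 could be sketched as below. The spatio-temporal context tracker itself is abstracted as a caller-supplied `stc_track` function, and because the patent does not spell out how the forward and reverse estimates are fused, the averaging of centers and linear interpolation of box size used here are assumptions, not the claimed formula.

```python
# Hedged sketch of the bidirectional labeling in claims 6-7. `stc_track` stands
# in for a spatio-temporal context (STC) tracker: given the previous frame, the
# current frame, and the previous center, it returns the confidence-map peak as
# the new center. The fusion rule below is an assumption.

def annotate_sequence(frames, first_box, last_box, stc_track):
    """frames: list of images; first_box/last_box: (cx, cy, w, h) hand labels."""
    n = len(frames)
    # Forward pass: propagate the first-frame center through the sequence.
    fwd = [first_box[:2]]
    for i in range(1, n):
        fwd.append(stc_track(frames[i - 1], frames[i], fwd[-1]))
    # Backward pass: propagate the last-frame center in reverse order.
    bwd = [last_box[:2]]
    for i in range(n - 2, -1, -1):
        bwd.append(stc_track(frames[i + 1], frames[i], bwd[-1]))
    bwd.reverse()                               # align bwd[i] with frames[i]
    labels = []
    for i in range(n):
        t = i / max(n - 1, 1)
        cx = (fwd[i][0] + bwd[i][0]) / 2        # blend the two center estimates
        cy = (fwd[i][1] + bwd[i][1]) / 2
        w = (1 - t) * first_box[2] + t * last_box[2]   # interpolate box width
        h = (1 - t) * first_box[3] + t * last_box[3]   # interpolate box height
        labels.append((cx, cy, w, h))
    return labels
```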
8. The method according to claim 4, wherein the neural network model comprises an SSD model and a YOLOv5 model.
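As one hedged example of claim 8's model choice, torchvision ships an SSD300 that can be instantiated for the two vehicle classes plus background; the patent does not prescribe this library, and the dummy image and label below only illustrate the shape of a training call.

```python
import torch
import torchvision

# One possible realization of the claim-8 detector using torchvision's SSD
# (an assumption; torchvision >= 0.13). Classes: 0 = background,
# 1 = electric vehicle, 2 = fuel vehicle.
model = torchvision.models.detection.ssd300_vgg16(weights=None, num_classes=3)
model.train()

# Dummy training step on one processed thermal frame plus a label-file box.
image = torch.rand(3, 300, 300)   # thermal frame replicated to three channels
target = {"boxes": torch.tensor([[120.0, 80.0, 180.0, 140.0]]),  # x1, y1, x2, y2
          "labels": torch.tensor([1])}                           # electric vehicle
losses = model([image], [target])  # dict of classification/regression losses
sum(losses.values()).backward()
```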
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202011630190.7A 2020-12-30 2020-12-30 Method for distinguishing electric automobile from fuel automobile Active CN113034378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011630190.7A CN113034378B (en) 2020-12-30 2020-12-30 Method for distinguishing electric automobile from fuel automobile

Publications (2)

Publication Number Publication Date
CN113034378A 2021-06-25
CN113034378B 2022-12-27

Family

ID=76459121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011630190.7A Active CN113034378B (en) 2020-12-30 2020-12-30 Method for distinguishing electric automobile from fuel automobile

Country Status (1)

Country Link
CN (1) CN113034378B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0942395A2 (en) * 1998-03-13 1999-09-15 Siemens Corporate Research, Inc. Method for digital video processing
WO2009047366A2 (en) * 2007-10-12 2009-04-16 Toyota Motor Europe Nv Methods and systems for processing of video data
JP2013003901A (en) * 2011-06-17 2013-01-07 Sumitomo Electric Ind Ltd Electric vehicle identification apparatus and electric vehicle identification method
CN107564034A (en) * 2017-07-27 2018-01-09 华南理工大学 The pedestrian detection and tracking of multiple target in a kind of monitor video
CN110503661A (en) * 2018-05-16 2019-11-26 武汉智云星达信息技术有限公司 A kind of target image method for tracing based on deeply study and space-time context
CN109829449A (en) * 2019-03-08 2019-05-31 北京工业大学 A kind of RGB-D indoor scene mask method based on super-pixel space-time context
CN110570451A (en) * 2019-08-05 2019-12-13 武汉大学 multithreading visual target tracking method based on STC and block re-detection
CN211773082U (en) * 2019-12-04 2020-10-27 安徽育求消防科技有限公司 Device for distinguishing electric automobile from fuel automobile based on infrared characteristics
CN111429725A (en) * 2020-02-17 2020-07-17 国网安徽电动汽车服务有限公司 Intelligent recognition charging method for electric automobile based on intelligent commercialization
CN112070111A (en) * 2020-07-28 2020-12-11 浙江大学 Multi-target detection method and system adaptive to multiband images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Amanda Berg et al., "Semi-automatic Annotation of Objects in Visual-Thermal Video", 2019 IEEE/CVF International Conference on Computer Vision Workshop *
Qing Kang, "Lightweight convolutional neural network for vehicle recognition in thermal infrared images", Infrared Physics and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2620362A (en) * 2022-06-15 2024-01-10 William Cowper Stephen Systems and methods for managing electrical and internal combustion vehicles
CN115287089A (en) * 2022-09-02 2022-11-04 香港理工大学 Method for preparing aromatic monomer from lignin
CN115287089B (en) * 2022-09-02 2023-08-25 香港理工大学 Method for preparing aromatic monomer from lignin
CN117373259A (en) * 2023-12-07 2024-01-09 四川北斗云联科技有限公司 Expressway vehicle fee evasion behavior identification method, device, equipment and storage medium
CN117373259B (en) * 2023-12-07 2024-03-01 四川北斗云联科技有限公司 Expressway vehicle fee evasion behavior identification method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113034378B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN109460709B (en) RTG visual barrier detection method based on RGB and D information fusion
CN110969160B (en) License plate image correction and recognition method and system based on deep learning
CN111723854B (en) Expressway traffic jam detection method, equipment and readable storage medium
CN111898491B (en) Identification method and device for reverse driving of vehicle and electronic equipment
CN110866430A (en) License plate recognition method and device
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
CN111832461A (en) Non-motor vehicle riding personnel helmet wearing detection method based on video stream
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
Farag et al. Deep learning versus traditional methods for parking lots occupancy classification
CN111915583A (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN111950498A (en) Lane line detection method and device based on end-to-end instance segmentation
CN111507196A (en) Vehicle type identification method based on machine vision and deep learning
CN113221739B (en) Monocular vision-based vehicle distance measuring method
CN112052768A (en) Urban illegal parking detection method and device based on unmanned aerial vehicle and storage medium
CN117197019A (en) Vehicle three-dimensional point cloud image fusion method and system
Muril et al. A review on deep learning and nondeep learning approach for lane detection system
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
Burlacu et al. Stereo vision based environment analysis and perception for autonomous driving applications
Zhao et al. Research on vehicle detection and vehicle type recognition under cloud computer vision
CN113239962A (en) Traffic participant identification method based on single fixed camera
Yang et al. Research on Target Detection Algorithm for Complex Scenes
CN115762178B (en) Intelligent electronic police violation detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant