CN113034378B - Method for distinguishing electric automobile from fuel automobile


Info

Publication number
CN113034378B
Authority
CN
China
Prior art keywords: image, frame, thermal infrared, target, vehicle
Prior art date
Legal status
Active
Application number
CN202011630190.7A
Other languages
Chinese (zh)
Other versions
CN113034378A (en)
Inventor
史文中
张英俊
Current Assignee
Shenzhen Research Institute HKPU
Original Assignee
Shenzhen Research Institute HKPU
Priority date
Filing date
Publication date
Application filed by Shenzhen Research Institute HKPU filed Critical Shenzhen Research Institute HKPU
Priority to CN202011630190.7A priority Critical patent/CN113034378B/en
Publication of CN113034378A publication Critical patent/CN113034378A/en
Application granted granted Critical
Publication of CN113034378B publication Critical patent/CN113034378B/en

Classifications

    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10048 Infrared image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle


Abstract

The invention discloses a method for distinguishing an electric automobile from a fuel automobile, which comprises the following steps: acquiring a thermal infrared image of a target vehicle; preprocessing the thermal infrared image to obtain a processed thermal infrared image, wherein the preprocessing comprises image enhancement processing and lane masking processing; and inputting the processed thermal infrared image into a trained neural network model, which outputs the vehicle type corresponding to the target vehicle, the vehicle type being either an electric automobile or a fuel automobile. The invention exploits the differences between electric automobiles and fuel automobiles on thermal infrared images and uses a neural network model to identify vehicles, achieving higher accuracy and better target detection results.

Description

Method for distinguishing electric automobile from fuel automobile
Technical Field
The invention relates to the technical field of remote sensing, and in particular to a method for distinguishing an electric automobile from a fuel automobile.
Background
With rising environmental awareness and the development of new-energy vehicles, the number of electric automobiles on the road has increased markedly. Knowing the number and proportion of electric automobiles and fuel automobiles on the road helps traffic-management, environmental-protection, and other departments to assess conditions and make decisions.
For identifying electric automobiles, the prior art can only make a rough judgment from visible-light features, for example by querying vehicle information through the license-plate color or through license-plate recognition; such methods have significant shortcomings and uncertainty, and their discrimination accuracy is low.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
Aiming at the above defects of the prior art, the invention provides a method for distinguishing an electric automobile from a fuel automobile, so as to solve the technical problem that the accuracy of distinguishing electric automobiles from fuel automobiles in the prior art is low.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a method of distinguishing between an electric vehicle and a fuel-powered vehicle, comprising the steps of:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the preprocessing comprises: image enhancement processing and lane masking processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
In the method for distinguishing the electric automobile from the fuel automobile, preprocessing the thermal infrared image to obtain the processed thermal infrared image comprises the following steps:
performing image enhancement processing on the thermal infrared image by adopting a histogram equalization algorithm to obtain an image-enhanced thermal infrared image;
and cropping the image-enhanced thermal infrared image according to the lane range to obtain the processed thermal infrared image.
In the method for distinguishing the electric automobile from the fuel automobile, performing image enhancement processing on the thermal infrared image by adopting a histogram equalization algorithm to obtain an image-enhanced thermal infrared image comprises the following steps:
determining the number of pixels of each gray level in the thermal infrared image;
determining a cumulative distribution function of the thermal infrared image according to the number of pixels of each gray level in the thermal infrared image;
and obtaining the thermal infrared image after image enhancement according to the cumulative distribution function and the thermal infrared image.
The method for distinguishing the electric automobile from the fuel automobile is characterized in that the trained neural network model is obtained by training through the following steps:
acquiring an original thermal infrared image of a vehicle and an original visible light image of the vehicle;
preprocessing the original thermal infrared image to obtain a processed original thermal infrared image;
labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a label file;
and training a neural network model according to the processed original thermal infrared image and the label file to obtain the trained neural network model.
The method for distinguishing the electric vehicle from the fuel vehicle, wherein the acquiring of the original thermal infrared image of the vehicle and the original visible light image of the vehicle comprises:
acquiring an original thermal infrared image and an original visible light image of a vehicle on a road by adopting an unmanned aerial vehicle carrying a thermal infrared sensor; wherein the thermal infrared sensor faces the traveling direction of the vehicle and is tilted downward so as to face the rear of the vehicle.
In the method for distinguishing the electric automobile from the fuel automobile, the label file contains the final labeling information of each frame of image in the processed original thermal infrared image;
labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a label file, comprising:
labeling, according to the original visible light image, the first frame image and the last frame image after the vehicle enters the field of view in the processed original thermal infrared image with a bounding box, to obtain the labeling information of the first frame image and the labeling information of the last frame image; wherein the tail heat flow of the vehicle in the original thermal infrared image lies inside the box;
and obtaining final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image and each frame of image in the processed original thermal infrared image so as to obtain a labeling file.
The method for distinguishing the electric automobile from the fuel automobile is characterized in that the labeling information comprises: the target center coordinate is the center coordinate of the box, the height of the target is the height of the box, and the width of the target is the width of the box;
the obtaining of the final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image, and each frame of image in the processed original thermal infrared image to obtain a labeling file includes:
obtaining, starting from the first frame image, a confidence map of the vehicle in the next frame image by adopting a space-time context model, and taking the point with the maximum confidence in that confidence map as the forward target center coordinate of the next frame image, iterating until the forward target center coordinate of each frame image in the processed original thermal infrared image is obtained;
obtaining, starting from the last frame image, a confidence map of the vehicle in the previous frame image by adopting the space-time context model, and taking the point with the maximum confidence in that confidence map as the reverse target center coordinate of the previous frame image, iterating until the reverse target center coordinate of each frame image in the processed original thermal infrared image is obtained;
and, for each frame of image in the processed original thermal infrared image, obtaining the height and width of the target in that frame according to the forward and reverse target center coordinates of the frame, the labeling information of the first frame image, and the labeling information of the last frame image, thereby obtaining the final labeling information of each frame of image in the processed original thermal infrared image.
The method for distinguishing the electric vehicle from the fuel vehicle, wherein the neural network model comprises: SSD model and Yolov5 model.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program realizes the steps of any of the methods described above.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the method of any of the above.
Advantageous effects: the invention exploits the differences between electric automobiles and fuel automobiles on thermal infrared images and uses a neural network model to identify vehicles, achieving higher accuracy and better target detection results.
Drawings
FIG. 1 is a flow chart of a method for distinguishing between an electric vehicle and a fuel-powered vehicle according to an embodiment of the present invention.
Fig. 2 shows the relationship between the target height and the distance (in pixels) the target has moved in the image, in the method for distinguishing an electric automobile from a fuel automobile provided by an embodiment of the invention.
Fig. 3 shows the relationship between the target width and the distance (in pixels) the target has moved in the image, in the method for distinguishing an electric automobile from a fuel automobile provided by an embodiment of the invention.
Fig. 4 is a schematic diagram of a forward tracking process using space-time context algorithm (STC) in a method for distinguishing an electric vehicle from a fuel vehicle according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a space-time context algorithm (STC) back-tracking procedure used in a method for distinguishing between electric vehicles and fuel-powered vehicles according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a tracking process of a semi-automatic labeling method provided in an embodiment of the present invention.
FIG. 7 is a plot of the training losses of the SSD algorithm on the data set for distinguishing electric automobiles from fuel automobiles provided by an embodiment of the invention.
FIG. 8 is a plot of the training loss of the Yolov5 algorithm on the data set for distinguishing electric automobiles from fuel automobiles provided by an embodiment of the invention.
Fig. 9 is a first schematic diagram of a model test effect obtained by training an SSD algorithm according to the method for distinguishing an electric vehicle from a fuel vehicle provided in the embodiment of the present invention.
Fig. 10 is a second schematic diagram of a model test effect obtained by training an SSD algorithm according to the method for distinguishing an electric vehicle from a fuel vehicle provided by the embodiment of the present invention.
FIG. 11 is a first schematic diagram of a model test effect obtained by training a Yolov5 algorithm by using the method for distinguishing an electric vehicle from a fuel vehicle provided by the embodiment of the invention.
FIG. 12 is a second schematic diagram of a model test effect obtained by training a Yolov5 algorithm by using the method for distinguishing an electric vehicle from a fuel vehicle provided by the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Referring to fig. 1-12, the present invention provides some embodiments of a method for distinguishing an electric vehicle from a fuel vehicle.
For identifying electric automobiles, the prior art can only make a rough judgment from visible-light features, such as querying vehicle information through the license-plate color or through license-plate recognition, but such methods have significant shortcomings and uncertainty. First, there are differences between countries and regions: some do not issue special license plates for new-energy/electric vehicles. Even where special plates are issued, both pure electric vehicles and hybrid vehicles may carry green plates (a pure electric vehicle carries a green plate beginning with "D", and a hybrid vehicle a green plate beginning with "F"), and relying on the license plate alone has the following problems: 1) license-plate information cannot be acquired at night or on poorly lit road sections; 2) an electric automobile may also carry an ordinary blue plate, causing misclassification; 3) no useful information can be obtained from the license plate for unplated, fake-plated, or special-plated vehicles; 4) for green-plated vehicles, recognizing the initial letters "D" and "F" (electric/hybrid) is difficult and demands high sensor resolution and favorable environmental conditions.
As shown in fig. 1, a method for distinguishing an electric vehicle from a fuel vehicle of the present invention comprises the steps of:
and S100, acquiring a thermal infrared image of the target vehicle.
Specifically, an unmanned aerial vehicle carrying a thermal infrared sensor is used to acquire the thermal infrared image of the target vehicle on the road in an oblique manner (the angle between the main optical axis of the camera and the direction of gravity is about 65 degrees). The sensor is tilted downward toward the traveling direction of the target vehicle so as to image the tail of the target vehicle on the road. The images are stored as individual pictures or as video; a video must be cut into individual frames. Of course, other methods may be used to collect the thermal infrared image of the target vehicle; for example, thermal infrared sensors may be installed along the road and combined with roadside "skynet" surveillance.
S200, preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the preprocessing comprises: image enhancement processing and lane masking processing.
Specifically, after the thermal infrared image is obtained, it is preprocessed to obtain the processed thermal infrared image; because the road environment is complicated and variable, the thermal infrared image of the target vehicle on the road must be preprocessed. The purpose of image enhancement is to improve the visual effect of the image, or to convert the image into a form more suitable for human observation and machine recognition, by technical means. The basic methods of image enhancement mainly include gray-level transformation, gray-level equalization, pseudo-color enhancement, smoothing, sharpening, and filtering.
In the present invention, a histogram equalization algorithm is adopted to perform image enhancement, and step S200 specifically includes:
and step S210, carrying out image enhancement processing on the thermal infrared image by adopting a histogram equalization algorithm to obtain the thermal infrared image after image enhancement.
Specifically, histogram equalization is a simple and effective image enhancement technique that changes the gray level of each pixel by changing the histogram of the image; it is mainly used to enhance the contrast of images with a small dynamic range. The gray distribution of the original image may be concentrated in a narrow interval, making the image insufficiently clear. Histogram equalization widens the gray values occupied by many pixels (the gray values that dominate the picture) and merges the gray values occupied by few pixels (those that do not), thereby increasing contrast, producing a clear image, and achieving the aim of enhancement.
Step S210 specifically includes:
and step S211, determining the number of pixels of each gray level in the thermal infrared image.
Step S212, determining a cumulative distribution function of the thermal infrared image according to the number of pixels of each gray level in the thermal infrared image.
And S213, obtaining the thermal infrared image after image enhancement according to the cumulative distribution function and the thermal infrared image.
Specifically, image enhancement is performed on the thermal infrared image with the histogram equalization formula; the histogram equalization algorithm comprises the following steps:
(1) Count the number of pixels $n_i$ of each gray level in the thermal infrared image, where $0 \le i < L$ and $L$ is the number of gray levels.
For example, the gray levels may run from 0 to 255. The number of pixels of the $i$-th gray level is counted until the pixel count $n_i$ of every gray level in the thermal infrared image is obtained.
(2) The probability of occurrence of a pixel of gray level $i$ in the thermal infrared image is
$$p_x(i) = p(x = i) = \frac{n_i}{n},$$
where $n$ is the total number of pixels in the image. The probability of occurrence can be understood as a ratio, namely the number of pixels of the $i$-th gray level to the total number of pixels. The probability of occurrence of each gray level in the thermal infrared image is obtained in this way.
(3) The cumulative distribution function of $p_x$ is
$$\mathrm{cdf}_x(i) = \sum_{j=0}^{i} p_x(j).$$
That is, the cumulative distribution function is obtained by accumulating the occurrence probabilities of the gray levels of the thermal infrared image, starting from gray level 0.
(4) The histogram equalization formula is
$$h(v) = \mathrm{round}\!\left(\frac{\mathrm{cdf}(v) - \mathrm{cdf}_{min}}{M \cdot N - \mathrm{cdf}_{min}} \cdot (L - 1)\right),$$
where $\mathrm{cdf}_{min}$ is the minimum value of the cumulative distribution function (here taken in pixel counts), $M$ and $N$ are the length and width of the thermal infrared picture, $L$ is the number of gray levels, $\mathrm{cdf}(v)$ is the cumulative distribution value of pixel value $v$ in the thermal infrared picture, and $h(v)$ is the corresponding pixel value after image enhancement.
In this way, the histogram of the thermal infrared image output by the thermal infrared sensor is converted into a more uniformly distributed form, enhancing the overall contrast of the image.
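As an illustration, a minimal NumPy sketch of this histogram equalization step might look as follows (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def histogram_equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Equalize a single-channel 8-bit thermal infrared image.

    Implements h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1)),
    with the cdf measured in pixel counts, as in the formula above.
    """
    M, N = img.shape
    # (1) number of pixels of each gray level
    hist = np.bincount(img.ravel(), minlength=levels)
    # (2)-(3) cumulative distribution, in pixel counts
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0].min()
    # (4) histogram equalization mapping, applied as a lookup table
    lut = np.round((cdf - cdf_min) / (M * N - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(img.dtype)
    return lut[img]
```

For 8-bit images the same operation is also available ready-made as cv2.equalizeHist in OpenCV.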
Step S220, cropping the image-enhanced thermal infrared image according to the lane range to obtain the processed thermal infrared image.
When the thermal infrared sensor captures images, the thermal infrared image contains vehicles traveling in both directions on the road. Because electric automobiles and fuel automobiles differ on the thermal infrared image mainly at the vehicle tail, while the thermal infrared appearance of the vehicle head is uncertain, only the thermal infrared images of vehicles in the one-way lane are needed, and the opposite lane is cropped out. This realizes lane masking and prevents vehicles in the opposite lane from affecting the discrimination.
Since the thermal infrared sensor images obliquely, besides acquiring vehicle information on the desired road it usually also captures vehicles on the opposite road (imaged head-on), which interferes with the detection performance of models trained on the data set, so the opposite lane must be cropped out of the image. In practice, one side or one corner of the image usually needs to be cropped. The specific method is as follows: for a batch of images sharing the same crop range, the image data is read in batch, the crop range is set manually as required, and the pixel values of the part of the image to be cropped are set to 0.
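A minimal sketch of this lane-masking step, assuming a rectangular crop range chosen manually for a batch of images (the coordinates are hypothetical):

```python
import numpy as np

def mask_opposite_lane(img: np.ndarray, x0: int, y0: int, x1: int, y1: int) -> np.ndarray:
    """Set the pixels of the opposite-lane region [y0:y1, x0:x1] to 0,
    so head-on imaged vehicles on the opposite road do not interfere."""
    out = img.copy()
    out[y0:y1, x0:x1] = 0
    return out

# Example: apply the same manually chosen crop range to a batch of images
# masked = [mask_opposite_lane(im, 0, 0, 320, 512) for im in images]
```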
Step S300, inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
Specifically, after the processed thermal infrared image is obtained, it is input into the trained neural network model, which processes it and outputs the vehicle type corresponding to the target vehicle, thereby distinguishing whether the target vehicle is an electric automobile or a fuel automobile.
The neural network model comprises SSD (Single Shot MultiBox Detector) and Yolov5. "Single shot" indicates that the SSD algorithm is a one-stage method, and "MultiBox" indicates that SSD performs multi-box prediction.
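As a sketch of this inference step, a custom-trained Yolov5 model could be applied to a preprocessed frame roughly as follows (the weight file best.pt and the class names are assumptions; the hub entry point is the public ultralytics/yolov5 one):

```python
import torch

# load custom-trained Yolov5 weights via the ultralytics/yolov5 hub entry point
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

img = 'processed_thermal_frame.png'   # preprocessed thermal infrared image
results = model(img)

# each detection row: x1, y1, x2, y2, confidence, class index
for *box, conf, cls in results.xyxy[0].tolist():
    label = model.names[int(cls)]     # e.g. 'electric' or 'fuel' in such a dataset
    print(label, conf, box)
```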
The trained neural network model is obtained by training through the following steps:
step A100, obtaining an original thermal infrared image of a vehicle and an original visible light image of the vehicle.
Specifically, training data is acquired prior to training, and a raw thermal infrared image of the vehicle and a raw visible light image of the vehicle are acquired.
Specifically, step a100 includes:
a110, acquiring an original thermal infrared image and an original visible light image of a vehicle on a road by adopting an unmanned aerial vehicle carrying a thermal infrared sensor; wherein, the infrared sensor is arranged in a downward inclination manner.
It should be noted that the platform used to acquire the original thermal infrared images and original visible light images of vehicles on the road does not restrict the training and application of the model: for example, the model may be trained on data acquired by the unmanned aerial vehicle and applied to data acquired by thermal infrared sensors installed along the road, or trained on data from road-mounted thermal infrared sensors and applied to data acquired by unmanned aerial vehicles.
Step A200, preprocessing the original thermal infrared image to obtain a processed original thermal infrared image. Of course, the pre-treatment includes: image enhancement processing and lane shading processing.
Specifically, after the original thermal infrared image is obtained, the original thermal infrared image is preprocessed to obtain a processed original thermal infrared image. The specific preprocessing process may be the same as step S200.
Step A300, labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a label file.
The vehicle may be labeled manually, or a semi-automatic labeling method may be used to obtain the label file.
The original thermal infrared image and the original visible light image are considered together to distinguish the electric automobiles and the fuel automobiles in the original thermal infrared image.
In labeling, whether a particular vehicle in the data set is an electric automobile or a fuel automobile is determined as follows. The original visible light image is acquired simultaneously with the original thermal infrared image, so that the vehicle's features on both can be compared. First, observe whether the vehicle has tail heat flow on the thermal infrared image (a plume-like region at the vehicle tail formed by the high-temperature exhaust gas discharged from the exhaust system); then observe whether the vehicle tail is in a high-temperature state, together with the features of the visible image (license-plate color, brand, and so on).
In step (1): judge whether the automobile has tail heat flow and an obvious high-temperature region on the original thermal infrared image. If it does, it is judged to be a fuel automobile regardless of the license-plate color; if it does not, proceed to the next step.
In step (2): if the automobile has no tail heat flow and no obvious high-temperature region on the original thermal infrared image, judge whether the license plate is a new-energy (green) plate. If it is, the automobile is judged to be an electric automobile; if not, proceed to the next step.
In step (3): if the automobile has no tail heat flow and no obvious high-temperature region on the thermal infrared image and the license plate is not a new-energy plate, the image features of the target over the whole front-and-back sequence must be analyzed: a) if the target shows no tail heat flow and no obvious high-temperature region over the whole sequence, it can be judged to be an electric automobile; b) if the front and back sequences match the characteristics of a fuel automobile, it is judged to be a fuel automobile.
It is worth noting that many hybrid vehicles on current roads clearly behave like fuel automobiles when imaged while carrying new-energy license plates of the same color as electric automobiles; according to the distinguishing method above (a vehicle with tail heat flow and an obvious high-temperature region on the thermal infrared image is judged to be a fuel automobile regardless of license-plate color), they are classified as fuel automobiles. For a hybrid vehicle, only the state of its power system at imaging time matters: if it shows tail heat flow and an obviously high-temperature region on the thermal infrared image, it is labeled a fuel automobile, regardless of whether it is switching between electric and fuel drive. In this way, every automobile on the road can be classified as either an electric automobile or a fuel automobile, and the two classes are not mixed. The rule is condensed into a small decision function in the sketch below.
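A sketch of the three-step labeling rule, with the boolean inputs assumed to come from visual inspection of the thermal infrared and visible light images:

```python
def label_vehicle(has_tail_heat_flow: bool,
                  has_high_temp_region: bool,
                  has_green_plate: bool,
                  sequence_shows_heat: bool) -> str:
    """Label one vehicle as 'fuel' or 'electric' per the rules above.

    sequence_shows_heat: whether tail heat flow / a high-temperature region
    appears anywhere in the full image sequence of this target.
    """
    # Step (1): tail heat flow and an obvious high-temperature region -> fuel,
    # regardless of license-plate color (this also covers hybrids).
    if has_tail_heat_flow and has_high_temp_region:
        return 'fuel'
    # Step (2): no heat signature and a green (new-energy) plate -> electric.
    if has_green_plate:
        return 'electric'
    # Step (3): otherwise decide from the whole front/back image sequence.
    return 'fuel' if sequence_shows_heat else 'electric'
```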
When semi-automatic labeling is adopted, a space-time context target tracking algorithm is used to assist the semi-automatic labeling of electric automobile and fuel automobile targets. The space-time context algorithm is based on a Bayesian framework: it establishes a space-time relationship between the target object and its local context, models the statistical correlation between the low-level features (image intensity and position) of the target object and its surrounding region, and obtains the optimal target position by computing a confidence map and maximizing the target position likelihood function.
A label file is generated containing four parameters: the coordinates (x, y) of the target's upper-left corner, the target width, and the target height.
As a preferred example of the invention, when a vehicle target is labeled, a specific vehicle target appears continuously in multiple frames of images from entering the field of view to exiting it; the target's appearance does not change abruptly between adjacent frames, and its position does not change greatly.
Specifically, step a300 includes:
step A310, labeling a first frame image and a last frame image after the vehicle enters a view field in the processed original thermal infrared image by adopting a square frame to obtain labeling information of the first frame image and labeling information of the last frame image; and the heat flow of the tail part of the vehicle in the original thermal infrared image is positioned in the box.
Specifically, the labeling information includes: the target center coordinate is the center coordinate of the box, the target height is the height of the box, and the target width is the width of the box.
Specifically, the first frame image and the last frame image of the target from the entering visual field to the exiting visual field are marked. The rectangle parallel to the coordinate axis is used for labeling, the fuel automobile is labeled according to the principle of containing automobile tail heat flow, and if the target is an electric automobile (without tail heat flow), a space similar to the tail heat flow of the fuel automobile needs to be reserved.
Step A320, obtaining final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image and each frame of image in the processed original thermal infrared image, so as to obtain a labeling file.
Specifically, according to the labeling information of the first frame image, the labeling information of the last frame image, and each frame image in the processed original thermal infrared image, a space-time context model is adopted to obtain the final labeling information of each frame image, so as to obtain a labeling file.
Specifically, step a320 includes:
step A321, from the first frame image, obtaining a confidence map of the vehicle in a next frame image corresponding to the first frame image by adopting a space-time context model, and taking a point with the maximum confidence coefficient in the confidence map of the next frame image as a forward target center coordinate of the next frame image until obtaining the forward target center coordinate of each frame image in the processed original thermal infrared image.
Specifically, starting from the target in the first frame, a confidence map of the target in the next frame is obtained by using a spatio-temporal context model, a point with the maximum confidence is found, the point is the target center coordinate of the next frame of the image, and the target center coordinates of the rest frames are obtained by iteration.
Step A322, from the last frame of image, obtaining a confidence map of the vehicle in a previous frame of image corresponding to the last frame of image by adopting a space-time context model, and taking a point with the maximum confidence coefficient in the confidence map of the previous frame of image as a reverse target center coordinate of the previous frame of image until obtaining the reverse target center coordinate of each frame of image in the processed original thermal infrared image.
Specifically, a confidence map of a previous frame image corresponding to the last frame image of the vehicle is obtained by adopting a space-time context model from the last frame image, and a point with the maximum confidence coefficient in the confidence map of the previous frame image is used as a reverse target center coordinate of the previous frame image until a reverse target center coordinate of each frame image in the processed original thermal infrared image is obtained;
in the step a323, in the reverse tracking process, for each frame of image in the processed original thermal infrared image, the height of the target and the width of the target in the frame of image are obtained according to the forward target center coordinate and the reverse target center coordinate of the frame of image, the label information of the first frame of image, and the label information of the last frame of image, so as to obtain the final label information of each frame of image in the processed original thermal infrared image.
The scale (including the height of the object and the width of the object) is updated according to the distance the object moves in the scene. Aiming at each frame of image in the processed original thermal infrared image, obtaining the forward Scale of the frame of image according to the forward target center coordinate and the reverse target center coordinate of the frame of image and the labeling information of the first frame of image forward (n) (i.e., the forward height of the target and the forward width of the target). Obtaining the reverse Scale of the frame image according to the forward target center coordinate and the reverse target center coordinate of the frame image and the labeling information of the last frame image reverse (n) (i.e., reverse height of the target and reverse width of the target). Obtaining the final target center of the frame image according to the forward target center coordinate and the reverse target center coordinate of the frame imageThe coordinates Loc (n) indicate the abscissa of the center of the target and the ordinate of the center of the target. And finally, obtaining the final Scale (n) of the frame image (namely the final height of the target and the final width of the target) according to the labeling information of the first frame image, the labeling information of the last frame image and the final target center coordinate of the frame image, so as to obtain the final labeling information of each frame image in the processed original thermal infrared image.
As shown in Figs. 2 and 3, the height and width of the target are approximately linear in the distance (in pixels) the target has moved in the image; the error is small enough that the relationship is treated as linear, so the target scale in the forward tracking process is updated as
$$\mathrm{Scale}_{forward}(n) = \mathrm{Scale}_{forward}(1) + \frac{y_n^{forward} - y_1^{forward}}{y_1^{reverse} - y_1^{forward}}\bigl(\mathrm{Scale}_{reverse}(1) - \mathrm{Scale}_{forward}(1)\bigr),$$
where $\mathrm{Scale}_{forward}(n)$ is the target scale of the $n$-th frame during forward tracking, $\mathrm{Scale}_{reverse}(1)$ is the target scale of frame 1 in the backward tracking process, $y_n^{forward}$ is the ordinate of the target center of the $n$-th frame in the forward tracking process, and $y_1^{reverse}$ is the ordinate of the target center of the first frame in the backward tracking process (i.e., the last frame in forward tracking).
Starting from the last frame of the image sequence, the confidence map of the target in the previous frame is acquired with the space-time context model; the position of maximum confidence is the target center coordinate in that frame. The target scale is updated as
$$\mathrm{Scale}_{reverse}(n) = \mathrm{Scale}_{reverse}(1) + \frac{y_n^{reverse} - y_1^{reverse}}{y_1^{forward} - y_1^{reverse}}\bigl(\mathrm{Scale}_{forward}(1) - \mathrm{Scale}_{reverse}(1)\bigr),$$
where $\mathrm{Scale}_{reverse}(n)$ is the target scale of the $n$-th frame of the backward tracking process, $\mathrm{Scale}_{forward}(1)$ is the target scale of frame 1 in the forward tracking process, $\mathrm{Scale}_{reverse}(1)$ is the target scale of frame 1 in the backward tracking process, $y_1^{forward}$ is the ordinate of the target center of frame 1 in the forward tracking process, and $y_n^{reverse}$ is the ordinate of the target center of the $n$-th frame in the backward tracking process.
The final position of the target is computed as a weighted average of the forward and backward processes, with the weights determined by the number of frames separating the frame from the first and last frames:
$$\mathrm{Loc}(n) = \frac{(k - n)\,\mathrm{Loc}_{forward}(n) + n\,\mathrm{Loc}_{reverse}(k - n)}{k},$$
where $\mathrm{Loc}(n)$ is the final target center coordinate of the $n$-th frame, $\mathrm{Loc}_{reverse}(k-n)$ is the target center coordinate of the $(k-n)$-th frame in the backward tracking process (i.e., the $n$-th frame of forward tracking), and $k$ is the total number of frames in the tracking process.
Since the target scale changes approximately linearly with the target position in the image, and the scales of the first and last frames are known, the final scale of the target is obtained from its final position:
$$\mathrm{Scale}(n) = \mathrm{Scale}_{forward}(1) + \frac{y_n - y_1^{forward}}{y_1^{reverse} - y_1^{forward}}\bigl(\mathrm{Scale}_{reverse}(1) - \mathrm{Scale}_{forward}(1)\bigr),$$
where $\mathrm{Scale}(n)$ is the final scale of the target in the $n$-th frame, $y_n$ is the final ordinate of the target center in the $n$-th frame, $\mathrm{Scale}_{reverse}(1)$ is the target scale of frame 1 in the backward tracking process, $\mathrm{Scale}_{forward}(1)$ is the target scale of frame 1 in the forward tracking process, $y_1^{forward}$ is the ordinate of the target center of frame 1 in forward tracking, and $y_1^{reverse}$ is the ordinate of the target center of the first frame of backward tracking (i.e., the last frame of forward tracking).
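A minimal sketch of this forward/backward fusion, under the linear-interpolation reading of the formulas above (the exact frame-indexing convention and the interpolation form are assumptions reconstructed from the text):

```python
import numpy as np

def fuse_tracks(loc_fwd, loc_rev, scale_first, scale_last, y_first, y_last):
    """Fuse forward and backward STC tracks into final centers and scales.

    loc_fwd     : (k, 2) target centers (x, y) from forward tracking, frames 1..k
    loc_rev     : (k, 2) target centers from backward tracking, frame 1 = last frame
    scale_first : (h, w) manually labeled scale of the first frame
    scale_last  : (h, w) manually labeled scale of the last frame
    y_first,
    y_last      : center ordinates of the first / last frame
    """
    k = len(loc_fwd)
    n = np.arange(1, k + 1)[:, None]
    # weighted average of forward and (re-aligned) backward positions;
    # the weight of each track grows with distance from its starting frame
    loc = ((k - n) * loc_fwd + n * loc_rev[::-1]) / k
    # scale interpolated linearly in the final ordinate (column 1 = y)
    t = (loc[:, 1] - y_first) / (y_last - y_first)
    scale = np.asarray(scale_first) + t[:, None] * (np.asarray(scale_last) - np.asarray(scale_first))
    return loc, scale  # final center coordinates Loc(n) and scales Scale(n)
```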
Step A400, training a neural network model according to the processed original thermal infrared image and the label file to obtain the trained neural network model.
The produced data set is used to train a neural network model (which may be a deep convolutional neural network). To demonstrate the effectiveness of the data set, the classical SSD and the recent Yolov5 target detection algorithms are trained on it and evaluated on the test set. In the SSD experiment, the parameters were as follows: batch size 8, learning rate 2e-4, weight decay 5e-4, number of iterations 60000, learning-rate decay steps (30000, 45000, 60000), corresponding to about 170 epochs. In the Yolov5 experiment, four models (Yolov5s, Yolov5m, Yolov5l, and Yolov5x) were used; the results of the four models differ very little. Taking Yolov5s, the model with the fewest network parameters, as an example, the parameters were: batch size 32, image size 640, confidence threshold 1e-3, IOU threshold for non-maximum suppression 0.65, trained for 400 epochs.
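For reference, the reported hyperparameters can be collected as follows (a plain restatement of the text above; the variable names are illustrative):

```python
# Hyperparameters reported for the two experiments (names are illustrative).
ssd_params = {
    'batch_size': 8,
    'learning_rate': 2e-4,
    'weight_decay': 5e-4,
    'iterations': 60000,             # about 170 epochs
    'lr_decay_steps': (30000, 45000, 60000),
}

yolov5s_params = {
    'batch_size': 32,
    'image_size': 640,
    'conf_threshold': 1e-3,          # confidence threshold
    'nms_iou_threshold': 0.65,       # IoU threshold for non-maximum suppression
    'epochs': 400,
}
```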
Specifically, the loss function of the neural network model includes: position loss, confidence loss, and total loss; wherein the overall loss is a function of the location loss and the confidence loss.
To illustrate the effect of the method for distinguishing an electric automobile from a fuel automobile according to an embodiment of the present invention, Fig. 4 shows the forward tracking result of the space-time context algorithm (STC), Fig. 5 the backward tracking result of STC, and Fig. 6 the tracking and labeling result of the algorithm of the embodiment. Fig. 7 shows how the position loss, confidence loss, and total loss change while training the data set on the SSD algorithm, where the total loss is a weighted sum of the position loss and the confidence loss:
$$L(x, c, l, g) = L_{conf}(x, c) + \alpha L_{loc}(x, l, g),$$
where $L_{conf}(x, c)$ is the confidence loss and $L_{loc}(x, l, g)$ is the position loss. The position loss is the $L_2$ loss between the predicted box and the ground-truth box:
$$L_{loc}(x, l, g) = \sum_{i \in Pos} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k} \left\| l_i^m - \hat{g}_j^m \right\|_2,$$
where $x_{ij}^{k}$ is an indicator parameter: $x_{ij}^{k} = 1$ indicates that the $i$-th prior box matches the $j$-th ground truth, $l_i$ is the predicted position of the bounding box corresponding to the $i$-th prior box, and $\hat{g}_j$ is the position parameter of the $j$-th ground truth. The confidence loss is a multi-class softmax (logistic) loss:
$$L_{conf}(x, c) = -\sum_{i \in Pos} x_{ij}^{p} \log \hat{c}_i^{p} - \sum_{i \in Neg} \log \hat{c}_i^{0},$$
where $\hat{c}_i^{p}$ is the (softmax-normalized) confidence of the $i$-th prior box for class $p$, $x_{ij}^{p}$ is an indicator parameter with $x_{ij}^{p} = 1$ indicating that the $i$-th prior box matches the $j$-th ground truth of class $p$, and the coefficient $\alpha$ is typically set to 0.06. Fig. 8 shows the change of GIoU (the loss function) while training the data set on the Yolov5 algorithm. Fig. 9 shows a result of the SSD algorithm on the test set: the lowermost fuel automobile has not fully entered the field of view and is not labeled in the data set, yet the SSD algorithm recognizes it as a fuel automobile; the remaining targets are correctly recognized. Fig. 10 shows another SSD result on the test set: the topmost fuel automobile has not completely left the field of view; this target is likewise unlabeled in the data set yet identified as a fuel automobile. Fig. 11 shows a result of the Yolov5 algorithm on the test set: all targets are correctly identified, and the uppermost car tail is not falsely detected. Fig. 12 shows another Yolov5 result: all targets are correctly identified, and the lowermost target entering the field of view is not falsely detected.
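For illustration, a minimal PyTorch sketch of this total loss, assuming prior-to-ground-truth matching has already been done (the function name and tensor layout are assumptions; note that the published SSD uses a smooth-L1 position loss, while the L2 form stated above is used here):

```python
import torch
import torch.nn.functional as F

def ssd_total_loss(loc_pred, loc_target, conf_pred, pos_mask, neg_mask, cls_target, alpha=0.06):
    """L(x, c, l, g) = L_conf(x, c) + alpha * L_loc(x, l, g).

    loc_pred, loc_target : (P, 4) encoded boxes of the P matched (positive) priors
    conf_pred            : (A, C) class scores of all A priors (class 0 = background)
    pos_mask, neg_mask   : (A,) boolean masks of positive / selected negative priors
    cls_target           : (P,) ground-truth class index of each positive prior
    """
    # position loss: L2 distance between predicted and ground-truth boxes
    loss_loc = torch.norm(loc_pred - loc_target, p=2, dim=1).sum()
    # confidence loss: multi-class softmax loss over the positive priors ...
    log_probs = F.log_softmax(conf_pred, dim=1)
    loss_pos = -log_probs[pos_mask].gather(1, cls_target[:, None]).sum()
    # ... plus the background (class 0) term over the selected negatives
    loss_neg = -log_probs[neg_mask, 0].sum()
    return (loss_pos + loss_neg) + alpha * loss_loc
```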
Comparing the accuracy of the box selection in Figs. 4 and 5 with Fig. 6 shows that the embodiment of the present invention achieves better precision and can be used for labeling a data set. Comparing the target detection results of Figs. 9, 10, 11, and 12 shows that the data set of the embodiment yields good results with both the SSD and Yolov5 algorithms; the SSD algorithm does falsely detect vehicles entering and exiting the field of view, but those detections correspond to real vehicles that are simply not labeled in the data set.
Table 1 further shows the detection accuracy indexes of the two deep convolutional neural networks on the data set. The indexes include: 1. average precision of fuel automobiles (the average precision value over recall from 0 to 1); 2. average precision of electric automobiles (the average precision value over recall from 0 to 1); 3. average precision of fuel automobiles after removing interfering targets entering and exiting the field of view; 4. mAP (the mean of the fuel-automobile and electric-automobile average precisions).
From the target detection results in Table 1, the method for distinguishing an electric automobile from a fuel automobile provided by the embodiment of the present invention reaches or approaches an mAP of 0.99 under both algorithms. In the SSD experiment, fuel-automobile targets entering and exiting the field of view are not labeled in the data set and therefore act as interference; after these interfering targets are removed, the mAP reaches 0.9866 with the SSD algorithm, and an mAP exceeding 0.99 is obtained with the Yolov5 algorithm, indicating that the method has superior discrimination accuracy.
TABLE 1 results of target detection
(Table 1 is reproduced as an image in the original publication.)
Compared with existing electric automobile and fuel automobile identification techniques, the embodiment of the invention mainly considers the thermal infrared image characteristics of electric automobiles and fuel automobiles together with the data acquisition and processing flow:
1) On the basis of a comprehensive analysis of existing vehicle identification techniques, a scheme is proposed for distinguishing an electric automobile from a fuel automobile without depending on visible light images: it simultaneously considers the difference between electric automobiles and fuel automobiles in the thermal infrared band and the performance advantage of deep convolutional neural networks in image feature learning, and reserves a space at the vehicle tail, where the thermal infrared image features differ most, to improve identification accuracy.
2) A semi-automatic labeling method is proposed that integrates target tracking techniques and photogrammetric theory: the first and last frames of an image sequence are labeled, a traditional target tracking algorithm is used to track forward and backward to obtain the target position, and the target scale is then obtained from the target's vertical position in the image to generate the label file.
Through the above two points, the embodiment of the invention obtains better target detection results and has high practicability.
Based on the method for distinguishing the electric automobile from the fuel automobile in any embodiment, the invention further provides an embodiment of computer equipment.
The computer equipment comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to realize the following steps:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the preprocessing comprises: image enhancement processing and lane masking processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
Based on the method for distinguishing the electric automobile from the fuel automobile in any embodiment, the invention further provides an embodiment of a computer readable storage medium.
The computer-readable storage medium of the present invention has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the preprocessing comprises: image enhancement processing and lane masking processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles.
It is to be understood that the invention is not limited to the examples described above; modifications and variations may be made by those of ordinary skill in the art in light of the foregoing description, and all such modifications and variations are intended to fall within the scope of the invention as defined by the appended claims.

Claims (6)

1. A method of distinguishing between electric vehicles and fuel-powered vehicles, comprising the steps of:
acquiring a thermal infrared image of a target vehicle;
preprocessing the thermal infrared image to obtain a processed thermal infrared image; wherein the preprocessing comprises: image enhancement processing and lane masking processing;
inputting the processed thermal infrared image into a trained neural network model, and outputting a vehicle type corresponding to the target vehicle through the trained neural network model; wherein the vehicle type includes: electric vehicles and fuel-powered vehicles;
the trained neural network model is obtained by training through the following steps:
acquiring an original thermal infrared image of a vehicle and an original visible light image of the vehicle;
preprocessing the original thermal infrared image to obtain a processed original thermal infrared image;
labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a label file; during labeling, if the vehicle has tail heat flow and a high-temperature region on the processed original thermal infrared image, the vehicle is a fuel automobile; if the vehicle has no tail heat flow and no high-temperature region on the processed original thermal infrared image and its license plate is a new-energy license plate, the vehicle is an electric automobile; if the vehicle has no tail heat flow and no high-temperature region on the processed original thermal infrared image and its license plate is not a new-energy license plate, judging whether the vehicle has tail heat flow and a high-temperature region over the whole sequence of the processed original thermal infrared image: if not, the vehicle is an electric automobile, and if so, the vehicle is a fuel automobile;
training a neural network model according to the processed original thermal infrared image and the label file to obtain a trained neural network model;
the acquiring of the original thermal infrared image of the vehicle and the original visible light image of the vehicle comprises:
acquiring an original thermal infrared image and an original visible light image of a vehicle on a road by adopting an unmanned aerial vehicle carrying a thermal infrared sensor; wherein the thermal infrared sensor faces the traveling direction of the vehicle and is tilted downward so as to face the rear of the vehicle;
the label file is the final label information of each frame of image in the processed original thermal infrared image;
labeling the vehicle according to the original visible light image and the processed original thermal infrared image to obtain a labeling file, comprising:
labeling, according to the original visible light image, the first frame image and the last frame image after the vehicle enters the field of view in the processed original thermal infrared image with a bounding box, to obtain the labeling information of the first frame image and the labeling information of the last frame image; wherein the tail heat flow of the vehicle in the original thermal infrared image lies inside the box;
obtaining final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image and each frame of image in the processed original thermal infrared image so as to obtain a labeling file;
the labeling information includes: a target center coordinate, a target height, and a target width; the target center coordinate is the center coordinate of the bounding box, the target height is the height of the bounding box, and the target width is the width of the bounding box;
the obtaining of the final labeling information of each frame of image according to the labeling information of the first frame of image, the labeling information of the last frame of image, and each frame of image in the processed original thermal infrared image to obtain a labeling file includes:
starting from the first frame image, obtaining a confidence map of the next frame image for the vehicle by using a spatio-temporal context model, and taking the point of maximum confidence in the confidence map of the next frame image as the forward target center coordinate of that frame image, until the forward target center coordinate of every frame image in the processed original thermal infrared image is obtained;
starting from the last frame image, obtaining a confidence map of the previous frame image for the vehicle by using a spatio-temporal context model, and taking the point of maximum confidence in the confidence map of the previous frame image as the reverse target center coordinate of that frame image, until the reverse target center coordinate of every frame image in the processed original thermal infrared image is obtained;
for each frame image in the processed original thermal infrared image, obtaining the target height and the target width in the frame image according to the forward target center coordinate and the reverse target center coordinate of the frame image, the labeling information of the first frame image, and the labeling information of the last frame image, thereby obtaining the final labeling information of each frame image in the processed original thermal infrared image;
the step of obtaining the final labeling information of each frame image in the processed original thermal infrared image includes:
for each frame image in the processed original thermal infrared image, obtaining a forward scale Scale_forward(n) of the frame image according to the forward target center coordinate and the reverse target center coordinate of the frame image and the labeling information of the first frame image; wherein the forward scale comprises a forward height and a forward width of the target;
obtaining a reverse scale Scale_reverse(n) of the frame image according to the forward target center coordinate and the reverse target center coordinate of the frame image and the labeling information of the last frame image; wherein the reverse scale comprises a reverse height and a reverse width of the target;
obtaining a final target center coordinate Loc(n) of the frame image according to the forward target center coordinate and the reverse target center coordinate of the frame image;
obtaining a final scale Scale(n) of the frame image according to the labeling information of the first frame image, the labeling information of the last frame image, and the final target center coordinate of the frame image; wherein the final scale comprises the height of the target and the width of the target; these quantities are computed as follows:
Scale_forward(n) = Scale_forward(1) + (Scale_reverse(1) − Scale_forward(1)) · (y_forward(n) − y_forward(1)) / (y_reverse(1) − y_forward(1))

where Scale_forward(n) is the target scale of the nth frame in the forward tracking process, Scale_reverse(1) is the target scale of frame 1 in the reverse tracking process, y_forward(n) is the ordinate of the target center of the nth frame in the forward tracking process, and y_reverse(1) is the ordinate of the target center of frame 1 in the reverse tracking process;

Scale_reverse(n) = Scale_reverse(1) + (Scale_forward(1) − Scale_reverse(1)) · (y_reverse(n) − y_reverse(1)) / (y_forward(1) − y_reverse(1))

where Scale_reverse(n) is the target scale of the nth frame in the reverse tracking process, Scale_forward(1) is the target scale of frame 1 in the forward tracking process, Scale_reverse(1) is the target scale of frame 1 in the reverse tracking process, y_forward(1) is the ordinate of the target center of frame 1 in the forward tracking process, and y_reverse(n) is the ordinate of the target center of the nth frame in the reverse tracking process;

Loc(n) = ((k − n) / k) · Loc_forward(n) + (n / k) · Loc_reverse(k − n)

where Loc(n) is the target center coordinate of the nth frame in the final result, Loc_forward(n) is the target center coordinate of the nth frame in the forward tracking process, Loc_reverse(k − n) is the target center coordinate of the (k − n)th frame in the reverse tracking process, and k is the total number of frames in the tracking process;

Scale(n) = Scale_forward(1) + (Scale_reverse(1) − Scale_forward(1)) · (y(n) − y_forward(1)) / (y_reverse(1) − y_forward(1))

where Scale(n) is the target scale of the nth frame image in the final result, y(n) is the ordinate of the final target center coordinate Loc(n), Scale_forward(1) is the target scale of frame 1 in the forward tracking process, Scale_reverse(1) is the target scale of frame 1 in the reverse tracking process, y_forward(1) is the ordinate of the target center of frame 1 in the forward tracking process, and y_reverse(1) is the ordinate of the target center of frame 1 in the reverse tracking process.
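For implementers, the following is a minimal Python sketch of the bidirectional annotation propagation defined by the four formulas above: the forward and reverse tracks are fused by frame-index weighting, and box sizes are interpolated linearly in the image ordinate between the two manually labeled end frames. All names are illustrative, the index convention for the reverse track is an assumption, and the interpolation form follows the equations as reconstructed here rather than a verified reference implementation.

```python
import numpy as np

def fuse_bidirectional_tracks(loc_fwd, loc_rev, scale_first, scale_last):
    """Fuse forward and reverse tracking results into final labels.

    loc_fwd     : (k, 2) array of (x, y) centers from forward tracking;
                  row 0 is the manually labeled first frame.
    loc_rev     : (k, 2) array of (x, y) centers from reverse tracking;
                  row 0 is the manually labeled last frame.
    scale_first : (height, width) of the manual box on the first frame.
    scale_last  : (height, width) of the manual box on the last frame.
    Returns a list of (center, (height, width)) per video frame.
    """
    loc_fwd = np.asarray(loc_fwd, dtype=float)
    loc_rev = np.asarray(loc_rev, dtype=float)
    scale_first = np.asarray(scale_first, dtype=float)
    scale_last = np.asarray(scale_last, dtype=float)

    k = len(loc_fwd)
    y_first = loc_fwd[0, 1]   # ordinate of the first-frame center
    y_last = loc_rev[0, 1]    # ordinate of the last-frame center

    labels = []
    for n in range(k):
        # Trust the forward track near the start of the sequence and
        # the reverse track near the end (Loc(n) above). Video frame n
        # corresponds to reverse-track index k - 1 - n.
        center = ((k - n) * loc_fwd[n] + n * loc_rev[k - 1 - n]) / k

        # Interpolate the box size linearly in the ordinate: with the
        # oblique drone view, targets grow as they move down the image
        # (Scale(n) above).
        t = (center[1] - y_first) / (y_last - y_first)
        scale = scale_first + t * (scale_last - scale_first)
        labels.append((center, scale))
    return labels
```

Only the first and last frames of each vehicle pass require manual boxes; every intermediate label is generated automatically, which is what keeps the cost of building the detector's training set low.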
2. The method of claim 1, wherein the preprocessing of the thermal infrared image to obtain a processed thermal infrared image comprises:
performing image enhancement processing on the thermal infrared image by using a histogram equalization algorithm to obtain an image-enhanced thermal infrared image;
cropping the image-enhanced thermal infrared image according to the lane range to obtain the processed thermal infrared image.
3. The method of claim 2, wherein performing image enhancement processing on the thermal infrared image by using a histogram equalization algorithm to obtain an image-enhanced thermal infrared image comprises the following steps:
determining the number of pixels of each gray level in the thermal infrared image;
determining a cumulative distribution function of the thermal infrared image according to the number of pixels of each gray level in the thermal infrared image;
and obtaining the thermal infrared image after image enhancement according to the cumulative distribution function and the thermal infrared image.
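As a concrete illustration of claims 2 and 3, the sketch below performs histogram equalization exactly as the claim describes (per-level pixel counts, cumulative distribution function, gray-level remapping) and then crops to the lane range. It is a minimal NumPy version assuming 8-bit frames; the lane column bounds are hypothetical and scene-specific, and this is not the patented code.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization per claim 3: count pixels per gray level,
    build the cumulative distribution function (CDF), remap gray levels."""
    hist = np.bincount(img.ravel(), minlength=levels)  # pixels per gray level
    cdf = hist.cumsum()                                # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                          # first occupied level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return np.clip(lut, 0, levels - 1).astype(np.uint8)[img]

def preprocess(frame, lane_cols=(120, 520)):
    """Claim 2: enhance, then cut to the lane range.
    The column bounds are hypothetical placeholders."""
    return equalize_histogram(frame)[:, lane_cols[0]:lane_cols[1]]
```

For 16-bit thermal frames, raise `levels` accordingly or rescale the data to 8 bits before equalization.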
4. The method of claim 1, wherein the neural network model comprises: an SSD model and a YOLOv5 model.
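Claim 4 leaves the detector architecture open. As an illustration of the YOLOv5 route, the snippet below loads a custom-trained model through torch.hub, a documented entry point of the ultralytics/yolov5 repository, and runs it on one preprocessed frame; the weight file, image file, and class names are hypothetical.

```python
import torch

# Load a YOLOv5 detector fine-tuned on two classes such as
# {"electric", "fuel"}; "ev_vs_fuel.pt" is a hypothetical weight file.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='ev_vs_fuel.pt')

# Run detection on a preprocessed thermal frame (file name illustrative).
results = model('processed_thermal_frame.png')
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}  conf={conf:.2f}  box={box}")
```

An SSD detector trained on the same labeling files would slot into the same position in the pipeline.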
5. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, carries out the steps of the method of any one of claims 1 to 4.
6. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN202011630190.7A 2020-12-30 2020-12-30 Method for distinguishing electric automobile from fuel automobile Active CN113034378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011630190.7A CN113034378B (en) 2020-12-30 2020-12-30 Method for distinguishing electric automobile from fuel automobile

Publications (2)

Publication Number Publication Date
CN113034378A CN113034378A (en) 2021-06-25
CN113034378B 2022-12-27

Family

ID=76459121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011630190.7A Active CN113034378B (en) 2020-12-30 2020-12-30 Method for distinguishing electric automobile from fuel automobile

Country Status (1)

Country Link
CN (1) CN113034378B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2620362A (en) * 2022-06-15 2024-01-10 William Cowper Stephen Systems and methods for managing electrical and internal combustion vehicles
CN115287089B (en) * 2022-09-02 2023-08-25 香港理工大学 Method for preparing aromatic monomer from lignin
CN117373259B (en) * 2023-12-07 2024-03-01 四川北斗云联科技有限公司 Expressway vehicle fee evasion behavior identification method, device, equipment and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
EP0942395A2 (en) * 1998-03-13 1999-09-15 Siemens Corporate Research, Inc. Method for digital video processing
JP2013003901A (en) * 2011-06-17 2013-01-07 Sumitomo Electric Ind Ltd Electric vehicle identification apparatus and electric vehicle identification method
CN107564034A (en) * 2017-07-27 2018-01-09 华南理工大学 The pedestrian detection and tracking of multiple target in a kind of monitor video
CN109829449A (en) * 2019-03-08 2019-05-31 北京工业大学 A kind of RGB-D indoor scene mask method based on super-pixel space-time context
CN110570451A (en) * 2019-08-05 2019-12-13 武汉大学 multithreading visual target tracking method based on STC and block re-detection
CN211773082U (en) * 2019-12-04 2020-10-27 安徽育求消防科技有限公司 Device for distinguishing electric automobile from fuel automobile based on infrared characteristics
CN112070111A (en) * 2020-07-28 2020-12-11 浙江大学 Multi-target detection method and system adaptive to multiband images

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
DE602008005186D1 (en) * 2007-10-12 2011-04-07 Cambridge Entpr Ltd METHOD AND SYSTEMS FOR PROCESSING VIDEO DATA
CN110503661A (en) * 2018-05-16 2019-11-26 武汉智云星达信息技术有限公司 A kind of target image method for tracing based on deeply study and space-time context
CN111429725B (en) * 2020-02-17 2021-09-07 国网安徽电动汽车服务有限公司 Intelligent recognition charging method for electric automobile based on intelligent commercialization

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Qing Kang; Lightweight convolutional neural network for vehicle recognition in thermal infrared images; Infrared Physics and Technology; 2019-11-16; pp. 1-7 *
Amanda Berg, et al.; Semi-automatic Annotation of Objects in Visual-Thermal Video; 2019 IEEE/CVF International Conference on Computer Vision Workshop; 2020-03-05; Abstract, Sections 1-5 *

Similar Documents

Publication Publication Date Title
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN110969160B (en) License plate image correction and recognition method and system based on deep learning
CN108694386B (en) Lane line detection method based on parallel convolution neural network
CN108805016B (en) Head and shoulder area detection method and device
CN105404857A (en) Infrared-based night intelligent vehicle front pedestrian detection method
CN111832461B (en) Method for detecting wearing of non-motor vehicle riding personnel helmet based on video stream
CN111723854B (en) Expressway traffic jam detection method, equipment and readable storage medium
CN111553214B (en) Method and system for detecting smoking behavior of driver
Farag et al. Deep learning versus traditional methods for parking lots occupancy classification
Naufal et al. Preprocessed mask RCNN for parking space detection in smart parking systems
CN111461221A (en) Multi-source sensor fusion target detection method and system for automatic driving
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
CN111507196A (en) Vehicle type identification method based on machine vision and deep learning
CN110705553A (en) Scratch detection method suitable for vehicle distant view image
CN113033363A (en) Vehicle dense target detection method based on deep learning
Muril et al. A review on deep learning and nondeep learning approach for lane detection system
CN112052768A (en) Urban illegal parking detection method and device based on unmanned aerial vehicle and storage medium
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN111832463A (en) Deep learning-based traffic sign detection method
Burlacu et al. Stereo vision based environment analysis and perception for autonomous driving applications
Zhao et al. Research on vehicle detection and vehicle type recognition under cloud computer vision
CN115762178B (en) Intelligent electronic police violation detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant