CN116152744A - Dynamic detection method and device for electric vehicle, computer equipment and storage medium - Google Patents

Dynamic detection method and device for electric vehicle, computer equipment and storage medium Download PDF

Info

Publication number
CN116152744A
Authority
CN
China
Prior art keywords
electric vehicle
frame number
determined
detection
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310223479.4A
Other languages
Chinese (zh)
Inventor
汤红
李文强
余翔
陈兴委
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huafu Technology Co ltd
Original Assignee
Shenzhen Huafu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huafu Technology Co ltd filed Critical Shenzhen Huafu Technology Co ltd
Priority to CN202310223479.4A priority Critical patent/CN116152744A/en
Publication of CN116152744A publication Critical patent/CN116152744A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a dynamic detection method and device for an electric vehicle, computer equipment and a storage medium. The method comprises the following steps: acquiring scene images in a set area in real time; inputting the scene images into an electric vehicle detection model for electric vehicle detection to obtain a detection result; judging whether the detection result indicates that an electric vehicle is present; if so, acquiring the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined; judging whether the number of frames to be determined meets a set requirement; and if it does, generating early-warning information to give an early-warning prompt. By implementing the method provided by the embodiment of the invention, an electric vehicle in an elevator can be detected in real time and an early warning issued in real time, thereby solving the problems in the prior art.

Description

Dynamic detection method and device for electric vehicle, computer equipment and storage medium
Technical Field
The invention relates to deep learning, in particular to a dynamic detection method and device for an electric vehicle, computer equipment and a storage medium.
Background
With the popularization of electric vehicles, they have brought convenience to many households. At the same time, however, non-standard use of electric vehicles also creates considerable safety hazards in daily life. For example, many residents push their electric vehicles into an elevator and take it upstairs in order to charge them conveniently. Yet an electric vehicle entering and leaving an elevator affects the service life of the elevator and, in serious cases, can even cause the elevator to malfunction. Moreover, because an electric vehicle is bulky, it hinders passengers entering and leaving and greatly limits the passenger capacity of the elevator. Furthermore, charging an electric vehicle in a narrow corridor may cause a fire accident and hinder evacuation. Since manual supervision is difficult and inefficient, intelligent detection of electric vehicles in elevators and timely early warning are important tasks at the present stage.
Therefore, it is necessary to design a new method that achieves real-time detection and real-time early warning of electric vehicles in elevators, so as to solve the problems in the prior art.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a dynamic detection method, a dynamic detection device, computer equipment and a storage medium for an electric vehicle.
In order to achieve the above purpose, the present invention adopts the following technical scheme: the dynamic detection method of the electric vehicle comprises the following steps:
acquiring scene images in a set area in real time;
inputting the scene image into an electric vehicle detection model for electric vehicle detection to obtain a detection result;
judging whether the detection result indicates that an electric vehicle is present;
if the detection result indicates that an electric vehicle is present, acquiring the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined;
judging whether the number of frames to be determined meets a set requirement;
and if the number of frames to be determined meets the set requirement, generating early-warning information to give an early-warning prompt.
The further technical scheme is as follows: after the judging whether the frame number to be determined meets the set requirement, the method further comprises the following steps:
and if the frame number to be determined does not meet the requirement, executing the real-time acquisition of the scene image in the set area.
The further technical scheme is as follows: the electric vehicle detection model is obtained by training a deep learning network by taking a scene image with an electric vehicle type label as a sample set.
The further technical scheme is as follows: the deep learning network is a network formed by a last convolution normalization activation module of an ELAN in the Yolov7-tiny based on the Yolov7-tiny and introducing a cross-channel residual error grouping convolution module and a convolution block attention module.
The further technical scheme is as follows: the judging whether the frame number to be determined meets the set requirement comprises the following steps:
judging whether the frame number to be determined is larger than a set threshold value or not;
if the frame number to be determined is larger than a set threshold, determining that the frame number to be determined meets a set requirement;
and if the frame number to be determined is not greater than the set threshold, determining that the frame number to be determined does not meet the set requirement.
The invention also provides a dynamic detection device of the electric vehicle, which comprises:
the image acquisition unit is used for acquiring the scene image in the set area in real time;
the detection unit is used for inputting the scene image into an electric vehicle detection model to detect the electric vehicle so as to obtain a detection result;
a first judging unit for judging whether the detection result indicates that an electric vehicle is present;
a frame number determining unit for acquiring, if the detection result indicates that an electric vehicle is present, the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined;
a second judging unit for judging whether the number of frames to be determined meets a set requirement;
and an information generation unit for generating early-warning information to give an early-warning prompt if the number of frames to be determined meets the set requirement.
In a further technical scheme, the device further comprises:
a model generation unit for training the deep learning network with scene images labelled with the electric vehicle class as a sample set, so as to obtain the electric vehicle detection model.
In a further technical scheme, the second judging unit comprises:
a threshold judging subunit for judging whether the number of frames to be determined is greater than a set threshold;
a first determining subunit for determining that the number of frames to be determined meets the set requirement if the number of frames to be determined is greater than the set threshold;
and a second determining subunit for determining that the number of frames to be determined does not meet the set requirement if the number of frames to be determined is not greater than the set threshold.
The invention also provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the above method when executing the computer program.
The present invention also provides a storage medium storing a computer program which, when executed by a processor, performs the above-described method.
Compared with the prior art, the invention has the following beneficial effects: scene images in the set area are acquired, an electric vehicle detection model based on the improved YOLOv7-tiny model is used to detect electric vehicles in the scene images, the number of frames in which an electric vehicle is detected is counted once its presence is determined, and alarm information is generated when that number exceeds the threshold, so that an electric vehicle in the elevator can be detected and early-warned in real time, thereby solving the problems in the prior art.
The invention is further described below with reference to the drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of an electric vehicle dynamic detection method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a dynamic detection method for an electric vehicle according to an embodiment of the present invention;
fig. 3 is a schematic sub-flowchart of a dynamic detection method for an electric vehicle according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the ELAN module structure in YOLOv7-tiny according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the ELAN-RXC structure in YOLOv7-tiny according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the ResXCSP structure in YOLOv7-tiny according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the ELAN-CBAM structure in YOLOv7-tiny according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the CBAM structure in YOLOv7-tiny according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the CAM structure in YOLOv7-tiny according to an embodiment of the invention;
FIG. 10 is a schematic diagram of the SAM structure in YOLOv7-tiny according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of the improved YOLOv7-tiny provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of an SPPCSPC structure provided by an embodiment of the present invention;
FIG. 13 is a schematic block diagram of an electric vehicle dynamic detection device provided by an embodiment of the invention;
fig. 14 is a schematic block diagram of a second judging unit of the dynamic detection device for an electric vehicle according to the embodiment of the present invention;
fig. 15 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic diagram of an application scenario of the electric vehicle dynamic detection method according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of the electric vehicle dynamic detection method provided by an embodiment of the invention. The electric vehicle dynamic detection method is applied to a server. The server exchanges data with a terminal and a camera, acquires scene images inside the elevator in real time through the camera installed in the elevator, and, for the acquired images, detects the electric vehicle using an improved single-stage YOLOv7 deep learning model as the target detection model. Because the images actually collected may suffer from low resolution, insufficient illumination and occlusion of the electric vehicle, the detection process detects not only the whole electric vehicle but also the head of the electric vehicle; thus, whenever an electric vehicle or an electric vehicle head is detected, that is, an electric vehicle has abnormally entered the elevator, subsequent operations such as early warning can be carried out.
Fig. 2 is a schematic flow chart of a dynamic detection method for an electric vehicle according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S160.
S110, acquiring scene images in a set area in real time.
In this embodiment, the scene image refers to a real-time image of a specified area or within an elevator.
S120, inputting the scene image into an electric vehicle detection model to detect the electric vehicle so as to obtain a detection result.
In this embodiment, the detection result refers to a detection result of whether or not the electric vehicle is present.
Specifically, the electric vehicle detection model is obtained by training a deep learning network with scene images labelled with the electric vehicle class as a sample set.
In this embodiment, the deep learning network is based on YOLOv7-tiny and is formed by introducing a cross-channel residual grouped-convolution module and a convolutional block attention module to replace the last convolution-normalization-activation CBL (Convolution + Batch Normalization + Leaky ReLU) module of the ELAN in YOLOv7-tiny. In YOLOv7-tiny, the ELAN module is composed of several CBL modules; the features extracted by each CBL module are concatenated along the channel dimension, and the concatenated features are fed into the last CBL module of the block for feature fusion and optimization. However, the features extracted by the single convolution in that last CBL have only limited expressiveness. The replacement therefore introduces cross-channel residual grouped convolution and an attention module to increase the model's feature-optimization capability, thereby enhancing the expressiveness of the features extracted by the model.
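For orientation only, the following is a minimal PyTorch sketch of a CBL block and of a tiny-style ELAN module in which the branch features are concatenated along the channel dimension and fused by a final CBL; the branch layout and channel counts are illustrative assumptions, not the exact YOLOv7-tiny configuration.

```python
import torch
import torch.nn as nn

class CBL(nn.Module):
    """CBL = Convolution + Batch Normalization + Leaky ReLU."""
    def __init__(self, in_ch, out_ch, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ELANTiny(nn.Module):
    """Tiny-style ELAN sketch: CBL branches, channel concatenation, then a
    final CBL that fuses the concatenated features. It is this final fusing
    CBL that the ELAN-RXC / ELAN-CBAM variants described below replace."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.b1 = CBL(in_ch, mid_ch, k=1)   # shortcut branch
        self.b2 = CBL(in_ch, mid_ch, k=1)   # main branch entry
        self.b3 = CBL(mid_ch, mid_ch, k=3)  # stacked 3x3 stage
        self.b4 = CBL(mid_ch, mid_ch, k=3)  # stacked 3x3 stage
        self.fuse = CBL(4 * mid_ch, out_ch, k=1)  # the "last CBL"

    def forward(self, x):
        y1, y2 = self.b1(x), self.b2(x)
        y3 = self.b3(y2)
        y4 = self.b4(y3)
        return self.fuse(torch.cat([y1, y2, y3, y4], dim=1))
```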
Considering the accuracy requirement of the detection task on the premise of ensuring real-time performance, the electric vehicle detection model of this embodiment is based on the YOLOv7-tiny model, for two reasons: first, compared with existing single-stage detection models, the YOLOv7-tiny model has higher accuracy; second, compared with existing two-stage detection models, YOLOv7-tiny can achieve comparable accuracy with a smaller model and shorter inference time, so that it can meet real-time requirements in practical applications. When training a deep learning model, it is necessary to ensure that both the shortest gradient path of each layer and the longest gradient path of the whole network can be trained effectively, rather than considering only the lengths of the shortest and longest gradient paths. Therefore, the ELAN (efficient layer aggregation network) module was introduced in YOLOv7-tiny to control the shortest and longest gradient paths and ensure faster convergence; its structure is shown in fig. 4. The arrows in this figure (and in the figures that follow) represent the data flow. In addition, an FPN (Feature Pyramid Network) and a PAN (Path Aggregation Network) are employed in the YOLOv7-tiny network structure. The FPN propagates strong semantic information from high-level features and enhances the features throughout the feature pyramid; however, positioning information is not transferred in this process, and the introduction of the PAN solves precisely this problem of positioning-information transfer. Finally, YOLOv7-tiny uses multiple detection heads to improve detection of targets at different scales. These structures give YOLOv7-tiny a considerable advantage in real-time detection tasks.
However, for the problem of dynamic detection of electric vehicles in elevators, occlusion of the electric vehicle, insufficient illumination and the small proportion of the image occupied by the target easily cause false detections and missed detections with the original YOLOv7-tiny; therefore, although YOLOv7-tiny guarantees real-time performance, its detection performance still needs further improvement.
Based on the original YOLOv7-tiny, this embodiment proposes to improve the overall performance of YOLOv7-tiny by improving the ELAN module structure, without changing the overall structure of the network or adding extra parameters. The main improvement is to introduce a cross-channel residual grouped-convolution module (ResXCSP) and a convolutional block attention module (CBAM) to replace the last convolution-normalization-activation module of the ELAN in the original YOLOv7-tiny, forming the ELAN-RXC and ELAN-CBAM modules respectively. Through grouped convolution, the ResXCSP module reduces the module's parameters while reducing the load on hardware devices and increasing parallel computing efficiency; compared with a single convolution-normalization-activation module with the same number of parameters, its residual and cross-stage connections can effectively enhance the module's feature-extraction capability. Considering that replacing all the CBL modules in an ELAN would significantly increase model complexity, this embodiment replaces only the last CBL convolution block in the shallow-network ELAN module with a ResXCSP module to form the ELAN-RXC module. The advantage is that neither the network complexity is increased excessively nor are the shortest gradient path in the network and the longest gradient path of the whole network changed excessively. The improved ELAN-RXC structure is shown in fig. 5, and the ResXCSP module structure is shown in fig. 6.
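One possible reading of the ResXCSP block described above is sketched below in PyTorch: a cross-stage split, a grouped-convolution residual body on the main path, and a fusing convolution. The group count, channel split and depth are assumptions for illustration (the text does not fix them), and in ELAN-RXC this block would take the place of the final fusing CBL of the ELAN sketch above.

```python
import torch
import torch.nn as nn

def cbl(in_ch, out_ch, k=1, groups=1):
    """Convolution + BatchNorm + LeakyReLU helper (assumed building block)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, groups=groups, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class ResXCSP(nn.Module):
    """Sketch of a cross-channel residual grouped-convolution (CSP-style) block."""
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        half = out_ch // 2
        self.shortcut = cbl(in_ch, half, k=1)              # cross-stage (partial) path
        self.reduce = cbl(in_ch, half, k=1)                # main path entry
        self.gconv = cbl(half, half, k=3, groups=groups)   # grouped conv, fewer parameters
        self.fuse = cbl(2 * half, out_ch, k=1)             # merge the two stages

    def forward(self, x):
        s = self.shortcut(x)
        m = self.reduce(x)
        m = m + self.gconv(m)                              # residual connection
        return self.fuse(torch.cat([s, m], dim=1))         # cross-stage fusion
```

As a rough usage example under the same assumptions, `ResXCSP(4 * mid_ch, out_ch)` would replace the `fuse` layer of the ELAN sketch above to form an ELAN-RXC-style module.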
The CBAM module can enhance a deep learning detection model's ability to learn features of small target objects. Therefore, adding a convolutional block attention module in the deep layers of the YOLOv7-tiny network enables the network to pay more attention to small target objects in the extracted features. Similar to ELAN-RXC, this embodiment uses a CBAM module to replace the last CBL module in the original ELAN to form ELAN-CBAM; the improved ELAN-CBAM module is shown in fig. 7. The CBAM module is composed of a channel attention module (CAM) and a spatial attention module (SAM), where channel attention focuses on features along the channel dimension and the spatial attention module focuses on spatial features. The structures of CBAM, CAM and SAM are shown in figs. 8, 9 and 10. The original YOLOv7-tiny model contains 8 ELAN modules in total, five of which are in the backbone of the network and the other three in the output-head branches. Considering that the shallow layers of the network emphasize the extraction of effective features, this embodiment uses ELAN-RXC to replace the ELAN modules in the backbone; considering that the deep layers of the network emphasize the utilization of effective features, this embodiment uses the ELAN-CBAM module to replace the ELAN modules in the output-head branches. The improved YOLOv7-tiny network structure is shown in fig. 11, and the SPPCSPC structure is shown in fig. 12. In addition, for the problem of small-target detection, setting a larger image input size can significantly improve a deep learning model's detection performance on small targets; therefore, the improved YOLOv7-tiny model proposed in this embodiment uses an image input size of 640×640 pixels.
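For reference, a compact PyTorch sketch of a CBAM block in its commonly published form (channel attention followed by spatial attention) is given below; the reduction ratio and 7×7 kernel are common defaults rather than values stated here, and whether CBAM fully replaces or simply follows the last CBL inside ELAN-CBAM is left open in this sketch.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CAM: squeeze spatial dims with average and max pooling, share an MLP,
    and rescale the channels."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """SAM: pool over the channel dim, then a 7x7 conv produces a spatial mask."""
    def __init__(self, k=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention (CAM) followed by spatial attention (SAM)."""
    def __init__(self, ch, reduction=16, k=7):
        super().__init__()
        self.cam = ChannelAttention(ch, reduction)
        self.sam = SpatialAttention(k)

    def forward(self, x):
        return self.sam(self.cam(x))
```

In an ELAN-CBAM variant along the lines described above, `CBAM(out_ch)` would be applied to the feature map produced at the end of the ELAN block.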
S130, judging whether the detection result indicates that an electric vehicle is present.
S140, if the detection result indicates that an electric vehicle is present, acquiring the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined.
In this embodiment, the number of frames to be determined refers to the number of scene image frames in which an electric vehicle is present.
S150, judging whether the number of frames to be determined meets the set requirement.
In one embodiment, referring to fig. 3, the step S150 may include steps S151 to S153.
S151, judging whether the number of frames to be determined is greater than a set threshold.
S152, if the number of frames to be determined is greater than the set threshold, determining that the number of frames to be determined meets the set requirement.
S153, if the number of frames to be determined is not greater than the set threshold, determining that the number of frames to be determined does not meet the set requirement.
When multiple consecutive frames all indicate that an electric vehicle is present, an electric vehicle is indeed present in the elevator or the designated area at that moment.
S160, if the number of frames to be determined meets the set requirement, generating early-warning information so as to give an early-warning prompt.
If the number of frames to be determined does not meet the set requirement, the process returns to step S110.
If the detection result indicates that no electric vehicle is present, the process likewise returns to step S110.
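As an informal illustration of the flow of steps S110 to S160, a Python sketch follows; the capture source, the detect() interface, the consecutive-frame counter and the threshold of 5 frames are illustrative assumptions rather than details fixed by this embodiment.

```python
import cv2  # assumed capture library for this illustrative sketch

FRAME_THRESHOLD = 5  # illustrative value for the "set threshold"

def run_dynamic_detection(camera_source, model, notify):
    """Sketch of S110-S160: grab frames in real time, run electric vehicle
    detection, count consecutive positive frames, and raise an early warning
    once the count exceeds the set threshold. `model.detect` and `notify`
    are assumed interfaces."""
    cap = cv2.VideoCapture(camera_source)          # S110: acquire scene images in real time
    frames_to_determine = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model.detect(frame)               # S120: electric vehicle detection
        if result.has_electric_vehicle:            # S130: is an electric vehicle present?
            frames_to_determine += 1               # S140: count frames with a detection
        else:
            frames_to_determine = 0                # no detection: keep acquiring (back to S110)
        if frames_to_determine > FRAME_THRESHOLD:  # S150: does the count exceed the threshold?
            notify("electric vehicle detected in the monitored area")  # S160: early warning
            frames_to_determine = 0
    cap.release()
```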
This embodiment uses the camera in the elevator together with the deep learning algorithm to acquire real-time images in the elevator fully automatically and detect whether an electric vehicle is present in the images, which is both fast and accurate, so that real-time detection and real-time early warning of electric vehicles in the elevator can be achieved.
According to the above dynamic detection method for an electric vehicle, scene images in the set area are acquired, an electric vehicle detection model based on the improved YOLOv7-tiny model is used to detect electric vehicles, the number of frames in which an electric vehicle is detected is counted once its presence is determined, and alarm information is generated when that number exceeds the threshold, so that an electric vehicle in the elevator can be detected and early-warned in real time, thereby solving the problems in the prior art.
Fig. 13 is a schematic block diagram of an electric vehicle dynamic detection device 300 according to an embodiment of the present invention. As shown in fig. 13, the present invention further provides an electric vehicle dynamic detection device 300 corresponding to the above electric vehicle dynamic detection method. The electric vehicle dynamic detection apparatus 300 includes a unit for performing the electric vehicle dynamic detection method described above, and may be configured in a server. Specifically, referring to fig. 13, the electric vehicle dynamic detection apparatus 300 includes an image acquisition unit 301, a detection unit 302, a first determination unit 303, a frame number determination unit 304, a second determination unit 305, and an information generation unit 306.
An image acquisition unit 301, configured to acquire scene images in a set area in real time; a detection unit 302, configured to input the scene images into an electric vehicle detection model for electric vehicle detection, so as to obtain a detection result; a first judging unit 303, configured to judge whether the detection result indicates that an electric vehicle is present; a frame number determining unit 304, configured to, if the detection result indicates that an electric vehicle is present, acquire the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined; a second judging unit 305, configured to judge whether the number of frames to be determined meets a set requirement; and an information generating unit 306, configured to generate early-warning information to give an early-warning prompt if the number of frames to be determined meets the set requirement.
In an embodiment, the electric vehicle dynamic detection device 300 further includes:
the model generation unit is used for training the deep learning network by taking the scene image with the electric vehicle type label as a sample set so as to obtain an electric vehicle detection model.
In an embodiment, as shown in fig. 14, the second determining unit 305 includes a threshold determining subunit 3051, a first determining subunit 3052, and a second determining subunit 3053.
A threshold judging subunit 3051, configured to judge whether the number of frames to be determined is greater than a set threshold; a first determining subunit 3052, configured to determine that the number of frames to be determined meets the set requirement if it is greater than the set threshold; and a second determining subunit 3053, configured to determine that the number of frames to be determined does not meet the set requirement if it is not greater than the set threshold.
It should be noted that, as a person skilled in the art can clearly understand, the specific implementation process of the electric vehicle dynamic detection device 300 and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the description is omitted here.
The electric vehicle dynamic detection apparatus 300 described above may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 15.
Referring to fig. 15, fig. 15 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, where the server may be a stand-alone server or may be a server cluster formed by a plurality of servers.
With reference to FIG. 15, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform a method of dynamic detection of an electric vehicle.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a method for dynamic detection of an electric vehicle.
The network interface 505 is used for network communication with other devices. It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of a portion of the structure associated with the present application and does not constitute a limitation of the computer device 500 to which the present application is applied, and that a particular computer device 500 may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to implement the steps of:
acquiring scene images in a set area in real time; inputting the scene images into an electric vehicle detection model for electric vehicle detection to obtain a detection result; judging whether the detection result indicates that an electric vehicle is present; if the detection result indicates that an electric vehicle is present, acquiring the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined; judging whether the number of frames to be determined meets a set requirement; and if the number of frames to be determined meets the set requirement, generating early-warning information to give an early-warning prompt.
The electric vehicle detection model is obtained by training a deep learning network with scene images labelled with the electric vehicle class as a sample set.
The deep learning network is based on YOLOv7-tiny and is formed by introducing a cross-channel residual grouped-convolution module and a convolutional block attention module to replace the last convolution-normalization-activation module of the ELAN in YOLOv7-tiny.
In one embodiment, after implementing the step of judging whether the number of frames to be determined meets the set requirement, the processor 502 further implements the following step:
if the number of frames to be determined does not meet the set requirement, executing the step of acquiring scene images in the set area in real time again.
In one embodiment, when implementing the step of judging whether the number of frames to be determined meets the set requirement, the processor 502 specifically implements the following steps:
judging whether the number of frames to be determined is greater than a set threshold; if the number of frames to be determined is greater than the set threshold, determining that the number of frames to be determined meets the set requirement; and if the number of frames to be determined is not greater than the set threshold, determining that the number of frames to be determined does not meet the set requirement.
It should be appreciated that in embodiments of the present application, the processor 502 may be a central processing unit (Central Processing Unit, CPU); the processor 502 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program comprises program instructions, and the computer program can be stored in a storage medium, which is a computer readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring scene images in a set area in real time; inputting the scene images into an electric vehicle detection model for electric vehicle detection to obtain a detection result; judging whether the detection result indicates that an electric vehicle is present; if the detection result indicates that an electric vehicle is present, acquiring the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined; judging whether the number of frames to be determined meets a set requirement; and if the number of frames to be determined meets the set requirement, generating early-warning information to give an early-warning prompt.
The electric vehicle detection model is obtained by training a deep learning network with scene images labelled with the electric vehicle class as a sample set.
The deep learning network is based on YOLOv7-tiny and is formed by introducing a cross-channel residual grouped-convolution module and a convolutional block attention module to replace the last convolution-normalization-activation module of the ELAN in YOLOv7-tiny.
In one embodiment, after executing the computer program to implement the step of judging whether the number of frames to be determined meets the set requirement, the processor further implements the following step:
if the number of frames to be determined does not meet the set requirement, executing the step of acquiring scene images in the set area in real time again.
In one embodiment, when executing the computer program to implement the step of judging whether the number of frames to be determined meets the set requirement, the processor specifically implements the following steps:
judging whether the number of frames to be determined is greater than a set threshold; if the number of frames to be determined is greater than the set threshold, determining that the number of frames to be determined meets the set requirement; and if the number of frames to be determined is not greater than the set threshold, determining that the number of frames to be determined does not meet the set requirement.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium capable of storing program code.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be combined, divided and deleted according to actual needs. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. A dynamic detection method for an electric vehicle, characterized by comprising the following steps:
acquiring scene images in a set area in real time;
inputting the scene images into an electric vehicle detection model for electric vehicle detection to obtain a detection result;
judging whether the detection result indicates that an electric vehicle is present;
if the detection result indicates that an electric vehicle is present, acquiring the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined;
judging whether the number of frames to be determined meets a set requirement;
and if the number of frames to be determined meets the set requirement, generating early-warning information to give an early-warning prompt.
2. The dynamic detection method for an electric vehicle according to claim 1, characterized in that after the judging whether the number of frames to be determined meets the set requirement, the method further comprises:
if the number of frames to be determined does not meet the set requirement, returning to the step of acquiring scene images in the set area in real time.
3. The dynamic detection method for an electric vehicle according to claim 1, characterized in that the electric vehicle detection model is obtained by training a deep learning network with scene images labelled with the electric vehicle class as a sample set.
4. The dynamic detection method for an electric vehicle according to claim 3, characterized in that the deep learning network is based on YOLOv7-tiny and is formed by introducing a cross-channel residual grouped-convolution module and a convolutional block attention module to replace the last convolution-normalization-activation module of the ELAN in YOLOv7-tiny.
5. The dynamic detection method for an electric vehicle according to claim 1, characterized in that the judging whether the number of frames to be determined meets the set requirement comprises:
judging whether the number of frames to be determined is greater than a set threshold;
if the number of frames to be determined is greater than the set threshold, determining that the number of frames to be determined meets the set requirement;
and if the number of frames to be determined is not greater than the set threshold, determining that the number of frames to be determined does not meet the set requirement.
6. A dynamic detection device for an electric vehicle, characterized by comprising:
an image acquisition unit for acquiring scene images in a set area in real time;
a detection unit for inputting the scene images into an electric vehicle detection model for electric vehicle detection, so as to obtain a detection result;
a first judging unit for judging whether the detection result indicates that an electric vehicle is present;
a frame number determining unit for acquiring, if the detection result indicates that an electric vehicle is present, the number of scene image frames whose detection result indicates an electric vehicle, so as to obtain the number of frames to be determined;
a second judging unit for judging whether the number of frames to be determined meets a set requirement;
and an information generation unit for generating early-warning information to give an early-warning prompt if the number of frames to be determined meets the set requirement.
7. The dynamic detection device for an electric vehicle according to claim 6, characterized by further comprising:
a model generation unit for training the deep learning network with scene images labelled with the electric vehicle class as a sample set, so as to obtain the electric vehicle detection model.
8. The dynamic detection device for an electric vehicle according to claim 6, characterized in that the second judging unit comprises:
a threshold judging subunit for judging whether the number of frames to be determined is greater than a set threshold;
a first determining subunit for determining that the number of frames to be determined meets the set requirement if the number of frames to be determined is greater than the set threshold;
and a second determining subunit for determining that the number of frames to be determined does not meet the set requirement if the number of frames to be determined is not greater than the set threshold.
9. A computer device, characterized in that it comprises a memory on which a computer program is stored and a processor which, when executing the computer program, implements the method according to any of claims 1-5.
10. A storage medium storing a computer program which, when executed by a processor, performs the method of any one of claims 1 to 5.
CN202310223479.4A 2023-03-09 2023-03-09 Dynamic detection method and device for electric vehicle, computer equipment and storage medium Pending CN116152744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310223479.4A CN116152744A (en) 2023-03-09 2023-03-09 Dynamic detection method and device for electric vehicle, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310223479.4A CN116152744A (en) 2023-03-09 2023-03-09 Dynamic detection method and device for electric vehicle, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116152744A 2023-05-23

Family

ID=86339041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310223479.4A Pending CN116152744A (en) 2023-03-09 2023-03-09 Dynamic detection method and device for electric vehicle, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116152744A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958086A (en) * 2023-07-21 2023-10-27 盐城工学院 Metal surface defect detection method and system with enhanced feature fusion capability
CN116958086B (en) * 2023-07-21 2024-04-19 盐城工学院 Metal surface defect detection method and system with enhanced feature fusion capability

Similar Documents

Publication Publication Date Title
Dong et al. A lightweight vehicles detection network model based on YOLOv5
Rijal et al. Ensemble of deep neural networks for estimating particulate matter from images
US20200349875A1 (en) Display screen quality detection method, apparatus, electronic device and storage medium
CN116152744A (en) Dynamic detection method and device for electric vehicle, computer equipment and storage medium
CN113052006B (en) Image target detection method, system and readable storage medium based on convolutional neural network
CN112949578B (en) Vehicle lamp state identification method, device, equipment and storage medium
CN112801027A (en) Vehicle target detection method based on event camera
CN203849870U (en) Wireless network-based traffic flow monitoring and signal lamp intelligent control system
CN111008608B (en) Night vehicle detection method based on deep learning
CN112132216B (en) Vehicle type recognition method and device, electronic equipment and storage medium
CN110277833A (en) The method and terminal device of substation's pressing plate state recognition
WO2023040146A1 (en) Behavior recognition method and apparatus based on image fusion, and electronic device and medium
CN112818871A (en) Target detection method of full-fusion neural network based on half-packet convolution
CN106657936A (en) Voice warning method and system based on video monitoring of dangerous area
CN114360064B (en) Office place personnel behavior lightweight target detection method based on deep learning
CN110956611A (en) Smoke detection method integrated with convolutional neural network
US11954955B2 (en) Method and system for collecting and monitoring vehicle status information
CN116363542A (en) Off-duty event detection method, apparatus, device and computer readable storage medium
JP2024516642A (en) Behavior detection method, electronic device and computer-readable storage medium
CN115240163A (en) Traffic sign detection method and system based on one-stage detection network
Zhang et al. Real-time wildfire detection and alerting with a novel machine learning approach
CN116311181B (en) Method and system for rapidly detecting abnormal driving
CN117036363B (en) Shielding insulator detection method based on multi-feature fusion
CN114598830B (en) Defective pixel correction method
CN117710755B (en) Vehicle attribute identification system and method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination