CN115861907A - Helmet detection method and system - Google Patents


Publication number
CN115861907A
Authority
CN
China
Prior art keywords: electric vehicle, helmet, image, detection, loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310186682.9A
Other languages
Chinese (zh)
Inventor
王飞
魏洪利
梅荣德
刘双
孙振
何建华
田丙富
王丽辰
朱义民
马加强
卢晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Hua Xia High Tech Information Inc
Original Assignee
Shandong Hua Xia High Tech Information Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shandong Hua Xia High Tech Information Inc filed Critical Shandong Hua Xia High Tech Information Inc
Priority to CN202310186682.9A priority Critical patent/CN115861907A/en
Publication of CN115861907A publication Critical patent/CN115861907A/en


Abstract

The embodiments of the application relate to the technical field of intelligent traffic, and in particular disclose a helmet detection method and system. The method comprises the following steps: extracting frames from a road video to obtain a plurality of road images; detecting electric vehicles in the road images, and cropping each road image in which an electric vehicle is detected to a set size to obtain an electric vehicle image; performing helmet detection on the electric vehicle image to obtain a helmet area position; and performing multi-target tracking based on the helmet area positions and the corresponding electric vehicle images to obtain identification results for a plurality of electric vehicles and helmets. By combining intelligent image and video analysis techniques, helmet detection accuracy and application efficiency are improved.

Description

Helmet detection method and system
Technical Field
The embodiment of the application relates to the technical field of intelligent traffic, in particular to a helmet detection method and system.
Background
In recent years, the electric vehicle has gradually become a common means of short-distance transport. Unlike car occupants, who are protected by seat belts and airbags in an accident, electric vehicle riders are directly exposed, so their bodies are far more easily injured. Electric vehicle accident cases published by traffic management departments show that a large proportion of serious injuries involve riders who were not wearing safety helmets. Wearing a safety helmet therefore greatly reduces the risk of death in electric bicycle accidents and plays a very important role in protecting people's lives.
Traditional helmet detection only detects and recognizes captured still images rather than captured video, and its detection accuracy leaves room for improvement.
Disclosure of Invention
Therefore, the embodiments of the application provide a helmet detection method and system. Starting from the practical requirements of urban road helmet detection, they combine intelligent image and video analysis techniques to improve helmet detection accuracy and application efficiency, and provide technical support for consolidating the effect of the "one helmet, one belt" safety campaign.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
according to a first aspect of embodiments of the present application, there is provided a helmet detection method, the method comprising:
performing frame extraction processing on the road video to obtain a plurality of road images;
detecting electric vehicles based on the road images, and cropping each road image in which an electric vehicle is detected to a set size to obtain an electric vehicle image;
performing helmet detection on the electric vehicle image to obtain a helmet area position;
and performing multi-target tracking based on the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicles and helmet identification results.
Optionally, the performing multi-target tracking based on the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicles and helmet identification results includes:
acquiring images of the electric vehicle at two adjacent moments and corresponding helmet area positions;
tracking according to helmet area positions corresponding to the electric vehicle images at two adjacent moments to obtain a coordinate point Euclidean distance of the helmet area positions as area loss;
extracting the directional gradient histogram features of the helmet area according to the electric vehicle images at two adjacent moments, and calculating the cosine distance of the directional gradient histogram features as feature loss;
searching a minimum loss matching relation according to the area loss and the characteristic loss to obtain a target loss;
if the target loss is smaller than a set threshold value, judging that the electric vehicles in the electric vehicle images at two adjacent moments are the same;
and determining the electric vehicle and the helmet identification result according to the electric vehicle image and the corresponding helmet area position of the same electric vehicle.
Optionally, the electric vehicle detection based on the road image includes:
inputting the road image into an electric vehicle convolutional neural network for electric vehicle detection, and outputting the road image of the detected electric vehicle by the electric vehicle convolutional neural network; the electric vehicle convolutional neural network comprises an electric vehicle feature extraction network, an electric vehicle feature network and a regression network.
Optionally, before the inputting the road image into a convolutional neural network for electric vehicle detection, the method further comprises:
performing RGB format conversion on the road image;
carrying out feature enhancement processing on the road image after format conversion;
cutting a characteristic area of the road image subjected to the characteristic enhancement processing;
and performing equal-scale amplification on the image with the cut characteristic region based on the size of the road image to obtain a preprocessed road image.
Optionally, the performing helmet detection on the image of the electric vehicle to obtain a helmet area position includes:
inputting the electric vehicle image into a helmet convolutional neural network for helmet detection, and acquiring the coordinate area position of a helmet in the electric vehicle image; the helmet convolutional neural network comprises a helmet feature extraction network, a helmet feature network and a regression network.
Optionally, the method further comprises:
based on the images of the electric vehicles and the recognition results of the electric vehicles and the helmet, collecting data corresponding to the functional parameters according to the functional parameters configured by the operator;
and outputting the collected data, the plurality of electric vehicle images and the plurality of electric vehicle and helmet identification results to the front end for displaying.
According to a second aspect of embodiments of the present application, there is provided a helmet detection system, the system comprising:
the image module is used for performing frame extraction processing on the road video to obtain a plurality of road images;
the electric vehicle detection module is used for detecting an electric vehicle based on the road image, and cutting the road image of the detected electric vehicle into a preset size to obtain an electric vehicle image;
the helmet detection module is used for performing helmet detection on the electric vehicle image to obtain a helmet area position;
and the tracking module is used for carrying out multi-target tracking on the basis of the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicles and helmet identification results.
Optionally, the tracking module is specifically configured to:
acquiring images of the electric vehicle at two adjacent moments and corresponding helmet area positions;
tracking according to helmet area positions corresponding to the electric vehicle images at two adjacent moments to obtain a coordinate point Euclidean distance of the helmet area positions as area loss;
extracting the directional gradient histogram characteristics of the helmet area according to the electric vehicle images at two adjacent moments, and calculating the cosine distance of the directional gradient histogram characteristics as characteristic loss;
searching a minimum loss matching relation according to the regional loss and the characteristic loss to obtain a target loss;
if the target loss is smaller than a set threshold value, judging that the electric vehicles in the electric vehicle images at two adjacent moments are the same;
and determining the electric vehicle and the helmet identification result according to the electric vehicle image and the corresponding helmet area position of the same electric vehicle.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of the first aspect.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer readable instructions executable by a processor to implement the method of the first aspect described above.
In summary, the embodiments of the present application provide a helmet detection method and system: frames are extracted from a road video to obtain a plurality of road images; electric vehicles are detected in the road images, and each road image in which an electric vehicle is detected is cropped to a set size to obtain an electric vehicle image; helmet detection is performed on the electric vehicle image to obtain a helmet area position; and multi-target tracking is performed based on the helmet area positions and the corresponding electric vehicle images to obtain identification results for a plurality of electric vehicles and helmets. By combining intelligent image and video analysis techniques, helmet detection accuracy and application efficiency are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary and that other implementation drawings may be derived from the provided drawings by those of ordinary skill in the art without inventive effort.
The structures, ratios, and sizes shown in this specification are provided only to match the disclosed content so that those skilled in the art can understand and read the invention; they do not limit the conditions under which the invention can be implemented. Any structural modification, change of ratio, or adjustment of size that does not affect the efficacy or achievable purpose of the invention still falls within the scope covered by the disclosed technical content.
Fig. 1 is a flowchart of a helmet detection method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a feature extraction network structure provided in an embodiment of the present application;
FIG. 3 is a schematic view of a multi-target tracking process according to an embodiment of the present application;
fig. 4 is a block diagram of a helmet detection system according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 6 shows a schematic diagram of a computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The present invention is described below in terms of particular embodiments, and other advantages and features of the invention will become apparent to those skilled in the art from this disclosure. The described embodiments are merely examples of the invention and are not intended to limit it to the particular embodiments disclosed. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
Fig. 1 illustrates a helmet detection method provided in an embodiment of the present application, where the method includes:
step 101: performing frame extraction processing on the road video to obtain a plurality of road images;
step 102: detecting electric vehicles based on the road images, and cropping each road image in which an electric vehicle is detected to a set size to obtain an electric vehicle image;
step 103: performing helmet detection on the electric vehicle image to obtain a helmet area position;
step 104: and carrying out multi-target tracking on the basis of the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicles and helmet identification results.
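As a rough illustration, steps 101 to 104 can be sketched as a simple pipeline. The detector and tracker callables below are hypothetical placeholders, not the convolutional networks described in the embodiments; the frame stride is likewise an assumed parameter.

```python
# Illustrative sketch of steps 101-104. detect/track callables are
# hypothetical stand-ins for the networks described in the embodiments.

def extract_frames(video, stride=5):
    """Step 101: keep every `stride`-th frame of the road video."""
    return [frame for i, frame in enumerate(video) if i % stride == 0]

def detect_and_crop_vehicles(frame, detector):
    """Step 102: detect electric vehicles and return cropped images."""
    return [crop for crop in detector(frame)]

def run_pipeline(video, vehicle_detector, helmet_detector, tracker):
    results = []
    for frame in extract_frames(video):
        for vehicle_img in detect_and_crop_vehicles(frame, vehicle_detector):
            helmet_box = vehicle_img, helmet_detector(vehicle_img)  # step 103
            results.append(tracker(*helmet_box))                    # step 104
    return results
```

A trivial run with stub detectors shows the data flow: each kept frame yields vehicle crops, each crop yields a helmet box, and each box feeds the tracker.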
In a possible implementation, in step 104, the multi-target tracking based on the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicles and helmet identification results includes:
acquiring images of the electric vehicle at two adjacent moments and corresponding helmet area positions; tracking according to helmet area positions corresponding to the electric vehicle images at two adjacent moments to obtain a coordinate point Euclidean distance of the helmet area positions as area loss; extracting the directional gradient histogram features of the helmet area according to the electric vehicle images at two adjacent moments, and calculating the cosine distance of the directional gradient histogram features as feature loss; searching a minimum loss matching relation according to the regional loss and the characteristic loss to obtain a target loss; if the target loss is smaller than a set threshold value, judging that the electric vehicles in the electric vehicle images at two adjacent moments are the same; and determining the electric vehicle and the helmet identification result according to the electric vehicle image and the corresponding helmet area position of the same electric vehicle.
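The two loss terms combined in this tracking step can be sketched as follows. The equal weighting and the threshold value are illustrative assumptions: the embodiment only states that the area loss (Euclidean distance between helmet-box coordinates) and the feature loss (cosine distance between HOG feature vectors) are combined and compared with a set threshold.

```python
import math

def area_loss(box_a, box_b):
    """Euclidean distance between helmet-box coordinate points (area loss)."""
    (ax, ay), (bx, by) = box_a, box_b
    return math.hypot(ax - bx, ay - by)

def feature_loss(hog_a, hog_b):
    """Cosine distance between HOG feature vectors (feature loss)."""
    dot = sum(a * b for a, b in zip(hog_a, hog_b))
    na = math.sqrt(sum(a * a for a in hog_a))
    nb = math.sqrt(sum(b * b for b in hog_b))
    return 1.0 - dot / (na * nb)

def target_loss(box_a, box_b, hog_a, hog_b, w=0.5):
    # Equal weighting is an illustrative assumption; the embodiment only
    # says the two losses are combined.
    return w * area_loss(box_a, box_b) + (1 - w) * feature_loss(hog_a, hog_b)

THRESHOLD = 1.0  # assumed value; same target if target_loss < THRESHOLD
```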
In one possible embodiment, the electric vehicle identification result comprises an image of the specific electric vehicle area and the electric vehicle type, where the type includes two-wheeled electric vehicles, tricycles, and the like. The helmet recognition result indicates whether the rider of the electric vehicle is wearing a helmet.
In a possible implementation manner, in step 102, the electric vehicle detection based on the road image includes:
inputting the road image into an electric vehicle convolutional neural network for electric vehicle detection, and outputting the road image of the detected electric vehicle by the electric vehicle convolutional neural network; the electric vehicle convolutional neural network comprises an electric vehicle feature extraction network, an electric vehicle feature network and a regression network.
In one possible embodiment, before the inputting the road image into a convolutional neural network for electric vehicle detection, the method further comprises:
performing RGB format conversion on the road image; carrying out feature enhancement processing on the road image after format conversion; cutting a characteristic area of the road image subjected to the characteristic enhancement processing; and performing equal-scale amplification on the image with the cut characteristic region based on the size of the road image to obtain a preprocessed road image.
In a possible implementation manner, in step 103, the performing helmet detection on the electric vehicle image to obtain a helmet area position includes:
inputting the electric vehicle image into a helmet convolutional neural network for helmet detection, and acquiring the coordinate area position of a helmet in the electric vehicle image; the helmet convolutional neural network comprises a helmet feature extraction network, a helmet feature network and a regression network.
In a possible implementation, after step 104, the method further comprises:
based on the images of the electric vehicles and the recognition results of the electric vehicles and the helmet, collecting data corresponding to the functional parameters according to the functional parameters configured by the operator; and outputting the collected data, the plurality of electric vehicle images and the identification results of the plurality of electric vehicles and the helmet to the front end for displaying.
The method provided by the embodiment of the application is described in detail below with reference to the accompanying drawings.
The system suitable for the embodiment of the application comprises: the system comprises a video stream acquisition unit, an image detection unit, a web function unit, an event cache processing unit and a streaming media service unit.
In a first aspect, the video stream acquisition unit is configured to acquire video data from a network camera and, after frame extraction, send individual RGB images to the image detection unit for detection.
The video stream acquisition unit supports the conventional rtsp/rtmp video stream protocols: it unpacks the received rtp stream, assembles complete nalu units according to the h264/h265 standards, and finally obtains complete frames of YUV image data through a dedicated hardware decoder (mpp). It should be noted that other protocols, such as hls and flv, can be supported by interfacing the corresponding video protocol.
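Leaving the protocol unpacking and hardware decoding aside, the frame-extraction step reduces to choosing which decoded frames to keep. A minimal sketch of that index arithmetic, with the target analysis rate as an assumed parameter not stated in the embodiment:

```python
def frames_to_keep(n_frames, src_fps, target_fps):
    """Indices of frames to keep when downsampling src_fps to target_fps.

    The real system decodes rtsp/rtmp with a hardware decoder (mpp);
    this sketch shows only the frame-selection arithmetic.
    """
    step = src_fps / target_fps
    kept, next_pick = [], 0.0
    for i in range(n_frames):
        if i >= next_pick:
            kept.append(i)
            next_pick += step
    return kept
```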
In a second aspect, the Web function unit is used for configuring system functions by an operator, and the operator configures some parameters through a Web page and outputs the parameters as required.
These parameters specifically include detection event selection and detection area division. Detection events include helmet detection, riding against traffic, illegally carrying passengers, mask wearing, and the like; the detection area can be configured as a polygonal area, a rectangular area, a circular area, or the full frame.
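Deciding whether a detected target lies inside a configured detection area reduces to standard point-in-region tests. A minimal sketch covering the rectangular, circular, and polygonal cases; the pixel-coordinate convention is an assumption:

```python
def in_rect(p, rect):
    """Point p=(x, y) inside an axis-aligned rectangle (x0, y0, x1, y1)."""
    x, y = p
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def in_circle(p, centre, r):
    """Point p inside a circle of radius r around centre."""
    x, y = p
    cx, cy = centre
    return (x - cx) ** 2 + (y - cy) ** 2 <= r * r

def in_polygon(p, poly):
    """Ray-casting point-in-polygon test for the polygonal region."""
    x, y = p
    inside = False
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 > y) != (y1 > y) and x < (x1 - x0) * (y - y0) / (y1 - y0) + x0:
            inside = not inside
    return inside
```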
In a third aspect, the image detection unit is configured to detect manned electric vehicles and helmets in the RGB image (and optionally masks, etc.), and to output the detected target positions and cropped target images. The unit first performs multi-target electric vehicle detection and crops the images of electric vehicles entering the configured area; it then performs helmet detection on each electric vehicle image and outputs the current detection result together with the electric vehicle image.
The image detection unit comprises two modules: an image preprocessing module and a deep learning inference module.
The image preprocessing module converts the YUV image data produced by the video decoder into RGB format through a color space conversion algorithm, enhances the image with a Laplace algorithm, crops it, scales it in equal proportion to the original image, and sends it to the deep learning inference module for detection and recognition.
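The Laplace enhancement step can be sketched on a grayscale image held as nested lists; a production system would use an optimized library such as OpenCV, so this is purely illustrative:

```python
def laplacian_sharpen(img):
    """Sharpen a grayscale image (list of lists of ints) by subtracting the
    4-neighbour Laplacian: out = img - lap(img). Border pixels are left
    unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            out[y][x] = img[y][x] - lap
    return out
```

On a flat image the Laplacian is zero everywhere and the image is unchanged; at an edge or spike the contrast is amplified, which is the intended enhancement effect.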
The deep learning inference module is composed of convolutional neural networks. In the method provided by the embodiments of the application, a convolutional neural network is first built and trained on a large set of helmet pictures; the trained network can then automatically detect and recognize helmets. The network first detects the electric vehicle (a large target), crops the electric vehicle area, and then detects the helmet (a small target).
The network structure consists of a feature extraction network, a feature network, and a regression network. The overall framework follows conventional end-to-end detection networks such as yolo and ssd, but the feature network is built by stacking a basic unit whose repetition count can be chosen according to the computing power of the AI edge hardware: more capable hardware can run a deeper feature extraction network with stronger abstract feature extraction and fitting performance. The more layers the network has, the larger its parameter count, the higher the corresponding number of floating-point operations (FLOPs), and the more computing power it requires.
The basic unit of the feature extraction network is shown in fig. 2. It mainly comprises a convolution operator (ShiftConv) and a self-Attention module, with residual connections ensuring that feature information is not lost as the network deepens.
The ShiftConv module divides the input feature map into five groups: the first four groups are translated along different spatial directions and the last group is kept unchanged; a 1 x 1 convolution kernel then gathers feature information from adjacent elements. This enlarges the receptive field of the 1 x 1 kernel from 1 to 3, achieving a function similar to a 3 x 3 kernel without increasing the parameter count, which helps improve the model's inference speed.
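The spatial-shift step of ShiftConv can be sketched in pure Python. The one-pixel shift distance and the up/down/left/right directions are assumptions consistent with the description, and the subsequent 1 x 1 convolution is omitted:

```python
def shift_groups(feat):
    """feat: list of C channel maps (each an HxW list of lists).
    Channels are split into five groups: the first four are shifted one
    pixel up/down/left/right (zero-padded), the fifth is unchanged. A
    following 1x1 convolution would then see neighbouring pixels,
    mimicking a 3x3 kernel without extra parameters."""
    def shift(ch, dy, dx):
        h, w = len(ch), len(ch[0])
        return [[ch[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w
                 else 0
                 for x in range(w)] for y in range(h)]

    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, none
    g = max(1, len(feat) // 5)  # channels per group
    return [shift(ch, *dirs[min(i // g, 4)]) for i, ch in enumerate(feat)]
```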
ReLU is the activation function used in the network; it introduces nonlinearity and improves the network's fitting capability.
The Attention module implements a self-attention function, which helps the network focus on the most discriminative features of the target during learning, ignore redundant information, reduce interference, and thus improve overall performance. To reduce computation, the module performs self-attention within separate windows: window sizes are preset to three groups of 4 x 4, 8 x 8, and 16 x 16, the input feature map is divided into N groups, self-attention is computed within each group at the different window sizes, and the groups are finally aggregated through a 1 x 1 convolution. Self-attention follows the image feature association mechanism: a helmet, for example, is strongly associated with the face region and weakly associated with the feet, so after training the network focuses on the facial position, ignores other positions, and improves accuracy.
Self-attention evolves from covariance. With queries, keys, and values projected from the input feature map X as Q = XW_Q, K = XW_K, and V = XW_V, it takes the standard scaled dot-product form:

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

where d_k is the key dimension and the term QK^T measures the covariance-like similarity between spatial positions.
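A minimal sketch of the scaled dot-product self-attention operation on small dense matrices; the window partitioning and the 1 x 1 aggregation convolution described above are omitted:

```python
import math

def softmax(row):
    """Numerically stable softmax over one row of scores."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(r, c)) for c in zip(*b)] for r in a]

def self_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V for small dense matrices."""
    d_k = len(k[0])
    kt = [list(c) for c in zip(*k)]                      # K^T
    scores = [[s / math.sqrt(d_k) for s in row] for row in matmul(q, kt)]
    weights = [softmax(row) for row in scores]           # attention weights
    return matmul(weights, v)
```

With identity Q, K, V each position attends most strongly to itself, and each output row is a convex combination of the value rows (its entries sum to 1).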
in a fourth aspect, the event cache processing unit is configured to cache events of the entire system, and implement high-performance processing of concurrent events by combining an epoll mechanism of the linux system, so as to improve the real-time performance of the entire system.
The event cache processing unit establishes logical relationships between the results of the image detection unit and triggers event processing such as screenshots and information uploading. A logical relationship means establishing temporal and spatial relations between discrete target detection results, for example tracking multiple electric vehicles to determine whether a vehicle is travelling in the designated lane or against the direction of traffic.
The event cache processing unit comprises a communication module and an event detection module. The communication module handles external data interaction with clients, and the event detection module builds spatial and temporal relations over the data from the image detection unit to generate specific behavior events. The event detection module provides multi-target tracking and behavior judgment; its multi-target tracking function comprises HOG feature extraction, HOG feature matching, Kalman filtering, and loss matching.
The multi-target tracking flow is shown in fig. 3; the tracked object is the electric vehicle. After the image detection unit detects an electric vehicle target at moment T1, it stores the coordinates of the rectangular target frame (which encloses the electric vehicle's position in the image) and the image data inside it, and compares them with the data of the previous moment T0 in three ways. First, coordinate-area tracking: the rectangular frames stored at the two moments are tracked by a Kalman filter, and the Euclidean distance between the target's coordinate positions at the two moments is taken as the area loss. Second, re-identification tracking: the histogram-of-oriented-gradients (HOG) features of the image in the coordinate region are extracted, and the cosine distance between the targets' HOG features at the two moments is taken as the feature loss. Third, the final tracking combines the first two results: after synthesizing area loss and feature loss, the minimum-loss matching relation between targets is searched, similar to the Hungarian algorithm, as loss matching. Finally, the loss of each matched pair is compared with a threshold: a pair whose loss value is smaller than the threshold is judged to be the same target appearing at the two moments. The threshold is a fixed loss value for the same target.
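The minimum-loss matching step can be sketched by exhaustive assignment, which gives the same result as the Hungarian algorithm for the small target counts typical of a single camera view; the loss matrix and threshold come from the loss computations described above:

```python
from itertools import permutations

def match_targets(loss, threshold):
    """loss[i][j] = combined loss between target i at T0 and target j at T1.
    Exhaustive minimum-total-loss assignment (Hungarian-equivalent for small
    n); matched pairs whose loss exceeds the threshold are discarded."""
    n = len(loss)
    best = min(permutations(range(n)),
               key=lambda p: sum(loss[i][p[i]] for i in range(n)))
    return [(i, j) for i, j in enumerate(best) if loss[i][j] < threshold]
```

For larger target counts a real implementation would use a polynomial-time method (e.g. scipy's linear_sum_assignment); the brute-force version is shown only because it is short and obviously correct.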
In a fifth aspect, the streaming media service unit outputs information from the whole system: it displays the detection and recognition results of the event cache processing unit (the helmet image and its position, i.e. the rectangular frame marker) at the front end in the form of a video stream.
The streaming media service unit supports the webrtc protocol and is divided into a conventional communication module, an h264 coding module, and an rtp packet sending module. The communication module listens for video playing requests from clients, establishes a transaction connection for each request, and assigns it a unique sequence identifier. The h264 coding module converts the detected and annotated RGB image to YUV format through color space conversion and then performs h264 coding with the rockchip mpp module. The rtp packet sending module sends the nalu unit packets generated by h264 coding, writes the client's sequence number into the rtp packet header to prevent streams from being confused between clients, and adds time-jitter detection and rtp packet reordering to ensure accurate rtp delivery.
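The rtp packet header mentioned here has a fixed 12-byte layout defined by RFC 3550. A minimal sketch of packing it; payload type 96 is an assumed dynamic mapping for h264, and the extension/CSRC fields are left zero:

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Pack the 12-byte RTP fixed header (RFC 3550): V=2, P=0, X=0, CC=0.
    payload_type 96 is the common dynamic type used for h264."""
    first = 2 << 6                           # version 2, no padding/extension
    second = (marker << 7) | payload_type    # marker bit + payload type
    return struct.pack("!BBHII", first, second, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
```

The sequence number field is what the sending module uses for per-client ordering; the receiver's reordering and jitter handling key off the seq and timestamp fields.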
In summary, the embodiments of the present application provide a helmet detection method: frames are extracted from a road video to obtain a plurality of road images; electric vehicles are detected in the road images, and each road image in which an electric vehicle is detected is cropped to a set size to obtain an electric vehicle image; helmet detection is performed on the electric vehicle image to obtain a helmet area position; and multi-target tracking is performed based on the helmet area positions and the corresponding electric vehicle images to obtain identification results for a plurality of electric vehicles and helmets. By combining intelligent image and video analysis techniques, helmet detection accuracy and application efficiency are improved.
Based on the same technical concept, embodiments of the present application further provide a helmet detection system, as shown in fig. 4, the system includes:
the image module 401 is configured to perform frame extraction processing on a road video to obtain a plurality of road images;
an electric vehicle detection module 402, configured to perform electric vehicle detection based on the road image, and perform size-setting clipping on the road image of the detected electric vehicle to obtain an electric vehicle image;
a helmet detection module 403, configured to perform helmet detection on the electric vehicle image to obtain a helmet area position;
a tracking module 404, configured to perform multi-target tracking based on the helmet area position and the corresponding electric vehicle image, so as to obtain a plurality of electric vehicles and helmet identification results.
In a possible implementation, the tracking module 404 is specifically configured to:
acquiring images of the electric vehicle at two adjacent moments and corresponding helmet area positions; tracking according to helmet area positions corresponding to the electric vehicle images at two adjacent moments to obtain a coordinate point Euclidean distance of the helmet area positions as area loss; extracting the directional gradient histogram features of the helmet area according to the electric vehicle images at two adjacent moments, and calculating the cosine distance of the directional gradient histogram features as feature loss; searching a minimum loss matching relation according to the regional loss and the characteristic loss to obtain a target loss; if the target loss is smaller than a set threshold value, judging that the electric vehicles in the electric vehicle images at two adjacent moments are the same; and determining the electric vehicle and the helmet recognition result according to the electric vehicle image and the corresponding helmet area position of the same electric vehicle.
The embodiment of the application also provides electronic equipment corresponding to the method provided by the embodiment. Please refer to fig. 5, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. The electronic device 20 may include: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the computer program to perform the method provided by any one of the foregoing embodiments.
The memory 201 may include a high-speed random access memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. The communication connection between a network element of the system and at least one other network element is realized through at least one communication interface 203 (which may be wired or wireless), and the internet, a wide area network, a local area network, a metropolitan area network, and the like can be used.
The bus 202 can be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. The memory 201 is used for storing a program, and the processor 200 executes the program after receiving an execution instruction; the method disclosed by any of the foregoing embodiments of the present application may be applied to the processor 200 or implemented by the processor 200.
The processor 200 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 200. The processor 200 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with its hardware.
The electronic device provided by this embodiment of the present application arises from the same inventive concept as the method provided by the foregoing embodiments, and has the same beneficial effects as the method it adopts, runs, or implements.
An embodiment of the present application also provides a computer-readable storage medium corresponding to the method provided by the foregoing embodiments. Referring to fig. 6, the computer-readable storage medium is illustrated as an optical disc 30 on which a computer program (i.e., a program product) is stored; when the computer program is executed by a processor, it performs the method of any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above embodiment of the present application arises from the same inventive concept as the method provided by the embodiments of the present application, and has the same advantages as the method adopted, run, or implemented by the application program it stores.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best mode of use of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include some features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the application and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the system according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A helmet detection method, comprising:
performing frame extraction processing on the road video to obtain a plurality of road images;
carrying out electric vehicle detection based on the road image, and cropping the road image in which an electric vehicle is detected to a set size to obtain an electric vehicle image;
performing helmet detection on the electric vehicle image to obtain a helmet area position;
and carrying out multi-target tracking based on the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicle and helmet identification results.
2. The method of claim 1, wherein said carrying out multi-target tracking based on said helmet area position and corresponding electric vehicle image to obtain a plurality of electric vehicle and helmet identification results comprises:
acquiring electric vehicle images at two adjacent moments and the corresponding helmet area positions;
tracking according to the helmet area positions corresponding to the electric vehicle images at the two adjacent moments, and taking the Euclidean distance between the coordinate points of the helmet area positions as the area loss;
extracting histogram of oriented gradients (HOG) features of the helmet area from the electric vehicle images at the two adjacent moments, and calculating the cosine distance between the HOG features as the feature loss;
finding the minimum-loss matching according to the area loss and the feature loss to obtain a target loss;
if the target loss is smaller than a set threshold, determining that the electric vehicles in the electric vehicle images at the two adjacent moments are the same; and
determining the electric vehicle and helmet identification result according to the electric vehicle images and corresponding helmet area positions of the same electric vehicle.
3. The method of claim 1, wherein the performing electric vehicle detection based on the road image comprises:
inputting the road image into an electric vehicle convolutional neural network for electric vehicle detection, and outputting the road image of the detected electric vehicle by the electric vehicle convolutional neural network; the electric vehicle convolutional neural network comprises an electric vehicle feature extraction network, an electric vehicle feature network and a regression network.
4. The method of claim 3, wherein prior to said inputting said road image into a convolutional neural network for electric vehicle detection, said method further comprises:
performing RGB format conversion on the road image;
carrying out feature enhancement processing on the format-converted road image;
cropping the feature region of the feature-enhanced road image;
and enlarging the cropped feature-region image in equal proportion, based on the size of the road image, to obtain a preprocessed road image.
5. The method of claim 1, wherein performing helmet detection on the electric vehicle image to obtain a helmet area position comprises:
inputting the electric vehicle image into a helmet convolutional neural network for helmet detection, and acquiring the coordinate area position of a helmet in the electric vehicle image; the helmet convolutional neural network comprises a helmet feature extraction network, a helmet feature network and a regression network.
6. The method of claim 1, wherein the method further comprises:
collecting, based on the plurality of electric vehicle images and the electric vehicle and helmet identification results, data corresponding to the functional parameters configured by the operator;
and outputting the collected data, the plurality of electric vehicle images, and the electric vehicle and helmet identification results to the front end for display.
7. A helmet detection system, the system comprising:
the image module is used for performing frame extraction processing on the road video to obtain a plurality of road images;
the electric vehicle detection module is used for carrying out electric vehicle detection based on the road image, and cropping the road image in which an electric vehicle is detected to a set size to obtain an electric vehicle image;
the helmet detection module is used for performing helmet detection on the electric vehicle image to obtain a helmet area position;
and the tracking module is used for carrying out multi-target tracking based on the helmet area position and the corresponding electric vehicle image to obtain a plurality of electric vehicle and helmet identification results.
8. The system of claim 7, wherein the tracking module is specifically configured to:
acquiring electric vehicle images at two adjacent moments and the corresponding helmet area positions;
tracking according to the helmet area positions corresponding to the electric vehicle images at the two adjacent moments, and taking the Euclidean distance between the coordinate points of the helmet area positions as the area loss;
extracting histogram of oriented gradients (HOG) features of the helmet area from the electric vehicle images at the two adjacent moments, and calculating the cosine distance between the HOG features as the feature loss;
finding the minimum-loss matching according to the area loss and the feature loss to obtain a target loss;
if the target loss is smaller than a set threshold, determining that the electric vehicles in the electric vehicle images at the two adjacent moments are the same; and
determining the electric vehicle and helmet identification result according to the electric vehicle images and corresponding helmet area positions of the same electric vehicle.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any of claims 1-6.
10. A computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a processor to implement the method of any one of claims 1-6.
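The preprocessing pipeline of claim 4 (RGB conversion, feature enhancement, feature-region cropping, equal-scale enlargement) can be sketched as follows. This is a minimal illustration under assumptions of our own: the enhancement is taken to be a simple min-max contrast stretch, the feature region is assumed to be given as a bounding box, and the enlargement uses nearest-neighbour sampling; the patent does not fix any of these details.

```python
import numpy as np

def to_rgb(img_bgr):
    """RGB format conversion, assuming the camera delivers BGR frames."""
    return img_bgr[:, :, ::-1]

def enhance(img):
    """Feature enhancement as a min-max contrast stretch (an assumed choice)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return img.astype(np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def crop(img, box):
    """Cut out the feature region given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return img[y1:y2, x1:x2]

def nn_resize(img, h, w):
    """Nearest-neighbour resize used for the enlargement step."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def preprocess(road_img_bgr, feature_box):
    """Claim-4 pipeline: convert, enhance, crop, then enlarge the crop
    in equal proportion up to the size of the original road image."""
    rgb = enhance(to_rgb(road_img_bgr))
    patch = crop(rgb, feature_box)
    H, W = road_img_bgr.shape[:2]
    h, w = patch.shape[:2]
    scale = min(H / h, W / w)            # equal scale: one factor for both axes
    return nn_resize(patch, int(round(h * scale)), int(round(w * scale)))
```

The single scale factor keeps the aspect ratio of the cropped region intact, which is what the claim's "equal-scale amplification based on the size of the road image" appears to require.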
CN202310186682.9A 2023-03-02 2023-03-02 Helmet detection method and system Pending CN115861907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310186682.9A CN115861907A (en) 2023-03-02 2023-03-02 Helmet detection method and system

Publications (1)

Publication Number Publication Date
CN115861907A true CN115861907A (en) 2023-03-28

Family

ID=85659598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310186682.9A Pending CN115861907A (en) 2023-03-02 2023-03-02 Helmet detection method and system

Country Status (1)

Country Link
CN (1) CN115861907A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130305437A1 (en) * 2012-05-19 2013-11-21 Skully Helmets Inc. Augmented reality motorcycle helmet
CN104200668A (en) * 2014-07-28 2014-12-10 四川大学 Image-analysis-based detection method for helmet-free motorcycle driving violation event
CN109448025A (en) * 2018-11-09 2019-03-08 国家体育总局体育科学研究所 Short-track speeding skating sportsman's automatically tracks and track modeling method in video
CN110503000A (en) * 2019-07-25 2019-11-26 杭州电子科技大学 A kind of teaching new line rate measurement method based on face recognition technology
AU2020100711A4 (en) * 2020-05-05 2020-06-11 Chang, Cheng Mr The retrieval system of wearing safety helmet based on deep learning
CN111444766A (en) * 2020-02-24 2020-07-24 浙江科技学院 Vehicle tracking method and device based on image processing, computer equipment and storage medium
CN112164228A (en) * 2020-09-15 2021-01-01 深圳市点创科技有限公司 Helmet-free behavior detection method for driving electric vehicle, electronic device and storage medium
CN112381132A (en) * 2020-11-11 2021-02-19 上汽大众汽车有限公司 Target object tracking method and system based on fusion of multiple cameras
CN113486850A (en) * 2021-07-27 2021-10-08 浙江商汤科技开发有限公司 Traffic behavior recognition method and device, electronic equipment and storage medium
CN113887343A (en) * 2021-09-16 2022-01-04 浙江工商大学 Method for detecting helmet-free behavior of riding personnel based on multitask deep learning
CN113887304A (en) * 2021-09-01 2022-01-04 的卢技术有限公司 Road occupation operation monitoring method based on target detection and pedestrian tracking
CN114387549A (en) * 2022-01-11 2022-04-22 山东华夏高科信息股份有限公司 Zebra crossing gift pedestrian visual detection system and method based on deep learning
CN114898297A (en) * 2022-05-30 2022-08-12 浙江嘉兴数字城市实验室有限公司 Non-motor vehicle illegal behavior determination method based on target detection and target tracking
CN115546260A (en) * 2022-09-21 2022-12-30 中国船舶集团有限公司第七一一研究所 Target identification tracking method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUANG HAN 等: "Method based on the cross-layer attention mechanism and multiscale perception for safety helmet-wearing detection", 《COMPUTERS AND ELECTRICAL ENGINEERING》 *
张国鹏: "基于局部特征和度量学习的行人重识别模型研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
焦珊珊 等: "多目标跨摄像头跟踪技术", 《国防科技》 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination