CN114299431A - In-school safety detection method and system based on deep learning


Info

Publication number
CN114299431A
Authority
CN
China
Prior art keywords
school
deep learning
information
feature
safety
Prior art date
Legal status
Pending
Application number
CN202111615684.2A
Other languages
Chinese (zh)
Inventor
吴贝宁
Current Assignee
Anhui Normal University
Original Assignee
Anhui Normal University
Priority date
Filing date
2021-12-27
Publication date
2022-04-08
Application filed by Anhui Normal University

Abstract

The invention provides an in-school safety detection method and system based on deep learning, and relates to the field of deep learning. The in-school safety detection method based on deep learning comprises the following steps: collecting image data and basic data information of students during school and performing data preprocessing; performing feature extraction on original frame images of the students during school through a convolutional neural network to generate feature maps; weighting the feature maps along the channel and spatial dimensions using an attention mechanism, and adjusting feature response values to emphasize or suppress information so as to improve the expressiveness of the network; and performing weighted summation over the feature maps to obtain attention weight coefficients and, from these, a safety judgment result. The method can identify abnormal conditions of students based on a deep learning model, and can predict and block dangerous situations before they occur.

Description

In-school safety detection method and system based on deep learning
Technical Field
The invention relates to the field of deep learning, and in particular to an in-school safety detection method and system based on deep learning.
Background
The human visual system can quickly and accurately locate highly distinctive objects or scene regions (also called salient objects) in the visual field, which has motivated the simulation, study and exploration of human visual perception capability in the vision field. Research shows that the human visual attention mechanism analyzes and integrates partial information in the visual space to build an understanding of the whole scene. Salient object detection likewise aims to efficiently filter out unimportant information in a visual scene, to extract, simulate and predict the high-level information in human visual perception, and to explore and model the mechanisms of the human visual perception system.
At present, owing to the complexity of spatio-temporal modeling and the scarcity of video salient object detection datasets, research on salient object detection models for video data has not advanced greatly. In addition, video data often involves complex scenes, camera shake, variable target positions and other problems, so video saliency detection is more challenging than saliency detection on still images.
Potential safety hazards may arise anywhere students study, live, rest and exercise while at school. To maintain a safe, tidy and comfortable environment for students during school and to cultivate the talent society needs, an accurate real-time detection method is urgently required to improve the safety of students at school.
Disclosure of Invention
The invention aims to provide an in-school safety detection method based on deep learning, which can identify abnormal conditions of students using a deep learning model, reduce the workload of administrators, predict and block dangerous situations before they occur, and generate abnormality information for students who may be involved in dangerous situations, for reference by relevant personnel.
Another object of the present invention is to provide an in-school safety detection system based on deep learning, which is capable of running the in-school safety detection method based on deep learning.
The embodiment of the invention is realized by the following steps:
In a first aspect, an embodiment of the present application provides an in-school safety detection method based on deep learning, which includes: collecting image data and basic data information of a student during school and performing data preprocessing; performing feature extraction on original frame images of the student during school through a convolutional neural network to generate feature maps; weighting the feature maps along the channel and spatial dimensions using an attention mechanism, and adjusting feature response values to emphasize or suppress information so as to improve the expressiveness of the network; and performing weighted summation over the feature maps to obtain attention weight coefficients and, from these, a safety judgment result.
In some embodiments of the present invention, collecting the image data and basic data information of a student during school and performing data preprocessing includes: acquiring a portrait frame in each piece of image data, detecting at least one key pixel point of each portrait via portrait key points, and performing softmax normalization on the image according to the detected portrait frame and key pixels to obtain portrait images of the same size.
In some embodiments of the present invention, the above further includes: the basic data information includes pre-recorded facial image information of students, certificate number information, name information, graduation time information, and school name information.
In some embodiments of the present invention, performing feature extraction on the original frame images of the student during school through the convolutional neural network and generating the feature map includes: extracting multi-scale feature representations from the input preceding and following frames, respectively, through a deep convolutional network operating on the original frame images, and completing multi-level feature extraction of the original frame images from the obtained multi-scale feature representations.
In some embodiments of the present invention, weighting the feature map along the channel and spatial dimensions using the attention mechanism and adjusting the feature response values to emphasize or suppress information so as to improve the expressiveness of the network includes: feeding the feature values of the channel and spatial dimensions into the attention weight calculation, and computing the attention weight of each dimension with a nonlinear function.
In some embodiments of the present invention, the above further includes: aggregating the spatial information of the input feature values along the channel and spatial dimensions through an average pooling operation, then feeding the aggregated information into a multi-layer perceptron with one hidden layer, and merging the output feature vectors to generate attention maps along the channel and spatial dimensions.
In some embodiments of the present invention, performing weighted summation over the feature map to obtain the attention weight coefficients and, from these, the safety judgment result includes: performing feature dimensionality reduction on the feature map, using a classifier to output pixel-level classification results for two adjacent frames, and performing feature extraction, weighting and summation on the classification results to obtain the attention weight coefficients.
In a second aspect, an embodiment of the present application provides an in-school safety detection system based on deep learning, which includes a data preprocessing module configured to collect image data and basic data information of a student during school and perform data preprocessing;
a feature extraction module configured to perform feature extraction on original frame images of the student during school through a convolutional neural network and generate feature maps;
a weighting module configured to weight the feature maps along the channel and spatial dimensions using an attention mechanism, and to adjust feature response values to emphasize or suppress information so as to improve the expressiveness of the network;
and an output module configured to perform weighted summation over the feature maps to obtain attention weight coefficients and thereby obtain a safety judgment result.
In some embodiments of the invention, the above includes: at least one memory for storing computer instructions; and at least one processor in communication with the memory, wherein execution of the computer instructions by the at least one processor causes the system to implement the data preprocessing module, the feature extraction module, the weighting module and the output module.
In a third aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements any of the above in-school safety detection methods based on deep learning.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
the video saliency object detection model based on the attribute mechanism in the deep learning model takes adjacent front and back frame data with a sequence as input, and for a network structure, compared with a method based on a cyclic neural network, the method focuses more on saliency detection accuracy in a short-time frame. Therefore, the abnormal conditions of the students can be identified, the workload of managers is reduced, the students can be predicted and blocked before dangerous conditions occur, abnormal information is generated for the students possibly sending the dangerous conditions, and related personnel can conveniently look up the abnormal information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting the scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating steps of a safety detection method in a school based on deep learning according to an embodiment of the present invention;
Fig. 2 is a schematic diagram illustrating the detailed steps of an in-school safety detection method based on deep learning according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the modules of an in-school safety detection system based on deep learning according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 10 - data preprocessing module; 20 - feature extraction module; 30 - weighting module; 40 - output module; 101 - memory; 102 - processor; 103 - communication interface.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Example 1
Referring to fig. 1, fig. 1 is a schematic diagram illustrating the steps of an in-school safety detection method based on deep learning according to an embodiment of the present invention, as follows:
s100, collecting image data and basic data information of students during school, and performing data preprocessing;
In some embodiments, a certification picture of the student during school is obtained, which may include, for example, a degree certificate or a graduation certificate. Ways to obtain the certification picture include: after receiving an instruction to acquire the certification picture of the student, automatically retrieving a pre-stored certification picture, or automatically activating a camera on campus to capture one. An academic certificate is a document capable of certifying an academic qualification; in this embodiment it includes a degree certificate or a graduation certificate, on which facial image, certificate number, name, sex, admission time, graduation time, date of birth, major, learning form (such as full-time) and school name information are recorded.
The method includes acquiring the captured original frame image: after a camera on campus captures the image, the capture terminal uploads the original frame image to a server; the original frame image is the image captured directly by the terminal, without processing such as cropping. The original frame image is then preprocessed to obtain the input image. In a real shooting environment, the subject is generally not aligned with the center of the imaging plane, so the captured images exhibit a variety of imaging conditions.
Step S110, performing feature extraction on an original frame image of a student during school through a convolutional neural network, and generating a feature map;
In some embodiments, the original frame image typically has three RGB channels, so the input typically has three dimensions: (height, width, channels). For example, a 28 × 28 RGB picture has dimensions (28, 28, 3). For a 2-dimensional (8, 8) input and a (3, 3) convolution kernel, the output is 2-dimensional (6, 6). For a three-dimensional (8, 8, 3) input, the convolution kernel becomes (3, 3, 3), its last dimension matching the input channel dimension. The convolution is then the sum over all elements of the three channels after element-wise multiplication, i.e. the sum of 27 products instead of the previous 9. The output dimensions therefore do not change and remain (6, 6).
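The following minimal NumPy sketch (not code from the patent) checks the dimension arithmetic described above: sliding a (3, 3, 3) kernel over an (8, 8, 3) input sums 27 products per position and yields a (6, 6) output.

```python
import numpy as np

def valid_conv2d(image, kernel):
    """'Valid' convolution of an (H, W, C) image with a (kh, kw, C) kernel."""
    H, W, C = image.shape
    kh, kw, kc = kernel.shape
    assert kc == C, "kernel channel count must match the input"
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum of kh * kw * C element-wise products (here 3 * 3 * 3 = 27)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw, :] * kernel)
    return out

rgb_patch = np.random.rand(8, 8, 3)   # toy (8, 8, 3) input
kernel = np.random.rand(3, 3, 3)      # (3, 3, 3) kernel
print(valid_conv2d(rgb_patch, kernel).shape)  # -> (6, 6)
```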
Step S120, weighting the feature maps along the channel and spatial dimensions using an attention mechanism, and adjusting feature response values to emphasize or suppress information so as to improve the expressiveness of the network;
In some embodiments, feature extraction power is further improved through an attention mechanism; at the same time, the feature maps are weighted along the channel and spatial axes by means of channel attention and spatial attention, and the feature response values are adjusted to emphasize or suppress information, thereby improving the expressiveness of the network.
Step S130, performing weighted summation over the feature maps to obtain attention weight coefficients, and from these obtaining a safety judgment result.
In some embodiments, during safety classification, all features that are useful for classification are exploited as fully as possible so that the classification performs as well as possible. For example, neural-network-based methods such as an LSTM-attention model or a convolutional neural network model can exploit the relationships between the real-time image frames of students; certain similarities between safety states can be observed intuitively and used for the safety judgment. Compared with prior approaches that use only safety-related features as the classification basis, adopting a graph convolutional neural network to learn information between nodes at different distances, and to propagate and capture the correlations between safety states through iterative transformation, brings a large improvement. Meanwhile, a multi-granularity graph convolutional network constructs several probability transfer matrices with different depth distances to obtain the correlation information of each node under different features; it can mine the correlations among the different features of the attention model and overcomes the shortcoming that a single convolutional neural network does not fully exploit all of the image background information.
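As an illustrative sketch only (not the patent's multi-granularity model), one graph convolution step can be written as below: multiplying a row-normalized adjacency ("probability transfer") matrix with node features propagates information between neighboring nodes, and iterating or stacking such layers reaches nodes at larger distances. The class and tensor sizes here are assumptions.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: (N, N) row-normalized transfer matrix, x: (N, in_dim) node features
        return torch.relu(self.linear(adj @ x))

N, in_dim, out_dim = 5, 16, 8
adj = torch.rand(N, N)
adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize into a transfer matrix
x = torch.rand(N, in_dim)
layer = SimpleGraphConv(in_dim, out_dim)
print(layer(adj, x).shape)                 # -> torch.Size([5, 8])
```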
Example 2
Referring to fig. 2, fig. 2 is a schematic diagram illustrating the detailed steps of an in-school safety detection method based on deep learning according to an embodiment of the present invention, as follows:
Step S200, acquiring a portrait frame in each piece of image data, detecting at least one key pixel point of each portrait via portrait key points, and performing softmax normalization on the images according to the detected portrait frames and key pixels to obtain portrait images of the same size.
Step S210, the basic data information comprises the pre-recorded face image information, certificate number information, name information, graduation time information and school name information of the student.
Step S220, extracting multi-scale feature representations from the input preceding and following frames, respectively, through a deep convolutional network operating on the original frame images, and completing multi-level feature extraction of the original frame images from the obtained multi-scale feature representations.
Step S230, feeding the feature values of the channel and spatial dimensions into the attention weight calculation, and computing the attention weight of each dimension with a nonlinear function.
Step S240, aggregating the spatial information of the input feature values along the channel and spatial dimensions through an average pooling operation, then feeding the aggregated information into a multi-layer perceptron with one hidden layer, and merging the output feature vectors to generate attention maps along the channel and spatial dimensions.
Step S250, performing feature dimensionality reduction on the feature map, using a classifier to output pixel-level classification results for two adjacent frames, and performing feature extraction, weighting and summation on the classification results to obtain the attention weight coefficients.
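The patent does not name its backbone, so the sketch below assumes a ResNet-18 purely for illustration: features are taken after each residual stage for the preceding and following frames, giving a multi-scale representation per frame. The helper name multi_scale_features is hypothetical.

```python
import torch
import torchvision.models as models

backbone = models.resnet18()
backbone.eval()
stem = torch.nn.Sequential(backbone.conv1, backbone.bn1,
                           backbone.relu, backbone.maxpool)
stages = [backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4]

def multi_scale_features(frame):
    """frame: (B, 3, H, W) tensor -> list of feature maps at four scales."""
    feats, x = [], stem(frame)
    for stage in stages:
        x = stage(x)
        feats.append(x)
    return feats

prev_frame = torch.rand(1, 3, 224, 224)
next_frame = torch.rand(1, 3, 224, 224)
prev_feats = multi_scale_features(prev_frame)   # spatial scales 56, 28, 14, 7
next_feats = multi_scale_features(next_frame)
print([f.shape for f in prev_feats])
```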
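A CBAM-style sketch consistent with the description above, but an assumption rather than the patent's exact network: average pooling aggregates spatial information per channel, a one-hidden-layer MLP produces channel attention, and channel-wise average pooling followed by a convolution produces spatial attention. The reduction ratio of 8 and the 7 × 7 spatial kernel are assumed values.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(                 # MLP with one hidden layer
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, x):                         # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3))                  # aggregate spatial info: (B, C)
        ch_att = torch.sigmoid(self.mlp(avg)).view(b, c, 1, 1)
        x = x * ch_att                            # weight along the channel axis
        sp_avg = x.mean(dim=1, keepdim=True)      # aggregate channels: (B, 1, H, W)
        sp_att = torch.sigmoid(self.spatial_conv(sp_avg))
        return x * sp_att                         # weight along the spatial axes

feat = torch.rand(2, 64, 28, 28)
print(ChannelSpatialAttention(64)(feat).shape)    # -> torch.Size([2, 64, 28, 28])
```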
In some embodiments, the classifier outputs pixel-level classification results for two adjacent frames; the attention mechanism and spatio-temporal feature fusion are trained end to end, and GPU parallel computing is used to accelerate the training of the model. With the trained model, salient object detection on the image data can be performed for a given pair of video frames to be detected. Starting from short-term sequence dependencies, adjacent, ordered preceding and following frame data are taken as input; a self-attention mechanism and an attention mechanism respectively improve the accuracy of intra-frame salient object detection while capturing the consistency of salient objects across frames. Within a similar network framework, the multi-level feature extraction module, self-attention module, attention mechanism and spatio-temporal feature fusion of the video salient object detection model are combined to form the feature map used for feature dimensionality reduction; the classifier outputs pixel-level classification results for the two adjacent frames, and feature extraction, weighting and summation over the classification results yield the attention weight coefficients.
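A minimal head sketch under assumptions: a 1 × 1 convolution performs the feature dimension reduction and acts as a pixel-level classifier for the fused features of the two adjacent frames, and a weighted sum over the resulting map gives a scalar coefficient that is thresholded into a judgment. The class layout, the 0.5 threshold and the uniform weighting are illustrative choices, not values from the patent.

```python
import torch
import torch.nn as nn

class PixelClassifierHead(nn.Module):
    def __init__(self, in_channels, num_classes=2):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, 32, kernel_size=1)    # feature dim reduction
        self.classify = nn.Conv2d(32, num_classes, kernel_size=1)  # pixel-level classifier

    def forward(self, fused_feat):
        logits = self.classify(torch.relu(self.reduce(fused_feat)))
        probs = torch.softmax(logits, dim=1)      # per-pixel class probabilities
        # uniform weighted sum of the "abnormal" channel as the attention weight coefficient
        weight = probs[:, 1].mean(dim=(1, 2))
        return probs, weight

head = PixelClassifierHead(in_channels=64)
fused = torch.rand(1, 64, 28, 28)                 # fused features of two adjacent frames
_, score = head(fused)
print("abnormal" if score.item() > 0.5 else "normal")
```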
Example 3
Referring to fig. 3, fig. 3 is a schematic diagram of the modules of an in-school safety detection system based on deep learning according to an embodiment of the present invention, as follows:
the data preprocessing module 10 is configured to collect image data and basic data information of students during school and perform data preprocessing;
the feature extraction module 20 is configured to perform feature extraction on original frame images of the students during school through a convolutional neural network and generate feature maps;
the weighting module 30 is configured to weight the feature maps along the channel and spatial dimensions using an attention mechanism, and to adjust feature response values to emphasize or suppress information so as to improve the expressiveness of the network;
and the output module 40 is configured to perform weighted summation over the feature maps to obtain attention weight coefficients and thereby obtain a safety judgment result.
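How the four modules of fig. 3 cooperate is sketched below purely as an assumption about the data flow of steps S100–S130; the class and method names are hypothetical and not part of the original disclosure.

```python
from typing import Any, Callable, List

class InSchoolSafetySystem:
    def __init__(self,
                 preprocess: Callable[[Any], Any],
                 extract_features: Callable[[Any], Any],
                 apply_attention: Callable[[Any], Any],
                 decide: Callable[[Any], str]):
        self.preprocess = preprocess                # data preprocessing module (10)
        self.extract_features = extract_features    # feature extraction module (20)
        self.apply_attention = apply_attention      # weighting module (30)
        self.decide = decide                        # output module (40)

    def run(self, raw_frames: List[Any]) -> str:
        data = self.preprocess(raw_frames)          # S100
        feats = self.extract_features(data)         # S110
        weighted = self.apply_attention(feats)      # S120
        return self.decide(weighted)                # S130
```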
As shown in fig. 4, an embodiment of the present application provides an electronic device, which includes a memory 101 for storing one or more programs; a processor 102. The one or more programs, when executed by the processor 102, implement the method of any of the first aspects as described above.
Also included is a communication interface 103, and the memory 101, processor 102 and communication interface 103 are electrically connected to each other, directly or indirectly, to enable transfer or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, and the processor 102 executes the software programs and modules stored in the memory 101 to thereby execute various functional applications and data processing. The communication interface 103 may be used for communicating signaling or data with other node devices.
The memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiments provided in the present application, it should be understood that the disclosed method and system can be implemented in other ways. The method and system embodiments described above are merely illustrative; for example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In another aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by the processor 102, implements the method according to any one of the first aspect described above. The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
In summary, the in-school safety detection method and system based on deep learning provided by the embodiments of the present application use, in the video salient object detection model based on the attention mechanism in the deep learning model, adjacent and ordered preceding and following frame data as input; in terms of network structure, compared with methods based on recurrent neural networks, they focus more on saliency detection accuracy within short time frames. Therefore, abnormal conditions of students can be identified, the workload of administrators is reduced, dangerous situations can be predicted and blocked before they occur, and abnormality information is generated for students who may be involved in dangerous situations, for convenient reference by relevant personnel.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. An in-school safety detection method based on deep learning, characterized by comprising the following steps:
collecting image data and basic data information of students during school, and performing data preprocessing;
performing feature extraction on an original frame image of a student during school through a convolutional neural network, and generating a feature map;
weighting the feature map along the channel and spatial dimensions using an attention mechanism, and adjusting feature response values to emphasize or suppress information so as to improve the expressiveness of the network;
and performing weighted summation over the feature map to obtain an attention weight coefficient, and thereby obtaining a safety judgment result.
2. The in-school safety detection method based on deep learning of claim 1, wherein collecting the image data and basic data information of the student during school and performing data preprocessing comprises:
acquiring a portrait frame in each piece of image data, detecting at least one key pixel point of each portrait via portrait key points, and performing softmax normalization on the image according to the detected portrait frame and key pixels to obtain portrait images of the same size.
3. The in-school safety detection method based on deep learning of claim 2, further comprising:
the basic data information includes pre-recorded facial image information of students, certificate number information, name information, graduation time information, and school name information.
4. The in-school safety detection method based on deep learning according to claim 1, wherein extracting features of the original frame images of the student during school through the convolutional neural network and generating the feature map comprises:
extracting multi-scale feature representations from the input preceding and following frames, respectively, through a deep convolutional network operating on the original frame images, and completing multi-level feature extraction of the original frame images from the obtained multi-scale feature representations.
5. The in-school safety detection method based on deep learning of claim 1, wherein weighting the feature map along the channel and spatial dimensions using an attention mechanism and adjusting the feature response values to emphasize or suppress information so as to improve the expressiveness of the network comprises:
feeding the feature values of the channel and spatial dimensions into the attention weight calculation, and computing the attention weight of each dimension with a nonlinear function.
6. The in-school safety detection method based on deep learning of claim 5, further comprising:
aggregating the spatial information of the input feature values along the channel and spatial dimensions through an average pooling operation, then feeding the aggregated information into a multi-layer perceptron with one hidden layer, and merging the output feature vectors to generate attention maps along the channel and spatial dimensions.
7. The in-school safety detection method based on deep learning of claim 1, wherein performing weighted summation according to the feature map to obtain the attention weight coefficient and thereby obtaining the safety judgment result comprises:
performing feature dimensionality reduction on the feature map, using a classifier to output pixel-level classification results for two adjacent frames, and performing feature extraction, weighting and summation on the classification results to obtain the attention weight coefficient.
8. An in-school safety detection system based on deep learning, comprising:
the data preprocessing module is configured to collect image data and basic data information of students during school and perform data preprocessing;
the feature extraction module is configured to perform feature extraction on original frame images of the students during school through a convolutional neural network and generate a feature map;
the weighting module is configured to weight the feature map along the channel and spatial dimensions using an attention mechanism, and to adjust feature response values to emphasize or suppress information so as to improve the expressiveness of the network;
and the output module is configured to perform weighted summation over the feature map to obtain an attention weight coefficient and thereby obtain a safety judgment result.
9. The in-school safety detection system based on deep learning of claim 8, comprising:
at least one memory for storing computer instructions;
at least one processor in communication with the memory, wherein execution of the computer instructions by the at least one processor causes the system to implement: the data preprocessing module, the feature extraction module, the weighting module and the output module.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
Priority Applications (1)

Application Number: CN202111615684.2A; Priority Date: 2021-12-27; Filing Date: 2021-12-27; Title: In-school safety detection method and system based on deep learning

Publications (1)

Publication Number: CN114299431A; Publication Date: 2022-04-08

Family

ID: 80970287

Country Status (1)

Country: CN; Link: CN114299431A (en)


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination