CN112437275B - Video analysis method based on intelligent camera - Google Patents


Info

Publication number
CN112437275B
Authority
CN
China
Prior art keywords
layer
model
camera
reasoning
ssd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011314510.8A
Other languages
Chinese (zh)
Other versions
CN112437275A (en
Inventor
朱常玉
单建华
吴晓欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pin Ming Technology Co ltd
Original Assignee
Pin Ming Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pin Ming Technology Co ltd filed Critical Pin Ming Technology Co ltd
Priority to CN202011314510.8A priority Critical patent/CN112437275B/en
Publication of CN112437275A publication Critical patent/CN112437275A/en
Application granted granted Critical
Publication of CN112437275B publication Critical patent/CN112437275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a video analysis method based on an intelligent camera, which comprises the following steps: S1, installing a camera for monitoring a specified direction at a preset position; S2, designing a labeled object, and designing and training a detection model according to the labeled object to obtain a monitoring model; S3, pruning the operators that are not supported by the chip from the monitoring model to obtain a pruned part and a residual part, wherein inference on the pruned part is performed by the CPU (central processing unit) and inference on the residual part is performed by the chip, the two together serving as the model algorithm; S4, installing the model algorithm on the camera; and S5, starting the model algorithm and transmitting the alarm pictures obtained by the camera to a cloud platform. By designing the detection network model specifically for the intelligent camera and dividing the computation between the chip and the CPU, performance is greatly improved and installation and deployment are convenient; no server is required for heavy computation, which greatly reduces hardware cost; automatic monitoring effectively reduces the manual maintenance cost of a construction site; and the algorithm has high recognition accuracy and good real-time performance, meeting the requirements of engineering applications.

Description

Video analysis method based on intelligent camera
Technical Field
The invention relates to the field of artificial intelligence of video monitoring, in particular to a video analysis method based on an intelligent camera.
Background
With the continuous upgrading of computer and network technology, artificial intelligence keeps developing and its applications keep expanding. The core of artificial intelligence is information detection and information processing; unlike traditional processing methods, both the processing speed and the amount of data processed are increased by orders of magnitude.
At present, most artificial intelligence computing platforms are deployed on a server configured with one or more graphics cards. With the development of chip technology, intelligent computing platforms are gradually shifting to edge devices such as cameras. Meanwhile, in the digital transformation of building construction sites, artificial intelligence detection algorithms are also being applied, such as safety helmet detection, reflective garment detection, and personnel area intrusion detection.
Currently, on-premise deployment of such algorithms is mostly server-based, which raises the following problems:
first, high-temperature heat dissipation of the server;
second, graphics cards loosening during transportation;
third, the machines are relatively large and expensive, and carry a certain maintenance cost.
Therefore, intelligent detection algorithms based on edge cameras have emerged. Compared with a server, a camera is small and the computing power of its chip is limited, so the key problem becomes how to design a reasonable algorithm model that can run on a camera with limited computing power.
Disclosure of Invention
The invention aims to provide a video analysis method based on an intelligent camera that greatly improves recognition accuracy, is convenient to install and deploy, and saves cost.
In order to solve the technical problem, the invention provides a video analysis method based on an intelligent camera, which comprises the following steps:
S1, installing a camera for monitoring a specified direction at a preset position;
S2, designing a labeled object, and designing and training a detection model according to the labeled object to obtain a monitoring model;
S3, pruning the operators that are not supported by the chip from the monitoring model to obtain a pruned part and a residual part, wherein inference on the pruned part is performed by the CPU and inference on the residual part is performed by the chip, together serving as the model algorithm;
S4, installing the model algorithm on the camera;
and S5, starting the model algorithm, and transmitting the alarm pictures obtained by the camera to a cloud platform.
The detection model is an improved ssd model; a base module of the improved ssd model comprises a first dense layer and a second dense layer, the first dense layer containing one 3 x 3 convolution kernel for capturing small-scale targets, and the second dense layer containing two 3 x 3 convolution kernels for learning the visual characteristics of large-scale targets.
Wherein the S2 comprises:
the detection model adopts a conv + bn + relu combination for calculation.
Wherein the S3 comprises:
pruning the Flatten layers, PriorBox layers, Concat layers, Reshape layer, Softmax layer and DetectionOutput layer from the prototxt model file of the improved ssd model, as the pruned part;
and converting the pruned prototxt model file and the caffemodel weight file into a wk model for the HiSilicon chip, as the residual part.
Wherein, performing inference on the pruned part with the CPU comprises:
performing inference on the PriorBox layer with SVP_NNIE_Ssd_PriorBoxForward;
performing inference on the Softmax layer with SVP_NNIE_Ssd_SoftmaxForward;
and performing inference on the DetectionOutput layer with SVP_NNIE_Ssd_DetectionOutForward.
Performing inference on the PriorBox layer with SVP_NNIE_Ssd_PriorBoxForward comprises:
calculating a square box from the minimum side length minsize and the maximum side length maxsize of each PriorBox layer;
calculating rectangular boxes from the aspect ratio aspect_ratio;
and mapping each PriorBox layer's feature map back to positions in the original image, the feature map sizes being 2, 19, 10, 5, 3 and 1.
Performing inference on the Softmax layer with SVP_NNIE_Ssd_SoftmaxForward comprises:
using f_i as the confidence to perform inference on the softmax layer.
Wherein, performing inference on the DetectionOutput layer with SVP_NNIE_Ssd_DetectionOutForward comprises:
decoding only the top-k predicted values of the DetectionOutput layer into real image coordinate values for the inference calculation.
Wherein the S4 comprises:
cross-compiling the model algorithm with arm-himix200-linux to obtain compiled files;
packaging the compiled files with the rpmbuild tool to obtain a package file;
and installing the package file on the camera online.
Wherein the S1 comprises:
the pitch angle range of the camera is-15 degrees.
Compared with the prior art, the video analysis method based on the intelligent camera provided by the embodiment of the invention has the following advantages:
according to the video analysis method based on the intelligent camera, the network model is designed and detected in a directional mode through the intelligent camera, chip calculation and cpu calculation are divided, performance is greatly improved, installation and deployment are convenient, massive operation is not needed to be carried out through a server, hardware cost is greatly reduced, automatic control monitoring is adopted, the manual maintenance cost of a construction site is effectively reduced, algorithm identification accuracy is high, real-time performance is good, and the requirement of engineering application can be met. And adopt this kind of screen analysis mode, realized the shooting of camera and the integration of analysis, need not to carry out the analysis after passing back data, but directly carry out the analysis at the camera, only need pass back analysis result can, realize deploying promptly and use the effect, reduce the deployment cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart illustrating steps of an embodiment of a video analysis method based on an intelligent camera according to the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, fig. 1 is a schematic flowchart illustrating steps of an embodiment of a video analysis method based on an intelligent camera according to the present application.
In a specific embodiment, the video analysis method based on an intelligent camera provided by the invention comprises the following steps:
S1, installing a camera for monitoring a specified direction at a preset position. The number of cameras monitoring the designated direction is not limited; one or more can be arranged as needed.
S2, designing a labeled object, and designing and training a detection model according to the labeled object to obtain a monitoring model. The labeled object is the final object the model detects, such as a safety helmet, a reflective garment, or a person — for example, detecting whether a construction worker wears a safety helmet, whether a person in a designated area wears a reflective garment, or whether anyone is present in a dangerous area.
S3, pruning the operators that are not supported by the chip from the monitoring model to obtain a pruned part and a residual part, wherein inference on the pruned part is performed by the CPU and inference on the residual part is performed by the chip, together serving as the model algorithm. Because the chip and the CPU each handle part of the calculation, no dedicated server is needed for the computation, which reduces hardware cost.
S4, installing the model algorithm on the camera. After the algorithm and the detection model training are completed, the detection model is downloaded and installed in the camera and integrated with the camera's capture function, so the back end only needs to receive the processed pictures. No processing server needs to be set up, which saves the space a server would occupy; the system is usable as soon as the camera is installed, so the hardware and labor costs of deployment are lower.
And S5, starting the model algorithm, and transmitting the alarm picture obtained by the camera to a cloud platform.
Through the intelligent camera, the directional design of the detection network model, and the division of computation between the chip and the CPU, performance is greatly improved, installation and deployment are convenient, no server is required for heavy computation, hardware cost is greatly reduced, and automatic monitoring effectively reduces the manual maintenance cost of a construction site; the algorithm has high recognition accuracy and good real-time performance, meeting the requirements of engineering applications. This video analysis mode integrates capture and analysis in the camera, so no data needs to be transmitted back for analysis: the analysis is performed directly on the camera and only the results are transmitted back, achieving a plug-and-play effect and reducing deployment cost.
In the present invention, the detection model is not limited. To increase detection speed, in one embodiment the detection model is an improved ssd model whose base module comprises a first dense layer and a second dense layer: the first dense layer contains one 3 x 3 convolution kernel for capturing the small-scale targets, and the second dense layer contains two 3 x 3 convolution kernels for learning the visual characteristics of the large-scale targets.
Using the two dense layers yields receptive fields of different scales, which can be compared with each other and selected flexibly according to the required detection precision, or combined in different ways to obtain the desired detection effect.
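As an illustrative sketch (not part of the patented implementation), the different perception scales of the two dense layers follow from the receptive-field arithmetic of stacked stride-1 convolutions: one 3 x 3 kernel sees a 3 x 3 region, while two stacked 3 x 3 kernels see a 5 x 5 region.

```python
def receptive_field(kernel_sizes):
    """Receptive field of a stack of stride-1, dilation-1 convolutions:
    each k x k layer adds (k - 1) to the field."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# First dense layer: a single 3x3 convolution -> small receptive field,
# suited to capturing small-scale targets.
small_branch = receptive_field([3])       # 3

# Second dense layer: two stacked 3x3 convolutions -> larger field,
# suited to learning features of large-scale targets.
large_branch = receptive_field([3, 3])    # 5
```

This is why the first dense layer suits small-scale targets and the second suits large-scale targets.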
The existing DenseNet uses the pre-activation combination bn + relu + conv. To increase speed, in one embodiment, the S2 comprises:
the detection model adopts a conv + bn + relu combination for calculation.
The conv + bn + relu combination allows the convolution and the BN to be merged into a single operation, accelerating the inference stage.
The present invention includes but is not limited to the above combination; a better combination, if available, can be substituted to achieve the desired calculation effect, and the invention is not limited in this respect.
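The speedup from placing BN directly after the convolution comes from folding the BatchNorm parameters into the convolution weights offline. A minimal numpy sketch (illustrative only: a 1 x 1 convolution over C channels is written as a matrix multiply, and all parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1x1 convolution over C channels reduces to y = W x + b per pixel.
C = 4
W = rng.standard_normal((C, C))
b = rng.standard_normal(C)

# Trained BatchNorm parameters (per channel).
gamma, beta = rng.standard_normal(C), rng.standard_normal(C)
mean, var, eps = rng.standard_normal(C), rng.random(C) + 0.5, 1e-5

# BN after conv: z = gamma * (y - mean) / sqrt(var + eps) + beta.
# Fold BN into the conv weights once, offline:
scale = gamma / np.sqrt(var + eps)
W_fused = scale[:, None] * W
b_fused = scale * (b - mean) + beta

x = rng.standard_normal(C)
y = W @ x + b
z_separate = gamma * (y - mean) / np.sqrt(var + eps) + beta
z_fused = W_fused @ x + b_fused      # one matmul instead of conv + BN

assert np.allclose(z_separate, z_fused)
```

With conv + bn + relu, the relu is then applied to the fused output, so at inference time the network runs one operation fewer per base module.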
In the invention, after the design of the detection model is finished, it needs to be trained; the number of training iterations and the images used for training are not limited. In one embodiment, the model is trained on an Nvidia RTX 2080 Ti and saved after training.
In the present invention, the monitoring model obtained after training needs to be pruned and ported: one part performs inference on the CPU, the other on the chip. The invention does not limit the chip selected or the part to be pruned. In one embodiment, the S3 comprises:
pruning the Flatten layers, PriorBox layers, Concat layers, Reshape layer, Softmax layer and DetectionOutput layer from the prototxt model file of the improved ssd model, as the pruned part;
and converting the pruned prototxt model file and the caffemodel weight file into a wk model for the HiSilicon chip, as the residual part.
The operators supported by the HiSilicon chip differ from those in the model, so the unsupported operators must be pruned: the Flatten layers (13 in total), PriorBox layers (6 in total), Concat layers (3), Reshape layer, Softmax layer and DetectionOutput layer are deleted from the prototxt model file, and the pruned prototxt file and the caffemodel weight file are then converted into the wk model required by the HiSilicon chip using the RuyiStudio tool.
It should be noted that other chips may be used in the present invention, in which case the pruned part may consist of different layers; the invention is not limited in this respect.
In the present invention, the division of the inference performed on the CPU is not limited. In one embodiment, performing inference on the pruned part with the CPU comprises:
performing inference on the PriorBox layer with SVP_NNIE_Ssd_PriorBoxForward;
performing inference on the Softmax layer with SVP_NNIE_Ssd_SoftmaxForward;
and performing inference on the DetectionOutput layer with SVP_NNIE_Ssd_DetectionOutForward.
Specifically, in an embodiment, performing inference on the PriorBox layer with SVP_NNIE_Ssd_PriorBoxForward comprises:
calculating a square box from the minimum side length minsize and the maximum side length maxsize of each PriorBox layer;
calculating rectangular boxes from the aspect ratio aspect_ratio;
and mapping each PriorBox layer's feature map back to positions in the original image, the feature map sizes being 2, 19, 10, 5, 3 and 1.
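The three PriorBox steps above can be sketched as follows. This is a hypothetical reimplementation following the standard SSD prior-box conventions, not the patent's SVP_NNIE_Ssd_PriorBoxForward code; the minsize, maxsize and aspect-ratio values are illustrative only.

```python
import math

def prior_boxes_for_cell(cx, cy, min_size, max_size, aspect_ratios):
    """Prior boxes for one feature-map cell as (cx, cy, w, h) in pixels.
    cx, cy are the cell centre already mapped back to the original image."""
    # Square box from the minimum side length.
    boxes = [(cx, cy, min_size, min_size)]
    # Square box whose side is the geometric mean of min and max side lengths.
    s = math.sqrt(min_size * max_size)
    boxes.append((cx, cy, s, s))
    # Rectangular boxes from each aspect ratio (and its inverse).
    for ar in aspect_ratios:
        w, h = min_size * math.sqrt(ar), min_size / math.sqrt(ar)
        boxes.append((cx, cy, w, h))
        boxes.append((cx, cy, h, w))
    return boxes

# Map the top-left cell of a 19x19 feature map back to a 300x300 input image.
feat, img = 19, 300
step = img / feat
cx = cy = 0.5 * step
boxes = prior_boxes_for_cell(cx, cy, min_size=60, max_size=111,
                             aspect_ratios=[2])
# 1 minsize square + 1 geometric-mean square + 2 rectangles = 4 priors
```

Each feature-map size in the pyramid is swept this way, so coarser maps (smaller feature sizes) produce priors covering larger regions of the original image.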
In the Softmax layer, because the CPU part is generally an ARM processor with relatively weak performance, the CPU inference must be optimized for speed. The safety-helmet detection algorithm has 40998 anchors, and the Softmax layer inference uses the formula:

p_i = exp(f_i) / Σ_j exp(f_j)

so at least 40998 calls of the exp() function are needed; however exp() is relatively time-consuming, requiring up to 20-30 ms.
Therefore, taking this into account, the softmax calculation is removed and f_i is used directly as the confidence; experiments show that this does not affect precision.
That is, in one embodiment, performing inference on the Softmax layer with SVP_NNIE_Ssd_SoftmaxForward comprises:
using f_i as the confidence to perform inference on the softmax layer.
The present invention includes but is not limited to the above manner of performing inference for the softmax layer.
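The reason dropping softmax is safe within a single anchor is that softmax is strictly monotone in its inputs: ranking the classes by the raw score f_i selects the same winning class as ranking by the softmax probability, while avoiding the expensive exp() calls. A small sketch with illustrative values:

```python
import math

def softmax(logits):
    m = max(logits)                        # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(xs):
    return max(range(len(xs)), key=xs.__getitem__)

# Class logits f_i for one anchor (illustrative values).
logits = [0.3, 2.1, -1.0]
probs = softmax(logits)

# softmax is strictly increasing in each logit, so the winning class is the
# same whether we rank by probabilities or by the raw logits f_i -- which is
# what lets the ~40998 exp() calls per frame be skipped on a weak ARM CPU.
assert argmax(probs) == argmax(logits)
```

The patent's claim that precision is unaffected when f_i replaces the probability as the confidence is an experimental finding, since confidences of different anchors are no longer normalized identically.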
In the DetectionOutput layer, analysis of the original function SVP_NNIE_Ssd_DetectionOutForward shows that it decodes the predicted values of all anchors into actual image coordinates, calling the exp() function for each anchor. However, the NMS calculation actually needs only the top-k anchors with the highest confidence, so modifying the decoding to compute real image coordinates only for the top-k anchors greatly reduces the running time.
Therefore, in an embodiment, performing inference on the DetectionOutput layer with SVP_NNIE_Ssd_DetectionOutForward comprises:
decoding only the top-k predicted values of the DetectionOutput layer into real image coordinate values for the inference calculation.
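The top-k decoding idea can be sketched as follows. This is a simplified, hypothetical decode (real SSD decoding also applies exp() to the size offsets, which is exactly the per-anchor cost being avoided for everything outside the top-k):

```python
def decode_topk(anchors, scores, deltas, k):
    """Decode only the k highest-scoring anchors to image coordinates.

    anchors: list of (cx, cy, w, h) prior boxes
    scores:  per-anchor confidence (e.g. the raw f_i values)
    deltas:  per-anchor predicted centre offsets (dx, dy) -- simplified
    """
    # Rank anchors by confidence; anything below rank k is never decoded.
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    decoded = []
    for i in order[:k]:                    # decode work only for top-k
        cx, cy, w, h = anchors[i]
        dx, dy = deltas[i]
        decoded.append((cx + dx * w, cy + dy * h, w, h, scores[i]))
    return decoded

anchors = [(10, 10, 4, 4), (50, 50, 8, 8), (90, 90, 6, 6)]
scores = [0.2, 0.9, 0.6]
deltas = [(0.1, 0.1), (0.0, 0.5), (-0.2, 0.0)]
top2 = decode_topk(anchors, scores, deltas, k=2)
# Only the two most confident anchors are decoded before NMS.
```

With tens of thousands of anchors but a small k, the decoding cost drops from O(anchors) exp-heavy operations to O(k), matching the patent's observation about reduced running time.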
In the present invention, after the detection model is completed it needs to be installed in the camera; this process is not limited. In an embodiment, the S4 comprises:
cross-compiling the model algorithm with arm-himix200-linux to obtain compiled files;
packaging the compiled files with the rpmbuild tool to obtain a package file;
and installing the package file on the camera online.
Because compact, lightweight files are used, the package is only 5-6 MB, which greatly improves installation efficiency.
After installation, the camera is started. An algorithm start button is provided on the camera's web interface; when it is clicked, the camera's stream channel reads the video stream and passes it to the algorithm model in the camera, where inference is performed by the chip together with the optimized CPU. Taking safety-helmet detection on a construction site as an example, if a worker is detected without a helmet, an alarm picture of the violation is transmitted to the cloud; other detection targets, such as reflective garments or intrusion into dangerous areas, are alarmed to the cloud in the same way.
Of course, an on-site alarm or the like may also be provided and triggered upon detection.
In the present invention, the installation of the camera is not limited; generally, the S1 comprises:
the pitch angle range of the camera is -15 degrees.
In summary, in the video analysis method based on an intelligent camera provided by the embodiment of the invention, the detection network model is designed specifically for the intelligent camera and the computation is divided between the chip and the CPU, which greatly improves performance, makes installation and deployment convenient, avoids the need for a server for heavy computation, greatly reduces hardware cost, and, through automatic monitoring, effectively reduces the manual maintenance cost of a construction site; the algorithm has high recognition accuracy and good real-time performance, meeting the requirements of engineering applications. This video analysis mode integrates capture and analysis in the camera: no data needs to be transmitted back for analysis; the analysis is performed directly on the camera and only the results are transmitted back, achieving a plug-and-play effect and reducing deployment cost.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A video analysis method based on an intelligent camera is characterized by comprising the following steps:
s1, installing a camera for monitoring a specified direction at a preset position;
s2, designing a labeled object, and designing and training a detection model according to the labeled object to obtain a monitoring model;
s3, pruning the operators that are not supported by the chip from the monitoring model to obtain a pruned part and a residual part, wherein inference on the pruned part is performed by a CPU and inference on the residual part is performed by the chip, together serving as a model algorithm;
performing inference on the pruned part with the CPU comprises:
performing inference on the PriorBox layer with SVP_NNIE_Ssd_PriorBoxForward;
performing inference on the Softmax layer with SVP_NNIE_Ssd_SoftmaxForward;
performing inference on the DetectionOutput layer with SVP_NNIE_Ssd_DetectionOutForward;
performing inference on the PriorBox layer with SVP_NNIE_Ssd_PriorBoxForward comprises:
calculating a square box from the minimum side length minsize and the maximum side length maxsize of each PriorBox layer;
calculating rectangular boxes from the aspect ratio aspect_ratio;
mapping each PriorBox layer's feature map back to positions in the original image, the feature map sizes being 2, 19, 10, 5, 3 and 1;
performing inference on the Softmax layer with SVP_NNIE_Ssd_SoftmaxForward comprises:
using f_i as the confidence to perform inference on the softmax layer;
performing inference on the DetectionOutput layer with SVP_NNIE_Ssd_DetectionOutForward comprises:
decoding only the top-k predicted values of the DetectionOutput layer into real image coordinate values for the inference calculation;
s4, installing the model algorithm to the camera;
and S5, starting the model algorithm, and transmitting the alarm picture obtained by the camera to a cloud platform.
2. The intelligent camera-based video analysis method of claim 1, wherein the detection model is an improved ssd model, and a base module of the improved ssd model comprises a first dense layer and a second dense layer, the first dense layer comprising one 3 x 3 convolution kernel for capturing small-scale targets, and the second dense layer comprising two 3 x 3 convolution kernels for learning visual characteristics of large-scale targets.
3. The intelligent camera-based video analysis method according to claim 2, wherein the S2 comprises:
the detection model algorithm is calculated by adopting a conv + bn + relu combination.
4. The intelligent camera-based video analysis method according to claim 3, wherein the S3 comprises:
pruning the Flatten layers, PriorBox layers, Concat layers, Reshape layer, Softmax layer and DetectionOutput layer from the prototxt model file of the improved ssd model, as the pruned part;
and converting the pruned prototxt model file and the caffemodel weight file into a wk model for the HiSilicon chip, as the residual part.
5. The intelligent camera-based video analysis method according to claim 1, wherein the S4 comprises:
cross-compiling the model algorithm with arm-himix200-linux to obtain compiled files;
packaging the compiled files with the rpmbuild tool to obtain a package file;
and installing the package file on the camera online.
6. The intelligent camera-based video analysis method according to claim 1, wherein the S1 comprises:
the pitch angle range of the camera is-15 degrees.
CN202011314510.8A 2020-11-20 2020-11-20 Video analysis method based on intelligent camera Active CN112437275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011314510.8A CN112437275B (en) 2020-11-20 2020-11-20 Video analysis method based on intelligent camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011314510.8A CN112437275B (en) 2020-11-20 2020-11-20 Video analysis method based on intelligent camera

Publications (2)

Publication Number Publication Date
CN112437275A CN112437275A (en) 2021-03-02
CN112437275B true CN112437275B (en) 2023-03-24

Family

ID=74693345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011314510.8A Active CN112437275B (en) 2020-11-20 2020-11-20 Video analysis method based on intelligent camera

Country Status (1)

Country Link
CN (1) CN112437275B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783642A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Image identification method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6644231B1 (en) * 2019-04-26 2020-02-12 Awl株式会社 Image analysis device and image analysis system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783642A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Image identification method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
sample_svp_nnie_software.c; Ghustwb; 《https://github.com/Ghustwb/Hi3559_NNIE_SSD/blob/master/sample_nnie_software/sample_svp_nnie_software.c》; 20190718; code lines 1415-1831 *
Porting a deep learning algorithm model to a HiSilicon AI chip; uayaon; 《https://zhuanlan.zhihu.com/p/103776174》; 20200122; pages 1-10 *
Research on deep learning pedestrian detection algorithms for small targets; 陈奇华; 《Master's Thesis Electronic Journal》; 20200115; Sections 4.2 and 5.2; Figures 4-6 and 5-1 *

Also Published As

Publication number Publication date
CN112437275A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
Doshi et al. From satellite imagery to disaster insights
Lestari et al. Fire hotspots detection system on CCTV videos using you only look once (YOLO) method and tiny YOLO model for high buildings evacuation
Wu et al. Real-time video fire detection via modified YOLOv5 network model
CN112528971B (en) Power transmission line abnormal target detection method and system based on deep learning
US11482030B2 (en) System and method for automatic detection and recognition of people wearing personal protective equipment using deep learning
CN112070043A (en) Safety helmet wearing convolutional network based on feature fusion, training and detecting method
Ma et al. Smart fire alarm system with person detection and thermal camera
CN112437275B (en) Video analysis method based on intelligent camera
CN112861646A (en) Cascade detection method for oil unloading worker safety helmet in complex environment small target recognition scene
CN116846059A (en) Edge detection system for power grid inspection and monitoring
JP7480838B2 (en) Road deterioration diagnosis device, road deterioration diagnosis method, and program
CN117523437A (en) Real-time risk identification method for substation near-electricity operation site
CN116310979B (en) Image identification method, risk management and control platform and method, and safety management and control platform
CN117011772A (en) Risk prompting method, device and storage medium for power transmission line
CN111222477A (en) Vision-based method and device for detecting two hands leaving steering wheel
CN112990169B (en) Coal-rock interface identification method and coal cutting track determination method and device
CN115482489A (en) Improved YOLOv 3-based power distribution room pedestrian detection and trajectory tracking method and system
CN115661704A (en) Multi-target detection method for mine excavation environment
CN113569956A (en) Mountain fire disaster investigation and identification method based on AI algorithm
CN114694073A (en) Intelligent detection method and device for wearing condition of safety belt, storage medium and equipment
CN114241311A (en) Detection method for foreign matter and environmental abnormal state of power transmission line
CN114490825A (en) Safety analysis model of nuclear reactor equipment
CN114821444A (en) Unmanned overhead traveling crane operation area safety detection method based on visual perception
KR102647428B1 (en) System and method for controlling artificial intelligence smart wind power capable of power prediction
CN117893846A (en) Electric bucket tooth counting method based on deep learning target detection algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Block C, 5th Floor, Building B, Paradise Software Park, No. 3 Xidoumen Road, Xihu District, Hangzhou, Zhejiang 310000

Applicant after: Pin Ming Technology Co.,Ltd.

Address before: 310012 Room C, 5 / F, building B, Paradise Software Park, 3 xidoumen Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU PINMING SAFETY CONTROL INFORMATION TECHNOLOGY CO.,LTD.

GR01 Patent grant
GR01 Patent grant