CN111242010A - Method for detecting littering behavior and identifying the litterer's identity based on edge AI - Google Patents

Method for detecting littering behavior and identifying the litterer's identity based on edge AI

Info

Publication number
CN111242010A
CN111242010A (application CN202010026367.6A)
Authority
CN
China
Prior art keywords
edge
garbage
throwing
identity
identifying
Prior art date
Legal status
Pending
Application number
CN202010026367.6A
Other languages
Chinese (zh)
Inventor
余齐齐 (Yu Qiqi)
Current Assignee
Xiamen Bohai Zhongtian Information Technology Co ltd
Original Assignee
Xiamen Bohai Zhongtian Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Bohai Zhongtian Information Technology Co ltd filed Critical Xiamen Bohai Zhongtian Information Technology Co ltd
Priority to CN202010026367.6A
Publication of CN111242010A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of detecting littering and identifying the identity of violators, and in particular to a method for detecting littering behavior and identifying the litterer's identity based on an edge AI algorithm. The edge AI computing approach provides the computing and storage resources needed to accelerate recognition, so the method can quickly identify whether a littering action has occurred. When the edge AI algorithm judges that garbage has been thrown, face recognition technology is used to identify the litterer, which supports identity tracing: the violation is traced back to the violator, helping both to educate the individual who littered and to establish a system of accountability.

Description

Method for detecting littering behavior and identifying the litterer's identity based on edge AI
Technical Field
The invention relates to the technical field of detecting littering and identifying the identity of violators, and in particular to a method for detecting littering behavior and identifying the litterer's identity based on edge AI.
Background
With the rapid socio-economic development of China, raising the grade of cities, improving the city image and beautifying the urban environment have become important subjects placed before governments; improving the environment, optimizing city management, building a new city image and raising the city grade promote the full development of civilized-city activities.
With the continuous deepening of urbanization, the insufficient efficiency of urban environment monitoring has become an urgent problem. Household garbage is an important factor affecting the urban environment: as urban living standards rise, large amounts of household garbage are produced, and effective control of this garbage has become a major issue for urban environmental protection. In the past, large amounts of randomly discarded garbage damaged people's water and soil resources and directly threatened human survival and development. The collection, processing and analysis of information, an important part of sanitation work, still lacks a mature solution; littering is usually discovered and stopped only by sanitation workers, which is inefficient. Under these circumstances the original environment monitoring mode can no longer meet the actual needs of the digital city, and a comprehensive, efficient environment detection system based on artificial intelligence has become an inevitable demand of urbanization.
However, littering remains as widespread as ever; it is difficult to control effectively and depends on people's self-discipline. An existing artificial intelligence algorithm model identifies whether littering behavior exists by first judging whether a person has entered a no-littering area and then broadcasting a voice reminder that garbage must not be thrown there.
Such a method only serves as a reminder; its effect depends on the character of the individual, and it cannot truly enforce the prohibition on littering.
Disclosure of Invention
In order to solve the problem that the prior art mentioned in the Background can only remind people not to throw garbage and cannot confirm the identity of the person who litters, the invention provides a method for detecting littering behavior and identifying the litterer's identity based on edge AI, which comprises the following steps:
S100, a cloud algorithm center performs raw-data algorithm model learning using the Faster R-CNN framework, a deep-learning target detection algorithm based on Region Proposal;
S200, a cloud instance center distributes the raw-data instance model obtained from the Faster R-CNN framework learning to each edge computing node through the cloud offloading mode of cloud computing;
S300, an on-site camera transmits the captured video and pictures of a person throwing garbage to an edge computing node; the edge computing node performs the calculation with the existing instance model and judges from the result whether the person has committed an illegal act of littering; if so, an instruction is sent to the on-site terminal device, and the on-site terminal device issues a prompt according to the instruction;
S400, the edge computing node transmits the judgment result and the captured video and pictures of the person throwing garbage to the cloud computing center for storage and review.
On the basis of the above scheme, further, the raw-data instance model is obtained through the following steps:
S110, construct an image database for recognizing persons and littering behavior, and perform deep model training and labeling on the images to form the source data;
S120, perform AI modeling based on the source data and the Faster R-CNN algorithm network to generate the raw-data instance model.
On the basis of the above scheme, further, the step S110 comprises the following steps:
S111, obtain pictures containing persons and littering behavior as the data source;
S112, perform deep model training and labeling on the data source to form the source data, the labels comprising the image path, the image name, the image width and height, the image dimension, the name of the labeled object and the xy coordinate values of the Bbox.
On the basis of the above scheme, further, the step S120 comprises the following steps:
S121, extract the feature maps of candidate images from the original image through a multilayer convolutional neural network, the feature maps being shared by the subsequent RPN network and the fully connected layers;
S122, pass the feature maps through the RPN layer to generate region proposals: a set of anchors is first generated, then cropped and filtered; softmax judges whether each anchor belongs to the foreground or the background, while bounding-box regression corrects the anchors to form more accurate proposals;
S123, pool the proposals on the RoI Pooling layer into proposal feature maps of fixed size;
S124, subject the proposal feature maps to the fully connected operation, perform specific category classification with the fully connected layers and softmax, and complete the bounding-box regression operation with the L1 loss to obtain the precise position of the object, thereby generating the raw-data instance model.
On the basis of the above scheme, further, the S122 comprises the following:
Faster R-CNN adopts the SPP layer from SPP-net and introduces a multi-task loss function for computing the bbox regression and classification losses.
On the basis of the above scheme, further, the multi-task loss function is:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

wherein p_i is the probability predicted for anchor i of being the target; p_i* is the ground-truth label of the anchor (1 for a positive anchor, 0 for a negative one); N_cls is the total number of anchors entering the classification term; t_i = (t_x, t_y, t_w, t_h) is the vector of offsets predicted for the anchor by the RPN in the training stage; t_i* is a vector of the same dimension as t_i, the actual offset of the anchor relative to gt in the RPN training stage, gt being the labeled correct target; λ is a balancing weight between the two terms; L_reg is the regression loss; L_cls is the classification loss; and N_reg is the number of positive anchors.
On the basis of the above scheme, further, the bbox regression and classification losses are computed over a four-dimensional vector (x, y, w, h), with the following steps:
1) translation (Δx, Δy): Δx = P_w·d_x(P), Δy = P_h·d_y(P);
2) scaling (S_w, S_h): S_w = P_w·d_w(P), S_h = P_h·d_h(P);
wherein x, y, w, h respectively denote the center-point coordinates, width and height of the box; P_w and P_h are the width and height of the proposal box P; d_x(P) and d_y(P) are the learned horizontal and vertical offset functions; d_w(P) and d_h(P) are the learned width and height scaling functions; and S_w and S_h are the target width and height scalings.
On the basis of the above scheme, the device model adopted by the edge computing node is VA-TU-DL, preferably VA-TU-EDGE-B01.
Compared with the prior art, the method for detecting littering behavior and identifying the litterer's identity based on edge AI of the present invention has the following technical effects:
the invention can quickly and accurately identify whether littering behavior occurs in a no-littering area and can identify the identity of the litterer, helping to establish a complete no-littering system.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of the method for detecting littering behavior and identifying the litterer's identity based on edge AI according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The invention provides a method for detecting littering behavior and identifying the litterer's identity based on edge AI, which comprises the following steps:
S100, a cloud algorithm center performs raw-data algorithm model learning using the Faster R-CNN framework, a deep-learning target detection algorithm based on Region Proposal;
S200, a cloud instance center distributes the raw-data instance model obtained from the Faster R-CNN framework learning to each edge computing node through the cloud offloading mode of cloud computing;
S300, an on-site camera transmits the captured video and pictures of a person throwing garbage to an edge computing node; the edge computing node performs the calculation with the existing instance model and judges from the result whether the person has committed an illegal act of littering; if so, an instruction is sent to the on-site terminal device, and the on-site terminal device issues a prompt according to the instruction;
S400, the edge computing node transmits the judgment result and the captured video and pictures of the person throwing garbage to the cloud computing center for storage and review.
In specific implementation, as shown in FIG. 1, the method comprises the following steps:
S100, a cloud algorithm center performs raw-data algorithm model learning using the Faster R-CNN framework, a deep-learning target detection algorithm based on Region Proposal;
S200, a cloud instance center distributes the raw-data instance model obtained from the Faster R-CNN framework learning to each edge computing node through the cloud offloading mode of cloud computing;
S300, an on-site camera transmits the captured video and pictures of a person throwing garbage to an edge computing node; the edge computing node performs the calculation with the existing instance model and judges from the result whether the person has committed an illegal act of littering; if so, an instruction is sent to the on-site terminal device, and the on-site terminal device issues a prompt according to the instruction.
The edge computing device detects that a person holding garbage has entered the monitored site and begins tracking the person and the garbage in their hands until the garbage is thrown out. The edge computing device then computes a result value from the video of the throwing process together with the instance model, and judges from this result whether the person has committed an illegal act of littering; specifically, the instance model compares the result value with a preset threshold, and a value below the threshold is judged to mean the garbage was thrown. If the illegal act exists, an instruction is sent to the on-site terminal device, and the on-site terminal device issues a prompt according to the instruction; the prompt may be a played voice message or another related operation, for example broadcasting through a sound-producing device or an alarm bell.
It should be noted that the on-site video data need not be acquired only by a camera; those skilled in the art can implement the method with other devices that acquire image or video data, which is not described redundantly here.
S400, the edge computing node transmits the judgment result and the captured video and pictures of the person throwing garbage to the cloud computing center for storage and review.
Because the edge computing node transmits the judgment result and the captured video and pictures of the person throwing garbage to the cloud computing center for storage and review, the identity of the violator can be conveniently identified and the violation can be traced back to the violator, which helps to educate the individual who littered and to establish a system of accountability.
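For illustration, the decision flow of steps S300 and S400 can be sketched in Python as below. This is a minimal sketch under stated assumptions: the callables read_frame, score_frame, alert_terminal and upload are hypothetical placeholders standing in for the camera, the instance model, the on-site terminal and the cloud link, and the threshold value is invented, since the patent specifies a threshold but not its value.

```python
from typing import Any, Callable

THRESHOLD = 0.5  # assumed value; the patent defines a threshold but never states it

def edge_node_step(
    read_frame: Callable[[], Any],          # on-site camera capture (S300)
    score_frame: Callable[[Any], float],    # instance-model calculation on the edge node
    alert_terminal: Callable[[str], None],  # on-site terminal prompt, e.g. voice broadcast
    upload: Callable[[Any, bool], None],    # transfer to the cloud computing center (S400)
) -> bool:
    frame = read_frame()
    score = score_frame(frame)
    # The description compares the model's result value with a preset threshold;
    # a value below the threshold is judged as an illegal act of throwing garbage.
    littering = score < THRESHOLD
    if littering:
        alert_terminal("Littering is prohibited here")  # prompt per S300
    upload(frame, littering)  # judgment result and footage kept for identity tracing (S400)
    return littering
```

Keeping only this thin decision loop on the edge node, while training and storage remain in the cloud, is the division of labor that steps S100 to S400 describe.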
The method for detecting littering behavior and identifying the litterer's identity based on edge AI can quickly and accurately identify whether littering occurs in a no-littering area and can identify the litterer's identity, helping to establish a complete no-littering system.
Preferably, the raw-data instance model is obtained through the following steps:
S110, construct an image database for recognizing persons and littering behavior, and perform deep model training and labeling on the images to form the source data;
S120, perform AI modeling based on the source data and the Faster R-CNN algorithm network to generate the raw-data instance model.
In specific implementation, as a CNN-based target detection method, Faster R-CNN first extracts the feature maps of the original image with convolutional and pooling layers, providing support for the subsequent RPN layer and fully connected layers; the RPN network is mainly used to generate region proposals.
First, after a picture of arbitrary size passes through the Conv layers, the resulting feature map is 1/16 the size of the original image. The feature map then enters the intermediate-layer filter to generate the anchor boxes, and the network judges whether each anchor belongs to the foreground or the background, that is, whether it is or is not the object: on the M×N feature map, a 1×1 convolution outputs a W×H×18 map, after which a softmax function and a reshape layer decide foreground versus background, while bounding-box regression corrects the anchor boxes to form accurate proposals.
Second, the proposals generated by the RPN are combined with the feature map obtained from the last layer of VGG16 to produce proposal feature maps of fixed size, which are then passed on for target recognition and localization through the fully connected operation.
Finally, the fixed-size feature maps formed on the RoI Pooling layer undergo the fully connected operation and softmax classifies the specific categories: the softmax function maps the feature map to values in (0, 1) whose cumulative sum is 1, satisfying the properties of a probability, so the value with the largest probability can be selected as the prediction target, and classification is performed according to these values. The bounding-box regression operation is then completed with the L1 loss to obtain the precise position of the object; that is, rpn_loss_bbox uses the smooth L1 function, prediction labels are assigned to the anchor boxes in rpn-data, and the offsets between the anchor boxes and gt_boxes are computed. Meanwhile, bounding-box regression yields the position offset bbox_pred of each region proposal, and the regression produces a more accurate target detection position, finally giving the precise location of the object.
The raw-data instance model is generated in the above manner.
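The patent does not name a concrete implementation of this pipeline. As one illustration only, the publicly available Faster R-CNN in torchvision (assuming torchvision 0.13 or later) bundles the stages described above (shared convolutional feature maps, an RPN generating region proposals, RoI pooling, and a box head with softmax classification and bounding-box regression):

```python
# Illustrative sketch, not the patented model: torchvision's Faster R-CNN
# runs backbone -> RPN -> RoI pooling -> classification/regression head
# internally, matching steps S121-S124.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)  # stand-in for one camera frame (C, H, W)
with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

# Each detection carries a class label, a confidence score and a box;
# an edge node would compare the score against its preset threshold.
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score > 0.5:  # example confidence cut-off, not from the patent
        print(int(label), round(float(score), 3), [round(v, 1) for v in box.tolist()])
```

In practice such a model would be fine-tuned on the littering image database of S110 rather than used with its default COCO weights.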
Preferably, the step S110 comprises the following steps:
S111, obtain pictures containing persons and littering behavior as the data source;
S112, perform deep model training and labeling on the data source to form the source data, the labels comprising the image path, the image name, the image width and height, the image dimension, the name of the labeled object and the xy coordinate values of the Bbox.
In specific implementation, deep model training and labeling are carried out in the annotation format listed in S112; the original publication shows this format in two table images, which are not reproducible here.
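Those fields match the Pascal VOC annotation layout commonly used to train Faster R-CNN. Under that assumption only, the following sketch builds one such annotation record; all values and the class name throwing_garbage are made up for illustration:

```python
# Sketch of one annotation with the fields listed in S112: image path and
# name, width/height, dimension (channel depth), labeled object name, and
# the Bbox xy coordinate values.
import xml.etree.ElementTree as ET

def make_voc_annotation(path, filename, width, height, depth, name, bbox):
    ann = ET.Element("annotation")
    ET.SubElement(ann, "path").text = path
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = str(depth)
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = name
    box = ET.SubElement(obj, "bndbox")
    for tag, value in zip(("xmin", "ymin", "xmax", "ymax"), bbox):
        ET.SubElement(box, tag).text = str(value)
    return ET.tostring(ann, encoding="unicode")

print(make_voc_annotation("/data/images", "litter_0001.jpg",
                          1920, 1080, 3, "throwing_garbage",
                          (412, 230, 688, 540)))
```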
preferably, the step S120 includes the steps of:
s121, extracting feature maps of the candidate images from the original images through a multilayer convolutional neural network, wherein the feature maps can be shared for use by a subsequent RPN network and a full connection layer;
s122, generating region prosages through RPN layers by the feature maps, firstly generating a stack of anchors, then cutting and filtering, judging whether the anchors belong to the foreground or the background through softmax, and meanwhile correcting the anchors by utilizing bounding box regression to form more accurate prosages;
s123, obtaining a propofol feature map with a fixed size on a Roi Pooling layer by the propofol;
s124, the generic feature map performs full-connection operation, specific category classification is performed on the full-connection layer and softmax, and a bounding box regression operation is completed by utilizing L1 Loss to obtain the accurate position of the object, so that an original data example model is generated.
Preferably, the S122 comprises the following:
Faster R-CNN adopts the SPP layer from SPP-net and introduces a multi-task loss function for computing the bbox regression and classification losses.
Preferably, the multi-task loss function is:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

wherein p_i is the probability predicted for anchor i of being the target; p_i* is the ground-truth label of the anchor (1 for a positive anchor, 0 for a negative one); N_cls is the total number of anchors entering the classification term; t_i = (t_x, t_y, t_w, t_h) is the vector of offsets predicted for the anchor by the RPN in the training stage; t_i* is a vector of the same dimension as t_i, the actual offset of the anchor relative to gt in the RPN training stage, gt being the labeled correct target; λ is a balancing weight between the two terms; L_reg is the regression loss; L_cls is the classification loss; and N_reg is the number of positive anchors.
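As a sketch only, this multi-task loss can be written out as follows, assuming PyTorch; the tensor names are illustrative and not part of the invention:

```python
# cls_logits: (N, 2) foreground/background logits for N anchors
# box_deltas: (N, 4) predicted offsets t_i
# box_targets: (N, 4) ground-truth offsets t_i*
# labels: (N,) long tensor holding p_i* (1 positive, 0 negative)
import torch
import torch.nn.functional as F

def rpn_multitask_loss(cls_logits, box_deltas, box_targets, labels, lam=1.0):
    # L_cls averaged over all anchors corresponds to the (1/N_cls) sum term.
    cls_loss = F.cross_entropy(cls_logits, labels)
    positive = labels == 1               # p_i* gates regression to positive anchors
    n_reg = max(int(positive.sum()), 1)  # N_reg, guarded against zero positives
    reg_loss = F.smooth_l1_loss(box_deltas[positive], box_targets[positive],
                                reduction="sum") / n_reg
    return cls_loss + lam * reg_loss     # lam plays the role of the weight λ
```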
Preferably, the bbox regression and classification losses are computed over a four-dimensional vector (x, y, w, h), with the following steps:
1) translation (Δx, Δy): Δx = P_w·d_x(P), Δy = P_h·d_y(P);
2) scaling (S_w, S_h): S_w = P_w·d_w(P), S_h = P_h·d_h(P);
wherein x, y, w, h respectively denote the center-point coordinates, width and height of the box; P_w and P_h are the width and height of the proposal box P; d_x(P) and d_y(P) are the learned horizontal and vertical offset functions; d_w(P) and d_h(P) are the learned width and height scaling functions; and S_w and S_h are the target width and height scalings.
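A minimal sketch applying these formulas exactly as written; note that the scaling here uses S_w = P_w·d_w(P) as given in this description, rather than the exponential form of the original R-CNN paper, and the function name is illustrative:

```python
def apply_bbox_regression(p, d):
    """p = (x, y, w, h) of proposal box P; d = (dx, dy, dw, dh) learned offsets."""
    x, y, w, h = p
    dx, dy, dw, dh = d
    delta_x = w * dx  # Δx = P_w · d_x(P)
    delta_y = h * dy  # Δy = P_h · d_y(P)
    s_w = w * dw      # S_w = P_w · d_w(P)
    s_h = h * dh      # S_h = P_h · d_h(P)
    # Shift the center and rescale width/height to approach the ground truth.
    return (x + delta_x, y + delta_y, s_w, s_h)

print(apply_bbox_regression((100.0, 80.0, 64.0, 128.0), (0.1, -0.05, 1.2, 0.9)))
```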
Preferably, the device model adopted by the edge computing node is VA-TU-DL.
In specific implementation, the edge device model is preferably VA-TU-EDGE-B01.
In order to test the practical application of the method for detecting littering behavior and identifying the litterer's identity based on edge AI, the following test is provided:
The instance center collected 100,000 face pictures and 100,000 littering pictures for labeling and training, and established a face recognition model and a littering recognition model. The edge devices were equipped with the face recognition model and the littering recognition model; the camera of an edge device collected 1,500 face pictures and 1,000 littering pictures every day, and the models analyzed the pictures collected by the camera and output the analysis results. The pictures collected on site were returned to the computing center for continued labeling and training; the accuracy of the face recognition model reached 98%, and the accuracy of the littering recognition model reached 90%.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A method for detecting littering behavior and identifying the litterer's identity based on edge AI, characterized by comprising the following steps:
S100, a cloud algorithm center performs raw-data algorithm model learning using the Faster R-CNN framework, a deep-learning target detection algorithm based on Region Proposal;
S200, a cloud instance center distributes the raw-data instance model obtained from the Faster R-CNN framework learning to each edge computing node through the cloud offloading mode of cloud computing;
S300, an on-site camera transmits the captured video and pictures of a person throwing garbage to an edge computing node; the edge computing node performs the calculation with the existing instance model and judges from the result whether the person has committed an illegal act of littering; if so, an instruction is sent to the on-site terminal device, and the on-site terminal device issues a prompt according to the instruction;
S400, the edge computing node transmits the judgment result and the captured video and pictures of the person throwing garbage to the cloud computing center for storage and review.
2. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 1, wherein the raw-data instance model is obtained through the following steps:
S110, constructing an image database for recognizing persons and littering behavior, and performing deep model training and labeling on the images to form the source data;
S120, performing AI modeling based on the source data and the Faster R-CNN algorithm network to generate the raw-data instance model.
3. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 2, wherein the step S110 comprises the following steps:
S111, obtaining pictures containing persons and littering behavior as the data source;
S112, performing deep model training and labeling on the data source to form the source data, the labels comprising the image path, the image name, the image width and height, the image dimension, the name of the labeled object and the xy coordinate values of the Bbox.
4. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 2, wherein the step S120 comprises the following steps:
S121, extracting the feature maps of candidate images from the original image through a multilayer convolutional neural network, the feature maps being shared by the subsequent RPN network and the fully connected layers;
S122, passing the feature maps through the RPN layer to generate region proposals: a set of anchor boxes is first generated, then cropped and filtered; softmax judges whether each anchor belongs to the foreground or the background, while bounding-box regression corrects the anchor boxes to form more accurate proposals;
S123, pooling the proposals on the RoI Pooling layer into proposal feature maps of fixed size;
S124, subjecting the proposal feature maps to the fully connected operation, performing specific category classification with the fully connected layers and softmax, and completing the bounding-box regression operation with the L1 loss to obtain the precise position of the object, thereby generating the raw-data instance model.
5. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 4, wherein the step S122 comprises the following:
Faster R-CNN adopts the SPP layer from SPP-net and introduces a multi-task loss function for computing the bbox regression and classification losses.
6. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 5, wherein the multi-task loss function is:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

wherein p_i is the probability predicted for anchor i of being the target; p_i* is the ground-truth label of the anchor (1 for a positive anchor, 0 for a negative one); N_cls is the total number of anchors entering the classification term; t_i = (t_x, t_y, t_w, t_h) is the vector of offsets predicted for the anchor by the RPN in the training stage; t_i* is a vector of the same dimension as t_i, the actual offset of the anchor relative to gt in the RPN training stage, gt being the labeled correct target; λ is a balancing weight between the two terms; L_reg is the regression loss; L_cls is the classification loss; and N_reg is the number of positive anchors.
7. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 5, wherein the bbox regression and classification losses are computed over a four-dimensional vector (x, y, w, h), with the following steps:
1) translation (Δx, Δy): Δx = P_w·d_x(P), Δy = P_h·d_y(P);
2) scaling (S_w, S_h): S_w = P_w·d_w(P), S_h = P_h·d_h(P);
wherein x, y, w, h respectively denote the center-point coordinates, width and height of the box; P_w and P_h are the width and height of the proposal box P; d_x(P) and d_y(P) are the learned horizontal and vertical offset functions; d_w(P) and d_h(P) are the learned width and height scaling functions; and S_w and S_h are the target width and height scalings.
8. The method for detecting littering behavior and identifying the litterer's identity based on edge AI according to claim 1, wherein the device model adopted by the edge computing node is VA-TU-DL.
CN202010026367.6A 2020-01-10 2020-01-10 Method for detecting littering behavior and identifying the litterer's identity based on edge AI Pending CN111242010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010026367.6A CN111242010A (en) 2020-01-10 2020-01-10 Method for detecting littering behavior and identifying the litterer's identity based on edge AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010026367.6A CN111242010A (en) 2020-01-10 2020-01-10 Method for detecting littering behavior and identifying the litterer's identity based on edge AI

Publications (1)

Publication Number Publication Date
CN111242010A (en) 2020-06-05

Family

ID=70864096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010026367.6A Pending CN111242010A (en) Method for detecting littering behavior and identifying the litterer's identity based on edge AI

Country Status (1)

Country Link
CN (1) CN111242010A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115846A (en) * 2020-09-15 2020-12-22 上海迥灵信息技术有限公司 Identification method and identification device for littering garbage behavior and readable storage medium
CN115394065A (en) * 2022-10-31 2022-11-25 之江实验室 AI-based automatic identification packet loss behavior alarm method and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107666594A (en) * 2017-09-18 2018-02-06 广东电网有限责任公司东莞供电局 Method for monitoring illegal operation in real time through video monitoring
CN107730904A (en) * 2017-06-13 2018-02-23 银江股份有限公司 Multitask vehicle driving in reverse vision detection system based on depth convolutional neural networks
CN107730906A (en) * 2017-07-11 2018-02-23 银江股份有限公司 Zebra stripes vehicle does not give precedence to the vision detection system of pedestrian behavior
CN107766889A (en) * 2017-10-26 2018-03-06 济南浪潮高新科技投资发展有限公司 A kind of the deep learning computing system and method for the fusion of high in the clouds edge calculations
CN109167963A (en) * 2018-09-28 2019-01-08 广东马上到网络科技有限公司 A kind of monitoring method and system of public civilization
CN109165582A (en) * 2018-08-09 2019-01-08 河海大学 A kind of detection of avenue rubbish and cleannes appraisal procedure
CN109598303A (en) * 2018-12-03 2019-04-09 江西洪都航空工业集团有限责任公司 A kind of rubbish detection method based on City scenarios
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110189292A (en) * 2019-04-15 2019-08-30 浙江工业大学 A kind of cancer cell detection method based on Faster R-CNN and density estimation
CN110378935A (en) * 2019-07-22 2019-10-25 四创科技有限公司 Parabolic recognition methods based on image, semantic information
CN110659622A (en) * 2019-09-27 2020-01-07 北京文安智能技术股份有限公司 Detection method, device and system for garbage dumping

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730904A (en) * 2017-06-13 2018-02-23 银江股份有限公司 Multitask vehicle driving in reverse vision detection system based on depth convolutional neural networks
CN107730906A (en) * 2017-07-11 2018-02-23 银江股份有限公司 Zebra stripes vehicle does not give precedence to the vision detection system of pedestrian behavior
CN107666594A (en) * 2017-09-18 2018-02-06 广东电网有限责任公司东莞供电局 Method for monitoring illegal operation in real time through video monitoring
CN107766889A (en) * 2017-10-26 2018-03-06 济南浪潮高新科技投资发展有限公司 A kind of the deep learning computing system and method for the fusion of high in the clouds edge calculations
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN109165582A (en) * 2018-08-09 2019-01-08 河海大学 A kind of detection of avenue rubbish and cleannes appraisal procedure
CN109167963A (en) * 2018-09-28 2019-01-08 广东马上到网络科技有限公司 A kind of monitoring method and system of public civilization
CN109598303A (en) * 2018-12-03 2019-04-09 江西洪都航空工业集团有限责任公司 A kind of rubbish detection method based on City scenarios
CN110189292A (en) * 2019-04-15 2019-08-30 浙江工业大学 A kind of cancer cell detection method based on Faster R-CNN and density estimation
CN110378935A (en) * 2019-07-22 2019-10-25 四创科技有限公司 Parabolic recognition methods based on image, semantic information
CN110659622A (en) * 2019-09-27 2020-01-07 北京文安智能技术股份有限公司 Detection method, device and system for garbage dumping

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
I_LINDA: "Mask R-CNN: Faster R-CNN study notes" (Mask R-CNN—Faster RCNN学习笔记), Retrieved from the Internet <URL:《https://blog.csdn.net/qq_34713831/article/details/85122631》> *
Y.Z.Y.: "Object detection: Faster R-CNN (Part 3)" (目标检测——Faster R-CNN(三)), Retrieved from the Internet <URL:《https://blog.csdn.net/qq_42823043/article/details/90744280》> *
做计算机视觉的小硕妹子: "Paper reading: Siamese RPN, with some supplementary material" (论文阅读之Siamese RPN以及一些其他内容的补充), Retrieved from the Internet <URL:《https://blog.csdn.net/weixin_44287997/article/details/100782196》> *
大哲子: "Image processing: object localization and recognition" (图像处理—对象的定位与识别), Retrieved from the Internet <URL:《https://blog.csdn.net/qq_36183496/article/details/98581895》> *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115846A (en) * 2020-09-15 2020-12-22 上海迥灵信息技术有限公司 Identification method and identification device for littering garbage behavior and readable storage medium
CN112115846B (en) * 2020-09-15 2024-03-01 上海迥灵信息技术有限公司 Method and device for identifying random garbage behavior and readable storage medium
CN115394065A (en) * 2022-10-31 2022-11-25 之江实验室 AI-based automatic identification packet loss behavior alarm method and device

Similar Documents

Publication Publication Date Title
CN108596277B (en) Vehicle identity recognition method and device and storage medium
CN108062349A (en) Video frequency monitoring method and system based on video structural data and deep learning
CN113822247B (en) Method and system for identifying illegal building based on aerial image
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
CN112668375B (en) Tourist distribution analysis system and method in scenic spot
CN111626162A (en) Overwater rescue system based on space-time big data analysis and drowning warning situation prediction method
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN111242010A (en) Method for judging and identifying identity of litter worker based on edge AI
CN114267082B (en) Bridge side falling behavior identification method based on depth understanding
CN111935450A (en) Intelligent suspect tracking method and system and computer readable storage medium
CN112819068A (en) Deep learning-based real-time detection method for ship operation violation behaviors
CN112637550B (en) PTZ moving target tracking method for multi-path 4K quasi-real-time spliced video
CN110490150A (en) A kind of automatic auditing system of picture violating the regulations and method based on vehicle retrieval
CN111125290B (en) Intelligent river patrol method and device based on river growth system and storage medium
CN115294519A (en) Abnormal event detection and early warning method based on lightweight network
CN112395967A (en) Mask wearing monitoring method, electronic device and readable storage medium
CN113095301A (en) Road occupation operation monitoring method, system and server
CN115223204A (en) Method, device, equipment and storage medium for detecting illegal wearing of personnel
CN113536997B (en) Intelligent security system and method based on image recognition and behavior analysis
CN112836590B (en) Flood disaster monitoring method and device, electronic equipment and storage medium
CN114821486B (en) Personnel identification method in power operation scene
CN115830381A (en) Improved YOLOv 5-based detection method for mask not worn by staff and related components
CN114170677A (en) Network model training method and equipment for detecting smoking behavior
CN113963310A (en) People flow detection method and device for bus station and electronic equipment
CN110443197A (en) Intelligent understanding method and system for visual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605