CN116977925B - Video safety management system for omnibearing intelligent monitoring - Google Patents


Info

Publication number
CN116977925B
CN116977925B (application CN202310921425.5A)
Authority
CN
China
Prior art keywords
module
intelligent
video
detection
camera module
Prior art date
Legal status
Active
Application number
CN202310921425.5A
Other languages
Chinese (zh)
Other versions
CN116977925A (en)
Inventor
郭建南
谭海军
杨铭瑞
康健
Current Assignee
Guangzhou Smart Agriculture Service Co ltd
Original Assignee
Guangzhou Smart Agriculture Service Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Smart Agriculture Service Co ltd filed Critical Guangzhou Smart Agriculture Service Co ltd
Priority to CN202310921425.5A
Publication of CN116977925A
Application granted
Publication of CN116977925B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G06V 20/52: Scenes; scene-specific elements; context or environment of the image; surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N 3/0464: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; convolutional networks [CNN, ConvNet]
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The application relates to a video safety management system for omnibearing intelligent monitoring, comprising the following modules: a camera module for acquiring video or photo information of a catering center according to user settings; an AI analysis module for performing real-time detection or offline detection analysis according to user settings; an intelligent inspection module, by which the camera module changes position along a preset track to obtain more detailed video or photo information; and a monitoring management module for performing corresponding statistical analysis according to parameters selected by the user. The application can fully automatically analyze, in real time or offline and with high precision, non-compliant behaviors such as not wearing a mask, smoking, not wearing a cap, and using a mobile phone, and can intelligently identify rodent infestation and warn relevant personnel to ensure food safety.

Description

Video safety management system for omnibearing intelligent monitoring
Technical Field
The invention belongs to the technical field of intelligent monitoring, and particularly relates to a video safety management system, a video safety management method, a video safety management device, computer equipment and a computer readable storage medium for omnibearing intelligent monitoring.
Background
A food safety platform system is mainly used to monitor each production link of a kitchen and to give early warning of non-compliant behavior in the food production process, intelligently helping supervisory departments monitor and manage the dining halls of schools, institutions and the like in real time.
In the prior art, although these links can be monitored, intelligent and real-time analysis cannot be performed, and the footage can only be used for subsequent investigation and evidence collection. For safe food production, it is necessary to discover non-compliant behaviors in real time and handle them promptly.
Disclosure of Invention
In order to solve the above problems, the invention provides a video safety management system for omnibearing intelligent monitoring, characterized by comprising the following modules:
a camera module for acquiring video or photo information of a catering center according to user settings;
an AI analysis module for performing real-time detection or offline detection analysis according to user settings;
an intelligent inspection module, by which the camera module changes position along a preset track to obtain more detailed video or photo information;
and a monitoring management module for performing corresponding statistical analysis according to parameters selected by the user.
As an embodiment, the system further comprises an intelligent guide rail for use with the camera module.
As an embodiment, the system further includes an intelligent control module, capable of sending a control instruction according to the information acquired by the camera module, so as to control the intelligent inspection module to move.
As an embodiment, the AI analysis module implements mask-wearing detection, cap detection, smoking detection, and rodent detection.
As an embodiment, the system further comprises an early warning module; when an abnormal or non-compliant condition is detected in real time, the early warning module sends warning information to the relevant responsible personnel.
The application also provides a video safety management method for omnibearing intelligent monitoring, characterized by comprising the following steps:
S1, an information acquisition step: acquiring video or photo information of a catering center according to user settings;
S2, an AI analysis step: performing real-time detection or offline detection analysis according to user settings;
S3, an intelligent inspection step: the camera module changes position along a preset track to obtain more detailed video or photo information;
S4, a monitoring management step: performing corresponding statistical analysis according to parameters selected by the user.
As an embodiment, the method further comprises: using the camera module in cooperation with the intelligent guide rail to complete multi-angle information acquisition.
As an embodiment, the method further includes sending, by using an intelligent control module, an instruction according to the information obtained by the camera module, so as to control the intelligent inspection module to move.
As an embodiment, the AI analysis step implements mask-wearing detection, cap detection, smoking detection, and rodent detection.
As an embodiment, the method further comprises sending, by the early warning module, a warning message to the relevant responsible personnel when a non-compliant or abnormal condition is detected in real time.
Furthermore, the invention provides a computer device comprising a memory, a processor and a transceiver that are sequentially connected in communication, wherein the memory is used to store a computer program, the transceiver is used to send and receive messages, and the processor is used to read the computer program and execute any one of the above methods.
The present invention provides a computer readable storage medium having instructions stored thereon which, when executed on a computer, perform any of the methods described above.
The invention has the beneficial effects that:
The invention provides a video safety management system for omnibearing intelligent monitoring comprising the following modules:
a camera module for acquiring video or photo information of a catering center according to user settings; an AI analysis module for performing real-time detection or offline detection analysis according to user settings; an intelligent inspection module, by which the camera module changes position along a preset track to obtain more detailed video or photo information; and a monitoring management module for performing corresponding statistical analysis according to parameters selected by the user. The application realizes AI real-time monitoring and analysis of non-compliant kitchen operations, such as real-time monitoring and alarming for not wearing a mask or smoking, intelligent timed inspection, and analysis of and alarming for abnormal behaviors. It also realizes intelligent food sample retention: opening the sample-retention cabinet automatically triggers a photograph, and closing the door automatically triggers a photograph and a record (as sketched below), so that abnormalities and problems can be found in time and the food safety level is improved.
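Purely as an illustration of the sample-retention behavior described above, the following Python sketch reacts to hypothetical cabinet-door events by triggering a photograph and appending a record entry; the event names and the take_photo stub are assumptions, not details disclosed by this application.

```python
from datetime import datetime

def take_photo(camera_id: str) -> str:
    # Stand-in for a real capture call on the sample-retention cabinet camera.
    return f"{camera_id}_{datetime.now():%Y%m%d_%H%M%S}.jpg"

def on_cabinet_event(event: str, camera_id: str, log: list) -> None:
    """Door opened or closed: photograph automatically and record the event."""
    if event not in ("door_opened", "door_closed"):
        return
    photo = take_photo(camera_id)
    log.append({"event": event, "photo": photo,
                "time": datetime.now().isoformat(timespec="seconds")})

records = []
on_cabinet_event("door_opened", "sample_cabinet_cam", records)
on_cabinet_event("door_closed", "sample_cabinet_cam", records)
print(records)
```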
The application also relates to a dedicated deep neural network that is particularly suited to identifying abnormal scenes in the food safety field, constitutes an original contribution of this application, and enables real-time analysis and detection.
Drawings
FIG. 1 is a diagram showing the AI real-time analysis of the invention;
FIG. 2 is a diagram showing the intelligent patrol record of the invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the invention is briefly described below with reference to the accompanying drawings and the embodiments. It is apparent that the drawings described below represent only some embodiments of the present invention, and that a person skilled in the art could obtain other drawings from them without inventive effort. It should be noted that the description of these examples is intended to aid understanding of the present invention and is not intended to limit it.
It should be understood that although the terms first and second, etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly a second object may be referred to as a first object, without departing from the scope of example embodiments of the invention.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent three cases: A alone, B alone, or both A and B. Similarly, A, B and/or C may represent any one of A, B and C, or any combination thereof. The term "/and", where it appears, describes another association relationship and indicates that two relationships may exist; for example, A/and B may represent A alone, or A and B together. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
In order to solve the problems in the prior art, as shown in FIGS. 1-2 (FIG. 1 shows the AI real-time analysis of the invention and FIG. 2 shows an intelligent patrol record), the invention provides a video safety management system for omnibearing intelligent monitoring, characterized by comprising the following modules:
a camera module for acquiring video or photo information of a catering center according to user settings;
an AI analysis module for performing real-time detection or offline detection analysis according to user settings;
an intelligent inspection module, by which the camera module changes position along a preset track to obtain more detailed video or photo information;
and a monitoring management module for performing corresponding statistical analysis according to parameters selected by the user.
As an embodiment, the system further comprises an intelligent guide rail for use with the camera module.
As a specific embodiment, when an abnormality occurs in the scene, such as not wearing a mask, smoking, not wearing a cap, or the presence of rodents, the camera module can move along the guide rail under instructions from the intelligent control module so that it can focus on the abnormal scene in high definition, which facilitates further identification by background analysts and allows the system to collect high-definition evidence for subsequent warning and education of the relevant personnel. The arrangement of the intelligent guide rail is related to the shooting range of the camera module, and the two are subject to the following constraint conditions: 1) double full coverage of scenes is achieved, i.e., each scene must appear at least 2 times in all acquired video or image evidence; 2) each frame of image, taken at a different spatial position, must have a scene intersection with images acquired by other camera modules (a hypothetical check of these constraints is sketched below).
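For illustration only, the following Python sketch checks these two placement constraints over a hypothetical set of captures; the data layout (a list of frames per rail position, each frame represented by the set of scene identifiers it shows) and the function name check_rail_constraints are assumptions rather than anything specified in this application.

```python
from collections import Counter

def check_rail_constraints(captures):
    """captures maps a rail position to a list of frames, where each frame is
    the set of scene identifiers visible in it. Returns the two constraint flags."""
    # Constraint 1: every scene must appear at least twice across all evidence.
    scene_counts = Counter(scene for frames in captures.values()
                           for frame in frames for scene in frame)
    double_coverage_ok = all(count >= 2 for count in scene_counts.values())

    # Constraint 2: every frame taken at one position must share at least one
    # scene with some frame acquired at a different position.
    intersection_ok = True
    for pos, frames in captures.items():
        other_frames = [f for p, fs in captures.items() if p != pos for f in fs]
        for frame in frames:
            if not any(frame & other for other in other_frames):
                intersection_ok = False
    return double_coverage_ok, intersection_ok

# Hypothetical example: two rail positions covering a prep area, stove and sink.
captures = {
    "position_A": [{"prep_area", "stove"}, {"prep_area"}],
    "position_B": [{"stove", "sink"}, {"sink", "prep_area"}],
}
print(check_rail_constraints(captures))  # (True, True) for this example
```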
As an embodiment, the system further includes an intelligent control module, capable of sending a control instruction according to the information acquired by the camera module, so as to control the intelligent inspection module to move.
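The patent does not specify a control protocol, so the sketch below only illustrates the idea: information from the camera module, here a detection label and its horizontal position in the frame, is mapped to a movement instruction for the rail-mounted inspection camera. The Detection and RailController names and the command format are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "no_mask", "smoking", "rodent"
    x_center: float     # horizontal position in the frame, 0.0 (left) to 1.0 (right)

class RailController:
    """Issues move instructions so the camera can focus on the abnormal scene."""
    def __init__(self, rail_length_m: float = 4.0):
        self.rail_length_m = rail_length_m
        self.position_m = 0.0

    def instruction_for(self, det: Detection) -> dict:
        # Map the abnormality's horizontal image position onto the rail span.
        target = round(det.x_center * self.rail_length_m, 2)
        return {"command": "move_to", "target_m": target, "reason": det.label}

    def apply(self, instruction: dict) -> None:
        self.position_m = instruction["target_m"]

controller = RailController()
cmd = controller.instruction_for(Detection(label="no_mask", x_center=0.72))
controller.apply(cmd)
print(cmd, "-> camera now at", controller.position_m, "m")
```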
As an embodiment, the AI analysis module implements mask-wearing detection, cap detection, smoking detection, and rodent detection.
Optionally, the detection is implemented by a deep neural network model, and the deep neural network comprises an input layer, one or more hidden layers and an output layer;
During training, the input layer receives pictures containing masks, caps, smoking, and rodents; during real-time or offline analysis, the input layer receives image information acquired in real time or a scene image to be analyzed.
Optionally, the hidden layers comprise one or more convolution layers and one or more pooling layers.
Optionally, the deep neural network adopts a loss function which, consistent with the variable definitions below, takes the standard softmax cross-entropy form:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{\top}x_i + b_{y_i}}}{\sum_{j}e^{W_{j}^{\top}x_i + b_{j}}}$$

where N represents the size of the sample data set in the history record, i takes the values 1 to N, and y_i represents the label corresponding to sample x_i; W_{y_i} represents the weight of the sample feature vector x_i at its label y_i; the bias vector b includes b_{y_i} and b_j, where b_{y_i} represents the bias of sample x_i at its label y_i and b_j represents the bias at output node j; θ_{y_i} denotes the angle between the sample feature vector x_i and the weight vector of its label y_i.

Optionally, the pooling method is as follows:

$$x_e = f(w_e\,\phi(x_{e-1}))$$

where x_e represents the output of the current layer, w_e represents the weight of the current layer, φ represents the log-likelihood loss function, and x_{e-1} represents the output of the previous layer.
The output layer is used to output the abnormality type, including mask, cap, smoking, and rodent abnormalities.
The deep neural network is trained continuously until a preset condition is met, so as to obtain a trained deep neural network model.
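As a purely illustrative sketch, not the network actually disclosed or trained by the applicant, the following PyTorch code assembles a small convolutional classifier with convolution and pooling layers and runs one training step with the softmax cross-entropy loss discussed above; the layer sizes, the four class names, and the 224x224 input resolution are assumptions.

```python
import torch
import torch.nn as nn

CLASSES = ["no_mask", "no_cap", "smoking", "rodent"]  # assumed abnormality labels

class AbnormalSceneNet(nn.Module):
    """Minimal CNN: input layer -> conv/pool hidden layers -> output layer."""
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = AbnormalSceneNet()
criterion = nn.CrossEntropyLoss()   # softmax cross-entropy over the N samples
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training step on a random batch (stand-in for labelled kitchen images).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```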
As an embodiment, the system further comprises an early warning module; when an abnormal or non-compliant condition is detected in real time, the early warning module sends warning information to the relevant responsible personnel.
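For illustration only, the sketch below shows one way such an early warning module could dispatch a warning message when real-time analysis flags an abnormality; the staff mapping, message fields, and the send_alert stub (which simply prints) are assumptions, not part of this application.

```python
import json
import time

RESPONSIBLE_STAFF = {"no_mask": "shift_manager", "smoking": "safety_officer",
                     "no_cap": "shift_manager", "rodent": "pest_control_lead"}

def send_alert(recipient: str, payload: dict) -> None:
    # Placeholder transport: a real system might push to SMS, an app, or a platform API.
    print(f"ALERT to {recipient}: {json.dumps(payload)}")

def on_detection(abnormality: str, camera_id: str, confidence: float) -> None:
    """Dispatch a warning message to the relevant responsible person."""
    recipient = RESPONSIBLE_STAFF.get(abnormality, "site_supervisor")
    send_alert(recipient, {
        "abnormality": abnormality,
        "camera": camera_id,
        "confidence": round(confidence, 2),
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    })

on_detection("no_mask", camera_id="rail_cam_01", confidence=0.93)
```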
The application also provides a video safety management method for omnibearing intelligent monitoring, characterized by comprising the following steps:
S1, an information acquisition step: acquiring video or photo information of a catering center according to user settings;
S2, an AI analysis step: performing real-time detection or offline detection analysis according to user settings;
S3, an intelligent inspection step: the camera module changes position along a preset track to obtain more detailed video or photo information;
S4, a monitoring management step: performing corresponding statistical analysis according to parameters selected by the user (an end-to-end sketch of these steps follows).
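A minimal end-to-end sketch, assuming placeholder implementations of each step, of how S1 through S4 might be chained into one monitoring cycle; all function names, the track positions, and the returned statistics are illustrative assumptions only.

```python
def acquire_information(camera_id):            # S1: information acquisition
    return {"camera": camera_id, "frame": "<jpeg bytes>", "abnormality": None}

def ai_analyze(frame_info):                     # S2: AI analysis (placeholder verdict)
    return frame_info                           # a real system would run the CNN here

def intelligent_inspection(camera_id, track):   # S3: capture along the preset track
    return [acquire_information(f"{camera_id}@{pos}") for pos in track]

def monitoring_statistics(results):             # S4: statistics over analysed frames
    flagged = [r for r in results if r["abnormality"]]
    return {"frames": len(results), "abnormal": len(flagged)}

def monitoring_cycle(camera_id="cam_01", track=("p1", "p2", "p3")):
    frames = intelligent_inspection(camera_id, track)
    results = [ai_analyze(f) for f in frames]
    return monitoring_statistics(results)

print(monitoring_cycle())  # {'frames': 3, 'abnormal': 0}
```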
As an embodiment, the method further comprises: using the camera module in cooperation with the intelligent guide rail to complete multi-angle information acquisition.
As a specific embodiment, when an abnormality occurs in the scene, such as not wearing a mask, smoking, not wearing a cap, or the presence of rodents, the camera module can move along the guide rail under instructions from the intelligent control module so that it can focus on the abnormal scene in high definition, which facilitates further identification by background analysts and allows the system to collect high-definition evidence for subsequent warning and education of the relevant personnel. The arrangement of the intelligent guide rail is related to the shooting range of the camera module, and the two are subject to the following constraint conditions: 1) double full coverage of scenes is achieved, i.e., each scene must appear at least 2 times in all acquired video or image evidence; 2) each frame of image, taken at a different spatial position, must have a scene intersection with images acquired by other camera modules.
As an embodiment, the method further includes sending, by using an intelligent control module, an instruction according to the information obtained by the camera module, so as to control the intelligent inspection module to move.
As an embodiment, the AI analysis step implements mask-wearing detection, cap detection, smoking detection, and rodent detection.
Optionally, the detection is implemented by a deep neural network model, and the deep neural network comprises an input layer, one or more hidden layers and an output layer;
During training, the input layer receives pictures containing masks, caps, smoking, and rodents; during real-time or offline analysis, the input layer receives image information acquired in real time or a scene image to be analyzed.
Optionally, the hidden layers comprise one or more convolution layers and one or more pooling layers.
Optionally, the deep neural network adopts a loss function which, consistent with the variable definitions below, takes the standard softmax cross-entropy form:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{\top}x_i + b_{y_i}}}{\sum_{j}e^{W_{j}^{\top}x_i + b_{j}}}$$

where N represents the size of the sample data set in the history record, i takes the values 1 to N, and y_i represents the label corresponding to sample x_i; W_{y_i} represents the weight of the sample feature vector x_i at its label y_i; the bias vector b includes b_{y_i} and b_j, where b_{y_i} represents the bias of sample x_i at its label y_i and b_j represents the bias at output node j; θ_{y_i} denotes the angle between the sample feature vector x_i and the weight vector of its label y_i.

Optionally, the pooling method is as follows:

$$x_e = f(w_e\,\phi(x_{e-1}))$$

where x_e represents the output of the current layer, w_e represents the weight of the current layer, φ represents the log-likelihood loss function, and x_{e-1} represents the output of the previous layer.
The output layer is used to output the abnormality type, including mask, cap, smoking, and rodent abnormalities.
The deep neural network is trained continuously until a preset condition is met, so as to obtain a trained deep neural network model.
As an embodiment, the method further comprises sending, by the early warning module, a warning message to the relevant responsible personnel when a non-compliant or abnormal condition is detected in real time.
Furthermore, the invention provides a computer device comprising a memory, a processor and a transceiver that are sequentially connected in communication, wherein the memory is used to store a computer program, the transceiver is used to send and receive messages, and the processor is used to read the computer program and execute any one of the above methods.
The present invention provides a computer readable storage medium having instructions stored thereon which, when executed on a computer, perform any of the methods described above.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (3)

1. The video safety management system for omnibearing intelligent monitoring is characterized by comprising the following modules:
the camera module is used for acquiring video or photo information of the catering center according to the setting of the user;
The AI analysis module is used for performing real-time detection or detection analysis according to user settings, and realizes mask-wearing detection, cap detection, smoking detection and rodent detection;
The intelligent inspection module enables the camera module to change the position according to a preset track so as to obtain more detailed video or photo information;
The monitoring management module is used for carrying out corresponding statistical analysis according to the selection parameters of the user;
When the scene is abnormal, including no mask, smoking, no cap and rodents, the camera module moves along the guide rail under the instruction of the intelligent control module until the camera module can focus on the abnormal scene information in high definition, facilitating further identification by background analysts, and the system collects high-definition evidence so as to facilitate subsequent warning and education of the relevant personnel;
the video safety management system also comprises an intelligent guide rail matched with the camera module;
the arrangement of the intelligent guide rail is related to the shooting range of the camera module, and the two are subject to the following constraint conditions:
1) double full coverage of scenes is achieved, i.e., each scene must appear at least 2 times in all acquired video or image evidence;
2) each frame of image, taken at a different spatial position, must have a scene intersection with images acquired by other camera modules;
the detection is realized by adopting a deep neural network model, and the deep neural network comprises an input layer, one or more hidden layers and an output layer;
when training, the input layer is used for receiving pictures containing masks, caps, smoking and mice;
when the method is used for real-time or non-real-time analysis, the input layer receives image information acquired in real time or a scene image to be analyzed;
The hidden layer comprises one or more convolution layers and one or more pooling layers;
the deep neural network adopts a loss function defined over the sample data set, wherein N represents the size of the sample data set, i takes the values 1 to N, and y_i represents the label corresponding to sample x_i; W_{y_i} represents the weight of sample x_i at its label y_i; the vector b includes b_{y_i} and b_j, wherein b_{y_i} represents the bias of sample x_i at its label y_i and b_j represents the bias at output node j;
the pooling method is as follows:
x_e = f(w_e φ(x_{e-1}));
wherein x_e represents the output of the current layer, w_e represents the weight of the current layer, φ represents the log-likelihood loss function, and x_{e-1} represents the output of the previous layer; θ_{y_i} denotes the angle between sample x_i and the weight vector of its corresponding label y_i;
The output layer is used for outputting the abnormality type, including mask, cap, smoking and rodent abnormalities;
And training the deep neural network continuously until a preset condition is met, so as to obtain a trained deep neural network model.
2. The video safety management system for omnibearing intelligent monitoring according to claim 1, further comprising an intelligent control module capable of sending a control instruction according to the information acquired by the camera module for controlling the intelligent patrol module to move.
3. The video security management system of claim 1, further comprising an early warning module, wherein the early warning module sends out a warning message to the responsible person when an irregular or abnormal condition is detected in real time.
CN202310921425.5A 2023-07-25 2023-07-25 Video safety management system for omnibearing intelligent monitoring Active CN116977925B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310921425.5A (CN116977925B) | 2023-07-25 | 2023-07-25 | Video safety management system for omnibearing intelligent monitoring

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310921425.5A (CN116977925B) | 2023-07-25 | 2023-07-25 | Video safety management system for omnibearing intelligent monitoring

Publications (2)

Publication Number | Publication Date
CN116977925A (en) | 2023-10-31
CN116977925B (en) | 2024-05-24

Family

ID=88470681

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310921425.5A (CN116977925B, Active) | Video safety management system for omnibearing intelligent monitoring | 2023-07-25 | 2023-07-25

Country Status (1)

Country Link
CN (1) CN116977925B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220383037A1 (en) * 2021-05-27 2022-12-01 Adobe Inc. Extracting attributes from arbitrary digital images utilizing a multi-attribute contrastive classification neural network

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493847A (en) * 2018-12-14 2019-03-19 广州玛网络科技有限公司 Sound recognition system and voice recognition device
CN110166741A (en) * 2019-04-15 2019-08-23 深圳壹账通智能科技有限公司 Environment control method, device, equipment and storage medium based on artificial intelligence
CN110769195A (en) * 2019-10-14 2020-02-07 国网河北省电力有限公司衡水供电分公司 Intelligent monitoring and recognizing system for violation of regulations on power transmission line construction site
CN110879917A (en) * 2019-11-08 2020-03-13 北京交通大学 Electric power system transient stability self-adaptive evaluation method based on transfer learning
WO2022160040A1 (en) * 2021-01-26 2022-08-04 Musashi Auto Parts Canada Inc. System and method for manufacturing quality control using automated visual inspection
CN113222299A (en) * 2021-06-10 2021-08-06 湘南学院 Intelligent vehicle scheduling system and method for scenic spot
CN113255563A (en) * 2021-06-10 2021-08-13 湘南学院 Scenic spot people flow control system and method
CN113992153A (en) * 2021-11-19 2022-01-28 珠海康晋电气股份有限公司 Visual real-time monitoring distributed management system of photovoltaic power plant
CN114724332A (en) * 2022-04-13 2022-07-08 湖北鲲鹏芯科技有限公司 Food material safety monitoring system and monitoring method
CN114708127A (en) * 2022-04-15 2022-07-05 广东南粤科教研究院 Student point system comprehensive assessment method and system
CN115756040A (en) * 2022-11-18 2023-03-07 苏州展桦天亿通信有限公司 Intelligent alarm patrol monitoring system
CN117331700A (en) * 2023-10-24 2024-01-02 广州一玛网络科技有限公司 Computing power network resource scheduling system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Wenbo et al., Research on the Application and Cost of UAV Surveying and Mapping Technology, Jilin Science and Technology Press, 2022 (1st ed.), pp. 231-232. *
Luo Qingsheng et al., Intelligent Combat Robots, Beijing Institute of Technology Press, 2013 (1st ed.), pp. 137-139. *

Also Published As

Publication number Publication date
CN116977925A (en) 2023-10-31

Similar Documents

Publication Publication Date Title
JP7136546B2 (en) Automatic object and activity tracking in live video feeds
CN105426820B (en) More people's anomaly detection methods based on safety monitoring video data
CN103839085B (en) A kind of detection method of compartment exception crowd density
CN110689054A (en) Worker violation monitoring method
US11507105B2 (en) Method and system for using learning to generate metrics from computer vision-derived video data
US11631306B2 (en) Methods and system for monitoring an environment
CN118072255B (en) Intelligent park multisource data dynamic monitoring and real-time analysis system and method
CN110197158B (en) Security cloud system and application thereof
CN109377713A (en) A kind of fire alarm method and system
CN111553264B (en) Campus non-safety behavior detection and early warning method suitable for primary and secondary school students
CN111325119B (en) Video monitoring method and system for safe production
CN116419059A (en) Automatic monitoring method, device, equipment and medium based on behavior label
Martínez-Mascorro et al. Suspicious behavior detection on shoplifting cases for crime prevention by using 3D convolutional neural networks
CN116977925B (en) Video safety management system for omnibearing intelligent monitoring
Zennayi et al. Unauthorized access detection system to the equipments in a room based on the persons identification by face recognition
CN112633157B (en) Real-time detection method and system for safety of AGV working area
CN117423049A (en) Method and system for tracking abnormal event by real-time video
CN116682034A (en) Dangerous behavior detection method under complex production operation scene
CN111723767A (en) Image processing method and device and computer storage medium
Hameete et al. Intelligent Multi-Camera Video Surveillance.
Sun et al. [Retracted] Using Big Data‐Based Neural Network Parallel Optimization Algorithm in Sports Fatigue Warning
CN114049682A (en) Human body abnormal behavior identification method, device, equipment and storage medium
Joshi et al. Unsupervised synthesis of anomalies in videos: Transforming the normal
CN115273128A (en) Method and device for detecting people on belt conveyor, electronic equipment and storage medium
Vaishnavi et al. Implementation of Abnormal Event Detection using Automated Surveillance System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant