CN111653023A - Intelligent factory supervision method - Google Patents

Intelligent factory supervision method

Info

Publication number
CN111653023A
CN111653023A (application CN202010442738.9A)
Authority
CN
China
Prior art keywords
personnel
staff
information
factory
supervision method
Prior art date
Legal status
Pending
Application number
CN202010442738.9A
Other languages
Chinese (zh)
Inventor
罗小华
汤凯
吴巍
Current Assignee
Shenzhen Ouyiyun Technology Co ltd
Original Assignee
Shenzhen Ouyiyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ouyiyun Technology Co ltd filed Critical Shenzhen Ouyiyun Technology Co ltd
Priority to CN202010442738.9A priority Critical patent/CN111653023A/en
Publication of CN111653023A publication Critical patent/CN111653023A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/30Individual registration on entry or exit not involving the use of a pass
    • G07C9/32Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Educational Administration (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Game Theory and Decision Science (AREA)
  • Evolutionary Biology (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to an intelligent factory supervision method comprising: classifying workers according to the enterprise's organizational structure or professional posts, so that employees, external workers, and visitors can be identified on entering the factory; performing face recognition through machine vision and thereby compiling employee attendance statistics; detecting employees' on-duty, off-duty, and leaving-post status through machine vision and recording the corresponding times; detecting and counting good and defective products through machine vision and calculating production-capacity data; detecting personnel violations in the plant area through machine vision; and detecting the real-time positions of personnel in the factory. The method offers a high degree of intelligence, high supervision efficiency, low labor cost, unified data standards, and low management difficulty, and can promptly discover problems in factory operation, so that hidden dangers are eliminated in time and production safety is improved in all respects.

Description

Intelligent factory supervision method
Technical Field
The invention relates to an intelligent factory supervision method.
Background
Revolutionary breakthroughs and cross-disciplinary fusion in key fields such as information technology, new energy, new materials, and biotechnology are driving a new industrial revolution that will disrupt enterprise safety production and reshape plant management. In particular, the deep integration of new-generation information technology with manufacturing is profoundly transforming manufacturing modes, production organization, and industrial forms. Artificial intelligence, big data, and related technologies are being used to rebuild enterprise safety-management systems; with the strong support of ubiquitous information from the Internet, the Internet of Things, cloud computing, and big data, the intelligent plant is becoming the dominant mode of future manufacturing, with repetitive, general-skill labor steadily replaced by intelligent equipment and production methods. As the center of gravity of the industrial value chain shifts from production to research, development, design, and marketing, industry is moving from production-oriented to service-oriented manufacturing.
Factory safety management currently faces the following pain points. 1. Poor utilization of existing equipment. The large numbers of servers, switches, cameras, and Internet-of-Things devices purchased by factories run continuously and consume substantial operation and maintenance resources, yet the data they collect is not effectively exploited. For example, the many cameras in a factory are typically used only for security; in particular, footage captured in production workshops is never delivered to management as a decision basis, and no statistical analysis of it is performed. 2. Difficult management of remote branch plants. A global design for remote branches is critical: without one, each branch plant adopts its own set of standards and is hard to rein in once out of control. For example, N factories running N mutually independent schemes continually produce data islands, stovepipe development, redundant solution design, and non-standard data that cannot be aggregated. The end result is that unified management is impossible, real on-site information cannot be obtained in real time, and data across plants is inconsistent, mismatched, and unfused; even producing final summary statistics is difficult. 3. Heavy supervision manpower.
To guarantee output, large amounts of manpower are devoted to supervision: product defect rates and production times are counted manually or with semi-automated inspection equipment; the causes of problems are analyzed by guesswork; whether a product is defective, and at which step of the process the defect arose, is judged by the naked eye. When production lines are few, manual supervision can barely keep up; as lines multiply, labor shortages and supervision gaps appear, and once production reaches the scale of a hundred lines, manual supervision becomes entirely infeasible. Even if every line could be watched by a dedicated person, efficient collaboration would still be impossible and communication costs would be enormous.
Disclosure of Invention
The technical problem to be solved by the invention, in view of the prior art, is to provide an intelligent factory supervision method that can supervise all aspects of factory operation intelligently and uniformly and effectively detect abnormal conditions.
The technical scheme adopted by the invention to solve this problem is as follows: an intelligent factory supervision method, characterized by comprising
classifying workers according to the enterprise's organizational structure or professional posts, thereby enabling identification of employees, external workers, and visitors on entry to the factory;
performing face recognition through machine vision and thereby compiling employee attendance statistics;
detecting employees' on-duty, off-duty, and leaving-post status through machine vision and recording the corresponding times;
detecting and counting good and defective products through machine vision and calculating production-capacity data;
detecting personnel violations in the plant area through machine vision;
and detecting the real-time positions of personnel in the factory.
Preferably, once employees, external workers, and visitors are identified, the corresponding passage procedure is applied and the relevant person is notified of pertinent information through a display screen and/or a user terminal.
As an improvement, when an employee is identified during working hours, the gate at the access-control point is opened for passage, while work-safety knowledge appropriate to the employee's class is displayed on a screen and/or pushed to the employee's user terminal;
when a registered external worker is identified, guidance information is sent to the access-control administrator, who leads the worker to the corresponding place, while the relevant safety information and operational hazards are displayed on a screen and/or pushed to the worker's user terminal;
when a registered visitor is identified, guidance information is sent to the contact person being visited, who escorts the visitor from the gate, while the corresponding inspection requirements, visiting rules, and safety-risk notices are displayed on a screen and/or pushed to the visitor's user terminal;
and when an unidentifiable person enters the factory area, access-control intrusion alarm information is generated and pushed to the relevant personnel.
To avoid production problems and improve production safety, when an employee at a designated key post is detected to have been away from the post longer than the corresponding time threshold, a key-post absence-overtime alarm is generated and pushed to the relevant personnel.
To make it easy to trace the cause of an excessive reject ratio in time and reduce production risk, when the reject ratio of a product exceeds the corresponding set value, a reject-ratio alarm is generated and pushed to the relevant personnel; and when the capacity data falls below the corresponding set value, a capacity-reduction alarm is generated and pushed to the relevant personnel.
For flexibility, authorized managers can configure the communication relationship between each vision machine and the vision-processing module that detects violations.
To avoid the safety hazards of monitoring blind spots and further improve production safety, the vision machines include fixed vision machines installed at fixed positions and mobile vision machines that can patrol the blind spots of the fixed ones.
Preferably, personnel violations in the plant area are detected with a fully convolutional neural network algorithm.
Preferably, good and defective products are distinguished with an Attention-Fusion-YOLO algorithm that incorporates an attention mechanism.
Preferably, on-duty, off-duty, and leaving-post detection of personnel is performed with the Faster R-CNN algorithm.
Compared with the prior art, the invention offers a high degree of intelligence, high supervision efficiency, low labor cost, unified data standards, and low management difficulty, and can promptly discover problems in factory operation, so that hidden dangers are eliminated in time and production safety is improved in all respects.
Drawings
Fig. 1 is a diagram of a monitoring system architecture to which an intelligent factory monitoring method is applied in an embodiment of the present invention.
Fig. 2 is a network topology diagram of a monitoring system applying the intelligent factory monitoring method in the embodiment of the invention.
FIG. 3 is a diagram of the personnel on-duty detection algorithm applying the intelligent factory supervision method in the embodiment of the present invention.
FIG. 4 is a diagram of a ZF-Net network structure in the application of the intelligent factory supervision method in the embodiment of the present invention.
Fig. 5 is an overall structure diagram of an RPN network in the application intelligent factory supervision method in the embodiment of the present invention.
Fig. 6 is a detailed structure diagram of an RPN network in the application intelligent factory supervision method in the embodiment of the present invention.
FIG. 7 is a diagram illustrating a classification regression network structure applied in the intelligent plant supervision method according to an embodiment of the present invention.
Fig. 8 is a network structure diagram of a generator in the GAN model in the method for intelligent plant supervision according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating a Fusion-YOLO detection structure in the intelligent factory supervision method according to an embodiment of the present invention.
Fig. 10 is a structural diagram of a channel attention mechanism in the intelligent plant supervision method according to an embodiment of the present invention.
Fig. 11 is an overall flowchart of an algorithm model of a personnel violation behavior based on a full convolution neural network in the intelligent plant supervision method according to the embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying examples.
As shown in fig. 1, the intelligent plant supervision method in this embodiment may be implemented by an intelligent plant supervision system comprising a cloud server with computing and control functions, together with vision machines, access gates, various terminals, and personal identification cards communicatively connected to the cloud server. Authorized users can view the relevant information on each terminal through the application.
The intelligent factory supervision method comprises the following contents.
Workers are classified according to the enterprise's organizational structure or professional posts, their identity and classification information is stored on the cloud server, and designated managers have the authority to delete and modify personnel information there. On this basis, employees, external workers, and visitors are identified when entering the factory. The effective entry period can be set per personnel type: an employee's information on the cloud server remains valid during working hours until the employee leaves the factory; for an external worker, a working period is set on the cloud server, after which recognition of that worker becomes invalid; for a visitor, a visiting time is set, outside of which the visitor cannot be recognized.
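The per-type validity rules above can be sketched as a simple check; the function and parameter names below are illustrative assumptions, not part of the patent:

```python
def is_recognition_valid(person_type, now, work_period=None, visit_window=None):
    """Whether a person's record on the cloud server is currently valid.

    Illustrative sketch of the rules above; names are assumptions.
    work_period / visit_window are (start, end) pairs of comparable
    timestamps (e.g. datetime objects or epoch seconds).
    """
    if person_type == "employee":
        return True                       # valid throughout the working day
    if person_type == "outsider":
        start, end = work_period          # invalid once the set period ends
        return start <= now <= end
    if person_type == "visitor":
        start, end = visit_window         # only valid inside the visit window
        return start <= now <= end
    return False                          # unregistered persons never validate
```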
Identification equipment is installed wherever personnel must be identified, typically at the entrances of factories, workshops, and office buildings. Any existing identification technology and equipment may be used, for example chip cards with sensing gates, or vision machines capable of face recognition.
To strengthen employees' work-safety awareness, when an employee is recognized, the gate at the access-control point is opened for passage while work-safety knowledge appropriate to the employee's class is shown on the display screen and/or pushed to the employee's user terminal: the identification equipment transmits the person's information to the cloud server, and when the cloud server recognizes an employee, it sends the prompt to the display screen at the access-control point or to the mobile phone the employee registered.
Employee identification at the access-control point also yields attendance information. In this embodiment, the same face-recognition vision machine is used both for employee identification and for attendance capture; that is, machine vision performs face recognition and thereby obtains attendance data, which the cloud server aggregates into attendance statistics for authorized managers to review.
When a person is recognized as an external worker registered on the cloud server by a manager, guidance information is sent to the access-control administrator, who leads the worker to the corresponding place while the relevant safety information and operational hazards are shown on the display screen and/or the worker's registered user terminal, so that the worker fully understands the safety requirements of the work, which benefits safe production.
When a visitor registered by a manager on the cloud server is recognized, guidance information is sent to the contact person to be visited, who comes to the gate to escort the visitor, while the corresponding inspection requirements, visiting rules, and safety-risk notices are shown on the display screen and/or the visitor's user terminal, safeguarding the visit.
In addition, when an unidentifiable person enters the factory area, access-control intrusion alarm information is generated and pushed to the relevant personnel, so that they can discover the intrusion in time and promptly investigate the specific situation, avoiding the various safety hazards it could bring.
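The branching for employees, registered external workers, registered visitors, and unidentifiable persons can be summarized in a small dispatch function; the names and returned action strings below are hypothetical placeholders, not from the patent:

```python
from enum import Enum

class PersonType(Enum):
    EMPLOYEE = "employee"
    OUTSIDER = "outsider"   # registered external worker
    VISITOR = "visitor"
    UNKNOWN = "unknown"

def handle_entry(person_type, registered=True):
    """Return the action taken at the gate for each recognized class.

    Hypothetical sketch of the branching described above.
    """
    if person_type is PersonType.EMPLOYEE:
        return "open_gate_and_push_safety_tips"
    if person_type is PersonType.OUTSIDER and registered:
        return "notify_gate_admin_with_guidance"
    if person_type is PersonType.VISITOR and registered:
        return "notify_contact_and_push_visit_rules"
    # anyone unregistered or unrecognized triggers the intrusion alarm
    return "push_intrusion_alarm"
```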
Moreover, by acquiring positioning information from the terminal each person carries, or from other position-trackable articles, the distribution of personnel in the factory area can be known comprehensively; in particular, during an epidemic, gatherings of personnel can be monitored, adjusted, and managed effectively in real time.
Vision machines such as cameras are installed where employees work, and employees' on-duty, off-duty, and leaving-post status and the corresponding times are detected through machine vision. The vision machine transmits the captured images to the cloud server, which analyzes them to determine each employee's status and record the corresponding time, so that managers can obtain on-duty information in real time.
To avoid production problems and improve production safety, when an employee at a designated key post is detected to have been away from the post longer than the corresponding time threshold, a key-post absence-overtime alarm is generated and pushed to the relevant personnel, so that whoever manages that post can restore it to operation as soon as possible and safety hazards in factory operation are avoided.
In this embodiment, the Faster R-CNN algorithm is used to detect on-duty, off-duty, and leaving-post status. Conventional detection algorithms mainly use background modeling: a background model is built from the frame sequence, the current frame is differenced against the background to obtain the foreground, and the background is updated in real time to track scene changes. However, against a complex factory background, such algorithms are easily disturbed by smoke, lighting, and other changing conditions. This embodiment therefore uses a region-intrusion target-detection algorithm based on the deep-learning Faster R-CNN to accurately identify employees' on-duty status in a complex factory environment.
The Faster R-CNN algorithm comprises three parts: a ZF-Net convolutional neural network extracts features from the video image to produce a feature map; the region proposal network (RPN) processes the feature map and, through an RoI pooling layer, outputs rectangular candidate regions of various scales and aspect ratios; and a classification-regression network judges from the features in each candidate region whether the employee has left the post.
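The three-part flow can be sketched as a pipeline; `backbone`, `rpn`, and `head` stand in for the ZF-Net feature extractor, the region proposal network, and the classification-regression network, and are placeholders for illustration only:

```python
def faster_rcnn_pipeline(image, backbone, rpn, head):
    """Sketch of the three-stage flow described above.

    backbone: image -> feature map (ZF-Net in the patent)
    rpn:      feature map -> rectangular candidate regions
    head:     (feature map, region) -> classification/regression result
    """
    feature_map = backbone(image)
    proposals = rpn(feature_map)              # multi-scale candidate boxes
    return [head(feature_map, p) for p in proposals]
```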
Feature extraction network:
The feature extraction network of Faster R-CNN is a convolutional neural network; this embodiment uses the ZF-Net convolutional neural network. The video image of the working area passes through ZF-Net's multiple convolution, pooling, and activation layers, finally yielding a feature map encoding color, texture, and other characteristics. The network structure is shown in fig. 4.
In fig. 4, conv1 to conv5 are convolutional layers, ReLU is the activation layer, LRN is the local response normalization layer, and pooling is the pooling layer. The channel counts of conv1 to conv5 are 96, 256, 384, 384, and 256; the convolution strides are 2, 2, 1, 1, and 1; and the kernel sizes are 7, 5, 3, 3, and 3. ZF-Net uses the ReLU function as its activation function, a commonly used nonlinear activation that passes positive inputs through unchanged and clips negative inputs to zero. Its mathematical expression is f(x) = max(0, x).
The ReLU function is cheap to compute during backpropagation, and because it outputs 0 for some neurons, it makes the network sparse, reduces interdependence among parameters, and suppresses overfitting.
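A minimal illustration of the activation f(x) = max(0, x) and of the sparsity it induces:

```python
def relu(x):
    # f(x) = max(0, x): positive inputs pass through, negatives are clipped to 0
    return max(0.0, x)

# zeroed negatives make a layer's output sparse
activations = [relu(v) for v in [-2.0, -0.5, 0.0, 1.5, 3.0]]
```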
RPN network:
Fig. 5 shows the overall structure of the RPN. The RPN is the core of the Faster R-CNN algorithm and generates target prediction boxes for the detection network. It is itself a convolutional neural network that takes the feature map output by the feature extraction network as input and outputs rectangular candidate regions of various scales and aspect ratios.
The RPN first slides a window with a 3 × 3 convolution kernel over the feature map. Because detection targets differ in shape and size, detecting with windows of a single size would inevitably hurt accuracy, so the Faster R-CNN algorithm assigns 9 reference rectangular boxes to each sliding-window position to adapt to various targets: three areas {128 × 128, 256 × 256, 512 × 512} combined with three aspect ratios {1:1, 1:2, 2:1}. Each position on the feature map thus corresponds to 9 candidate rectangles mapped back to the corresponding position of the original image, so that target features of different scales are covered as fully as possible when judging whether a candidate region contains an employee. The features at each sliding-window position are mapped to a 256-dimensional feature vector, and two convolution layers with 1 × 1 kernels emulate two fully connected layers, outputting the score and the correction parameters of each reference box.
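The 9 reference boxes per sliding-window position (3 areas × 3 aspect ratios) can be generated as follows; an illustrative sketch, assuming the ratio is defined as width divided by height:

```python
import math

def make_anchors(areas=(128**2, 256**2, 512**2), ratios=(1.0, 0.5, 2.0)):
    """Generate the 9 reference (width, height) anchor shapes used per
    sliding-window position: 3 areas x 3 aspect ratios (1:1, 1:2, 2:1)."""
    anchors = []
    for area in areas:
        for r in ratios:                 # r = width / height
            h = math.sqrt(area / r)      # solve area = r * h^2 for h
            w = r * h
            anchors.append((round(w), round(h)))
    return anchors
```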
Fig. 6 is a detailed structure diagram of the RPN. One fully connected layer outputs 4 × 9 = 36 correction parameters, 4 per reference box; the correction parameters at each sliding-window position are used to correct the 9 reference boxes into 9 candidate regions, so that the final candidate boxes are better adapted to the detection target. The 4 correction parameters the RPN outputs for each reference box are t_x, t_y, t_w, t_h, and the reference box is corrected with them as follows: x = w_a·t_x + x_a; y = h_a·t_y + y_a; w = w_a·exp(t_w); h = h_a·exp(t_h).
Here x, y, w, h denote the center abscissa, center ordinate, width, and height of the rectangular candidate region, and x_a, y_a, w_a, h_a the same quantities for the reference box. The other fully connected layer outputs 2 × 9 = 18 scores, 2 per candidate region, representing the likelihood that the region does and does not contain an employee. Finally a Softmax layer normalizes the scores to obtain the confidence that each candidate region contains an employee.
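The correction formulas and the Softmax normalization of the two scores can be expressed directly; a minimal sketch, not the patent's implementation:

```python
import math

def decode_box(anchor, deltas):
    """Correct a reference box (x_a, y_a, w_a, h_a) with the RPN's
    parameters (t_x, t_y, t_w, t_h), per the formulas above."""
    xa, ya, wa, ha = anchor
    tx, ty, tw, th = deltas
    x = wa * tx + xa            # shift the centre by a fraction of anchor size
    y = ha * ty + ya
    w = wa * math.exp(tw)       # rescale width and height multiplicatively
    h = ha * math.exp(th)
    return x, y, w, h

def employee_confidence(score_pos, score_neg):
    # two-way Softmax over the "contains / does not contain employee" scores
    e_pos, e_neg = math.exp(score_pos), math.exp(score_neg)
    return e_pos / (e_pos + e_neg)
```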
Classification regression:
Fig. 7 shows the structure of the classification-regression network; once candidate region boxes are obtained, classification and regression are performed on them. In Faster R-CNN, the classification-regression network takes as input the feature map from the feature extraction network and the candidate regions from the RPN, and outputs correction parameters for the candidate regions and the confidence of the detection target. Because the candidate rectangles output by the RPN differ in shape and size, the feature vectors they contain have different dimensions and cannot be fed directly into the fully connected layers. The network therefore uses an ROI pooling layer to pool the features of each candidate region into feature maps of identical size and shape, maps them through two fully connected layers fc6 and fc7, outputs the scores and correction parameters of the detection targets through the fully connected layers fc/cls and fc/bbox_reg, and finally normalizes the scores with a softmax layer to obtain per-class confidences for each candidate region.
In Faster R-CNN, a non-maximum suppression algorithm screens out the local score maxima according to a comparison rule and suppresses the remaining boxes. This resolves the problem that the RPN outputs, for each target, a large number of mutually overlapping detection frames containing the same or different targets, so that the optimal rectangular candidate frame for each target is retained.
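A greedy non-maximum suppression sketch consistent with the description above; the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop every box overlapping it by more
    than `thresh`, and repeat; returns indices of the surviving boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

Two near-duplicate boxes collapse to the higher-scoring one, while a distant box survives untouched.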
A vision machine such as a camera can be arranged at the position on the production line corresponding to the product. The vision machine uses machine vision to distinguish good products from defective products, enabling statistics on the yield rate, while capacity data can be calculated at the same time. Specifically, the vision machine collects pictures of products on the production line and transmits them to the cloud server; the cloud server analyzes the pictures and determines whether each product is good or defective. Meanwhile, the quantity of products produced and the quantity of good products can be counted to obtain the capacity data of the production line.
In order to trace the cause of an excessive defect rate conveniently and promptly and to reduce production risk, when the defect rate of any product exceeds the corresponding set value, alarm information for an excessive defect rate is formed and pushed to the relevant personnel; and when the capacity data falls below the corresponding set value, capacity-drop alarm information is formed and pushed to the relevant personnel.
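The alarm rule reduces to a few lines of threshold checks; the thresholds and message strings below are illustrative placeholders, not values from the patent:

```python
def line_alarms(total, good, capacity, max_defect_rate, min_capacity):
    """Return the alarm messages triggered by one production line's counts:
    defect rate above its set value, or capacity below its set value."""
    defect_rate = (total - good) / total
    alarms = []
    if defect_rate > max_defect_rate:
        alarms.append("defect rate too high")
    if capacity < min_capacity:
        alarms.append("capacity drop")
    return alarms
```

In the described system these messages would then be pushed to the relevant personnel's terminals.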
In this embodiment, the Attention-Fusion-YOLO algorithm, based on an attention mechanism, is used to detect whether each product is good or defective.
Product capacity detection measures the real-time output of a production line through computer vision and belongs to the field of target detection. Although existing algorithms reach a high level of accuracy for large targets such as vehicles and pedestrians, detecting small targets in complex environments remains challenging. Most prior methods suffer from low accuracy, strict environmental requirements, or low detection speed, and cannot meet the requirement of real-time detection.
This embodiment also addresses the product detection problem with Attention-Fusion-YOLO, an improved algorithm based on YOLO. The following problems mainly need to be solved:
Constructing a complex factory-scene product data set:
Current popular public data sets are not designed for product capacity detection, so a complex-scene product data set for training the network is constructed according to the project requirements. The data set is built from images and videos acquired by cameras on site in the factory, so the image backgrounds are real industrial scenes.
Performing data enhancement on the data set with the DenseGAN algorithm:
A GAN model learns the mapping relationship between random noise and small-scale product images and generates new images similar to the samples. The DenseGAN data enhancement model uses DenseNet to improve the generator of the GAN: several cascaded blocks in a transposed convolutional neural network ensure that gradient information can propagate through every layer of the network while the mapping between Gaussian random noise and product pictures is learned.
The generator is composed of 3 cascaded Dense Block structures; each Dense Block contains 5 layers connected in a dense-connection pattern, with the output of every earlier layer serving as input to the later layers. The Dense Blocks are connected by deconvolution (transposed convolution), which enlarges the scale of the preceding layer's feature map, so that through the multi-layer deconvolution connections the low-resolution feature map is finally converted to the same scale as the original image. After each Block a Network-in-Network structure is introduced, using a 1 × 1 convolution kernel to perform linear feature fusion across the channel feature maps. The generator takes random noise as input, upsamples the noise to the same scale as the original training data through the deconvolution operations, and thereby generates images similar to the original training data. The network structure of the generator is shown in fig. 8. A 100-dimensional random vector is taken as input; through dimension expansion by a fully-connected layer and upsampling by the 3 Dense Blocks, the noise is gradually fitted toward the training data; the last layer performs upsampling and Tanh activation, finally outputting an image at the same scale as the training data set.
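The repeated deconvolution upsampling can be checked with the usual transposed-convolution size formula. The kernel/stride/padding values below are common DCGAN-style assumptions, not values specified by the patent:

```python
def deconv_out(size, stride=2, kernel=4, padding=1):
    """Output spatial size of a transposed convolution:
    out = stride * (in - 1) + kernel - 2 * padding."""
    return stride * (size - 1) + kernel - 2 * padding

# With stride 2, kernel 4, padding 1, every stage exactly doubles resolution,
# so three Dense Block / deconvolution stages take an 8x8 map to 64x64.
sizes = [8]
for _ in range(3):
    sizes.append(deconv_out(sizes[-1]))
```

This is why a small number of cascaded blocks suffices to reach the original image scale.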
Design of the Fusion-YOLO detection framework:
The YOLO model predicts on 7 × 7 feature maps; it performs well when detecting large-scale targets but poorly on small-scale targets. The layers of a convolutional neural network express product information at different feature granularities: a shallow feature map has a smaller receptive field and expresses local features of the product, while a deep feature map has a larger receptive field and expresses global features. By fusing feature maps with different receptive fields, the expressive power of the feature map over the image is improved, so that features of all granularities participate in the detection of small-target products, thereby improving the detection precision for small-scale products.
In the prediction layer of YOLO, each grid cell corresponds to two prediction boxes and the detection of one target class. After the input image is divided into 7 × 7 grid cells, the whole YOLO detection network has only 98 prediction boxes, so when small-scale products exist in the input image the detection effect on them is not ideal. To overcome this, the prediction layer of YOLO is modified: the input image is divided into a denser 14 × 14 grid, and in the last prediction layer the prediction boxes are initialized with 3 differently sized boxes taken from the reference-box clustering result. At the same time, the network structure in which YOLO predicts one class per grid cell is modified so that each prediction box predicts the class of one object. Prediction boxes of different sizes are respectively responsible for detecting targets of the corresponding sizes. The Fusion-YOLO detection structure based on these improvements is shown in FIG. 9.
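The effect of the denser grid and the extra per-cell boxes on the number of predictions is simple arithmetic:

```python
def num_prediction_boxes(grid, boxes_per_cell):
    """Total prediction boxes for a grid x grid YOLO prediction layer."""
    return grid * grid * boxes_per_cell

original = num_prediction_boxes(7, 2)   # standard YOLO: 7*7*2 = 98 boxes
fusion = num_prediction_boxes(14, 3)    # modified layer: 14*14*3 = 588 boxes
```

Sixfold more prediction boxes gives small products a much better chance of being covered by a box of roughly the right size.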
Introducing a channel attention mechanism:
The structure of the channel attention mechanism is shown in fig. 10. For the small-scale target detection task, a channel attention mechanism is introduced to learn how strongly each channel's feature map represents small-scale targets; different weights are assigned along the channel dimension of the feature map, so that through this screening and weighting of the channel feature maps the detection network attends more to the feature maps with strong small-scale-target representation. During training, the feature maps with higher weights guide the convergence of the network, and the model focuses better on detecting small-scale targets. The channel attention mechanism is applied to the channels after each 1 × 1 convolution of Fusion-YOLO, and the resulting algorithm is called the Attention-Fusion-YOLO algorithm.
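A toy sketch of squeeze-and-excitation-style channel weighting; the patent does not give the exact form of the attention block, so the SE-style sigmoid gating here is an assumption. Feature maps are represented as single-channel 2-D lists:

```python
import math

def squeeze(feature_maps):
    """Global average pooling: one descriptor scalar per channel."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

def reweight(feature_maps, channel_scores):
    """Gate each channel with a sigmoid weight derived from its score,
    emphasising channels that represent small-scale targets strongly."""
    weights = [1.0 / (1.0 + math.exp(-s)) for s in channel_scores]
    return [[[v * w for v in row] for row in fm]
            for fm, w in zip(feature_maps, weights)]
```

In the real network the per-channel scores come from small learned fully-connected layers rather than being supplied directly.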
The vision machines in the plant area can also capture staff behavior, detecting employee violations through machine vision. Specifically, a vision machine in the factory collects pictures of personnel and transmits them to the cloud server; the cloud server analyzes the pictures to judge whether a violation has occurred, and when a violation is detected, warning information is sent to the relevant personnel. The violation behaviors can be configured as needed, for example smoking or mobile-phone use. To avoid the safety hazards of monitoring blind areas and further improve production safety, the vision machines used to detect personnel violations include fixed vision machines arranged at fixed positions and movable vision machines that can travel through the monitoring blind areas of the fixed ones. The relevant management personnel can flexibly configure the communication relationship between each vision machine and the vision-processing module that detects violations, so that personnel behavior in relatively sensitive areas can be monitored in a targeted way. In this embodiment, a fully convolutional neural network algorithm is used to detect the violation behaviors of personnel in the factory.
For the problem of detecting personnel violations, traditional behavior detection algorithms mainly obtain local features of the action area by computing a histogram of oriented gradients over gray-level image pixels and then judge whether the behavior violates the rules. However, the real-time performance and robustness of such detection are difficult to guarantee, so applying recent deep-learning algorithms to personnel behavior detection is urgently needed. A personnel behavior recognition method based on a fully convolutional neural network is proposed. Recognizing plant employee behavior can be viewed as a multi-label classification problem, i.e. a behavior sample has multiple attributes such as whether the person is smoking or making a phone call. First, features are extracted from a real-time image of the person with a dedicated convolutional neural network, and then the several behaviors are classified in parallel. The overall flow chart of the algorithm model is shown in fig. 11.
Constructing an employee behavior data set:
The data set is collected in the field: the surveillance video is frame-sampled with OpenCV at one frame per second. To strengthen the generalization ability of the algorithm model, the collected data must be processed; irrelevant images (such as behavior outside working hours) are removed first. The data set contains behavior data from no fewer than twenty employees; the behaviors comprise making phone calls, smoking and normal behavior; each behavior has no fewer than 1000 pictures; and the ratio is kept at 1:1:1. The processed data set is then labeled accordingly, the label categories being phone calls, smoking and normal behavior.
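The one-frame-per-second sampling and the class-balance requirement reduce to simple bookkeeping. In practice the frames would come from OpenCV's `VideoCapture`, but the selection logic itself is framework-free; the exact 1:1:1 equality check is a simplifying assumption:

```python
def sample_frame_indices(total_frames, fps):
    """Indices of the frames kept when sampling one frame per second
    from a video recorded at `fps` frames per second."""
    return list(range(0, total_frames, fps))

def dataset_balanced(counts, min_per_class=1000):
    """Check the >= 1000-images-per-class and 1:1:1 ratio requirements;
    `counts` maps behavior label -> number of images."""
    values = list(counts.values())
    return all(v >= min_per_class for v in values) and len(set(values)) == 1
```

A real pipeline would relax the balance check to an approximate ratio before training.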
Constructing a neural network model:
Compared with a classical convolutional neural network, a fully convolutional neural network (FCN) is better suited to image segmentation at the semantic level. A classic CNN typically uses fully-connected layers after the convolutional layers to obtain a fixed-length feature vector for classification, whereas an FCN can accept an input image of any size and uses deconvolution layers to upsample the feature map of the last convolutional layer back to the size of the input image. It thus produces a prediction for every pixel while preserving the spatial information of the original input, classifying the upsampled feature map pixel by pixel. Finally, the softmax classification loss is computed pixel by pixel, which is equivalent to treating each pixel as one training sample. In the image preprocessing stage, the FCN performs semantic segmentation on the image and retains the person region. Performing semantic segmentation before classification filters out irrelevant features and keeps the more effective ones. Finally, after loading the data set and training the neural network, a detection algorithm model is obtained; feeding images captured in real time into the model detects whether an employee's behavior is normal or a high-risk behavior such as phoning or smoking.
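The "each pixel is one training sample" remark corresponds to averaging a softmax cross-entropy over all pixels. A minimal sketch on a tiny single-image logit grid; shapes and names are illustrative:

```python
import math

def pixelwise_softmax_loss(logits, labels):
    """Mean cross-entropy over an H x W grid; logits[i][j] is a list of
    per-class scores for pixel (i, j), labels[i][j] the true class index."""
    total, n = 0.0, 0
    for logit_row, label_row in zip(logits, labels):
        for scores, label in zip(logit_row, label_row):
            exps = [math.exp(s) for s in scores]
            prob = exps[label] / sum(exps)   # softmax probability of true class
            total += -math.log(prob)
            n += 1
    return total / n
```

With uniform logits the loss is ln(number of classes), the familiar chance-level baseline at the start of training.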
In use, a user can log in through an APP on various terminals and then, according to their authority, view the various information stored on the cloud server. The intelligent factory supervision method has a high degree of intelligence, high supervision efficiency, low labor cost, unified data standards and low management difficulty; it can discover the various problems arising during factory operation in time, making it convenient to eliminate hidden dangers promptly and improve production safety in all respects.

Claims (10)

1. An intelligent factory supervision method, characterized in that it comprises:
classifying the staff according to the organizational structure or job posts of the enterprise, so that employees, outsiders and visitors are identified when entering the factory;
performing face recognition through machine vision and thereby counting the attendance information of the staff;
detecting the on-duty, off-duty and away-from-post conditions of the staff through machine vision and recording the corresponding times;
detecting and counting good products and defective products through machine vision, and calculating capacity data;
detecting the violation behaviors of personnel in the plant area through machine vision;
and performing positioning detection on the real-time positions of personnel in the factory.
2. The intelligent plant supervision method according to claim 1, characterized in that: on the basis of identifying employees, outsiders and visitors, corresponding passage measures are adopted, and the relevant information for the corresponding person is notified through a display screen and/or a user terminal.
3. The intelligent plant supervision method according to claim 2, characterized in that: when the person is identified as a factory employee, a gate arranged at the entrance guard position is controlled to open for passage, and at the same time the work-safety knowledge corresponding to the employee's classification is prompted through a display screen and/or the employee's user terminal;
when the person is identified as an outsider whose information has been registered, guidance information is transmitted to the access control administrator, who guides the outsider to the corresponding place according to the guidance information, while the corresponding safety information and operation hazards are prompted through a display screen and/or the outsider's user terminal;
when the person is identified as a visitor whose information has been registered, guidance information is transmitted to the corresponding visited contact person, who leads the visitor from the gate post, while the corresponding external inspection, visiting item system and safety risk notification are prompted through a display screen and/or the visitor's user terminal;
and when an unidentifiable person enters the factory area, entrance-guard intrusion alarm information is formed and pushed to the relevant personnel.
4. The intelligent plant supervision method according to claim 1, characterized in that: when it is detected that the away-from-post time of an employee at a set key post exceeds the corresponding set time threshold, key-post absence overtime warning information is formed and pushed to the relevant personnel.
5. The intelligent plant supervision method according to claim 1, characterized in that: when the defect rate of any product is higher than the corresponding set value, alarm information for an excessive defect rate is formed and pushed to the relevant personnel; and when the capacity data is smaller than the corresponding set value, capacity-drop alarm information is formed and pushed to the relevant personnel.
6. The intelligent plant supervision method according to claim 1, characterized in that: the relevant management personnel can configure the communication relationship between each vision machine and the vision-processing module capable of detecting violation behaviors.
7. The intelligent plant supervision method according to claim 6, characterized in that: the vision machine comprises a fixed vision machine arranged at a fixed position and a movable vision machine capable of walking in a monitoring blind area of the fixed vision machine.
8. The intelligent plant supervision method according to any of claims 1 to 7, characterized in that: the violation behaviors of personnel in the plant area are detected using a fully convolutional neural network algorithm.
9. The intelligent plant supervision method according to any of claims 1 to 7, characterized in that: an attention mechanism is introduced, and the Attention-Fusion-YOLO algorithm is used to detect whether each product is good or defective.
10. The intelligent plant supervision method according to any of claims 1 to 7, characterized in that: the on-duty, off-duty and away-from-post conditions of personnel are detected using the Faster R-CNN algorithm.
CN202010442738.9A 2020-05-22 2020-05-22 Intelligent factory supervision method Pending CN111653023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442738.9A CN111653023A (en) 2020-05-22 2020-05-22 Intelligent factory supervision method


Publications (1)

Publication Number Publication Date
CN111653023A true CN111653023A (en) 2020-09-11

Family

ID=72348347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442738.9A Pending CN111653023A (en) 2020-05-22 2020-05-22 Intelligent factory supervision method

Country Status (1)

Country Link
CN (1) CN111653023A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903009A (en) * 2014-03-27 2014-07-02 北京大学深圳研究生院 Industrial product detection method based on machine vision
CN107703146A (en) * 2017-09-30 2018-02-16 北京得华机器人技术研究院有限公司 A kind of auto-parts vision detection system and method
CN108491899A (en) * 2018-02-11 2018-09-04 西玛特易联(苏州)科技有限公司 A kind of exhibitions stream of people based on RFID technique monitors the implementation method of system
CN110349310A (en) * 2019-07-03 2019-10-18 源创客控股集团有限公司 A kind of making prompting cloud platform service system for garden enterprise
CN110705482A (en) * 2019-10-08 2020-01-17 中兴飞流信息科技有限公司 Personnel behavior alarm prompt system based on video AI intelligent analysis
CN110717448A (en) * 2019-10-09 2020-01-21 杭州华慧物联科技有限公司 Dining room kitchen intelligent management system
CN111079577A (en) * 2019-12-02 2020-04-28 重庆紫光华山智安科技有限公司 Calculation method and system for dynamic area aggregation early warning real-time recommendation


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462698A (en) * 2020-10-29 2021-03-09 深圳市益鸿智能科技有限公司 Intelligent factory control system and method based on big data
CN113807630A (en) * 2020-12-23 2021-12-17 京东科技控股股份有限公司 Method, device, equipment and storage medium for acquiring requirements of robot service platform
CN113807630B (en) * 2020-12-23 2024-03-05 京东科技控股股份有限公司 Method, device, equipment and storage medium for acquiring requirements of robot service platform
CN113066212A (en) * 2021-03-30 2021-07-02 中国长江电力股份有限公司 Convenient and safe entrance informing system and method
CN113066212B (en) * 2021-03-30 2022-03-18 中国长江电力股份有限公司 Convenient and safe entrance informing system and method
CN113469950A (en) * 2021-06-08 2021-10-01 海南电网有限责任公司电力科学研究院 Method for diagnosing abnormal heating defect of composite insulator based on deep learning
CN113379247A (en) * 2021-06-10 2021-09-10 鑫安利中(北京)科技有限公司 Modeling method and system of enterprise potential safety hazard tracking model
CN113379247B (en) * 2021-06-10 2024-03-29 锐仕方达人才科技集团有限公司 Modeling method and system for enterprise potential safety hazard tracking model
CN113359654A (en) * 2021-07-15 2021-09-07 四川环龙技术织物有限公司 Internet-of-things-based papermaking mesh blanket production intelligent monitoring system and method
CN113554834A (en) * 2021-08-03 2021-10-26 匠人智慧(江苏)科技有限公司 Chemical industry park visual identification application system based on artificial intelligence technology
CN113610006A (en) * 2021-08-09 2021-11-05 中电科大数据研究院有限公司 Overtime labor discrimination method based on target detection model
CN113610006B (en) * 2021-08-09 2023-09-08 中电科大数据研究院有限公司 Overtime labor discrimination method based on target detection model
CN113705988A (en) * 2021-08-14 2021-11-26 浙江宏瑞达工程管理有限公司 Supervision personnel performance management method and system, storage medium and intelligent terminal
CN113705988B (en) * 2021-08-14 2023-12-19 浙江宏瑞达工程管理有限公司 Method and system for managing performance of staff, storage medium and intelligent terminal
CN116151836A (en) * 2023-04-21 2023-05-23 四川华鲲振宇智能科技有限责任公司 Intelligent foreground auxiliary service system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination