CN110008831A - Intelligent monitoring fusion system based on computer vision analysis - Google Patents

Intelligent monitoring fusion system based on computer vision analysis

Info

Publication number
CN110008831A
CN110008831A (application CN201910134732.2A)
Authority
CN
China
Prior art keywords
image
edge
moving object
area
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910134732.2A
Other languages
Chinese (zh)
Inventor
杨建生
魏小兵
郭帅
张权
苏冠旗
于萍
王志勇
程超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHONGKE INNOVATION (BEIJING) TECHNOLOGY Co Ltd
Jinneng Datuhe Thermal Power Co Ltd
Original Assignee
ZHONGKE INNOVATION (BEIJING) TECHNOLOGY Co Ltd
Jinneng Datuhe Thermal Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHONGKE INNOVATION (BEIJING) TECHNOLOGY Co Ltd, Jinneng Datuhe Thermal Power Co Ltd filed Critical ZHONGKE INNOVATION (BEIJING) TECHNOLOGY Co Ltd
Priority to CN201910134732.2A
Publication of CN110008831A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions


Abstract

The invention discloses an intelligent monitoring fusion system based on computer vision analysis, comprising a video monitoring system, a face recognition system, a video structural analysis system and a video fusion system. The video monitoring system is responsible for collecting video information and temperature information of important plant areas; the face recognition system is responsible for identifying plant workers; the video structural analysis system can judge whether employees wear safety helmets, whether vehicles enter unauthorized areas, whether employees' work clothes match their work areas, and whether employees carry packages; the video fusion system is responsible for the unified management of the information from the video monitoring system, the face recognition system and the video structural analysis system. The system can monitor and analyze the plant in real time, discover abnormal plant conditions and give early warning in time, thereby safeguarding the normal operation of the plant.

Description

Intelligent monitoring fusion system based on computer vision analysis
Technical Field
The invention relates to a factory operating-state monitoring system, and in particular to an intelligent monitoring fusion system based on computer vision analysis, belonging to the field of safety monitoring.
Background
Safety always comes first in the production operation of a factory. In daily factory production, however, there are many safety hazards: the temperature of a critical area may be too high, a worker may enter a working area without wearing a safety helmet, a vehicle may enter an area it is not permitted to enter, a worker may carry objects into a critical area where carrying objects is prohibited, or a worker may enter the wrong working area. All of these seriously threaten the normal operation of the factory.
Disclosure of Invention
In order to solve these problems, the invention provides an intelligent monitoring fusion system based on computer vision analysis, which comprises a video monitoring system, a face recognition system, a video structural analysis system and a video fusion system.
The video monitoring system consists of a front-end camera, a thermal infrared imager and a video monitoring platform; the front-end camera is responsible for collecting video information; the thermal infrared imager is responsible for acquiring temperature information of important areas of a plant area; the video monitoring platform receives video information collected by the front-end camera and temperature information of important areas of the plant, collected by the thermal infrared imager, displays the video information, transmits the video information to the video structural analysis system, and transmits the video information and the temperature information of the important areas of the plant to the video fusion system.
The face recognition system consists of a face snapshot device, a face recognition module and an employee identity database; the face snapshot device collects images of staff entering and leaving each key area of the factory area and transmits the images to the face recognition module; the employee identity database stores the identity information and facial features of each employee; the face recognition module receives the employee image collected by the face snapshot device, performs face recognition, determines identity information of the employee through comparison with the employee identity database, and transmits a recognition result to the video fusion system.
The facial features of the staff in the face recognition system are extracted as follows: acquire a face image of each employee in the factory, each of size M×N; if there are K employees in total, K face images are obtained. Apply gray-scale linear transformation and 3×3 median filtering to the K face images to obtain K face gray images. For each face gray image, concatenate the gray values of its rows of pixels to form a row vector of dimension D = M×N; denote the row vector formed by the i-th face gray image as x_i, and compute the average face Ψ = (1/K)·Σ_{i=1}^{K} x_i. Arrange the K row vectors formed by the K face gray images into a K×D matrix and apply the K-L transform (principal component analysis) to this matrix to obtain the eigenface space w = (u_1, u_2, u_3, …, u_p), where p is the set dimensionality after reduction. Finally, compute the facial feature of each employee; for the j-th employee, the facial feature Ω_j is computed as Ω_j = w^T(x_j - Ψ).
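A minimal Python sketch of this eigenface construction is given below. The function names, the choice p = 20 and the small-matrix shortcut for the K-L transform are illustrative assumptions rather than part of the claimed method.

```python
import numpy as np

def build_eigenface_space(gray_faces, p=20):
    """Build the eigenface space from K grayscale face images (each M x N).
    Returns (w, psi): the D x p eigenface space and the average face (D,)."""
    X = np.stack([img.astype(np.float64).ravel() for img in gray_faces])  # K x D
    psi = X.mean(axis=0)                              # average face
    A = X - psi                                       # centered row vectors
    # K-L transform via the small K x K matrix (standard trick when K << D)
    eigvals, eigvecs = np.linalg.eigh(A @ A.T)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:p]
    w = A.T @ eigvecs[:, order]                       # D x p eigenfaces u_1 .. u_p
    w /= np.linalg.norm(w, axis=0, keepdims=True)
    return w, psi

def face_feature(gray_face, w, psi):
    """Facial feature of one employee: Omega_j = w^T (x_j - psi)."""
    return w.T @ (gray_face.astype(np.float64).ravel() - psi)
```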
The face recognition module in the face recognition system performs face recognition as follows: acquire a face image captured by the face snapshot device at the entrance or exit of a key area of the factory, apply gray-scale linear transformation to obtain a face gray image, and concatenate the gray values of its rows of pixels to obtain a row vector y. Compute the facial feature of this image as Ω_y = w^T(y - Ψ), where w is the eigenface space determined above, w = (u_1, u_2, u_3, …, u_p), and Ψ is the average face obtained above. Compute the Euclidean distance between Ω_y and each facial feature stored in the employee identity database, and select the facial feature with the smallest distance to Ω_y; the employee identity corresponding to that facial feature is the face recognition result.
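The matching step can be sketched as follows; the function name and the representation of the database as parallel lists are assumptions made for illustration, and w and psi are assumed to come from the eigenface construction sketched above.

```python
import numpy as np

def recognize_face(gray_face, w, psi, db_features, db_identities):
    """Match a captured face against the employee identity database.
    db_features is a list of stored Omega vectors, db_identities the matching
    identity records. Returns the identity with the smallest Euclidean
    distance to the probe feature Omega_y."""
    omega_y = w.T @ (gray_face.astype(np.float64).ravel() - psi)
    dists = np.linalg.norm(np.stack(db_features) - omega_y, axis=1)
    return db_identities[int(np.argmin(dists))]
```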
The video structural analysis system consists of a safety helmet identification module, a license plate identification module, a work clothes identification module and a carried object identification module. It receives the video information collected by the video monitoring system, judges whether workers wear safety helmets through the safety helmet identification module, judges whether vehicles enter unauthorized areas through the license plate identification module, judges whether a worker's work clothes match his or her work area through the work clothes identification module, judges whether workers carry packages through the carried object identification module, and transmits the analysis results to the video fusion system.
The process by which the safety helmet identification module judges whether staff are wearing safety helmets is as follows:
Read the video information collected by the camera in the detection area in real time, extract the previous frame and the current frame, and apply graying to them to obtain the gray images of the previous frame and the current frame; process the two gray images with the inter-frame difference method to obtain a moving object map; construct one or more moving object estimation rectangular frames to extract all mutually independent parts of the moving object map, so that if there are K mutually independent parts in the image, K moving object estimation rectangular frames are obtained; extract the part corresponding to each moving object estimation rectangular frame from the gray image and the color image of the current frame to obtain K moving object estimation gray images and K moving object estimation color images; perform edge detection on the K moving object estimation gray images one by one with the Canny operator to obtain K moving object edge images, and in each moving object edge image construct a moving object rectangular frame containing all the edges in the image, obtaining K moving object rectangular frames; identify the human body rectangular frames among the moving object rectangular frames according to the length-width ratio of the human body; assuming that k human body rectangular frames are identified, extract the image of the part corresponding to each human body rectangular frame from the moving object estimation color image to obtain k human body color images; for each human body rectangular frame, locate the person's head according to the upper-body characteristics of the human body and locate the safety helmet estimation area according to the head proportions, obtaining k safety helmet estimation areas; extract the image of the part corresponding to each safety helmet estimation area from the corresponding human body color image to obtain k safety helmet area color images, and judge the safety helmet area color images one by one according to color; the specific steps are as follows:
Step 1: extract the previous frame and the current frame, and apply graying to them with the weighted average method, i.e. assign different weights to the R, G and B components of each pixel according to the formula f(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), where R(x, y), G(x, y) and B(x, y) are the R, G and B components of the pixel at coordinates (x, y) and f(x, y) is the gray value of that pixel after conversion;
Step 2: process the gray images of the previous frame and the current frame with the inter-frame difference method to obtain a moving object map, as follows: subtract the gray values of corresponding pixels of the two frames and take the absolute value to obtain the difference image D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|, where f_n(x, y) denotes the gray value of the pixel at (x, y) in the current frame and f_{n-1}(x, y) the gray value of the pixel at (x, y) in the previous frame; set a threshold T_a; if D_n(x, y) > T_a, set R_n(x, y) = 255, and if D_n(x, y) ≤ T_a, set R_n(x, y) = 0, obtaining the moving object map R; R is a binary image in which the white areas represent moving object areas;
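A short NumPy sketch of these two steps is given below; the function name and the assumption of OpenCV-style BGR frames are illustrative, and the per-pixel threshold follows the example value T_a = f_n(x, y)/10 given later in the detailed description.

```python
import numpy as np

def motion_mask(prev_bgr, cur_bgr):
    """Steps 1-2 sketch: weighted-average graying of two frames, absolute
    inter-frame difference, and thresholding into the binary map R."""
    weights = np.array([0.11, 0.59, 0.30])                 # B, G, R weights
    prev_gray = prev_bgr.astype(np.float64) @ weights      # f_{n-1}(x, y)
    cur_gray = cur_bgr.astype(np.float64) @ weights        # f_n(x, y)
    diff = np.abs(cur_gray - prev_gray)                    # D_n(x, y)
    return np.where(diff > cur_gray / 10.0, 255, 0).astype(np.uint8)
```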
Step 3: construct K moving object estimation rectangular frames to extract all mutually independent parts of the moving object map, obtain the position information of each moving object estimation rectangular frame and store it in a matrix LO; the specific process is as follows:
Step 1: take the lower left corner of the moving object map R obtained in step 2 as the origin and establish coordinate axes so that R lies in the first quadrant; set a vertical line x = h with initial value h = 0;
Step 2: shift the line x = h one unit to the right and examine the pixel value of every pixel on the line x = h;
Step 3: if every pixel on the line x = h has value 0, return to Step 2; if a pixel with a non-zero value exists on the line x = h, set x_min = h;
Step 4: shift the line x = h one unit to the right and examine the pixel value of every pixel on the line x = h;
Step 5: if pixels with non-zero values exist on the line x = h, record the ordinate of the uppermost non-zero pixel and the ordinate of the lowermost non-zero pixel on the line, store them in the array Y, and return to Step 4; if every pixel on the line x = h has value 0, set x_max = h;
Step 6: compare the elements of the array Y to obtain its maximum y_max and minimum y_min; take the line y = y_max as the upper edge, the line y = y_min as the lower edge, the line x = x_min as the left edge and the line x = x_max as the right edge to construct a motion estimation area rectangular frame; clear the array Y and record LO_k = (y_max, y_min, x_min, x_max, k), storing it in the matrix LO, where k has initial value 1 and is incremented by 1 each time a motion estimation area rectangular frame is constructed;
Step 7: repeat Steps 2 to 6 until the line x = h reaches the rightmost side of the binary image, then stop;
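The column scan of Steps 1-7 can be sketched as follows; the function name and the bottom-origin ordinate convention are assumptions made for illustration.

```python
import numpy as np

def scan_motion_boxes(R):
    """Sweep a vertical line x = h across the binary moving-object map R and
    record one rectangle per run of non-empty columns. Returns the rows of
    the matrix LO as tuples (y_max, y_min, x_min, x_max, k), with ordinates
    measured from the bottom of the image as in the description."""
    height, width = R.shape
    LO, ys = [], []
    x_min, k = None, 1
    for h in range(width):
        rows = np.flatnonzero(R[:, h])                 # non-zero pixels on line x = h
        if rows.size:
            if x_min is None:
                x_min = h                              # first non-empty column: left edge
            ys.extend((height - 1 - rows).tolist())    # convert to bottom-origin ordinates
        elif x_min is not None:                        # empty column closes a region
            LO.append((max(ys), min(ys), x_min, h, k))
            ys, x_min, k = [], None, k + 1
    if x_min is not None:                              # region touching the right border
        LO.append((max(ys), min(ys), x_min, width - 1, k))
    return LO
```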
Step 4: extract the image of the part corresponding to each moving object estimation rectangular frame from the gray image and the color image of the current frame to obtain K moving object estimation gray images G_1, …, G_K and K moving object estimation color images P_1, …, P_K; the extraction process is as follows: for the i-th moving object estimation rectangular frame, read the corresponding position information LO_i = (y_max, y_min, x_min, x_max, i) from the matrix LO obtained in step 3, and use the upper, lower, left and right edges to locate the exact position of the moving object estimation rectangular frame for extraction;
Step 5: perform edge detection with the Canny operator on the moving object estimation gray images G_1 to G_K one by one to obtain K edge images; compare every edge pixel in each edge image with the pixel at the corresponding position in the moving object map, and delete from the edge pixels those whose values differ from the pixels at the corresponding positions in the moving object map, obtaining the final moving object edge images; in each moving object edge image construct a moving object rectangular frame that contains all the edges in the image, obtaining K moving object rectangular frames M_1, …, M_K;
The moving object rectangular frame is constructed as follows: first, traverse the moving object edge map from bottom to top to obtain the abscissas of the leftmost edge pixel of each row, X_le = {x_l1, x_l2, x_l3, x_l4, x_l5, …, x_lm}, and of the rightmost edge pixel of each row, X_re = {x_r1, x_r2, x_r3, x_r4, x_r5, …, x_rm}, where x_li is the abscissa of the leftmost edge pixel of the i-th row, x_ri is the abscissa of the rightmost edge pixel of the i-th row, and m is the number of rows of the moving object edge map; then traverse the moving object edge map from left to right to obtain the ordinates of the uppermost edge pixel of each column, Y_he = {y_h1, y_h2, y_h3, y_h4, y_h5, …, y_hn}, and of the lowermost edge pixel of each column, Y_le = {y_l1, y_l2, y_l3, y_l4, y_l5, …, y_ln}, where y_hi is the ordinate of the uppermost edge pixel of the i-th column, y_li is the ordinate of the lowermost edge pixel of the i-th column, and n is the number of columns of the moving object edge map; finally, take the smallest element of X_le as the left edge, the largest element of X_re as the right edge, the largest element of Y_he as the upper edge and the smallest element of Y_le as the lower edge to construct the moving object rectangular frame;
Step 6: judge the moving object rectangular frames M_1 to M_K according to the human body proportions to obtain the human body rectangular frames; assuming that k human body rectangular frames are identified, extract the image of the part corresponding to each human body rectangular frame from the moving object estimation color image to which it belongs, obtaining k human body color images;
Denote the length of a moving object rectangular frame as L, its width as W, and let λ = W/L; set thresholds α and β; if α < λ < β, the frame is regarded as a human body rectangular frame, otherwise it is regarded as another object;
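A compact OpenCV sketch of Steps 5-6 is given below; the Canny thresholds (50, 150) are illustrative assumptions, and α = 3, β = 5 follow the example values given later in the detailed description.

```python
import cv2
import numpy as np

def human_body_box(gray_crop, motion_crop, alpha=3.0, beta=5.0):
    """Steps 5-6 sketch: Canny edges on a moving-object estimation gray image,
    suppression of edge pixels that are background in the motion map, one
    tight rectangle around the remaining edges, then the aspect-ratio test
    alpha < lambda < beta that accepts the rectangle as a human body.
    Returns (left, right, top, bottom) in image coordinates, or None."""
    edges = cv2.Canny(gray_crop, 50, 150)
    edges[motion_crop == 0] = 0              # keep only edges confirmed by the motion map
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None                          # no edges left: no moving object here
    left, right, top, bottom = int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())
    w_extent = bottom - top + 1              # vertical extent (W in the description)
    l_extent = right - left + 1              # horizontal extent (L in the description)
    lam = w_extent / max(l_extent, 1)        # lambda = W / L
    return (left, right, top, bottom) if alpha < lam < beta else None
```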
Step 7: for each human body rectangular frame, locate the person's head according to the upper-body characteristics of the human body, and locate the safety helmet estimation area according to the head proportions, obtaining k safety helmet estimation areas;
The head is located as follows: first, segment the human body rectangular frame to obtain the head area, specifically: take the upper edge of the human body rectangular frame as the upper edge of the head area, take the position one third of the way from the upper edge to the lower edge of the human body rectangular frame as the lower edge of the head area, and take the left and right edges of the human body rectangular frame as the left and right edges of the head area; second, traverse the head area from bottom to top to obtain the abscissas of the leftmost and rightmost edge points of each row, X_ld = {x_l1, x_l2, x_l3, x_l4, x_l5, …, x_lg} and X_rd = {x_r1, x_r2, x_r3, x_r4, x_r5, …, x_rg}, where x_li is the abscissa of the leftmost edge point of the i-th row, x_ri is the abscissa of the rightmost edge point of the i-th row, and g is the number of rows of the head area; then compute the distances from the leftmost edge to the rightmost edge, D = X_rd - X_ld, obtaining D = {D_1, D_2, D_3, D_4, D_5, …, D_g}, and compute the differences between consecutive elements of D, D_Δ = {D_2 - D_1, D_3 - D_2, D_4 - D_3, D_5 - D_4, …, D_g - D_{g-1}}; finally, among the 11th through the last elements of D_Δ, find the largest element and the row number p to which it corresponds, and update the lower edge of the head area to row p;
The safety helmet rectangular frame is constructed as follows: find the smallest element x_ml of X_ld from x_lp to x_lg and the largest element x_mr of X_rd from x_rp to x_rg; take x = x_ml as the left edge of the precise head area, x = x_mr as its right edge, and the upper and lower edges of the head area as the upper and lower edges of the precise head area, obtaining the precise head area; then further segment the precise head area to obtain the safety helmet area, as follows: the upper, left and right edges of the safety helmet area coincide with those of the precise head area, and the position one third of the way from the upper edge to the lower edge of the precise head area is taken as the lower edge of the safety helmet area;
Step 8: extract the image of the safety helmet rectangular frame at the corresponding position of the human body color image to obtain the safety helmet area color image, and judge the safety helmet area color images one by one according to color;
The judgment process is as follows: convert the safety helmet area color image into an HSV image; since safety helmets in ordinary factories are red, blue, white or yellow, these four colors are taken as the judgment standard, and the value ranges of hue H, saturation S and value V corresponding to the four colors are:
Red: H 0-10 and 156-180; S 43-255; V 46-255
Blue: H 100-124; S 43-255; V 46-255
White: H 0-180; S 0-30; V 221-255
Yellow: H 26-34; S 43-255; V 46-255
Each pixel is then classified by color as follows: if the H, S and V values of the pixel all fall within the value range of a certain color, the pixel is judged to belong to that color; after classification, compute the proportions of the red, blue, white and yellow pixels among all pixels, obtaining the red proportion T_r, blue proportion T_b, white proportion T_w and yellow proportion T_y; set a threshold T_1; if any of T_r, T_b, T_w and T_y exceeds T_1, it is judged that a safety helmet is worn; if T_r, T_b, T_w and T_y are all less than T_1, it is judged that no safety helmet is worn.
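The color decision of Step 8 can be sketched with OpenCV as below; the HSV ranges are those listed above (OpenCV's H scale of 0-180), T_1 = 0.5 follows the example value in the detailed description, and the function name is an assumption.

```python
import cv2
import numpy as np

# HSV ranges from the description (lower bound, upper bound) per color
HELMET_RANGES = {
    "red":    [((0, 43, 46), (10, 255, 255)), ((156, 43, 46), (180, 255, 255))],
    "blue":   [((100, 43, 46), (124, 255, 255))],
    "white":  [((0, 0, 221), (180, 30, 255))],
    "yellow": [((26, 43, 46), (34, 255, 255))],
}

def helmet_worn(helmet_bgr, t1=0.5):
    """Classify helmet-area pixels into the four helmet colors and report a
    helmet as worn if any single color's proportion exceeds t1."""
    hsv = cv2.cvtColor(helmet_bgr, cv2.COLOR_BGR2HSV)
    total = hsv.shape[0] * hsv.shape[1]
    for ranges in HELMET_RANGES.values():
        pixels = sum(int(np.count_nonzero(cv2.inRange(hsv, np.array(lo), np.array(hi))))
                     for lo, hi in ranges)
        if pixels / total > t1:
            return True          # one helmet color dominates the area
    return False
```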
The license plate recognition module judges whether a vehicle has entered an unauthorized area as follows:
Extract the current frame and apply graying by the mean-value method and gray stretching to obtain the gray image of the current frame; perform edge extraction on the gray image with the Canny operator to obtain an edge map; apply four dilations, four erosions and median filtering to the edge map to eliminate noise; determine the straight lines in the edge map with the Hough transform, determine the position of the license plate from the characteristics of parallel upper and lower edges, parallel left and right edges and the length-width ratio of a license plate, and construct the license plate recognition area; project the image of the license plate recognition area in the vertical direction and segment it according to the peak groups to obtain the individual character images; recognize each character image with a BP neural network to obtain each character and thus the license plate number; after the license plate number is recognized, query the database to determine the authorized area corresponding to the plate, compare it with the area where the current camera is located, and issue a warning if the camera's area does not belong to the plate's authorized area.
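The vertical-projection segmentation of the plate into characters can be sketched as follows; it assumes a binarized plate image with white characters on a black background, and the minimum character width is an illustrative parameter.

```python
import numpy as np

def split_characters(plate_binary, min_width=2):
    """Segment a binarized license-plate image into character images by
    grouping the peaks of its vertical projection; columns with no white
    pixels separate consecutive characters."""
    projection = (plate_binary > 0).sum(axis=0)     # white pixels per column
    chars, start = [], None
    for x, count in enumerate(projection):
        if count > 0 and start is None:
            start = x                               # entering a peak group
        elif count == 0 and start is not None:
            if x - start >= min_width:
                chars.append(plate_binary[:, start:x])
            start = None
    if start is not None:                           # character touching the right border
        chars.append(plate_binary[:, start:])
    return chars
```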
The work clothes identification module judges whether an employee's work clothes match his or her work area as follows:
Read the video information collected by the camera in the detection area in real time, extract the current frame and the preceding frames, and apply graying to obtain their gray images; model, train and update on the gray images of the preceding frames with a Gaussian mixture model to obtain the final model; judge each pixel of the current frame's gray image with this model to obtain the moving object map; construct one or more moving object rectangular frames to extract all mutually independent parts of the moving object map, so that if there are k mutually independent parts in the image, k moving object rectangular frames are obtained; identify the human body rectangular frames among the moving object rectangular frames according to the length-width ratio of the human body; assuming that l human body rectangular frames are identified, extract the image of each human body rectangular frame at the corresponding position of the current frame's color image to obtain l human body color images; for each human body rectangular frame, determine the position of the upper body according to the human body proportions, take it as the work clothes estimation area, and extract the image of the part corresponding to the work clothes estimation area from the corresponding human body color image to obtain l work clothes color images; judge each work clothes color image according to color, as follows: convert the work clothes color image into an HSV image and classify each pixel; if the H, S and V values of a pixel fall within the value range of a certain color, the pixel is classified as that color; compute the proportion of pixels of each color among all pixels and set a threshold T = 0.6; if the proportion of some color exceeds the threshold T, the work clothes are judged to be of that color; query the database for the work clothes color of the work area corresponding to the current camera's position, compare it with the judged color, and warn if they do not match; if no color's proportion exceeds the threshold T, it is judged that the employee is not wearing work clothes, and a warning is issued.
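The dominant-color decision at the end of this process can be sketched as follows; the color_ranges table (color name to a list of HSV bounds kept in a site database) is an assumption, while T = 0.6 follows the description.

```python
import cv2
import numpy as np

def workwear_color(workwear_bgr, color_ranges, t=0.6):
    """Return the dominant work-clothes color, or None if no color reaches the
    proportion threshold t (interpreted as 'no work clothes worn')."""
    hsv = cv2.cvtColor(workwear_bgr, cv2.COLOR_BGR2HSV)
    total = hsv.shape[0] * hsv.shape[1]
    for name, ranges in color_ranges.items():
        pixels = sum(int(np.count_nonzero(cv2.inRange(hsv, np.array(lo), np.array(hi))))
                     for lo, hi in ranges)
        if pixels / total > t:
            return name          # this color dominates the work-clothes area
    return None
```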
The carried object identification module judges whether an employee is carrying a package as follows:
Read the video information collected by the camera in the detection area in real time, extract the current frame and the preceding frames, and apply graying to obtain their gray images; model, train and update on the gray images of the preceding frames with the ViBe algorithm to obtain the final model; judge each pixel of the current frame's gray image with this model to obtain the motion area, and construct a human body bounding rectangle containing all of the motion area; construct an object estimation rectangle according to the human body proportions; judge whether an object is carried using the symmetry of the object in the object estimation rectangle as the criterion; the specific steps are as follows:
Step 1: read the video information collected by the camera and extract the preceding frames and the current frame; apply graying to each frame with the weighted average method: perform a weighted operation on each pixel, assigning different weights to the R, G and B components according to f(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), where x and y are the pixel coordinates;
Step 2: model, train and update on the gray images of the preceding frames with the ViBe algorithm to obtain the final model;
Step 3: judge each pixel of the current frame's gray image with the final model to obtain the motion area, and construct a human body bounding rectangle containing all of the motion area;
Step 4: form the object estimation rectangle according to the human body proportions, specifically: take the position one fifth of the way from the upper edge to the lower edge of the human body bounding rectangle as the upper edge of the object estimation rectangle, take the position two fifths of the way from the lower edge toward the upper edge of the human body bounding rectangle as the lower edge of the object estimation rectangle, and take the left and right edges of the human body bounding rectangle as the left and right edges of the object estimation rectangle;
Step 5: judge whether an object is carried using the symmetry of the object estimation area as the criterion, as follows: establish a coordinate system with the lower left corner of the object estimation rectangle as the origin, so that the object estimation rectangle lies in the first quadrant; take the midpoint m of the object portion on the upper edge of the object estimation rectangle and obtain its abscissa x_m; take x = x_m as the symmetry axis, compute the area S1 of the object in the region to the left of the symmetry axis and the area S2 of the object in the region to the right of the symmetry axis, and compute the degree of symmetry β = S1/S2; set thresholds th and tl; if tl < β < th, it is judged that no object is carried, otherwise it is judged that an object is carried.
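The symmetry test of Step 5 can be sketched as follows; the binary foreground mask, the variable names and the exact form β = S1/S2 are reconstructions consistent with the example thresholds 0.75 and 1.33 given in the detailed description, not a verbatim transcription of the patent's formula.

```python
import numpy as np

def carries_object(foreground, est_box, tl=0.75, th=1.33):
    """Within the object estimation rectangle, split the foreground by the
    vertical axis through the midpoint of the top-edge foreground run,
    compare the two areas (beta = S1 / S2), and report a carried object
    when the region is clearly asymmetric."""
    left, right, top, bottom = est_box
    roi = foreground[top:bottom, left:right] > 0
    if roi.size == 0:
        return False
    top_cols = np.flatnonzero(roi[0])              # foreground pixels on the upper edge
    if top_cols.size == 0:
        return False                               # nothing on the upper edge: no object
    x_m = int((top_cols[0] + top_cols[-1]) // 2)   # midpoint m of the top-edge run
    s1 = int(roi[:, :x_m].sum())                   # area left of the symmetry axis
    s2 = int(roi[:, x_m:].sum())                   # area right of the symmetry axis
    if s2 == 0:
        return True
    beta = s1 / s2
    return not (tl < beta < th)                    # symmetric region -> no package
```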
In the video fusion system, the video monitoring system, the face recognition system and the video structural analysis system are fused into a user-oriented video application system; functions such as video viewing, video alarm prompting, video playback, alarm recording, user management, role management and authority management are realized in a single piece of software, which facilitates the unified management of the video applications.
The beneficial effects of the invention are as follows: the system can monitor and analyze the working conditions of a factory in real time, and once dangerous conditions are found, such as an employee not wearing a safety helmet, an employee carrying objects into an area where carrying objects is prohibited, a vehicle entering an unauthorized area, an employee entering the wrong work area, or the temperature of a key area being too high, it warns in time so that the situation can be checked.
Drawings
Fig. 1 is a structural diagram of an intelligent monitoring and fusion system based on computer vision analysis according to the present invention.
Detailed Description
The structure of the intelligent monitoring fusion system based on computer vision analysis is shown in FIG. 1, and a video monitoring system consists of a front-end camera, a thermal infrared imager and a video monitoring platform; the front-end camera is responsible for collecting video information; the thermal infrared imager is responsible for acquiring temperature information of important areas of a plant area; the video monitoring platform receives video information collected by the front-end camera and temperature information of important areas of the plant, collected by the thermal infrared imager, displays the video information, transmits the video information to the video structural analysis system, and transmits the video information and the temperature information of the important areas of the plant to the video fusion system.
The face recognition system consists of a face snapshot device, a face recognition module and an employee identity database; the face snapshot device collects images of staff entering and leaving each key area of the factory area and transmits the images to the face recognition module; the employee identity database stores the identity information and facial features of each employee; the face recognition module receives the employee image collected by the face snapshot device, performs face recognition, determines identity information of the employee through comparison with the employee identity database, and transmits a recognition result to the video fusion system.
The video structural analysis system consists of a safety helmet identification module, a license plate identification module, a work clothes identification module and a carried object identification module; it receives the video information collected by the video monitoring system, judges whether workers wear safety helmets through the safety helmet identification module, judges whether vehicles enter unauthorized areas through the license plate identification module, judges whether a worker's work clothes match his or her work area through the work clothes identification module, judges whether workers carry packages through the carried object identification module, and transmits the analysis results to the video fusion system.
The video fusion system comprises a video alarm prompt module, a video playback module, an alarm recording module, a user management module, a role management module and a permission management module.
The process by which the safety helmet identification module judges whether staff are wearing safety helmets is as follows:
Step 1: extract the previous frame and the current frame, and apply graying to them with the weighted average method, i.e. assign different weights to the R, G and B components of each pixel according to the formula f(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), where R(x, y), G(x, y) and B(x, y) are the R, G and B components of the pixel at coordinates (x, y) and f(x, y) is the gray value of that pixel after conversion;
Step 2: process the gray images of the previous frame and the current frame with the inter-frame difference method to obtain a moving object map, as follows: subtract the gray values of corresponding pixels of the two frames and take the absolute value to obtain the difference image D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|, where f_n(x, y) denotes the gray value of the pixel at (x, y) in the current frame and f_{n-1}(x, y) the gray value of the pixel at (x, y) in the previous frame; set a threshold T_a, which in this example is taken as 1/10 of the current-frame pixel value, i.e. T_a = f_n(x, y)/10; if D_n(x, y) > T_a, set R_n(x, y) = 255, and if D_n(x, y) ≤ T_a, set R_n(x, y) = 0, obtaining the moving object map R; R is a binary image in which the white areas represent moving object areas;
Step 3: construct K moving object estimation rectangular frames to extract all mutually independent parts of the moving object map, obtain the position information of each moving object estimation rectangular frame and store it in a matrix LO; the specific process is as follows:
Step 1: take the lower left corner of the moving object map R obtained in step 2 as the origin and establish coordinate axes so that R lies in the first quadrant; set a vertical line x = h with initial value h = 0;
Step 2: shift the line x = h one unit to the right and examine the pixel value of every pixel on the line x = h;
Step 3: if every pixel on the line x = h has value 0, return to Step 2; if a pixel with a non-zero value exists on the line x = h, set x_min = h;
Step 4: shift the line x = h one unit to the right and examine the pixel value of every pixel on the line x = h;
Step 5: if pixels with non-zero values exist on the line x = h, record the ordinate of the uppermost non-zero pixel and the ordinate of the lowermost non-zero pixel on the line, store them in the array Y, and return to Step 4; if every pixel on the line x = h has value 0, set x_max = h;
Step 6: compare the elements of the array Y to obtain its maximum y_max and minimum y_min; take the line y = y_max as the upper edge, the line y = y_min as the lower edge, the line x = x_min as the left edge and the line x = x_max as the right edge to construct a motion estimation area rectangular frame; clear the array Y and record LO_k = (y_max, y_min, x_min, x_max, k), storing it in the matrix LO, where k has initial value 1 and is incremented by 1 each time a motion estimation area rectangular frame is constructed;
Step 7: repeat Steps 2 to 6 until the line x = h reaches the rightmost side of the binary image, then stop;
Step 4: extract the image of the part corresponding to each moving object estimation rectangular frame from the gray image and the color image of the current frame to obtain K moving object estimation gray images G_1, …, G_K and K moving object estimation color images P_1, …, P_K; the extraction process is as follows: for the i-th moving object estimation rectangular frame, read the corresponding position information LO_i = (y_max, y_min, x_min, x_max, i) from the matrix LO obtained in step 3, and use the upper, lower, left and right edges to locate the exact position of the moving object estimation rectangular frame for extraction;
Step 5: perform edge detection with the Canny operator on the moving object estimation gray images G_1 to G_K one by one to obtain K edge images; compare every edge pixel in each edge image with the pixel at the corresponding position in the moving object map, and delete from the edge pixels those whose values differ from the pixels at the corresponding positions in the moving object map, obtaining the final moving object edge images; in each moving object edge image construct a moving object rectangular frame that contains all the edges in the image, obtaining K moving object rectangular frames M_1, …, M_K;
The moving object rectangular frame is constructed as follows: first, traverse the moving object edge map from bottom to top to obtain the abscissas of the leftmost edge pixel of each row, X_le = {x_l1, x_l2, x_l3, x_l4, x_l5, …, x_lm}, and of the rightmost edge pixel of each row, X_re = {x_r1, x_r2, x_r3, x_r4, x_r5, …, x_rm}, where x_li is the abscissa of the leftmost edge pixel of the i-th row, x_ri is the abscissa of the rightmost edge pixel of the i-th row, and m is the number of rows of the moving object edge map; then traverse the moving object edge map from left to right to obtain the ordinates of the uppermost edge pixel of each column, Y_he = {y_h1, y_h2, y_h3, y_h4, y_h5, …, y_hn}, and of the lowermost edge pixel of each column, Y_le = {y_l1, y_l2, y_l3, y_l4, y_l5, …, y_ln}, where y_hi is the ordinate of the uppermost edge pixel of the i-th column, y_li is the ordinate of the lowermost edge pixel of the i-th column, and n is the number of columns of the moving object edge map; finally, take the smallest element of X_le as the left edge, the largest element of X_re as the right edge, the largest element of Y_he as the upper edge and the smallest element of Y_le as the lower edge to construct the moving object rectangular frame;
Step 6: judge the moving object rectangular frames M_1 to M_K according to the human body proportions to obtain the human body rectangular frames; assuming that k human body rectangular frames are identified, extract the image of the part corresponding to each human body rectangular frame from the moving object estimation color image to which it belongs, obtaining k human body color images;
Denote the length of a moving object rectangular frame as L, its width as W, and let λ = W/L; set thresholds α and β, which in this example take the values 3 and 5 respectively; if α < λ < β, the frame is regarded as a human body rectangular frame, otherwise it is regarded as another object;
Step 7: for each human body rectangular frame, locate the person's head according to the upper-body characteristics of the human body, and locate the safety helmet estimation area according to the head proportions, obtaining k safety helmet estimation areas;
The head is located as follows: first, segment the human body rectangular frame to obtain the head area, specifically: take the upper edge of the human body rectangular frame as the upper edge of the head area, take the position one third of the way from the upper edge to the lower edge of the human body rectangular frame as the lower edge of the head area, and take the left and right edges of the human body rectangular frame as the left and right edges of the head area; second, traverse the head area from bottom to top to obtain the abscissas of the leftmost and rightmost edge points of each row, X_ld = {x_l1, x_l2, x_l3, x_l4, x_l5, …, x_lg} and X_rd = {x_r1, x_r2, x_r3, x_r4, x_r5, …, x_rg}, where x_li is the abscissa of the leftmost edge point of the i-th row, x_ri is the abscissa of the rightmost edge point of the i-th row, and g is the number of rows of the head area; then compute the distances from the leftmost edge to the rightmost edge, D = X_rd - X_ld, obtaining D = {D_1, D_2, D_3, D_4, D_5, …, D_g}, and compute the differences between consecutive elements of D, D_Δ = {D_2 - D_1, D_3 - D_2, D_4 - D_3, D_5 - D_4, …, D_g - D_{g-1}}; finally, among the 11th through the last elements of D_Δ, find the largest element and the row number p to which it corresponds, and update the lower edge of the head area to row p;
The safety helmet rectangular frame is constructed as follows: find the smallest element x_ml of X_ld from x_lp to x_lg and the largest element x_mr of X_rd from x_rp to x_rg; take x = x_ml as the left edge of the precise head area, x = x_mr as its right edge, and the upper and lower edges of the head area as the upper and lower edges of the precise head area, obtaining the precise head area; then further segment the precise head area to obtain the safety helmet area, as follows: the upper, left and right edges of the safety helmet area coincide with those of the precise head area, and the position one third of the way from the upper edge to the lower edge of the precise head area is taken as the lower edge of the safety helmet area;
Step 8: extract the image of the safety helmet rectangular frame at the corresponding position of the human body color image to obtain the safety helmet area color image, and judge the safety helmet area color images one by one according to color;
The judgment process is as follows: convert the safety helmet area color image into an HSV image; since safety helmets in ordinary factories are red, blue, white or yellow, these four colors are taken as the judgment standard, and the value ranges of hue H, saturation S and value V corresponding to the four colors are:
Red: H 0-10 and 156-180; S 43-255; V 46-255
Blue: H 100-124; S 43-255; V 46-255
White: H 0-180; S 0-30; V 221-255
Yellow: H 26-34; S 43-255; V 46-255
Each pixel is then classified by color as follows: if the H, S and V values of the pixel all fall within the value range of a certain color, the pixel is judged to belong to that color; after classification, compute the proportions of the red, blue, white and yellow pixels among all pixels, obtaining the red proportion T_r, blue proportion T_b, white proportion T_w and yellow proportion T_y; set a threshold T_1, which in this example takes the value 0.5; if any of T_r, T_b, T_w and T_y exceeds T_1, it is judged that a safety helmet is worn; if T_r, T_b, T_w and T_y are all less than T_1, it is judged that no safety helmet is worn.
The carried object identification module judges whether an employee is carrying a package as follows:
Step 1: read the video information collected by the camera and extract the preceding frames and the current frame; apply graying to each frame with the weighted average method: perform a weighted operation on each pixel, assigning different weights to the R, G and B components according to f(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), where x and y are the pixel coordinates;
Step 2: model, train and update on the gray images of the preceding frames with the ViBe algorithm to obtain the final model;
Step 3: judge each pixel of the current frame's gray image with the final model to obtain the motion area, and construct a human body bounding rectangle containing all of the motion area;
Step 4: form the object estimation rectangle according to the human body proportions, specifically: take the position one fifth of the way from the upper edge to the lower edge of the human body bounding rectangle as the upper edge of the object estimation rectangle, take the position two fifths of the way from the lower edge toward the upper edge of the human body bounding rectangle as the lower edge of the object estimation rectangle, and take the left and right edges of the human body bounding rectangle as the left and right edges of the object estimation rectangle;
Step 5: judge whether an object is carried using the symmetry of the object estimation area as the criterion, as follows: establish a coordinate system with the lower left corner of the object estimation rectangle as the origin, so that the object estimation rectangle lies in the first quadrant; take the midpoint m of the object portion on the upper edge of the object estimation rectangle and obtain its abscissa x_m; take x = x_m as the symmetry axis, compute the area S1 of the object in the region to the left of the symmetry axis and the area S2 of the object in the region to the right of the symmetry axis, and compute the degree of symmetry β = S1/S2; set thresholds tl and th, which in this example take the values 0.75 and 1.33 respectively; if tl < β < th, it is judged that no object is carried, otherwise it is judged that an object is carried.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be conceived by those skilled in the art within the technical scope of the present invention will be covered by the scope of the present invention.

Claims (7)

1. An intelligent monitoring and fusion system based on computer vision analysis is characterized by comprising a video monitoring system, a face recognition system, a video structural analysis system and a video fusion system; wherein,
the video monitoring system consists of a front-end camera, a thermal infrared imager and a video monitoring platform; the front-end camera is responsible for collecting video information; the thermal infrared imager is responsible for acquiring temperature information of important areas of a plant area; the video monitoring platform receives video information acquired by a front-end camera and temperature information of important areas of a plant acquired by a thermal infrared imager, displays the video information, transmits the video information to the video structural analysis system, and transmits the video information and the temperature information of the important areas of the plant to the video fusion system;
the face recognition system consists of a face snapshot device, a face recognition module and an employee identity database; the face snapshot device collects images of staff entering and leaving each key area of the factory area and transmits the images to the face recognition module; the employee identity database stores the identity information and facial features of each employee; the face recognition module receives the employee image collected by the face snapshot device, performs face recognition, determines identity information of the employee through comparison with an employee identity database, and transmits a recognition result to the video fusion system;
the video structural analysis system is composed of a safety helmet identification module, a license plate identification module, a work clothes identification module and a carried object identification module; it receives the video information collected by the video monitoring system, judges whether workers wear safety helmets through the safety helmet identification module, judges whether vehicles enter unauthorized areas through the license plate identification module, judges whether a worker's work clothes match his or her work area through the work clothes identification module, judges whether workers carry packages through the carried object identification module, and transmits the analysis results to the video fusion system;
the video fusion system comprises a video alarm prompt module, a video playback module, an alarm recording module, a user management module, a role management module and a permission management module.
2. An intelligent monitoring fusion system based on computer vision analysis as claimed in claim 1, wherein the facial feature of each employee stored in the employee identity database of the face recognition system is calculated as follows:
acquire a face image of each employee in the factory, each of size M×N; if there are K employees in total, K face images are obtained; apply gray-scale linear transformation and 3×3 median filtering to the K face images to obtain K face gray images; for each face gray image, concatenate the gray values of its rows of pixels to form a row vector of dimension D = M×N, and denote the row vector formed by the i-th face gray image as x_i; calculate the average face Ψ = (1/K)·Σ_{i=1}^{K} x_i; arrange the K row vectors formed by the K face gray images into a K×D matrix and apply the K-L transform to this matrix to obtain the eigenface space w = (u_1, u_2, u_3, …, u_p), where p is the set dimensionality after reduction; calculate the facial feature of each employee; for the j-th employee, the facial feature Ω_j is calculated as Ω_j = w^T(x_j - Ψ).
3. The intelligent monitoring fusion system based on computer vision analysis as claimed in claim 1, wherein the face recognition module in the face recognition system performs face recognition by the following steps:
acquire the face image captured by the face snapshot device of a person entering or leaving a key area of the factory, perform gray-scale linear transformation on the face image to obtain a face gray image, and concatenate the gray values of its rows of pixels to obtain a row vector y; calculate the facial feature of the face image as Ω_y = w^T(y - Ψ), wherein w is the eigenface space determined in claim 2, w = (u_1, u_2, u_3, …, u_p), and Ψ is the average face obtained in claim 2; calculate the Euclidean distance between Ω_y and each facial feature stored in the employee identity database, and select the facial feature with the smallest distance to Ω_y; the employee identity corresponding to that facial feature is the face recognition result.
4. The intelligent monitoring fusion system based on computer vision analysis as claimed in claim 1, wherein the process of the helmet identification module in the video structural analysis system determining whether the employee has worn the helmet is as follows:
Read the video information collected by the camera in the detection area in real time, extract the previous frame and the current frame, and apply graying to them to obtain the gray images of the previous frame and the current frame; process the two gray images with the inter-frame difference method to obtain a moving object map; construct one or more moving object estimation rectangular frames to extract all mutually independent parts of the moving object map, so that if there are K mutually independent parts in the image, K moving object estimation rectangular frames are obtained; extract the part corresponding to each moving object estimation rectangular frame from the gray image and the color image of the current frame to obtain K moving object estimation gray images and K moving object estimation color images; perform edge detection on the K moving object estimation gray images one by one with the Canny operator to obtain K moving object edge images, and in each moving object edge image construct a moving object rectangular frame containing all the edges in the image, obtaining K moving object rectangular frames; identify the human body rectangular frames among the moving object rectangular frames according to the length-width ratio of the human body; assuming that k human body rectangular frames are identified, extract the image of the part corresponding to each human body rectangular frame from the moving object estimation color image to obtain k human body color images; for each human body rectangular frame, locate the person's head according to the upper-body characteristics of the human body and locate the safety helmet estimation area according to the head proportions, obtaining k safety helmet estimation areas; extract the image of the part corresponding to each safety helmet estimation area from the corresponding human body color image to obtain k safety helmet area color images, and judge the safety helmet area color images one by one according to color; the specific steps are as follows:
Step 1: extract the previous frame and the current frame, and apply graying to them with the weighted average method, i.e. assign different weights to the R, G and B components of each pixel according to the formula f(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), where R(x, y), G(x, y) and B(x, y) are the R, G and B components of the pixel at coordinates (x, y) and f(x, y) is the gray value of that pixel after conversion;
Step 2: process the gray images of the previous frame and the current frame with the inter-frame difference method to obtain a moving object map, as follows: subtract the gray values of corresponding pixels of the two frames and take the absolute value to obtain the difference image D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|, where f_n(x, y) denotes the gray value of the pixel at (x, y) in the current frame and f_{n-1}(x, y) the gray value of the pixel at (x, y) in the previous frame; set a threshold T_a; if D_n(x, y) > T_a, set R_n(x, y) = 255, and if D_n(x, y) ≤ T_a, set R_n(x, y) = 0, obtaining the moving object map R; R is a binary image in which the white areas represent moving object areas;
and 3, constructing K moving object estimation rectangular frames, extracting all mutually independent parts in the moving object image, acquiring the position information of each moving object estimation rectangular frame, and storing the position information in a matrix LO, wherein the specific process is as follows:
step 1: taking the lower left corner of the moving object image R obtained in the step2 as an origin, establishing a coordinate axis, wherein the moving object image R is in a first quadrant; setting a straight line x as h, wherein the initial value of h is 0;
Step 2: shift the straight line x = h one unit to the right and examine the pixel value of every pixel on the line x = h;
Step 3: if every pixel on the line x = h has value 0, return to Step 2; if a pixel with a non-zero value exists on the line x = h, set x_min = h;
Step 4: shift the straight line x = h one unit to the right and examine the pixel value of every pixel on the line x = h;
Step 5: if a pixel with a non-zero value exists on the line x = h, record the ordinate of the uppermost non-zero pixel and the ordinate of the lowermost non-zero pixel on that line, store them in the array Y, and return to Step 4; if every pixel on the line x = h has value 0, set x_max = h;
Step 6: compare the elements of the array Y to obtain its maximum y_max and minimum y_min; with the line y = y_max as the upper edge, y = y_min as the lower edge, x = x_min as the left edge and x = x_max as the right edge, construct a motion-estimation-area rectangular frame; clear the array Y, record LO_k = (y_max, y_min, x_min, x_max, k) and store it in the matrix LO, where the initial value of k is 1 and k is incremented by 1 each time a motion-estimation-area rectangular frame is constructed;
Step 7: repeat Step 2 to Step 6 until the straight line x = h reaches the rightmost side of the binary image, then stop;
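The column-scanning procedure of Step 1 to Step 7 can be sketched as follows, assuming the binary map R is a NumPy array; note that NumPy indexes rows from the top, whereas the claim places the origin at the lower-left corner, so the stored ordinates differ only in orientation.

    import numpy as np

    def column_scan_boxes(moving_map):
        """One (y_max, y_min, x_min, x_max, k) tuple per horizontally separated
        white region of the binary moving-object map (Steps 1-7 above)."""
        boxes = []
        w = moving_map.shape[1]
        x_min, ys = None, []
        for x in range(w):
            col = np.flatnonzero(moving_map[:, x])
            if col.size:                      # the column contains moving pixels
                if x_min is None:
                    x_min = x                 # left edge of a new region
                ys.extend([int(col.min()), int(col.max())])
            elif x_min is not None:           # the region just ended
                boxes.append((max(ys), min(ys), x_min, x - 1, len(boxes) + 1))
                x_min, ys = None, []
        if x_min is not None:                 # region touching the right border
            boxes.append((max(ys), min(ys), x_min, w - 1, len(boxes) + 1))
        return boxes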
step4, extracting the image of the part of the current frame's gray image and color image corresponding to each moving-object estimation rectangular frame to obtain K moving-object estimation gray images G_1, …, G_K and K moving-object estimation color images P_1, …, P_K; the extraction process is as follows: for the i-th moving-object estimation rectangular frame, read the corresponding position information LO_i = (y_max, y_min, x_min, x_max, i) from the matrix LO obtained in step3, and use the upper, lower, left and right edges to determine the exact position of the moving-object estimation rectangular frame for extraction;
step5, performing edge detection on the moving-object estimation gray images G_1 to G_K one by one with the Canny operator to obtain K edge images; compare every edge pixel in each edge image with the pixel at the corresponding position in the moving object image, and delete any edge pixel whose value differs from the pixel at the corresponding position in the moving object image, giving the final moving-object edge image; construct in each moving-object edge image a moving-object rectangular frame that contains all edges in the image, obtaining K moving-object rectangular frames M_1, …, M_K;
The moving-object rectangular frame is constructed as follows: first, traverse the moving-object edge map from bottom to top to obtain the abscissas of the leftmost edge pixel of each row, X_le = {x_l1, x_l2, x_l3, x_l4, x_l5, …, x_lm}, and of the rightmost edge pixel of each row, X_re = {x_r1, x_r2, x_r3, x_r4, x_r5, …, x_rm}, where x_li is the abscissa of the leftmost edge pixel of the i-th row, x_ri is the abscissa of the rightmost edge pixel of the i-th row, and m is the number of rows of the moving-object edge map; then, traverse the moving-object edge map from left to right to obtain the ordinates of the uppermost edge pixel of each column, Y_he = {y_h1, y_h2, y_h3, y_h4, y_h5, …, y_hn}, and of the lowermost edge pixel of each column, Y_le = {y_l1, y_l2, y_l3, y_l4, y_l5, …, y_ln}, where y_hi is the ordinate of the uppermost edge pixel of the i-th column, y_li is the ordinate of the lowermost edge pixel of the i-th column, and n is the number of columns of the moving-object edge map; finally, construct the moving-object rectangular frame with the smallest element of X_le as the left edge, the largest element of X_re as the right edge, the largest element of Y_he as the upper edge and the smallest element of Y_le as the lower edge;
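A compact sketch of step5 follows: Canny edges are computed on one estimation gray image, edges outside the binary moving-object region are discarded, and the tight rectangle around the remaining edge pixels gives the moving-object frame. The Canny thresholds are illustrative values, not taken from the claim.

    import cv2
    import numpy as np

    def moving_object_frame(gray_patch, moving_patch):
        edges = cv2.Canny(gray_patch, 50, 150)
        edges[moving_patch == 0] = 0              # keep only edges inside the moving region
        ys, xs = np.nonzero(edges)
        if xs.size == 0:
            return None, edges                    # no edges left in this patch
        # leftmost/rightmost/topmost/bottommost edge pixels define the frame
        return (int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())), edges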
step6, judging the moving-object rectangular frames M_1 to M_K according to human body proportions to obtain human body rectangular frames; supposing k human body rectangular frames are identified, extract the image of the part of the corresponding moving-object estimation color image covered by each human body rectangular frame to obtain k human body color images;
recording the length of a moving-object rectangular frame as L and its width as W, and letting λ = W/L; thresholds α and β are set, and if the length of the moving-object rectangular frame is greater than α and λ is less than β, the frame is regarded as a human body rectangular frame; otherwise it is regarded as another object;
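A one-line check of this aspect-ratio rule might look as follows; the concrete values of α and β are illustrative, since the claim only introduces the thresholds by name.

    def is_human_box(length_l, width_w, alpha=80.0, beta=0.75):
        # lambda = W / L; tall, sufficiently large frames are treated as human bodies
        return length_l > alpha and (width_w / float(length_l)) < beta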
step7, for each human body rectangular frame, positioning the head of a person in the human body rectangular frame according to the upper body characteristics of the human body, and positioning the safety helmet estimation areas according to the head proportion to obtain k safety helmet estimation areas;
the head positioning process is as follows: first, the human body rectangular frame is cut to obtain a head area, specifically: the upper edge of the human body rectangular frame is taken as the upper edge of the head area, the position one third of the way from the upper edge towards the lower edge of the human body rectangular frame is taken as the lower edge of the head area, and the left and right edges of the human body rectangular frame are taken as the left and right edges of the head area; second, the head area is traversed from bottom to top to obtain the abscissas of the leftmost and rightmost edge points of each row, X_ld = {x_l1, x_l2, x_l3, x_l4, x_l5, …, x_lg} and X_rd = {x_r1, x_r2, x_r3, x_r4, x_r5, …, x_rg}, where x_li is the abscissa of the leftmost edge point of the i-th row, x_ri is the abscissa of the rightmost edge point of the i-th row, and g is the number of rows of the head area; then the distance from the leftmost to the rightmost edge, D = X_rd - X_ld, is computed, giving D = {D_1, D_2, D_3, D_4, D_5, …, D_g}, and the differences between consecutive elements of D are computed, D_Δ = {D_2 - D_1, D_3 - D_2, D_4 - D_3, D_5 - D_4, …, D_g - D_(g-1)}; finally, the row number p corresponding to the largest element of D_Δ from its 11th element to its last element is found, and the lower edge of the head area is updated to the p-th row;
the process of constructing the safety helmet rectangular frame is as follows: find the smallest element x_ml of X_ld from x_lp to x_lg and the largest element x_mr of X_rd from x_rp to x_rg; take x = x_ml as the left edge of the precise head area and x = x_mr as its right edge, and take the upper and lower edges of the head area as the upper and lower edges of the precise head area, obtaining the precise head area; then the precise head area is further segmented to obtain the safety helmet area, as follows: the upper, left and right edges of the helmet area coincide with those of the precise head area, and the position one third of the way from the upper edge towards the lower edge of the precise head area is taken as the lower edge of the helmet area;
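The shoulder search and helmet-region cut described above can be sketched as follows, assuming the per-row edge abscissas of the head area are already available, ordered bottom-to-top as in the claim and numbering more than eleven rows; the function and variable names are ours.

    import numpy as np

    def locate_helmet_rows(x_left, x_right):
        """Returns (x_ml, x_mr, helmet_row_lo, helmet_row_hi): the precise head's
        horizontal extent above the shoulder row and the row span of the top
        third of the precise head, as indices into the bottom-to-top ordering."""
        widths = np.asarray(x_right) - np.asarray(x_left)
        jumps = np.diff(widths)                   # D_delta of the claim
        p = 10 + int(np.argmax(jumps[10:]))       # largest width jump, searched from the 11th element
        g = len(widths)
        x_ml = int(min(x_left[p:]))               # tightest box above the shoulders
        x_mr = int(max(x_right[p:]))
        helmet_row_lo = g - (g - p) // 3          # top third of the precise head rows
        return x_ml, x_mr, helmet_row_lo, g - 1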
step8, extracting the image of the safety helmet rectangular frame at the corresponding position of the human body color image to obtain a safety-helmet-area color image, and judging the safety-helmet-area color images one by one according to color;
the judgment process is as follows: the safety-helmet-area color image is converted to an HSV image; since safety helmets in a typical plant are red, blue, white or yellow, these four colors are used as the judgment standard, and the value ranges of hue H, saturation S and value V for the four colors are as follows:
Red: H 0-10 and 156-180; S 43-255; V 46-255
Blue: H 100-124; S 43-255; V 46-255
White: H 0-180; S 0-30; V 221-255
Yellow: H 26-34; S 43-255; V 46-255
The color of each pixel is classified as follows: if the H, S and V values of the pixel all fall within the value range of one of these colors, the pixel is judged to belong to that color; after classification, the proportions of red, blue, white and yellow pixels among all pixels are computed, giving the red ratio T_r, blue ratio T_b, white ratio T_w and yellow ratio T_y; a threshold T_1 is set; if any of T_r, T_b, T_w and T_y exceeds T_1, the safety helmet is judged to be worn; if T_r, T_b, T_w and T_y are all less than T_1, the safety helmet is judged not to be worn.
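As a sketch of this color judgment, the HSV ranges from the table above can be checked with OpenCV's inRange; the value 0.3 for T_1 is an illustrative assumption, as are the function and dictionary names.

    import cv2
    import numpy as np

    # HSV ranges from the table above (OpenCV convention: H 0-180, S/V 0-255)
    HELMET_RANGES = {
        "red":    [((0, 43, 46), (10, 255, 255)), ((156, 43, 46), (180, 255, 255))],
        "blue":   [((100, 43, 46), (124, 255, 255))],
        "white":  [((0, 0, 221), (180, 30, 255))],
        "yellow": [((26, 43, 46), (34, 255, 255))],
    }

    def helmet_worn(helmet_bgr, t1=0.3):
        """Classify every pixel of the helmet-area image by the HSV ranges above
        and report whether any colour ratio exceeds the threshold T_1."""
        hsv = cv2.cvtColor(helmet_bgr, cv2.COLOR_BGR2HSV)
        total = hsv.shape[0] * hsv.shape[1]
        for color, ranges in HELMET_RANGES.items():
            mask = np.zeros(hsv.shape[:2], np.uint8)
            for lo, hi in ranges:
                mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
            if cv2.countNonZero(mask) / float(total) > t1:
                return True, color
        return False, None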
5. An intelligent monitoring fusion system based on computer vision analysis as claimed in claim 1, wherein the steps by which the license plate recognition module in the video structured analysis system judges whether a vehicle has entered an unauthorized area are:
extracting the current frame, graying it with the mean-value method and applying gray stretching to obtain the gray image of the current frame; performing edge extraction on the gray image of the current frame with the Canny operator to obtain an edge map; applying four dilations, four erosions and median filtering to the edge map to remove noise; determining straight lines in the edge map with the Hough transform, locating the license plate from the characteristics that its upper and lower edges are parallel, its left and right edges are parallel, and its length-to-width ratio is fixed, and constructing a license plate recognition area; projecting the image of the license plate recognition area in the vertical direction and segmenting it according to the peak-group characteristics to obtain individual character images; recognizing each character image with a BP neural network to obtain each character and thus the license plate number; once the license plate number is recognized, the database is queried for the authorization area corresponding to that plate, which is compared with the area where the current camera is located; if the camera's area does not belong to the plate's authorized area, a warning is issued.
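A sketch of the pre-processing chain of this claim (mean-value graying, gray stretching, Canny edges, four dilations, four erosions, median filtering) is given below; the Canny thresholds, kernel size and median-filter aperture are illustrative choices.

    import cv2
    import numpy as np

    def plate_candidate_edges(frame_bgr):
        gray = frame_bgr.mean(axis=2).astype(np.uint8)                  # mean-value graying
        stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)  # gray stretching
        edges = cv2.Canny(stretched, 100, 200)
        kernel = np.ones((3, 3), np.uint8)
        cleaned = cv2.dilate(edges, kernel, iterations=4)               # four dilations
        cleaned = cv2.erode(cleaned, kernel, iterations=4)              # four erosions
        return cv2.medianBlur(cleaned, 5)                               # median filtering

    # Hough lines (cv2.HoughLinesP) on the returned map then supply the parallel
    # top/bottom and left/right borders from which the plate rectangle is located.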
6. An intelligent monitoring fusion system based on computer vision analysis as claimed in claim 1, wherein the steps by which the staff work clothes recognition module in the video structured analysis system judges whether the staff work clothes are consistent with the work area are:
reading video information acquired by a camera in a detection area in real time, extracting the current frame and the preceding several frames, and graying them to obtain gray images of the current frame and the preceding frames; modeling, training and updating on the gray images of the preceding frames with a Gaussian mixture model to obtain the final model; judging each pixel of the gray image of the current frame with this model to obtain a moving object image; constructing one or more moving-object rectangular frames to extract all mutually independent parts of the moving object image, so that if k mutually independent parts exist in the image, k moving-object rectangular frames are obtained; confirming human body rectangular frames among the moving-object rectangular frames according to the human body length-to-width ratio; supposing l human body rectangular frames are identified, extracting the image of each human body rectangular frame at the corresponding position of the current frame's color image to obtain l human body color images; for each human body rectangular frame, determining the position of the upper body according to human body proportions, taking the upper body as the work clothes estimation area, and extracting the image of the part of the corresponding human body color image covered by the work clothes estimation area to obtain l work clothes color images; judging each work clothes color image according to color, as follows: convert the work clothes color image to an HSV image and classify each pixel; if the H, S and V values of a pixel fall within the value range of a certain color, the pixel is classified as that color; compute the proportion of each color's pixels among all pixels and set a threshold T = 0.6; if the proportion of some color exceeds the threshold T, the work clothes are judged to be that color, the database is queried for the work clothes color of the work area corresponding to the current camera's position and compared with the judged color, and a warning is issued if they do not match; if no color's proportion exceeds the threshold T, the staff member is judged not to be wearing work clothes and a warning is issued.
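A minimal sketch of the Gaussian-mixture foreground step follows, using OpenCV's MOG2 background subtractor as a stand-in for the mixture model named in the claim; parameter values are illustrative.

    import cv2

    def work_clothes_motion_mask(frames_gray, current_gray):
        """Train a Gaussian-mixture background model on the preceding frames,
        then classify the current frame pixel by pixel into moving foreground."""
        mog = cv2.createBackgroundSubtractorMOG2(history=len(frames_gray), detectShadows=False)
        for f in frames_gray:
            mog.apply(f)                                  # model training / updating
        return mog.apply(current_gray, learningRate=0)    # judge the current frame only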
7. An intelligent monitoring fusion system based on computer vision analysis as claimed in claim 1, wherein the steps by which the object carrying identification module in the video structured analysis system judges whether an employee is carrying a package are:
reading video information acquired by a camera in a detection area in real time, extracting the current frame and the preceding several frames, and graying them to obtain gray images of the current frame and the preceding frames; modeling, training and updating on the gray images of the preceding frames with the ViBe algorithm to obtain the final model; judging each pixel of the gray image of the current frame with this model to obtain the motion area and constructing a human body bounding rectangle containing all of the motion area; constructing an object estimation rectangle according to human body proportions; judging whether an object is carried using the symmetry of the object within the object estimation rectangle as the criterion; the specific steps are as follows:
step1, reading the video information collected by the camera and extracting the preceding frames and the current frame; graying each frame with the weighted average method, i.e. applying a weighted operation to each pixel with different weights for the R, G and B components: f(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), where x and y are the pixel coordinates;
step2, modeling, training and updating on the gray images of the preceding frames with the ViBe algorithm to obtain the final model;
step3, judging each pixel point of the gray image of the current frame by using the final model to obtain a motion area, and constructing a human body boundary rectangle containing all the motion areas;
step4, constructing the object estimation rectangle according to human body proportions, as follows: the position one fifth of the way from the upper edge towards the lower edge of the human body bounding rectangle is taken as the upper edge of the object estimation rectangle, the position two fifths of the way from the lower edge towards the upper edge of the human body bounding rectangle is taken as the lower edge of the object estimation rectangle, and the left and right edges of the human body bounding rectangle are taken as the left and right edges of the object estimation rectangle;
step5, judging whether an object is carried using the symmetry of the object estimation area as the criterion, as follows: establish a coordinate system with the lower left corner of the object estimation rectangle as the origin so that the rectangle lies in the first quadrant; take the midpoint m of the object segment on the upper edge of the object estimation rectangle and obtain its abscissa x_m; take x = x_m as the symmetry axis; compute the area S1 of the object in the region to the left of the symmetry axis and the area S2 of the object in the region to the right, and compute the degree of symmetry β from S1 and S2; set thresholds th and tl; if tl < β < th, judge that no object is carried; otherwise, judge that an object is carried.
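A sketch of this symmetry test is given below; since the original formula for the degree of symmetry is not reproduced legibly, the ratio S1/S2 and the threshold values are assumptions and are labelled as such.

    import numpy as np

    def object_carried(object_mask, tl=0.8, th=1.25):
        """Split the object-estimation rectangle at the abscissa of the midpoint
        of the object pixels along its upper edge, compare the object areas S1
        (left) and S2 (right), and flag a carried object when the ratio beta =
        S1/S2 falls outside (tl, th).  Ratio and thresholds are assumptions."""
        top_row = np.flatnonzero(object_mask[0])          # object pixels on the upper edge
        if top_row.size == 0:
            return False
        x_m = int((top_row.min() + top_row.max()) / 2)    # midpoint of the top object segment
        s1 = np.count_nonzero(object_mask[:, :x_m])
        s2 = np.count_nonzero(object_mask[:, x_m:])
        if s2 == 0:
            return True
        beta = s1 / float(s2)
        return not (tl < beta < th)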
CN201910134732.2A 2019-02-23 2019-02-23 A kind of Intellectualized monitoring emerging system based on computer vision analysis Pending CN110008831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910134732.2A CN110008831A (en) 2019-02-23 2019-02-23 A kind of Intellectualized monitoring emerging system based on computer vision analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910134732.2A CN110008831A (en) 2019-02-23 2019-02-23 A kind of Intellectualized monitoring emerging system based on computer vision analysis

Publications (1)

Publication Number Publication Date
CN110008831A true CN110008831A (en) 2019-07-12

Family

ID=67165928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910134732.2A Pending CN110008831A (en) 2019-02-23 2019-02-23 A kind of Intellectualized monitoring emerging system based on computer vision analysis

Country Status (1)

Country Link
CN (1) CN110008831A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262729A (en) * 2011-08-03 2011-11-30 山东志华信息科技股份有限公司 Fused face recognition method based on integrated learning
CN103235938A (en) * 2013-05-03 2013-08-07 北京国铁华晨通信信息技术有限公司 Method and system for detecting and identifying license plate
CN107632565A (en) * 2017-08-28 2018-01-26 上海欧忆能源科技有限公司 Power construction building site intellectualized management system and method
CN108833831A (en) * 2018-06-15 2018-11-16 陈在新 A kind of power construction intelligent safety monitor system
CN109034535A (en) * 2018-06-21 2018-12-18 中国化学工程第六建设有限公司 Construction site wisdom monitoring method, system and computer readable storage medium
CN109117827A (en) * 2018-09-05 2019-01-01 武汉市蓝领英才科技有限公司 Work clothes work hat wearing state automatic identifying method and alarm system based on video
CN109218673A (en) * 2018-09-20 2019-01-15 国网江苏省电力公司泰州供电公司 The system and method for power distribution network construction safety coordinated management control is realized based on artificial intelligence
CN109215155A (en) * 2018-09-29 2019-01-15 东莞方凡智能科技有限公司 A kind of building site management system based on technology of Internet of things
CN109240311A (en) * 2018-11-19 2019-01-18 国网四川省电力公司电力科学研究院 Outdoor power field construction operation measure of supervision based on intelligent robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fu Jiyong: "Research on Detection of Abandoned Objects and the Persons Who Left Them Based on Omnidirectional Vision", China Master's Theses Full-text Database, Information Science and Technology Section *
Chen Xi: "Research on Key Technologies of a Visual Supervision System for Large Construction Sites", China Master's Theses Full-text Database, Information Science and Technology Section *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11346938B2 (en) 2019-03-15 2022-05-31 Msa Technology, Llc Safety device for providing output to an individual associated with a hazardous environment
CN110428587A (en) * 2019-07-19 2019-11-08 国网安徽省电力有限公司建设分公司 A kind of engineering site early warning interlock method and system
CN110533811A (en) * 2019-08-28 2019-12-03 深圳市万睿智能科技有限公司 The method and device and system and storage medium of safety cap inspection are realized based on SSD
CN110517461A (en) * 2019-08-30 2019-11-29 成都智元汇信息技术股份有限公司 A method of it prevents people from carrying package and escapes safety check
CN110930624A (en) * 2019-12-06 2020-03-27 深圳北斗国芯科技有限公司 Safety in production monitored control system based on big dipper
CN112669505A (en) * 2019-12-16 2021-04-16 丰疆智能科技股份有限公司 Integrated management system for shower and entrance guard of farm and management method thereof
CN111083441A (en) * 2019-12-18 2020-04-28 广州穗能通能源科技有限责任公司 Construction site monitoring method and device, computer equipment and storage medium
CN112949367A (en) * 2020-07-07 2021-06-11 南方电网数字电网研究院有限公司 Method and device for detecting color of work clothes based on video stream data
CN111898514A (en) * 2020-07-24 2020-11-06 燕山大学 Multi-target visual supervision method based on target detection and action recognition
CN111898514B (en) * 2020-07-24 2022-10-18 燕山大学 Multi-target visual supervision method based on target detection and action recognition
CN112071006A (en) * 2020-09-11 2020-12-11 湖北德强电子科技有限公司 High-efficiency low-resolution image area intrusion recognition algorithm and device
CN112466086A (en) * 2020-10-26 2021-03-09 福州微猪信息科技有限公司 Visual identification early warning device and method for farm work clothes
CN112532927A (en) * 2020-11-17 2021-03-19 南方电网海南数字电网研究院有限公司 Intelligent safety management and control system for construction site
CN112528821A (en) * 2020-12-06 2021-03-19 杭州晶一智能科技有限公司 Pedestrian crosswalk pedestrian detection method based on motion detection
CN112883969A (en) * 2021-03-01 2021-06-01 河海大学 Rainfall intensity detection method based on convolutional neural network
CN112883969B (en) * 2021-03-01 2022-08-26 河海大学 Rainfall intensity detection method based on convolutional neural network
CN113379144A (en) * 2021-06-24 2021-09-10 深圳开思信息技术有限公司 Store purchase order generation method and system for online automobile distribution purchase platform
CN113408501A (en) * 2021-08-19 2021-09-17 北京宝隆泓瑞科技有限公司 Oil field park detection method and system based on computer vision
CN113977603A (en) * 2021-10-29 2022-01-28 连云港福润食品有限公司 Monitoring robot based on target detection, identification and tracking for worker production specification
CN115925243A (en) * 2022-12-24 2023-04-07 山西百澳智能玻璃股份有限公司 Method and system for regulating and controlling heating temperature of glass

Similar Documents

Publication Publication Date Title
CN110008831A (en) A kind of Intellectualized monitoring emerging system based on computer vision analysis
CN107622229B (en) Video vehicle re-identification method and system based on fusion features
CN104063722B (en) A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier
CN101510257B (en) Human face similarity degree matching method and device
CN108319934A (en) Safety cap wear condition detection method based on video stream data
CN112396658B (en) Indoor personnel positioning method and system based on video
CN110991348B (en) Face micro-expression detection method based on optical flow gradient amplitude characteristics
CN109635758B (en) Intelligent building site video-based safety belt wearing detection method for aerial work personnel
CN101715111B (en) Method for automatically searching abandoned object in video monitoring
CN102622584B (en) Method for detecting mask faces in video monitor
CN112819094A (en) Target detection and identification method based on structural similarity measurement
Nagane et al. Moving object detection and tracking using Matlab
CN113139521A (en) Pedestrian boundary crossing monitoring method for electric power monitoring
CN112101260B (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN114612823A (en) Personnel behavior monitoring method for laboratory safety management
CN105069816B (en) A kind of method and system of inlet and outlet people flow rate statistical
CN104143077B (en) Pedestrian target search method and system based on image
CN104036250A (en) Video pedestrian detecting and tracking method
CN113989858B (en) Work clothes identification method and system
CN101908153A (en) Method for estimating head postures in low-resolution image treatment
CN112434545A (en) Intelligent place management method and system
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN107045630B (en) RGBD-based pedestrian detection and identity recognition method and system
CN110688969A (en) Video frame human behavior identification method
KR102423934B1 (en) Smart human search integrated solution through face recognition and multiple object tracking technology of similar clothes color

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190712