CN112949367A - Method and device for detecting color of work clothes based on video stream data

Publication number: CN112949367A
Authority: CN (China)
Prior art keywords: determining, rectangular frame, color, work clothes, video stream
Legal status: Pending
Application number: CN202010644888.8A
Other languages: Chinese (zh)
Inventors: 韩利群, 郭晓斌, 林冬, 袁路路, 吴旦, 陈海倩
Current assignee: Southern Power Grid Digital Grid Research Institute Co Ltd
Original assignee: Southern Power Grid Digital Grid Research Institute Co Ltd
Application filed by Southern Power Grid Digital Grid Research Institute Co Ltd
Priority date / filing date: 2020-07-07 (CN202010644888.8A)
Publication date: 2021-06-11 (CN112949367A)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour

Abstract

The invention discloses a method and a device for detecting the color of work clothes based on video stream data. The method and the device determine a first rectangular frame and then a second rectangular frame within it, thereby clearly determining the area of the work clothes on the human body object, eliminating the influence of scene colors and improving the detection success rate.

Description

Method and device for detecting color of work clothes based on video stream data
Technical Field
The invention relates to the technical field of intelligent digital video analysis, and in particular to a method for detecting the color of work clothes based on video stream data.
Background
In traditional video detection, analyzing the identified content has to be done manually. This approach is labor-intensive, involves a large workload, is error-prone, and cannot detect and track moving targets within the monitored area.
In the related art, intelligent video detection, analysis and recognition technology overcomes the limitations of human-eye recognition in traditional detection systems and introduces techniques such as behavior recognition into monitoring for the power industry. However, static classification detectors often misidentify people in field use, leading to wrong judgments about the detected objects.
A dynamic video detection method based on streaming data reduces interference from static, similar-looking targets by first detecting moving human targets and then identifying their work clothes, which helps improve the accuracy with which target persons are judged.
However, this dynamic method does not adapt well to the many different scenes in which work clothes appear; in particular, it is not stable under color changes, so the color of the work clothes cannot be reliably identified. In practical applications each scene has to be tuned separately, which makes the use and maintenance costs high and makes it difficult to turn the method into a product for users.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method and a device for detecting the color of work clothes based on video stream data.
According to one aspect of the invention, a method for detecting the color of work clothes based on video stream data is provided. The method comprises the following steps: determining a first rectangular frame according to a current frame picture and a previous frame picture in video stream data, wherein the first rectangular frame is the minimum area containing a human body object in the video stream data; determining a second rectangular frame in the first rectangular frame, wherein the second rectangular frame is the minimum area containing the work clothes on the human body object; and detecting the color of the work clothes in the second rectangular frame.
According to an embodiment of the present invention, determining the first rectangular frame according to the current frame picture and the previous frame picture in the video stream data includes: acquiring a gray value corresponding to each pixel point in a current frame picture and a previous frame picture; determining a motion area of the human body object according to the gray value; and determining the first rectangular frame according to the motion area.
According to an embodiment of the present invention, determining the motion region of the human body object according to the gray values comprises: calculating D_n(x,y) = |f_n(x,y) - f_{n-1}(x,y)|, where f_n(x,y) is the gray value of the pixel with coordinates (x,y) in the current frame picture and f_{n-1}(x,y) is the gray value of the pixel with coordinates (x,y) in the previous frame picture; and determining the motion region as the area where D_n(x,y) is greater than a predetermined threshold.
According to an embodiment of the present invention, determining the first rectangular frame according to the motion region includes: traversing the motion region from bottom to top to obtain the abscissa xl_e ∈ {xl_1, xl_2, …, xl_m} of the leftmost edge pixel of each row and the abscissa xr_e ∈ {xr_1, xr_2, …, xr_m} of the rightmost edge pixel, wherein xl_e represents the abscissa of the leftmost edge pixel of row e, xr_e represents the abscissa of the rightmost edge pixel of row e, and m represents the total number of rows of the motion region; traversing the motion region from left to right to obtain the ordinate yh_f ∈ {yh_1, yh_2, …, yh_n} of the uppermost edge pixel of each column and the ordinate yl_f ∈ {yl_1, yl_2, …, yl_n} of the lowermost edge pixel, wherein yh_f represents the ordinate of the uppermost edge pixel of column f, yl_f represents the ordinate of the lowermost edge pixel of column f, and n represents the total number of columns of the motion region; and determining the first rectangular frame with the smallest element of {xl_1, xl_2, …, xl_m} as the left edge, the largest element of {xr_1, xr_2, …, xr_m} as the right edge, the largest element of {yh_1, yh_2, …, yh_n} as the upper edge, and the smallest element of {yl_1, yl_2, …, yl_n} as the lower edge.
According to an embodiment of the invention, before determining the motion region of the human object from the gray value, the method further comprises: determining the similarity between the motion area and a preset human body feature model; and if the similarity is larger than a preset threshold value X, determining that the human body object exists in the motion area.
According to an embodiment of the present invention, determining the second rectangular frame in the first rectangular frame includes: removing the head region of the human body object from the motion region.
According to an embodiment of the present invention, removing the head region of the human body object from the motion region comprises: traversing the motion region from bottom to top to obtain the abscissa xl_d ∈ {xl_1, xl_2, …, xl_g} of the leftmost edge pixel of each row and the abscissa xr_d ∈ {xr_1, xr_2, …, xr_g} of the rightmost edge pixel, wherein xl_d represents the abscissa of the leftmost edge pixel of row d, xr_d represents the abscissa of the rightmost edge pixel of row d, and g represents the total number of rows of the motion region; calculating the distance D_d from the leftmost edge pixel to the rightmost edge pixel in row d; calculating the difference ΔD_d of the distance D_d between row d and row d-1; and determining the lower edge of the head region as the row corresponding to the largest element of ΔD_d.
According to an embodiment of the present invention, detecting the color of the work clothes in the second rectangular frame includes: extracting the color image of the work clothes in the second rectangular frame from the current frame picture; determining the proportion of pixels of a preset color among all pixels in the color image; and if the proportion is larger than a preset threshold, determining that the color of the work clothes is the preset color.
According to an embodiment of the present invention, before determining the proportion of pixels of the preset color among all pixels in the color image, the method further includes: converting the color image into an HSV image; and setting the value ranges of hue H, saturation S and value V corresponding to the preset color. Determining the proportion of pixels of the preset color among all pixels in the color image then comprises: determining the proportion of pixels falling within the preset value ranges among all pixels of the HSV image.
According to another aspect of the present invention, there is provided a video stream data-based work clothes color detection apparatus for performing the aforementioned video stream data-based work clothes color detection method. The device includes: the first determining module is used for determining a first rectangular frame according to a current frame picture and a previous frame picture in the video stream data, wherein the first rectangular frame is a minimum area containing a human body in the video stream data; the second determining module is used for determining a second rectangular frame in the first rectangular frame, wherein the second rectangular frame is the minimum area containing the work clothes on the human body; and the detection module is used for detecting the color of the work clothes in the second rectangular frame.
The method and the device determine the first rectangular frame and the second rectangular frame in sequence, thereby clearly determining the area of the work clothes on the human body object, eliminating the influence of scene colors and improving the detection success rate.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method for detecting color of a work clothes based on video stream data according to an embodiment of the present invention;
FIG. 2 is a flow chart of an embodiment of a method for detecting color of a work clothes based on video stream data according to the present invention; and
FIG. 3 is a schematic block diagram of a device for detecting the color of work clothes based on video stream data according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present patent may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the invention, a method for detecting the color of a work clothes based on video stream data is provided. Fig. 1 is a flowchart of a method for detecting color of a work clothes based on video stream data according to an embodiment of the present invention, as shown in fig. 1, the method includes: determining a first rectangular frame according to a current frame picture and a previous frame picture in video stream data, wherein the first rectangular frame is a minimum area containing a human body object in the video stream data; determining a second rectangular frame in the first rectangular frame, wherein the second rectangular frame is a minimum area containing the work clothes on the human body object; and detecting the color of the work clothes in the second rectangular frame.
In the related art, for people in scenes with different colors, the detection of the color of the work clothes is easily influenced by the color of the scenes, so the detection success rate is low. In the embodiment of the invention, the first rectangular frame and the second rectangular frame are determined in sequence, so that the area of the work clothes on the human body object is clearly determined, the influence of scene colors is eliminated, and the detection success rate is improved.
According to an embodiment of the present invention, determining the first rectangular frame from the current frame picture and the previous frame picture in the video stream data includes: acquiring the gray value corresponding to each pixel in the current frame picture and the previous frame picture; determining the motion region of the human body object according to the gray values; and determining the first rectangular frame according to the motion region. Specifically, determining the motion region of the human body object from the gray values comprises: calculating D_n(x,y) = |f_n(x,y) - f_{n-1}(x,y)|, where f_n(x,y) is the gray value of the pixel with coordinates (x,y) in the current frame picture and f_{n-1}(x,y) is the gray value of the pixel with coordinates (x,y) in the previous frame picture, so that D_n(x,y) is the difference between the gray values of corresponding pixels in the two frames; the motion region is determined as the area where this difference is greater than a preset threshold T.
In the embodiment of the present invention, the gray value may be obtained by graying the color image with a common method such as the component method, the maximum method, the average method or the weighted average method. Considering that the human eye has different sensitivity to different colors, the weighted average method is preferred, i.e., the R, G and B components are weighted according to importance and other indicators and then averaged. Because the human eye is most sensitive to green and least sensitive to blue, a more reasonable gray image can be obtained by weighting the RGB components according to the following formula. It should be noted that the weights of the three components are not limited to the listed empirical values; other weights that yield a reasonable gray image still fall within the protection scope of this patent.
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
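Purely as an illustration (the patent itself contains no source code), a minimal Python/NumPy sketch of this weighted-average graying; the function and variable names are our own:

    import numpy as np

    def to_gray(frame_bgr: np.ndarray) -> np.ndarray:
        """Weighted-average graying, f = 0.30*R + 0.59*G + 0.11*B (OpenCV frames are BGR)."""
        weights = np.array([0.11, 0.59, 0.30])  # B, G, R weights from the formula above
        return frame_bgr.astype(np.float32).dot(weights).astype(np.uint8)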
In the embodiment of the invention, because the video sequence acquired by the camera is continuous, consecutive frames change only weakly when there is no motion in the scene, while an obvious change appears between consecutive frames when a motion region exists. Therefore, the inter-frame difference method subtracts the pixel values of corresponding pixels in different frames and examines the absolute value of the gray difference; when this absolute value exceeds a certain threshold, the pixel is judged to belong to the motion region, which realizes the target detection function. A threshold T is set and the pixels are binarized one by one to obtain the binarized image R_n':
R_n'(x,y) = 255, if D_n(x,y) > T; R_n'(x,y) = 0, otherwise.
Points with gray value 255 are foreground (motion region) points, and points with gray value 0 are background points. A connectivity analysis is then performed on R_n' to finally obtain the image R_n containing the whole motion region.
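For illustration only, a sketch of the inter-frame difference and connectivity analysis using OpenCV; the threshold value t = 25 and the choice of keeping the largest connected component are assumptions, since the patent fixes neither:

    import cv2
    import numpy as np

    def motion_mask(prev_gray: np.ndarray, curr_gray: np.ndarray, t: int = 25) -> np.ndarray:
        """Inter-frame difference D_n = |f_n - f_{n-1}|, binarized at threshold T,
        followed by a connectivity analysis keeping the largest component."""
        diff = cv2.absdiff(curr_gray, prev_gray)                # D_n(x, y)
        _, rn = cv2.threshold(diff, t, 255, cv2.THRESH_BINARY)  # binarized image R_n'
        num, labels, stats, _ = cv2.connectedComponentsWithStats(rn)
        if num > 1:  # label 0 is the background
            largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
            rn = np.where(labels == largest, 255, 0).astype(np.uint8)  # R_n
        return rn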
By graying the captured color image, the embodiment of the invention speeds up the subsequent algorithms and improves the overall performance of the system, achieving a more satisfactory result. The inter-frame difference method is simple in principle, computationally cheap, and can quickly detect the motion region in a scene. Detecting motion regions by combining the inter-frame difference method with other detection algorithms also falls within the protection scope of this patent.
According to an embodiment of the present invention, determining the first rectangular frame from the motion region includes: traversing the motion region from bottom to top to obtain the abscissa xl_e ∈ {xl_1, xl_2, …, xl_m} of the leftmost edge pixel of each row and the abscissa xr_e ∈ {xr_1, xr_2, …, xr_m} of the rightmost edge pixel, where xl_e represents the abscissa of the leftmost edge pixel of row e, xr_e represents the abscissa of the rightmost edge pixel of row e, and m represents the total number of rows of the motion region; traversing the motion region from left to right to obtain the ordinate yh_f ∈ {yh_1, yh_2, …, yh_n} of the uppermost edge pixel of each column and the ordinate yl_f ∈ {yl_1, yl_2, …, yl_n} of the lowermost edge pixel, where yh_f represents the ordinate of the uppermost edge pixel of column f, yl_f represents the ordinate of the lowermost edge pixel of column f, and n represents the total number of columns of the motion region; and determining the first rectangular frame with the smallest element of {xl_1, xl_2, …, xl_m} as the left edge, the largest element of {xr_1, xr_2, …, xr_m} as the right edge, the largest element of {yh_1, yh_2, …, yh_n} as the upper edge, and the smallest element of {yl_1, yl_2, …, yl_n} as the lower edge.
This embodiment describes the determination of the first rectangular frame in detail. By constructing the rectangular frame, the moving object is separated from the background, which improves the accuracy of judging the detected target person; a compact sketch follows.
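The construction above reduces to taking the minimum and maximum coordinates of the foreground pixels. A hedged NumPy sketch (note that in image arrays the y axis points down, so the "upper edge" here is the smallest row index):

    import numpy as np

    def first_rect(motion: np.ndarray):
        """Smallest axis-aligned rectangle enclosing the motion region."""
        ys, xs = np.nonzero(motion)  # coordinates of all foreground pixels
        if xs.size == 0:
            return None              # no motion region found
        # min/max over the per-row and per-column extrema reduce to min/max overall
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())  # left, top, right, bottom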
According to an embodiment of the invention, before determining the moving region of the human object from the gray values, the method further comprises: determining the similarity between the motion area and a preset human body feature model; and if the similarity is larger than a preset threshold value X, determining that the human body object exists in the motion area.
In the embodiment of the invention, the human body feature model is obtained by training a neural-network classifier. The specific method is as follows: during training, a large number of pictures containing human bodies are input as positive samples and a large number of pictures without people as negative samples, and the classifier is trained on them to learn the human body feature model; during recognition, the foreground picture of the motion region is input to the trained classifier and matched against the human body feature model.
In this embodiment, the human body feature model is matched against the motion region; since the model is obtained by training a neural-network classifier, the accuracy of the matching can be improved.
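The patent fixes neither the network architecture nor the threshold X. Purely as an illustration of the matching step, a sketch assuming a hypothetical trained classifier score_person that returns a similarity in [0, 1]:

    import numpy as np

    X = 0.8  # preset similarity threshold; the value is an assumption, not from the patent

    def has_human(foreground_bgr: np.ndarray, score_person) -> bool:
        """Match the motion-region foreground against the human body feature model.
        score_person stands in for the trained neural-network classifier,
        whose architecture the patent leaves unspecified."""
        return float(score_person(foreground_bgr)) > X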
According to an embodiment of the present invention, determining the second rectangular frame in the first rectangular frame includes removing the head region of the human body object from the motion region. Specifically, the motion region is traversed from bottom to top to obtain the abscissa xl_d ∈ {xl_1, xl_2, …, xl_g} of the leftmost edge pixel of each row and the abscissa xr_d ∈ {xr_1, xr_2, …, xr_g} of the rightmost edge pixel, where xl_d is the abscissa of the leftmost edge pixel of row d, xr_d is the abscissa of the rightmost edge pixel of row d, and g is the total number of rows of the motion region; the distance D_d from the leftmost edge pixel to the rightmost edge pixel in row d is calculated; the difference ΔD_d of the distance D_d between row d and row d-1 is calculated; and the lower edge of the head region is determined as the row corresponding to the largest element of ΔD_d.
In an embodiment of the present invention, determining the second rectangular frame in the first rectangular frame further includes removing the lower body region of the human body object from the motion region. Specifically, the lower edge of the body area is taken at 1/M of the height of the first rectangular frame, measured from the upper edge, where the value of M is preferably 3.
This embodiment describes the determination of the second rectangular frame in detail. The upper body position of the human body object is estimated from human body proportions and used as the candidate area of the work clothes, which improves the accuracy of the color detection; a sketch of the shoulder-location step follows.
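As an illustration of the shoulder-location step, a sketch under the assumption M = 3; the patent traverses from bottom to top, while this sketch scans from top to bottom so that the shoulders appear as the largest positive width jump:

    import numpy as np

    def shoulder_row(motion: np.ndarray, rect) -> int:
        """Locate the shoulder line (lower edge of the head region) in the body area."""
        left, top, right, bottom = rect
        body_bottom = top + (bottom - top) // 3  # lower edge of the body area (M = 3)
        widths = []                              # per-row foreground widths D_d
        for y in range(top, body_bottom + 1):
            xs = np.nonzero(motion[y, left:right + 1])[0]
            widths.append(int(xs.max() - xs.min()) if xs.size else 0)
        deltas = np.diff(widths)                 # width differences between adjacent rows
        return top + int(np.argmax(deltas)) + 1  # row with the largest width jump

The second rectangular frame then runs from this shoulder row down to the lower edge of the body area.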
According to an embodiment of the present invention, detecting the color of the work clothes in the second rectangular frame includes: extracting the color image of the work clothes in the second rectangular frame from the current frame picture; determining the proportion of pixels of the preset color among all pixels in the color image; and if the proportion is larger than a preset threshold, determining that the color of the work clothes is the preset color.
In this embodiment, before determining the proportion of pixels of the preset color among all pixels in the color image, the method further includes: converting the color image into an HSV image, and setting the value ranges of hue H, saturation S and value V corresponding to the preset color. Determining the proportion then comprises: determining, for each specific color, the proportion T of pixels falling within the preset value range among all pixels of the HSV image. A threshold T1 is set; if the proportion T of any specific color exceeds T1, it is judged that the work clothes are worn; if the proportions T of all specific colors are smaller than T1, it is judged that the work clothes are not worn.
In this embodiment, converting the color image into an HSV image expresses the hue, saturation and brightness of a color very intuitively, which facilitates color comparison. In addition, judging by the proportion of pixels of the HSV image that fall within the preset value ranges improves the stability of work clothes color recognition in environments with sensitive color changes.
In order to further explain the implementation process of the above embodiment, the invention further provides a specific embodiment. Fig. 2 is a flowchart of a specific example of a method for detecting color of a work clothes based on video stream data according to an embodiment of the present invention, as shown in fig. 2, including the following steps 201 to 207.
Step 201, acquiring video stream data.
A camera is arranged in the activity area of the operators on a construction site. The video stream of the camera is acquired and decoded frame by frame, and the current frame and the previous frame are extracted and grayed to obtain the gray image of the current frame and the gray image of the previous frame.
Step 202, active target extraction.
The previous frame and the current frame are extracted and grayed with the weighted average method, i.e., different weights are assigned to the R, G and B components of each pixel according to the formula f(x,y) = 0.30R(x,y) + 0.59G(x,y) + 0.11B(x,y), where R(x,y), G(x,y) and B(x,y) respectively denote the R, G and B components of the pixel with coordinates (x,y), and f(x,y) denotes the converted gray value of that pixel.
The gray images of the previous frame and the current frame are processed with the inter-frame difference method to obtain the moving object map (motion region), as follows: the gray values of corresponding pixels in the two frames are subtracted and the absolute value is taken, giving the difference image D_n(x,y) = |f_n(x,y) - f_{n-1}(x,y)|, where f_n(x,y) denotes the gray value of the pixel with coordinates (x,y) in the current frame and f_{n-1}(x,y) the gray value of the pixel with coordinates (x,y) in the previous frame. A threshold T is set: if D_n(x,y) > T, then R_n = 255; if D_n(x,y) < T, then R_n = 0. This yields the moving object map (motion region) R_n, a binary image in which the white area represents the moving object region.
Step 203, constructing a moving object rectangular frame (the moving object rectangular frame is equivalent to the first rectangular frame).
The moving object edge map is traversed from bottom to top to obtain the abscissa xl_e ∈ {xl_1, xl_2, …, xl_m} of the leftmost edge pixel of each row and the abscissa xr_e ∈ {xr_1, xr_2, …, xr_m} of the rightmost edge pixel, where xl_e is the abscissa of the leftmost edge pixel of row e, xr_e is the abscissa of the rightmost edge pixel of row e, and m is the total number of rows of the moving object edge map (motion region). The map is then traversed from left to right to obtain the ordinate yh_f ∈ {yh_1, yh_2, …, yh_n} of the uppermost edge pixel of each column and the ordinate yl_f ∈ {yl_1, yl_2, …, yl_n} of the lowermost edge pixel, where yh_f is the ordinate of the uppermost edge pixel of column f, yl_f is the ordinate of the lowermost edge pixel of column f, and n is the total number of columns of the map. The moving object rectangular frame is constructed with the smallest element of {xl_1, xl_2, …, xl_m} as the left edge, the largest element of {xr_1, xr_2, …, xr_m} as the right edge, the largest element of {yh_1, yh_2, …, yh_n} as the upper edge, and the smallest element of {yl_1, yl_2, …, yl_n} as the lower edge.
Step 204, matching the human body target.
The picture of the moving target determined in steps 202 and 203 is cropped and matched against the human body feature model. If the similarity is greater than X, it is judged that a human target exists in the foreground picture, and the method proceeds to the next step; otherwise, it is judged that no human target exists in the image to be detected, and the method returns to step 202 to continue extracting moving targets. The larger the value of X, the higher the likelihood that the target is a human.
step 205, shoulder positioning (equivalent to building a second rectangular box).
Cutting a rectangular frame of a human body to obtain a body area, which specifically comprises the following steps: taking the upper edge of the human body rectangular frame as the upper edge of the body area, taking the position of the lower edge 1/3 of the human body rectangular frame close to the upper edge as the lower edge of the body area, and taking the left edge and the right edge of the human body rectangular frame as the left edge and the right edge of the body area; traversing the body region from bottom to top obtains the abscissa, xl, of the leftmost and rightmost edge points of each rowd∈{xl1,xl2,xl3,xl4,xl5,…,xlgAnd xrd∈{xr1,xr2,xr3,xr4,xr5,…,xrgIn which xldRepresents the x (x) r (x) x (dRepresenting the coordinate of the rightmost edge point of the d-th line, and g representing the line number of the body area; then, the distance D from the leftmost edge to the rightmost edge is calculatedd=xrd-xldTo obtain Dd∈{D1,D2,D3,D4,D5,…,Dg}, calculating DdDifference Δ D between the latter and former terms of the element in (A)d={D2-D1,D3-D2,D4-D3,D5-D4,…,Dg-Dg-1}; finally finding out Delta DdUpdating the lower edge of the head region to be the p-th line from the 1 st element to the line number p corresponding to the largest element in the last element;
and step 206, matching colors of the work clothes.
Extracting the image of the rectangular frame of the work clothes at the corresponding position of the colorful image of the human body to obtain the colorful image of the work clothes area, and judging the colorful image of the work clothes area one by one according to the color; the judgment process is as follows: the color image of the work clothes area is converted into an HSV image, the work clothes have red, blue and other colors, the red and blue are taken as examples to serve as judgment standards, and the value ranges of hue H, saturation S and lightness V corresponding to the four colors are as follows:
red: h:0-10 and 156-180; s, 43-255; v46-255
Blue color: h100-124; s, 43-255; v46-255
Step 207, matching the specified colors of the work clothes.
The color of each pixel is classified as follows: if the H, S and V values of a pixel all fall within the value ranges of a certain color, the pixel is judged to belong to that color. After classification, the proportions of red and blue pixels among all pixels are calculated, giving the red proportion Tr and the blue proportion Tb. A threshold T1 is set: if either Tr or Tb exceeds T1, it is judged that the work clothes are worn; if both Tr and Tb are less than T1, it is judged that the work clothes are not worn.
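For illustration, a sketch of steps 206 and 207 with OpenCV, using the red and blue ranges above; the concrete threshold T1 = 0.3 is an assumption, since the patent does not give a value:

    import cv2
    import numpy as np

    # HSV ranges from step 206 (OpenCV convention: H in 0-180, S and V in 0-255)
    RED_RANGES = [((0, 43, 46), (10, 255, 255)), ((156, 43, 46), (180, 255, 255))]
    BLUE_RANGES = [((100, 43, 46), (124, 255, 255))]
    T1 = 0.3  # proportion threshold; the patent does not give a concrete value

    def wearing_work_clothes(clothes_bgr: np.ndarray) -> bool:
        """Compute the red proportion Tr and blue proportion Tb of the clothes area
        in HSV space and compare them with the threshold T1 (steps 206 and 207)."""
        hsv = cv2.cvtColor(clothes_bgr, cv2.COLOR_BGR2HSV)
        total = hsv.shape[0] * hsv.shape[1]

        def proportion(ranges):
            mask = np.zeros(hsv.shape[:2], np.uint8)
            for lo, hi in ranges:
                mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
            return np.count_nonzero(mask) / total

        tr, tb = proportion(RED_RANGES), proportion(BLUE_RANGES)
        return tr > T1 or tb > T1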
According to an embodiment of the present invention, there is also provided a device for detecting the color of work clothes based on video stream data, for performing the aforementioned detection method. FIG. 3 is a schematic block diagram of the device according to an embodiment of the present invention; as shown in FIG. 3, the device includes:
a first determining module 31, configured to determine a first rectangular frame according to a current frame picture and a previous frame picture in the video stream data, where the first rectangular frame is a minimum area containing a human body in the video stream data; a second determining module 32, connected to the first determining module 31, for determining a second rectangular frame in the first rectangular frame determined by the first determining module, wherein the second rectangular frame is a minimum area containing the work clothes on the human body; and a detection module 33, connected to the second determination module 32, for detecting the color of the work clothes in the second rectangular frame determined by the second determination module.
In the embodiment of the present invention, the first determining module 31 includes: the acquisition submodule is used for acquiring a gray value corresponding to each pixel point in the current frame picture and the previous frame picture; the first determining submodule is connected to the obtaining submodule and used for determining a motion area according to the gray value obtained by the obtaining submodule; and a second determination submodule, connected to the first determination submodule, for determining the first rectangular frame based on the motion region determined by the first determination submodule.
In an embodiment of the present invention, the second determining module 32 includes: a first removal submodule for removing a head region of the human subject in the motion region; and a second removal submodule for removing a lower body region of the human body object in the motion region.
In the embodiment of the present invention, the detection module 33 includes: a third determining submodule, used for determining the proportion of pixels of the preset color among all pixels in the color image; and a fourth determining submodule, connected to the third determining submodule, used for determining that the color of the work clothes is the preset color when the proportion determined by the third determining submodule is greater than a preset threshold.
In summary, the embodiments of the present invention provide a method and a device for detecting the color of work clothes based on video stream data. The method and the device determine the first rectangular frame and the second rectangular frame in sequence, thereby clearly determining the area of the work clothes on the human body object, eliminating the influence of scene colors and improving the detection success rate.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for detecting the color of work clothes based on video stream data, characterized by comprising the following steps:
determining a first rectangular frame according to a current frame picture and a previous frame picture in video stream data, wherein the first rectangular frame is a minimum area containing a human body object in the video stream data;
determining a second rectangular frame in the first rectangular frame, wherein the second rectangular frame is a minimum area containing the work clothes on the human body object; and
detecting the color of the work clothes in the second rectangular frame.
2. The method of claim 1, wherein determining the first rectangular frame based on a current frame picture and a previous frame picture in the video stream data comprises:
acquiring a gray value corresponding to each pixel point in the current frame picture and the previous frame picture;
determining a motion area of the human body object according to the gray value; and
and determining the first rectangular frame according to the motion area.
3. The method of claim 2, wherein determining the motion region of the human subject according to the gray value comprises:
calculating D_n(x,y) = |f_n(x,y) - f_{n-1}(x,y)|, wherein f_n(x,y) is the gray value of the pixel with coordinates (x,y) in the current frame picture, and f_{n-1}(x,y) is the gray value of the pixel with coordinates (x,y) in the previous frame picture; and
determining the motion region as the area where D_n(x,y) is greater than a predetermined threshold.
4. The method of claim 2, wherein determining the first rectangular frame according to the motion region comprises:
traversing the motion region from bottom to top to obtain the abscissa xl_e ∈ {xl_1, xl_2, …, xl_m} of the leftmost edge pixel of each row and the abscissa xr_e ∈ {xr_1, xr_2, …, xr_m} of the rightmost edge pixel, wherein xl_e represents the abscissa of the leftmost edge pixel of row e, xr_e represents the abscissa of the rightmost edge pixel of row e, and m represents the total number of rows of the motion region;
traversing the motion region from left to right to obtain the ordinate yh_f ∈ {yh_1, yh_2, …, yh_n} of the uppermost edge pixel of each column and the ordinate yl_f ∈ {yl_1, yl_2, …, yl_n} of the lowermost edge pixel, wherein yh_f represents the ordinate of the uppermost edge pixel of column f, yl_f represents the ordinate of the lowermost edge pixel of column f, and n represents the total number of columns of the motion region; and
determining the first rectangular frame with the smallest element of {xl_1, xl_2, …, xl_m} as the left edge, the largest element of {xr_1, xr_2, …, xr_m} as the right edge, the largest element of {yh_1, yh_2, …, yh_n} as the upper edge, and the smallest element of {yl_1, yl_2, …, yl_n} as the lower edge.
5. The method for detecting color of work clothes based on video stream data according to any one of claims 2 to 4, wherein before determining the motion region of the human subject according to the gray value, the method further comprises:
determining the similarity between the motion area and a preset human body feature model;
and if the similarity is larger than a preset threshold value X, determining that the human body object exists in the motion area.
6. The method of claim 5, wherein determining the second rectangular box in the first rectangular box comprises: removing a head region of the human subject in the motion region.
7. The method of claim 6, wherein removing the head region of the human subject in the motion region comprises:
traversing the motion region from bottom to top to obtain the abscissa xl_d ∈ {xl_1, xl_2, …, xl_g} of the leftmost edge pixel of each row and the abscissa xr_d ∈ {xr_1, xr_2, …, xr_g} of the rightmost edge pixel, wherein xl_d represents the abscissa of the leftmost edge pixel of row d, xr_d represents the abscissa of the rightmost edge pixel of row d, and g represents the total number of rows of the motion region;
calculating the distance D_d from the leftmost edge pixel to the rightmost edge pixel in row d;
calculating the difference ΔD_d of the distance D_d between row d and row d-1; and
determining the lower edge of the head region as the row corresponding to the largest element of ΔD_d.
8. The method of claim 1, wherein detecting the color of the work clothes in the second rectangular frame comprises:
extracting a color image of the work clothes in the second rectangular frame from the current frame picture;
determining the proportion of pixels of a preset color among all pixels in the color image; and
if the proportion is larger than a preset threshold, determining the color of the work clothes as the preset color.
9. The method for detecting the color of work clothes based on video stream data as claimed in claim 8, wherein before determining the proportion of pixels of the preset color among all pixels in the color image, the method further comprises:
converting the color image into an HSV image; and
setting the value ranges of hue H, saturation S and value V corresponding to the preset color;
and wherein determining the proportion of pixels of the preset color among all pixels in the color image comprises: determining the proportion of pixels falling within the preset value ranges among all pixels of the HSV image.
10. A device for detecting the color of work clothes based on video stream data, comprising:
a first determining module, used for determining a first rectangular frame according to a current frame picture and a previous frame picture in video stream data, wherein the first rectangular frame is a minimum area containing a human body in the video stream data;
a second determining module, used for determining a second rectangular frame in the first rectangular frame, wherein the second rectangular frame is a minimum area containing the work clothes on the human body; and
a detection module, used for detecting the color of the work clothes in the second rectangular frame.
CN202010644888.8A (priority date 2020-07-07, filing date 2020-07-07): Method and device for detecting color of work clothes based on video stream data; Pending; published as CN112949367A.

Priority Applications (1)

Application Number: CN202010644888.8A
Priority Date / Filing Date: 2020-07-07
Title: Method and device for detecting color of work clothes based on video stream data

Publications (1)

Publication Number: CN112949367A
Publication Date: 2021-06-11

Family

ID: 76234556

Family Applications (1)

CN202010644888.8A (priority date 2020-07-07, filing date 2020-07-07): Method and device for detecting color of work clothes based on video stream data; Pending

Country Status (1)

CN: CN112949367A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408440A (en) * 2021-06-24 2021-09-17 展讯通信(上海)有限公司 Video data jam detection method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008831A (en) * 2019-02-23 2019-07-12 晋能大土河热电有限公司 A kind of Intellectualized monitoring emerging system based on computer vision analysis
CN110427808A (en) * 2019-06-21 2019-11-08 武汉倍特威视系统有限公司 Police uniform recognition methods based on video stream data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination