CN112330706A - Mine personnel safety helmet segmentation method and device - Google Patents
- Publication number: CN112330706A (application CN202011234672.0A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T7/136—Image analysis; Segmentation; Edge detection involving thresholding
- G06F18/2411—Pattern recognition; Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06T7/181—Image analysis; Segmentation; Edge detection involving edge growing; involving edge linking
- G06T7/194—Image analysis; Segmentation; Edge detection involving foreground-background segmentation
- G06T2207/10016—Indexing scheme for image analysis; Image acquisition modality; Video; Image sequence
Abstract
The application relates to a method and a device for segmenting mine personnel safety helmets. The method comprises the following steps: extracting color feature vectors, texture feature vectors and target contour features from a plurality of superpixel blocks formed by granulating a target image; inputting the color feature vectors and texture feature vectors into a pre-trained support vector machine (SVM) so as to divide all superpixel blocks into target superpixel blocks and background superpixel blocks; correcting the misclassified superpixel blocks according to the target contour features; judging whether background pixel points exist in the corrected target superpixel blocks; and, if not, segmenting the target image according to the target superpixel blocks. The method and device reduce the difficulty of segmenting personnel targets, thereby benefiting applications such as personnel detection, identification, positioning and tracking.
Description
Technical Field
The application relates to the technical field of image segmentation, in particular to a mine personnel safety helmet segmentation method and device.
Background
Mine personnel safety helmet segmentation separates the safety helmet pixel region in a personnel image from the rest of the image. Helmet segmentation is one of the key technologies for intelligent video monitoring of coal mine personnel and a core application of computer vision in intelligent mine monitoring. It can promote related computer-vision-based technologies such as mine personnel scheduling management, target detection and identification, and position information prediction, and can improve the management and control efficiency of personnel operation areas.
There are many image segmentation methods, such as thresholding based on image gray-level features, pixel region growing, edge detection, graph-theoretic graph segmentation, and deep-learning neural networks, but each has problems. The threshold method hinges on a reasonable choice of gray threshold and suits images with a clear gray-level boundary between target and background; region growing segments well when prior information is lacking, but easily over-segments; edge detection tends to produce discontinuous boundary contour lines and poor image region structure; graph segmentation requires the user to specify the target and background during segmentation and is unsuitable for automatic segmentation; deep neural network segmentation requires large amounts of input data, long processing times and demanding computer hardware. As a result, these methods struggle to meet practical requirements in mine video image processing.
Disclosure of Invention
In order to solve the problems described in the background art, the present application provides a mine personnel safety helmet segmentation method and device.
In a first aspect, the present application provides a mine personnel safety helmet segmentation method, including:
extracting color feature vectors, texture feature vectors and target contour features of a plurality of superpixel blocks formed by target image granulation;
inputting the color feature vector and the texture feature vector into a pre-trained Support Vector Machine (SVM) so as to divide all superpixel blocks into a target superpixel block and a background superpixel block;
correcting the misclassified superpixel blocks according to the target contour features;
judging whether a background pixel point exists in the corrected target superpixel block or not;
and if not, segmenting the target image according to the target superpixel block.
Preferably, the determining whether a background pixel point exists in the corrected target superpixel block includes:
extracting a boundary mask of a superpixel area of the corrected target superpixel block;
calculating a difference set of a boundary mask of a superpixel region of the modified target superpixel block and a contour edge of the modified target superpixel block;
if it isThen determining saidAnd background pixel points exist in the corrected target superpixel blocks.
Preferably, the method further comprises:
if yes, classifying the corrected target superpixel blocks again to obtain target pixel point superpixel blocks and background pixel point superpixel blocks;
filtering the background pixel super-pixel block;
and segmenting the target image according to the super pixel block of the target pixel point.
Preferably, the classifying the corrected target superpixel block again includes:
according to the difference setAnd the boundary line of the pixel point of the super pixel block decomposes the super pixel block into the super pixel block of the target pixel point and the super pixel block of the background pixel point.
Preferably, the target in the target image is a safety helmet, and the target contour feature includes:
the safety helmet comprises a safety helmet image, and is characterized in that a zero slope point, a slope catastrophe point and at most two bulges exist on an outer contour line of the safety helmet, and the absolute value of the slope of a curve segment positioned among the zero slope point, the slope catastrophe point and the bulges is monotonously changed in the x direction or the y direction in a plane where the safety helmet image is positioned.
Preferably, the correcting the misclassified superpixel blocks according to the target contour features comprises:
extracting a boundary mask of a superpixel region of the target superpixel block and analyzing the change characteristic of the slope of a straight line between pixels on the boundary mask;
and modifying the category of the target superpixel block according to the change characteristics and the target contour characteristics.
Preferably, the modifying the category to which the target super pixel block belongs according to the variation characteristic and the target contour feature includes:
judging whether the change characteristics accord with the target contour features or not;
if yes, reserving the target superpixel block;
if not, detecting the number of the superpixel blocks contained in the boundary mask of the superpixel region of the target superpixel block;
based on the number, modifying the class to which the target superpixel block belongs according to the change characteristic and the target contour feature.
Preferably, the extracting the color feature vectors and the texture feature vectors of the plurality of super-pixel blocks formed by the target image granulation includes:
describing the texture feature vector by the multi-order moments of the gray-level distribution of the pixel values within the superpixel block, wherein the texture feature vector comprises four attributes: dispersion, variance, skewness and kurtosis.
Preferably, the color feature vector is calculated by a formula in which the color feature vector is assembled from the twelve feature components taken on the four color models.
the dispersion, variance, skewness and kurtosis of the texture feature vector are respectively calculated by formulas over the gray levels, using the gray histogram function corresponding to the superpixel block.
In a second aspect, the present application provides a mine personnel safety helmet segmenting device, comprising:
the extraction module is used for extracting color feature vectors, texture feature vectors and target contour features of a plurality of super pixel blocks formed by target image granulation;
the classification module is used for inputting the color feature vector and the texture feature vector into a pre-trained Support Vector Machine (SVM) so as to divide all superpixel blocks into two types of target superpixel blocks and background superpixel blocks;
the correction module is used for correcting the category of the target superpixel block according to the target contour feature;
the judging module is used for judging whether the corrected target superpixel block has background pixel points;
and the segmentation module is used for segmenting the target image according to the target super-pixel block when the background pixel point does not exist in the corrected target super-pixel block.
In the mine personnel safety helmet segmentation method and device provided by the embodiments of the application, the target image is granulated into a plurality of superpixel blocks based on the SLIC model, the superpixel blocks are classified by a support vector machine (SVM), the classes of the classified superpixel blocks are corrected, and the corrected superpixel blocks are finally used to segment the target image. This avoids the defects of collected personnel video images caused by harsh conditions such as dust interference, high noise and poor illumination, namely uneven illumination, distorted color information, randomly distributed shadows and hard-to-distinguish target and background boundaries. The difficulty of segmenting personnel targets is thereby reduced, which benefits applications such as personnel detection, identification, positioning and tracking.
Drawings
FIG. 1 shows a schematic flow diagram of a mine personnel helmet segmentation method of an embodiment of the present application;
FIG. 2 shows a block schematic diagram of a mine personnel safety helmet segmenting device of an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The underground coal mine environment is special, and is influenced by various severe conditions such as dust interference, large noise, poor illumination conditions and the like, so that the collected personnel video images have the defects of uneven illumination, distorted color information, random shadow distribution, difficulty in distinguishing the target from the background boundary and the like, the personnel target segmentation difficulty is directly increased, and the application of the technologies such as personnel detection, identification, positioning and tracking is not facilitated.
The safety helmet is important safety protection equipment that mine workers must wear; it is a necessary condition for safe operation and indicates a worker's presence. Safety helmet segmentation can promote computer-vision-based applications such as mine personnel scheduling management, target detection and identification, and position information prediction; it can improve the management and control efficiency of personnel working areas; and it can effectively reduce the complexity of segmenting a person's whole body and the amount of data a compression algorithm must process for personnel images.
Therefore, the application provides a mine personnel safety helmet segmentation method and device. Before introducing them, the training process of the support vector machine is first described.
During training, the collected images containing safety helmets are divided into sample images and target images; the sample images are used to train the support vector machine, and the trained support vector machine is used to segment the target images.
First, pixel-level labeling is performed on the safety helmet region in each sample image, and the positions of the pixel points of the helmet region in the labeled image are recorded. The sample image is then superpixel-granulated with different granulation parameter values within a certain range, and the pixel point positions of each superpixel in the sample image are extracted. An intersection operation is performed between each superpixel's pixel set and the labeled helmet region: superpixels whose pixel points belong to the helmet region are taken as helmet superpixel blocks, and superpixels whose pixel points do not belong to it are taken as background superpixel blocks.
Then, the color feature vectors and texture feature vectors of the helmet superpixel blocks and the background superpixel blocks in the sample images are extracted, combined as feature variables, and used to train the support vector machine.
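The labeling step above, which assigns each superpixel of a sample image to the helmet or background class by intersecting it with the annotated helmet mask, might be sketched as follows. The majority-overlap threshold `ratio` is an assumption; the patent's exact assignment rule is not stated in this text:

```python
import numpy as np

def label_superpixels(seg, helmet_mask, ratio=0.5):
    """seg: (H, W) integer superpixel ids; helmet_mask: (H, W) bool array of
    the annotated helmet region. A superpixel is labeled 1 (helmet) when at
    least `ratio` of its pixels fall inside the mask, else 0 (background)."""
    labels = {}
    for sp in np.unique(seg):
        inside = helmet_mask[seg == sp]   # intersection with the labeled mask
        labels[int(sp)] = 1 if inside.mean() >= ratio else 0
    return labels

# Tiny example: superpixel 0 lies inside the mask, superpixel 1 outside.
demo = label_superpixels(np.array([[0, 0, 1, 1]]),
                         np.array([[True, True, False, False]]))
```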
In some embodiments, four color models are selected to describe the color characteristics, combining the actual characteristics of downhole personnel images with the color feature requirements of the helmet segmentation task. The feature components of a superpixel of the target image under these four color models form a 12-dimensional color feature vector, calculated by a formula in which the color feature vector consists of the twelve feature components on the four color models.
In some embodiments, because the helmet, skin, work clothes and environment background of underground personnel show obvious texture differences, distinguishing helmet from non-helmet superpixels by texture features is an effective segmentation approach. The superpixel histogram reflects the frequency of the pixel gray values of a target-image superpixel at each gray level, and four attributes, namely dispersion, variance, skewness and kurtosis, are selected to represent the texture feature vector; they are calculated by formulas over the gray levels using the gray histogram function corresponding to the superpixel block.
Finally, the helmet superpixel blocks are set as positive samples with label "1", the background superpixel blocks as negative samples with label "0", and the support vector machine is trained using automatic hyperparameter optimization.
In some embodiments, cross-validation may be employed to evaluate the classification accuracy of the support vector machine on the sample images. Specifically, the accuracy is evaluated through the prediction error (loss), which describes the probability of mispredicting unknown test samples. For example, to increase the prediction accuracy, the support vector machine is trained using automatic hyperparameter optimization, and the loss value is recalculated after optimization; a smaller loss indicates that the hyperparameter-optimized machine mispredicts unknown test samples relatively rarely.
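A minimal sketch of this training stage using scikit-learn, with a grid search standing in for the unspecified automatic hyperparameter optimizer and cross-validation estimating the loss. The synthetic 16-dimensional features (12 color plus 4 texture components) are placeholders, not the patent's real data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-ins for the per-superpixel feature vectors:
# 12 color components + 4 texture attributes = 16 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 16)),   # background superpixels (label 0)
               rng.normal(3.0, 1.0, (50, 16))])  # helmet superpixels (label 1)
y = np.array([0] * 50 + [1] * 50)

# Grid search stands in for the "automatic hyper-parameter optimization".
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)
search.fit(X, y)

# Cross-validated error rate plays the role of the loss value in the text.
loss = 1.0 - cross_val_score(search.best_estimator_, X, y, cv=5).mean()
```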
The method for segmenting the safety helmet of the mine personnel is described in detail below.
Fig. 1 shows a schematic flow diagram of the mine personnel safety helmet segmentation method of an embodiment of the present application. As shown in Fig. 1, the method comprises the following steps:
step 110, extracting color feature vectors, texture feature vectors and target contour features of a plurality of superpixel blocks formed by target image granulation.
In this embodiment, the target image is granulated in the same way as the sample images, and the color feature vectors and texture feature vectors of the resulting superpixel blocks are extracted in the same way as for the sample images; this is not repeated here.
In some embodiments, the target in the target image is a safety helmet. Helmets are produced strictly according to national standard GB2811-2019 and form the most salient pixel region in an image of mine personnel, having a fixed geometric outline in addition to a specific color class.
In this embodiment, the morphological characteristics of the helmet shell are the main object of study. The main body of the helmet, excluding the brim and peak, is approximately hemispherical, and its external contour is formed by smooth curves. The contour line must contain two types of points: zero-slope points and slope-discontinuity points. One or two pointed "bulges" appear on the outline of the helmet in a frontal image taken at the central position of the helmet, so at most two "bulges" may be present. The absolute value of the slope of each curve segment lying between the zero-slope points, the slope-discontinuity points and the "bulge" regions changes monotonically in the x direction or the y direction in the plane of the helmet image.
Thus, the target contour features may include: zero-slope points, slope-discontinuity points and at most two bulges exist on the outer contour line of the safety helmet, and the absolute value of the slope of each curve segment lying between the zero-slope points, slope-discontinuity points and bulges changes monotonically in the x direction or the y direction in the plane of the safety helmet image.
Step 120, inputting the color feature vectors and the texture feature vectors into the pre-trained support vector machine, whereby all superpixel blocks are divided into two types: target superpixel blocks and background superpixel blocks.
And step 130, modifying the category of the target superpixel block according to the target contour characteristics.
In this embodiment, because the sample images participating in the training of the support vector machine lack global representativeness within the whole data sample set, and because the prediction ability on test samples of the support vector machine obtained through hyperparameter-optimized training is limited, a small number of negative samples are wrongly classified as positive samples, and these need to be corrected.
In some embodiments, the target superpixel block may be modified by:
step 1301, extracting the boundary mask of the superpixel region of the target superpixel block and analyzing the change characteristics of the slope of the straight line between the pixels on the boundary mask.
In this embodiment, the boundary mask of the superpixel region of the target superpixel block is extracted and processed with a morphological dilation operator. A starting point is then selected at any point on the contour line of the boundary mask, and the slope of the straight line between adjacent unit pixels is calculated sequentially in the clockwise or counterclockwise direction until the traversal returns to the starting point, after which the change characteristics of the straight-line slopes are analyzed.
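The clockwise slope traversal can be sketched as follows, assuming the boundary pixels have already been extracted and ordered (the extraction itself is not shown); vertical steps are represented by an infinite slope:

```python
import numpy as np

def boundary_slopes(points):
    """Walk an ordered, closed boundary (list of (x, y) pixel coordinates,
    assumed already sorted clockwise) and return the slope of the straight
    line between each pair of adjacent unit pixels; vertical steps map to
    an infinite slope."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    slopes = []
    for i in range(n):
        dx = pts[(i + 1) % n, 0] - pts[i, 0]   # wrap around to the start
        dy = pts[(i + 1) % n, 1] - pts[i, 1]
        slopes.append(np.inf if dx == 0 else dy / dx)
    return slopes

# A unit square traversed clockwise alternates horizontal and vertical steps.
square = boundary_slopes([(0, 0), (1, 0), (1, 1), (0, 1)])
```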
Step 1302, modifying the category of the target superpixel block according to the change characteristics and the target contour characteristics.
First, it is judged from how many superpixel blocks the boundary mask of the superpixel region of the target superpixel block was generated.
For example, suppose the boundary mask of the superpixel region of the target superpixel block is generated from a single superpixel block. If the change characteristics of the straight-line slopes completely conform to the target contour features, the region is judged to be a helmet area composed of one superpixel block. If the change characteristics only partially conform, the slopes that do not conform are defined as abnormal straight-line slopes; the neighbor superpixel blocks corresponding to the abnormal slope intervals are retrieved through a region adjacency graph, their boundary masks are extracted and fused with the original mask to generate a new superpixel region mask, and the straight-line slopes are updated and their change characteristics re-analyzed. If the proportion of slopes conforming to the target contour features increases, the newly added superpixels are retained; otherwise they are regarded as misclassified samples and filtered out. This is repeated until the proportion of conforming straight-line slopes is maximized.
As another example, suppose the boundary mask of the superpixel region of the target superpixel block is generated from multiple superpixel blocks. If the change characteristics of the straight-line slopes completely conform to the target contour features, the region is judged to be a helmet area composed of multiple superpixel blocks. If only part of the slopes conform, two sub-cases arise. When the conforming and non-conforming slopes are distributed over the same superpixel block boundaries, the neighbor superpixel blocks corresponding to the non-conforming slope intervals are retrieved through the region adjacency graph, their boundary masks are extracted and fused to generate a new superpixel region mask, and the straight-line slopes are updated and re-analyzed; the newly added superpixels are retained when the proportion of conforming slopes increases, and these steps are executed in a loop until that proportion is maximized. When the conforming and non-conforming slopes are distributed on different superpixel boundaries, the superpixel blocks corresponding to the non-conforming slope intervals are retrieved and removed, and the straight-line slopes are updated and re-analyzed; if the proportion of conforming slopes increases, the correction is effective, and the loop is executed until the proportion is maximized.
If the change characteristics of the straight-line slopes do not conform to the target contour features at all, the superpixels corresponding to the slopes are retrieved and removed one by one while the slopes are updated; the region is judged to be a background area, and all superpixel blocks it contains are filtered out.
Step 140, determining whether the modified target superpixel block has background pixel points.
In some embodiments, the following steps may be employed for the determination:
in step 1401, the boundary mask of the superpixel region of the corrected target superpixel block is extracted.
Step 1402, extracting the contour edge of the corrected target superpixel block with an edge operator.
Step 1403, calculating the difference set between the boundary mask of the superpixel region of the corrected target superpixel block and the contour edge of the corrected target superpixel block.
Step 1404, if the difference set is empty, determining that no background pixel points exist in the corrected target superpixel block.
Step 1405, if the difference set is not empty, determining that background pixel points exist in the corrected target superpixel block.
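Steps 1403 to 1405 amount to a set-difference test on two binary masks. A minimal sketch, assuming both inputs are boolean images of the same shape (the edge operator that produces the contour edge is not shown):

```python
import numpy as np

def has_background_pixels(boundary_mask, contour_edge):
    """Difference-set test: D = boundary_mask minus contour_edge. If D is
    empty, the corrected target superpixel block contains no background
    pixel points; otherwise it does. Both arguments are boolean images of
    equal shape."""
    diff = boundary_mask & ~contour_edge   # set difference on binary masks
    return bool(diff.any())

# When the mask and edge coincide, the difference set is empty.
same = has_background_pixels(np.array([[True, True]]), np.array([[True, True]]))
# A mask pixel outside the edge makes the difference set non-empty.
extra = has_background_pixels(np.array([[True, True]]), np.array([[True, False]]))
```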
It should be noted that, if it is determined that the background pixel does not exist in the corrected target super pixel block, step 150 is executed, and if it is determined that the background pixel exists in the corrected target super pixel block, step 160 is executed.
And 150, segmenting the target image according to the target superpixel blocks.
And 160, classifying the corrected target superpixel blocks again to obtain target pixel point superpixel blocks and background pixel point superpixel blocks, filtering the background pixel point superpixel blocks, and segmenting the target image according to the target pixel point superpixel blocks.
In this embodiment, the corrected target superpixel block is classified again as follows: according to the boundary line formed by the pixel points of the difference set, the superpixel block is decomposed into target pixel point superpixel blocks and background pixel point superpixel blocks.
In the embodiments of the application, the target image is granulated into a plurality of superpixel blocks based on the SLIC model, the superpixel blocks are classified by a support vector machine (SVM), the classified superpixel blocks are corrected, and the corrected superpixel blocks are finally used to segment the target image. This avoids the defects of collected personnel video images caused by harsh conditions such as dust interference, high noise and poor illumination, namely uneven illumination, distorted color information, randomly distributed shadows and hard-to-distinguish target and background boundaries. The difficulty of segmenting personnel targets is thereby reduced, which benefits applications such as personnel detection, identification, positioning and tracking.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present application are further described below by way of apparatus embodiments.
Fig. 2 shows a block schematic diagram of a mine personnel safety helmet segmentation device of an embodiment of the present application. As shown in fig. 2, the mine personnel safety helmet segmentation device includes:
the extracting module 210 is configured to extract color feature vectors, texture feature vectors, and target contour features of a plurality of super-pixel blocks formed by the target image granulation.
The classification module 220 is configured to input the color feature vectors and the texture feature vectors into a pre-trained support vector machine SVM, so as to divide all superpixel blocks into two types, namely a target superpixel block and a background superpixel block.
And a correction module 230 for correcting the misclassified superpixel blocks according to the target contour features.
And the judging module 240 is configured to judge whether a background pixel point exists in the corrected target superpixel block.
And a segmentation module 250, configured to segment the target image according to the target super-pixel block when there is no background pixel point in the corrected target super-pixel block.
In some embodiments, the determining module 240 is specifically configured to:
extracting a boundary mask of a superpixel area of the corrected target superpixel block;
calculating a difference set between a boundary mask of a superpixel region of the modified target superpixel block and a contour edge of the modified target superpixel block;
if the difference set is empty, determining that no background pixel point exists in the corrected target superpixel block;
if the difference set is not empty, determining that background pixel points exist in the corrected target superpixel block.
In some embodiments, the segmentation module 250 is further configured to classify the corrected target super-pixel block again when a background pixel exists in the corrected target super-pixel block, so as to obtain a target pixel super-pixel block and a background pixel super-pixel block; filtering background pixel super-pixel blocks; and segmenting the target image according to the superpixel blocks of the target pixel points.
In some embodiments, the segmentation module 250 is further configured to decompose the superpixel block into a target-pixel superpixel block and a background-pixel superpixel block along the boundary line formed by the pixel points in the difference set.
In some embodiments, the target in the target image is a safety helmet, and the target contour features include: zero-slope points, slope mutation points, and at most two protrusions exist on the outer contour line of the safety helmet, and the absolute value of the slope of each curve segment located between the zero-slope points, slope mutation points, and protrusions changes monotonically in the x direction or the y direction in the plane of the safety helmet image.
In some embodiments, the correction module 230 is specifically configured to:
extracting a boundary mask of a superpixel region of a target superpixel block and analyzing the change characteristic of the slope of a straight line between pixels on the boundary mask;
and correcting the misclassified superpixel blocks according to the change characteristics and the target contour characteristics.
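As an illustration of this slope analysis, the sketch below computes the slopes of straight lines between consecutive boundary points and counts zero-slope segments and abrupt sign changes. This simplified reading of the "change characteristic" is an assumption for demonstration, not the patented criterion.

```python
import numpy as np

def segment_slopes(points):
    """Slope of the straight line between each pair of consecutive boundary points."""
    d = np.diff(points, axis=0).astype(float)
    dx = np.where(d[:, 0] == 0, 1e-12, d[:, 0])  # guard against vertical segments
    return d[:, 1] / dx

def change_features(points, tol=1e-6):
    """Count zero-slope segments and abrupt slope sign changes along the contour."""
    s = segment_slopes(points)
    zeros = int(np.sum(np.abs(s) < tol))
    flips = int(np.sum(np.sign(s[1:]) * np.sign(s[:-1]) < 0))
    return zeros, flips

# Toy convex, helmet-like upper contour: the slope rises, flattens at the
# crown, then falls. The sign reversal passes through the zero-slope crown
# segment, so no abrupt sign flip is counted.
pts = np.array([[0, 0], [1, 2], [2, 3], [3, 3], [4, 2], [5, 0]])
zeros, flips = change_features(pts)
```

A contour whose counts match the expected helmet contour features would be retained; otherwise the block's class would be revised, as described above.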
In some embodiments, the correction module 230 is further specifically configured to:
judging whether the change characteristics accord with the target contour characteristics;
if yes, reserving the target superpixel block;
if not, detecting the number of the superpixel blocks contained in the boundary mask of the superpixel region of the target superpixel block;
and modifying the category to which the target superpixel block belongs according to the change characteristics and the target contour features on the basis of the number.
In some embodiments, the extraction module 210 is specifically configured to:
and describing the texture feature vector using the multi-order moments of the gray-level distribution of the pixel values in the superpixel block, wherein the texture feature vector comprises four attributes: dispersion, variance, skewness, and kurtosis.
In some embodiments, the color feature vector is calculated using the following equation:
where the quantity on the left-hand side is the color feature vector, and the twelve quantities on the right-hand side are its feature components on the four color models;
the dispersity, the variance, the skewness and the kurtosis of the texture feature vector are respectively calculated by adopting the following formula:
where the computed quantities are, respectively, the dispersion, the variance, the skewness, and the kurtosis, expressed in terms of the gray level and the gray histogram function corresponding to the superpixel block.
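The four texture attributes can be sketched from the grey-level histogram of a superpixel block as follows. Since the exact formulas are not reproduced in the text, the standard definitions (mean absolute deviation for dispersion; central moments for variance, skewness, and kurtosis) are assumed here for illustration.

```python
import numpy as np

def texture_moments(block_pixels, levels=256):
    """Dispersion, variance, skewness, and kurtosis of the grey-level
    distribution p(z) of one superpixel block (assumed standard definitions)."""
    hist = np.bincount(block_pixels.ravel(), minlength=levels)
    p = hist / hist.sum()                      # grey histogram function p(z)
    z = np.arange(levels, dtype=float)         # grey levels z
    mean = (z * p).sum()
    var = ((z - mean) ** 2 * p).sum()
    sigma = np.sqrt(var) if var > 0 else 1.0   # avoid 0/0 for uniform blocks
    dispersion = (np.abs(z - mean) * p).sum()
    skewness = (((z - mean) / sigma) ** 3 * p).sum()
    kurtosis = (((z - mean) / sigma) ** 4 * p).sum()
    return dispersion, var, skewness, kurtosis

flat = np.full((8, 8), 100)                          # uniform block: zero spread
d, v, s, k = texture_moments(flat)
d2, v2, s2, k2 = texture_moments(np.array([0, 200, 0, 200]))  # two-level block
```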
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the modules described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited thereto: all equivalent changes made according to the structure, shape, and principle of the present application shall fall within the protection scope of the present application.
Claims (10)
1. A mine personnel safety helmet segmentation method is characterized by comprising the following steps:
extracting color feature vectors, texture feature vectors and target contour features of a plurality of superpixel blocks formed by target image granulation;
inputting the color feature vector and the texture feature vector into a pre-trained Support Vector Machine (SVM) so as to divide all superpixel blocks into a target superpixel block and a background superpixel block;
correcting the misclassified superpixel blocks according to the target contour features;
judging whether a background pixel point exists in the corrected target superpixel block or not;
and if not, segmenting the target image according to the target super pixel block.
2. The method of claim 1, wherein said determining whether a background pixel point exists in the modified target superpixel block comprises:
extracting a boundary mask of a superpixel area of the corrected target superpixel block;
calculating a difference set of a boundary mask of a superpixel region of the modified target superpixel block and a contour edge of the modified target superpixel block;
3. The method of claim 2, further comprising:
if yes, classifying the corrected target superpixel blocks again to obtain target pixel point superpixel blocks and background pixel point superpixel blocks;
filtering the background pixel super-pixel block;
and segmenting the target image according to the super pixel block of the target pixel point.
4. The method of claim 3, wherein said re-classifying the modified target superpixel block comprises:
5. The method of claim 1, wherein the object in the object image is a helmet, the object contour feature comprising:
the safety helmet comprises a safety helmet image, and is characterized in that a zero slope point, a slope catastrophe point and at most two bulges exist on an outer contour line of the safety helmet, and the absolute value of the slope of a curve segment positioned among the zero slope point, the slope catastrophe point and the bulges is monotonously changed in the x direction or the y direction in a plane where the safety helmet image is positioned.
6. The method of claim 5, wherein said modifying the misclassified superpixel based on the target contour features comprises:
extracting a boundary mask of a superpixel region of the target superpixel block and analyzing the change characteristic of the slope of a straight line between pixels on the boundary mask;
and modifying the category of the target superpixel block according to the change characteristics and the target contour characteristics.
7. The method of claim 6, wherein said modifying the class to which the target superpixel block belongs according to the variance characteristic and the target contour characteristic comprises:
judging whether the change characteristics accord with the target contour features or not;
if yes, reserving the target superpixel block;
if not, detecting the number of the superpixel blocks contained in the boundary mask of the superpixel region of the target superpixel block;
based on the number, modifying the class to which the target superpixel block belongs according to the change characteristic and the target contour feature.
8. The method of claim 1, wherein the extracting color feature vectors and texture feature vectors of a plurality of superpixel blocks formed by target image granulation comprises:
and describing the texture feature vector using the multi-order moments of the gray-level distribution of the pixel values in the superpixel block, wherein the texture feature vector comprises four attributes: dispersion, variance, skewness, and kurtosis.
9. The method of claim 8, wherein the color feature vector is calculated using the following equation:
where the quantity on the left-hand side is the color feature vector, and the twelve quantities on the right-hand side are its feature components on the four color models;
the dispersity, the variance, the skewness and the kurtosis of the texture feature vector are respectively calculated by adopting the following formula:
where the computed quantities are, respectively, the dispersion, the variance, the skewness, and the kurtosis, expressed in terms of the gray level and the gray histogram function corresponding to the superpixel block.
10. A mine personnel safety helmet segmenting device, comprising:
the extraction module is used for extracting color feature vectors, texture feature vectors and target contour features of a plurality of super pixel blocks formed by target image granulation;
the classification module is used for inputting the color feature vector and the texture feature vector into a pre-trained Support Vector Machine (SVM) so as to divide all superpixel blocks into two types of target superpixel blocks and background superpixel blocks;
the correction module is used for correcting the category of the target superpixel block according to the target contour feature;
the judging module is used for judging whether the corrected target superpixel block has background pixel points;
and the segmentation module is used for segmenting the target image according to the target super-pixel block when the background pixel point does not exist in the corrected target super-pixel block.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011234672.0A CN112330706A (en) | 2020-11-07 | 2020-11-07 | Mine personnel safety helmet segmentation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112330706A true CN112330706A (en) | 2021-02-05 |
Family
ID=74316463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011234672.0A Withdrawn CN112330706A (en) | 2020-11-07 | 2020-11-07 | Mine personnel safety helmet segmentation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112330706A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114120358A (en) * | 2021-11-11 | 2022-03-01 | 国网江苏省电力有限公司技能培训中心 | Super-pixel-guided deep learning-based identification method for head-worn safety helmet of person |
CN114120358B (en) * | 2021-11-11 | 2024-04-26 | 国网江苏省电力有限公司技能培训中心 | Super-pixel-guided deep learning-based personnel head-mounted safety helmet recognition method |
CN115439474A (en) * | 2022-11-07 | 2022-12-06 | 山东天意机械股份有限公司 | Rapid positioning method for power equipment fault |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhao et al. | Cloud shape classification system based on multi-channel cnn and improved fdm | |
CN109961049B (en) | Cigarette brand identification method under complex scene | |
CN110992381B (en) | Moving object background segmentation method based on improved Vibe+ algorithm | |
CN112288706A (en) | Automatic chromosome karyotype analysis and abnormality detection method | |
CN109684959B (en) | Video gesture recognition method and device based on skin color detection and deep learning | |
CN107230188B (en) | Method for eliminating video motion shadow | |
CN111738271B (en) | Method for identifying blocked fruits in natural environment | |
CN108268823A (en) | Target recognition methods and device again | |
CN105513053A (en) | Background modeling method for video analysis | |
JP6932402B2 (en) | Multi-gesture fine division method for smart home scenes | |
CN112330706A (en) | Mine personnel safety helmet segmentation method and device | |
Campos et al. | Discrimination of abandoned and stolen object based on active contours | |
Cheng et al. | Urban road extraction via graph cuts based probability propagation | |
CN114241542A (en) | Face recognition method based on image stitching | |
CN112446417B (en) | Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation | |
CN110544262A (en) | cervical cell image segmentation method based on machine vision | |
CN113723314A (en) | Sugarcane stem node identification method based on YOLOv3 algorithm | |
CN112307894A (en) | Pedestrian age identification method based on wrinkle features and posture features in community monitoring scene | |
CN108564020B (en) | Micro-gesture recognition method based on panoramic 3D image | |
CN116524410A (en) | Deep learning fusion scene target detection method based on Gaussian mixture model | |
Dantas et al. | A deterministic technique for identifying dicotyledons in images | |
CN111415350B (en) | Colposcope image identification method for detecting cervical lesions | |
CN213241250U (en) | Miner safety helmet detection system | |
CN110599518B (en) | Target tracking method based on visual saliency and super-pixel segmentation and condition number blocking | |
Sujatha et al. | An innovative moving object detection and tracking system by using modified region growing algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20210205 |