CN114005058A - Dust identification method and device and terminal equipment - Google Patents

Dust identification method and device and terminal equipment

Info

Publication number
CN114005058A
CN114005058A (application CN202111263432.8A)
Authority
CN
China
Prior art keywords
dust
image
object contour
contour
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111263432.8A
Other languages
Chinese (zh)
Inventor
刘勇
马永祥
郭艳
尹绪昆
曹金岗
乔艳敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute Of Applied Mathematics Hebei Academy Of Sciences
Original Assignee
Institute Of Applied Mathematics Hebei Academy Of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute Of Applied Mathematics Hebei Academy Of Sciences filed Critical Institute Of Applied Mathematics Hebei Academy Of Sciences
Priority to CN202111263432.8A priority Critical patent/CN114005058A/en
Publication of CN114005058A publication Critical patent/CN114005058A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10148 Varying focus

Abstract

The application is applicable to the technical field of image processing, and provides a dust identification method, a dust identification device and terminal equipment. The dust identification method comprises the following steps: acquiring a video image of a target area; processing the video image to obtain a foreground binary image; carrying out object contour detection on the foreground binary image to obtain an object contour set containing a plurality of object contours; and detecting each object contour in the object contour set based on at least one of color features, shape features and diffusivity features of dust, and determining whether each object contour contains dust. The application eliminates the need to densely deploy large numbers of dust detectors, avoids the impact of installation work on production activities, reduces installation complexity, and lowers the cost of commissioning and device maintenance.

Description

Dust identification method and device and terminal equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a dust identification method and device and terminal equipment.
Background
In metallurgy, building materials, metal mines, quarries, construction-waste treatment, building blasting, ports, industrial storage yards and similar sites, dust is generated during raw-material transport, yard construction, material conveying and production operations, and is typically released as unorganized (fugitive) emissions. Fugitive dust is emitted in a dispersed, irregular manner, diffuses unpredictably and over a wide range, and readily causes non-point-source pollution, so monitoring and treating it effectively is very difficult; traditional single-point dust-removal equipment, such as bag/cartridge filters, electrostatic precipitators, and water-spray or mist systems, struggles to achieve the desired effect against it.
At present, some industries and enterprises monitor dust with online dust-concentration monitors, but such devices can only measure dust concentration, PM2.5, PM10, temperature, humidity, atmospheric pressure, wind direction and wind force at the local points where they are installed; they cannot measure the dust concentration at arbitrary positions within an area. Accurate area-wide dust monitoring would require densely deploying a large number of dust-concentration monitors across an entire storage yard or work site, which entails high investment, complex installation and high maintenance costs, and the mounting supports would seriously interfere with vehicle movement, personnel operations and other activities in those places.
Disclosure of Invention
In order to overcome the problems in the related art, the embodiment of the application provides a dust identification method and device and terminal equipment.
The application is realized by the following technical scheme:
in a first aspect, an embodiment of the present application provides a dust identification method, including: acquiring a video image of a target area; processing the video image to obtain a foreground binary image; carrying out object contour detection on the foreground binary image to obtain an object contour set containing a plurality of object contours; and detecting each object contour in the object contour set based on at least one of color features, shape features and diffusivity features of the dust, and determining whether each object contour contains the dust.
According to the dust identification method, a video image of the target area is obtained and processed into a foreground binary image; object contour detection is then performed on the foreground binary image to obtain an object contour set comprising a plurality of object contours, and finally each object contour in the set is checked for dust based on the characteristics of dust. The method thus eliminates the step of densely arranging a large number of dust detectors, avoids the impact of installation work on production activities, reduces installation complexity, and lowers the cost of use and device maintenance.
Based on the first aspect, in some possible implementations, detecting each object contour in the set of object contours based on at least one of a color feature, a shape feature and a diffusivity feature of dust, and determining whether each object contour includes dust, includes: extracting the region contour of each motion region in the object contour, and determining the minimum circumscribed rectangle of the region contour's vertical boundary together with the region contour's pixel area and barycentric coordinates; identifying dust in the region contour based on the aspect ratio and rectangularity of the minimum circumscribed rectangle to obtain a suspected dust region, where rectangularity is the ratio of the contour area to the area of the minimum circumscribed rectangle; and calculating the pixel gray-value range corresponding to dust from the dust color-feature detection rule, and identifying the suspected dust region based on that gray-value range.
Based on the first aspect, in some possible implementations, identifying the suspected dust region based on the pixel gray-value range yields a first dust region, and after that identification the method further includes: determining whether the first dust region is a real dust region based on the rate of change of the area of the same region contour across adjacent video frames.
Based on the first aspect, in some possible implementations, the determining whether the first dust region is a real dust region based on a rate of change of an area of a same region contour corresponding to adjacent video frames includes: for any two objects, if the gravity center coordinates of the motion profiles meet a preset relation, determining that the two objects are the same object and counting the two objects; otherwise, adding the two objects in a preset suspected dust list, and counting and timing the two objects; if the number of the second objects in the suspected dust list exceeds a number threshold and the increase rate of the area of the second objects reaches an increase rate threshold, determining that the second objects are dust; and if the existence time of the first object in the suspected dust list exceeds a time threshold, eliminating the first object from the suspected dust list, wherein the first object is any one object in the suspected dust list.
Based on the first aspect, in some possible implementation manners, the processing the video image to obtain a foreground binary image includes: defogging the video image to obtain a gray image; and carrying out background modeling on the gray level image, and extracting a foreground moving object by a background difference method to obtain the foreground binary image.
Based on the first aspect, in some possible implementations, performing the defogging processing on the video image to obtain a grayscale image includes: calculating the gray histogram of the video image channel by channel to obtain a maximum gray value and a minimum gray value, the two together forming an extreme-value range; dividing the video image into a plurality of image blocks; for a first image block whose gray values lie outside the extreme-value range, assigning its gray value to 0 or 255; and for a second image block whose gray values lie within the extreme-value range, linearly transforming its gray values to 0-255 to obtain the gray image. The coefficient of the linear transformation (its expression is rendered only as a figure in the original) depends on k', a constant; δ₁², the intra-block gray variance of the second image block; and δ₂², the image-noise variance of the second image block.
Based on the first aspect, in some possible implementations, before the object contour detection is performed on the foreground binary image, the method includes: and carrying out corrosion and expansion treatment on the foreground binary image to remove image noise in the foreground binary image.
In a second aspect, an embodiment of the present application provides a dust identification device, including: the acquisition module is used for acquiring a video image of a target area; the processing module is used for processing the video image to obtain a foreground binary image; the object contour identification module is used for carrying out object contour detection on the foreground binary image to obtain an object contour set containing a plurality of object contours; and the dust identification module is used for detecting each object contour in the object contour set based on at least one of color features, shape features and diffusivity features of dust, and determining whether the object contour set contains dust contours.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the dust identification method according to any one of the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the dust identification method according to any one of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to execute the dust identification method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic view of an application scenario of a dust identification method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a dust identification method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a video image defogging process according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of background modeling of a grayscale image by a Gaussian mixture model according to an embodiment of the present application;
fig. 5 is a foreground binary image processed by a gaussian mixture model according to an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart of determining an object contour set of a foreground binary image according to an embodiment of the present disclosure;
fig. 7 is a schematic flow chart of determining whether an image contains dust based on a dust diffusivity characteristic provided by an embodiment of the present application;
fig. 8 is an image judged to be dust through diffusivity characteristics according to an embodiment of the present application, shown with experimental visual calibration;
fig. 9 is a schematic structural diagram of a dust identification device provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a dust identification device provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In metallurgy, building materials, metal mines, quarries, construction-waste treatment, building blasting, ports, industrial storage yards and similar sites, dust is generated during raw-material transport, yard construction, material conveying and production operations, and is typically released as unorganized (fugitive) emissions. Fugitive dust is emitted in a dispersed, irregular manner, diffuses unpredictably and over a wide range, and readily causes non-point-source pollution, so monitoring and treating it effectively is very difficult; traditional single-point dust-removal equipment, such as bag/cartridge filters, electrostatic precipitators, and water-spray or mist systems, struggles to achieve the desired effect against it.
At present, some industries and enterprises monitor dust with online dust-concentration monitors, but such devices can only measure dust concentration, PM2.5, PM10, temperature, humidity, atmospheric pressure, wind direction and wind force at the local points where they are installed; they cannot measure the dust concentration at arbitrary positions within an area. Accurate area-wide dust monitoring would require densely deploying a large number of dust-concentration monitors across an entire storage yard or work site, which entails high investment, complex installation and high maintenance costs, and the mounting supports would seriously interfere with vehicle movement, personnel operations and other activities in those places.
Based on the above problem, an embodiment of the present application provides a dust identification method, which includes obtaining a video image of a target area, processing the video image to obtain a foreground binary image, and then performing object contour detection on the foreground binary image to obtain an object contour set including a plurality of object contours. And finally, detecting whether each object contour in the object contour set contains dust or not based on the characteristics of the dust.
For example, the embodiment of the present application can be applied to the exemplary scenario shown in fig. 1. The scene comprises a video image acquisition device 10 and a terminal device 20. The video image capturing device 10 is used for capturing a video image of a target area, which may contain dust, and sending the video image to the terminal device 20. The terminal device 20 is used to identify whether dust is contained in the video image.
For example, the terminal device 20 processes the video image to obtain a foreground binary image, then performs object contour detection on the foreground binary image to obtain an object contour set including a plurality of object contours, and then detects whether each object contour in the object contour set includes dust based on the characteristics of the dust.
In this embodiment, the terminal device 20 may be an industrial robot, a mobile phone, a computer, a tablet computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA), and other terminals, and the specific type of the terminal device 20 is not limited in this embodiment.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to fig. 1, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 2 is a schematic flow chart of a dust identification method according to an embodiment of the present application, and with reference to fig. 2, the dust identification method is described in detail as follows:
in step 101, a video image of a target area is acquired.
The image obtained by the camera in real time is a video stream. For example, video images of the target scene at different time points may be acquired by the camera, and the image of the target scene in each video frame may be saved through the OpenCV library: the cap.read() function in OpenCV is called to store the video stream acquired by the camera frame by frame. Of course, video images at different time points in the video stream may also be obtained by other algorithms, as in the sketch below.
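As a minimal illustrative sketch (the camera index, clip length and variable names are assumptions, not part of the original), the frame-capture step might look like this with OpenCV's Python bindings:

```python
import cv2

# Sketch of the frame-capture step. The camera index (0) and clip length
# are assumptions; an RTSP camera URL works the same way.
cap = cv2.VideoCapture(0)

frames = []
while len(frames) < 100:            # collect a short clip for analysis
    ok, frame = cap.read()          # cap.read() returns (success flag, BGR frame)
    if not ok:
        break
    frames.append(frame)
cap.release()
```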
In one scene, a terminal device can send a video image acquisition instruction to a video image acquisition device at intervals of preset time, and the video image acquisition device acquires a video image of a target area based on the instruction and returns the video image within a certain time period to the terminal device, so that the terminal device executes subsequent steps to identify dust in the video image.
In another scenario, the video image capturing device may also return the video image captured in real time to the terminal device, so that the terminal device executes the subsequent steps to identify dust in the video image, which is not limited in this embodiment of the present application.
In step 102, the video image is processed to obtain a foreground binary image.
Illustratively, step 102 may include: performing defogging, down-scaling and other processing on the video image sequence to obtain a clearer grayscale image sequence; then performing background modeling on the grayscale images and extracting foreground moving objects by the background difference method to obtain the foreground binary image.
The video image sequence is defogged with an adaptive histogram equalization method, specifically: calculating the gray histogram of the video image channel by channel to obtain a maximum gray value and a minimum gray value, which form an extreme-value range; dividing the video image into a plurality of image blocks; for a first image block whose gray values lie outside the extreme-value range, assigning its gray value to 0 or 255; and for a second image block whose gray values lie within the extreme-value range, linearly transforming its gray values to 0-255 to obtain a gray image. The coefficient of the linear transformation (its expression is rendered only as a figure in the original) depends on k', a constant; δ₁², the intra-block gray variance of the second image block; and δ₂², the image-noise variance of the second image block.
Referring to fig. 3, the defogging process of the video image a will be described as an example. The video image A is a video image obtained by converting a video stream acquired by a camera, and the video image A is a single-frame image.
Specifically, after video image A is obtained, the gray levels of its pixels are counted and the gray histogram of video image A is built from those counts. The cumulative histogram of the gray histogram is then computed, a mapping relation between the gray levels of the gray histogram and those of the cumulative histogram is determined, and the new gray level of each pixel is output in a loop according to that mapping, yielding a new image A', namely the defogged grayscale image of video image A.
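A minimal sketch of this cumulative-histogram mapping follows (the function and variable names are assumptions; the effect is essentially that of standard histogram equalization, e.g. cv2.equalizeHist):

```python
import numpy as np

def equalize_gray(img_gray):
    # Build the gray histogram and its cumulative histogram (the Fig. 3 flow),
    # then remap every pixel through the normalized cumulative histogram.
    hist = np.bincount(img_gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    lut = cdf.astype(np.uint8)      # gray-level mapping relation
    return lut[img_gray]            # output each pixel's new gray level
```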
In this embodiment, background modeling is performed on the grayscale image with a Gaussian mixture model algorithm, and the OpenCV-based KNN background subtractor has its parameters set according to experimental data, reducing the influence of similar targets such as the water mist sprayed by fog cannons.
Illustratively, each pixel of the background image may be modeled by a mixture of K Gaussian distributions:

P(x_j) = Σ_{i=1..K} ω_{i,t} · η(x_j, μ_{i,t}, Σ_{i,t})

where x_j denotes the value of pixel j at time t (for an RGB three-channel pixel, x_j is the vector x_j = [x_jR, x_jG, x_jB]); ω_{i,t} is the estimated weight coefficient of the i-th Gaussian distribution of the mixture at time t, with Σ_{i=1..K} ω_{i,t} = 1; μ_{i,t} and Σ_{i,t} are the mean vector and covariance matrix of the i-th Gaussian at time t (the red, green and blue components of a pixel are assumed mutually independent, so Σ_{i,t} = σ_{i,t}² I); and η is the Gaussian probability density function:

η(x, μ, Σ) = (2π)^{-n/2} |Σ|^{-1/2} exp(-(1/2)(x-μ)ᵀ Σ⁻¹ (x-μ))
and initializing a first Gaussian distribution corresponding to each pixel in a first frame of image, assigning the mean value to the value of the current pixel, assigning the weight value to 1, and initializing the mean values and the weight values of other Gaussian distribution functions except the first Gaussian distribution to zero.
At time t, each pixel X_t of the image frame is matched against its Gaussian mixture model. The matching rule is: if the distance between the pixel value X_t and the mean of the i-th Gaussian distribution G_i in the mixture is less than 2.5 times the standard deviation of G_i, then G_i is defined to match X_t.
If at least one Gaussian distribution in the pixel's mixture model matches the pixel value X_t, the parameters of the mixture model are updated as follows: 1) for unmatched Gaussian distributions, the mean μ and covariance matrix remain unchanged; 2) the mean and covariance matrix of the matched Gaussian distribution G_i are updated as:

μ_{i,t} = (1-ρ)·μ_{i,t-1} + ρ·X_t

Σ_{i,t} = (1-ρ)·Σ_{i,t-1} + ρ·diag[(X_t - μ_{i,t})(X_t - μ_{i,t})ᵀ]

ρ = α·η(X_t | μ_i, σ_i)

where α is the learning rate of the parameter estimation.
There are two important factors that influence whether a distribution is a background distribution: (1) the proportional size of the data generated by the distribution; (2) the variance of the distribution. Based on the two factors, the following method is adopted for estimation:
and sequencing the k Gaussian distributions forming each pixel Gaussian mixture model according to the ratio of omega k/sigma k from large to small, wherein omega k represents the proportion of data generated by the kth distribution, and sigma k represents the variance of the kth distribution. The first B gaussian distributions in the sequence are selected as background pixel models:
Figure BDA0003326301040000102
wherein T is a predetermined threshold (T is more than or equal to 0.5 and less than or equal to 1); b is the K Gaussian distributions after sorting, and the first B Gaussian distributions are the best description of the background pixel.
Each pixel value X_t at time t is then checked against the first B Gaussian distributions obtained above: if X_t matches any of the first B distributions, the pixel is a background point; otherwise it is classified as foreground, i.e., part of a moving object. A minimal sketch of the matching rule follows.
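The per-channel 2.5σ test described above might be written as below (a sketch; the function name and vector layout are assumptions):

```python
import numpy as np

def matches(x_t, mu, sigma):
    # 2.5-sigma rule: pixel value X_t matches Gaussian G_i when its distance
    # to the mean is below 2.5 standard deviations in every channel.
    return bool(np.all(np.abs(x_t - mu) < 2.5 * sigma))
```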
Referring to fig. 4, taking video image a as an example: the preceding frame a' and the following frame a'' of video image a are selected, each background in images a, a' and a'' is modeled with the Gaussian mixture model to extract the background, and the pixels in the backgrounds of a, a' and a'' are checked against the background Gaussian model; a matching pixel belongs to the background, otherwise to the foreground.
Specifically, the parameters of the predefined Gaussian models are initialized to obtain the required parameters. A first pixel in the consecutive video frames a, a' and a'' is then processed to determine whether it matches one of the models, where the first pixel is a randomly selected pixel at the same position in frames a, a' and a''. If the first pixel matches a model, it is assigned to that model and the model is updated with the new pixel value. If the first pixel matches no model, a new Gaussian model is established from it with initialized parameters, replacing the least probable of the existing models; the most probable models are then selected to obtain an approximate background model and, from it, the foreground image.
Referring to fig. 5, which shows a foreground binary image produced by the Gaussian mixture model: processing the original scene video image through step 102 yields the foreground binary image shown in fig. 5. A sketch of this step with OpenCV's built-in subtractor follows.
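As a minimal sketch of the background-subtraction step, using the OpenCV KNN subtractor referenced above (the history and distance-threshold values are assumptions to be tuned on experimental data):

```python
import cv2

# OpenCV's KNN background subtractor, as referenced above. The history and
# distance-threshold values are assumptions to be tuned on experimental data.
subtractor = cv2.createBackgroundSubtractorKNN(history=500,
                                               dist2Threshold=400.0,
                                               detectShadows=False)

for frame in frames:                    # frames from the capture step above
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = subtractor.apply(gray)    # foreground binary image (cf. fig. 5)
```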
In step 103, object contour detection is performed on the foreground binary image to obtain an object contour set including a plurality of object contours.
For example, the object contour in the foreground binary image may be detected using a findContours () function, and drawn using a drawContours () function.
Specifically, during contour extraction a one-dimensional array records the information of the 8-neighborhood around each pixel; if all 8 neighbors have the same gray value as the center point, the point lies inside the object and can be deleted, otherwise it lies on the edge of the image and must be preserved. Each pixel in the image is processed in turn, and what remains is the contour of the image.
Illustratively, a boundary tracking algorithm may be used: a boundary point a is found in the image, and from it the next boundary point b is searched for according to the search criterion, until the tracked point returns to the initial boundary point a.
Referring to fig. 6, a boundary point a is selected from the foreground binary image a, all boundary points in the foreground binary image a are tracked in a traversing manner until the boundary point a' coincides with the boundary point a, and the operation is stopped to obtain an object contour set of the foreground binary image.
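In OpenCV this whole step is available directly through the findContours()/drawContours() pair named above; a minimal sketch (fg_mask names the foreground binary image from the previous step, an assumption):

```python
import cv2

# Object-contour detection on the foreground binary image (step 103), using
# the findContours()/drawContours() pair named above (OpenCV 4.x signature).
contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
vis = cv2.cvtColor(fg_mask, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 255, 0), 2)   # draw the contour set
```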
Optionally, before step 103, the dust identification method may further include: and carrying out corrosion and expansion treatment on the foreground binary image to remove image noise in the foreground binary image.
Wherein, the erosion treatment and the expansion treatment are morphological methods and are used for processing the highlight part of the image. The dilation and erosion operation is to convolve the image with a kernel. The kernel can be of any shape and size, with an independently defined reference point. The expansion is an operation of solving a local maximum value, the maximum value of a pixel point of a region covered by a preset kernel is calculated by convolution of the preset kernel and the image, and the maximum value is assigned to a pixel designated by a reference point, so that a highlight region in the image is gradually increased. The corrosion is an operation of solving a local minimum value, the minimum value of pixel points in an area covered by a preset kernel is calculated by convolution of the preset kernel and an image, and the minimum value is assigned to a pixel designated by a reference point, so that highlight areas in the image are gradually reduced.
Specifically, noise in the foreground binary image often spans more than a single pixel and cannot be removed completely by erosion and dilation alone. With the connected-domain method, noise is treated as connected domains, and connected domains whose size falls below a threshold are removed, thereby removing the noise.
Illustratively, a connected-domain filtering function in Matlab may be used to remove small connected domains. Connected-domain operations act on white regions, so if the regions of interest in the image to be processed are black, a gray-inversion step must be added first; for a binary image this is img = 1 - img. After repeated tests, the threshold giving the best result is selected, connected domains below the threshold are removed, and the noise is thereby eliminated. A sketch of the same denoising in OpenCV follows.
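A minimal sketch of the opening-plus-connected-domain denoising (the kernel size and area threshold are assumed values found by testing, not taken from the original):

```python
import cv2
import numpy as np

# Erosion followed by dilation (morphological opening) removes small bright
# noise. Kernel size and the area threshold are assumed values found by test.
kernel = np.ones((3, 3), np.uint8)
clean = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)

# Connected-domain denoising: drop every white component below the threshold.
n, labels, stats, _ = cv2.connectedComponentsWithStats(clean)
for i in range(1, n):                   # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < 50:
        clean[labels == i] = 0
```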
In step 104, each object contour in the object contour set is detected based on at least one of color features, shape features and diffusivity features of the dust, and whether the dust is contained in each object contour is determined.
In some embodiments, step 104 may specifically include: extracting the region contour of each motion region in the object contour, and determining the minimum circumscribed rectangle of the region contour's vertical boundary together with the region contour's pixel area and barycentric coordinates; identifying dust in the region contour based on the aspect ratio and rectangularity of the minimum circumscribed rectangle to obtain a suspected dust region, where rectangularity is the ratio of the contour area to the area of the minimum circumscribed rectangle; and calculating the pixel gray-value range corresponding to dust from the dust color-feature detection rule, and identifying the suspected dust region based on that range. The actual dust region is then determined based on the diffusivity characteristic of the dust.
For example, since the diffusivity of dust manifests as a change in area, after the suspected dust region has been identified by the pixel gray-value range to obtain the first dust region, whether the first dust region is a real dust region can be determined from the rate of change of the area of the same region contour across adjacent video frames.
For example, the determining whether the first dust region is a real dust region based on the change rate of the area of the same region contour corresponding to the adjacent video frames may include: for any two objects, if the gravity center coordinates of the motion profiles meet a preset relation, determining that the two objects are the same object and counting the two objects; otherwise, adding two objects in a preset suspected dust list, and counting and timing the two objects. And if the number of the second objects in the suspected dust list exceeds the number threshold value and the increase rate of the area of the second objects reaches the increase rate threshold value, determining that the second objects are dust. And if the existence time of the first object in the suspected dust list exceeds a time threshold, eliminating the first object from the suspected dust list, wherein the first object is any one object in the suspected dust list.
For example, by extracting the contour of each motion region in the image, the minimum bounding rectangle of each region contour's vertical boundary, its pixel area and its barycentric coordinates are obtained. Because newly raised dust is almost circular, a preliminary dust identification is performed with the aspect ratio and rectangularity (contour area / circumscribed-rectangle area) of the minimum circumscribed rectangle, yielding a suspected dust region. The dust color-feature representation obtained through experiments, i.e., the dust color-feature detection rule, is then used to identify the suspected dust region further, with a calculated pixel gray-value range serving as the color feature. Finally, the growth rate of the motion region's area across consecutive video frames is checked, and the identification of dust is confirmed according to the diffusivity exhibited by the change in the moving target's area. A sketch of the shape and color checks follows.
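A minimal sketch of the shape-plus-color screening (all numeric bounds here, including the aspect-ratio range, rectangularity floor and gray-value range, are assumptions; the original derives them from experiments):

```python
import cv2

def is_suspected_dust(contour, gray_img,
                      ar_lo=0.5, ar_hi=2.0,   # aspect-ratio bounds: assumed
                      rect_min=0.6,           # rectangularity floor: assumed
                      g_lo=100, g_hi=200):    # dust gray-value range: assumed
    x, y, w, h = cv2.boundingRect(contour)    # minimum vertical bounding rect
    if w == 0 or h == 0:
        return False
    aspect = w / float(h)                     # near 1 for near-circular dust
    rectangularity = cv2.contourArea(contour) / float(w * h)
    if not (ar_lo <= aspect <= ar_hi and rectangularity >= rect_min):
        return False
    mean_gray = cv2.mean(gray_img[y:y + h, x:x + w])[0]
    return g_lo <= mean_gray <= g_hi          # color-feature check
```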
Referring to fig. 7, determining whether the image includes dust based on the diffusivity characteristics of the dust may include:
and establishing a suspected dust list, wherein the suspected dust list comprises a counter, a timer, barycentric coordinates, a pixel area and the like. And then, determining whether the two motion profiles of the suspected dust are the same object or not by judging whether the barycentric coordinates of the two motion profiles of the suspected dust are in a certain range or not. If the objects are not the same, adding the object in the suspected dust list, counting to 1 and timing; and if the objects are the same, updating the dust information, and adding 1 to the counter. If an object suspected of being dust exists in the suspected dust list for more than a certain time, the object is eliminated from the suspected dust list, based on the reason that the dust does not float in the air for too long. And if the counter of a certain suspected dust object in the suspected dust list exceeds a certain number and the area increase rate reaches a certain threshold value, determining that the suspected dust object is dust.
Referring to fig. 8, fig. 8 is an image judged to be dust by diffusivity characteristics, with experimental visual calibration.
According to the dust identification method, after the video stream of a scene is acquired by the camera, a foreground binary image can be obtained by identifying spatially moving objects; by identifying the contours of foreground objects and applying the characteristics of dust, the dust distribution of the scene is identified, enabling effective monitoring and treatment of the dust. The step of densely arranging a large number of dust detectors is omitted, the impact of installation work on production activities is avoided, installation complexity is reduced, and the cost of use and device maintenance is lowered.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 9 shows a block diagram of the dust identification device provided in the embodiment of the present application, corresponding to the dust identification method described in the above embodiment; for convenience of description, only the parts relevant to the embodiment of the present application are shown.
Referring to fig. 9, the dust recognition apparatus in the embodiment of the present application may include an image acquisition module 201, an image processing module 202, an object contour recognition module 203, and a dust recognition module 204.
The image obtaining module 201 is configured to obtain a video image of a target area. And the image processing module 202 is configured to process the video image to obtain a foreground binary image. And the object contour identification module 203 is configured to perform object contour detection on the foreground binary image to obtain an object contour set including a plurality of object contours. And the dust identification module 204 is configured to detect each object contour in the object contour set based on at least one of a color feature, a shape feature and a diffusivity feature of dust, and determine whether the object contour set includes a dust contour.
In some embodiments, referring to fig. 10, based on the embodiment shown in fig. 9, the dust identification module 204 may include a contour extraction unit 2041, a shape recognition unit 2042, and a color identification unit 2043.
A contour extraction unit 2041, configured to extract a region contour of each motion region in the object contour, determine a minimum bounding rectangle of a vertical boundary of the region contour, and determine pixel area and barycentric coordinates of the region contour.
A shape recognition unit 2042, configured to recognize dust in the area outline based on the aspect ratio of the minimum circumscribed rectangle and a squareness, so as to obtain a suspected dust area, where the squareness is a ratio of an area of the outline to the area of the minimum circumscribed rectangle;
the color identification unit 2043 is configured to calculate a pixel grayscale value range corresponding to the dust in the dust color feature detection rule, and identify the suspected dust area based on the pixel grayscale value range.
Optionally, the dust recognition module 204 may further include: a diffusion identification unit 2044, configured to determine whether the first dust region is a real dust region based on a rate of change of an area of a same region contour corresponding to adjacent video frames.
For example, the diffusion identification unit 2044 may specifically be configured to:
for any two objects, if the gravity center coordinates of the motion profiles meet a preset relation, determining that the two objects are the same object and counting the two objects; otherwise, adding the two objects in a preset suspected dust list, and counting and timing the two objects;
and if the number of the second objects in the suspected dust list exceeds a number threshold value and the increase rate of the area of the second objects reaches an increase rate threshold value, determining that the second objects are dust.
And if the existence time of the first object in the suspected dust list exceeds a time threshold, eliminating the first object from the suspected dust list, wherein the first object is any one object in the suspected dust list.
Optionally, the image processing module 202 is specifically configured to: defogging the video image to obtain a gray image; and carrying out background modeling on the gray level image, and extracting a foreground moving object by a background difference method to obtain the foreground binary image.
Illustratively, the defogging processing on the video image to obtain the grayscale image includes:
calculating a gray level histogram of the video image by channels to obtain a maximum gray level value and a minimum gray level value, wherein the maximum gray level value and the minimum gray level value form an extreme value range;
dividing the video image into a plurality of image blocks;
for a first image block with a gray value outside the extreme value range, assigning the gray value of the first image block to be 0 or 255; for a second image block with the gray value within the extreme value range, linearly transforming the gray value of the second image block to 0-255 to obtain a gray image;
wherein the coefficient of the linear transformation (its expression is rendered only as a figure in the original) depends on k', a constant; δ₁², the intra-block gray variance of the second image block; and δ₂², the image-noise variance of the second image block.
Optionally, the dust recognition device may further include: and the denoising module is used for carrying out corrosion and expansion processing on the foreground binary image and removing image noise points in the foreground binary image. And after the denoising module removes the image noise in the foreground binary image, the object contour identification module carries out object contour detection on the foreground binary image after the image noise is removed.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, and referring to fig. 11, the terminal device 500 may include: at least one processor 510, a memory 520, and a computer program stored in the memory 520 and executable on the at least one processor 510, the processor 510, when executing the computer program, implementing the steps of any of the various method embodiments described above, such as the steps 101 to 104 in the embodiment shown in fig. 2. Alternatively, the processor 510, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 201 to 204 shown in fig. 9.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 520 and executed by the processor 510 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 500.
Those skilled in the art will appreciate that fig. 11 is merely an example of a terminal device and does not constitute a limitation; the terminal device may include more or fewer components than shown, combine certain components, or include different components, such as input/output devices, network access devices, buses, etc.
The Processor 510 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 520 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 520 is used for storing the computer programs and other programs and data required by the terminal device. The memory 520 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The dust identification method provided by the embodiment of the application can be applied to terminal devices such as industrial robots, computers, wearable devices, vehicle-mounted devices, tablet computers, notebook computers, netbooks, Personal Digital Assistants (PDAs), Augmented Reality (AR)/Virtual Reality (VR) devices and mobile phones, and the embodiment of the application does not limit the specific types of the terminal devices at all.
The embodiment of the application also provides a computer-readable storage medium, which stores a computer program, and the computer program is executed by a processor to implement the steps in the various embodiments of the dust identification method.
The embodiment of the application provides a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the embodiments of the dust identification method when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A dust identification method, comprising:
acquiring a video image of a target area;
processing the video image to obtain a foreground binary image;
carrying out object contour detection on the foreground binary image to obtain an object contour set containing a plurality of object contours;
and detecting each object contour in the object contour set based on at least one of color features, shape features and diffusivity features of the dust, and determining whether each object contour contains the dust.
2. The dust identification method of claim 1, wherein detecting each object contour in the set of object contours based on at least one of a color feature, a shape feature and a diffusivity feature of dust to determine whether each object contour contains dust comprises:
extracting a region contour of each motion region in the object contour, and determining a minimum bounding rectangle of a vertical boundary of the region contour, and the pixel area and barycentric coordinates of the region contour;
identifying dust in the region contour based on the aspect ratio and rectangularity of the minimum bounding rectangle to obtain a suspected dust region, wherein the rectangularity is the ratio of the contour area to the area of the minimum bounding rectangle;
and calculating a pixel gray-value range corresponding to dust from the dust color-feature detection rule, and identifying the suspected dust region based on the pixel gray-value range.
3. The dust identification method of claim 2, wherein a first dust region is obtained by identifying the suspected dust region based on the pixel gray value range, and wherein, after the identifying of the suspected dust region based on the pixel gray value range, the method further comprises:
and determining whether the first dust region is a real dust region based on the rate of change of the area of the same region contour in adjacent video frames.
4. The dust identification method of claim 3, wherein the determining whether the first dust region is a real dust region based on the rate of change of the area of the same region contour in adjacent video frames comprises:
for any two objects in adjacent video frames, if the barycentric coordinates of their motion contours satisfy a preset relation, determining that the two objects are the same object and updating the count of that object; otherwise, adding the two objects to a preset suspected dust list and starting a count and a timer for them;
if the count of a second object in the suspected dust list exceeds a count threshold and the area growth rate of the second object reaches a growth-rate threshold, determining that the second object is dust;
and if the existence time of a first object in the suspected dust list exceeds a time threshold, removing the first object from the suspected dust list, wherein the first object is any object in the suspected dust list.
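A bookkeeping sketch for claim 4; the distance relation, the thresholds, and the time-to-live are hypothetical parameters chosen for illustration:

    import math
    import time

    class SuspectedDustList:
        def __init__(self, dist_thresh=20.0, count_thresh=5,
                     growth_thresh=1.1, ttl=3.0):
            self.dist_thresh = dist_thresh      # preset barycentric-distance relation
            self.count_thresh = count_thresh    # count threshold
            self.growth_thresh = growth_thresh  # area growth-rate threshold
            self.ttl = ttl                      # existence-time threshold (seconds)
            self.entries = []                   # each: {"centroid", "area", "count", "t0"}

        def update(self, centroid, area, now=None):
            """Returns True when an object is determined to be dust."""
            now = time.monotonic() if now is None else now
            # Remove any object whose existence time exceeds the time threshold
            self.entries = [e for e in self.entries if now - e["t0"] <= self.ttl]
            for e in self.entries:
                if math.dist(centroid, e["centroid"]) < self.dist_thresh:
                    # Same object: update its count and check the area growth rate
                    growth = area / e["area"] if e["area"] > 0 else 1.0
                    e["centroid"], e["area"], e["count"] = centroid, area, e["count"] + 1
                    return e["count"] >= self.count_thresh and growth >= self.growth_thresh
            # New object: add it to the suspected dust list, start its count and timer
            self.entries.append({"centroid": centroid, "area": area, "count": 1, "t0": now})
            return False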
5. The dust identification method of claim 1, wherein the processing of the video image to obtain a foreground binary image comprises:
defogging the video image to obtain a gray image;
and carrying out background modeling on the gray image, and extracting foreground moving objects by a background difference method to obtain the foreground binary image.
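One common realization of claim 5, sketched with OpenCV's Gaussian-mixture background model; MOG2, the parameter values, and the video source are assumptions, and the defogging step of claim 6 would precede the gray conversion shown here:

    import cv2

    cap = cv2.VideoCapture("site_camera.mp4")   # hypothetical video source
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                            detectShadows=False)
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # gray image (defogged in practice)
        fg = bg.apply(gray)                             # background difference
        _, fg_binary = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)
        ok, frame = cap.read()
    cap.release()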
6. The dust identification method of claim 5, wherein the defogging of the video image to obtain a gray image comprises:
calculating a gray level histogram of the video image channel by channel to obtain a maximum gray value and a minimum gray value, the maximum gray value and the minimum gray value defining an extreme value range;
dividing the video image into a plurality of image blocks;
for a first image block whose gray value lies outside the extreme value range, assigning the gray value of the first image block to 0 or 255; and for a second image block whose gray value lies within the extreme value range, linearly transforming the gray value of the second image block to the range 0 to 255 to obtain the gray image;
wherein the coefficient of the linear transformation is k = k′·δ₁²/δ₂², where k′ is a constant, δ₁² is the intra-block gray variance of the second image block, and δ₂² is the image noise variance of the second image block.
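A sketch of the block-wise defogging transform of claim 6, assuming the coefficient form reconstructed above (the original filing gives the coefficient only as an embedded image); the block size, percentile bounds, and noise-variance estimate are illustrative assumptions:

    import numpy as np

    def blockwise_defog(gray, block=32, k_prime=1.0, noise_var=25.0):
        out = np.zeros_like(gray, dtype=np.float32)
        # Extreme value range from the gray histogram (percentiles as a proxy)
        lo, hi = np.percentile(gray, [1, 99])
        for y in range(0, gray.shape[0], block):
            for x in range(0, gray.shape[1], block):
                b = gray[y:y + block, x:x + block].astype(np.float32)
                mean = b.mean()
                if mean < lo:                   # first image block, below the range
                    out[y:y + block, x:x + block] = 0
                elif mean > hi:                 # first image block, above the range
                    out[y:y + block, x:x + block] = 255
                else:                           # second image block: linear stretch
                    k = k_prime * b.var() / noise_var  # assumed coefficient k'*delta1^2/delta2^2
                    out[y:y + block, x:x + block] = np.clip(k * (b - mean) + mean, 0, 255)
        return out.astype(np.uint8)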
7. The dust identification method of claim 1, wherein, before the object contour detection on the foreground binary image, the method further comprises:
and carrying out erosion and dilation on the foreground binary image to remove image noise from the foreground binary image.
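Claim 7's noise cleanup is standard morphology; a minimal sketch, applied to the fg_binary image from the earlier sketch (the 3x3 kernel and single iteration are assumptions):

    import cv2
    import numpy as np

    kernel = np.ones((3, 3), np.uint8)
    # Erosion removes isolated noise pixels; dilation then restores the surviving
    # object contours to roughly their original extent (a morphological opening)
    eroded = cv2.erode(fg_binary, kernel, iterations=1)
    denoised = cv2.dilate(eroded, kernel, iterations=1)
    # Equivalent one-liner: cv2.morphologyEx(fg_binary, cv2.MORPH_OPEN, kernel)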
8. A dust identification device, comprising:
the acquisition module is used for acquiring a video image of a target area;
the processing module is used for processing the video image to obtain a foreground binary image;
the object contour identification module is used for carrying out object contour detection on the foreground binary image to obtain an object contour set containing a plurality of object contours;
and the dust identification module is used for detecting each object contour in the object contour set based on at least one of a color feature, a shape feature and a diffusivity feature of dust, and determining whether each object contour contains dust.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202111263432.8A 2021-10-28 2021-10-28 Dust identification method and device and terminal equipment Pending CN114005058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111263432.8A CN114005058A (en) 2021-10-28 2021-10-28 Dust identification method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN114005058A (en) 2022-02-01

Family

ID=79924910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111263432.8A Pending CN114005058A (en) 2021-10-28 2021-10-28 Dust identification method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN114005058A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114838670A (en) * 2022-03-11 2022-08-02 南京北新智能科技有限公司 Photovoltaic panel dust detection method based on color analysis
CN115487959A (en) * 2022-11-16 2022-12-20 山东济矿鲁能煤电股份有限公司阳城煤矿 Intelligent spraying control method for coal mine drilling machine
CN115487959B (en) * 2022-11-16 2023-02-17 山东济矿鲁能煤电股份有限公司阳城煤矿 Intelligent spraying control method for coal mine drilling machine
CN115496759A (en) * 2022-11-17 2022-12-20 歌尔股份有限公司 Dust detection method and device and storage medium
CN115797343A (en) * 2023-02-06 2023-03-14 山东大佳机械有限公司 Livestock and poultry breeding environment video monitoring method based on image data
CN115797343B (en) * 2023-02-06 2023-04-21 山东大佳机械有限公司 Livestock and poultry breeding environment video monitoring method based on image data
CN116258715A (en) * 2023-05-15 2023-06-13 北京中超伟业信息安全技术股份有限公司 Dust recycling method and device and electronic equipment
CN116645363A (en) * 2023-07-17 2023-08-25 山东富鹏生物科技有限公司 Vision-based starch production quality real-time detection method
CN116645363B (en) * 2023-07-17 2023-10-13 山东富鹏生物科技有限公司 Vision-based starch production quality real-time detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination