CN113989251B - Intelligent control system and method for separating coal and gangue for mine - Google Patents


Info

Publication number
CN113989251B
CN113989251B (application CN202111289304.0A)
Authority
CN
China
Prior art keywords
image
gangue
target
range
maximum
Prior art date
Legal status
Active
Application number
CN202111289304.0A
Other languages
Chinese (zh)
Other versions
CN113989251A (en)
Inventor
刘素华
梁焱
宋向东
余伟凡
郭坤闪
刘景淼
王文艳
李愿彪
胡琴丽
张涛
Current Assignee
Henan Zhongping Automation Ltd By Share Ltd
Original Assignee
Henan Zhongping Automation Ltd By Share Ltd
Priority date
Filing date
Publication date
Application filed by Henan Zhongping Automation Ltd By Share Ltd
Priority to CN202111289304.0A
Publication of CN113989251A
Application granted
Publication of CN113989251B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 Sorting according to other particular properties
    • B07C5/342 Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3425 Sorting according to other particular properties according to optical properties, e.g. colour of granular material, e.g. ore particles, grain
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36 Sorting apparatus characterised by the means used for distribution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence, and in particular to an intelligent control system and method for sorting coal and gangue in mines. The system comprises: an image acquisition module for acquiring a first image of the conveyor-belt surface with a first camera located at a first position; and an image processing module for identifying coal and gangue in the first image with a semantic segmentation network. The loss function used to train the semantic segmentation network is constructed as follows: acquire a second image of the conveyor-belt surface with a second camera located at a second position, the similarity between the second image and the first image being greater than a preset similarity threshold; obtain the maximum movement range, in the second image, of the centroid of each target in the first image at the time the second image is acquired, a target being a piece of coal or gangue; and construct the loss function based on whether the centroid of each target in the second image falls within that maximum movement range. The trained semantic segmentation network has high segmentation accuracy, strong generalization ability, and low requirements on operating conditions.

Description

Intelligent control system and method for separating coal and gangue for mine
Technical Field
The invention relates to the field of artificial intelligence, and in particular to an intelligent control system and method for sorting coal and gangue in mines.
Background
Intelligent, accurate identification and positioning of coal and gangue is the prerequisite for intelligent sorting control. At present, manual sorting is the usual practice, but it wastes considerable human labour. Various schemes have therefore been proposed to realize intelligent sorting through image processing or deep learning; however, conventional image-processing and deep-learning approaches usually distinguish coal from gangue by contour or gray-scale features, which generalize poorly and are strongly affected by illumination, so their accuracy in practical applications is low.
Disclosure of Invention
To solve the above technical problems, the invention aims to provide an intelligent control system and method for sorting coal and gangue for mines. The adopted technical scheme is as follows:
in a first aspect, an embodiment of the invention provides an intelligent control system for coal gangue sorting in a mine, which includes:
the image acquisition module is used for acquiring a first image of the surface of the conveyor belt by using a first camera positioned at a first position;
the image processing module is used for identifying coal and gangue in the first image by utilizing a semantic segmentation network;
the sorting control module is used for positioning the identified gangue so that the sorting device can realize the sorting of the gangue based on the positioning information;
the method comprises the following steps of training a semantic segmentation network, wherein the construction process of a loss function used in training the semantic segmentation network specifically comprises the following steps:
acquiring a second image of the surface of the conveyor belt based on a second camera located at a second position, wherein the similarity between the second image and the first image is greater than a preset similarity threshold value;
acquiring the maximum movement range, in the second image, of the centroid of each target in the first image at the time the second image is acquired, a target being a piece of coal or gangue;
constructing the loss function based on whether the centroid of each target in the second image is within the maximum movement range.
Further, when the second image is acquired, the obtaining of the maximum moving range of the centroid of each target in the first image in the second image specifically includes:
acquiring two-channel target description images according to the gray value and the class value of the pixels in the region where the target is located; the class value of a pixel is obtained from the output of the semantic segmentation network;
generating a unit maximum movement range of the target centroid in the second image within a time interval based on the target description image; the time interval is an interval for acquiring the first image or the second image;
expanding the range starting from the unit maximum movement range, where the number of range expansions is obtained from the acquisition-time difference between the first image and the second image and the time interval.
Further, the obtaining of the unit maximum movement range specifically includes:
processing the target description image with a range prediction network and outputting a unit thermodynamic diagram, wherein a unit Gaussian hot spot in the unit thermodynamic diagram represents the unit maximum movement range, and the heat value of a pixel within the unit Gaussian hot spot represents the probability that the target is at that pixel position after moving with the conveyor belt for one time interval from the first position.
Further, the range expansion is hot-spot expansion, where one hot-spot expansion specifically comprises:
generating a unit Gaussian hot spot with each point of the hot spot obtained from the previous expansion as its center point, and fusing all the unit Gaussian hot spots to obtain the hot spot after the current expansion.
Further, based on the number of range expansions, the maximum hot spot is obtained after hot-spot expansion is completed; the maximum hot spot represents the maximum movement range.
Further, the loss function is specifically: L = Σ_{n=1}^{N} (−log H_n), where H_n represents the heat value, within the maximum hot spot corresponding to the n-th target, at the position of that target's centroid in the second image; the closer H_n is to 0, the smaller the probability that the centroid of the n-th target lies within the maximum hot spot; N is the number of targets in the second image.
In a second aspect, another embodiment of the invention provides an intelligent control method for sorting coal and gangue for mines, which specifically comprises the following steps:
capturing a first image of the conveyor belt surface with a first camera located at a first position; identifying coal and gangue in the first image by using a semantic segmentation network; positioning the identified gangue so that the sorting device can realize the sorting of the gangue based on the positioning information;
the method comprises the following steps of training a semantic segmentation network, wherein the construction process of a loss function used in training the semantic segmentation network specifically comprises the following steps:
acquiring a second image of the surface of the conveyor belt based on a second camera located at a second position, wherein the similarity between the second image and the first image is greater than a preset similarity threshold value;
acquiring the maximum movement range, in the second image, of the centroid of each target in the first image at the time the second image is acquired, a target being a piece of coal or gangue;
constructing the loss function based on whether the centroid of each target in the second image is within the maximum movement range.
The embodiments of the invention at least have the following beneficial effects: training the semantic segmentation network with the proposed loss function reduces the interference of external factors such as illumination on the segmentation result, improves the network's segmentation accuracy, strengthens its generalization ability, and lowers the requirements on operating conditions.
Detailed Description
To further illustrate the technical means and effects adopted by the invention to achieve its intended purpose, the specific implementation, structure, features, and effects of the proposed intelligent control system and method for sorting coal and gangue in mines are described in detail below in combination with the preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following application scenarios are taken as examples to illustrate the present invention:
the application scene is as follows: this coal gangue selects separately intelligent control scene is including spreading out hardware devices such as coal device, conveyer belt and sorting unit, places the coal charge in the conveyer belt initiating terminal, utilizes to spread out the coal device and opens the coal charge for the individual layer on the conveyer belt, and the coal charge that has spread out for the individual layer passes through the conveyer belt and transmits to sorting unit department, and sorting unit is used for separating waste rock and coal, and the implementer can adopt modes such as beating ware, manipulator to realize selecting separately.
One embodiment of the invention provides a mining coal gangue sorting intelligent control system, which comprises:
The image acquisition module acquires a first image of the conveyor-belt surface with a first camera at a first position and a second image with a second camera at a second position. The first camera is near the starting end of the conveyor belt and the second camera is near the sorting device. Preferably, in this embodiment both cameras have known poses and are deployed above the conveyor belt so as to capture top-view images of the belt surface simultaneously. The conveying speed of the belt and the pose changes of the coal and gangue are determined from the first and second images.
The image processing module identifies coal and gangue in the first image with a semantic segmentation network. Specifically, the network segments the first image into coal pixels, gangue pixels, and background pixels to obtain a first segmentation map; preferably, in this embodiment the values of coal, gangue, and background pixels in the first segmentation map are 1, 2, and 0 respectively. When the training set is incomplete because of illumination, the intrinsic properties of coal and gangue, and other factors in the actual scene, the semantic classes of coal and gangue pixels in the segmentation result become inaccurate. The usual remedy is to collect enough images under various working conditions so that the network learns sufficient features; however, this requires extensive preparatory work and the generalization ability is still limited. The invention therefore updates the network parameters by online training. Specifically, the loss function used when training the semantic segmentation network is constructed as follows:
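The class-value convention above (background = 0, coal = 1, gangue = 2) can be sketched as follows. This is a minimal illustrative assumption of how the network's per-pixel scores would be turned into the first segmentation map; the function name and toy arrays are not from the patent:

```python
import numpy as np

BACKGROUND, COAL, GANGUE = 0, 1, 2  # pixel values in the first segmentation map

def logits_to_segmentation(logits: np.ndarray) -> np.ndarray:
    """logits: (H, W, 3) per-pixel scores ordered (background, coal, gangue).
    Returns an (H, W) segmentation map whose values are the class codes 0/1/2."""
    return np.argmax(logits, axis=-1).astype(np.uint8)

# toy check: a 2x2 image where each pixel favours a different class
logits = np.array([[[5., 1., 1.], [1., 5., 1.]],
                   [[1., 1., 5.], [5., 1., 1.]]])
seg = logits_to_segmentation(logits)
```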
(a) Acquire a second image of the conveyor-belt surface with the second camera located at the second position, the similarity between the second image and the first image being greater than a preset similarity threshold. Specifically, the second camera acquires multiple frames of second images, and the frame whose similarity with the first image exceeds the preset threshold is selected for subsequent processing, as follows:
(i) Extract the targets contained in the first image based on the first segmentation map and obtain a target image for each target: obtain a mask for each target from the first segmentation map; the mask has the same size as the first segmentation map, with value 1 for pixels in the target region and 0 elsewhere. Multiplying the mask with the first image extracts the target: in the resulting target image, the pixel values in the target region are unchanged and all other pixels are 0.
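The mask-and-multiply extraction just described can be sketched as follows (an assumption of one straightforward implementation; the helper name and toy arrays are illustrative):

```python
import numpy as np

def extract_target_image(first_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """first_image: (H, W) grayscale image; mask: (H, W) of 0/1 for one target.
    Pixels inside the target region keep their gray value; all others become 0."""
    return first_image * mask

gray = np.array([[10, 20], [30, 40]], dtype=np.uint8)
mask = np.array([[1, 0], [1, 0]], dtype=np.uint8)   # one target occupies column 0
target_img = extract_target_image(gray, mask)
```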
Obtain a target description image for each target in the first image: from the target image, obtain the gray value and the class value of each pixel in the target region, where the class value is the value of the corresponding pixel in the first segmentation map. A two-channel target description image is then built for the target: the first channel holds the pixels' gray values and the second channel holds their class values.
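A sketch of the two-channel target description image (an assumed layout consistent with the text: channel 0 = gray values, channel 1 = class values, zeros outside the target):

```python
import numpy as np

def build_description_image(gray_img, seg_map, mask):
    """Stack the gray values and class values of one target's pixels into a
    two-channel description image; pixels outside the target are 0 in both."""
    gray_channel = gray_img * mask    # gray value inside the target region
    class_channel = seg_map * mask    # class value (1 = coal, 2 = gangue) inside
    return np.stack([gray_channel, class_channel], axis=-1)

gray = np.array([[10, 20], [30, 40]], dtype=np.uint8)
seg = np.array([[2, 1], [2, 0]], dtype=np.uint8)    # 2 = gangue
mask = np.array([[1, 0], [1, 0]], dtype=np.uint8)   # one gangue target
desc = build_description_image(gray, seg, mask)
```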
(ii) For each frame of the second image, obtain a target description image of each target in that frame: segment the second image with the semantic segmentation network to obtain a second segmentation map, then obtain the target description images following the procedure of step (i).
(iii) For the first image and any frame of the second image, obtain the corresponding target description images, giving a first target description image set and a second target description image set. Match the images in the two sets with the KM algorithm, using as the weight between two description images the reciprocal of their Euclidean distance; after matching, compute the matching degree as the mean of these reciprocals over all matched pairs. Match the first set against the second set of every frame of the second image, obtaining one matching degree per frame; the frame with the highest matching degree is the finally selected second image whose similarity with the first image exceeds the preset threshold.
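The KM (Kuhn–Munkres) matching with reciprocal-distance weights can be sketched with SciPy's Hungarian-algorithm implementation. This is an illustrative assumption of the step, not the patent's code; the helper name, the `eps` guard against division by zero, and the toy descriptors are all made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_degree(set_a, set_b, eps=1e-6):
    """set_a, set_b: lists of target description images (same shape).
    Edge weight = 1 / Euclidean distance between the two flattened images;
    the matching degree is the mean weight over the optimal assignment."""
    w = np.zeros((len(set_a), len(set_b)))
    for i, a in enumerate(set_a):
        for j, b in enumerate(set_b):
            w[i, j] = 1.0 / (np.linalg.norm(np.ravel(a) - np.ravel(b)) + eps)
    rows, cols = linear_sum_assignment(w, maximize=True)  # KM / Hungarian step
    return w[rows, cols].mean()
```

The frame of the second image with the highest `match_degree` against the first image's descriptor set would then be selected.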
(b) From the above, the first image and the second image selected in step (a) are images of the conveyor-belt surface acquired by the first camera at the first position at a first time and by the second camera at the second position at a second time; i.e., the targets in the first image and in the selected second image are the same. The loss function is constructed from the first image and the selected second image whose similarity with the first image exceeds the preset threshold, as follows:
(i) acquiring the maximum moving range of the centroid of each target in the first image in the second image when the second image is acquired; the target is coal or gangue; specifically, the method comprises the following steps:
Obtain two-channel target description images from the gray values and class values of the pixels in the region where each target is located, the class value coming from the output of the semantic segmentation network. From the target description image, generate the unit maximum movement range of the target centroid in the second image within one time interval, the time interval being the interval at which the camera captures the first or second images. Expand the range starting from the unit maximum movement range; the number of range expansions is obtained from the time difference between the first time and the second time and the time interval, i.e., it equals the number of frames captured by the first camera (or, equivalently, the second camera) between the first time and the second time.
The unit maximum movement range is obtained as follows: process the target description image with a range prediction network and output a unit thermodynamic diagram. A unit Gaussian hot spot in the unit thermodynamic diagram represents the unit maximum movement range; the image position of the target centroid in the first image is the center point of the unit Gaussian hot spot, and the heat value of a pixel within the hot spot represents the probability that the target is at that pixel position after moving with the conveyor belt for one time interval from the first position. Preferably, the unit Gaussian hot spot is circular or elliptical.
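A circular unit Gaussian hot spot of this kind can be sketched directly; the function name and the spread parameter `sigma` are illustrative assumptions (in the patent the hot spot is predicted by the range prediction network rather than computed analytically):

```python
import numpy as np

def unit_gaussian_hotspot(shape, center, sigma=2.0):
    """Circular unit Gaussian hot spot: heat value exp(-d^2 / (2*sigma^2))
    around `center` (row, col); the peak value 1 sits at the target centroid."""
    rows, cols = np.indices(shape)
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

h = unit_gaussian_hotspot((9, 9), (4, 4), sigma=2.0)
```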
Preferably, the range prediction network comprises a range prediction encoder and a range prediction decoder. The training sample data and label data for training the range prediction network are obtained as follows. Target description images of a number of targets serve as training sample data. The invention assumes by default that, after a target has moved with the conveyor belt for one time interval from the first position, the probability distribution of its centroid over positions within the unit maximum movement range is a two-dimensional Gaussian distribution. Label data are therefore obtained as follows: for the target corresponding to each training sample, place it at the starting position on the conveyor belt and begin conveying, acquire the first and second images during conveyance, and repeat the experiment many times; based on the centroid positions in the second images, count the probability of the centroid appearing at each image position after one time interval to obtain a probability distribution map, and take the two-dimensional Gaussian distribution closest to this map as the label data for that training sample.
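The "closest two-dimensional Gaussian distribution" can be approximated by the maximum-likelihood fit, i.e. the sample mean and covariance of the observed centroid positions. This sketch is an assumption of one reasonable way to produce the label parameters; the patent does not specify the fitting method:

```python
import numpy as np

def fit_label_gaussian(centroid_samples: np.ndarray):
    """centroid_samples: (K, 2) centroid positions (row, col) observed over
    repeated conveying experiments. Returns the mean and covariance of the
    maximum-likelihood 2-D Gaussian fit, usable as label-heat-map parameters."""
    mu = centroid_samples.mean(axis=0)
    cov = np.cov(centroid_samples, rowvar=False)  # sample covariance (ddof=1)
    return mu, cov

samples = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
mu, cov = fit_label_gaussian(samples)
```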
It should be noted that the prior here is that coal and gangue, as they travel on the conveyor belt, deviate in position because of their own vibration or the belt's vibration, and that coal and gangue of the same shape follow different two-dimensional Gaussian distributions because their densities differ. Therefore the maximum movement range obtained in the invention, for each target centroid of the first image at the moment the second image is acquired, is the maximum movement range of that centroid during its travel from the first position to the second position.
The range expansion is hot-spot expansion, and one hot-spot expansion proceeds as follows: generate a unit Gaussian hot spot with each point of the hot spot obtained from the previous expansion as its center point; the unit Gaussian hot spot generated at a point is the unit maximum movement range of the target centroid when the centroid is located at that point. All unit Gaussian hot spots are then fused to obtain the hot spot after the current expansion. After the number of expansions determined above has been completed, the maximum hot spot is obtained; the maximum hot spot represents the maximum movement range. When fusing all unit Gaussian hot spots, heat values are combined as follows: a unit Gaussian hot spot is generated around each point of the previous hot spot, and in regions where hot spots overlap, each pixel's value is the product of its heat value in the previous hot spot and its value in the newly generated unit Gaussian hot spot. If a pixel in the overlap region is covered by several newly generated unit Gaussian hot spots, one product is formed per covering hot spot, and the sum of these products is the fused heat value of that pixel. The heat value of a pixel outside any overlap region is its value in the corresponding unit Gaussian hot spot. It should also be noted that the heat values of the hot spot obtained after each expansion need to be normalized.
It should be noted that, when the hot spot is expanded for the first time, a new unit Gaussian hot spot is generated around each point of the unit Gaussian hot spot in the unit thermodynamic diagram output by the range prediction network, and all unit Gaussian hot spots are fused to obtain the hot spot after the first expansion.
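For a translation-invariant unit Gaussian hot spot, the product-and-sum fusion rule described above amounts to weighting each seeded unit hot spot by its seed point's heat value and summing the contributions, i.e. a convolution of the current hot spot with the unit Gaussian kernel, followed by renormalization. The following is a sketch under that assumption (the function name, `sigma`, and `radius` are illustrative):

```python
import numpy as np

def expand_hotspot(hotspot: np.ndarray, sigma=1.0, radius=3):
    """One hot-spot expansion: every point of the current hot spot seeds a unit
    Gaussian hot spot scaled by its own heat value; overlapping contributions
    are summed (the product-and-sum fusion rule), then the result is
    renormalized so the peak heat value is 1."""
    size = 2 * radius + 1
    r, c = np.indices((size, size)) - radius
    kernel = np.exp(-(r ** 2 + c ** 2) / (2.0 * sigma ** 2))  # unit Gaussian
    h, w = hotspot.shape
    padded = np.pad(hotspot, radius)
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):  # sum of (seed heat value x kernel value) per pixel
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out / out.max()

hs = np.zeros((11, 11)); hs[5, 5] = 1.0   # initial hot spot: single centroid point
expanded = expand_hotspot(hs, sigma=1.0, radius=3)
```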
(ii) Construct the loss function according to whether the centroid of each target in the second image is within the maximum movement range: L = Σ_{n=1}^{N} (−log H_n), where H_n represents the heat value, within the maximum hot spot corresponding to the n-th target, at the position of that target's centroid in the second image. The closer H_n is to 0, the smaller the probability that the centroid of the n-th target lies within the maximum hot spot; −log H_n is then extremely large, indicating a classification error by the semantic segmentation network for that target. N is the number of targets in the second image.
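A direct sketch of the loss L = Σ_{n=1}^{N} (−log H_n); the `eps` floor that keeps the logarithm finite when H_n = 0 is an added assumption, not part of the patent text:

```python
import numpy as np

def hotspot_loss(max_hotspots, centroids, eps=1e-8):
    """max_hotspots: list of (H, W) maximum-hot-spot maps, one per target;
    centroids: list of (row, col) centroid positions in the second image.
    L = sum_n -log(H_n): near-zero heat at a centroid (centroid outside the
    expected movement range) produces a very large loss term."""
    loss = 0.0
    for heat, (r, c) in zip(max_hotspots, centroids):
        loss += -np.log(max(float(heat[r, c]), eps))
    return loss
```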
The sorting control module locates the identified gangue so that the sorting device can sort it based on the positioning information. Specifically, the conveying speed of the belt is obtained from the actual distance between the first and second cameras and the acquisition-time difference between the first and second images; the position of the gangue is obtained from the second image; and a sorting control instruction is generated from the gangue position, the belt speed, and the distance between the second camera and the sorting device. The instruction is transmitted to the sorting device, which is controlled to remove the gangue from the coal, completing the coal-gangue sorting.
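The timing arithmetic behind the sorting instruction can be sketched as follows; all parameter names and the example numbers are illustrative assumptions:

```python
def sorting_delay(camera_gap_m, capture_dt_s, gangue_to_actuator_m):
    """Belt speed = camera spacing / capture-time difference between the first
    and second images; the returned value is how long after the second image
    the sorting actuator should fire to eject the located gangue."""
    belt_speed = camera_gap_m / capture_dt_s          # metres per second
    return gangue_to_actuator_m / belt_speed          # seconds until ejection

# example: cameras 2 m apart, images 1 s apart, gangue 3 m from the actuator
delay = sorting_delay(camera_gap_m=2.0, capture_dt_s=1.0, gangue_to_actuator_m=3.0)
```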
Based on the same inventive concept as the system embodiment, one embodiment of the invention provides an intelligent control method for mine gangue sorting, which comprises the following steps:
capturing a first image of the conveyor belt surface with a first camera located at a first position; identifying coal and gangue in the first image by using a semantic segmentation network; positioning the identified gangue so that the sorting device can realize the sorting of the gangue based on the positioning information;
the method comprises the following steps of training a semantic segmentation network, wherein the construction process of a loss function used in training the semantic segmentation network specifically comprises the following steps:
acquiring a second image of the surface of the conveyor belt based on a second camera located at a second position, wherein the similarity between the second image and the first image is greater than a preset similarity threshold value;
acquiring the maximum movement range, in the second image, of the centroid of each target in the first image at the time the second image is acquired, a target being a piece of coal or gangue;
constructing the loss function based on whether the centroid of each target in the second image is within the maximum movement range.
It should be noted that the order of the above embodiments is for description only and does not indicate their relative merit. Specific embodiments have been described above; other embodiments fall within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. An intelligent control system for sorting coal and gangue for mines, characterized in that the system comprises:
an image acquisition module, configured to acquire a first image of the conveyor belt surface with a first camera located at a first position;
an image processing module, configured to identify coal and gangue in the first image using a semantic segmentation network;
a sorting control module, configured to position the identified gangue so that a sorting device can sort out the gangue based on the positioning information;
wherein the system trains the semantic segmentation network, and the loss function used in training the network is constructed as follows:
acquiring a second image of the conveyor belt surface with a second camera located at a second position, wherein the similarity between the second image and the first image is greater than a preset similarity threshold;
acquiring the maximum moving range, within the second image, of the centroid of each target in the first image at the time the second image is acquired, where each target is coal or gangue;
constructing the loss function based on whether the centroid of each target in the second image lies within its maximum moving range.
2. The system of claim 1, wherein the maximum moving range, within the second image, of the centroid of each target in the first image at the time the second image is acquired is obtained by:
acquiring a two-channel target description image from the gray values and category values of the pixels in the region where the target is located, wherein the first channel holds the gray value of each pixel and the second channel holds its category value, the category value of a pixel being obtained from the output of the semantic segmentation network;
generating, from the target description image, the unit maximum moving range of the target centroid in the second image within one time interval, the time interval being the interval at which the first image or the second image is acquired; and
expanding the range starting from the unit maximum moving range, the number of range expansions being obtained from the acquisition-time difference between the first image and the second image and the time interval.
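As an illustration of the two operations in this claim, the two-channel target description image and the expansion count could be formed as below. This is a sketch under stated assumptions: zeroing pixels outside the target region and dividing the time difference by the interval are our reading, and all names are hypothetical.

```python
import numpy as np

def target_description(gray, category, target_mask):
    """Two-channel target description image: channel 0 holds gray
    values, channel 1 the category values output by the semantic
    segmentation network; pixels outside the target region are zeroed."""
    desc = np.zeros(gray.shape + (2,), dtype=np.float32)
    desc[..., 0] = np.where(target_mask, gray, 0)
    desc[..., 1] = np.where(target_mask, category, 0)
    return desc

def num_expansions(t_first, t_second, interval):
    """Number of unit-range expansions: the acquisition-time difference
    between the two images divided by the frame interval."""
    return int(round((t_second - t_first) / interval))
```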
3. The system of claim 2, wherein the unit maximum moving range is obtained by:
processing the target description image with a range prediction network and outputting a unit heat map, wherein a unit Gaussian hot spot in the unit heat map represents the unit maximum moving range, and the heat value of a pixel within the unit Gaussian hot spot represents the probability that the target is at that pixel position after moving with the conveyor belt from the first position for one time interval.
4. The system of claim 3, wherein the range expansion is a hot-spot expansion, and one hot-spot expansion specifically comprises:
generating a unit Gaussian hot spot centered at each point of the hot spot obtained from the previous expansion, and fusing all of the unit Gaussian hot spots to obtain the hot spot after the current expansion.
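This expansion step reads like a grayscale dilation with a Gaussian structuring element: every point of the current hot spot seeds a unit Gaussian, and the seeded Gaussians are fused. The sketch below fuses by elementwise maximum and scales each seed by its heat value; both choices, along with `sigma` and the threshold, are our assumptions, not the patent's specification.

```python
import numpy as np

def unit_gaussian(shape, center, sigma=2.0):
    """Unit Gaussian hot spot with peak value 1 at `center`."""
    ys, xs = np.indices(shape)
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def expand_hotspot(hotspot, sigma=2.0, thresh=1e-3):
    """One hot-spot expansion: seed a unit Gaussian at every point of
    the current hot spot and fuse them by elementwise maximum."""
    out = np.zeros_like(hotspot)
    for r, c in zip(*np.nonzero(hotspot > thresh)):
        g = hotspot[r, c] * unit_gaussian(hotspot.shape, (r, c), sigma)
        out = np.maximum(out, g)
    return out
```

Iterating `expand_hotspot` the computed number of times would yield the maximum hot spot of claim 5.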
5. The system of claim 4, wherein a maximum hot spot is obtained after the hot-spot expansion has been performed the obtained number of times, the maximum hot spot characterizing the maximum moving range.
6. The system of claim 1, wherein the loss function is specifically:
Figure FDA0003570701790000011
wherein H_n represents the heat value, within the corresponding maximum hot spot, at the centroid of the nth target in the second image; the closer H_n is to 0, the smaller the probability that the centroid of the nth target lies within the maximum hot spot; and N represents the number of targets in the second image.
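The formula of claim 6 survives in this extraction only as an image reference (FDA0003570701790000011) and cannot be recovered here. Purely as an assumption consistent with the surrounding description, a loss that grows as any heat value H_n approaches 0 could take a negative-log-average form:

```python
import math

def range_loss(heat_values, eps=1e-6):
    """Hypothetical loss -(1/N) * sum(log(H_n + eps)): it grows as any
    H_n approaches 0, i.e. as a centroid falls outside its maximum
    hot spot, and is near 0 when every H_n is near 1."""
    n = len(heat_values)
    return -sum(math.log(h + eps) for h in heat_values) / n
```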
7. An intelligent control method for sorting coal and gangue for mines, characterized by comprising the following steps:
capturing a first image of the conveyor belt surface with a first camera located at a first position; identifying coal and gangue in the first image using a semantic segmentation network; and positioning the identified gangue so that a sorting device can sort out the gangue based on the positioning information;
wherein the method trains the semantic segmentation network, and the loss function used in training the network is constructed as follows:
acquiring a second image of the conveyor belt surface with a second camera located at a second position, wherein the similarity between the second image and the first image is greater than a preset similarity threshold;
acquiring the maximum moving range, within the second image, of the centroid of each target in the first image at the time the second image is acquired, where each target is coal or gangue;
constructing the loss function based on whether the centroid of each target in the second image lies within its maximum moving range.
CN202111289304.0A 2021-11-02 2021-11-02 Intelligent control system and method for separating coal and gangue for mine Active CN113989251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111289304.0A CN113989251B (en) 2021-11-02 2021-11-02 Intelligent control system and method for separating coal and gangue for mine


Publications (2)

Publication Number Publication Date
CN113989251A CN113989251A (en) 2022-01-28
CN113989251B (en) 2022-05-24

Family

ID=79745857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111289304.0A Active CN113989251B (en) 2021-11-02 2021-11-02 Intelligent control system and method for separating coal and gangue for mine

Country Status (1)

Country Link
CN (1) CN113989251B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914462A (en) * 2020-07-22 2020-11-10 中国地质大学深圳研究院 Method and device for predicting target drift of maritime search and rescue
CN112264330A (en) * 2020-10-22 2021-01-26 安徽理工大学 Remove end coal gangue photoelectric separation system
CN112354874A (en) * 2020-09-03 2021-02-12 江苏旷博智能技术有限公司 Coal and gangue identification method and gangue automatic separation system

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10127667B2 (en) * 2014-08-01 2018-11-13 Locuslabs, Inc. Image-based object location system and process
US20190070787A1 (en) * 2017-08-10 2019-03-07 William Marsh Rice University Machine learning enabled model for predicting the spreading process in powder-bed three-dimensional printing
CN109188550A (en) * 2018-08-30 2019-01-11 山西精英科技股份有限公司 A kind of belt conveyor foreign matter detection system
CN109230351B (en) * 2018-08-30 2020-06-16 精英数智科技股份有限公司 Method for identifying abnormal operation of belt conveyor
CN111346842A (en) * 2018-12-24 2020-06-30 顺丰科技有限公司 Coal gangue sorting method, device, equipment and storage medium
CN109977747A (en) * 2019-01-08 2019-07-05 浙江大学 A kind of coal rock detection system of comprehensive space-time characteristic
US11308314B2 (en) * 2019-03-21 2022-04-19 Toyota Research Institute, Inc. Systems, devices, and methods for generating a pose estimate of an object
CN110232696B (en) * 2019-06-20 2024-03-08 腾讯科技(深圳)有限公司 Image region segmentation method, model training method and device
CN111709935B (en) * 2020-06-17 2023-04-07 西安科技大学 Real-time coal gangue positioning and identifying method for ground moving belt
CN112613515A (en) * 2020-11-23 2021-04-06 上海眼控科技股份有限公司 Semantic segmentation method and device, computer equipment and storage medium
CN112508105B (en) * 2020-12-11 2024-03-19 南京富岛信息工程有限公司 Fault detection and retrieval method for oil extraction machine
CN113289925A (en) * 2021-05-21 2021-08-24 中国矿业大学(北京) Intelligent sorting system and method for sundries in coal based on deep learning
CN113222857A (en) * 2021-05-27 2021-08-06 Oppo广东移动通信有限公司 Image processing method, model training method and device, medium and electronic equipment
CN113379743B (en) * 2021-08-12 2021-10-29 山东中都机器有限公司 Conveyor abnormity detection method and system based on computer vision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant