CN114049543A - Automatic identification method for scrap steel unloading change area based on deep learning - Google Patents

Automatic identification method for scrap steel unloading change area based on deep learning

Info

Publication number
CN114049543A
CN114049543A
Authority
CN
China
Prior art keywords
scrap
model
scrap steel
area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111357651.2A
Other languages
Chinese (zh)
Inventor
来博文
贾永坡
安宝
彭晶
王伟
冯兴
杨冬靓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hegang Digital Technology Co ltd
Original Assignee
Hegang Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hegang Digital Technology Co ltd filed Critical Hegang Digital Technology Co ltd
Priority to CN202111357651.2A
Publication of CN114049543A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a method for automatically identifying a scrap steel unloading change area based on deep learning. The method uses the Mask R-CNN algorithm to rapidly identify the position of the scrap steel carriage in a video image and perform instance segmentation on it, uses the YOLO-v4 algorithm to track the scrap steel unloading grab bucket, automatically captures an image of the carriage after each grab of scrap, and identifies the changed region of the carriage through a Gaussian mixture model. The method solves the lack of a quality basis for evaluating scrap steel during unloading and quality inspection, and improves the efficiency and accuracy of scrap steel evaluation by quality inspectors.

Description

Automatic identification method for scrap steel unloading change area based on deep learning
Technical Field
The invention relates to the technical field of scrap steel recycling and processing, in particular to a method for automatically identifying a scrap steel discharging change area based on deep learning.
Background
In current steel production, scrap steel is generally recycled and re-melted to reduce cost and improve smelting efficiency. However, because large quantities of scrap are used, multiple material types are mixed in each load, and adulteration of scrap is common, the quality of purchased scrap must be inspected in order to ensure product quality, improve steel yield, and avoid accidents such as explosions and molten steel splashing.
Traditional manual scrap quality inspection is strongly influenced by subjective factors and places high demands on personnel: inspectors must be familiar with the standards and can only judge from extensive experience. Judgments also differ between individuals and may be affected by fatigue or mood; there is no quantitative evaluation conclusion, useful data analysis cannot be accumulated, and suppliers are difficult to convince. Moreover, the working environment is harsh: an inspector must climb four or five meters to the top of a truck each time to observe the scrap at close range, which is labor-intensive and hazardous.
Deep learning and computer vision are now widely applied in image segmentation and image recognition, and convolutional neural networks achieve good results in both image segmentation and image target recognition. Applying deep learning to automatically identify the change region during scrap truck unloading makes it possible to automatically save an image of the changed region of the carriage after each grab, providing inspectors with a basis for scrap quality evaluation, reducing workload, and ensuring work safety.
Two challenges currently exist: (1) how to quickly locate the scrap steel carriage in the video image and segment it accurately; (2) how to automatically identify the changed region of the carriage after each grab of the scrap grab bucket and save an image of that region.
Disclosure of Invention
The invention aims to solve the technical problem of providing a method for automatically identifying a scrap discharge change area based on deep learning, which can quickly locate the position of a scrap carriage in an image video and accurately divide the position, automatically identify the change area of the scrap carriage after a scrap grab bucket grabs the scrap each time, and store the image of the change area.
In order to solve the above technical problems, the present invention comprises:
a method for automatically identifying a scrap discharge change area based on deep learning comprises the following steps:
s1: acquiring a target area image to be detected through a camera and transmitting the target area image to an algorithm server;
s2: constructing a scrap steel carriage positioning model by using a Mask R-CNN algorithm, inputting the target area image obtained in the step S1 into the trained scrap steel carriage positioning model, identifying the position of a scrap steel carriage in the target area image, and extracting the pixel range of the scrap steel carriage area;
s3: constructing a tracking model of the scrap steel discharging grab bucket based on a YOLO-v4 algorithm, tracking the moving track of the grab bucket in the scrap steel carriage area extracted in the step S2 by utilizing the trained tracking model of the scrap steel discharging grab bucket, and capturing multi-frame images of the scrap steel carriage area after the grab bucket captures the scrap steel from a video by taking a descending edge as a trigger when the grab bucket is detected to leave the scrap steel carriage area;
s4: and modeling the background by using a Gaussian mixture model, training the Gaussian mixture model by using the multi-frame image of the area of the scrap steel carriage captured last time in the step S3, identifying the change area of the scrap steel carriage captured by the scrap steel grab bucket through the difference operation of the Gaussian mixture model and the current image of the area of the scrap steel carriage captured last time, and storing the change area with the largest area.
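The "descending edge" trigger in step S3 (the moment the grab bucket is detected leaving the carriage area) can be sketched as a simple state machine over per-frame detections. The following is an illustrative sketch only, not the patented implementation; the function name, flag sequence, and frame count k are assumptions:

```python
def capture_after_falling_edge(in_area, frames, k=3):
    """Collect k frames after every falling edge of the in-area signal.

    in_area: per-frame booleans, True while the grab bucket is detected
             inside the scrap carriage region (step S3's tracking output).
    frames:  the corresponding video frames (any payload).
    Returns one list of up to k frames per True -> False transition,
    i.e. per moment the grab leaves the carriage area.
    """
    captures, prev = [], False
    for i, flag in enumerate(in_area):
        if prev and not flag:          # falling edge: grab just left the area
            captures.append(frames[i:i + k])
        prev = flag
    return captures
```

The captured frame groups would then feed the Gaussian mixture background model of step S4.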
Further, in step S2, the scrap car positioning model performs multi-scale feature extraction on the target area image using ResNet-FPN and achieves pixel-level identification of the scrap car position through the head network.
Further, in step S2, the training of the scrap car positioning model includes the following steps:
s21: collecting large-scale scrap car image data and industrial field video monitoring image data, labeling the image data, and producing a training set V_train, a test set V_test, and a validation set V_val;
S22: solving the parameters in the model with an optimization function, namely the Adam optimizer in TensorFlow; the model input image size is img = (512, 512, 3);
s23: training a scrap steel carriage positioning model, and stopping training when the Loss value meets the requirement:
Loss = L_cls + L_box + L_mask
where L_cls is the classification error, L_box is the detection error, and L_mask is the segmentation error;
s24: and testing the prediction effect of the trained model, if the model meets the requirement, saving the model, otherwise, repeating the step S23 to continue training.
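Steps S23 and S24 describe a train/evaluate loop that stops once the combined loss meets the requirement and otherwise continues training. A minimal sketch of that control flow follows; the threshold value, callables, and toy loss are illustrative assumptions, not values from the patent:

```python
def train_until(step_fn, loss_fn, target=0.05, max_epochs=100):
    """Run training steps until Loss = L_cls + L_box + L_mask drops to
    the target (step S23), then report the epoch so the model can be
    saved (step S24); otherwise keep repeating up to max_epochs."""
    for epoch in range(1, max_epochs + 1):
        step_fn()                 # one training pass over the data
        if loss_fn() <= target:   # requirement met -> save model
            return epoch
    return None                   # requirement never met

# Toy stand-in: a "loss" that halves every epoch.
state = {"loss": 1.0}
def step():
    state["loss"] /= 2.0
def loss():
    return state["loss"]
```

In practice step_fn would run one epoch of the Mask R-CNN (or YOLO-v4) training and loss_fn would evaluate the combined loss on the validation set.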
Further, in step S3, the scrap unloading grab bucket tracking model uses CSPDarknet53 as the backbone feature extraction network and combines it with an SPP network to realize multi-scale feature extraction.
Further, in the step S3, the training of the tracking model of the scrap discharge grab comprises the following steps:
s31: collecting large-scale image data of the scrap grab bucket, labeling the images in three states (before grabbing, during grabbing, and after grabbing), and producing a training set Q_train, a test set Q_test, and a validation set Q_val;
S32: solving the parameters in the model with an optimization function, namely the Adam optimizer in TensorFlow; the model input image size is img = (416, 416, 3);
s33: training the tracking model of the scrap steel unloading grab bucket, and stopping training when the Loss value reaches the requirement:
Loss = L_cls + L_box + L_cfd
where L_cls is the classification error, L_box is the detection error, and L_cfd is the confidence error;
s34: and testing the prediction effect of the trained model, if the model meets the requirement, saving the model, otherwise, repeating the step S33 to continue training.
Further, in step S4, the background modeling of the gaussian mixture model includes the following steps:
s41: storing k frame images of the scrap steel carriage area after last grabbing;
s42: the Gaussian mixture model G is built from the k frames of the scrap carriage region stored after the previous grab; the observed values of the pixel at position (x_0, y_0) in the images over a period of time are:
{X_1, ..., X_t} = {I(x_0, y_0, i) : 1 <= i <= t}
s43: modeling the observed value in the above formula by using a plurality of Gaussian distributions to obtain the color value probability of the current pixel point as follows:
P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})
where K is the number of Gaussian distributions; ω_{i,t} is the weight estimate, i.e. the probability that the pixel belongs to the i-th Gaussian distribution at time t; μ_{i,t} is the mean of the i-th Gaussian distribution at time t; Σ_{i,t} is the covariance matrix of the i-th Gaussian distribution; and η is the Gaussian probability density function. The three components (R, G, B) of the pixel color value are taken as mutually independent with the same variance, i.e. the covariance matrix of the i-th Gaussian distribution is
Σ_{i,t} = σ_{i,t}² · I
S44: difference operation between the Gaussian mixture model and the currently captured image: for a pixel (x_0, y_0, t) in the input image, its color value is compared against the K existing Gaussian distributions to determine whether it matches any of them; if it matches, the pixel is taken as a background point. The matching rule is:
|X_t - μ_{i,t-1}| < TH × σ_{i,t-1}
where μ_{i,t-1} is the mean of the i-th Gaussian distribution at time t-1, TH = 2.5, and σ_{i,t-1} is the standard deviation of the i-th Gaussian distribution at time t-1.
S45: inputting the current image of the scrap carriage region after grabbing into the Gaussian mixture model G, identifying the currently changed regions of the carriage, and saving the change region with the largest area:
A_change = argmax_{A_i} S(A_i), i = 1, ..., n
where A_change is the rectangular change region with the largest area (its coordinates are saved), and S(A_1), S(A_2), ..., S(A_n) are the areas of the individual change regions.
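The per-pixel matching rule of step S44 can be illustrated with NumPy: a pixel is background if its color matches any of the K Gaussians within TH = 2.5 standard deviations. This is a minimal sketch under the same independent, equal-variance (R, G, B) assumption as step S43; the function name and array shapes are illustrative:

```python
import numpy as np

def is_background(x, mu, sigma, th=2.5):
    """Step S44 matching rule: |X_t - mu_{i,t-1}| < TH * sigma_{i,t-1}.

    x:     current pixel color, shape (3,) for (R, G, B)
    mu:    means of the K Gaussians, shape (K, 3)
    sigma: per-Gaussian standard deviation, shape (K,)
           (one scalar per Gaussian, since R, G, B share the variance)
    """
    dev = np.abs(np.asarray(x, float) - np.asarray(mu, float))      # (K, 3)
    matches = np.all(dev < th * np.asarray(sigma, float)[:, None], axis=1)
    return bool(np.any(matches))
```

Pixels failing the test for every Gaussian are foreground, i.e. candidate change-region pixels.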
The invention has the beneficial effects that:
the invention provides a method for automatically identifying a scrap discharging change area by constructing multiple models based on a deep learning technology, which solves the problem that scrap quality testing personnel lack a basis for evaluating the scrap quality in the scrap discharging and quality testing process, and improves the efficiency and accuracy of the scrap quality testing personnel in evaluating the scrap. According to the invention, firstly, a positioning model of the scrap steel carriage is constructed by utilizing a Mask R-CNN algorithm, under the condition of high-efficiency operation efficiency, the example of the scrap steel carriage in an image is segmented, multi-scale features are extracted, and pixel-level positioning can be realized, namely the position coordinates and the occupied pixel range of the scrap steel carriage are detected; secondly, a tracking model of the scrap steel grab bucket is constructed by using a YOLO-v4 algorithm, the moving track of the grab bucket is tracked in a scrap steel carriage area, when the condition that the grab bucket leaves the scrap steel carriage area is detected, a descending edge is used as a trigger to capture a multiframe image, and the multiframe image is automatically captured after each capture; and finally, based on a Gaussian Mixture Model (GMM), realizing automatic identification of the discharging change area of the scrap steel carriage: and modeling the background by using a Gaussian mixture model, performing background modeling on the multi-frame image which is automatically captured after the last capture after each capture, obtaining a change region by differentiating the change region with the currently captured image, and determining the final change region by using an area method.
The method is trained specifically for the target scene; the algorithm models are flexible, and the model parameters can be adapted to particular scenes. The invention is highly adaptable, can be deployed on ordinary computers, cameras, and handheld photographing devices, and places low requirements on the supporting hardware.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment apparatus according to the present invention;
FIG. 2 is a schematic diagram showing the results of the steel scrap car inspection model of the present invention;
FIG. 3 is a schematic view of the results of the discharge grab tracking model of the present invention;
FIG. 4 is a diagram illustrating the result of identifying a change region by a Gaussian mixture model according to the present invention.
Detailed Description
For the purpose of promoting an understanding of the invention, reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. It should be understood by those skilled in the art that the examples are only for the understanding of the present invention and should not be construed as the specific limitations of the present invention.
Fig. 1 is a schematic diagram of hardware operating environment equipment related to the method for automatically identifying a scrap discharge change area based on a multi-model established by a deep learning technology. The invention adopts a deep learning multi-model to realize the automatic identification of the discharging change area of the scrap steel vehicle, and mainly comprises video image acquisition, algorithm service and result output.
A method for automatically identifying a scrap discharge change area based on deep learning comprises the following steps:
s1: acquiring a target area image to be detected through a camera and transmitting the target area image to an algorithm server;
s2: constructing a scrap steel carriage positioning model by using a Mask R-CNN algorithm, inputting the target area image obtained in the step S1 into the trained scrap steel carriage positioning model, identifying the position of a scrap steel carriage in the target area image, and extracting the pixel range of the scrap steel carriage area; as shown in fig. 2, fig. 2(a) is a schematic diagram of a convolution neural network structure of a scrap car detection model, and fig. 2(b) is a diagram of a result of extracting a car region by the scrap car detection model.
S3: constructing a tracking model of the scrap steel discharging grab bucket based on a YOLO-v4 algorithm, tracking the moving track of the grab bucket in the scrap steel carriage area extracted in the step S2 by utilizing the trained tracking model of the scrap steel discharging grab bucket, and capturing multi-frame images of the scrap steel carriage area after the grab bucket captures the scrap steel from a video by taking a descending edge as a trigger when the grab bucket is detected to leave the scrap steel carriage area; as shown in fig. 3, fig. 3(a) is a diagram of a convolutional neural network structure of a discharge grab tracking model, and fig. 3(b) is a diagram of a discharge grab tracking result, and the grab is marked by a regression box.
S4: and modeling the background by using a Gaussian mixture model, training the Gaussian mixture model by using the multi-frame image of the area of the scrap steel carriage captured last time in the step S3, identifying the change area of the scrap steel carriage captured by the scrap steel grab bucket through the difference operation of the Gaussian mixture model and the current image of the area of the scrap steel carriage captured last time, and storing the change area with the largest area. As shown in fig. 4, fig. 4(a) is a graph of the result after the discharge grab bucket captures, fig. 4(b) is a graph of the binarization result of the change region after one frame of captured image is input to the gaussian mixture model, fig. 4(c) is a graph of the rectangular frame mark of all the change regions, fig. 4(d) is a graph of the rectangular frame mark of the desired change region, and fig. 4(e) is a graph of the result of clipping the change region.
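The selection of the largest change region in step S45 (the desired region of Fig. 4(d)) reduces to an argmax over rectangle areas. A minimal sketch; representing regions as (x, y, w, h) tuples is an assumption for illustration:

```python
def largest_change_region(regions):
    """Pick the change region with the largest area, A_change.

    regions: list of (x, y, w, h) bounding rectangles around the
             binarized change areas (cf. Fig. 4(c)).
    Returns the rectangle whose area S = w * h is maximal.
    """
    return max(regions, key=lambda r: r[2] * r[3])
```

The returned rectangle would then be used to crop the saved change-region image (cf. Fig. 4(e)).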
In step S2, the scrap car positioning model performs multi-scale feature extraction on the target area image using ResNet-FPN and achieves pixel-level identification of the scrap car position through the head network. Training of the scrap car positioning model includes the following steps:
s21: collecting large-scale scrap car image data and industrial field video monitoring image data, labeling the image data, and producing a training set V_train, a test set V_test, and a validation set V_val;
S22: solving the parameters in the model with an optimization function, namely the Adam optimizer in TensorFlow; the model input image size is img = (512, 512, 3);
s23: training a scrap steel carriage positioning model, and stopping training when the Loss value meets the requirement:
Loss = L_cls + L_box + L_mask
where L_cls is the classification error, L_box is the detection error, and L_mask is the segmentation error;
s24: and testing the prediction effect of the trained model, if the model meets the requirement, saving the model, otherwise, repeating the step S23 to continue training.
In step S3, the scrap unloading grab bucket tracking model uses CSPDarknet53 as the backbone feature extraction network and combines it with an SPP network to realize multi-scale feature extraction. With more than 70 convolutional layers and tens of millions of parameters extracting grab bucket features, it achieves real-time automatic tracking of the grab against the complex background of the scrap environment: its high tracking speed ensures real-time tracking, and its high positioning accuracy ensures tracking accuracy. Training of the grab bucket tracking model includes the following steps:
s31: collecting large-scale image data of the scrap grab bucket, labeling the images in three states (before grabbing, during grabbing, and after grabbing), and producing a training set Q_train, a test set Q_test, and a validation set Q_val;
S32: solving the parameters in the model with an optimization function, namely the Adam optimizer in TensorFlow; the model input image size is img = (416, 416, 3);
s33: training the tracking model of the scrap steel unloading grab bucket, and stopping training when the Loss value reaches the requirement:
Loss = L_cls + L_box + L_cfd
where L_cls is the classification error, L_box is the detection error, and L_cfd is the confidence error;
s34: and testing the prediction effect of the trained model, if the model meets the requirement, saving the model, otherwise, repeating the step S33 to continue training.
In step S4, the change region identification model is robust in modeling complex scene backgrounds, is effective for multimodal background distributions, and adapts to background changes such as gradually changing light, the cluttered scrap environment, and large changes in scene illumination. Background modeling with the Gaussian mixture model includes the following steps:
s41: storing k frame images of the scrap steel carriage area after last grabbing;
s42: the Gaussian mixture model G is built from the k frames of the scrap carriage region stored after the previous grab; the observed values of the pixel at position (x_0, y_0) in the images over a period of time are:
{X_1, ..., X_t} = {I(x_0, y_0, i) : 1 <= i <= t}
where X_1, ..., X_t are the observed values of the pixel at position (x_0, y_0) within time t;
s43: modeling the observed value in the above formula by using a plurality of Gaussian distributions to obtain the color value probability of the current pixel point as follows:
P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})
where K is the number of Gaussian distributions; ω_{i,t} is the weight estimate, i.e. the probability that the pixel belongs to the i-th Gaussian distribution at time t; μ_{i,t} is the mean of the i-th Gaussian distribution at time t; Σ_{i,t} is the covariance matrix of the i-th Gaussian distribution; and η is the Gaussian probability density function. The three components (R, G, B) of the pixel color value are taken as mutually independent with the same variance, i.e. the covariance matrix of the i-th Gaussian distribution is
Σ_{i,t} = σ_{i,t}² · I
S44: difference operation between the Gaussian mixture model and the currently captured image: for a pixel (x_0, y_0, t) in the input image, its color value is compared against the K existing Gaussian distributions to determine whether it matches any of them; if it matches, the pixel is taken as a background point. The matching rule is:
|X_t - μ_{i,t-1}| < TH × σ_{i,t-1}
where μ_{i,t-1} is the mean of the i-th Gaussian distribution at time t-1, TH = 2.5, and σ_{i,t-1} is the standard deviation of the i-th Gaussian distribution at time t-1.
S45: inputting the current image of the scrap carriage region after grabbing into the Gaussian mixture model G, identifying the currently changed regions of the carriage, and saving the change region with the largest area:
A_change = argmax_{A_i} S(A_i), i = 1, ..., n
where A_change is the rectangular change region with the largest area (its coordinates are saved), and S(A_1), S(A_2), ..., S(A_n) are the areas of the individual change regions.
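Applied image-wide, the background/foreground decision of step S44 yields a binary change mask like Fig. 4(b). The following NumPy sketch uses a single-Gaussian background (per-pixel mean and standard deviation estimated from the k stored frames) as a simplified stand-in for the full K-component mixture; names and shapes are illustrative assumptions:

```python
import numpy as np

def change_mask(frames, current, th=2.5):
    """Binary mask of changed pixels in `current` vs. a background model.

    frames:  stack of k background frames, shape (k, H, W, 3)
             (the carriage images stored after the previous grab, S41)
    current: the image captured after the latest grab, shape (H, W, 3)
    A pixel is marked changed when it deviates from the per-pixel mean
    by more than th standard deviations in any color channel (cf. S44).
    """
    frames = np.asarray(frames, float)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0) + 1e-6          # avoid zero std
    dev = np.abs(np.asarray(current, float) - mean)
    return np.any(dev > th * std, axis=-1)   # shape (H, W), dtype bool
```

Connected regions of the mask would then be boxed (Fig. 4(c)) and the largest one saved, as in step S45.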
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A method for automatically identifying a scrap discharge change area based on deep learning is characterized by comprising the following steps:
s1: acquiring a target area image to be detected through a camera and transmitting the target area image to an algorithm server;
s2: constructing a scrap steel carriage positioning model by using a Mask R-CNN algorithm, inputting the target area image obtained in the step S1 into the trained scrap steel carriage positioning model, identifying the position of a scrap steel carriage in the target area image, and extracting the pixel range of the scrap steel carriage area;
s3: constructing a tracking model of the scrap steel discharging grab bucket based on a YOLO-v4 algorithm, tracking the moving track of the grab bucket in the scrap steel carriage area extracted in the step S2 by utilizing the trained tracking model of the scrap steel discharging grab bucket, and capturing multi-frame images of the scrap steel carriage area after the grab bucket captures the scrap steel from a video by taking a descending edge as a trigger when the grab bucket is detected to leave the scrap steel carriage area;
s4: and modeling the background by using a Gaussian mixture model, training the Gaussian mixture model by using the multi-frame image of the area of the scrap steel carriage captured last time in the step S3, identifying the change area of the scrap steel carriage captured by the scrap steel grab bucket through the difference operation of the Gaussian mixture model and the current image of the area of the scrap steel carriage captured last time, and storing the change area with the largest area.
2. The method for automatically identifying the scrap discharging change area based on deep learning of claim 1, wherein in step S2, the scrap car positioning model performs multi-scale feature extraction on the target area image using ResNet-FPN, and pixel-level identification of the scrap car position is achieved through a head network.
3. The automatic identification method for the scrap discharge change area based on deep learning according to claim 1, wherein in the step S2, the training of the scrap car positioning model comprises the following steps:
s21: collecting large-scale scrap car image data and industrial field video monitoring image data, labeling the image data, and producing a training set V_train, a test set V_test, and a validation set V_val;
S22: solving the parameters in the model with an optimization function, namely the Adam optimizer in TensorFlow; the model input image size is img = (512, 512, 3);
s23: training a scrap steel carriage positioning model, and stopping training when the Loss value meets the requirement:
Loss = L_cls + L_box + L_mask
where L_cls is the classification error, L_box is the detection error, and L_mask is the segmentation error;
s24: and testing the prediction effect of the trained model, if the model meets the requirement, saving the model, otherwise, repeating the step S23 to continue training.
4. The method for automatically identifying the scrap discharge change area based on deep learning of claim 1, wherein in step S3, the scrap discharge grab tracking model uses CSPDarknet53 as the backbone feature extraction network and combines it with an SPP network to realize multi-scale feature extraction.
5. The automatic identification method for the scrap discharge change area based on deep learning according to claim 1, wherein in the step S3, the training of the scrap discharge grab tracking model comprises the following steps:
s31: collecting large-scale image data of the scrap grab bucket, labeling the images in three states (before grabbing, during grabbing, and after grabbing), and producing a training set Q_train, a test set Q_test, and a validation set Q_val;
S32: solving the parameters in the model with an optimization function, namely the Adam optimizer in TensorFlow; the model input image size is img = (416, 416, 3);
s33: training the tracking model of the scrap steel unloading grab bucket, and stopping training when the Loss value reaches the requirement:
Loss = L_cls + L_box + L_cfd
where L_cls is the classification error, L_box is the detection error, and L_cfd is the confidence error;
s34: and testing the prediction effect of the trained model, if the model meets the requirement, saving the model, otherwise, repeating the step S33 to continue training.
6. The method for automatically identifying the discharging change area of the scrap steel based on deep learning according to claim 1, wherein in the step S4, the Gaussian mixture model background modeling comprises the following steps:
s41: storing k frame images of the scrap steel carriage area after last grabbing;
s42: the Gaussian mixture model G is built from the k frames of the scrap carriage region stored after the previous grab; the observed values of the pixel at position (x_0, y_0) in the images over a period of time are:
{X_1, ..., X_t} = {I(x_0, y_0, i) : 1 <= i <= t}
s43: modeling the observed value in the above formula by using a plurality of Gaussian distributions to obtain the color value probability of the current pixel point as follows:
P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})
where K is the number of Gaussian distributions; ω_{i,t} is the weight estimate, i.e. the probability that the pixel belongs to the i-th Gaussian distribution at time t; μ_{i,t} is the mean of the i-th Gaussian distribution at time t; Σ_{i,t} is the covariance matrix of the i-th Gaussian distribution; and η is the Gaussian probability density function. The three components (R, G, B) of the pixel color value are taken as mutually independent with the same variance, i.e. the covariance matrix of the i-th Gaussian distribution is
Σ_{i,t} = σ_{i,t}² · I
S44: difference operation between the Gaussian mixture model and the currently captured image: for a pixel (x_0, y_0, t) in the input image, its color value is compared against the K existing Gaussian distributions to determine whether it matches any of them; if it matches, the pixel is taken as a background point. The matching rule is:
|X_t - μ_{i,t-1}| < TH × σ_{i,t-1}
where μ_{i,t-1} is the mean of the i-th Gaussian distribution at time t-1, TH = 2.5, and σ_{i,t-1} is the standard deviation of the i-th Gaussian distribution at time t-1.
S45: inputting the current image of the scrap carriage region after grabbing into the Gaussian mixture model G, identifying the currently changed regions of the carriage, and saving the change region with the largest area:
A_change = argmax_{A_i} S(A_i), i = 1, ..., n
where A_change is the rectangular change region with the largest area (its coordinates are saved), and S(A_1), S(A_2), ..., S(A_n) are the areas of the individual change regions.
CN202111357651.2A 2021-11-16 2021-11-16 Automatic identification method for scrap steel unloading change area based on deep learning Pending CN114049543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111357651.2A CN114049543A (en) 2021-11-16 2021-11-16 Automatic identification method for scrap steel unloading change area based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111357651.2A CN114049543A (en) 2021-11-16 2021-11-16 Automatic identification method for scrap steel unloading change area based on deep learning

Publications (1)

Publication Number Publication Date
CN114049543A true CN114049543A (en) 2022-02-15

Family

ID=80209560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111357651.2A Pending CN114049543A (en) 2021-11-16 2021-11-16 Automatic identification method for scrap steel unloading change area based on deep learning

Country Status (1)

Country Link
CN (1) CN114049543A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830008A (en) * 2023-02-06 2023-03-21 上海爱梵达云计算有限公司 Analysis system for judging waste degree of scrap steel based on image analysis comparison


Similar Documents

Publication Publication Date Title
CN110738127B (en) Helmet identification method based on unsupervised deep learning neural network algorithm
CN107437245B (en) High-speed railway contact net fault diagnosis method based on deep convolutional neural network
Faghih-Roohi et al. Deep convolutional neural networks for detection of rail surface defects
CN103499585B (en) Based on noncontinuity lithium battery film defect inspection method and the device thereof of machine vision
CN106991668B (en) Evaluation method for pictures shot by skynet camera
CN105044122B (en) A kind of copper piece surface defect visible detection method based on semi-supervised learning model
CN108257114A (en) A kind of transmission facility defect inspection method based on deep learning
CN104992449A (en) Information identification and surface defect on-line detection method based on machine visual sense
Liu et al. A rail surface defect detection method based on pyramid feature and lightweight convolutional neural network
CN113324864B (en) Pantograph carbon slide plate abrasion detection method based on deep learning target detection
CN108090434B (en) Rapid ore identification method
CN106683073B (en) License plate detection method, camera and server
CN110569843B (en) Intelligent detection and identification method for mine target
CN106778650A (en) Scene adaptive pedestrian detection method and system based on polymorphic type information fusion
CN104198497A (en) Surface defect detection method based on visual saliency map and support vector machine
CN109911550A (en) Scratch board conveyor protective device based on infrared thermal imaging and visible light video analysis
CN112613454A (en) Electric power infrastructure construction site violation identification method and system
CN110909657A (en) Method for identifying apparent tunnel disease image
CN113177605A (en) Scrap steel carriage grade judgment method based on video monitoring
CN115512134A (en) Express item stacking abnormity early warning method, device, equipment and storage medium
CN114049543A (en) Automatic identification method for scrap steel unloading change area based on deep learning
CN113077423A (en) Laser selective melting pool image analysis system based on convolutional neural network
CN117152161B (en) Shaving board quality detection method and system based on image recognition
Hashmi et al. Computer-vision based visual inspection and crack detection of railroad tracks
CN112613560A (en) Method for identifying front opening and closing damage fault of railway bullet train head cover based on Faster R-CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination