CN110660074B - Method for establishing steel scrap grade division neural network model - Google Patents


Info

Publication number
CN110660074B
Authority
CN
China
Prior art keywords
convolution
layer
calculation
line
image
Prior art date
Legal status
Active
Application number
CN201910958076.8A
Other languages
Chinese (zh)
Other versions
CN110660074A
Inventor
李大亮
王保红
王占祥
郭锋
齐明誉
谢建军
韩超洋
Current Assignee
Beijing Tongchuang Xintong Technology Co ltd
Original Assignee
Beijing Tongchuang Xintong Technology Co ltd
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed ("Global patent litigation dataset" by Darts-ip, licensed under a Creative Commons Attribution 4.0 International License)
Application filed by Beijing Tongchuang Xintong Technology Co ltd
Priority to CN201910958076.8A
Publication of CN110660074A
Application granted
Publication of CN110660074B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Abstract

The invention discloses a method for establishing a steel scrap grade division neural network model for the grade classification detection of stored steel scrap. The method comprises: obtaining a plurality of images; visually determining the different steel scrap grades shown in the images; preprocessing the images to remove invalid watermarks and improve image contrast; extracting image data features from the images; and performing convolutional neural network learning on the extracted image data features of the different grades to form a grade division neural network model with a grade classification output. The extraction of the image data features is realized by an aggregated set of convolutional neural network convolution calculations over the image pixel matrix data, and comprises: extraction of object color, edge features and texture features in the image, and extraction of correlation features between object edges and textures in the image, each extraction being formed by the aggregated output of the calculations of a plurality of lines of convolution layers, or of convolution layers and a pooling layer.

Description

Method for establishing steel scrap grade division neural network model
Technical Field
The invention relates to a method for establishing a steel scrap grade division neural network model.
Background
A steel mill takes in a large amount of scrap steel every year. Apart from waste machinery, most scrap material is delivered to the mill by truck and comprises pipes, blocks and plates of various shapes and sizes, often mixed with other impurities. On arrival the scrap must be graded according to thickness, and payment is made by grade, so the grading examination is very important. At present, data collection during quality inspection is manual: a collection notice is filled in by hand, and an inspector must climb onto the scrap truck, measure the dimensions of the scrap by hand, judge its quality and grade by eye, and then enter the judgment into the metering system and the ERP material system. Such an approach has several disadvantages: 1) the scrap has varied shapes and is stacked in disorder, so the overall condition of the load cannot be examined in detail; 2) inspectors must frequently climb on and off the vehicle, which is a safety hazard; 3) the production rhythm of a steel mill is fast, and frequent boarding for inspection delays the crane's unloading and affects production; 4) dedicated personnel are required, which is a labor cost; 5) manual recording and checking is entirely visual, standards differ from person to person, misjudgments occur, and accuracy is low; 6) there is a risk of collusion between the supplier and the inspector.
As an image recognition technology, the convolutional neural network has been widely applied to face recognition: a recognition model is established and a face image is input into the model for recognition. Could grade classification of scrap steel be realized the same way? In establishing a face recognition model, however, learning usually proceeds by extracting facial edge features, whereas the scrap materials carried by truck are overlapped and pressed together, and small scrap pieces are mixed and spread across the carriage so that their shapes cannot be distinguished at all. A model therefore cannot be established by the conventional method of extracting edge features.
Disclosure of Invention
The invention aims to provide a method for establishing a steel scrap grade division neural network model, which is used for classifying and detecting steel scrap grades in storage.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a method for establishing a steel scrap grade division neural network model, used for the grade classification detection of stored steel scrap, comprises: obtaining a plurality of images; visually determining the different steel scrap grades shown in the images; preprocessing the images to remove invalid watermarks and improve image contrast; extracting image data features from the images; and performing convolutional neural network learning on the extracted image data features of the different grades to form a grade division neural network model with a grade classification output. The extraction of the image data features is realized by an aggregated set of convolutional neural network convolution calculations over the image pixel matrix data, and comprises: extraction of object color, edge features and texture features in the image, and extraction of correlation features between object edges and textures in the image, each extraction being formed by the aggregated output of the calculations of a plurality of lines of convolution layers, or of convolution layers and a pooling layer;
wherein: the aggregated output of the calculation outputs of the convolution layers, or of the convolution layers and a pooling layer, of at least three lines forms the extraction of the object color, edge features and texture features in the image, and the number of convolution layers differs from line to line;
the number of lines calculated for the convolution layer extracted from the correlation characteristics between the edge and the texture is larger than the number of lines calculated for the convolution layer extracted from the color, the edge and the texture characteristics of the object in the image.
In a further aspect of the scheme: the different scrap grades of the plurality of images are determined by a panel of professionals through visual recognition and discussion.
In a further aspect of the scheme: the extraction of the object color and edge features in the image is formed by the aggregated output of the calculation outputs of three lines of convolution layers and a pooling layer, comprising a first line with one pooling layer, a second line with two convolution layers and a third line with four convolution layers,
wherein:
the first line is one pooling layer: with a 3 × 3 pixel matrix as the sliding window and a step length of 2, the pooling layer performs maximum pooling calculation on the valid image pixel matrix data and outputs the result to the set;
the second line comprises two convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 192 convolution kernels of 1 × 1 pixel matrices, and the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 192 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set;
the third line comprises four convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices; the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 256 convolution kernels of 1 × 7 pixel matrices; the third convolution layer performs convolution calculation on the valid result of the second convolution layer with 320 convolution kernels of 7 × 1 pixel matrices; and the fourth convolution layer performs convolution calculation on the valid result of the third convolution layer with 320 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set.
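The three lines above each end in a stride-2 step, so their outputs share one spatial size and can be merged into the set. A minimal sketch of that size arithmetic, under stated assumptions: valid (no-padding) 3 × 3 steps, same-padded (size-preserving) 1 × 7/7 × 1 steps, and a hypothetical 17 × 17 input feature map — none of these sizes appear in the patent itself.

```python
def out_size(n, kernel, stride=1):
    # Output length of a "valid" (no-padding) convolution or pooling step.
    return (n - kernel) // stride + 1

h = 17  # assumed input feature-map size; the patent does not state one

line1 = out_size(h, 3, 2)               # 3x3 max pooling, stride 2
line2 = out_size(out_size(h, 1), 3, 2)  # 1x1 conv, then 3x3 conv, stride 2
# The 1x7 and 7x1 steps are assumed same-padded (size-preserving) here:
line3 = out_size(out_size(h, 1), 3, 2)  # 1x1, 1x7, 7x1, then 3x3 stride 2
```

Because all three lines reach the same spatial size, their outputs can be concatenated channel-wise into the set.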
In a further aspect of the scheme: the extraction of the texture features in the image operates on the aggregated output of the extraction of the object color and edge features, and is formed by the aggregated output of the calculation outputs of three lines, comprising a first line with a 0-layer (identity) convolution line, a second line with two convolution layers and a third line with three convolution layers; the texture features are formed through the activation function of the convolutional network;
wherein:
the first line is the 0-layer convolution line: the pixel matrix data output by the valid color and edge feature extraction set is output directly to the set without any operation;
the second line comprises two convolution layers: the bottom convolution layer performs convolution calculation with 192 convolution kernels of 1 × 1 pixel matrices on the pixel matrix data output by the valid color and edge feature extraction set, and the second convolution layer performs convolution calculation with 1154 convolution kernels of 1 × 1 pixel matrices on the sum of the valid result of the bottom convolution layer and the valid result of the third convolution layer of the third line, and outputs the result to the set;
the third line comprises three convolution layers: the bottom convolution layer performs convolution calculation with 128 convolution kernels of 1 × 1 pixel matrices on the pixel matrix data output by the valid color and edge feature extraction set; the second convolution layer performs convolution calculation with 160 convolution kernels of 1 × 7 pixel matrices on the valid result of the bottom convolution layer; and the third convolution layer performs convolution calculation with 192 convolution kernels of 7 × 1 pixel matrices on the valid result of the second convolution layer, and outputs the result to the second convolution layer of the second line.
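The 0-layer first line can be pictured as channel-wise concatenation of the untouched input with the transformed lines. A toy numpy sketch of that idea; the 1 × 1 convolution is stood in for by a per-pixel channel mix, and all shapes and channel counts here are illustrative assumptions, not values from the patent.

```python
import numpy as np

def concat_lines(x, transforms):
    # x: (channels, h, w). The "set" is a channel-wise concatenation; the
    # 0-layer first line contributes x itself, without any operation.
    outputs = [x] + [t(x) for t in transforms]
    return np.concatenate(outputs, axis=0)

x = np.ones((4, 5, 5))
# Stand-in for a 1x1 convolution: a per-pixel linear map over channels.
to_eight = lambda v: np.tensordot(np.ones((8, 4)), v, axes=1)
y = concat_lines(x, [to_eight])  # 4 identity channels + 8 transformed ones
```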
In a further aspect of the scheme: when the number of lines of convolution layer calculations for the extraction of color, edge and texture features is 3, the number of lines of convolution layer calculations for the extraction of the correlation features between edges and textures is 4, comprising a first line with one pooling layer, a second line with two convolution layers, a third line with two convolution layers and a fourth line with three convolution layers;
wherein:
the first line is one pooling layer: with a 3 × 3 pixel matrix as the sliding window and a step length of 2, the pooling layer performs maximum pooling calculation on the valid image pixel matrix data and outputs the result to the set;
the second line comprises two convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices, and the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 384 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set;
the third line comprises two convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices, and the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 388 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set;
the fourth line comprises three convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices; the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 388 convolution kernels of 3 × 3 pixel matrices; and the third convolution layer performs convolution calculation on the valid result of the second convolution layer with 320 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set.
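Since each line's final output is gathered into the set, the block's output channel count is the sum of the lines' final kernel counts plus the channels passed through unchanged by the pooling line. A small bookkeeping sketch; the 1024-channel input is an assumed figure, not one given in the patent.

```python
def reduction_channels(c_in):
    # Line 1 (pooling) keeps the input channel count; lines 2-4 end with
    # 384, 388 and 320 convolution kernels respectively.
    return c_in + 384 + 388 + 320

total = reduction_channels(1024)  # assumed 1024-channel input
```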
In a further aspect of the scheme: in the preprocessing, a dilation-erosion (morphological) processing method is adopted for removing the invalid watermark, and a histogram equalization processing method is adopted for improving the image contrast.
The method extracts the image features by an aggregated set of convolutional neural network convolution calculations over the image pixel matrix data, which solves the problem of establishing a model for scrap steel. It adopts multiple lines and multiple layers, using not edges alone but the hierarchy of the image as the feature extraction, which is a distinctive feature extraction method; with the model, the consistency of scrap grade identification can reach 80 to 90 percent or more.
The invention is described in detail below with reference to the figures and examples.
Drawings
FIG. 1 is a schematic diagram of the process of extracting object color and edge features;
FIG. 2 is a schematic diagram of a texture feature extraction process;
FIG. 3 is a schematic diagram illustrating a process of extracting correlation features between edges and textures.
Detailed Description
A method for establishing a steel scrap grade division neural network model is used for the grade classification detection of stored steel scrap. The convolutional neural network (CNN) is a known technology, formed by arranging and combining an input layer, convolution layers, pooling layers, a fully connected layer and an output layer.
The method comprises: obtaining a plurality of images; visually determining the different steel scrap grades shown in the images; preprocessing the images to remove invalid watermarks and improve image contrast; extracting image data features from the images; and performing convolutional neural network learning on the extracted image data features of the different grades to form a grade division neural network model with a grade classification output (namely the convolutional neural network output layer). The extraction of the image data features is realized by an aggregated set of convolutional neural network convolution calculations over the image pixel matrix data, and comprises: extraction of object color, edge features and texture features in the image, and extraction of correlation features between object edges and textures in the image, each extraction being formed by the aggregated output of the calculations of a plurality of lines of convolution layers, or of convolution layers and a pooling layer;
wherein: the aggregated output of the calculation outputs of the convolution layers, or of the convolution layers and a pooling layer, of at least three lines forms the extraction of the object color, edge features and texture features in the image, and the number of convolution layers differs from line to line;
the number of lines calculated for the convolution layer extracted from the correlation characteristics between the edge and the texture is larger than the number of lines calculated for the convolution layer extracted from the color, the edge and the texture characteristics of the object in the image.
The method differs from traditional image processing in the extraction of image data features. Traditional recognition features generally adopt a single-line structure, which can only extract features of a fixed field of view and level, or directly retain the features of the previous layer for information combination without further extraction. This is sufficient for a single target such as face or fingerprint recognition, but the problem to be solved by this embodiment is that the targets are disordered and unenclosed, with no distinguishable single-target contour: all the scrap steel entities are scattered and overlapped with one another, and distinguishing the size and number of every entity is obviously unrealistic. The method therefore adopts multi-stage overlapping extraction: the computation is divided in parallel into several partial convolution operations, the levels of the image features extracted by different numbers of convolution operations differ, and the features of a given part contain not only a certain level but both high-order and low-order information. The image information thus obtained is the richest and most meaningful. A traditional model performs feature extraction along one line only, can extract only the feature of a fixed field of view at a certain stage, is not rich, has great limitations, and greatly affects the accuracy of the later model; for images with complex backgrounds, the limitation on feature extraction and recognition is even larger. Compared with a traditional model, the model of this method has higher accuracy.
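The point that lines of different depth see different "fields of view" can be made concrete with the standard receptive-field recurrence. This is a generic sketch of that recurrence, not code from the patent; the kernel sequences are illustrative.

```python
def receptive_field(kernels, strides=None):
    # Classic recurrence: r grows by (k - 1) times the product of all
    # earlier strides ("jump") for each layer with kernel size k.
    strides = strides or [1] * len(kernels)
    r, jump = 1, 1
    for k, s in zip(kernels, strides):
        r += (k - 1) * jump
        jump *= s
    return r

shallow = receptive_field([1, 3])     # a short line: narrow field of view
deep = receptive_field([1, 7, 7, 3])  # a longer line: much wider field
```

A deeper line thus extracts higher-level features over a wider region, while a shallow line keeps fine, local information; merging both is what makes the combined features rich.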
Wherein: the different scrap grades of the plurality of images are determined by a panel of professionals through visual recognition and discussion. That is, experienced sorting operators determine the grade of the scrap corresponding to each image according to the sorting standard, by visual inspection at the photographing site combined with measurement.
The following are some preferred schemes for feature extraction:
Firstly, the extraction of the object color and edge features in the image is formed by the aggregated output of the calculation outputs of three lines of convolution layers and a pooling layer. As shown in FIG. 1, from left to right it comprises a first line with one pooling layer, a second line with two convolution layers and a third line with four convolution layers,
wherein:
the first line is one pooling layer: with a 3 × 3 pixel matrix as the sliding window and a step length of 2 (stride 2 V), the pooling layer performs maximum pooling calculation (MaxPool) on the valid image pixel matrix data (Filter concat) and outputs the result to the set;
the second line comprises two convolution layers: the bottom convolution layer performs convolution calculation (Conv) on the valid image pixel matrix data with 192 convolution kernels of 1 × 1 pixel matrices, and the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 192 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set;
the third line comprises four convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices; the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 256 convolution kernels of 1 × 7 pixel matrices; the third convolution layer performs convolution calculation on the valid result of the second convolution layer with 320 convolution kernels of 7 × 1 pixel matrices; and the fourth convolution layer performs convolution calculation on the valid result of the third convolution layer with 320 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set.
In this color and edge feature extraction part formed by 3 parallel lines of convolution layers, different lines adopt convolution operations of different depths to extract features of different degrees, and the information is then combined, so the obtained features, such as edges, are rich. One line would use a 7 × 7 convolution kernel; considering the larger number of parameters, the kernel is split into 7 × 1 and 1 × 7, which for a single channel reduces the parameter count by 35 (from 49 to 14). The 7 × 7 kernel is adopted here because it yields a larger receptive field.
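The parameter saving from splitting the 7 × 7 kernel can be checked directly. For one input-output channel pair the weight count drops from 49 to 14, the reduction of 35 stated in the text:

```python
def conv_params(kh, kw, c_in=1, c_out=1):
    # Weight count of a kh x kw convolution between c_in and c_out channels.
    return kh * kw * c_in * c_out

full = conv_params(7, 7)                       # 49 weights per channel pair
split = conv_params(7, 1) + conv_params(1, 7)  # 7 + 7 = 14
saving = full - split                          # 35, as the text states
```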
Secondly, the extraction of the texture features in the image operates on the aggregated output of the extraction of the object color and edge features, and is formed by the aggregated output of the calculation outputs of three lines of convolution layers. As shown in FIG. 2, from left to right these are a first line with a 0-layer (identity) convolution line, a second line with two convolution layers and a third line with three convolution layers; the texture features are formed through the activation function of the convolutional network (ReLU activation).
Wherein:
the first line is the 0-layer convolution line: the pixel matrix data output by the valid color and edge feature extraction set is output directly to the set without any operation;
the second line comprises two convolution layers: the bottom convolution layer performs convolution calculation with 192 convolution kernels of 1 × 1 pixel matrices on the pixel matrix data output by the valid color and edge feature extraction set, and the second convolution layer performs convolution calculation with 1154 convolution kernels of 1 × 1 pixel matrices on the sum of the valid result of the bottom convolution layer and the valid result of the third convolution layer of the third line, and outputs the result to the set;
the third line comprises three convolution layers: the bottom convolution layer performs convolution calculation with 128 convolution kernels of 1 × 1 pixel matrices on the pixel matrix data output by the valid color and edge feature extraction set; the second convolution layer performs convolution calculation with 160 convolution kernels of 1 × 7 pixel matrices on the valid result of the bottom convolution layer; and the third convolution layer performs convolution calculation with 192 convolution kernels of 7 × 1 pixel matrices on the valid result of the second convolution layer, and outputs the result to the second convolution layer of the second line.
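As noted above, the texture features emerge through the network's activation function (ReLU). A minimal numpy illustration of that final step, with arbitrary example values, suppressing everything below zero in the merged map:

```python
import numpy as np

def relu(x):
    # The convolutional network's activation: negative responses are zeroed.
    return np.maximum(x, 0.0)

merged = np.array([[-1.5, 0.0],
                   [2.0, -0.3]])  # arbitrary merged "set" values
texture = relu(merged)
```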
In this extraction of texture features formed by 3 parallel lines of convolution layers, the convolution operations of the three lines differ, being 0, 2 and 3 layers in order from left to right. The purpose is to extract features at different levels and then merge the information: generally, the more convolution operations, the higher-order the extracted features, but some original image information is lost. Information complementation through convolution operations at different levels makes the finally obtained features more meaningful.
Generally speaking, the deeper the network hierarchy, the higher-order the features extracted by the later stages of the network; the 1-, 2- and 3-layer modules are ordered from front to back, and the middle may be composed of one of the modules. Texture features are high-order features, color features are low-order, and edge features lie in between.
Thirdly, when the number of lines of convolution layer calculations for the extraction of color, edge and texture features is 3, the number of lines of convolution layer calculations for the extraction of the correlation features between edges and textures is 4. As shown in FIG. 3, from left to right these comprise a first line with one pooling layer, a second line with two convolution layers, a third line with two convolution layers and a fourth line with three convolution layers;
wherein:
the first line is one pooling layer: with a 3 × 3 pixel matrix as the sliding window and a step length of 2, the pooling layer performs maximum pooling calculation on the valid image pixel matrix data and outputs the result to the set;
the second line comprises two convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices, and the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 384 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set;
the third line comprises two convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices, and the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 388 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set;
the fourth line comprises three convolution layers: the bottom convolution layer performs convolution calculation on the valid image pixel matrix data with 256 convolution kernels of 1 × 1 pixel matrices; the second convolution layer performs convolution calculation on the valid result of the bottom convolution layer with 388 convolution kernels of 3 × 3 pixel matrices; and the third convolution layer performs convolution calculation on the valid result of the second convolution layer with 320 convolution kernels of 3 × 3 pixel matrices and a step length of 2, and outputs the result to the set.
In this extraction of the correlation features between edges and textures, formed by 4 lines, the number of feature extraction operations of each line differs (the lines have different convolution layers), and the finally merged information is very rich.
In the examples, the image is preprocessed to make it clearer. The preprocessing comprises invalid watermark removal and image contrast improvement: a dilation-erosion (morphological) processing method is adopted for removing the invalid watermark, and a histogram equalization processing method is adopted for improving the image contrast.
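A rough numpy sketch of the two preprocessing steps: a 3 × 3 erosion/dilation pair (an opening, erosion then dilation, removes thin bright marks such as a watermark) and histogram equalization. This is a generic illustration assuming 8-bit grayscale input, not the patent's actual implementation.

```python
import numpy as np

def _shift_stack(img):
    # The nine 3x3-neighbourhood views of img, with edge padding.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def erode3(img):
    return _shift_stack(img).min(axis=0)

def dilate3(img):
    return _shift_stack(img).max(axis=0)

def equalize_hist(img):
    # Map each grey level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

# Opening = erosion followed by dilation: a lone bright pixel disappears.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255
opened = dilate3(erode3(img))
```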
The establishment of the model comprises an input layer, convolution layers, pooling layers, a fully connected layer and an output layer; the obtained result is the output of the output layer, computed layer by layer. If the actual output of the output layer differs from the expected output, the process turns to error back propagation, possibly over many iterations; if the actual output equals the expected output, the process ends. In back propagation, the squared error between the forward-propagation output of the convolutional neural network and the sample label is minimized; the number of iterations is set to 40, the batch size to 40, the learning rate to the Adagrad preset value of 0.01, and the loss function is the multi-class logarithmic loss. The weights of the network are adjusted backwards layer by layer so as to minimize the error, the weight correction being completed with the Adagrad algorithm. Forward propagation and back propagation are repeated until the error is minimal or the maximum number of iterations is reached, giving a trained convolutional neural network model; the trained model parameters are then stored.
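The per-weight update that Adagrad performs during the back propagation described above can be sketched as follows. This is a generic illustration of the algorithm with the 0.01 learning rate from the text, not the patent's training code:

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    # Adagrad: each weight gets its own step size, shrunk by the root of
    # its accumulated squared gradients.
    cache += grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
    return w, cache

w, cache = np.array([1.0]), np.zeros(1)
w, cache = adagrad_step(w, np.array([2.0]), cache)  # w moves toward ~0.99
```

Frequently updated weights accumulate a large cache and take ever smaller steps, which is why no learning-rate schedule beyond the initial 0.01 is needed.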
This model is used in a grade classification detection method for scrap steel entering storage. The concrete setup comprises one or more cameras arranged above the truck carriage being unloaded, and an electromagnet chuck that picks the scrap steel out of the carriage, swings it out and unloads it. The grade classification detection method is as follows: before each pick-up by the electromagnet chuck, images of the scattered scrap steel in the carriage are captured by the cameras from different angles; the images are processed to obtain image data features, which are fed into the grading neural network model; the model outputs a grade for each input image. This continues until the scrap steel in the carriage is completely unloaded, after which the occupancy rate of each grade among all classification results is calculated, and the grade of the whole unloaded carriage of scrap steel is determined according to a preset occupancy percentage.
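The final grading-by-occupancy step can be sketched as follows; the 50% threshold is an illustrative assumption, since the text only says the grade is decided by a preset occupancy percentage:

```python
from collections import Counter

def determine_carriage_grade(per_image_grades, threshold=0.5):
    """Determine the overall grade of a carriage from per-image classifications.

    Computes the occupancy rate (share) of each grade among all per-image
    results and returns the first grade whose share reaches the preset
    threshold; falls back to the most frequent grade otherwise. The 0.5
    default threshold is a hypothetical value, not taken from the patent."""
    counts = Counter(per_image_grades)
    total = len(per_image_grades)
    for grade, n in counts.most_common():
        if n / total >= threshold:
            return grade
    return counts.most_common(1)[0][0]
```

For example, if 3 of 4 images of a carriage are classified as grade "A", the carriage as a whole is graded "A".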
The above-described embodiments can be varied within the knowledge of a person skilled in the art without departing from the spirit of the invention.

Claims (6)

1. A method for establishing a steel scrap grade division neural network model, used for grade classification detection of scrap steel entering storage, comprising: obtaining a plurality of images; visually determining the different scrap steel grades of the images; preprocessing the images to remove invalid watermarks and improve image contrast; extracting image data features from the image data; and performing convolutional neural network learning on the extracted image data features of the different grades to form a grade division neural network model with grade classification output; the method is characterized in that the extraction of the image data features is performed by applying sets of convolutional neural network convolution calculations to the pixel matrix data of the image pictures, and comprises: extraction of object color, edge features and texture features in the image, and extraction of correlation features between object edges and textures in the image, each extraction being formed by the aggregated output of the calculation outputs of a plurality of lines of convolution layers, or of convolution layers and a pooling layer;
the extraction of object color and edge features in the image is formed by the collective output of calculation output of a convolution layer and a pooling layer of three lines, wherein the collection output comprises a first line one-layer pooling layer, a second line two-layer convolution layer and a third line four-layer convolution layer from left to right; the extraction of the texture features in the image is the extraction of the extraction set output of the object color and the edge features in the image, and is formed by the set output of the calculation output of three line convolution layers, wherein the set output comprises a first line 0 convolution layer, a second line two-layer convolution layer and a third line three-layer convolution layer from left to right; the texture features form the activation function (Relu activation) of the convolutional network;
the collection output of the calculation output of the convolution layer or the convolution layer and the pooling layer of the at least three lines forms the extraction of object color, edge characteristic and texture characteristic in the image, and the convolution layer number of each line is different;
the number of lines calculated for the convolution layer extracted from the correlation characteristics between the edge and the texture is larger than the number of lines calculated for the convolution layer extracted from the color, the edge and the texture characteristics of the object in the image.
2. The method of claim 1, wherein the different scrap steel grades of the plurality of images are determined by a professional team through visual inspection and discussion.
3. The method of claim 1, wherein the extraction of object color and edge features in the image is formed by the aggregated calculation outputs of three lines of convolution layers plus a pooling layer, comprising a first line with one pooling layer, a second line with two convolution layers and a third line with four convolution layers,
wherein:
the first line is a pooling layer: the pooling layer performs maximum pooling calculation on the pixel matrix data of the effective image picture by taking a 3 multiplied by 3 pixel matrix as a sliding window and taking the step length as 2, and outputs the data to a set;
the second line comprises two convolution layers: the bottom layer convolution layer performs convolution calculation on the pixel point matrix data of the effective image picture through convolution check of 192 1 × 1 pixel point matrixes, and the second convolution layer performs convolution calculation on the convolution calculation effective result of the bottom layer convolution layer through convolution check of 192 3 × 3 pixel point matrixes with the step length of 2 and outputs the result to the set;
the third line has four convolutional layers: the convolution calculation is carried out on the pixel point matrix data of the effective image picture by carrying out convolution check on 256 1 × 1 pixel point matrix on the bottom convolution layer, the convolution calculation is carried out on the effective result of the convolution calculation of the bottom convolution layer by carrying out convolution check on the 256 1 × 7 pixel point matrix on the second convolution layer, the convolution calculation is carried out on the effective result of the convolution calculation of the second convolution layer by carrying out convolution check on 320 7 × 1 pixel point matrix on the third convolution layer, and the convolution calculation is carried out on the effective result of the convolution calculation of the third convolution layer by taking the step length as 2 on 320 3 × 3 pixel point matrix convolution kernels and is output to the set.
4. The method of claim 1, wherein the extraction of the texture features in the image operates on the aggregated output of the extraction of object color and edge features in the image, and is formed by the aggregated calculation outputs of three lines of convolution layers, comprising a first line with zero convolution layers, a second line with two convolution layers and a third line with three convolution layers, the texture features passing through the activation function of the convolutional network;
wherein:
the first line 0 convolutional layer: directly outputting the pixel matrix data output by the extraction set of the effective color and the edge characteristic to a set without any operation;
the second line comprises two convolution layers: the convolution calculation is carried out on the pixel matrix data output by the extraction set of the effective color and the edge characteristic of the convolution check of 192 1 × 1 pixel matrixes by the bottom convolution layer, the convolution calculation is carried out on the sum of the effective result of the convolution calculation of the bottom convolution layer and the effective result of the convolution calculation of the third convolution layer of the third line by the convolution check of 1154 1 × 1 pixel matrixes by the second convolution layer, and the sum is output to the set; the third line is a three-layer convolution layer: the convolution calculation is carried out on pixel matrix data output by the extraction set of effective color and edge characteristics of the bottom convolution layer through convolution checking of 128 1x 1 pixel matrixes, the convolution calculation is carried out on an effective result of convolution calculation of the bottom convolution layer through convolution checking of 160 1x7 pixel matrixes, and the convolution calculation is carried out on an effective result of convolution calculation of the second convolution layer through convolution checking of 192 7x1 pixel matrixes by the third convolution layer to be output to the second convolution layer of the second line.
5. The method of claim 1, wherein when the number of lines of convolution-layer calculation for the color, edge and texture feature extraction is 3, the number of lines of convolution-layer calculation for the extraction of the correlation features between edges and textures is 4, comprising a first line with one pooling layer, a second line with two convolution layers, a third line with two convolution layers and a fourth line with three convolution layers;
wherein:
the first line is a pooling layer: the pooling layer performs maximum pooling calculation on the pixel matrix data of the effective image picture by taking a 3 multiplied by 3 pixel matrix as a sliding window and taking 2 as a step length and outputs the data to a set;
the second line comprises two convolution layers: the bottom layer convolution layer performs convolution calculation on effective image picture pixel point matrix data through 256 convolution kernels of 1 × 1 pixel point matrixes, and the second convolution layer performs convolution calculation on effective results of convolution calculation of the bottom layer convolution layer through 384 convolution kernels of 3 × 3 pixel point matrixes with the step length being 2 and outputs the effective results to a set;
the third circuit comprises two convolution layers: the convolution calculation is carried out on the pixel point matrix data of the effective image picture by the convolution check of 256 1 × 1 pixel point matrixes by the bottom convolution layer, and the convolution calculation is carried out on the effective result of the convolution calculation of the bottom convolution layer by 388 3 × 3 pixel point matrix convolution kernels with the step length of 2 and the effective result is output to a set by the second convolution layer;
the fourth line is a three-layer convolutional layer: the convolution calculation is carried out on the convolution calculation effective image pixel point matrix data by convolution check of 256 1x 1 pixel point matrixes, the convolution calculation effective result of the convolution calculation of the convolution check of the bottom layer convolution layer is carried out on the second convolution layer by convolution check of 388 3 x 3 pixel point matrixes, and the convolution calculation effective result of the convolution calculation of the second layer convolution layer is carried out on the third convolution layer by convolution check of 320 3 x 3 pixel point matrixes and with the step length of 2 and is output to the set.
6. The method of claim 1, wherein the removal of invalid watermarks in the preprocessing uses a dilation-erosion method, and the improvement of image contrast uses histogram equalization.
CN201910958076.8A 2019-10-10 2019-10-10 Method for establishing steel scrap grade division neural network model Active CN110660074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910958076.8A CN110660074B (en) 2019-10-10 2019-10-10 Method for establishing steel scrap grade division neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910958076.8A CN110660074B (en) 2019-10-10 2019-10-10 Method for establishing steel scrap grade division neural network model

Publications (2)

Publication Number Publication Date
CN110660074A CN110660074A (en) 2020-01-07
CN110660074B true CN110660074B (en) 2021-04-16

Family

ID=69040347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910958076.8A Active CN110660074B (en) 2019-10-10 2019-10-10 Method for establishing steel scrap grade division neural network model

Country Status (1)

Country Link
CN (1) CN110660074B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292026A (en) * 2020-04-27 2020-06-16 江苏金恒信息科技股份有限公司 Scrap steel grading method and device based on neural network model fusion
JP7205637B2 (en) * 2020-04-30 2023-01-17 Jfeスチール株式会社 Scrap discrimination system and scrap discrimination method
CN112744439A (en) * 2021-01-15 2021-05-04 湖南镭目科技有限公司 Remote scrap steel monitoring system based on deep learning technology
CN112801391B (en) * 2021-02-04 2021-11-19 科大智能物联技术股份有限公司 Artificial intelligent scrap steel impurity deduction rating method and system
CN114998318B (en) * 2022-07-18 2022-10-25 聊城一明五金科技有限公司 Scrap steel grade identification method used in scrap steel treatment process
CN116561668A (en) * 2023-07-11 2023-08-08 深圳传趣网络技术有限公司 Chat session risk classification method, device, equipment and storage medium
CN117372431B (en) * 2023-12-07 2024-02-20 青岛天仁微纳科技有限责任公司 Image detection method of nano-imprint mold

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063584A (en) * 2018-07-11 2018-12-21 深圳大学 Facial characteristics independent positioning method, device, equipment and the medium returned based on cascade
CN109145928A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 It is a kind of based on the headstock of image towards recognition methods and device
CN109685145A (en) * 2018-12-26 2019-04-26 广东工业大学 A kind of small articles detection method based on deep learning and image procossing
CN109711426A (en) * 2018-11-16 2019-05-03 中山大学 A kind of pathological picture sorter and method based on GAN and transfer learning
WO2019104217A1 (en) * 2017-11-22 2019-05-31 The Trustees Of Columbia University In The City Of New York System method and computer-accessible medium for classifying breast tissue using a convolutional neural network
CN110084203A (en) * 2019-04-29 2019-08-02 北京航空航天大学 Full convolutional network aircraft level detection method based on context relation

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6450053B2 (en) * 2015-08-15 2019-01-09 セールスフォース ドット コム インコーポレイティッド Three-dimensional (3D) convolution with 3D batch normalization
CN107169956B (en) * 2017-04-28 2020-02-14 西安工程大学 Color woven fabric defect detection method based on convolutional neural network
CN107463906A (en) * 2017-08-08 2017-12-12 深图(厦门)科技有限公司 The method and device of Face datection
CN108364281B (en) * 2018-01-08 2020-10-30 佛山市顺德区中山大学研究院 Ribbon edge flaw defect detection method based on convolutional neural network
CN109657584B (en) * 2018-12-10 2022-12-09 西安汇智信息科技有限公司 Improved LeNet-5 fusion network traffic sign identification method for assisting driving
CN110009051A (en) * 2019-04-11 2019-07-12 浙江立元通信技术股份有限公司 Feature extraction unit and method, DCNN model, recognition methods and medium
CN110263681B (en) * 2019-06-03 2021-07-27 腾讯科技(深圳)有限公司 Facial expression recognition method and device, storage medium and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145928A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 It is a kind of based on the headstock of image towards recognition methods and device
WO2019104217A1 (en) * 2017-11-22 2019-05-31 The Trustees Of Columbia University In The City Of New York System method and computer-accessible medium for classifying breast tissue using a convolutional neural network
CN109063584A (en) * 2018-07-11 2018-12-21 深圳大学 Facial characteristics independent positioning method, device, equipment and the medium returned based on cascade
CN109711426A (en) * 2018-11-16 2019-05-03 中山大学 A kind of pathological picture sorter and method based on GAN and transfer learning
CN109685145A (en) * 2018-12-26 2019-04-26 广东工业大学 A kind of small articles detection method based on deep learning and image procossing
CN110084203A (en) * 2019-04-29 2019-08-02 北京航空航天大学 Full convolutional network aircraft level detection method based on context relation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image based fruit category classification by 13-layer deep convolutional neural network and data augmentation; Yu-Dong Zhang et al.; Multimed Tools Appl; 2017-09-30; 3613-3632 *
Research on remote sensing image classification algorithms based on convolutional neural networks; Liang Xiaoxu; China Master's Theses Full-text Database, Engineering Science and Technology II; 2019-02-15 (No. 02); C028-177 *

Also Published As

Publication number Publication date
CN110660074A (en) 2020-01-07

Similar Documents

Publication Publication Date Title
CN110660074B (en) Method for establishing steel scrap grade division neural network model
CN110717455B (en) Method for classifying and detecting grades of scrap steel in storage
KR102121958B1 (en) Method, system and computer program for providing defect analysis service of concrete structure
CN110111331B (en) Honeycomb paper core defect detection method based on machine vision
Kaseko et al. A neural network-based methodology for pavement crack detection and classification
CN106934795B (en) A kind of automatic testing method and prediction technique of glue into concrete beam cracks
CN108021938A (en) A kind of Cold-strip Steel Surface defect online detection method and detecting system
JP7104799B2 (en) Learning data collection device, learning data collection method, and program
CN111915572B (en) Adaptive gear pitting quantitative detection system and method based on deep learning
CN107220603A (en) Vehicle checking method and device based on deep learning
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN113283395B (en) Video detection method for blocking foreign matters at transfer position of coal conveying belt
CN108073774A (en) A kind of reliable method for verifying quick-fried heap LUMPINESS DISTRIBUTION
WO2022267270A1 (en) Crack characteristic representation method and system based on multi-fractal spectrum
CN113159061A (en) Actual tunnel surrounding rock fragment identification method based on example segmentation
CN111539251B (en) Security check article identification method and system based on deep learning
CN116129135A (en) Tower crane safety early warning method based on small target visual identification and virtual entity mapping
CN111415339A (en) Image defect detection method for complex texture industrial product
CN113298181A (en) Underground pipeline abnormal target identification method and system based on dense connection Yolov3 network
CN109102486B (en) Surface defect detection method and device based on machine learning
CN112200766A (en) Industrial product surface defect detection method based on area-associated neural network
US8306311B2 (en) Method and system for automated ball-grid array void quantification
CN115880181A (en) Method, device and terminal for enhancing image contrast
JP7311455B2 (en) Scrap grade determination system, scrap grade determination method, estimation device, learning device, learned model generation method, and program
CN114972280A (en) Fine coordinate attention module and application thereof in surface defect detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant