CN111881756A - Waste mobile phone model identification method based on convolutional neural network - Google Patents
Info
- Publication number
- CN111881756A (application CN202010600473.0A)
- Authority
- CN
- China
- Prior art keywords
- mobile phone
- matrix
- model
- identified
- bilinear
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/00 — Scenes; scene-specific elements
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415 — Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06Q10/30 — Administration of product recycling or disposal
- Y02W30/82 — Recycling of waste of electrical or electronic equipment [WEEE]
- Y02W90/00 — Enabling technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation
Abstract
The invention provides a convolutional-neural-network-based method for identifying the model of a waste mobile phone, addressing the difficulty of accurately identifying models during waste phone recycling. The method analyses the regional features in the phone's inspection photograph with an edge detection algorithm, constructs a weight-sharing feature-extraction convolutional network, and evaluates the similarity between the regional features of the waste-phone image and standard samples, enabling rapid identification of the phone model.
Description
Technical Field
The invention achieves accurate identification of the phone model during waste mobile phone recovery by means of a model identification method based on a low-rank convolutional neural network. In the recycling process, phones sorted by model yield greater economic benefit, so model identification is an important factor in recycling efficiency; because phone models are numerous and highly similar, considerable accumulated experience is normally needed to distinguish them skilfully. Applying the convolutional-neural-network identification method to the recovery process avoids the classification errors and low classification efficiency caused by inexperienced personnel and improves the accuracy and speed of waste phone recovery. The method is an important branch of the image identification field and belongs to the field of solid waste treatment.
Background
Rapid and accurate identification of waste phone models improves recovery efficiency, saves labour and raises the economic benefit of waste-phone recycling enterprises; it is therefore an important measure for improving the reuse of urban solid waste resources, with clear environmental and social benefits in addition to economic ones. The results of this research consequently have broad application prospects.
Identifying the model of a waste mobile phone is an image identification and classification task. Because the inspection photographs taken when personnel recover phones differ in camera angle, equipment, light source and other shooting conditions, and also in resolution, the accuracy of model identification is severely affected.
Some phone models are excessively similar, the set of models in a real recovery scene is updated dynamically as new models appear, and training samples for a new model are few, so a model finds it hard to learn and extract effective feature information in time, which increases the difficulty of model building. Measuring the differences between phone models by their similarity reduces the computation required for model learning and increases its speed, meeting the requirements of waste phone recovery. This improves recycling efficiency, accelerates the circulation of waste phones, reduces labour cost, and raises the benefit of recycling enterprises.
The invention designs a phone-model identification method based on a low-rank bilinear convolutional neural network: a low-rank bilinear convolution algorithm extracts the identifiable phone region from the inspection photograph, and the convolutional neural network then identifies the model of the waste phone quickly and accurately.
Disclosure of Invention
The invention provides a phone-model identification method using a low-rank bilinear convolutional neural network, which extracts the identifiable phone region from the inspection photograph with a low-rank convolution algorithm and identifies the model of the waste phone quickly and accurately with the convolutional neural network. It solves the model-identification problem in the waste phone recovery process and improves recovery efficiency.
The invention adopts the following technical scheme and implementation steps:
1. a mobile phone model identification method based on a low-rank bilinear convolutional neural network realizes accurate identification of mobile phone models by designing a low-rank bilinear convolutional network structure, and is characterized by comprising the following steps:
(1) Select the input variables of the phone-model identification model: the pixel matrix I_1 of the first phone image to be identified; the pixel matrix I_2 of the second phone image to be identified; the red, green and blue channel pixel matrices I_R1, I_G1 and I_B1 of the first image I_1; the red, green and blue channel pixel matrices I_R2, I_G2 and I_B2 of the second image I_2; the model label z_1 of the first image I_1; and the model label z_2 of the second image I_2.
(2) Establish the bilinear convolution network feature-extraction model for phone models
The input channel matrices I_R, I_G and I_B of the two phone image samples are grayed, after which bilinear convolution features are extracted; the specific calculation is:
r_1(t) = 0.299 × I_R1(t) + 0.587 × I_G1(t) + 0.114 × I_B1(t)   (1)
r_2(t) = 0.299 × I_R2(t) + 0.587 × I_G2(t) + 0.114 × I_B2(t)   (2)
r_1(t+1) = f(w(t) × r_1(t) + λ_1)   (3)
r_2(t+1) = f(w(t) × r_2(t) + λ_2)   (4)
where: r_1(t) and r_2(t) are the grayed pixel matrices of the images I_1 and I_2 to be identified; r_1(t+1) and r_2(t+1) are the activated feature pixel matrices of the 1st and 2nd bilinear convolution branches at iteration t+1; f(·) is the activation function; w(t) denotes the shared parameter weights of the bilinear convolution structure; λ_1 and λ_2 are the output bias parameters of the 1st and 2nd branches, each drawn at random from the interval (0, 1); t is the iteration count; P_1(r(t)) and P_2(r(t)) are the pooled output vectors of r_1(t) and r_2(t); r_1 and r_2 are elements of the feature vectors r_1(t) and r_2(t); s_1 and s_2 are the horizontal and vertical pooling strides; a and b are the horizontal and vertical dimensions of the convolution feature map after average pooling; i and j index the rows and columns of the feature matrix; B_1(t) and B_2(t) are the feature matrices obtained from P_1(t) and P_2(t) after singular value decomposition and dimension reduction; Σ_1(t) and Σ_2(t) are the singular value matrices of P_1(t) and P_2(t); U_1(t) and U_2(t) their left singular matrices; V_1^T(t) and V_2^T(t) the transposes of their right singular matrices; Z_1(t) and Z_2(t) are the output vectors obtained after each feature element of B_1(t) and B_2(t) is regularly vectorized; ‖·‖_2 denotes two-norm normalization; sign is the sign function;
(3) Design the joint supervision classification model
The joint supervision classification model for phone images uses a regression classification algorithm that combines Softmax loss with a contrastive loss: for same-model sample pairs, a center loss measures the similarity between samples; for different-model pairs, the Softmax loss alone is used. The specific calculation is given by equation (11):
in the formula: l (t) is the output of the joint loss function; mu is a loss tradeoff coefficient when z1=z2That is, when the sample is the same model mobile phone, mu is 1; when z is1≠z2That is, when the sample is a mobile phone of different model, mu is 0; w is aT(t) a transpose of the parameter weights representing the bilinear convolution structure;
(4) Phone model identification process
Identification of the phone model with the bilinear convolution network structure proceeds as follows:
① Select any two images I_1 and I_2 from the real inspection photographs collected during waste-phone recycling as training data, and input them into the bilinear convolution feature-extraction model to obtain the fused features B_1(t) and B_2(t) of each training sample;
② Reduce the rank of the bilinear feature matrices with the low-rank parameter dimension-reduction method of equations (5)-(6), lowering the computational complexity of the outer-product aggregation, raising the operation speed, and finally obtaining the low-rank bilinear feature matrices Z_1(t) and Z_2(t);
③ Input the dimension-reduced feature matrices into the joint loss function of equation (7) to obtain the joint supervision value L(t) of the sample pair; back-propagate and repeatedly adjust the parameter weights w(t) of the bilinear convolution model until L(t) reaches its global minimum;
④ Input samples randomly in pairs into the bilinear convolution identification model with its weight parameters set to w(t); feed the low-rank bilinear feature matrices output by the model into the joint loss function for classification, obtaining the target model label of each classified sample, which is the phone model.
The invention is mainly characterized in that:
(1) In current waste-phone recycling, the phone model must be identified accurately and quickly to raise recovery efficiency. However, inspection photographs differ in angle, equipment, lighting and resolution, which severely degrades identification accuracy; some models are highly similar; and the set of models in a real recovery scene is updated dynamically as new models appear, with few training samples for a new model, so a model struggles to learn and extract effective feature information in time. The convolutional-neural-network identification algorithm adopted here offers high precision, short detection time and related advantages;
(2) The invention provides a waste phone model identification method based on a convolutional neural network, which extracts the identifiable phone region from the inspection photograph with an edge detection algorithm and identifies the waste phone model quickly and accurately with the convolutional neural network, solving the model-identification problem in the recovery process and improving recovery efficiency.
Note in particular: for convenience of description, the invention processes phone images with a convolutional neural network and an edge detection algorithm; image identification methods of the same principle that combine other feature-extraction algorithms and identification-region-extraction algorithms also fall within the scope of the invention.
Detailed Description
1. A mobile phone model identification method based on a low-rank bilinear convolutional neural network realizes accurate identification of mobile phone models by designing a low-rank bilinear convolutional network structure, and is characterized by comprising the following steps:
(1) Select the input variables of the phone-model identification model: the pixel matrix I_1 of the first phone image to be identified; the pixel matrix I_2 of the second phone image to be identified; the red, green and blue channel pixel matrices I_R1, I_G1 and I_B1 of the first image I_1; the red, green and blue channel pixel matrices I_R2, I_G2 and I_B2 of the second image I_2; the model label z_1 of the first image I_1; and the model label z_2 of the second image I_2.
(2) Establish the bilinear convolution network feature-extraction model for phone models
The input channel matrices I_R, I_G and I_B of the two phone image samples are grayed, after which bilinear convolution features are extracted; the specific calculation is:
r_1(t) = 0.299 × I_R1(t) + 0.587 × I_G1(t) + 0.114 × I_B1(t)   (1)
r_2(t) = 0.299 × I_R2(t) + 0.587 × I_G2(t) + 0.114 × I_B2(t)   (2)
r_1(t+1) = f(w(t) × r_1(t) + λ_1)   (3)
r_2(t+1) = f(w(t) × r_2(t) + λ_2)   (4)
where: r_1(t) and r_2(t) are the grayed pixel matrices of the images I_1 and I_2 to be identified; r_1(t+1) and r_2(t+1) are the activated feature pixel matrices of the 1st and 2nd bilinear convolution branches at iteration t+1; f(·) is the activation function; w(t) denotes the shared parameter weights of the bilinear convolution structure; λ_1 and λ_2 are the output bias parameters of the 1st and 2nd branches, each drawn at random from the interval (0, 1); t is the iteration count; P_1(r(t)) and P_2(r(t)) are the pooled output vectors of r_1(t) and r_2(t); r_1 and r_2 are elements of the feature vectors r_1(t) and r_2(t); s_1 and s_2 are the horizontal and vertical pooling strides; a and b are the horizontal and vertical dimensions of the convolution feature map after average pooling; i and j index the rows and columns of the feature matrix; B_1(t) and B_2(t) are the feature matrices obtained from P_1(t) and P_2(t) after singular value decomposition and dimension reduction; Σ_1(t) and Σ_2(t) are the singular value matrices of P_1(t) and P_2(t); U_1(t) and U_2(t) their left singular matrices; V_1^T(t) and V_2^T(t) the transposes of their right singular matrices; Z_1(t) and Z_2(t) are the output vectors obtained after each feature element of B_1(t) and B_2(t) is regularly vectorized; ‖·‖_2 denotes two-norm normalization; sign is the sign function;
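As an illustration, equations (1)-(4) can be sketched in NumPy. The activation f(·) is assumed here to be ReLU and the weight w is taken as a scalar for simplicity (the patent's w(t) is the shared convolution weight of the bilinear structure); both are labelled assumptions, not the patent's exact implementation.

```python
import numpy as np

def to_gray(IR, IG, IB):
    # Eqs. (1)-(2): luma-weighted graying of the R, G, B channel matrices
    # (ITU-R BT.601 weights; the weights sum to 1)
    return 0.299 * IR + 0.587 * IG + 0.114 * IB

def bilinear_step(r, w, lam, f=lambda x: np.maximum(x, 0.0)):
    # Eqs. (3)-(4): one shared-weight activation step r(t+1) = f(w(t)·r(t) + λ)
    # f is assumed to be ReLU; w is a scalar stand-in for the conv weights
    return f(w * r + lam)

rng = np.random.default_rng(0)
IR1, IG1, IB1 = (rng.random((4, 4)) for _ in range(3))
r1 = to_gray(IR1, IG1, IB1)          # grayed pixel matrix r_1(t)
lam1 = rng.uniform(0.0, 1.0)         # bias λ_1 drawn from (0, 1) as in the text
r1_next = bilinear_step(r1, w=0.5, lam=lam1)
```

The same two functions are applied to the second image's channels to obtain r_2(t) and r_2(t+1), with the weight w shared between the two branches.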
(3) Design the joint supervision classification model
The joint supervision classification model for phone images uses a regression classification algorithm that combines Softmax loss with a contrastive loss: for same-model sample pairs, a center loss measures the similarity between samples; for different-model pairs, the Softmax loss alone is used. The specific calculation is given by equation (11):
in the formula: l (t) is the output of the joint loss function; mu is a loss tradeoff coefficient when z1=z2That is, when the sample is the same model mobile phone, mu is 1; when z is1≠z2That is, when the sample is a mobile phone of different model, mu is 0; w is aT(t) a transpose of the parameter weights representing the bilinear convolution structure;
(4) Phone model identification process
The process of identifying the mobile phone model by using the bilinear convolution network structure specifically comprises the following steps:
① Select any two images I_1 and I_2 from the real inspection photographs collected during waste-phone recycling as training data, and input them into the bilinear convolution feature-extraction model to obtain the fused features B_1(t) and B_2(t) of each training sample;
② Reduce the rank of the bilinear feature matrices with the low-rank parameter dimension-reduction method of equations (5)-(6), lowering the computational complexity of the outer-product aggregation, raising the operation speed, and finally obtaining the low-rank bilinear feature matrices Z_1(t) and Z_2(t);
③ Input the dimension-reduced feature matrices into the joint loss function of equation (7) to obtain the joint supervision value L(t) of the sample pair; back-propagate and repeatedly adjust the parameter weights w(t) of the bilinear convolution model until L(t) reaches its global minimum;
④ Input samples randomly in pairs into the bilinear convolution identification model with its weight parameters set to w(t); feed the low-rank bilinear feature matrices output by the model into the joint loss function for classification, obtaining the target model label of each classified sample, which is the phone model.
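Steps ①-④ centre on the low-rank bilinear feature. A sketch of that pipeline under stated assumptions (outer-product pooling of a c × n convolutional feature map, truncated SVD for the rank reduction of step ②, then signed square root and two-norm normalization as in the glossary) might look like:

```python
import numpy as np

def low_rank_bilinear(feat, rank=8):
    """feat: (c, n) convolutional feature map flattened over n spatial positions."""
    # bilinear (outer-product) aggregation, averaged over positions -> c x c matrix
    P = feat @ feat.T / feat.shape[1]
    # singular value decomposition P(t) = U(t) Σ(t) Vᵀ(t)
    U, s, Vt = np.linalg.svd(P)
    # rank reduction: keep the leading singular directions to get B(t)
    B = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # signed square root followed by two-norm (Frobenius) normalization -> Z(t)
    Z = np.sign(B) * np.sqrt(np.abs(B))
    return Z / np.linalg.norm(Z)

rng = np.random.default_rng(1)
feat = rng.random((16, 49))          # e.g. 16 channels over a 7x7 feature map
Z = low_rank_bilinear(feat, rank=4)  # low-rank bilinear feature Z(t)
```

Computing Z_1(t) and Z_2(t) this way for the two branches and feeding them to the joint loss mirrors steps ② and ③; the rank parameter trades accuracy against the outer-product aggregation cost.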
Claims (1)
1. A waste mobile phone model identification method based on a convolutional neural network is characterized by comprising the following steps:
(1) Select the input variables of the phone-model identification model: the pixel matrix I_1 of the first phone image to be identified; the pixel matrix I_2 of the second phone image to be identified; the red, green and blue channel pixel matrices I_R1, I_G1 and I_B1 of the first image I_1; the red, green and blue channel pixel matrices I_R2, I_G2 and I_B2 of the second image I_2; the model label z_1 of the first image I_1; and the model label z_2 of the second image I_2.
(2) Establish the bilinear convolution network feature-extraction model for phone models
The input channel matrices I_R, I_G and I_B of the two phone image samples are grayed, after which bilinear convolution features are extracted; the specific calculation is:
r_1(t) = 0.299 × I_R1(t) + 0.587 × I_G1(t) + 0.114 × I_B1(t)   (1)
r_2(t) = 0.299 × I_R2(t) + 0.587 × I_G2(t) + 0.114 × I_B2(t)   (2)
r_1(t+1) = f(w(t) × r_1(t) + λ_1)   (3)
r_2(t+1) = f(w(t) × r_2(t) + λ_2)   (4)
where: r_1(t) and r_2(t) are the grayed pixel matrices of the images I_1 and I_2 to be identified; r_1(t+1) and r_2(t+1) are the activated feature pixel matrices of the 1st and 2nd bilinear convolution branches at iteration t+1; f(·) is the activation function; w(t) denotes the shared parameter weights of the bilinear convolution structure; λ_1 and λ_2 are the output bias parameters of the 1st and 2nd branches, each drawn at random from the interval (0, 1); t is the iteration count; P_1(r(t)) and P_2(r(t)) are the pooled output vectors of r_1(t) and r_2(t); r_1 and r_2 are elements of the feature vectors r_1(t) and r_2(t); s_1 and s_2 are the horizontal and vertical pooling strides; a and b are the horizontal and vertical dimensions of the convolution feature map after average pooling; i and j index the rows and columns of the feature matrix; B_1(t) and B_2(t) are the feature matrices obtained from P_1(t) and P_2(t) after singular value decomposition and dimension reduction; Σ_1(t) and Σ_2(t) are the singular value matrices of P_1(t) and P_2(t); U_1(t) and U_2(t) their left singular matrices; V_1^T(t) and V_2^T(t) the transposes of their right singular matrices; Z_1(t) and Z_2(t) are the output vectors obtained after each feature element of B_1(t) and B_2(t) is regularly vectorized; ‖·‖_2 denotes two-norm normalization; sign is the sign function;
(3) Design the joint supervision classification model
The joint supervision classification model for phone images uses a regression classification algorithm that combines Softmax loss with a contrastive loss: for same-model sample pairs, a center loss measures the similarity between samples; for different-model pairs, the Softmax loss alone is used. The specific calculation is given by equation (13):
in the formula: l (t) is the output of the joint loss function; mu is a loss tradeoff coefficient when z1=z2That is, when the sample is the same model mobile phone, mu is 1; when z is1≠z2That is, when the sample is a mobile phone of different model, mu is 0; w is aT(t) a transpose of the parameter weights representing the bilinear convolution structure;
(3) mobile phone model identification process
The process of identifying the mobile phone model by using the bilinear convolution network structure specifically comprises the following steps:
① Select any two images I1 and I2 from the real images of waste mobile phones acquired during the recycling and inspection process as training data, and input them into the bilinear convolution feature extraction model to obtain the fused features B1(t) and B2(t) of each training sample;
② Perform a rank-reduction operation on the bilinear feature matrices using the low-rank matrix parameter dimension-reduction method shown in formulas (5)-(6), which reduces the computational complexity of the outer-product aggregation operation and improves the operation speed, finally obtaining the low-rank bilinear feature matrices Z1(t) and Z2(t);
③ Input the dimension-reduced feature matrices into the joint loss function shown in formula (7) to obtain the joint supervision value L(t) of the sample, then back-propagate and repeatedly adjust the parameter weights w(t) of the bilinear convolution model until L(t) reaches its global minimum;
④ Randomly input samples in pairs into the bilinear convolution identification model with its weight parameters set to w(t), and input the low-rank bilinear feature matrices output by the model into the joint loss function for classification and identification, thereby obtaining the target model label value of each classified sample; this label value is the model of the mobile phone.
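The inference stage of the four steps above can be sketched as a small driver function. Here `extract_features` and `classify` are hypothetical stand-ins for the trained bilinear feature extractor (with weights w(t)) and the joint-supervision classifier; neither name comes from the patent:

```python
import numpy as np

def identify_model(image_pair, extract_features, classify):
    """Illustrative driver: extract low-rank bilinear features for a
    pair of input images, then classify each to obtain its target
    model label value (the identified phone model)."""
    i1, i2 = image_pair
    z1, z2 = extract_features(i1), extract_features(i2)
    # argmax over the classifier scores yields the model label value.
    return [int(np.argmax(classify(z))) for z in (z1, z2)]
```

With stub functions in place of the trained network, the driver simply returns the index of the highest classifier score for each image in the pair.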
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010600473.0A CN111881756A (en) | 2020-06-28 | 2020-06-28 | Waste mobile phone model identification method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111881756A true CN111881756A (en) | 2020-11-03 |
Family
ID=73158151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010600473.0A Pending CN111881756A (en) | 2020-06-28 | 2020-06-28 | Waste mobile phone model identification method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111881756A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170083792A1 (en) * | 2015-09-22 | 2017-03-23 | Xerox Corporation | Similarity-based detection of prominent objects using deep cnn pooling layers as features |
CN110569764A (en) * | 2019-08-28 | 2019-12-13 | 北京工业大学 | mobile phone model identification method based on convolutional neural network |
CN110909682A (en) * | 2019-11-25 | 2020-03-24 | 北京工业大学 | Waste mobile terminal intelligent identification method based on recovery equipment |
Non-Patent Citations (2)
Title |
---|
Luis Tobias et al.: "Convolutional Neural Networks for Object Recognition on Mobile Devices: a Case Study", 23rd International Conference on Pattern Recognition (ICPR), 8 December 2016, pages 3530-3535, XP033086122, DOI: 10.1109/ICPR.2016.7900181 * |
Han Honggui et al.: "Value evaluation method for waste mobile phones based on fuzzy neural network" (in Chinese), Journal of Beijing University of Technology, vol. 45, no. 11, 30 November 2018, pages 1033-1040 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113780306A (en) * | 2021-08-11 | 2021-12-10 | 北京工业大学 | Waste mobile phone color identification method based on deep convolutional neural network |
CN113780306B (en) * | 2021-08-11 | 2024-04-09 | 北京工业大学 | Deep convolutional neural network-based waste mobile phone color recognition method |
CN114820582A (en) * | 2022-05-27 | 2022-07-29 | 北京工业大学 | Mobile phone surface defect accurate classification method based on mixed attention deformation convolution neural network |
CN114820582B (en) * | 2022-05-27 | 2024-05-31 | 北京工业大学 | Mobile phone surface defect accurate grading method based on mixed attention deformation convolutional neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106778604B (en) | Pedestrian re-identification method based on matching convolutional neural network | |
CN110276264B (en) | Crowd density estimation method based on foreground segmentation graph | |
CN111340123A (en) | Image score label prediction method based on deep convolutional neural network | |
CN110728656A (en) | Meta-learning-based no-reference image quality data processing method and intelligent terminal | |
CN104966085A (en) | Remote sensing image region-of-interest detection method based on multi-significant-feature fusion | |
Chen et al. | Remote sensing image quality evaluation based on deep support value learning networks | |
CN110827312B (en) | Learning method based on cooperative visual attention neural network | |
CN110827304B (en) | Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method | |
CN104636755A (en) | Face beauty evaluation method based on deep learning | |
CN101819638A (en) | Establishment method of pornographic detection model and pornographic detection method | |
CN102034267A (en) | Three-dimensional reconstruction method of target based on attention | |
CN111339924B (en) | Polarized SAR image classification method based on superpixel and full convolution network | |
CN108830130A (en) | A kind of polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method | |
CN111881756A (en) | Waste mobile phone model identification method based on convolutional neural network | |
CN109919246A (en) | Pedestrian's recognition methods again based on self-adaptive features cluster and multiple risks fusion | |
CN111507183A (en) | Crowd counting method based on multi-scale density map fusion cavity convolution | |
CN110363218A (en) | A kind of embryo's noninvasively estimating method and device | |
CN112926485A (en) | Few-sample sluice image classification method | |
CN110490894A (en) | Background separating method before the video decomposed based on improved low-rank sparse | |
CN110569764B (en) | Mobile phone model identification method based on convolutional neural network | |
CN112766102A (en) | Unsupervised hyperspectral video target tracking method based on space-spectrum feature fusion | |
CN112364747A (en) | Target detection method under limited sample | |
Huang et al. | SIDNet: a single image dedusting network with color cast correction | |
Kiratiratanapruk et al. | Automatic detection of rice disease in images of various leaf sizes | |
CN111242003B (en) | Video salient object detection method based on multi-scale constrained self-attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||