CN117058473A - Warehouse material management method and system based on image recognition - Google Patents

Warehouse material management method and system based on image recognition

Info

Publication number
CN117058473A
Authority
CN
China
Prior art keywords
feature map
feature
wavelet
warehouse
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311320510.2A
Other languages
Chinese (zh)
Other versions
CN117058473B (en)
Inventor
刘权超
罗品超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ego Robotics Co ltd
Original Assignee
Shenzhen Ego Robotics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ego Robotics Co ltd filed Critical Shenzhen Ego Robotics Co ltd
Priority to CN202311320510.2A priority Critical patent/CN117058473B/en
Publication of CN117058473A publication Critical patent/CN117058473A/en
Application granted granted Critical
Publication of CN117058473B publication Critical patent/CN117058473B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a warehouse material management method and system based on image recognition. The system comprises a warehouse-in/out information acquisition module, an X-ray image acquisition module for the warehouse material to be detected, an X-ray image preprocessing module, a warehouse material detection model management module, a warehouse material detection module and a stored-material management database updating module. By acquiring an X-ray image of the warehouse material to be detected and applying image recognition, the application can identify materials while they remain packed in their packing boxes, so there is no need to manually scan an electronic tag or take the material out of the box for identification, which makes warehouse management easier for operators. The image recognition algorithm extracts scale-invariant features of the material to be detected, which reduces the influence of changes in contour and other information caused by different stacking arrangements and improves the accuracy of warehouse material detection.

Description

Warehouse material management method and system based on image recognition
Technical Field
The application relates to the field of warehouse management, in particular to a warehouse material management method and system based on image recognition.
Background
Warehouse material management refers to the management of materials stored in a warehouse, including warehouse-in and warehouse-out records and the like. At present, warehouse-in and warehouse-out registration is generally performed by manually scanning an electronic tag on the surface of the material's packing box. This approach has two problems: first, a carelessly pasted or incorrect electronic tag leads to wrong warehouse-in/out information; second, electronic tags are relatively expensive, so using them in large quantities incurs high cost. Machine-vision approaches to identifying warehouse materials also exist, but they generally require taking the material out of the packing box before identification, which is cumbersome to operate.
Disclosure of Invention
The application provides a warehouse material management method and system based on image recognition. By acquiring an X-ray image of the warehouse material to be detected and applying image recognition, the material can be identified while still packed in its packing box, so there is no need to manually scan an electronic tag or take the material out of the box for identification, which makes warehouse management easier for operators. The image recognition algorithm extracts scale-invariant features of the material to be detected, which reduces the influence of changes in contour and other information caused by different stacking arrangements and improves the accuracy of warehouse material detection.
A warehouse material management method based on image recognition comprises the following steps:
acquiring warehouse-in and warehouse-out information of warehouse materials to be tested;
acquiring an X-ray image of the warehouse material to be detected;
preprocessing an X-ray image of a storage material to be detected to obtain an image to be detected;
sending the image to be detected into a trained storage material detection model for detection, and outputting type information corresponding to the storage material to be detected;
the warehouse material detection model is built on an improved residual neural network and comprises a preprocessing layer, a feature extraction layer, a scale-invariant feature positioning layer, a global average pooling layer, a full connection layer and a classification layer, wherein the preprocessing layer downsamples the image to be detected;
and updating the stored material management database according to the type information and the warehouse in and out information corresponding to the warehouse materials to be tested.
As a preferred aspect of the present application, detecting warehouse materials with the trained warehouse material detection model comprises the following steps: extracting features of the image to be detected through the feature extraction layer to obtain a feature map G1, and processing the feature map G1 through the scale-invariant feature positioning layer to obtain a scale-invariant feature positioning weight matrix; carrying out weight assignment on the feature map G1 through the scale-invariant feature positioning weight matrix to obtain a feature map G11, and passing the feature map G11 sequentially through the global average pooling layer, the full connection layer and the classification layer to output the type information of the warehouse material.
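The inference pipeline above (feature extraction, positioning weight matrix, weighted feature map, global average pooling, full connection, classification) can be sketched as follows. This is a minimal illustration in which `backbone` and `locator` are single-convolution placeholders for the feature extraction layer and the scale-invariant feature positioning layer, not the patent's actual networks:

```python
import torch
import torch.nn as nn

class WarehouseMaterialDetector(nn.Module):
    """Minimal sketch of the inference pipeline; placeholder sub-nets."""
    def __init__(self, channels=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Conv2d(3, channels, 3, padding=1)  # -> feature map G1
        self.locator = nn.Conv2d(channels, channels, 1)       # -> weight logits
        self.gap = nn.AdaptiveAvgPool2d(1)                    # global average pooling
        self.fc = nn.Linear(channels, num_classes)            # full connection layer

    def forward(self, x):
        g1 = self.backbone(x)
        # softmax over spatial positions gives the positioning weight matrix
        w = torch.softmax(self.locator(g1).flatten(2), dim=-1).view_as(g1)
        g11 = g1 * w                                          # weight assignment
        logits = self.fc(self.gap(g11).flatten(1))
        return logits.softmax(dim=-1)                         # classification layer

model = WarehouseMaterialDetector()
probs = model(torch.randn(2, 3, 64, 64))                      # two test images
```

The output rows are class-probability vectors, one per image.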
As a preferred aspect of the present application, the feature extraction layer in the warehouse material detection model is constructed based on an improved resnet50 and comprises a first improved residual part, a second improved residual part, a third improved residual part, a fourth improved residual part and a first upsampling part, where the first, second, third and fourth improved residual parts contain 3, 4, 6 and 3 improved residual blocks respectively. An improved residual block comprises a first convolution layer, a feature splitting layer, P second convolution layers, a third convolution layer and three nonlinear activation layers, P being determined by the number of channels of the feature map input to the feature splitting layer. The three nonlinear activation layers are located between the first convolution layer and the feature splitting layer, between the P second convolution layers and the third convolution layer, and after the third convolution layer, respectively; each nonlinear activation layer comprises a batch normalization layer and a ReLU function and is used to normalize and nonlinearly transform the input features. The feature splitting layer splits the input feature map into groups: the feature map input to the feature splitting layer is denoted S1 and its channel number is denoted c, and the feature map S1 is split by channel into P feature groups D_p, p ∈ {1, 2, ..., P}, P = c/32; each feature group D_p undergoes a convolution operation through its corresponding second convolution layer. The first upsampling part is used to change the size of the feature map input to it.
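A hedged PyTorch sketch of such an improved residual block, with the channels split into P = c/32 groups of 32 channels each; the kernel sizes and the residual shortcut are illustrative assumptions (the patent defers specific parameters to resnet50):

```python
import torch
import torch.nn as nn

class ImprovedResidualBlock(nn.Module):
    """Sketch: 1x1 conv, split into P = c/32 groups each with its own
    'second' conv, 1x1 'third' conv, BN+ReLU in the three positions."""
    def __init__(self, channels=64):
        super().__init__()
        assert channels % 32 == 0
        p = channels // 32                                   # P = c / 32
        self.conv1 = nn.Conv2d(channels, channels, 1)        # first convolution layer
        self.act1 = nn.Sequential(nn.BatchNorm2d(channels), nn.ReLU())
        self.group_convs = nn.ModuleList(                    # P second convolution layers
            nn.Conv2d(32, 32, 3, padding=1) for _ in range(p))
        self.act2 = nn.Sequential(nn.BatchNorm2d(channels), nn.ReLU())
        self.conv3 = nn.Conv2d(channels, channels, 1)        # third convolution layer
        self.act3 = nn.Sequential(nn.BatchNorm2d(channels), nn.ReLU())

    def forward(self, x):
        s1 = self.act1(self.conv1(x))
        groups = torch.split(s1, 32, dim=1)                  # feature splitting layer
        s2 = torch.cat([c(d) for c, d in zip(self.group_convs, groups)], dim=1)
        return self.act3(self.conv3(self.act2(s2))) + x      # residual connection

block = ImprovedResidualBlock(64)
out = block(torch.randn(2, 64, 16, 16))
```

Per-group convolutions keep each 32-channel group independent, which matches the stated goal of strengthening channel-wise feature expression.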
As a preferred aspect of the present application, feature extraction of the image to be detected by the feature extraction layer comprises the following steps: the feature maps obtained after the image to be detected is processed sequentially by the first, second, third and fourth improved residual parts are denoted feature map F1, feature map F2, feature map F3 and feature map F4 respectively; the feature map F4 is processed by the first upsampling part and then concatenated and fused with the feature map F3 to obtain a feature map F5; after one convolution operation, the feature map F5 is sent to the first upsampling part for processing and then concatenated and fused with the feature map F2 to obtain a feature map F6; after one convolution operation, the feature map F6 is sent to the first upsampling part for processing and then concatenated and fused with the feature map F1 to obtain a feature map F7; and the feature map F7 is taken as the feature map output by the feature extraction layer.
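The top-down fusion of F1 to F4 described above can be sketched as follows, assuming for simplicity that all four stage outputs share one channel count C and dyadic spatial sizes; `conv` is a hypothetical 1x1 intermediate convolution (the patent does not specify its parameters):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_pyramid(f1, f2, f3, f4, conv):
    """Upsample the deeper map to the shallower one's size, concatenate
    (splice and fuse), with one convolution between fusions."""
    up = lambda t, ref: F.interpolate(t, size=ref.shape[-2:], mode="nearest")
    f5 = torch.cat([up(f4, f3), f3], dim=1)        # F4 upsampled, fused with F3
    f6 = torch.cat([up(conv(f5), f2), f2], dim=1)  # one conv, upsample, fuse with F2
    f7 = torch.cat([up(conv(f6), f1), f1], dim=1)  # one conv, upsample, fuse with F1
    return f7

C = 16                                   # illustrative common channel count
conv = nn.Conv2d(2 * C, C, 1)            # hypothetical intermediate convolution
f1, f2 = torch.randn(1, C, 32, 32), torch.randn(1, C, 16, 16)
f3, f4 = torch.randn(1, C, 8, 8), torch.randn(1, C, 4, 4)
f7 = fuse_pyramid(f1, f2, f3, f4, conv)  # final map at F1's resolution
```

The result has F1's spatial size and twice C channels, mixing shallow and deep features.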
As a preferred aspect of the present application, the scale-invariant feature positioning layer in the warehouse material detection model comprises a wavelet scattering network part, a multi-view fusion part and a second upsampling part. The wavelet scattering network part comprises three wavelet scattering network layers, each built with different wavelet scattering network parameters {J, R, M}, where J is the maximum wavelet transform scattering scale of the layer, R is the rotation direction combination of the layer, R = {R_1, R_2, R_3, ..., R_n, ..., R_N}, n ∈ {1, 2, 3, ..., N}, with R_n a rotation direction in the combination, and M is the maximum scattering order of the layer; the wavelet scattering network layers are used to extract scale-invariant features. The multi-view fusion part comprises a first dilated convolution layer, a second dilated convolution layer, a third dilated convolution layer and a channel attention mechanism layer, the three dilated convolution layers being built with different dilated convolution kernels. The second upsampling part is used to change the size of the feature map input to it.
As a preferred aspect of the present application, obtaining the scale-invariant feature positioning weight matrix through the scale-invariant feature positioning layer comprises the following steps: the feature map output by the feature extraction layer, namely the feature map G1, is sent into the wavelet scattering network part, and three scale-invariant features are obtained from the three wavelet scattering network layers, denoted feature map G2, feature map G3 and feature map G4 respectively, with the sizes of G2, G3 and G4 increasing one by one; the feature maps G2, G3 and G4 are processed by the multi-view fusion part to obtain feature maps G5, G6 and G7; after one convolution operation, the feature map G5 is sent to the second upsampling part for processing and then concatenated and fused with the feature map G6 to obtain a feature map G8; after one convolution operation, the feature map G8 is sent to the second upsampling part for processing and then concatenated and fused with the feature map G7 to obtain a feature map G9; after one convolution operation, the feature map G9 is sent to the second upsampling part for processing and then concatenated and fused with the feature map G1 to obtain a feature map G10; and the feature map G10 is passed through a softmax function to obtain the scale-invariant feature positioning weight matrix.
As a preferred aspect of the present application, extracting scale-invariant features from the feature map G1 by the wavelet scattering network part comprises the following steps: for any wavelet scattering network layer, the wavelet scattering network parameters corresponding to that layer are obtained and a wavelet function ψ_{j,m,r}(x) is constructed; the wavelet function ψ_{j,m,r}(x) = 2^(-2j) ψ(2^(-j) r^(-1) x) characterizes a wavelet transform with wavelet transform scattering scale j and rotation direction r applied to the feature map x input at the m-th order of wavelet scattering, where m is the scattering order, m ∈ {0, 1, 2, ..., M}, j is the wavelet transform scattering scale, j = m - 1, and r is a rotation direction, r ∈ {R_1, R_2, R_3, ..., R_n, ..., R_N}; ψ_{m,R} denotes the wavelet function with scattering order m and rotation direction combination R. A wavelet scattering path set of scattering order m, P_m = {p_0, p_1, p_2, ..., p_k, ..., p_K}, K = m, is determined from the wavelet transform scattering scale j, the scattering order m and the rotation direction r, where p_k is a wavelet scattering path composed of k wavelet functions selected from the set {ψ_{0,R}, ψ_{1,R}, ψ_{2,R}, ..., ψ_{m-1,R}} together with the wavelet function ψ_{m,R}. The wavelet scattering path set P_m is expanded into {ζ_0, ζ_1, ζ_2, ..., ζ_y, ..., ζ_Y}, and a corresponding filter set filters = {μ_0, μ_1, μ_2, ..., μ_y, ..., μ_Y} is generated from P_m, where μ_y is the filter corresponding to the m-th order wavelet scattering path ζ_y = (ψ_{i,R}, ..., ψ_{m,R}), i being the index of the wavelet function within the path ζ_y. A scattering transform intermediate value U_m(μ_y) is generated from the filter set: U_m(μ_y) = |...||x * ψ_{i,R}| ... * ψ_{m,R}|. A window function φ_J(x) = 2^(-2J) φ(2^(-J) x) is selected, where φ(x) = (4π)^(-0.5) exp(-0.25|x|^2) is a two-dimensional Gaussian function; a reference scattering coefficient T_Z = x * φ_J(x) is calculated, and scattering coefficients are output from the window function φ_J(x) and the intermediate values U_m(μ_y) as T_m(μ_y) = U_m(μ_y) * φ_J(x). All scattering coefficients T_m(μ_y) and the reference scattering coefficient T_Z are concatenated to output the scale-invariant features; the scale-invariant features output by the three wavelet scattering network layers are arranged from small to large by size and denoted feature map G2, feature map G3 and feature map G4.
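The scattering computation can be illustrated with a drastically simplified first-order version: a generic Morlet-like band-pass filter stands in for ψ_{j,m,r} (an assumption, not the patent's exact filter bank), the Gaussian window φ_J follows the form above, and each path contributes |x * ψ| * φ_J alongside the reference coefficient x * φ_J:

```python
import numpy as np

def band_pass(shape, j, theta):
    """Morlet-like stand-in for psi at dyadic scale 2**j, direction theta."""
    h, w = shape
    y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    xr = (x * np.cos(theta) + y * np.sin(theta)) / 2.0 ** j
    yr = (-x * np.sin(theta) + y * np.cos(theta)) / 2.0 ** j
    return np.exp(-(xr ** 2 + yr ** 2) / 2) * np.cos(3 * xr) / 4.0 ** j

def gaussian_window(shape, J):
    """phi_J(x) = 2^(-2J) phi(2^(-J) x), phi the 2-D Gaussian above."""
    h, w = shape
    y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    return (4 * np.pi) ** -0.5 * np.exp(-0.25 * (x ** 2 + y ** 2) / 4.0 ** J) / 4.0 ** J

def conv2(a, b):
    """Circular 2-D convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(b))))

def scattering_features(x, J=2, rotations=(0.0, np.pi / 2), M=1):
    """First-order illustration: T_Z = x * phi_J plus one coefficient
    T = |x * psi| * phi_J per scale/rotation path."""
    phi = gaussian_window(x.shape, J)
    feats = [conv2(x, phi)]                        # reference scattering coefficient
    for j in range(M):                             # one path per scale j ...
        for r in rotations:                        # ... and rotation direction r
            u = np.abs(conv2(x, band_pass(x.shape, j, r)))  # intermediate value U
            feats.append(conv2(u, phi))            # scattering coefficient T
    return np.stack(feats)

feats = scattering_features(np.random.rand(32, 32))
```

The modulus followed by low-pass averaging is what yields (local) scale and translation invariance in scattering networks.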
As a preferred aspect of the present application, processing the feature maps G2, G3 and G4 by the multi-view fusion part comprises the following steps: the feature map G2 undergoes convolution operations through the first, second and third dilated convolution layers to obtain feature maps E1, E2 and E3; the feature map G2 is also sent to the channel attention mechanism layer, which performs global average pooling and global maximum pooling on the feature map G2 by channel to obtain a first channel weight vector and a second channel weight vector, and a third channel weight vector is obtained by concatenating the first and second channel weight vectors and applying a convolution; the feature maps E1, E2 and E3 are concatenated and fused, then convolved once to obtain a feature map E4, and weight assignment is carried out on the feature map E4 through the third channel weight vector to obtain the feature map G5; the same operations are performed on the feature maps G3 and G4 to obtain the feature maps G6 and G7 respectively.
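A PyTorch sketch of the multi-view fusion part; the dilation rates (1, 2, 4) and the sigmoid applied to the third channel weight vector are assumptions, since the patent specifies neither:

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    """Three dilated convolutions, a channel attention branch combining
    global average and max pooling, and a fusion convolution."""
    def __init__(self, c=32):
        super().__init__()
        self.branches = nn.ModuleList(                       # three dilated conv layers
            nn.Conv2d(c, c, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.attn = nn.Conv2d(2 * c, c, 1)                   # conv after concatenation
        self.fuse = nn.Conv2d(3 * c, c, 1)                   # fusion of E1, E2, E3

    def forward(self, g):
        e1, e2, e3 = (b(g) for b in self.branches)
        avg = g.mean(dim=(2, 3), keepdim=True)               # first channel weight vector
        mx = g.amax(dim=(2, 3), keepdim=True)                # second channel weight vector
        w = torch.sigmoid(self.attn(torch.cat([avg, mx], dim=1)))  # third weight vector
        e4 = self.fuse(torch.cat([e1, e2, e3], dim=1))
        return e4 * w                                        # weight assignment -> G5

mv = MultiViewFusion(32)
g5 = mv(torch.randn(1, 32, 16, 16))
```

The different dilation rates give each branch a different receptive field over the same input, which is the "multi-view" aspect.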
As a preferred aspect of the application, training the warehouse material detection model specifically comprises the following steps: acquiring training images annotated with type information; combining all annotated training images into a training image set and training the initialized warehouse material detection model on this set, adopting an alternating optimization method during training; if the corresponding cross-entropy loss value falls within a preset range, the trained warehouse material detection model is output; otherwise, iterative training continues.
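The training criterion can be sketched as a standard cross-entropy loop that stops once the loss enters a preset range; the toy model, data and threshold below are placeholders, and the patent's alternating optimization method is not reproduced here:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy stand-ins for the detection model and the annotated training set.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 16, 16)           # training image set
labels = torch.randint(0, 5, (8,))           # annotated type information

loss = loss_fn(model(images), labels)
for step in range(200):                      # iterative training
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
    if loss.item() < 0.05:                   # loss within the preset range
        break
```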
A warehouse material management system based on image recognition, comprising:
the warehouse-in and warehouse-out information acquisition module is used for acquiring warehouse-in and warehouse-out information of warehouse materials to be tested;
the X-ray image acquisition module is used for acquiring the X-ray image corresponding to the warehouse material to be detected;
the X-ray image preprocessing module is used for preprocessing the X-ray image of the storage material to be detected to obtain an image to be detected;
the warehouse material detection model management module is used for training and storing a warehouse material detection model;
the storage material detection module is used for processing the image to be detected according to the storage material detection model and outputting type information corresponding to the storage material to be detected;
and the stored material management database updating module is used for updating the stored material management database according to the type information and the warehouse in and out information corresponding to the warehouse materials to be tested.
The application has the following advantages:
1. By acquiring an X-ray image of the warehouse material to be detected and applying image recognition, the application can identify materials packed in their packing boxes, so there is no need to manually scan an electronic tag or take the material out of the box for identification, which makes warehouse management easier for operators. The image recognition algorithm extracts scale-invariant features of the material to be detected, which reduces the influence of changes in contour and other information caused by different stacking arrangements and improves detection accuracy.
2. When extracting features from the X-ray image of the warehouse material to be detected, shallow features and deep features are fused through multi-scale fusion, which improves the accuracy of subsequent detection.
3. Because the wavelet scattering network parameters differ, the sizes and channel numbers of the output feature maps differ. To learn the feature information acquired under the different wavelet scattering network parameters, the feature maps G2, G3 and G4 are unified in size and their features fused through the multi-view fusion part and the second upsampling part, ensuring the consistency of the scale-invariant features.
4. The scale-invariant features are extracted from the feature map G1 through the wavelet scattering network, reinforcing the scale-invariant features in the image to be detected; during extraction, scattering transforms at all scales are applied to the feature map G1 and to all scattering transform intermediate values, which further strengthens the extraction of the scale-invariant features.
Drawings
Fig. 1 is a schematic structural diagram of a warehouse material detection model according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a feature extraction layer according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an improved residual block according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a scale-invariant feature positioning layer employed in an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a multi-view fusion portion according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a warehouse material management system based on image recognition according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present application, the technical solution of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
Embodiment 1, a warehouse material management method based on image recognition, includes:
the method comprises the steps of obtaining warehouse-in and warehouse-out information of warehouse-in and warehouse-out materials to be detected, wherein the warehouse-in and warehouse-out information comprises warehouse-in and warehouse-out states, time information corresponding to warehouse-in and warehouse-out and the like, and particularly, the warehouse-in and warehouse-out states of the warehouse-in and warehouse-out materials to be detected are marked by Boolean values, for example, 1 indicates that the warehouse-in materials to be detected are in warehouse-in states, and 0 indicates that the warehouse-out states of the warehouse-in and warehouse-out materials to be detected;
the X-ray image of the storage material to be detected is recorded as the X-ray image of the storage material to be detected, and it is to be noted that in the process of acquiring the X-ray image of the storage material to be detected, the X-ray imaging instrument scans and images a packing box containing the storage material to be detected, and as the storage material to be detected is different in material and stacking mode and different in X-ray absorption degree, different storage materials to be detected have different image characteristics, the storage material to be detected can be identified in a deep learning mode according to the different image characteristics, and the storage material to be detected can be hardware metal pieces such as screws and the like or sockets and the like;
when the warehouse material is required to be taken out from or stored in the warehouse, warehouse management personnel can send the warehouse material into a device carrying a program for executing a warehouse material management method based on image recognition, the device is similar to a high-speed railway security inspection machine and is provided with a conveyor belt, an X-ray imager and the like, and the X-ray imager can acquire a Chu Wuliao X-ray image of a warehouse to be detected corresponding to the warehouse material;
preprocessing an X-ray image of a storage material to be detected to obtain an image to be detected, wherein the preprocessing is image filtering operation; the image filtering operation adopted in the embodiment adopts an adaptive median filtering algorithm;
sending the image to be detected into a trained storage material detection model for detection, and outputting type information corresponding to the storage material to be detected;
the storage material detection model is built based on an improved residual neural network, the residual neural network is a resnet50, as shown in fig. 1, the storage material detection model comprises a preprocessing layer, a feature extraction layer, a scale-invariant feature positioning layer, a global averaging pooling layer, a full connection layer and a classification layer, wherein the preprocessing layer is used for downsampling an image to be detected, the calculated amount is reduced, and the adopted convolution kernel can be 7 multiplied by 7;
Updating the stored-material management database according to the type information and warehouse-in/out information corresponding to the material, thereby realizing management of the warehouse materials. The stored-material management database records management information of the warehouse materials, such as the type of each material, its quantity change information and its current quantity. For example, when a material being stored in is detected as a socket, the quantity in the socket entry of the database can be modified and the change information recorded.
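The database update step could look like the following SQLite sketch; the table names, columns and quantity logic are illustrative assumptions, not taken from the patent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (material TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE log (material TEXT, delta INTEGER, ts TEXT)")
conn.execute("INSERT INTO stock VALUES ('socket', 40)")

def update_database(conn, material, in_out, timestamp, count=1):
    """in_out uses the Boolean convention above: 1 = warehouse-in, 0 = out."""
    delta = count if in_out == 1 else -count
    conn.execute("UPDATE stock SET qty = qty + ? WHERE material = ?",
                 (delta, material))
    conn.execute("INSERT INTO log VALUES (?, ?, ?)", (material, delta, timestamp))
    conn.commit()

update_database(conn, "socket", 1, "2023-10-12 09:00")   # a socket is stored in
qty = conn.execute("SELECT qty FROM stock WHERE material = 'socket'").fetchone()[0]
```

Keeping a separate log table preserves the quantity change history alongside the current stock level.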
Referring to fig. 1, the detection of the warehouse material by the trained warehouse material detection model includes the following steps: extracting features of an image to be detected through a feature extraction layer to obtain a feature image G1, and processing the feature image G1 through a scale-invariant feature positioning layer to obtain a scale-invariant feature positioning weight matrix; and carrying out weight assignment on the feature map G1 through a scale-invariant feature positioning weight matrix to obtain a feature map G11, and sequentially passing the feature map G11 through a global average pooling layer, a full connection layer and a classification layer to output type information of storage materials.
According to the application, the X-ray image of the warehouse material to be detected is obtained, and the warehouse material packaged in the packaging box can be identified by combining with image identification, so that the warehouse material does not need to be manually scanned with an electronic tag or taken out of the packaging box for identification, and an operator can conveniently manage the warehouse material; according to the image recognition algorithm, the influence of changes such as contour information and the like along with the change of the stacking mode of the storage materials on the detection of the storage materials is reduced by extracting the scale invariant features in the materials to be detected, and the accuracy of the detection of the storage materials can be improved.
The feature extraction layer in the warehouse material detection model is constructed based on an improved resnet50 and comprises a first improved residual part, a second improved residual part, a third improved residual part, a fourth improved residual part and a first upsampling part, where the four improved residual parts contain 3, 4, 6 and 3 improved residual blocks respectively. As shown in fig. 3, each improved residual block comprises a first convolution layer, a feature splitting layer, P second convolution layers, a third convolution layer and three nonlinear activation layers, where P is determined by the number of channels of the feature map input to the feature splitting layer. The three nonlinear activation layers are located between the first convolution layer and the feature splitting layer, between the P second convolution layers and the third convolution layer, and after the third convolution layer; each nonlinear activation layer comprises a batch normalization (BN) layer and a ReLU function for normalizing and nonlinearly transforming the input features. The feature splitting layer divides the input feature map into groups: the feature map input to the feature splitting layer is denoted S1 and its number of channels is denoted c, and the feature splitting layer splits S1 along the channel axis into P feature groups D_p, p ∈ {1, 2, 3, …, P}, P = c/32; that is, the feature map S1 is divided into groups of 32 channels each, and each feature group D_p is convolved by its corresponding second convolution layer. Learning the features of S1 group by group along the channel axis strengthens the expression of channel features and weakens spatially correlated features, thereby highlighting local features in the image to be detected. The specific parameters of the first convolution layer, the P second convolution layers, the third convolution layer and the three nonlinear activation layers are set with reference to resnet50. The first upsampling part is used to change the size of the feature map input to it.
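The channel-group splitting can be sketched in a few lines. The following is a minimal NumPy illustration, not the patent's trained implementation: random weights stand in for the P learned second convolution layers, and each 32-channel group receives its own 1×1 convolution.

```python
import numpy as np

def split_and_convolve(s1, group_size=32, rng=None):
    """Split feature map S1 of shape (c, H, W) into P = c/32 groups along
    the channel axis and run an independent 1x1 convolution on each group,
    mimicking the feature splitting layer of the improved residual block."""
    if rng is None:
        rng = np.random.default_rng(0)
    c, h, w = s1.shape
    assert c % group_size == 0, "channel count must be divisible by 32"
    p = c // group_size                      # P = c / 32
    groups = np.split(s1, p, axis=0)         # feature groups D_1 ... D_P
    outputs = []
    for d in groups:
        # a 1x1 convolution over a (32, H, W) group is a per-pixel matmul;
        # the random weight is a stand-in for a learned second convolution layer
        weight = rng.standard_normal((group_size, group_size))
        outputs.append(np.einsum('oc,chw->ohw', weight, d))
    return np.concatenate(outputs, axis=0)   # re-assembled (c, H, W) map

s1 = np.random.default_rng(1).standard_normal((64, 8, 8))
out = split_and_convolve(s1)                 # 64 channels -> P = 2 groups
```

Because each group is convolved independently, cross-channel mixing only happens within a 32-channel group, which is what weakens spatially correlated features relative to a full convolution.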
Referring to fig. 2, feature extraction of the image to be detected by the feature extraction layer comprises the following steps: the feature maps obtained after the image to be detected is processed in turn by the first, second, third and fourth improved residual parts are denoted feature map F1, feature map F2, feature map F3 and feature map F4 respectively; feature map F4 is processed by the first upsampling part and then spliced and fused with feature map F3 to obtain feature map F5; after one convolution operation, feature map F5 is sent to the first upsampling part for processing and then spliced and fused with feature map F2 to obtain feature map F6; after one convolution operation, feature map F6 is sent to the first upsampling part for processing and then spliced and fused with feature map F1 to obtain feature map F7; and feature map F7 is taken as the feature map output by the feature extraction layer.
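The steps above are a feature-pyramid-style fusion, which can be checked at the shape level. In this sketch the channel counts follow a standard resnet50 on a 256×256 input (an assumption, since the patent gives no sizes), nearest-neighbour repetition stands in for the first upsampling part, and the slice `[:k]` stands in for the convolution that reduces channels before upsampling.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour 2x upsampling, standing in for the first upsampling part
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse(deep, shallow):
    # upsample the deeper map, then splice (channel-concatenate) the shallower one
    return np.concatenate([upsample2x(deep), shallow], axis=0)

# assumed stage outputs (channels, height, width) for a 256x256 input
f1 = np.zeros((256, 64, 64))
f2 = np.zeros((512, 32, 32))
f3 = np.zeros((1024, 16, 16))
f4 = np.zeros((2048, 8, 8))

f5 = fuse(f4, f3)            # (3072, 16, 16)
f6 = fuse(f5[:512], f2)      # slice stands in for the channel-reducing convolution
f7 = fuse(f6[:256], f1)      # final map handed to later layers
```

Each fusion doubles the spatial size and splices shallow detail onto upsampled deep semantics, so F7 carries both.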
Because warehouse materials are placed inside packaging boxes, the stacking mode, the material type and other factors all affect the acquired X-ray image of the warehouse material to be detected. When extracting features from this X-ray image, shallow features and deep features are therefore fused in a multi-scale fusion manner, which improves the detection accuracy of the warehouse material.
As shown in fig. 4, the scale-invariant feature positioning layer in the warehouse material detection model comprises a wavelet scattering network part, a multi-view fusion part and a second upsampling part. The wavelet scattering network part comprises three wavelet scattering network layers with different built-in wavelet scattering network parameters {J, R, M}, where J is the largest wavelet transform scattering scale of the layer, R is the layer's rotation direction combination, R = {R_1, R_2, R_3, …, R_n, …, R_N}, n ∈ {1, 2, 3, …, N}, R_n is one rotation direction in the combination (for example π), and M is the maximum scattering order of the layer; the wavelet scattering network layers are used for extracting scale-invariant features. As shown in fig. 5, the multi-view fusion part comprises a first hole convolution layer, a second hole convolution layer, a third hole convolution layer and a channel attention mechanism layer, where the first, second and third hole (dilated) convolution layers carry hole convolution kernels with dilation rates of 1, 2 and 3 respectively. The second upsampling part is used to change the size of the feature map input to it.
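The three dilation rates give three receptive-field sizes from the same kernel. For a 3×3 kernel (an assumption, since the patent does not state the kernel size) the standard effective-extent formula makes this concrete:

```python
def effective_kernel(k, dilation):
    """Effective spatial extent of a k x k dilated (hole) convolution kernel:
    k + (k - 1) * (dilation - 1)."""
    return k + (k - 1) * (dilation - 1)

# dilation rates 1, 2, 3 as in the three hole convolution layers
sizes = {d: effective_kernel(3, d) for d in (1, 2, 3)}
# -> {1: 3, 2: 5, 3: 7}: three branches see 3x3, 5x5 and 7x7 neighbourhoods
```

Running the same input through all three branches therefore samples feature information at three receptive fields without increasing the parameter count of any single kernel.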
Referring to fig. 4, obtaining the scale-invariant feature positioning weight matrix through the scale-invariant feature positioning layer specifically comprises the following steps: the feature map output by the feature extraction layer is obtained and denoted feature map G1; G1 is sent into the wavelet scattering network part, and after the three wavelet scattering network layers three scale-invariant features are obtained, denoted feature map G2, feature map G3 and feature map G4, whose sizes increase one by one; feature map G2, feature map G3 and feature map G4 are processed by the multi-view fusion part to obtain feature map G5, feature map G6 and feature map G7; after one convolution operation, feature map G5 is sent to the second upsampling part for processing and then spliced and fused with feature map G6 to obtain feature map G8; after one convolution operation, feature map G8 is sent to the second upsampling part for processing and then spliced and fused with feature map G7 to obtain feature map G9; after one convolution operation, feature map G9 is sent to the second upsampling part for processing and then spliced and fused with feature map G1 to obtain feature map G10; and feature map G10 is passed through a softmax function to obtain the scale-invariant feature positioning weight matrix. Because the stacking modes of warehouse materials differ, the contour information presented in the X-ray image also changes by rotation, translation and the like; to prevent this from affecting detection, the application uses the scale-invariant feature positioning layer to strengthen the scale-invariant features in the image to be detected and improve the accuracy of warehouse material detection.
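The final weighting step can be sketched in NumPy. This assumes the softmax is taken over the channel axis, which the text does not specify, and uses random values in place of the real feature map G10.

```python
import numpy as np

def softmax_weights(g10):
    """Numerically stabilised softmax over the channel axis of feature map
    G10, yielding the scale-invariant feature positioning weight matrix."""
    e = np.exp(g10 - g10.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def apply_weights(g1, w):
    # element-wise weight assignment of the positioning matrix onto G1
    return g1 * w

g10 = np.random.default_rng(0).standard_normal((8, 4, 4))
w = softmax_weights(g10)                 # weights sum to 1 per spatial location
g11 = apply_weights(np.ones((8, 4, 4)), w)
```

Because the weights sum to one at each spatial location, the weighting redistributes emphasis across channels rather than changing the overall magnitude of the map.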
Because the three wavelet scattering network layers have different wavelet scattering network parameters, the feature maps they output after processing feature map G1 differ in size and number of channels. To learn the feature information acquired under the different parameter settings, feature map G2, feature map G3 and feature map G4 are therefore unified in size and fused through the multi-view fusion part and the second upsampling part, which ensures the consistency of the scale-invariant features.
Extracting scale-invariant features from feature map G1 through the wavelet scattering network part comprises the following steps. For any wavelet scattering network layer, the wavelet scattering network parameters of that layer are obtained and the wavelet function ψ_{j,m,r}(x) is constructed; the wavelet function ψ_{j,m,r}(x) = 2^{-2j} ψ(2^{-j} r^{-1} x) performs, on the feature map x input at the m-th order of wavelet scattering, a wavelet transform with scattering scale j and rotation direction r, where m is the scattering order, m ∈ {0, 1, 2, …, M}, j is the wavelet transform scattering scale, j = m − 1, and r is a rotation direction, r ∈ {R_1, R_2, R_3, …, R_n, …, R_N}. Let ψ_{m,R} denote the wavelet functions of scattering order m over the rotation direction combination R; for example, if the current scattering order is 2 and the rotation direction combination is R = {0, π/4, π/2, 3π/4}, the corresponding wavelet functions are ψ_{m,R} = {ψ_{1,0}, ψ_{1,π/4}, ψ_{1,π/2}, ψ_{1,3π/4}}. According to the wavelet transform scattering scale j, the scattering order m and the rotation direction r, the wavelet scattering path set of order m is determined as P_m = {p_0, p_1, p_2, …, p_k, …, p_K}, K = m, where p_k denotes the wavelet scattering paths composed of k wavelet functions selected from the lower-order set {ψ_{0,R}, ψ_{1,R}, ψ_{2,R}, …, ψ_{m−1,R}} followed by the wavelet function ψ_{m,R}; by the principle of combinations, p_0 yields one path, p_1 yields m paths, and so on, up to p_K, which again yields one path. The wavelet scattering path set P_m is therefore expanded into {ζ_0, ζ_1, ζ_2, …, ζ_y, …, ζ_Y}; when the scattering order is 2, the expanded path set is P_2 = {(ψ_{2,R}), (ψ_{0,R}, ψ_{2,R}), (ψ_{1,R}, ψ_{2,R}), (ψ_{0,R}, ψ_{1,R}, ψ_{2,R})}. From the wavelet scattering path set P_m, the corresponding filter set filters = {μ_0, μ_1, μ_2, …, μ_y, …, μ_Y} is generated, where μ_y is the filter corresponding to the m-th order wavelet scattering path ζ_y = (ψ_{i,R}, …, ψ_{m,R}) and i is the index of a wavelet function in the path ζ_y. From the filter set, the scattering transform intermediate value U_m(μ_y) is generated by the formula U_m(μ_y) = |…||x ∗ ψ_{i,R}| … ∗ ψ_{m,R}|. A window function φ_J(x) = 2^{-2J} φ(2^{-J} x) is selected, where φ(x) = (4π)^{-0.5} exp(−0.25|x|²) is a two-dimensional Gaussian function; the reference scattering coefficient T_Z = x ∗ φ_J(x) is calculated, and according to the window function φ_J(x) and the scattering transform intermediate value U_m(μ_y), the scattering coefficient T_m(μ_y) is output by the formula T_m(μ_y) = U_m(μ_y) ∗ φ_J(x). All scattering coefficients T_m(μ_y) and the reference scattering coefficient T_Z = x ∗ φ_J(x) are concatenated and output as the scale-invariant features. Finally, the scale-invariant features output by the three wavelet scattering network layers are arranged from small to large by size and denoted feature map G2, feature map G3 and feature map G4.
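The path construction above can be checked with a few lines of plain Python; integer indices stand for the wavelet functions ψ_{0,R} … ψ_{m,R}.

```python
from itertools import combinations

def wavelet_scattering_paths(m):
    """Enumerate the order-m scattering paths: choose any k of the
    lower-order wavelet functions psi_0 .. psi_{m-1} (kept in ascending
    order) and append psi_m, for k = 0 .. m."""
    lower = list(range(m))          # indices of psi_0,R ... psi_{m-1},R
    paths = []
    for k in range(m + 1):
        for combo in combinations(lower, k):
            paths.append(tuple(combo) + (m,))
    return paths

paths2 = wavelet_scattering_paths(2)
# -> [(2,), (0, 2), (1, 2), (0, 1, 2)], matching the P_2 example above
```

For m = 2 this reproduces the four paths of P_2 given in the text, and in general the path count is the sum of binomial coefficients C(m, k) over k = 0 … m, i.e. 2^m.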
To further illustrate the extraction of scale-invariant features from feature map G1 by the wavelet scattering network part: feature map G1 is input into a wavelet scattering network layer, assuming the layer's maximum wavelet transform scattering scale J is 2, its maximum scattering order is 2 and the selected rotation direction combination is R = {0, π/4, π/2, 3π/4}; feature map G1 is denoted x. The reference scattering coefficient T_Z = x ∗ φ_J(x) is calculated. Through the 0-th order filter μ_0 = (ψ_{0,R}), the scattering transform intermediate value U_0(μ_0) = |x ∗ ψ_{0,R}| and the 0-th order scattering coefficient T_0(μ_0) = |x ∗ ψ_{0,R}| ∗ φ_J(x) are calculated. Through the 1-st order filters μ_0 = (ψ_{1,R}) and μ_1 = (ψ_{0,R}, ψ_{1,R}), the intermediate values U_1(μ_0) = |x ∗ ψ_{1,R}| and U_1(μ_1) = ||x ∗ ψ_{0,R}| ∗ ψ_{1,R}| and the 1-st order scattering coefficients T_1(μ_0) = |x ∗ ψ_{1,R}| ∗ φ_J(x) and T_1(μ_1) = ||x ∗ ψ_{0,R}| ∗ ψ_{1,R}| ∗ φ_J(x) are calculated. Through the 2-nd order filters μ_0 = (ψ_{2,R}), μ_1 = (ψ_{0,R}, ψ_{2,R}), μ_2 = (ψ_{1,R}, ψ_{2,R}) and μ_3 = (ψ_{0,R}, ψ_{1,R}, ψ_{2,R}), the intermediate values U_2(μ_0) = |x ∗ ψ_{2,R}|, U_2(μ_1) = ||x ∗ ψ_{0,R}| ∗ ψ_{2,R}|, U_2(μ_2) = ||x ∗ ψ_{1,R}| ∗ ψ_{2,R}| and U_2(μ_3) = |||x ∗ ψ_{0,R}| ∗ ψ_{1,R}| ∗ ψ_{2,R}| and the 2-nd order scattering coefficients T_2(μ_0) = |x ∗ ψ_{2,R}| ∗ φ_J(x), T_2(μ_1) = ||x ∗ ψ_{0,R}| ∗ ψ_{2,R}| ∗ φ_J(x), T_2(μ_2) = ||x ∗ ψ_{1,R}| ∗ ψ_{2,R}| ∗ φ_J(x) and T_2(μ_3) = |||x ∗ ψ_{0,R}| ∗ ψ_{1,R}| ∗ ψ_{2,R}| ∗ φ_J(x) are calculated. Finally, all scattering coefficients T_m(μ_y) and the reference scattering coefficient T_Z = x ∗ φ_J(x) are concatenated and the scale-invariant features are output.
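A toy one-dimensional analogue shows the modulus-convolution structure of the cascade |x ∗ ψ| followed by window smoothing. The wavelet and window here are crude stand-ins chosen only to make the pipeline runnable; they are not the two-dimensional rotated filters of the patent.

```python
import numpy as np

def circ_conv(x, h):
    # same-length circular convolution via FFT keeps the toy example simple
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n=len(x))))

rng = np.random.default_rng(0)
x = rng.standard_normal(32)                 # stand-in for feature map G1
psi = np.array([1.0, -1.0])                 # crude high-pass stand-in wavelet
phi = np.exp(-0.25 * np.arange(-4, 5) ** 2)
phi /= phi.sum()                            # normalised Gaussian window

U0 = np.abs(circ_conv(x, psi))              # U_0 = |x * psi|
T0 = circ_conv(U0, phi)                     # T_0 = U_0 * phi (order-0 coeff.)
U1 = np.abs(circ_conv(U0, psi))             # cascade ||x * psi| * psi|
T1 = circ_conv(U1, phi)                     # higher-order coefficient
Tz = circ_conv(x, phi)                      # reference coefficient T_Z
features = np.concatenate([Tz, T0, T1])     # concatenated scattering output
```

The modulus after each convolution discards phase, and the final window average makes each coefficient stable under small translations, which is the source of the scale and shift robustness the text relies on.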
In this application, the wavelet scattering network extracts scale-invariant features from feature map G1 and strengthens the scale-invariant features in the image to be detected; during the extraction, scattering transforms at every scale are applied to feature map G1 and to all scattering transform intermediate values, which further enhances the extraction of scale-invariant features.
Referring to fig. 5, processing feature map G2, feature map G3 and feature map G4 through the multi-view fusion part comprises the following steps: feature map G2 is convolved by the first, second and third cavity convolution layers to obtain feature map E1, feature map E2 and feature map E3; because the three cavity convolution layers carry convolution kernels with different dilation rates, feature information from different receptive fields of feature map G2 is learned, which further strengthens feature learning. Feature map G2 is also sent to the channel attention mechanism layer, which performs global average pooling and global max pooling on G2 per channel to obtain a first channel weight vector and a second channel weight vector; the two vectors are spliced and then convolved to obtain a third channel weight vector. Feature map E1, feature map E2 and feature map E3 are spliced and fused and then convolved once to obtain feature map E4, and weight assignment is performed on feature map E4 with the third channel weight vector, i.e. each channel of E4 is multiplied by its corresponding weight value, to obtain feature map G5. The same operations are performed on feature map G3 and feature map G4 to obtain feature map G6 and feature map G7 respectively; feature map G2, feature map G3 and feature map G4 are denoted the input feature maps, and feature map G5, feature map G6 and feature map G7 the output feature maps.
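The channel attention step can be sketched as follows. A random linear map and a sigmoid stand in for the learned convolution that merges the spliced pooling vectors; the patent does not state the merging convolution's parameters or activation, so both are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, rng=None):
    """Per-channel attention: global average and global max pooling give
    two channel weight vectors; a stand-in 1x1 convolution (random linear
    map) merges the spliced pair into the third channel weight vector,
    which then re-weights every channel of the input."""
    if rng is None:
        rng = np.random.default_rng(0)
    c = x.shape[0]
    avg = x.mean(axis=(1, 2))               # first channel weight vector
    mx = x.max(axis=(1, 2))                 # second channel weight vector
    spliced = np.concatenate([avg, mx])     # (2c,) spliced vector
    conv = rng.standard_normal((c, 2 * c)) / np.sqrt(2 * c)
    weights = sigmoid(conv @ spliced)       # third channel weight vector
    return x * weights[:, None, None]       # per-channel weight assignment

e4 = np.random.default_rng(1).standard_normal((16, 6, 6))
g5 = channel_attention(e4)                  # same shape, re-weighted channels
```

Average pooling summarises each channel's overall response while max pooling keeps its strongest activation; merging both lets the weight vector reflect two complementary views of channel importance.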
Training the warehouse material detection model specifically comprises the following steps: training images annotated with type information are acquired, where a training image is an X-ray image of warehouse material accurately captured by an operator with an X-ray imager and labelled with the actual type information; all annotated training images are combined into a training image set, and the initialized warehouse material detection model is trained on this set using an alternating optimization method. If the corresponding cross-entropy loss value falls within a preset range (the preset range is set by the operator), the detection performance of the warehouse material detection model has reached expectations and the trained model is output; otherwise iterative training continues.
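The stopping criterion can be illustrated with a small cross-entropy check. The preset range (0, 0.05] used here is an arbitrary illustrative choice; as the text says, in practice the range is set by the operator.

```python
import numpy as np

def cross_entropy(probs, labels):
    # mean negative log-likelihood of each image's labelled class
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def should_stop(probs, labels, preset_range=(0.0, 0.05)):
    """Training stops once the loss falls inside the operator's preset range."""
    lo, hi = preset_range
    return lo <= cross_entropy(probs, labels) <= hi

# near-perfect class probabilities for three training images
probs = np.array([[0.99, 0.01],
                  [0.98, 0.02],
                  [0.01, 0.99]])
labels = np.array([0, 0, 1])
```

With these predictions the loss is about 0.013, inside the illustrative range, so `should_stop` reports that iteration can end.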
Embodiment 2, as shown in fig. 6, is a warehouse material management system based on image recognition, which includes:
the warehouse-in and warehouse-out information acquisition module is used for acquiring warehouse-in and warehouse-out information of warehouse materials to be tested;
the X-ray image acquisition module is used for acquiring the X-ray image of the warehouse material to be detected corresponding to the warehouse material to be tested;
the X-ray image preprocessing module is used for preprocessing the X-ray image of the storage material to be detected to obtain an image to be detected;
the warehouse material detection model management module is used for training and storing a warehouse material detection model;
the storage material detection module is used for processing the image to be detected according to the storage material detection model and outputting type information corresponding to the storage material to be detected;
and the stored material management database updating module is used for updating the stored material management database according to the type information and the warehouse in and out information corresponding to the warehouse materials to be tested.
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and it is intended that all such modifications and variations be included within the scope of the following claims. Parts of the specification not described in detail belong to the prior art known to those skilled in the art.

Claims (10)

1. The warehouse material management method based on image recognition is characterized by comprising the following steps:
acquiring warehouse-in and warehouse-out information of warehouse materials to be tested;
acquiring an X-ray image of the warehouse material to be detected and recording it as the X-ray image of the storage material to be detected;
preprocessing an X-ray image of a storage material to be detected to obtain an image to be detected;
sending the image to be detected into a trained storage material detection model for detection, and outputting type information corresponding to the storage material to be detected;
the storage material detection model is built based on an improved residual neural network and comprises a preprocessing layer, a feature extraction layer, a scale-invariant feature positioning layer, a global average pooling layer, a full connection layer and a classification layer, wherein the preprocessing layer is used for downsampling an image to be detected;
and updating the stored material management database according to the type information and the warehouse in and out information corresponding to the warehouse materials to be tested.
2. The method for image recognition-based storage material management according to claim 1, wherein the step of detecting the storage material by using the trained storage material detection model comprises the steps of: extracting features of an image to be detected through a feature extraction layer to obtain a feature image G1, and processing the feature image G1 through a scale-invariant feature positioning layer to obtain a scale-invariant feature positioning weight matrix; and carrying out weight assignment on the feature map G1 through a scale-invariant feature positioning weight matrix to obtain a feature map G11, and sequentially passing the feature map G11 through a global average pooling layer, a full connection layer and a classification layer to output type information of storage materials.
3. The warehouse material management method based on image recognition according to claim 2, wherein the feature extraction layer in the warehouse material detection model is constructed based on an improved resnet50 and comprises a first improved residual part, a second improved residual part, a third improved residual part, a fourth improved residual part and a first upsampling part, the first, second, third and fourth improved residual parts respectively comprising 3, 4, 6 and 3 improved residual blocks; each improved residual block comprises a first convolution layer, a feature splitting layer, P second convolution layers, a third convolution layer and three nonlinear activation layers, where P is determined by the number of channels of the feature map input to the feature splitting layer; the three nonlinear activation layers are located between the first convolution layer and the feature splitting layer, between the P second convolution layers and the third convolution layer, and behind the third convolution layer, and each nonlinear activation layer comprises a batch normalization layer and a ReLU function for normalizing and nonlinearly transforming the input features; the feature splitting layer divides the input feature map into groups: the feature map input to the feature splitting layer is denoted S1, the number of channels of feature map S1 is denoted c, and the feature splitting layer splits feature map S1 by channel into P feature groups D_p, p ∈ {1, 2, 3, …, P}, P = c/32, each feature group D_p being convolved by its corresponding second convolution layer; the first upsampling part is used to change the size of the feature map input to it.
4. A warehouse material management method based on image recognition as claimed in claim 3, wherein the feature extraction of the image to be tested through the feature extraction layer comprises the steps of: the feature images obtained after the images to be measured are sequentially processed by a first improved residual part, a second improved residual part, a third improved residual part and a fourth improved residual part are respectively marked as a feature image F1, a feature image F2, a feature image F3 and a feature image F4, and the feature image F4 is processed by a first upsampling part and then is spliced and fused with the feature image F3 to obtain a feature image F5; after one convolution operation, the feature map F5 is sent to a first up-sampling part for processing, and then is spliced and fused with the feature map F2 to obtain a feature map F6; after one convolution operation, the feature map F6 is sent to a first up-sampling part for processing, and then is spliced and fused with the feature map F1 to obtain a feature map F7; and takes the feature map F7 as the feature map output by the feature extraction layer.
5. The method of claim 4, wherein the scale-invariant feature positioning layer in the warehouse material detection model comprises a wavelet scattering network part, a multi-view fusion part and a second upsampling part; the wavelet scattering network part comprises three wavelet scattering network layers with different built-in wavelet scattering network parameters {J, R, M}, where J is the largest wavelet transform scattering scale of the wavelet scattering network layer, R is the rotation direction combination of the wavelet scattering network layer, R = {R_1, R_2, R_3, …, R_n, …, R_N}, n ∈ {1, 2, 3, …, N}, R_n is a rotation direction in the rotation direction combination, and M is the maximum scattering order of the wavelet scattering network layer; the wavelet scattering network layers are used for extracting scale-invariant features; the multi-view fusion part comprises a first cavity convolution layer, a second cavity convolution layer, a third cavity convolution layer and a channel attention mechanism layer, the first, second and third cavity convolution layers being provided with different cavity convolution kernels; the second upsampling part is used to change the size of the feature map input to it.
6. The method for warehouse material management based on image recognition as claimed in claim 5, wherein the step of obtaining the scale-invariant feature localization weight matrix through the scale-invariant feature localization layer comprises the steps of: obtaining a feature map output by a feature extraction layer, namely a feature map G1, sending the feature map G1 into a wavelet dispersion network part, respectively obtaining three scale-invariant features after three wavelet dispersion network layers, respectively marking the three scale-invariant features as a feature map G2, a feature map G3 and a feature map G4, and increasing the corresponding sizes of the feature map G2, the feature map G3 and the feature map G4 one by one; processing the feature map G2, the feature map G3 and the feature map G4 through a multi-view fusion part to obtain a feature map G5, a feature map G6 and a feature map G7; the feature map G5 is sent to a second up-sampling part for processing after one convolution operation, and then is spliced and fused with the feature map G6 to obtain a feature map G8; the feature map G8 is sent to a second up-sampling part for processing after one convolution operation, and then is spliced and fused with the feature map G7 to obtain a feature map G9; the feature map G9 is sent to a second up-sampling part for processing after one convolution operation, and then is spliced and fused with the feature map G1 to obtain a feature map G10; and calculating the feature map G10 through a softmax function to obtain a scale-invariant feature positioning weight matrix.
7. The warehouse material management method based on image recognition according to claim 6, wherein extracting scale-invariant features from feature map G1 through the wavelet scattering network part comprises the following steps: for any wavelet scattering network layer, obtaining the wavelet scattering network parameters of that layer and constructing the wavelet function ψ_{j,m,r}(x); the wavelet function ψ_{j,m,r}(x) = 2^{-2j} ψ(2^{-j} r^{-1} x) performs, on the feature map x input at the m-th order of wavelet scattering, a wavelet transform with scattering scale j and rotation direction r, where m is the scattering order, m ∈ {0, 1, 2, …, M}, j is the wavelet transform scattering scale, j = m − 1, and r is a rotation direction, r ∈ {R_1, R_2, R_3, …, R_n, …, R_N}; ψ_{m,R} denotes the wavelet functions of scattering order m over the rotation direction combination R; determining, according to the wavelet transform scattering scale j, the scattering order m and the rotation direction r, the wavelet scattering path set of order m as P_m = {p_0, p_1, p_2, …, p_k, …, p_K}, K = m, where p_k denotes the wavelet scattering paths composed of k wavelet functions selected from the set {ψ_{0,R}, ψ_{1,R}, ψ_{2,R}, …, ψ_{m−1,R}} followed by the wavelet function ψ_{m,R}; expanding the wavelet scattering path set P_m into {ζ_0, ζ_1, ζ_2, …, ζ_y, …, ζ_Y}; generating from the wavelet scattering path set P_m the corresponding filter set filters = {μ_0, μ_1, μ_2, …, μ_y, …, μ_Y}, where μ_y is the filter corresponding to the m-th order wavelet scattering path ζ_y = (ψ_{i,R}, …, ψ_{m,R}) and i is the index of a wavelet function in the path ζ_y; generating from the filter set the scattering transform intermediate value U_m(μ_y) by the formula U_m(μ_y) = |…||x ∗ ψ_{i,R}| … ∗ ψ_{m,R}|; selecting a window function φ_J(x) = 2^{-2J} φ(2^{-J} x), where φ(x) = (4π)^{-0.5} exp(−0.25|x|²) is a two-dimensional Gaussian function; calculating the reference scattering coefficient T_Z = x ∗ φ_J(x), and outputting, according to the window function φ_J(x) and the scattering transform intermediate value U_m(μ_y), the scattering coefficient T_m(μ_y) by the formula T_m(μ_y) = U_m(μ_y) ∗ φ_J(x); concatenating all scattering coefficients T_m(μ_y) and the reference scattering coefficient T_Z = x ∗ φ_J(x) and outputting the scale-invariant features; and arranging the scale-invariant features output by the three wavelet scattering network layers from small to large by size, denoted feature map G2, feature map G3 and feature map G4.
8. The warehouse material management method based on image recognition as claimed in claim 7, wherein the processing of the feature map G2, the feature map G3, and the feature map G4 through the multi-view fusion part includes the steps of: carrying out convolution operation on the feature map G2 through the first cavity convolution layer, the second cavity convolution layer and the third cavity convolution layer to obtain a feature map E1, a feature map E2 and a feature map E3; the feature map G2 is sent to a channel attention mechanism layer, the channel attention mechanism layer respectively carries out global average pooling and global maximum pooling on the feature map G2 according to channels to obtain a first channel weight vector and a second channel weight vector, and a third channel weight vector is obtained by carrying out convolution after splicing the first channel weight vector and the second channel weight vector; the feature map E1, the feature map E2 and the feature map E3 are spliced and fused, then convolution is carried out for one time to obtain a feature map E4, and weight assignment is carried out on the feature map E4 through a third channel weight vector to obtain a feature map G5; the same operation is performed on the feature map G3 and the feature map G4 to obtain a feature map G6 and a feature map G7, respectively.
9. The method for managing warehouse materials based on image recognition according to claim 8, wherein training the warehouse material detection model comprises the following steps: acquiring training images annotated with type information; combining all training images annotated with type information into a training image set; training the initialized warehouse material detection model on the training image set using an alternating optimization method; and outputting the trained warehouse material detection model if the corresponding cross-entropy loss value is within a preset range, otherwise continuing iterative training.
10. A warehouse material management system based on image recognition, comprising:
the warehouse-in and warehouse-out information acquisition module is used for acquiring warehouse-in and warehouse-out information of warehouse materials to be tested;
the X-ray image acquisition module is used for acquiring the X-ray image of the warehouse material to be detected corresponding to the warehouse material to be tested;
the X-ray image preprocessing module is used for preprocessing the X-ray image of the storage material to be detected to obtain an image to be detected;
the warehouse material detection model management module is used for training and storing a warehouse material detection model;
the storage material detection module is used for processing the image to be detected according to the storage material detection model and outputting type information corresponding to the storage material to be detected;
and the stored material management database updating module is used for updating the stored material management database according to the type information and the warehouse in and out information corresponding to the warehouse materials to be tested.
CN202311320510.2A 2023-10-12 2023-10-12 Warehouse material management method and system based on image recognition Active CN117058473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311320510.2A CN117058473B (en) 2023-10-12 2023-10-12 Warehouse material management method and system based on image recognition

Publications (2)

Publication Number Publication Date
CN117058473A true CN117058473A (en) 2023-11-14
CN117058473B CN117058473B (en) 2024-01-16

Family

ID=88663098


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800716A (en) * 2019-01-22 2019-05-24 华中科技大学 One kind being based on the pyramidal Oceanic remote sensing image ship detecting method of feature
CN111340882A (en) * 2020-02-20 2020-06-26 盈嘉互联(北京)科技有限公司 Image-based indoor positioning method and device
CN112287985A (en) * 2020-10-16 2021-01-29 贵州大学 Brain glioma histological classification based on invariant features and visualization method thereof
JP2022022139A (en) * 2020-07-22 2022-02-03 本田技研工業株式会社 Image identification device, method of performing semantic segmentation, and program
WO2022079527A1 (en) * 2020-10-12 2022-04-21 Everseen Limited Goods receipt management system and method
CN114821554A (en) * 2022-04-02 2022-07-29 澳门科技大学 Image recognition method, electronic device, and storage medium
CN114913413A (en) * 2022-04-19 2022-08-16 浙江工贸职业技术学院 A goods sorter for logistics storage
CN115170816A (en) * 2022-07-19 2022-10-11 华北电力大学(保定) Multi-scale feature extraction system and method and fan blade defect detection method
CN115759186A (en) * 2022-11-29 2023-03-07 北京邮电大学 Six-class motor imagery electroencephalogram signal classification method based on convolutional neural network
EP4177793A1 (en) * 2021-11-05 2023-05-10 Nuctech Company Limited Method and system of verifying authenticity of declaration information, device and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU, Huajuan et al.: "Texture segmentation method based on wavelet scattering convolutional network", Microelectronics & Computer, vol. 30, no. 05, pages 31-34 *

Also Published As

Publication number Publication date
CN117058473B (en) 2024-01-16

Similar Documents

Publication Publication Date Title
Kanwal et al. The devil is in the details: Whole slide image acquisition and processing for artifacts detection, color variation, and data augmentation: A review
JP5984559B2 (en) Necrotic cell region detection apparatus and method, and necrotic cell region detection program
CN112633297B (en) Target object identification method and device, storage medium and electronic device
US10346980B2 (en) System and method of processing medical images
CN111553422A (en) Automatic identification and recovery method and system for surgical instruments
CN111179261A (en) Defect detection method, system, terminal device and storage medium
Shihavuddin et al. Automated classification and thematic mapping of bacterial mats in the north sea
CN112313718A (en) Image-based novelty detection of material samples
Wang et al. Focuslitenn: High efficiency focus quality assessment for digital pathology
CN117474929A (en) Tray outline dimension detection method and system based on machine vision
CN113569679B (en) Method, device and system for measuring elongation at break
CN117315210B (en) Image blurring method based on stereoscopic imaging and related device
CN117058473B (en) Warehouse material management method and system based on image recognition
Erener et al. A methodology for land use change detection of high resolution pan images based on texture analysis
EP3940370A1 (en) Method for extracting spectral information of object to be detected
CN116174342B (en) Board sorting and packaging method, terminal and board production line
CN108764112A (en) A kind of Remote Sensing Target object detecting method and equipment
CN117880479A (en) Computing unit and method for estimating depth map from digital hologram, encoding method of video sequence, computer program
CN111882521A (en) Image processing method of cell smear
Tapia et al. Face feature visualisation of single morphing attack detection
Alvarez-Ramos et al. Automatic classification of Nosema pathogenic agents through machine vision techniques and kernel-based vector machines
CN110333185B (en) Material quality detection method
CN113139932A (en) Deep learning defect image identification method and system based on ensemble learning
Mahmood et al. Measuring focus quality in vector valued images for shape from focus
Jenerowicz et al. Comparison of mathematical morphology with the local multifractal description applied to the image samples processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant