CN111695621A - System and method for matching customized-content near-planar regular articles with orders based on deep learning - Google Patents


Info

Publication number
CN111695621A
CN111695621A (application CN202010517670.6A; granted publication CN111695621B)
Authority
CN
China
Prior art keywords: image, matching, order, corner, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010517670.6A
Other languages: Chinese (zh)
Other versions: CN111695621B (en)
Inventor
邹国平 (Zou Guoping)
彭飞 (Peng Fei)
吕茂鑫 (Lyu Maoxin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yinge Technology Co., Ltd.
Original Assignee
Hangzhou Yinge Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yinge Technology Co., Ltd.
Priority to CN202010517670.6A
Publication of CN111695621A
Application granted
Publication of CN111695621B
Legal status: Active (granted)


Classifications

    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F 16/51 — Information retrieval of still image data; indexing; data structures and storage structures therefor
    • G06F 16/583 — Retrieval of still image data characterised by metadata automatically derived from the content
    • G06N 3/045 — Neural networks; architectures; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06Q 30/0621 — Electronic shopping; item configuration or customization
    • G06Q 30/0635 — Electronic shopping; processing of requisitions or purchase orders
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners
    • Y02P 90/30 — Computing systems specially adapted for manufacturing (enabling technologies for GHG emissions mitigation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A deep-learning-based system and method for matching customized-content near-planar regular articles with orders, belonging to the technical fields of image recognition and machine vision. The system comprises an image-library feature extraction and index building module, a production-line image acquisition module, a corner detection and image alignment module, an image feature extraction module, a feature matching and result output module, a corner detection model trainer, and an image matching model trainer. After features are extracted from the order images of a batch of near-planar regular articles and indexed by batch, the finished articles of that batch are automatically and accurately matched on the production line with the corresponding user order images, solving the sorting problem for customized-content near-planar regular articles.

Description

System and method for matching customized-content near-planar regular articles with orders based on deep learning
Technical Field
The invention belongs to the technical fields of image recognition and machine vision, and in particular relates to a deep-learning-based system and method for matching customized-content near-planar regular articles with orders.
Background
E-commerce has now matured, and people can buy most products over the internet; these are, of course, mass-produced standard products. As living standards rise, people are no longer satisfied with one-size-fits-all standardized products and instead want personalized products that express individuality. The most direct form of differentiation is to print specific content onto a standard blank, yielding a customized-content product: phone cases, photo frames, mugs, canvas bags and the like whose patterns users can freely customize, so that the original standard products become personalized ones rich in sizes, materials and patterns. Mass production of such personalized products raises a problem, however: how to match each finished customized item with the user order it belongs to. Traditionally, matching and sorting are done manually on the production line, but as capacity rises, manual matching severely constrains production efficiency. An intelligent device is therefore needed that automatically matches finished customized-content products with user orders on the production line.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a deep-learning-based system and method for matching customized-content near-planar regular articles with orders. After features are extracted from the order images of near-planar regular articles and indexed by batch, the finished articles of that batch are automatically and accurately matched with user order images on the production line, solving the sorting problem for customized-content near-planar regular articles. Accurate matching does not depend on favorable shooting conditions, on the pose of the article to be matched, or on how rich its surface pattern content is, so the method is robust across a wide range of scenes.
The deep-learning-based system for matching customized-content near-planar regular articles with orders is characterized by comprising an image-library feature extraction and index building module, a production-line image acquisition module, a corner detection and image alignment module, an image feature extraction module, a feature matching and result output module, a corner detection model trainer, and an image matching model trainer;
the image-library feature extraction and index building module extracts features from the order image library and builds an index;
the production-line image acquisition module cooperates with hardware to trigger and shoot product images on the production line, send them out, and receive the matching results;
the corner detection and image alignment module detects the corners of the target region and aligns the target-region image; before this step, a corner detection model is trained in a corner detection model training step;
the image feature extraction module extracts features from the target-region image; before this step, an image matching model is trained in an image matching model training step;
the feature matching and result output module performs feature matching and outputs the result.
The method of the deep-learning-based system for matching customized-content near-planar regular articles with orders is characterized by comprising the following steps:
S101, extracting features from the order image library and building an index via the image-library feature extraction and index building module;
S102, acquiring production-line images via the production-line image acquisition module;
S103, detecting the corners of the target region and aligning the target-region image via the corner detection and image alignment module; a corner detection model training step S1030, which trains the corner detection model, precedes this step;
S104, extracting target-region image features via the image feature extraction module; an image matching model training step S1040, which trains the image matching model, precedes this step;
S105, performing feature matching and outputting the result via the feature matching and result output module.
The method is further characterized in that the near-planar regular article is a customized-content mobile phone case, photo frame, mug, or canvas bag.
The method described above is further characterized in that step S101 specifically comprises:
S1011, preprocessing the order pictures of a given batch: scale each picture so that its longest edge is 256 pixels, place it in the middle of a 128x256 canvas whose remaining pixels are set to (128, 128, 128), and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
S1012, inputting the resulting 3-channel 128x256 data into the image matching network, which outputs a 128-dimensional floating-point vector as the image feature;
S1013, building a KD-Tree structure over the extracted features of all order pictures of the batch, forming an index with the order ids, and saving it as an index file;
S1014, saving the feature data of all order pictures, the built KD-Tree structure, and the corresponding order ids together as a binary feature-and-index data file.
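The S1011 preprocessing can be sketched as follows. This is a minimal numpy sketch, not the patent's implementation: nearest-neighbour resizing stands in for a proper interpolating resize (the patent does not name one), and it assumes the input is portrait-oriented so that the resized picture fits the 128x256 canvas.

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Letterbox an RGB image onto a 128x256 canvas and normalise it (S1011).

    Longest edge is scaled to 256, the picture is centred on a
    (128, 128, 128)-filled canvas, then each channel is mapped through
    (x - 128) / 128. Nearest-neighbour resize is a stand-in (assumption).
    """
    h, w = img.shape[:2]
    scale = 256.0 / max(h, w)
    nh = max(1, round(h * scale))
    nw = max(1, round(w * scale))
    # nearest-neighbour resize via index lookup
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.full((256, 128, 3), 128, dtype=np.float32)
    crop = resized[:256, :128]            # guard against landscape inputs
    ch, cw = crop.shape[:2]
    top, left = (256 - ch) // 2, (128 - cw) // 2
    canvas[top:top + ch, left:left + cw] = crop
    return (canvas - 128.0) / 128.0       # values roughly in [-1, 1]
```

The same routine also covers S1041 and S10402, which repeat this preprocessing.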
The method described above is further characterized in that step S103 specifically comprises:
S1031, preprocessing the product pictures collected on the production line: scale each picture so that its longest edge is 320 pixels, store the horizontal and vertical scaling coefficients, pad the short-edge direction with pixel value (128, 128, 128) until the image is 320x320, and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
S1032, inputting the resulting 3-channel 320x320 data into the corner detection network, which outputs 4 corner response maps and 1 background response map, each of size 40x40;
S1033, processing the 4 corner response maps: enlarge each map to 320x320 by bilinear interpolation, find the peak response value and its coordinates in each enlarged map, and divide the coordinates by the horizontal and vertical scaling coefficients stored in S1031 to obtain the corner coordinates;
S1034, applying a perspective transformation based on the 4 computed corner coordinates to obtain an aligned image of the target region enclosed by the 4 corners.
The method described above is further characterized in that step S104 specifically comprises:
S1041, preprocessing the target-region image extracted in S1034: scale the picture so that its longest edge is 256 pixels, place it in the middle of a 128x256 canvas whose remaining pixels are set to (128, 128, 128), and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
S1042, inputting the resulting 3-channel 128x256 data into the image matching network, which outputs a 128-dimensional floating-point vector as the image feature;
S1043, feeding the 128-dimensional floating-point vector as query data into the KD-Tree search engine built over the order image feature library, with the neighbor count k set to 1; the engine returns the sequence number of the closest order image feature and the distance to it;
S1044, looking up the returned sequence number in the order-image-feature-to-order-id index to obtain the matched order id, which is the first output; comparing the returned distance with the system's default threshold of 1.0: if the distance is less than or equal to the threshold, the match state is set to True and the result is considered credible; if it is greater, the match state is set to False, the result is considered not credible, and the system issues a warning; the match state is the second output; the coordinates of the 4 corners output in S1034 are the third output.
The method described above is further characterized in that the corner detection model training step S1030 specifically comprises:
S10301, manually annotating corners on finished near-planar regular article images collected from the production line, recording the 4 corner coordinates of each article region;
S10302, synthesizing finished-article images from customized-content near-planar article preview images and production-line conveyor-belt background images, likewise recording the 4 corner coordinates of each article region;
S10303, preprocessing each sample picture: scale it so that its longest edge is 320 pixels, store the horizontal and vertical scaling coefficients, pad the short-edge direction with pixel value (128, 128, 128) until the image is 320x320, and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
then generating corner response maps from the 4 corners: for each corner, create a 40x40 map initialized to 0 and assign response values, generated by a Gaussian function, to the circular region of radius 7 centered on the corner; average the 4 corner maps and subtract each position's average response from 1 to form the background map, so that each sample corresponds to 5 maps of size 40x40;
S10304, feeding the preprocessed sample pictures and their response maps to the corner detection network, a convolutional neural network consisting of 15 convolutional layers;
S10305, building the corner detection network and training procedure with PyTorch, with initial learning rate 0.01, 300 training epochs, the SGD optimizer, and an L2-norm loss function; the output is the trained corner detection model.
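The S10304/S10305 training setup can be sketched in PyTorch as below. This is a hedged illustration only: the patent's network has 15 convolutional layers, while `TinyCornerNet` is a 3-layer stand-in with an assumed stride pattern (320 → 40); only the configured recipe (SGD, learning rate 0.01, L2 loss on the 5 response maps) is taken from the source.

```python
import torch
import torch.nn as nn

class TinyCornerNet(nn.Module):
    """3-layer stand-in for the patent's 15-conv-layer corner detector."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 320 -> 160
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 160 -> 80
            nn.Conv2d(32, 5, 3, stride=2, padding=1),              # 80 -> 40
        )

    def forward(self, x):
        # (N, 5, 40, 40): 4 corner response maps + 1 background map
        return self.body(x)

model = TinyCornerNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # lr 0.01 as in S10305
criterion = nn.MSELoss()                                  # L2-norm loss

# One illustrative step on random stand-in data (real training runs 300 epochs).
images = torch.randn(2, 3, 320, 320)
heatmaps = torch.rand(2, 5, 40, 40)
pred = model(images)
loss = criterion(pred, heatmaps)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```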
The method described above is further characterized in that the image matching model training step S1040 specifically comprises:
S10401, generating image matching samples from the customized-content near-planar article preview images: for each order's preview image, produce 20 samples by image augmentation according to a preset scheme comprising random color transformation, random rotation, random edge cropping by 20%, random erasing by 20%, random Gaussian noise, and random Gaussian blur;
S10402, preprocessing each sample picture: scale it so that its longest edge is 256 pixels, place it in the middle of a 128x256 canvas whose remaining pixels are set to (128, 128, 128), and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
S10403, feeding the preprocessed sample pictures and their order numbers into the image matching network, a convolutional neural network consisting of 50 convolutional layers;
S10404, building the image matching network and training procedure with PyTorch, with initial learning rate 0.01, 100 training epochs, the SGD optimizer, and an additive angular margin loss function; the output is the trained image matching model.
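The additive angular margin loss named in S10404 (ArcFace-style) can be sketched in numpy as below. The scale `s` and margin `m` values are illustrative defaults, not values stated in the patent, and the class-centre weight matrix is a hypothetical stand-in for the matching network's final layer.

```python
import numpy as np

def arcface_loss(embeddings, weights, labels, s=30.0, m=0.5):
    """Additive angular margin softmax loss (sketch of S10404's loss).

    embeddings: (N, D) sample features; weights: (C, D) class centres;
    labels: (N,) ground-truth class indices. s and m are assumed defaults.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)        # cosine similarity to each class
    theta = np.arccos(cos)
    n = len(labels)
    # add the angular margin m only on the ground-truth class
    theta[np.arange(n), labels] += m
    logits = s * np.cos(theta)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), labels].mean()
```

Penalizing the angle rather than the raw cosine pushes same-order samples into a tighter angular cluster, which is what makes the later nearest-neighbour matching discriminative.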
The deep-learning-based system and method for matching customized-content near-planar regular articles with orders uses modern deep learning to build a model that locates and aligns the corners of a near-planar regular article at any angle on the production line, together with an article matching model. After corner localization and alignment, precise matching features are extracted for optimal matching, completing automatic matching of finished articles with user orders on the production line and greatly improving production efficiency. More generally, the method applies not only to intelligent matching and sorting on customized-content near-planar regular article production lines, but also to other customized products with roughly rectangular personalized content; it is robust and fast.
Drawings
FIG. 1 is a block diagram of the architecture of the system of the present invention;
FIG. 2 is a flow chart of order image library feature extraction and index construction according to the present invention;
FIG. 3 is a flow chart of the corner detection of the target area and the image alignment of the target area according to the present invention;
FIG. 4 is a flowchart of the target area image feature extraction according to the present invention;
FIG. 5 is a flow chart of the corner detection model training module of the present invention;
FIG. 6 is a flow diagram of an image matching model training module of the present invention;
in the figure: the method comprises the steps of 1-image library feature extraction and index building module, 2-production line image acquisition module, 3-angular point detection and image alignment module, 4-image feature extraction module, 5-feature matching and result output module, 301-training angular point detection model and 401-loading image matching model.
Detailed Description
The invention is further described below with reference to the accompanying drawings, which are simplified schematics showing only the structures relevant to the invention.
As shown in the figures, the deep-learning-based system for matching customized-content near-planar regular articles with orders comprises an image-library feature extraction and index building module 1, a production-line image acquisition module 2, a corner detection and image alignment module 3, an image feature extraction module 4, a feature matching and result output module 5, a corner detection model trainer 301, and an image matching model trainer 401;
the image-library feature extraction and index building module 1 extracts features from the order image library and builds an index;
the production-line image acquisition module 2 generally cooperates with hardware to trigger and shoot product images on the production line, send them out, and receive the matching results;
the corner detection and image alignment module 3 detects the corners of the target region and aligns the target-region image; a corner detection model training step using trainer 301 precedes this step; in particular, when a corner detection model already exists and needs no further optimization, trainer 301 may be omitted;
the image feature extraction module 4 extracts features from the target-region image; an image matching model training step using trainer 401 precedes this step; in particular, when an image matching model already exists and needs no further optimization, trainer 401 may be omitted;
the feature matching and result output module 5 performs feature matching and outputs the result.
The method of the deep-learning-based system for matching customized-content near-planar regular articles with orders comprises the following steps:
S101, extracting features from the order image library and building an index via the image-library feature extraction and index building module 1. This step is performed only once, after a batch of order pictures is first imported; it need not be repeated before matching the finished-product pictures of that batch. Instead, the system checks whether the feature and index files for the batch already exist: if so, it loads the files and their feature and index data directly; if not, it performs this step.
S102, acquiring production-line images via the production-line image acquisition module 2.
S103, detecting the corners of the target region and aligning the target-region image via the corner detection and image alignment module 3. The product pictures collected in S102 show an individual product placed in some pose against some background; this step precisely detects the four corner coordinates of the product and applies a perspective transformation based on them to obtain an upright target-region image containing only the product. A corner detection model training step S1030, which trains the corner detection model 301, precedes this step.
S104, extracting target-region image features via the image feature extraction module 4. The image feature is the output obtained by feeding the target-region image to the image matching model: a one-dimensional vector describing the visual content of the image, such that measuring or comparing features reflects the visual similarity between images. An image matching model training step S1040, which trains the image matching model 401, precedes this step.
S105, performing feature matching and outputting the result via the feature matching and result output module 5. Feature matching compares the query feature one by one against the features of the library built in step S101, computes and sorts the distances; the order corresponding to the matched feature closest to the query feature is the matching result.
The near-planar regular articles of the invention include, but are not limited to, customized-content mobile phone cases, photo frames, mugs, canvas bags, and the like.
As shown in FIG. 2, step S101 preferably comprises:
S1011, preprocessing the order pictures of a given batch: scale each picture so that its longest edge is 256 pixels, place it in the middle of a 128x256 canvas whose remaining pixels are set to (128, 128, 128), and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
S1012, inputting the resulting 3-channel 128x256 data into the image matching network, which outputs a 128-dimensional floating-point vector as the image feature;
S1013, building a KD-Tree structure over the extracted features of all order pictures of the batch, with the number of trees set to 4 and Euclidean distance as the distance metric, forming an index with the order ids and saving it as an index file;
S1014, saving the feature data of all order pictures, the built KD-Tree structure, and the corresponding order ids together as a binary feature-and-index data file.
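Steps S1013/S1014 can be sketched as below. This is a hedged stand-in: SciPy's single exact `cKDTree` substitutes for the patent's 4-tree randomized index (the Euclidean metric does match the patent), and the feature values, order-id format, and file name are illustrative inventions for the sketch.

```python
import pickle
import numpy as np
from scipy.spatial import cKDTree

# One 128-D feature row per order image (random stand-in data).
features = np.random.rand(1000, 128).astype(np.float32)
order_ids = [f"order-{i:06d}" for i in range(len(features))]  # illustrative ids

# S1013: KD-tree over the batch's features (Euclidean metric).
tree = cKDTree(features)

# S1014: persist features, tree, and the id index as one binary file.
with open("order_index.bin", "wb") as fh:
    pickle.dump({"features": features, "tree": tree, "order_ids": order_ids}, fh)

# Later runs load the file instead of re-extracting (cf. step S101).
with open("order_index.bin", "rb") as fh:
    index = pickle.load(fh)
dist, seq = index["tree"].query(features[42], k=1)  # k = 1, as in S1043
```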
As shown in FIG. 3, step S103 preferably comprises:
S1031, preprocessing the product pictures collected on the production line: scale each picture so that its longest edge is 320 pixels, store the horizontal and vertical scaling coefficients, pad the short-edge direction with pixel value (128, 128, 128) until the image is 320x320, and normalize each pixel value on the RGB channels by subtracting 128 and dividing by 128;
S1032, inputting the resulting 3-channel 320x320 data into the corner detection network, which outputs 4 corner response maps and 1 background response map, each of size 40x40;
S1033, processing the 4 corner response maps: enlarge each map to 320x320 by bilinear interpolation, find the peak response value and its coordinates in each enlarged map, and divide the coordinates by the horizontal and vertical scaling coefficients stored in S1031 to obtain the corner coordinates;
S1034, applying a perspective transformation based on the 4 computed corner coordinates to obtain an aligned image of the target region enclosed by the 4 corners.
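The S1033 peak extraction and the S1034 perspective transform can be sketched in numpy as below. The homography is computed by a direct linear transform; in practice `cv2.getPerspectiveTransform`/`cv2.warpPerspective` would do the same job. The map sizes (40 → 320) come from the source; everything else is an illustrative sketch.

```python
import numpy as np

def upsample_bilinear(fm: np.ndarray, size: int = 320) -> np.ndarray:
    """Bilinear upsampling of a square response map (S1033)."""
    h, w = fm.shape
    ys = np.linspace(0, h - 1, size)
    xs = np.linspace(0, w - 1, size)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = fm[y0][:, x0] * (1 - wx) + fm[y0][:, x1] * wx
    bot = fm[y1][:, x0] * (1 - wx) + fm[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def corners_from_maps(maps, sx, sy):
    """Peak of each upsampled map, rescaled by the stored horizontal/vertical
    scaling coefficients to original-image coordinates (S1033)."""
    pts = []
    for fm in maps:                       # the 4 corner response maps
        up = upsample_bilinear(fm)
        y, x = np.unravel_index(np.argmax(up), up.shape)
        pts.append((x / sx, y / sy))
    return np.array(pts, dtype=np.float64)

def homography(src, dst):
    """Perspective transform mapping 4 detected corners onto an axis-aligned
    rectangle (S1034), via the direct linear transform."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)
```

Warping the source image through the resulting 3x3 matrix yields the aligned target-region image enclosed by the 4 corners.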
As shown in fig. 4, preferably, step S104 specifically includes:
S1041, preprocessing the target area image extracted in S1034: scaling the picture so that its longest edge is 256 pixels, placing it in the middle of a 128x256 image, and setting the pixel values at all other positions to (128,128,128); normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
S1042, inputting the processed 3-channel 128x256 data into the image matching network, which outputs a 128-dimensional floating-point vector as the image feature;
and S1043, submitting the 128-dimensional floating-point vector as query data to the KD-Tree structure built over the order image feature library (with the number of trees set to 4 and Euclidean distance as the distance metric), with the neighbor count k set to 1; the search engine returns the order image feature serial number closest to the query data together with its distance value.
S1044, looking up the returned order image feature serial number in the order image feature and order id index to obtain the matched order id, which is taken as the first output result; comparing the returned distance value with the system default threshold 1.0: if the distance is less than or equal to the threshold, the current matching state is set to True, indicating that the current matching result is credible; if the distance is greater than the threshold, the matching state is set to False, indicating that the current matching result is not credible, and the system issues a warning; the matching state is taken as the second output result. In addition, the coordinates of the 4 corners output in S1034 are taken as the third output result.
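The query-and-threshold logic of S1043/S1044 can be sketched as below. The patent indexes the feature library with a 4-tree KD-Tree; a brute-force Euclidean nearest-neighbour search returns the same k=1 answer and keeps this sketch dependency-free. The function name and return convention are assumptions for illustration.

```python
import numpy as np

def match_order(query, feature_bank, order_ids, threshold=1.0):
    """Sketch of S1043/S1044: nearest-neighbour lookup over the order
    feature bank plus the credibility check against the default distance
    threshold of 1.0. Returns (matched order id, matching state, distance)."""
    dists = np.linalg.norm(feature_bank - query, axis=1)  # Euclidean metric
    idx = int(np.argmin(dists))                           # neighbor count k = 1
    matched_state = bool(dists[idx] <= threshold)         # True = credible
    return order_ids[idx], matched_state, float(dists[idx])
```

With a real index the `argmin` line is replaced by the KD-Tree engine's k=1 query; the threshold comparison and the True/False matching state are unchanged.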
As shown in fig. 5, preferably, the corner detection model training step S1030 specifically includes:
S10301, manually marking the corners of the finished near-plane regular article images collected on the production line, and recording the 4 corner coordinates of each near-plane regular article area;
S10302, synthesizing finished near-plane regular article images from the customized-content near-plane regular article preview images and production line conveyor belt background images, and recording the 4 corner coordinates of each near-plane regular article area;
S10303, preprocessing the sample pictures: scaling each picture so that its longest edge is 320 pixels, storing the horizontal and vertical scaling coefficients, and then padding along the short-edge direction with the pixel value (128,128,128) until the image size is 320x320; normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
generating corner response feature maps from the 4 corners: for each corner, generating a 40x40 feature map with every position initialized to 0 and assigning response values, generated by a Gaussian function, to the circular area centered on the corner with radius 7; averaging the 4 corner feature maps and subtracting the response at each position of the average feature map from 1 to obtain the background feature map, so that each sample corresponds to 5 feature maps of size 40x40;
S10304, inputting the processed sample pictures and their feature maps into the corner detection network, a convolutional neural network consisting of 15 convolutional layers;
S10305, building the corner detection network and the training procedure with PyTorch, setting the initial learning rate to 0.01 and the number of training epochs to 300, selecting SGD as the optimizer and the L2-norm loss as the loss function, and finally outputting the corner detection model.
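The supervision-map generation described above (the unnumbered step of S1030) can be sketched as follows. The Gaussian width `sigma` is an assumption of this sketch — the patent only states that a Gaussian function generates the responses inside the radius-7 disc.

```python
import numpy as np

def corner_targets(corners, size=40, radius=7, sigma=3.0):
    """Build the 5 supervision maps for one training sample: one 40x40
    Gaussian response map per corner (non-zero only inside the radius-7
    circular area around the corner), plus a background map defined as
    1 minus the average of the 4 corner maps."""
    ys, xs = np.mgrid[0:size, 0:size]
    maps = []
    for cx, cy in corners:                    # corner in 40x40 grid coordinates
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        g = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian response, peak 1 at corner
        g[d2 > radius ** 2] = 0.0             # zero outside the radius-7 disc
        maps.append(g)
    maps = np.stack(maps)                     # (4, 40, 40)
    background = 1.0 - maps.mean(axis=0)      # (40, 40)
    return np.concatenate([maps, background[None]])   # (5, 40, 40)
```

These 5 maps per sample are the regression targets that the 15-layer network of S10304 is trained against with the L2-norm loss.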
As shown in fig. 6, preferably, the image matching model training step S1040 specifically includes:
S10401, generating image matching samples from the customized-content near-plane regular article preview images: for the preview image of each order, generating 20 samples by image enhancement according to a preset scheme comprising random color transformation, random rotation, random edge cropping by 20%, random erasing by 20%, random Gaussian noise and random Gaussian blur;
S10402, preprocessing the sample pictures: scaling each picture so that its longest edge is 256 pixels, placing it in the middle of a 128x256 image, and setting the pixel values at all other positions to (128,128,128); normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
S10403, inputting the processed sample pictures and their order numbers into the image matching network, a convolutional neural network consisting of 50 convolutional layers;
S10404, building the image matching network and the training procedure with PyTorch, setting the initial learning rate to 0.01 and the number of training epochs to 100, selecting SGD as the optimizer and the additive angular margin loss (ArcFace) as the loss function, and finally outputting the image matching model.
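The 128x256 preprocessing shared by S1011, S1041 and S10402 can be sketched as follows. Nearest-neighbour resampling keeps the sketch dependency-free (a real pipeline would use a proper image-resize routine), and the sketch assumes a portrait article — e.g. a phone case — whose scaled width fits the 128-pixel-wide canvas; the function name is an assumption.

```python
import numpy as np

def preprocess_128x256(img):
    """Sketch of the shared matching-branch preprocessing: scale the
    longest edge to 256, centre the result on a 128(w) x 256(h) canvas
    filled with (128,128,128), then normalise each RGB channel to
    (v - 128) / 128."""
    h, w, _ = img.shape
    s = 256.0 / max(h, w)
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    # nearest-neighbour resize via integer index lookup
    rows = (np.arange(nh) / s).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / s).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.full((256, 128, 3), 128, np.float32)   # (128,128,128) fill
    top, left = (256 - nh) // 2, (128 - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return (canvas - 128.0) / 128.0                    # shape (256, 128, 3)
```

The resulting tensor is what the 128-dimensional-feature matching network consumes in S1012/S1042.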
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (8)

1. A deep-learning-based system for detecting the matching between customized-content near-plane regular articles and orders, characterized by comprising an image library feature extraction and index establishment module (1), a production line image acquisition module (2), a corner detection and image alignment module (3), an image feature extraction module (4), a feature matching and result output module (5), a corner detection model (301) to be trained and an image matching model (401) to be trained;
the image library feature extraction and index establishment module (1) is used for extracting the features of the order image library and establishing an index;
the production line image acquisition module (2) completes, through cooperating hardware, the triggering, shooting and sending of production line product image acquisition and the obtaining of matching results;
the corner detection and image alignment module (3) is used for detecting the corners of the target area and aligning the target area image, and is preceded by a corner detection model training step that trains the corner detection model (301);
the image feature extraction module (4) is used for extracting the image features of the target area, and is preceded by an image matching model training step that trains the image matching model (401);
and the feature matching and result output module (5) is used for feature matching and result output.
2. A detection method for the deep-learning-based system for matching customized-content near-plane regular articles with orders according to claim 1, characterized by comprising the following steps:
S101, extracting the features of the order image library and establishing an index through the image library feature extraction and index establishment module (1);
S102, acquiring production line images through the production line image acquisition module (2);
S103, detecting the corners of the target area and aligning the target area image through the corner detection and image alignment module (3), preceded by a corner detection model training step S1030 comprising the initial training of the corner detection model (301);
S104, extracting the image features of the target area through the image feature extraction module (4), preceded by an image matching model training step S1040 comprising the initial training of the image matching model (401);
and S105, performing feature matching and result output through the feature matching and result output module (5).
3. The detection method as claimed in claim 2, characterized in that the near-plane regular article is a customized-content mobile phone case, photo frame, mug or canvas bag.
4. The detection method as claimed in claim 2, characterized in that step S101 specifically comprises:
S1011, preprocessing a batch of order pictures: scaling each picture so that its longest edge is 256 pixels, placing it in the middle of a 128x256 image, and setting the pixel values at all other positions to (128,128,128); normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
S1012, inputting the processed 3-channel 128x256 data into the image matching network, which outputs a 128-dimensional floating-point vector as the image feature;
S1013, building a KD-Tree structure over the extracted features of all order pictures of the batch, forming an index with the order ids, and saving it as an index file;
and S1014, saving the feature data of all order pictures, the built KD-Tree structure and the corresponding order ids as a binary data file of features and indexes.
5. The detection method as claimed in claim 2, characterized in that step S103 specifically comprises:
S1031, preprocessing the product pictures collected on the production line: scaling each picture so that its longest edge is 320 pixels, storing the horizontal and vertical scaling coefficients, and then padding along the short-edge direction with the pixel value (128,128,128) until the image size is 320x320; normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
S1032, inputting the processed 3-channel 320x320 data into the corner detection network, which outputs 4 corner response feature maps and 1 background response feature map, each of size 40x40;
S1033, processing the 4 corner response feature maps: enlarging each corner response feature map to 320x320 by bilinear interpolation, finding the maximum response value and its coordinate position in each enlarged feature map, and dividing the coordinate position by the horizontal and vertical scaling coefficients stored in S1031 to obtain the corner coordinates;
and S1034, performing a perspective transformation according to the 4 calculated corner coordinates to obtain an aligned image of the target region enclosed by the 4 corners.
6. The detection method as claimed in claim 2, characterized in that step S104 specifically comprises:
S1041, preprocessing the target area image extracted in S1034: scaling the picture so that its longest edge is 256 pixels, placing it in the middle of a 128x256 image, and setting the pixel values at all other positions to (128,128,128); normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
S1042, inputting the processed 3-channel 128x256 data into the image matching network, which outputs a 128-dimensional floating-point vector as the image feature;
S1043, submitting the 128-dimensional floating-point vector as query data to the KD-Tree structure built over the order image feature library, with the neighbor count k set to 1; the search engine returns the order image feature serial number closest to the query data together with its distance value;
and S1044, looking up the returned order image feature serial number in the order image feature and order id index to obtain the matched order id, which is taken as the first output result; comparing the returned distance value with the system default threshold 1.0: if the distance is less than or equal to the threshold, the current matching state is set to True, indicating that the current matching result is credible; if the distance is greater than the threshold, the matching state is set to False, indicating that the current matching result is not credible, and the system issues a warning; the matching state is taken as the second output result; in addition, the coordinates of the 4 corners output in S1034 are taken as the third output result.
7. The detection method as claimed in claim 2, characterized in that the corner detection model training step S1030 specifically comprises:
S10301, manually marking the corners of the finished near-plane regular article images collected on the production line, and recording the 4 corner coordinates of each near-plane regular article area;
S10302, synthesizing finished near-plane regular article images from the customized-content near-plane regular article preview images and production line conveyor belt background images, and recording the 4 corner coordinates of each near-plane regular article area;
S10303, preprocessing the sample pictures: scaling each picture so that its longest edge is 320 pixels, storing the horizontal and vertical scaling coefficients, and then padding along the short-edge direction with the pixel value (128,128,128) until the image size is 320x320; normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
generating corner response feature maps from the 4 corners: for each corner, generating a 40x40 feature map with every position initialized to 0 and assigning response values, generated by a Gaussian function, to the circular area centered on the corner with radius 7; averaging the 4 corner feature maps and subtracting the response at each position of the average feature map from 1 to obtain the background feature map, so that each sample corresponds to 5 feature maps of size 40x40;
S10304, inputting the processed sample pictures and their feature maps into the corner detection network, a convolutional neural network consisting of 15 convolutional layers;
and S10305, building the corner detection network and the training procedure with PyTorch, setting the initial learning rate to 0.01 and the number of training epochs to 300, selecting SGD as the optimizer and the L2-norm loss as the loss function, and finally outputting the corner detection model.
8. The detection method as claimed in claim 2, characterized in that the image matching model training step S1040 specifically comprises:
S10401, generating image matching samples from the customized-content near-plane regular article preview images: for the preview image of each order, generating 20 samples by image enhancement according to a preset scheme comprising random color transformation, random rotation, random edge cropping by 20%, random erasing by 20%, random Gaussian noise and random Gaussian blur;
S10402, preprocessing the sample pictures: scaling each picture so that its longest edge is 256 pixels, placing it in the middle of a 128x256 image, and setting the pixel values at all other positions to (128,128,128); normalizing the pixel values on each image RGB channel by subtracting 128 and dividing by 128;
S10403, inputting the processed sample pictures and their order numbers into the image matching network, a convolutional neural network consisting of 50 convolutional layers;
and S10404, building the image matching network and the training procedure with PyTorch, setting the initial learning rate to 0.01 and the number of training epochs to 100, selecting SGD as the optimizer and the additive angular margin loss (ArcFace) as the loss function, and finally outputting the image matching model.
CN202010517670.6A 2020-06-09 2020-06-09 Method for detecting matching of customized article and order based on deep learning Active CN111695621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010517670.6A CN111695621B (en) 2020-06-09 2020-06-09 Method for detecting matching of customized article and order based on deep learning

Publications (2)

Publication Number Publication Date
CN111695621A true CN111695621A (en) 2020-09-22
CN111695621B CN111695621B (en) 2023-05-05

Family

ID=72479914


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066227A (en) * 2021-03-16 2021-07-02 广东便捷神科技股份有限公司 Unmanned vending machine supporting an advance-deposit function

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140101152A1 (en) * 2009-03-18 2014-04-10 Shutterfly, Inc. Proactive creation of image-based products
US20140376819A1 (en) * 2013-06-21 2014-12-25 Microsoft Corporation Image recognition by image search
US20150199583A1 (en) * 2012-07-27 2015-07-16 Hitachi High-Technologies Corporation Matching Process Device, Matching Process Method, and Inspection Device Employing Same
CN105404682A (en) * 2015-06-12 2016-03-16 北京卓视智通科技有限责任公司 Digital image content based book retrieval method
US20160342863A1 (en) * 2013-08-14 2016-11-24 Ricoh Co., Ltd. Hybrid Detection Recognition System


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shang Huichao: "Algorithm Research and System Implementation of On-line Inspection of Printed Images" *
Zhu Sicong, Zhou Delong: "A Survey of Corner Detection Techniques" *


Also Published As

Publication number Publication date
CN111695621B (en) 2023-05-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant