CN109978868A - Toy appearance quality detection method and related device - Google Patents
Toy appearance quality detection method and related device
- Publication number
- CN109978868A (application CN201910247494.6A)
- Authority
- CN
- China
- Prior art keywords
- layer
- feature map
- feature
- detected
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30161—Wood; Lumber
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a toy appearance quality detection method and related devices. The method includes: obtaining an image to be detected of a toy and determining a pre-trained deep convolutional neural network, where the network includes a first input layer for feature extraction, a pooling layer for performing a dimensionality-reduction operation on the feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel; performing feature extraction on the image to be detected according to the first input layer to obtain a feature map of the image; performing the dimensionality-reduction operation on the feature map according to the pooling layer; upsampling the dimensionality-reduced feature map according to the classification layer and classifying the upsampled feature map pixel by pixel to obtain the classification result of each pixel; and performing quality detection on the toy according to the classification results of the pixels. The method improves detection efficiency and the accuracy of detection results.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a toy appearance quality detection method, device, system, computer equipment, and computer-readable storage medium.
Background technique
Many toys are made of wood, and owing to the material their appearance may exhibit defects such as wormholes, cracks, chips, and bark inclusions. Wooden toys therefore require appearance quality inspection.
In the related art, there are two main modes of appearance quality inspection for wooden toys. The first is purely manual inspection, which relies on industry experts visually examining photos from the production environment and rendering a judgment. The second is machine-assisted manual inspection, in which an inspection system with some judgment capability filters out defect-free photos and industry experts then examine the photos suspected of containing defects. The second mode usually grows out of expert systems and feature engineering: experts encode their experience into the inspection system, giving it a degree of automation.
The problems at present are as follows: with manual inspection, staff must examine wooden toys with the naked eye; with computer-aided inspection systems based on feature engineering, accuracy is low and system performance is poor, leading to low detection efficiency and frequent missed or false detections.
Summary of the invention
The present invention aims to solve at least one of the above technical problems to some extent.
To this end, a first object of the invention is to propose a toy appearance quality detection method that improves detection efficiency and the accuracy of detection results.
A second object of the invention is to propose a toy appearance quality detection device.
A third object of the invention is to propose a toy appearance quality detection system.
A fourth object of the invention is to propose a computer device.
A fifth object of the invention is to propose a computer-readable storage medium.
To achieve the above objects, an embodiment of the first aspect of the invention proposes a toy appearance quality detection method, including: obtaining an image to be detected of a toy and determining a pre-trained deep convolutional neural network, where the deep convolutional neural network includes a first input layer for feature extraction, a pooling layer for performing a dimensionality-reduction operation on the feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel; performing feature extraction on the image to be detected according to the first input layer to obtain a feature map of the image to be detected; performing the dimensionality-reduction operation on the feature map according to the pooling layer; upsampling the dimensionality-reduced feature map according to the classification layer, classifying the upsampled feature map pixel by pixel, and obtaining the classification result of each pixel in the upsampled feature map; and performing quality detection on the toy according to the classification results of the pixels in the upsampled feature map.
With the toy appearance quality detection method of this embodiment, an image to be detected of a toy is obtained and a pre-trained deep convolutional neural network is determined. The network extracts features from the image to obtain a feature map, performs a dimensionality-reduction operation on the feature map, upsamples the dimensionality-reduced feature map, and classifies the upsampled feature map pixel by pixel; quality detection is then performed on the toy according to the classification results of the pixels. That is, a deep convolutional neural network is pre-trained on the basis of a semantic-segmentation algorithm, and the trained network predicts a class for each pixel of the toy's image to be detected, so that quality problems in the toy's appearance can be detected effectively from the per-pixel classification results. The whole detection process requires no human participation, reducing labor cost, and performing quality detection with a trained network model improves both detection efficiency and the accuracy of detection results.
To achieve the above objects, an embodiment of the second aspect of the invention proposes a toy appearance quality detection device, including: an image obtaining module for obtaining an image to be detected of a toy; a model determining module for determining a pre-trained deep convolutional neural network, where the deep convolutional neural network includes a first input layer for feature extraction, a pooling layer for performing a dimensionality-reduction operation on the feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel; a feature extraction module for performing feature extraction on the image to be detected according to the first input layer to obtain a feature map of the image to be detected; a dimensionality-reduction module for performing the dimensionality-reduction operation on the feature map according to the pooling layer; a classification module for upsampling the dimensionality-reduced feature map according to the classification layer, classifying the upsampled feature map pixel by pixel, and obtaining the classification result of each pixel in the upsampled feature map; and a quality detection module for performing quality detection on the toy according to the classification results of the pixels in the upsampled feature map.
With the toy appearance quality detection device of this embodiment, an image to be detected of a toy is obtained and a pre-trained deep convolutional neural network is determined. The network extracts features from the image to obtain a feature map, performs a dimensionality-reduction operation on the feature map, upsamples the dimensionality-reduced feature map, and classifies the upsampled feature map pixel by pixel; quality detection is then performed on the toy according to the classification results of the pixels. Quality problems in the toy's appearance can thus be detected effectively without human participation, reducing labor cost, while the trained network model improves both detection efficiency and the accuracy of detection results.
To achieve the above objects, an embodiment of the third aspect of the invention proposes a toy appearance quality detection system, including an image acquisition device, a control device, and a server. The image acquisition device captures an image of a toy and sends the captured image to the control device as the toy's image to be detected. The control device generates a detection request from the image to be detected and sends the request to the server. The server extracts the image to be detected from the detection request, determines a pre-trained deep convolutional neural network, performs feature extraction on the image according to the network to obtain a feature map of the image, performs a dimensionality-reduction operation on the feature map, upsamples the dimensionality-reduced feature map, classifies the upsampled feature map pixel by pixel to obtain the classification result of each pixel, and performs quality detection on the toy according to those classification results.
With the toy appearance quality detection system of this embodiment, the control device sends a detection request carrying the image to be detected to the server, and the server classifies the image pixel by pixel with the trained deep convolutional neural network and performs quality detection on the toy according to the per-pixel classification results. That is, a deep convolutional neural network is pre-trained on the basis of a semantic-segmentation algorithm and used to predict a class for each pixel of the toy's image to be detected, so quality problems in the toy's appearance can be detected effectively without human participation, reducing labor cost, while the trained network model improves both detection efficiency and the accuracy of detection results.
To achieve the above objects, an embodiment of the fourth aspect of the invention proposes a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the toy appearance quality detection method of the first aspect is realized.
To achieve the above objects, an embodiment of the fifth aspect of the invention proposes a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the toy appearance quality detection method of the first aspect is realized.
Additional aspects and advantages of the invention will be set forth in part in the following description, will partly become apparent from it, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken with the accompanying drawings, in which:
Fig. 1 is a flowchart of a toy appearance quality detection method according to an embodiment of the invention;
Fig. 2 is a flowchart of obtaining a feature map according to an embodiment of the invention;
Fig. 3 is a schematic structural diagram of a toy appearance quality detection device according to an embodiment of the invention;
Fig. 4 is a schematic structural diagram of a toy appearance quality detection device according to another embodiment of the invention;
Fig. 5 is a schematic structural diagram of a toy appearance quality detection device according to yet another embodiment of the invention;
Fig. 6 is a schematic structural diagram of a toy appearance quality detection system according to an embodiment of the invention;
Fig. 7 is a block diagram of an exemplary computer device suitable for realizing embodiments of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numbers denote throughout the same or similar elements, or elements with the same or similar functions. The embodiments described with reference to the drawings are exemplary, intended to explain the invention, and are not to be construed as limiting it.
At present, the overall degree of intelligent automation in the wooden-toy industry is low. According to industry surveys, most manufacturers inspect the appearance quality of wooden toys in one of two ways: manual inspection or inspection with automated optical equipment. Manual inspection is strongly affected by subjective human factors, has low efficiency, and long hours of it harm the inspectors' eyes. Automated optical inspection equipment examines wooden-toy surfaces visually, but its false-detection rate is high, its system performance is poor, and it cannot cover all of a wooden-toy manufacturer's inspection criteria.
In the related art, appearance quality inspection systems for wooden toys take two main forms. The first is purely manual inspection, relying on industry experts visually examining photos from the production environment and rendering a judgment. The second is machine-assisted manual inspection, in which an inspection system with some judgment capability filters out defect-free photos and industry experts then examine the photos suspected of containing defects; this second form mostly grows out of expert systems and feature engineering, with expert experience encoded into the inspection system to give it some automation. Both methods are not only inefficient but also prone to false judgments, and in addition the industrial data they produce is hard to store, manage, and re-mine for reuse.
To solve the technical problems of the prior art, namely low detection efficiency and frequent missed and false detections that lead to low accuracy of detection results, the invention proposes a toy appearance quality detection method, device, system, computer device, and computer-readable storage medium. These are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a toy appearance quality detection method according to an embodiment of the invention. It should be noted that the method of this embodiment can be applied in the toy appearance quality detection device of the embodiments. As shown in Fig. 1, the toy appearance quality detection method may include the following steps.
S110: obtain an image to be detected of a toy, and determine a pre-trained deep convolutional neural network.
For example, an image acquisition device may capture an image of the toy in the scene; the captured image serves as the toy's image to be detected, and a pre-trained deep convolutional neural network is determined. The network can then preprocess the image to be detected and perform a semantic-segmentation computation to provide class information representing defects together with the defects' location information.
In an embodiment of the invention, the deep convolutional neural network may include a first input layer for feature extraction, a pooling layer for performing a dimensionality-reduction operation on the feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel. That is, the deep convolutional neural network may consist of the first input layer, the pooling layer, the classification layer, and the first output layer. The first input layer may be a convolutional layer, which scans the image to be detected with weighted convolution kernels, extracts features of various kinds, and outputs them into a feature map. The pooling layer performs a dimensionality-reduction operation on the feature map while retaining its main features. A deep neural network model built on convolution and pooling operations is robust to the deformation, blur, and illumination changes of photos on a production line, and generalizes well for classification tasks.
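The convolution-scan and pooling operations described above can be sketched in plain numpy. This is a minimal illustration under the patent's description, not its implementation; the kernel, image, and window sizes are invented for the example.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Scan a single-channel image with one weighted kernel (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(feat, size=2):
    """Reduce the feature map, keeping the dominant response in each window."""
    h, w = feat.shape[0] // size, feat.shape[1] // size
    return feat[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy example: a 6x6 image scanned by a 3x3 vertical-edge-style kernel.
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
fmap = conv2d_valid(img, kernel)   # 4x4 feature map
pooled = max_pool2d(fmap, 2)       # 2x2 map after dimensionality reduction
```

A real network stacks many such kernels per layer and learns their weights; the sketch only shows how one kernel scan and one pooling step reduce a 6x6 input to a 2x2 map.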
It should be noted that the deep convolutional neural network may be trained in advance. As an example, it may be trained as follows: obtain sample labeled data, where each sample includes an image of a sample toy and a class label for each pixel of the image; construct a deep convolutional neural network; and train the constructed network with the sample labeled data.
That is, a large amount of sample labeled data is obtained, each sample consisting of a sample image and the class label of each of its pixels. A deep convolutional neural network is constructed, which may include a first input layer for feature extraction, a pooling layer for performing a dimensionality-reduction operation on the feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel. The constructed network is then trained with the obtained sample labeled data.
During training, the first input layer of the deep convolutional neural network extracts features from the sample image in the sample labeled data and outputs them into a feature map. The pooling layer performs the dimensionality-reduction operation on the feature map, keeping its main features. The classification layer upsamples the feature map produced after the last pooling layer with a bilinear interpolation algorithm so that its size matches that of the original sample image, and makes a pixel-level prediction for each pixel of the upsampled feature map. The predicted class of each pixel is compared with the class label of the corresponding pixel in the sample image to obtain a loss function, and the error obtained from the loss function is back-propagated to train the network. When the error between the output of the first output layer and the labels falls below a preset threshold that meets the business requirement, training stops, yielding the trained deep convolutional neural network.
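The loss computation described above, comparing the predicted class of each pixel with its label, is in the usual formulation a pixel-wise softmax cross-entropy. A minimal numpy sketch under that assumption (the patent does not name the exact loss; shapes and class count are illustrative):

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels):
    """logits: (H, W, C) raw per-pixel class scores; labels: (H, W) integer class ids.
    Returns the mean cross-entropy over all pixels."""
    # Numerically stable log-softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    # Pick the log-probability assigned to each pixel's true class.
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -picked.mean()

# A 2x2 map with 3 hypothetical classes; logits strongly favor the true class,
# so the loss should be close to zero.
labels = np.array([[0, 1], [2, 0]])
logits = np.full((2, 2, 3), -10.0)
for i in range(2):
    for j in range(2):
        logits[i, j, labels[i, j]] = 10.0
loss = pixelwise_cross_entropy(logits, labels)
```

In training, the gradient of this loss with respect to the logits is what gets back-propagated through the classification, pooling, and input layers.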
S120: perform feature extraction on the image to be detected according to the first input layer to obtain a feature map of the image to be detected.
Optionally, the first input layer of the deep convolutional neural network extracts features of various kinds from the image to be detected and outputs them into a feature map, thereby obtaining the feature map of the image to be detected.
In one embodiment of the invention, the first input layer may be formed by a fitted deep-learning base network. In an embodiment, the base network may include a second input layer for feature extraction, a squeeze layer for feature compression, an excitation layer that generates a weight for each feature channel, a recalibration layer for feature recalibration, and a second output layer for outputting the feature map; the excitation layer has learned parameters that model the correlation between feature channels.
As one possible implementation, performing feature extraction on the image to be detected according to the first input layer to obtain its feature map may proceed as follows: the deep-learning base network performs feature extraction on the image to be detected to obtain the feature map of the image to be detected.
In an embodiment of the invention, as shown in Fig. 2, performing feature extraction on the image to be detected by the deep-learning base network to obtain its feature map may include the following steps.
S210: perform feature extraction on the image to be detected according to the second input layer of the deep-learning base network, obtaining multiple features, each feature corresponding to one feature channel.
S220: perform feature compression on the multiple feature channels according to the squeeze layer, and generate a weight for each feature channel according to the parameters of the excitation layer.
In this step, after the features are extracted, the squeeze layer compresses each two-dimensional feature channel into a single real number. This real number has, to some extent, a global receptive field, and the output dimension matches the number of input feature channels. It characterizes the global distribution of responses over the feature channels and lets even layers close to the input obtain a global receptive field. Afterwards, the parameters of the excitation layer generate a weight for each feature channel. In an embodiment of the invention, the excitation layer's parameters are obtained by prior training and learn to model the correlation between feature channels.
It should be noted that, in an embodiment of the invention, after the squeeze layer reduces the feature dimension over the channels, the excitation layer applies an activation and a fully connected layer then raises the dimension back to the original. Compared with a single direct fully connected layer, this has two advantages: 1) it is more nonlinear and can better fit the complex correlations between channels; 2) it considerably reduces the number of parameters and the amount of computation.
S230: weight the features of the corresponding feature channels with the generated weights according to the recalibration layer.
Optionally, the recalibration layer normalizes the weights generated by the excitation layer and applies the normalized weights to the features of the corresponding feature channels.
S240: output the features produced by the recalibration layer according to the second output layer, obtaining the feature map of the image to be detected.
Through steps S210-S240, the feature map of the image to be detected is obtained. By learning automatically, the excitation layer captures the importance of each feature channel, promotes useful features according to this importance, and suppresses features of little use to the current task, so the extracted features are more significant, improving the validity and accuracy of feature extraction.
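The squeeze, excitation, and recalibration steps of S210-S240 match what is commonly called a squeeze-and-excitation block. A minimal numpy sketch under that reading (the weight matrices, reduction ratio, and sigmoid normalization are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w_reduce, w_restore):
    """feat: (H, W, C) feature channels. Squeeze each channel to one real number,
    pass the result through a small bottleneck of two fully connected layers,
    normalize to per-channel weights, and recalibrate the features with them."""
    # Squeeze: global spatial average gives one real number per channel.
    squeezed = feat.mean(axis=(0, 1))               # (C,)
    # Excitation: FC reduce -> ReLU -> FC restore to C dimensions.
    hidden = np.maximum(squeezed @ w_reduce, 0.0)   # (C // r,)
    weights = sigmoid(hidden @ w_restore)           # (C,), each in (0, 1)
    # Recalibration: scale each channel by its learned importance.
    return feat * weights[None, None, :]

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 4, 8))     # 8 feature channels
w_reduce = rng.normal(size=(8, 2))    # hypothetical reduction ratio r = 4
w_restore = rng.normal(size=(2, 8))
out = se_block(feat, w_reduce, w_restore)
```

Because the weights lie in (0, 1), each channel of the output is a scaled-down copy of the input channel: useful channels keep weights near 1 while channels of little use are suppressed toward 0.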
S130: perform the dimensionality-reduction operation on the feature map according to the pooling layer.
Optionally, the pooling layer performs the dimensionality-reduction operation on the feature map while retaining its main features.
S140: upsample the dimensionality-reduced feature map according to the classification layer, and classify the upsampled feature map pixel by pixel to obtain the classification result of each pixel in it.
Optionally, the classification layer upsamples the dimensionality-reduced feature map with a bilinear interpolation algorithm, so that the size of the feature map becomes identical to that of the image to be detected. After the feature map has been upsampled, it is classified pixel by pixel to obtain the classification result of each pixel in the upsampled feature map.
S150: perform quality detection on the toy according to the classification results of the pixels in the upsampled feature map.
Optionally, according to the classification results of pixel each in the characteristic pattern of up-sampling, the classification letter of defect in toy is determined
Breath and location information of the defect in image to be detected.That is, can be according to point of pixel each in the characteristic pattern of up-sampling
Class is as a result, to detect in the image to be detected with the presence or absence of the pixel for belonging to defect classification, and if it exists, then can determine that the toy
There is mass defect there are quality problems in appearance, and can determine that the defect is being played in the position of image according to the defect
Position outside tool.In order to further discriminate between the type of defect, in the training of depth convolutional neural networks, settable multiple defects
Type may be implemented to distinguish multiple defect kinds of toy in this way, for example, can distinguish worm hole, crack, collapse scarce, incrustation
The defects of.
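The defect-locating logic of S150 can be sketched as follows: scan the per-pixel label map for defect pixels, and report each defect class together with a bounding box giving its location. The class-id-to-defect-name mapping is hypothetical, since the application names the defect kinds but not their numeric labels:

```python
import numpy as np

# Hypothetical class ids: the disclosure lists the defect kinds
# (worm hole, crack, chip, scab) but not their numeric labels.
CLASS_NAMES = {0: "background", 1: "worm hole", 2: "crack", 3: "chip", 4: "scab"}

def detect_defects(label_map):
    """Scan a per-pixel classification result for defect pixels and report
    each defect class with a bounding box locating it in the image."""
    defects = []
    for cls in np.unique(label_map):
        if cls == 0:                      # background pixels: no defect
            continue
        ys, xs = np.nonzero(label_map == cls)
        defects.append({
            "type": CLASS_NAMES.get(int(cls), "unknown"),
            "bbox": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())),
        })
    return defects   # an empty list means the toy appearance passes inspection
```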
In order to further improve the accuracy of detection results and guarantee the detection performance of the deep convolutional neural network, optionally, in one embodiment of the present invention, historical quality detection results can be obtained; when it is detected that the accuracy of the historical quality detection results is less than a preset threshold, the historical quality detection results are corrected, and the corrected historical quality detection results are used as training data to train the deep convolutional neural network.
That is, each newly trained deep convolutional neural network can gradually replace the old deep convolutional neural network running online by means of a small-traffic online rollout, so as to dynamically extend and generalize the deep convolutional neural network service. After the detection system has run for a period of time, historical quality detection results can be obtained; when the accuracy of these historical quality detection results is detected to be less than the preset threshold, they are corrected and used as training data to retrain the deep convolutional neural network. In other words, the historical quality detection results are mined and recycled, thereby improving the detection accuracy.
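The retraining loop just described can be sketched as a simple trigger: if accuracy on historical results falls below the preset threshold, correct those results and feed them back as training data. The callback-style interfaces (`correct_fn`, `train_fn`) are illustrative assumptions, not interfaces from the disclosure:

```python
def maybe_retrain(history_results, accuracy, threshold, correct_fn, train_fn):
    """When the accuracy of historical detection results drops below the
    preset threshold, correct them and retrain on the corrected data."""
    if accuracy >= threshold:
        return False                       # model still performs well enough
    corrected = [correct_fn(r) for r in history_results]
    train_fn(corrected)                    # retrain using corrected results
    return True
```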
With the toy appearance quality determining method of the embodiment of the present invention, the image to be detected of the toy is obtained and a pre-trained deep convolutional neural network is determined; feature extraction can then be performed on the image to be detected by the deep convolutional neural network to obtain the feature map of the image to be detected; a dimensionality reduction operation is performed on the feature map; the dimensionality-reduced feature map is up-sampled; pixel-by-pixel classification is performed on the up-sampled feature map to obtain the classification result of each pixel in it; and quality detection is performed on the toy according to those classification results. That is, a deep convolutional neural network is pre-trained based on a semantic segmentation algorithm, and the trained network then performs classification prediction for each pixel in the image to be detected of the toy, so that quality detection is carried out on the toy according to the classification result of each pixel. Whether the toy appearance has quality problems can thus be effectively detected. The entire detection process requires no human participation, reducing labor cost, and performing quality detection through a trained network model improves both detection efficiency and the accuracy of detection results.
Corresponding to the toy appearance quality determining methods provided by the above embodiments, an embodiment of the present invention also provides a toy appearance quality detection device. Since the toy appearance quality detection device provided by the embodiment of the present invention corresponds to the methods provided by the above embodiments, the implementations of the toy appearance quality determining method are also applicable to the device provided by this embodiment and are not described in detail here. Fig. 3 is a structural schematic diagram of a toy appearance quality detection device according to an embodiment of the present invention. As shown in Fig. 3, the toy appearance quality detection device 300 may include: an image acquisition module 310, a model determining module 320, a feature extraction module 330, a dimensionality reduction module 340, a classification module 350, and a quality detection module 360.
Specifically, the image acquisition module 310 is used to obtain the image to be detected of the toy.
The model determining module 320 is used to determine a pre-trained deep convolutional neural network, wherein the deep convolutional neural network includes a first input layer for feature extraction, a pooling layer for performing a dimensionality reduction operation on the feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel.
The feature extraction module 330 is used to perform feature extraction on the image to be detected according to the first input layer, to obtain the feature map of the image to be detected. In one embodiment of the present invention, the first input layer is constituted by a fitted deep learning base network; the deep learning base network includes a second input layer for feature extraction, a squeeze layer for feature compression, an excitation layer for generating a weight for each feature channel, a recalibration layer for feature recalibration, and a second output layer for outputting the feature map. The excitation layer has learned parameters for modeling the correlation between feature channels.
In an embodiment of the present invention, the feature extraction module 330 can perform feature extraction on the image to be detected through the deep learning base network, to obtain the feature map of the image to be detected.
As an example, the feature extraction module 330 is specifically used to: perform feature extraction on the image to be detected according to the second input layer in the deep learning base network, to obtain multiple features, each feature corresponding to one feature channel; perform feature compression on the multiple feature channels according to the squeeze layer, and generate respective weights for the multiple feature channels according to the parameters in the excitation layer; weight the features of the corresponding feature channels with the generated weights according to the recalibration layer; and output the features output by the recalibration layer according to the second output layer, to obtain the feature map of the image to be detected.
The dimensionality reduction module 340 is used to perform a dimensionality reduction operation on the feature map according to the pooling layer.
The classification module 350 is used to up-sample the dimensionality-reduced feature map according to the classification layer, and to perform pixel-by-pixel classification on the up-sampled feature map, to obtain the classification result of each pixel in the up-sampled feature map. As an example, the classification module 350 can up-sample the dimensionality-reduced feature map by the classification layer using a bilinear interpolation algorithm.
The quality detection module 360 is used to perform quality detection on the toy according to the classification result of each pixel in the up-sampled feature map.
It should be noted that the deep convolutional neural network can be trained in advance. Optionally, in one embodiment of the present invention, as shown in Fig. 4, the toy appearance quality detection device 300 may also include a model training module 370, which can be used to train the deep convolutional neural network in advance. In an embodiment of the present invention, the model training module 370 is specifically used to: obtain sample annotation data, wherein the sample annotation data includes an image of a sample toy and a class label for each pixel in the image; construct a deep convolutional neural network; and train the constructed deep convolutional neural network using the sample annotation data.
In order to further improve the accuracy of detection results and guarantee the detection performance of the deep convolutional neural network, optionally, in one embodiment of the present invention, as shown in Fig. 5, the toy appearance quality detection device 300 may also include an acquisition module 380 and a correction module 390. The acquisition module 380 is used to obtain historical quality detection results; the correction module 390 is used to correct the historical quality detection results when it is detected that their accuracy is less than a preset threshold; and the model training module 370 is also used to train the deep convolutional neural network with the corrected historical quality detection results as training data.
With the toy appearance quality detection device of the embodiment of the present invention, the image to be detected of the toy can be obtained and a pre-trained deep convolutional neural network determined; feature extraction can then be performed on the image to be detected by the deep convolutional neural network to obtain its feature map; a dimensionality reduction operation is performed on the feature map; the dimensionality-reduced feature map is up-sampled; pixel-by-pixel classification is performed on the up-sampled feature map to obtain the classification result of each pixel in it; and quality detection is performed on the toy according to those classification results. That is, a deep convolutional neural network is pre-trained based on a semantic segmentation algorithm, and the trained network performs classification prediction for each pixel in the image to be detected of the toy, so that quality detection is carried out on the toy according to the classification result of each pixel. Whether the toy appearance has quality problems can thus be effectively detected. The entire detection process requires no human participation, reducing labor cost, and performing quality detection through a trained network model improves both detection efficiency and the accuracy of detection results.
In order to realize the above embodiments, the present invention also provides a toy appearance quality detection system.
Fig. 6 is a structural schematic diagram of a toy appearance quality detection system according to an embodiment of the present invention. As shown in Fig. 6, the toy appearance quality detection system 600 may include: an image acquisition device 610, a control device 620, and a server 630.
Specifically, the image acquisition device 610 is used to perform image acquisition on the toy and to send the acquired image to the control device 620 as the image to be detected of the toy.
The control device 620 is used to generate a detection request according to the image to be detected and to send the detection request to the server 630.
The server 630 is used to extract the image to be detected from the detection request, determine a pre-trained deep convolutional neural network, perform feature extraction on the image to be detected according to the deep convolutional neural network to obtain the feature map of the image to be detected, perform a dimensionality reduction operation on the feature map, up-sample the dimensionality-reduced feature map, perform pixel-by-pixel classification on the up-sampled feature map to obtain the classification result of each pixel in the up-sampled feature map, and perform quality detection on the toy according to the classification result of each pixel.
Optionally, in one embodiment of the present invention, there are multiple servers, and each server has at least one of the deep convolutional neural networks. In an embodiment of the present invention, the control device 620 can perform load balancing and scheduling according to the deployment situation of the deep convolutional neural networks on the multiple servers, determine the server carrying the target deep convolutional neural network, and send the detection request to the server carrying the target deep convolutional neural network. That is, the control device 620 can perform load balancing and scheduling in real time according to the online deployment of the deep convolutional neural networks, sending the detection request to the most suitable server carrying a deep convolutional neural network. The deep convolutional neural network runs on the server, and its training is completed via a training engine.
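The scheduling step can be illustrated with a simple least-loaded policy among the servers that carry the target network. The application does not specify the actual load-balancing algorithm, so this is only one plausible sketch with assumed server records:

```python
def pick_server(servers, target_model):
    """Among the servers carrying the target network, choose the one with
    the lightest current load (an assumed least-loaded policy)."""
    candidates = [s for s in servers if target_model in s["models"]]
    if not candidates:
        raise LookupError(f"no server carries {target_model}")
    return min(candidates, key=lambda s: s["load"])
```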
Optionally, in one embodiment of the present invention, the control device 620 can also make a corresponding service response to the quality detection result of the toy according to business demand. For example, after the quality detection result of the toy is determined, the control device 620, designed in combination with the business scenario, can make a response to the quality detection result that meets the requirements of the production environment, such as raising an alarm, storing a log, or controlling a robot arm.
In an embodiment of the present invention, the server 630 can have a training database, a training engine, and a production database. The training database can hold training data, and the training engine can obtain training data from the training database to train the deep convolutional neural network on the server. The production database can store production logs; for example, the control device can store the prediction results of the online deep convolutional neural network and the processing behavior of the service response as production logs in the production database, so that the training database can be updated according to the data in the production database.
With the toy appearance quality detection system of the embodiment of the present invention, the control device can send a detection request carrying the image to be detected to the server, so that the server performs pixel-by-pixel classification on the image to be detected through the trained deep convolutional neural network to obtain the classification result of each pixel, and then performs quality detection on the toy according to those classification results. That is, a deep convolutional neural network is pre-trained based on a semantic segmentation algorithm, and the trained network performs classification prediction for each pixel in the image to be detected of the toy, so that quality detection is carried out on the toy according to the classification result of each pixel. Whether the toy appearance has quality problems can thus be effectively detected. The entire detection process requires no human participation, reducing labor cost, and performing quality detection through a trained network model improves detection efficiency and the accuracy of detection results.
In order to realize the above embodiments, the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the toy appearance quality determining method described in any of the above embodiments of the present invention is realized.
Fig. 7 shows a block diagram of an exemplary computer device suitable for realizing the embodiments of the present invention. The computer device 12 shown in Fig. 7 is only an example and should not bring any restriction on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 7, the computer device 12 takes the form of a general-purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (hereinafter: ISA) bus, the Micro Channel Architecture (hereinafter: MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (hereinafter: VESA) local bus, and the Peripheral Component Interconnect (hereinafter: PCI) bus.
The computer device 12 typically comprises a variety of computer-system-readable media. These media can be any usable media that can be accessed by the computer device 12, including volatile and non-volatile media, and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (hereinafter: RAM) 30 and/or a cache memory 32. The computer device 12 may further comprise other removable/non-removable, volatile/non-volatile computer system storage media. Merely as an example, the storage system 34 can be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 7, commonly referred to as a "hard disk drive"). Although not shown in Fig. 7, a disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a Compact Disc Read-Only Memory (hereinafter: CD-ROM), a Digital Video Disc Read-Only Memory (hereinafter: DVD-ROM), or other optical media) can be provided. In these cases, each drive can be connected with the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a group of (for example, at least one) program modules configured to perform the functions of the embodiments of the application.
A program/utility 40 having a group of (at least one) program modules 42 can be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include the realization of a network environment. The program modules 42 usually execute the functions and/or methods in the embodiments described herein.
The computer device 12 can also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication can be carried out through an input/output (I/O) interface 22. Moreover, the computer device 12 can also communicate through a network adapter 20 with one or more networks, such as a Local Area Network (hereinafter: LAN), a Wide Area Network (hereinafter: WAN), and/or a public network, for example, the Internet. As shown in the figure, the network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16, by running the programs stored in the system memory 28, executes various functional applications and data processing, for example realizing the toy appearance quality determining method mentioned in the previous embodiments.
In the description of the present invention, it is to be understood that the terms "first" and "second" are used for description purposes only and cannot be interpreted as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thereby, a feature defined with "first" or "second" can explicitly or implicitly include at least one of such features. In the description of the present invention, "multiple" means at least two, such as two, three, etc., unless otherwise specifically defined.
In the description of this specification, the description of the reference terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in conjunction with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms are not necessarily directed at the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described can be combined in any suitable manner in any one or more embodiments or examples. In addition, without conflicting with each other, those skilled in the art can combine the features of the different embodiments or examples described in this specification.
Any process or method description in a flow chart, or described otherwise herein, is to be construed as representing a module, segment, or portion of executable instruction code comprising one or more steps for realizing a specific logical function or process, and the scope of the preferred embodiments of the present invention includes other realizations in which the functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those of ordinary skill in the art to which the embodiments of the present invention pertain.
The logic and/or steps represented in the flow charts or otherwise described herein may, for example, be considered an ordered list of executable instructions for realizing logical functions, and may be embodied in any computer-readable medium for use by an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, device, or apparatus and execute them), or for use in conjunction with these instruction execution systems, devices, or apparatus. For the purposes of this specification, a "computer-readable medium" can be any means that may include, store, communicate, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection with one or more wirings (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium can even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or processing it in another suitable manner when necessary, and then stored in a computer memory.
It should be appreciated that the sections of the present invention can be realized with hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be realized with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized with hardware, as in another embodiment, they can be realized with any of the following techniques well known in the art, or a combination thereof: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those skilled in the art understand that all or part of the steps carried by the methods of the above embodiments can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in each embodiment of the present invention can be integrated in one processing module, or each unit can physically exist alone, or two or more units can be integrated in one module. The above integrated module can take the form of hardware or the form of a software function module. If the integrated module is realized in the form of a software function module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be understood as restrictions on the present invention; those skilled in the art can change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (17)
1. A toy appearance quality determining method, characterized by comprising the following steps:
obtaining an image to be detected of a toy, and determining a pre-trained deep convolutional neural network, wherein the deep convolutional neural network includes a first input layer for feature extraction, a pooling layer for performing a dimensionality reduction operation on a feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting a classification result of each pixel;
performing feature extraction on the image to be detected according to the first input layer, to obtain the feature map of the image to be detected;
performing the dimensionality reduction operation on the feature map according to the pooling layer;
up-sampling the dimensionality-reduced feature map according to the classification layer, and performing pixel-by-pixel classification on the up-sampled feature map, to obtain the classification result of each pixel in the up-sampled feature map;
performing quality detection on the toy according to the classification result of each pixel in the up-sampled feature map.
2. The method according to claim 1, characterized in that the first input layer is constituted by a fitted deep learning base network, and the deep learning base network includes a second input layer for feature extraction, a squeeze layer for feature compression, an excitation layer for generating a weight for each feature channel, a recalibration layer for feature recalibration, and a second output layer for outputting the feature map; the excitation layer has learned parameters for modeling the correlation between feature channels;
wherein performing feature extraction on the image to be detected according to the first input layer, to obtain the feature map of the image to be detected, comprises:
performing feature extraction on the image to be detected through the deep learning base network, to obtain the feature map of the image to be detected.
3. The method according to claim 2, characterized in that performing feature extraction on the image to be detected through the deep learning base network, to obtain the feature map of the image to be detected, comprises:
performing feature extraction on the image to be detected according to the second input layer in the deep learning base network, to obtain multiple features, each feature corresponding to one feature channel;
performing feature compression on the multiple feature channels according to the squeeze layer, and generating respective weights for the multiple feature channels according to the parameters in the excitation layer;
weighting the features of the corresponding feature channels with the generated weights according to the recalibration layer;
outputting the features output by the recalibration layer according to the second output layer, to obtain the feature map of the image to be detected.
4. The method according to claim 1, characterized in that up-sampling the dimensionality-reduced feature map according to the classification layer comprises:
up-sampling the dimensionality-reduced feature map by the classification layer using a bilinear interpolation algorithm.
5. The method according to any one of claims 1 to 4, characterized in that the deep convolutional neural network is pre-trained in the following manner:
obtaining sample annotation data, wherein the sample annotation data includes an image of a sample toy and a class label for each pixel in the image;
constructing a deep convolutional neural network;
training the constructed deep convolutional neural network using the sample annotation data.
6. The method according to any one of claims 1 to 5, characterized by further comprising:
obtaining historical quality detection results;
correcting the historical quality detection results when it is detected that the accuracy of the historical quality detection results is less than a preset threshold;
training the deep convolutional neural network with the corrected historical quality detection results as training data.
7. A toy appearance quality detection device, comprising:
an image obtaining module, configured to obtain an image to be detected of a toy;
a model determining module, configured to determine a pre-trained deep convolutional neural network, wherein the deep convolutional neural network comprises a first input layer for feature extraction, a pooling layer for performing a dimensionality reduction operation on a feature map, a classification layer for classifying all pixels in the feature map, and a first output layer for outputting the classification result of each pixel;
a feature extraction module, configured to perform feature extraction on the image to be detected according to the first input layer, to obtain a feature map of the image to be detected;
a dimensionality reduction module, configured to perform the dimensionality reduction operation on the feature map according to the pooling layer;
a classification module, configured to upsample the feature map subjected to the dimensionality reduction operation according to the classification layer, and to classify the upsampled feature map pixel by pixel, to obtain a classification result for each pixel in the upsampled feature map;
a quality detection module, configured to perform quality detection on the toy according to the classification results of the pixels in the upsampled feature map.
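The pixel-by-pixel classification performed by the classification module can be illustrated as an argmax over per-class score maps. The class names and tensor shapes below are hypothetical, not taken from the patent:

```python
import numpy as np

# Hypothetical sketch: per-pixel classification of an upsampled score map.
# scores has shape (num_classes, H, W); each pixel is assigned the class
# with the highest score, e.g. 0 = background, 1 = intact, 2 = defect.
def classify_pixels(scores):
    return np.argmax(scores, axis=0)

scores = np.zeros((3, 2, 2))
scores[2, 0, 0] = 5.0   # top-left pixel scores highest for class 2 ("defect")
scores[1, :, :] += 1.0  # every other pixel defaults to class 1 ("intact")
labels = classify_pixels(scores)
```

Quality detection then reduces to inspecting the label map, e.g. flagging the toy when any pixel carries the defect class.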
8. The device according to claim 7, wherein the first input layer is constituted by a fitted deep learning base network, the deep learning base network comprising a second input layer for feature extraction, a squeeze layer for feature compression, an excitation layer for generating a weight for each feature channel, a recalibration layer for feature recalibration, and a second output layer for outputting a feature map; the excitation layer has learned parameters for modeling the correlation between feature channels;
wherein the feature extraction module is specifically configured to:
perform feature extraction on the image to be detected through the deep learning base network, to obtain the feature map of the image to be detected.
9. The device according to claim 8, wherein the feature extraction module is specifically configured to:
perform feature extraction on the image to be detected according to the second input layer in the deep learning base network, to obtain multiple features, each feature corresponding to one feature channel;
perform feature compression on the multiple feature channels according to the squeeze layer, and generate a respective weight for each of the multiple feature channels according to the parameters in the excitation layer;
apply, according to the recalibration layer, the generated weights to the features of the corresponding feature channels;
output, according to the second output layer, the features output by the recalibration layer, to obtain the feature map of the image to be detected.
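The squeeze, excitation, and recalibration steps of claim 9 follow the pattern of Squeeze-and-Excitation networks (see the non-patent citations). A minimal NumPy sketch, with hypothetical learned parameters `w1` and `w2` standing in for the excitation layer's parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_recalibrate(fmap, w1, w2):
    """fmap: (C, H, W). w1: (C//r, C) and w2: (C, C//r) are learned
    parameters modeling the correlation between feature channels."""
    # Squeeze: compress each channel to one descriptor via global average.
    z = fmap.mean(axis=(1, 2))                 # shape (C,)
    # Excitation: two learned projections yield one weight per channel.
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # shape (C,), each in (0, 1)
    # Recalibration: rescale each channel's features by its weight.
    return fmap * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
fmap = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2, C))   # reduction ratio r = 2 (assumption)
w2 = rng.standard_normal((C, 2))
out = se_recalibrate(fmap, w1, w2)
```

Because each channel weight lies in (0, 1), recalibration can only attenuate a channel, letting the network emphasize informative channels and suppress less useful ones.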
10. The device according to claim 7, wherein the classification module is specifically configured to:
upsample, by the classification layer, the feature map subjected to the dimensionality reduction operation using a bilinear interpolation algorithm.
11. The device according to any one of claims 7 to 10, further comprising:
a model training module, configured to train the deep convolutional neural network in advance;
wherein the model training module is specifically configured to:
obtain sample labeled data, wherein the sample labeled data comprises an image of a sample toy and a class label for each pixel in the image;
construct a deep convolutional neural network;
train the constructed deep convolutional neural network using the sample labeled data.
12. The device according to claim 11, further comprising:
an obtaining module, configured to obtain historical quality detection results;
a correction module, configured to correct the historical quality detection results when the accuracy of the historical quality detection results is detected to be below a preset threshold;
wherein the model training module is further configured to train the deep convolutional neural network using the corrected historical quality detection results as training data.
13. A toy appearance quality detection system, comprising an image acquisition device, a control device, and a server, wherein:
the image acquisition device is configured to acquire an image of a toy, and to send the acquired image, as an image to be detected of the toy, to the control device;
the control device is configured to generate a detection request according to the image to be detected, and to send the detection request to the server;
the server is configured to extract the image to be detected from the detection request, determine a pre-trained deep convolutional neural network, perform feature extraction on the image to be detected according to the deep convolutional neural network to obtain a feature map of the image to be detected, perform a dimensionality reduction operation on the feature map, upsample the feature map subjected to the dimensionality reduction operation, classify the upsampled feature map pixel by pixel to obtain a classification result for each pixel in the upsampled feature map, and perform quality detection on the toy according to the classification results of the pixels in the upsampled feature map.
14. The system according to claim 13, wherein there are multiple servers, each server carrying at least one deep convolutional neural network; the control device is specifically configured to:
perform load balancing and scheduling according to the deployment of the deep convolutional neural networks on the multiple servers, to determine the server carrying the target deep convolutional neural network;
send the detection request to the server carrying the target deep convolutional neural network.
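Claim 14 does not specify the load-balancing policy. A minimal sketch under the assumption of a least-loaded policy; the host addresses, model names, and record fields are all hypothetical:

```python
# Hypothetical sketch: among the servers carrying the target network,
# dispatch the detection request to the one with the lowest current load.
def pick_server(servers, target_model):
    """servers: dicts like {"host": ..., "models": [...], "load": ...}."""
    candidates = [s for s in servers if target_model in s["models"]]
    if not candidates:
        raise RuntimeError("no server carries the target network")
    return min(candidates, key=lambda s: s["load"])

servers = [
    {"host": "10.0.0.1", "models": ["toy-dcnn-v1"], "load": 0.7},
    {"host": "10.0.0.2", "models": ["toy-dcnn-v1", "toy-dcnn-v2"], "load": 0.2},
    {"host": "10.0.0.3", "models": ["toy-dcnn-v2"], "load": 0.1},
]
chosen = pick_server(servers, "toy-dcnn-v1")
```

Note that 10.0.0.3 has the lowest load but does not carry the target network, so the scheduler selects 10.0.0.2.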
15. The system according to claim 13 or 14, wherein the control device is further configured to: make a corresponding service response to the quality detection result of the toy according to service requirements.
16. A computer device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the computer program, the toy appearance quality determining method according to any one of claims 1 to 6 is implemented.
17. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the toy appearance quality determining method according to any one of claims 1 to 6 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910247494.6A CN109978868A (en) | 2019-03-29 | 2019-03-29 | Toy appearance quality determining method and its relevant device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109978868A (en) | 2019-07-05 |
Family
ID=67081533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910247494.6A Pending CN109978868A (en) | 2019-03-29 | 2019-03-29 | Toy appearance quality determining method and its relevant device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109978868A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392896A (en) * | 2017-07-14 | 2017-11-24 | 佛山市南海区广工大数控装备协同创新研究院 | A kind of Wood Defects Testing method and system based on deep learning |
JP2018005640A (en) * | 2016-07-04 | 2018-01-11 | タカノ株式会社 | Classifying unit generation device, image inspection device, and program |
CN107966447A (en) * | 2017-11-14 | 2018-04-27 | 浙江大学 | A kind of Surface Flaw Detection method based on convolutional neural networks |
CN108230317A (en) * | 2018-01-09 | 2018-06-29 | 北京百度网讯科技有限公司 | Steel plate defect detection sorting technique, device, equipment and computer-readable medium |
CN109064462A (en) * | 2018-08-06 | 2018-12-21 | 长沙理工大学 | A kind of detection method of surface flaw of steel rail based on deep learning |
CN109084955A (en) * | 2018-07-02 | 2018-12-25 | 北京百度网讯科技有限公司 | Display screen quality determining method, device, electronic equipment and storage medium |
CN109214399A (en) * | 2018-10-12 | 2019-01-15 | 清华大学深圳研究生院 | A kind of improvement YOLOV3 Target Recognition Algorithms being embedded in SENet structure |
CN109447990A (en) * | 2018-10-22 | 2019-03-08 | 北京旷视科技有限公司 | Image, semantic dividing method, device, electronic equipment and computer-readable medium |
CN109470708A (en) * | 2018-11-30 | 2019-03-15 | 北京百度网讯科技有限公司 | Quality determining method, device, server and the storage medium of plastic foam cutlery box |
Non-Patent Citations (7)
Title |
---|
DOMEN TABERNIK et al.: "Segmentation-Based Deep-Learning Approach for Surface-Defect Detection", arXiv *
HANCHAO LI et al.: "Pyramid Attention Network for Semantic Segmentation", arXiv *
JIE HU et al.: "Squeeze-and-Excitation Networks", arXiv *
JONATHAN LONG et al.: "Fully Convolutional Networks for Semantic Segmentation", arXiv *
YU ZHIYANG: "Research on Surface Defect Detection Methods Based on Fully Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
WANG LIZHONG et al.: "Strip steel surface defect recognition based on deep learning algorithms", Journal of Xi'an Polytechnic University *
HU JIE: "Column | Momenta explains SENet, the ImageNet 2017 winning architecture", HTTPS://WWW.SOHU.COM/A/161633191_465975 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113780074A (en) * | 2021-08-04 | 2021-12-10 | 五邑大学 | Method and device for detecting quality of wrapping paper and storage medium |
CN116563795A (en) * | 2023-05-30 | 2023-08-08 | 北京天翊文化传媒有限公司 | Doll production management method and doll production management system |
CN116525295A (en) * | 2023-07-03 | 2023-08-01 | 河南华佳新材料技术有限公司 | Metallized film for high-frequency pulse capacitor and preparation method thereof |
CN116525295B (en) * | 2023-07-03 | 2023-09-08 | 河南华佳新材料技术有限公司 | Metallized film for high-frequency pulse capacitor and preparation method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109978867A (en) | Toy appearance quality determining method and its relevant device | |
CN111598881B (en) | Image anomaly detection method based on variational self-encoder | |
CN106897573B (en) | Use the computer-aided diagnosis system for medical image of depth convolutional neural networks | |
CN109961433A (en) | Product defects detection method, device and computer equipment | |
CN109978868A (en) | Toy appearance quality determining method and its relevant device | |
CN108564104A (en) | Product defects detection method, device, system, server and storage medium | |
CN100474878C (en) | Image quality prediction method and apparatus and fault diagnosis system | |
CN105574550A (en) | Vehicle identification method and device | |
CN110503635B (en) | Hand bone X-ray film bone age assessment method based on heterogeneous data fusion network | |
CN109544548A (en) | Defect inspection method, device, server, equipment and the storage medium of cutlery box | |
CN112132800B (en) | Deep learning-based pulmonary fibrosis detection and severity assessment method and system | |
CN106295502A (en) | A kind of method for detecting human face and device | |
CN112132801B (en) | Lung bulla focus detection method and system based on deep learning | |
CN109242831A (en) | Picture quality detection method, device, computer equipment and storage medium | |
CN113537496A (en) | Deep learning model visual construction system and application and design method thereof | |
CN117034143B (en) | Distributed system fault diagnosis method and device based on machine learning | |
CN110188303A (en) | Page fault recognition methods and device | |
CN107004200A (en) | The evaluated off-line of ranking function | |
CN109598712A (en) | Quality determining method, device, server and the storage medium of plastic foam cutlery box | |
CN116629270B (en) | Subjective question scoring method and device based on examination big data and text semantics | |
CN113052227A (en) | Pulmonary tuberculosis identification method based on SE-ResNet | |
CN110705278A (en) | Subjective question marking method and subjective question marking device | |
CN112711530A (en) | Code risk prediction method and system based on machine learning | |
JP2019158684A (en) | Inspection system, identification system, and discriminator evaluation device | |
CN110335244A (en) | A kind of tire X-ray defect detection method based on more Iterative classification devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||