CN110969168A - Pre-packaging quality identification system based on deep learning - Google Patents

Pre-packaging quality identification system based on deep learning

Info

Publication number
CN110969168A
CN110969168A (application CN201811151199.2A)
Authority
CN
China
Prior art keywords
unit
image
preprocessing
learning
data
Prior art date
Legal status
Pending
Application number
CN201811151199.2A
Other languages
Chinese (zh)
Inventor
张耿霖
Current Assignee
Dalian Jiuzhou Chuangzhi Technology Co ltd
Original Assignee
Dalian Jiuzhou Chuangzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Dalian Jiuzhou Chuangzhi Technology Co ltd filed Critical Dalian Jiuzhou Chuangzhi Technology Co ltd
Priority to CN201811151199.2A priority Critical patent/CN110969168A/en
Publication of CN110969168A publication Critical patent/CN110969168A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/24155 Bayesian classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a pre-packaging quality recognition system based on deep learning, which comprises at least: an image acquisition unit for acquiring an image of the pre-packaged product; a preprocessing unit for preprocessing the image acquired by the image acquisition unit; a data learning unit for learning from the images processed by the preprocessing unit; a storage unit for storing the model learned by the data learning unit; a recognition unit for recognition using the model after the data learning unit has learned it; and a display unit for displaying the data recognized by the recognition unit. By recognizing package quality from collected images with a learned model, the invention achieves accurate and precise recognition, saves labor, and reduces detection cost.

Description

Pre-packaging quality identification system based on deep learning
Technical Field
The invention relates to the technical field of quality identification, in particular to a pre-packaging quality identification system based on deep learning.
Background
In the prior art, package quality is usually checked manually, so the judgment of the finished product depends on the individual inspector, and quality standards vary from person to person. Manual inspection is also often untimely, or requires many workers, which raises the overall cost of production and lowers the identification standard.
Disclosure of Invention
In view of the above technical problems, a pre-packaging quality recognition system based on deep learning is provided. The system is characterized by at least comprising:
an image acquisition unit for acquiring an image of the pre-packaged product; a preprocessing unit for preprocessing the image acquired by the image acquisition unit; a data learning unit for learning from the images processed by the preprocessing unit; a storage unit for storing the model learned by the data learning unit; a recognition unit for recognition using the model after the data learning unit has learned it; and a display unit for displaying the data recognized by the recognition unit;
the image acquisition unit acquires a pre-packaged image through a mobile phone/camera, and the acquired pre-packaged image is preprocessed through a preprocessing unit; cropping the acquired pre-packaged image into 40 x 40 pixels; the data learning unit transmits the image preprocessed by the preprocessing unit to the storage unit for storage through learning;
in use, the user collects a pre-packaged image; after the preprocessing unit preprocesses it, the recognition unit calls the model stored in the storage unit to perform recognition, and the recognition result is displayed on the display unit.
Further, the preprocessing unit performs graying on the image; the graying establishes the correspondence between the luminance Y and the three color components R, G and B according to the YUV color space:
Y=0.3R+0.59G+0.11B;
where Y denotes the luminance of the pixel, reflecting its brightness level, R denotes red, G denotes green, and B denotes blue.
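The graying formula can be applied per pixel as in the following minimal sketch; the function name and NumPy array layout are assumptions, while the weights are the patent's:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Y = 0.3R + 0.59G + 0.11B, applied to an H x W x 3 array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b

white = np.array([[[255, 255, 255]]], dtype=np.float64)
print(to_gray(white))  # approximately [[255.]] since the weights sum to 1
```

Note that these weights are the classic luma coefficients associated with the YUV/BT.601 convention, so pure white maps back to full luminance.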
Further, the data learning unit comprises a low classifier and a high classifier;
the low classifier adopts Bayesian theory, which is as follows:
P(y|X) = P(X|y)P(y) / P(X);
where y denotes the class variable and X denotes the dependent feature vector, X = (x1, x2, x3, ..., xn); assuming the features are mutually independent, this expands to
P(y|x1, x2, ..., xn) = P(y)P(x1|y)P(x2|y)...P(xn|y) / (P(x1)P(x2)...P(xn));
where xi denotes the i-th component of the feature vector X, P(y) denotes the class prior probability, and P(xi|y) denotes the conditional probability;
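A minimal illustration of the low-classifier scoring rule P(y)·∏ P(xi|y); the patent specifies only the Bayesian form, so the binary features, Laplace smoothing, and toy data here are all assumptions:

```python
import numpy as np

def naive_bayes_predict(X_train, y_train, x):
    """Score each class y by P(y) * prod_i P(x_i | y) and return the best."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        prior = len(Xc) / len(X_train)             # P(y)
        p = (Xc.sum(axis=0) + 1) / (len(Xc) + 2)   # P(x_i = 1 | y), smoothed
        likelihood = np.prod(np.where(x == 1, p, 1 - p))
        scores[c] = prior * likelihood
    return max(scores, key=scores.get)

# toy data: feature 0 being on indicates class 1
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1]])
y = np.array([1, 1, 0, 0])
print(naive_bayes_predict(X, y, np.array([1, 0])))  # 1
```

The constant denominator P(x1)...P(xn) is omitted, as it does not change which class maximizes the score.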
the high classifier updates the training sample weights as:
wj+1(i) = wj(i)exp(-αj yi hj(xi)) / Zj;
where wj(i) represents the weight of training sample i at round j and Zj represents a normalization factor.
Further, the display unit adopts an LED/LCD display screen.
Compared with the prior art, the invention has the following advantages:
according to the invention, the accuracy and the precision of recognition are realized through the learning model, and the quality of the package is recognized through collecting images, so that the labor is saved, and the detection cost is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic view of the overall structure of the present invention.
FIG. 2 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, the present invention provides a deep learning-based prepackaging quality recognition system, which is characterized by at least comprising:
an image acquisition unit for acquiring an image of the pre-packaged product; a preprocessing unit for preprocessing the image acquired by the image acquisition unit; a data learning unit for learning from the images processed by the preprocessing unit; a storage unit for storing the model learned by the data learning unit; a recognition unit for recognition using the model after the data learning unit has learned it; and a display unit for displaying the data recognized by the recognition unit;
the image acquisition unit acquires the pre-packaged image through a mobile phone or camera, and the acquired image is preprocessed by the preprocessing unit, which crops it to 40 x 40 pixels; the data learning unit learns from the preprocessed images and transmits the result to the storage unit for storage;
in use, the user collects a pre-packaged image; after the preprocessing unit preprocesses it, the recognition unit calls the model stored in the storage unit to perform recognition, and the recognition result is displayed on the display unit.
In the present embodiment, the preprocessing unit performs graying on the image; the graying establishes the correspondence between the luminance Y and the three color components R, G and B according to the YUV color space:
Y=0.3R+0.59G+0.11B;
where Y denotes the luminance of the pixel, reflecting its brightness level, R denotes red, G denotes green, and B denotes blue. It is understood that in other embodiments, other graying or preprocessing methods may be adopted according to actual requirements, and noise reduction may also be performed.
In a preferred embodiment, the data learning unit of the present invention comprises a low classifier and a high classifier;
as a preferred implementation, the low classifier adopts Bayesian theory, which is as follows:
P(y|X) = P(X|y)P(y) / P(X);
where y denotes the class variable and X denotes the dependent feature vector, X = (x1, x2, x3, ..., xn); assuming the features are mutually independent, this expands to
P(y|x1, x2, ..., xn) = P(y)P(x1|y)P(x2|y)...P(xn|y) / (P(x1)P(x2)...P(xn));
where xi denotes the i-th component of the feature vector X, P(y) denotes the class prior probability, and P(xi|y) denotes the conditional probability.
As a preferred embodiment, the high classifier of the present invention updates the training sample weights as:
wj+1(i) = wj(i)exp(-αj yi hj(xi)) / Zj;
where wj(i) represents the weight of training sample i at round j and Zj represents a normalization factor. It is understood that in other embodiments, other classification methods may be used, as long as the image can be clearly identified.
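Assuming the high classifier performs a boosting-style reweighting of training samples (the original equation images are not reproduced in the text, so this sketch is only an interpretation of "the weight of the training sample" and "normalization factor Zj"):

```python
import numpy as np

def reweight(w, alpha, y_true, y_pred):
    """One boosting round: misclassified samples gain weight,
    then Zj renormalizes the weights into a distribution."""
    w_new = w * np.exp(-alpha * y_true * y_pred)  # labels in {-1, +1}
    z = w_new.sum()                               # normalization factor Zj
    return w_new / z

w = np.full(4, 0.25)
y_true = np.array([1, 1, -1, -1])
y_pred = np.array([1, -1, -1, -1])  # sample 1 is misclassified
w2 = reweight(w, 0.5, y_true, y_pred)
print(w2.argmax())  # 1: the misclassified sample now carries the largest weight
```

This is the usual way a "high" classifier built on top of weak (low) classifiers focuses later rounds on the hardest samples.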
As a preferred embodiment, the display unit of the invention adopts an LED or LCD display screen. It is understood that in other embodiments, the display unit may also be a touch display screen, with image acquisition of the product triggered by touching a shooting button on the screen.
Example 1
As shown in fig. 2, an identification method applying the system of the present invention at least includes:
an off-line training process and an on-line identification process;
the off-line identification process comprises at least the following steps:
s11: the image acquisition unit for acquiring the pre-packaged image acquires an image and transmits the image acquired by the image acquisition unit to the preprocessing unit for preprocessing; s12: the image processed by the preprocessing unit is transmitted to a data learning unit; s13: the data learned by the data learning unit is transmitted to the storage unit for storage;
the online identification process at least comprises the following steps: s21: the image acquisition unit for acquiring the pre-packaged image acquires an image and transmits the image acquired by the image acquisition unit to the preprocessing unit for preprocessing; s22: the image processed by the preprocessing unit is transmitted to a data learning unit; s23: the identification unit calls the model stored in the storage unit for identification; s24: after recognition, the recognition result is displayed through the display unit.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A deep learning based prepackaging quality recognition system comprising at least:
an image acquisition unit for acquiring an image of the pre-packaged product; a preprocessing unit for preprocessing the image acquired by the image acquisition unit; a data learning unit for learning from the images processed by the preprocessing unit; a storage unit for storing the model learned by the data learning unit; a recognition unit for recognition using the model after the data learning unit has learned it; and a display unit for displaying the data recognized by the recognition unit;
the image acquisition unit acquires the pre-packaged image through a mobile phone or camera, and the acquired image is preprocessed by the preprocessing unit, which crops it to 40 x 40 pixels; the data learning unit learns from the preprocessed images and transmits the result to the storage unit for storage;
in use, the user collects a pre-packaged image; after the preprocessing unit preprocesses it, the recognition unit calls the model stored in the storage unit to perform recognition, and the recognition result is displayed on the display unit.
2. The deep learning based prepackaging quality recognition system of claim 1 further characterized by:
the preprocessing unit performs graying on the image; the graying establishes the correspondence between the luminance Y and the three color components R, G and B according to the YUV color space:
Y=0.3R+0.59G+0.11B;
where Y denotes the luminance of the pixel, reflecting its brightness level, R denotes red, G denotes green, and B denotes blue.
3. The deep learning based prepackaging quality recognition system of claim 1 further characterized by:
the data learning unit comprises a low classifier and a high classifier;
the low classifier adopts Bayesian theory, which is as follows:
P(y|X) = P(X|y)P(y) / P(X);
where y denotes the class variable and X denotes the dependent feature vector, X = (x1, x2, x3, ..., xn); assuming the features are mutually independent, this expands to
P(y|x1, x2, ..., xn) = P(y)P(x1|y)P(x2|y)...P(xn|y) / (P(x1)P(x2)...P(xn));
where xi denotes the i-th component of the feature vector X, P(y) denotes the class prior probability, and P(xi|y) denotes the conditional probability;
the high classifier updates the training sample weights as:
wj+1(i) = wj(i)exp(-αj yi hj(xi)) / Zj;
where wj(i) represents the weight of training sample i at round j and Zj represents a normalization factor.
4. The deep learning based prepackaging quality recognition system of claim 1 further characterized by: the display unit adopts an LED/LCD display screen.
CN201811151199.2A 2018-09-29 2018-09-29 Pre-packaging quality identification system based on deep learning Pending CN110969168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811151199.2A CN110969168A (en) 2018-09-29 2018-09-29 Pre-packaging quality identification system based on deep learning


Publications (1)

Publication Number Publication Date
CN110969168A true CN110969168A (en) 2020-04-07

Family

ID=70027478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811151199.2A Pending CN110969168A (en) 2018-09-29 2018-09-29 Pre-packaging quality identification system based on deep learning

Country Status (1)

Country Link
CN (1) CN110969168A (en)


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200407