CN114723936A - Method and device for automatic label modeling, storage medium and electronic equipment


Info

Publication number
CN114723936A
CN114723936A CN202210248818.XA
Authority
CN
China
Prior art keywords
label
block diagram
target
picture
edge block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210248818.XA
Other languages
Chinese (zh)
Inventor
陆华章
周叶笛
徐强
李致亮
熊伟
王晓琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202210248818.XA priority Critical patent/CN114723936A/en
Publication of CN114723936A publication Critical patent/CN114723936A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: … by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544: … by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10821: … further details of bar or optical code scanning devices
    • G06K 7/10861: … sensing of data fields affixed to objects or articles, e.g. coded labels
    • G06K 7/14: … using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404: Methods for optical code recognition
    • G06K 7/1439: … including a method step for retrieval of the optical code
    • G06K 7/1443: … locating of the code in an image
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks


Abstract

The invention discloses a method and device for automatic label modeling, a storage medium, and electronic equipment, and belongs to the technical field of automatic modeling. The method comprises: obtaining a suspected edge-frame diagram set from a packaging picture of a target box; filtering the suspected edge-frame diagram set with a convolutional neural network to obtain a candidate edge-frame diagram set; filtering the candidate edge-frame diagram set according to prior conditions to obtain a plurality of target edge-frame diagrams, where each target edge-frame diagram contains label information of the target box; and creating a label template of the target box from the target edge-frame diagrams and the packaging picture, where the label template is used to locate the pasting position of label information on the target box. The method and device solve the technical problems of the prior art that manual label modeling is inefficient and that modeling consistency cannot be guaranteed: modeling efficiency is improved and modeling consistency is ensured.

Description

Method and device for automatic label modeling, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of automatic modeling, in particular to a method and a device for automatic label modeling, a storage medium and electronic equipment.
Background
In the related art, to control the quality of label pasting on air-conditioner packaging boxes and to avoid wrongly pasted labels, visual inspection equipment is required, which judges whether the currently pasted label is correct against a previously built label template picture.
In the related art, modeling takes the form of manually outlining the frame of each label. Because there are many product models, and new models appear every day, this imposes a heavy workload on modeling personnel; moreover, manual modeling is subjective and cannot guarantee consistency between models.
In view of the above problems in the related art, no effective solution has been found at present.
Disclosure of Invention
To solve the problems of the prior art that manual label modeling is inefficient and that modeling consistency cannot be guaranteed, the invention provides a method and device for automatic label modeling, a storage medium, and electronic equipment.
According to an aspect of an embodiment of the present application, there is provided a method for automatic modeling of tags, including: acquiring a suspected edge block diagram set from a packaging image of a target box; filtering the suspected edge block diagram set by adopting a convolutional neural network to obtain a candidate edge block diagram set; filtering the candidate edge frame diagram sets according to prior conditions to obtain a plurality of target edge frame diagrams, wherein the target edge frame diagrams comprise label information of the target box body; and creating a label template of the target box body by adopting the plurality of target edge frame diagrams and the packaging pictures, wherein the label template is used for positioning the pasting position of label information on the target box body.
Further, obtaining the suspected edge-frame diagram set from the packaging picture of the target box comprises the following steps: acquiring a packaging picture of the outer surface of the target box; performing smoothing, filtering, and noise reduction on the packaging picture to obtain a first intermediate picture; and performing edge detection on the first intermediate picture and extracting the suspected edge-frame diagram set.
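As an illustration of the smoothing and noise-reduction step, the sketch below applies a small Gaussian filter to a grayscale packaging picture; the kernel size and sigma are illustrative assumptions, since the patent does not name a specific filter.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smooth(img, size=5, sigma=1.0):
    """Smooth a grayscale packaging picture to obtain the
    'first intermediate picture' (edge-padded direct convolution)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    return out
```

A real pipeline would typically use a library routine (for example an OpenCV blur) instead of this direct convolution; the sketch only makes the operation concrete.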
Further, performing edge detection on the first intermediate picture and extracting the suspected edge-frame diagram set includes: detecting edge straight-line features of the first intermediate picture; searching the first intermediate picture for a plurality of short straight lines according to the edge straight-line features, where a short straight line is a straight line whose length is smaller than a preset length; for each first short straight line among the plurality of short straight lines, searching the other short straight lines for a second straight-line segment that is parallel to the first short straight line and of the same length, combining the first short straight line and the second straight-line segment in pairs, and storing each pair into a parallel-line group; and combining the straight lines in the parallel-line groups into parallelograms to obtain the suspected edge-frame diagram set.
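The pairing and combination logic above can be sketched as follows; the angle and length tolerances are illustrative assumptions, and segments are plain (x1, y1, x2, y2) tuples as would come out of a line detector.

```python
import math
from itertools import combinations

def seg_angle(seg):
    """Orientation of a segment (x1, y1, x2, y2), folded into [0, pi)."""
    x1, y1, x2, y2 = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def seg_length(seg):
    x1, y1, x2, y2 = seg
    return math.hypot(x2 - x1, y2 - y1)

def parallel_groups(segments, ang_tol=0.05, len_tol=0.10):
    """Pair up segments that are (nearly) parallel and of (nearly)
    equal length, forming the parallel-line groups."""
    groups = []
    for a, b in combinations(segments, 2):
        if abs(seg_angle(a) - seg_angle(b)) < ang_tol and \
           abs(seg_length(a) - seg_length(b)) <= len_tol * max(seg_length(a), seg_length(b)):
            groups.append((a, b))
    return groups

def parallelogram_candidates(groups, ang_tol=0.1):
    """Combine two parallel-line groups of clearly different orientation
    into one suspected edge-frame (parallelogram) candidate."""
    quads = []
    for p, q in combinations(groups, 2):
        if abs(seg_angle(p[0]) - seg_angle(q[0])) > ang_tol:
            quads.append(p + q)  # the four sides of one suspected frame
    return quads
```

For the four sides of a rectangle, parallel_groups returns the two parallel pairs and parallelogram_candidates combines them into one suspected frame.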
Further, filtering the suspected edge-frame diagram set with a convolutional neural network to obtain the candidate edge-frame diagram set includes: for each suspected edge-frame diagram in the suspected edge-frame diagram set, converting the suspected edge-frame diagram into a picture vector; cyclically executing the following steps on the picture vector until the last pooling layer is reached: taking the picture vector as input, performing feature extraction on it through a first convolution layer and outputting a first feature map; taking the first feature map as input, down-sampling it in a first pooling layer and outputting a dimension-reduced second feature map; and determining the second feature map as the input of the next convolution layer; then inputting the target feature map output by the last pooling layer into a classification model and outputting classification information of the suspected edge-frame diagram, where the classification information indicates whether the corresponding suspected edge-frame diagram is a candidate edge-frame diagram.
Further, taking the picture vector as input, performing feature extraction on it through the first convolution layer, and outputting the first feature map includes: sequentially inputting the picture vector into M sliding windows of the first convolution layer and outputting M first feature maps, where the first convolution layer comprises M sliding windows with different weights and the picture vector executes the following step in each sliding window: multiplying the weights of the current sliding window by the gray values of the suspected edge-frame diagram and summing the products to obtain a feature value, M being an integer greater than 1.
Further, taking the first feature map as input, down-sampling it in the first pooling layer, and outputting the dimension-reduced second feature map includes: inputting the M first feature maps into M sliding windows of the first pooling layer and outputting M second feature maps, where the first pooling layer comprises M sliding windows with the same weight and the M first feature maps execute the following step in each sliding window: acquiring a first feature value of the feature map in the current sliding window, comparing it with the feature value in the previous sliding window, and determining the maximum feature value as that of the dimension-reduced second feature map, M being an integer greater than 1.
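The convolution and pooling steps described above amount to two simple sliding-window operations: a weighted sum of gray values for the convolution layer, and a per-window maximum for the pooling layer. The NumPy sketch below shows the generic operations with arbitrary sizes; it is not the claimed network and applies no activation function.

```python
import numpy as np

def conv_layer(img, kernels):
    """Slide M weighted windows over the picture; each output value is
    the weighted sum of the gray values under the window (stride 1)."""
    M, k, _ = kernels.shape
    H, W = img.shape
    out = np.zeros((M, H - k + 1, W - k + 1))
    for m in range(M):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[m, i, j] = (img[i:i + k, j:j + k] * kernels[m]).sum()
    return out

def max_pool(fmaps, k=2):
    """Down-sample each feature map by keeping the maximum feature
    value in each k x k window (the dimension-reduced second map)."""
    M, H, W = fmaps.shape
    out = np.zeros((M, H // k, W // k))
    for m in range(M):
        for i in range(H // k):
            for j in range(W // k):
                out[m, i, j] = fmaps[m, i * k:(i + 1) * k,
                                     j * k:(j + 1) * k].max()
    return out
```

For a 28 x 28 picture and M = 4 windows of size 3 x 3, conv_layer yields four 26 x 26 first feature maps and max_pool reduces each to 13 x 13; in the patent's loop, the pooled maps become the input of the next convolution layer.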
Further, filtering the candidate edge-frame diagram set according to the prior conditions to obtain the plurality of target edge-frame diagrams includes: for each candidate edge-frame diagram in the candidate edge-frame diagram set, calculating the label parameters of that candidate edge-frame diagram; judging whether the label parameters satisfy a prior condition, where the prior condition includes at least one of: the minimum width and height of the label, the maximum aspect ratio of the label, the minimum aspect ratio of the label, the presence of characters in the label, and the hue-difference ratio between the overall hue of the label and that of the box; and, if the prior condition is satisfied, judging the candidate edge-frame diagram to be a target edge-frame diagram.
Further, calculating the label parameters of the candidate edge-frame diagram comprises: calculating the distance between the two parallel straight lines of each pair in the candidate edge-frame diagram, determining the transverse distance as the width, the longitudinal distance as the height, and the ratio of the transverse distance to the longitudinal distance as the aspect ratio, where the label parameters include the width, the height, and the aspect ratio; judging through optical character recognition (OCR) whether the candidate edge-frame diagram contains characters; and calculating a first gray value of the candidate edge-frame diagram, calculating a second gray value of the packaging picture, and determining the ratio between the first gray value and the second gray value as the hue-difference ratio.
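A minimal sketch of the label-parameter calculation and prior-condition check follows; all threshold values are illustrative assumptions (the patent does not state concrete numbers), and the OCR result is passed in as a boolean rather than computed here.

```python
def label_parameters(transverse_dist, longitudinal_dist,
                     label_gray, package_gray):
    """Width, height, aspect ratio, and hue-difference ratio of one
    candidate edge frame, per the calculations described above."""
    return {
        "width": transverse_dist,
        "height": longitudinal_dist,
        "aspect": transverse_dist / longitudinal_dist,
        "hue_ratio": label_gray / package_gray,
    }

def is_target_frame(params, has_text, *, min_w=40, min_h=20,
                    min_aspect=0.25, max_aspect=6.0, hue_margin=0.15):
    """Keep a candidate only if it satisfies the prior conditions:
    minimum width/height, aspect-ratio bounds, characters present,
    and an overall hue clearly different from the box background.
    All thresholds here are illustrative, not from the patent."""
    return (params["width"] >= min_w
            and params["height"] >= min_h
            and min_aspect <= params["aspect"] <= max_aspect
            and has_text
            and abs(params["hue_ratio"] - 1.0) >= hue_margin)
```

The last condition treats a hue-difference ratio close to 1 as a frame that does not stand out from the box background; this reading of the hue condition is an assumption.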
According to another aspect of the embodiments of the present application, there is also provided an automatic tag modeling apparatus, including: the acquisition module is used for acquiring a suspected edge block diagram set from a packaging image of the target box body; the first filtering module is used for filtering the suspected edge block diagram set by adopting a convolutional neural network to obtain a candidate edge block diagram set; the second filtering module is used for filtering the candidate edge frame diagram sets according to prior conditions to obtain a plurality of target edge frame diagrams, wherein the target edge frame diagrams comprise label information of the target box body; and the creating module is used for creating a label template of the target box body by adopting the target edge frame diagrams and the packaging pictures, wherein the label template is used for positioning the pasting position of label information on the target box body.
Further, the acquisition module comprises: the acquisition unit is used for acquiring a packaging picture of the outer surface of the target box body; the processing unit is used for carrying out smooth filtering and noise reduction processing on the packaged picture to obtain a first intermediate picture; and the extracting unit is used for carrying out edge detection on the first intermediate picture and extracting a suspected edge block diagram set.
Further, the extraction unit includes: the detection subunit is used for detecting the edge straight line characteristic of the first intermediate picture; the searching subunit is configured to search a plurality of short straight lines in the first intermediate picture according to the edge straight line feature, where the short straight lines are straight lines with a length smaller than a preset length; the storage subunit is used for searching a second straight line segment which is parallel to the track of each first short straight line in the plurality of short straight lines and has the same length as the track of the first short straight line in other short straight lines, and storing the first short straight line and the second straight line segment into a parallel line group after pairwise combination; and the combination subunit is used for carrying out parallelogram combination on the straight lines in the parallel line group to obtain a suspected edge block diagram set.
Further, the first filtering module includes: a conversion unit, configured to convert, for each suspected edge block diagram in the suspected edge block diagram set, the suspected edge block diagram into a picture vector; an execution unit, configured to perform the following steps in a loop using the picture vector until the last pooling layer: taking the picture vector as an input quantity, performing feature extraction on the picture vector through a first convolution layer, and outputting a first feature map; taking the first feature map as an input quantity, performing down-sampling on the first feature map in a first pooling layer, and outputting a second feature map after dimension reduction; and determining the second feature map as an input quantity of a next convolutional layer; and an output unit, configured to input the target feature map output by the last pooling layer into a classification model and output classification information of the suspected edge block diagram, wherein the classification information is used for indicating whether the corresponding suspected edge block diagram is a candidate edge block diagram.
Further, the execution unit includes: a first output subunit, configured to sequentially input the picture vector into M sliding windows of a first convolution layer, and output M first feature maps, where the first convolution layer includes M sliding windows with different weights, and the picture vector performs the following steps in each sliding window: and multiplying the weight value of the current sliding window and the gray value of the suspected edge block diagram and summing to obtain a characteristic value, wherein M is an integer greater than 1.
Further, the execution unit further includes: a second output subunit, configured to input the M first feature maps into M sliding windows of a first pooling layer, and output M second feature maps, where the first pooling layer includes M sliding windows with the same weight, and the M first feature maps perform the following steps in each sliding window: and acquiring a first characteristic value of a first characteristic diagram in the current sliding window, comparing the first characteristic value with a second characteristic value of a second characteristic diagram in the previous sliding window, and determining the characteristic diagram with the maximum characteristic value as the second characteristic diagram after dimension reduction, wherein M is an integer greater than 1.
Further, the second filter module includes: a calculating unit, configured to calculate, for each candidate edge block in the candidate edge block set, a label parameter of the candidate edge block; a judging unit, configured to judge whether the tag parameter satisfies a prior condition, where the prior condition includes at least one of: the minimum width and height of the label, the maximum width-height ratio of the label, the minimum width-height ratio of the label, characters in the label, and the color tone difference ratio of the whole color tone of the label and the box body; and the determining unit is used for judging that the candidate edge frame is the target edge frame if the prior condition is met.
Further, the calculation unit includes: a first calculating subunit, configured to calculate a distance value between two parallel straight lines in the candidate edge frame, determine a transverse distance as a width value, determine a longitudinal distance as a height value, and determine a ratio between the transverse distance and the longitudinal distance as an aspect ratio, where the label parameter includes: width value, height value, aspect ratio; the judging subunit is used for judging whether the candidate edge frame contains characters or not through Optical Character Recognition (OCR); and the second calculating subunit is used for calculating a first gray value of the candidate edge frame diagram, calculating a second gray value of the packaging picture, and determining a ratio between the first gray value and the second gray value as a hue difference ratio.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program, wherein the program performs the above-mentioned method steps when executed.
According to another aspect of the embodiments of the present application, there is also provided an electronic device comprising a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another via the communication bus; the memory is used to store a computer program, and the processor executes the above method steps by running the program stored in the memory.
Embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the steps of the above method.
By the above method, a suspected edge-frame diagram set is obtained from the packaging picture of a target box; the suspected edge-frame diagram set is filtered with a convolutional neural network to obtain a candidate edge-frame diagram set; the candidate edge-frame diagram set is filtered according to prior conditions to obtain a plurality of target edge-frame diagrams, each containing label information of the target box; and a label template of the target box is created from the target edge-frame diagrams and the packaging picture, the label template being used to locate the pasting position of label information on the target box. Because the target edge-frame diagrams are screened from the suspected set by a convolutional neural network together with prior conditions, and the label template is created from those diagrams, the consistency and accuracy of the label template are ensured and the creation of the template is fully automatic. This solves the technical problems of the prior art that label modeling is inefficient and that modeling consistency cannot be guaranteed, increases modeling efficiency, guarantees modeling consistency, and enables labels, product models, and bar codes to be marked automatically.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of an automatic tag modeling apparatus according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method for automatic modeling of tags, in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scenario of an embodiment of the present invention;
FIG. 4 is a workflow diagram of an embodiment of the present invention;
fig. 5 is a block diagram of a tag automatic modeling apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application. It should be noted that, in the absence of conflict, the embodiments of the present application and the features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method provided in the first embodiment of the present application may be executed on a computer, a server, a control device, or a similar computing device. Taking a computer as an example, fig. 1 is a block diagram of the hardware structure of a tag automatic-modeling apparatus according to an embodiment of the present invention. As shown in fig. 1, the apparatus may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and optionally a transmission device 106 for communication and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the apparatus, which may include more or fewer components than shown in fig. 1 or have a different configuration.
The memory 104 may be used to store programs for the automatic tag modeling, for example software programs and modules of application software, such as the automatic modeling program corresponding to the method for automatic label modeling in an embodiment of the present invention. The processor 102 executes various functional applications and data processing, thereby implementing the method, by running the automatic modeling program stored in the memory 104. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102 and connected to the tag automatic-modeling apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network, for example a wireless network provided by the communication carrier of the tag automatic-modeling apparatus. In one example, the transmission device 106 includes a network interface controller (NIC) that can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a radio-frequency (RF) module used to communicate with the internet wirelessly.
In this embodiment, a method for tag automatic modeling is provided, and fig. 2 is a flowchart of a method for tag automatic modeling according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, a suspected edge block diagram set is obtained from a packaging image of a target box body;
in one implementation of this embodiment, several images with edge frames, such as bar-code images, energy-efficiency identifiers, product logos, and notice icons, are attached to the outer packaging surface of the target box. All of these edge frames are suspected edge frames, and all suspected edge frames obtained from the packaging picture of the target box form a set. Optionally, a frame diagram in this embodiment may be a square, a rectangle, or a hexagon; in some special scenes it may also be a trapezoid, a rhombus, a triangle, or a pentagon.
Step S204, filtering the suspected edge block diagram set by adopting a convolutional neural network to obtain a candidate edge block diagram set;
in one implementation of this embodiment, a convolutional-neural-network binary classification model is used to filter the suspected edge-frame diagram set. The binary classification model is first trained on two types of data: local background pictures of the carton, and pictures of labels. The resulting model distinguishes these two types; filtering the suspected edge-frame set through it yields the candidate edge-frame set. A candidate edge-frame diagram is thus a label-edge diagram that passes this classification filter; further filtering then yields the target edge-frame diagrams.
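The patent trains a CNN on the two data types; as a compact stand-in, the sketch below trains a plain logistic-regression classifier on synthetic "carton background" versus "label" patches to illustrate the two-class training scheme. The synthetic data, the classifier choice, and all hyperparameters are illustrative assumptions, not the claimed model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patches(n, mean):
    """Synthetic 8 x 8 gray patches, flattened to 64-vectors."""
    return rng.normal(mean, 10.0, size=(n, 64)).clip(0, 255)

# Class 0: local background pictures of the carton (darker here);
# class 1: pictures of labels (brighter here).
X = np.vstack([make_patches(200, 90.0), make_patches(200, 180.0)]) / 255.0
y = np.array([0] * 200 + [1] * 200)

# Train by plain gradient descent on the logistic loss.
w, b = np.zeros(64), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Suspected frames scored as class 1 would be kept as candidate edge frames; the patent's convolutional classifier plays the same role with far more capacity.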
Step S206, filtering the candidate edge frame diagram sets according to prior conditions to obtain a plurality of target edge frame diagrams, wherein the target edge frame diagrams comprise label information of the target box body;
in one implementation of this embodiment, the prior conditions are used to select, from the candidate edge-frame diagram set, the target edge-frame diagrams that meet the conditions. The prior conditions include parameters such as the size and proportions of a standard label frame, and each target edge-frame diagram contains label information of the target box.
And S208, creating a label template of the target box body by adopting the plurality of target edge frame diagrams and the packaging pictures, wherein the label template is used for positioning the pasting position of label information on the target box body.
In one implementation of this embodiment, the label template includes the coordinate positions of the target edge-frame diagrams on the target box; storing this coordinate information together with the corresponding packaging picture in the database completes the automatic modeling of the labels.
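Completing the modeling can then be as simple as writing one database row per target frame: its coordinates on the box face plus a reference to the packaging picture. A minimal SQLite sketch follows; the table layout, model name, and coordinate values are hypothetical, not taken from the patent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real deployment would use a file or server DB
conn.execute("""CREATE TABLE label_template (
                  box_model     TEXT,
                  label_name    TEXT,
                  x INTEGER, y INTEGER,   -- top-left corner on the box face
                  w INTEGER, h INTEGER,   -- frame width and height
                  package_image BLOB)""")

# One target edge frame found by the pipeline (hypothetical values).
conn.execute("INSERT INTO label_template VALUES (?, ?, ?, ?, ?, ?, ?)",
             ("MODEL-A", "energy_label", 120, 80, 200, 140, b"<png bytes>"))
conn.commit()

# The visual-inspection station later looks up where the label should sit.
x, y, w, h = conn.execute(
    "SELECT x, y, w, h FROM label_template "
    "WHERE box_model = ? AND label_name = ?",
    ("MODEL-A", "energy_label")).fetchone()
```

The inspection equipment can query this table by box model to locate each expected label position before comparing it with the pasted label.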
After the frame position information is obtained, a bar-code picture attached within the target frame can be recognized to obtain the product bar code. Because the product model is associated with the bar code, the model can be queried over the network once the bar-code content has been scanned. These steps solve the problem of how to automatically mark the product model, bar code, and label.
Meanwhile, because the target edge block diagram is associated with data such as the product model, the barcode and the label coordinate position, this embodiment can also check whether the label is pasted correctly. Since the character content and size on the label are provided to the packaging production line in advance, after the label is pasted, the successfully modeled label template can be used to judge whether the label content is blurred or whether marks, barcodes and other printed elements are illegible.
Through the above steps, a suspected edge block diagram set is obtained from the packaging picture of the target box; the suspected edge block diagram set is filtered with a convolutional neural network to obtain a candidate edge block diagram set; the candidate edge block diagram set is filtered according to prior conditions to obtain a plurality of target edge block diagrams, where the target edge block diagrams include the label information of the target box; and a label template of the target box is created from the plurality of target edge block diagrams and the packaging picture, where the label template is used to locate the pasting position of the label information on the target box. Because the target edge block diagrams are screened from the suspected edge block diagram set of the packaging picture by a convolutional neural network together with prior conditions, and the label template of the target box is created from these target edge block diagrams, the consistency and accuracy of the label template are ensured, and an automatic creation scheme for label templates is realized. This solves the technical problems in the prior art that label modeling is inefficient and modeling consistency cannot be guaranteed; it increases modeling efficiency, ensures modeling consistency, and enables labels, product models and barcodes to be marked automatically.
In this embodiment, obtaining the suspected edge block diagram set from the packaging picture of the target box includes: acquiring a packaging picture of the outer surface of the target box; performing smoothing filtering and noise reduction on the packaging picture to obtain a first intermediate picture; and performing edge detection on the first intermediate picture to extract the suspected edge block diagram set.
In the above steps, after a packaging picture of the outer surface of the target box is captured by a camera, the picture is transmitted to a computer for processing. Smoothing filtering and noise reduction, for example mean filtering and median filtering, are performed on the packaging picture to obtain a first intermediate picture; edge detection is then performed, and the suspected edge block diagrams on the packaging picture are extracted and collected into a set.
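The noise reduction step above can be sketched as follows. This is a minimal NumPy illustration of median filtering on a 2-D grayscale image, written for clarity rather than speed; a production system would typically use library routines such as OpenCV's `medianBlur` and `Canny` instead.

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D grayscale image.

    Borders are handled by edge replication; each output pixel is the
    median of the k x k neighborhood around it, which suppresses
    salt-and-pepper noise before edge detection.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

For example, a single bright noise pixel surrounded by dark background is removed, because the median of its neighborhood is the background value.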
In this embodiment, performing edge detection on the first intermediate picture, and extracting a suspected edge block set includes: detecting edge straight line characteristics of the first intermediate picture; searching a plurality of short straight lines in the first intermediate picture according to the edge straight line characteristics, wherein the short straight lines are straight lines with the length smaller than a preset length; aiming at each first short straight line in the short straight lines, searching a second straight line section which is parallel to the track of the first short straight line and has the same length with the track of the first short straight line in other short straight lines, combining the first short straight line and the second straight line section in pairs and storing the combined first short straight line and second straight line section in a parallel line group; and carrying out parallelogram combination on the straight lines in the parallel line group to obtain a suspected edge block diagram set.
In the above steps, each short straight line is found according to the edge straight line features of the picture. Then, for each short straight line, another straight line segment with a parallel track and the same length is searched for along its contour, and the two are stored together as a parallel line group. Finally, parallelogram combination is performed on the straight lines in the parallel line groups: any adjacent parallel line groups that can be closed into a quadrilateral are selected and combined to form a suspected edge block diagram, yielding a plurality of suspected edge block diagrams.
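The pairing of short straight lines into parallel line groups can be sketched as follows. The segment representation `(x1, y1, x2, y2)`, the tolerances, and the pairwise grouping criterion are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def is_parallel_pair(s1, s2, len_tol=2.0, ang_tol=0.02):
    """True if two segments (x1, y1, x2, y2) have parallel tracks
    and (nearly) the same length."""
    d1 = np.array(s1[2:], dtype=float) - np.array(s1[:2], dtype=float)
    d2 = np.array(s2[2:], dtype=float) - np.array(s2[:2], dtype=float)
    l1, l2 = np.linalg.norm(d1), np.linalg.norm(d2)
    if abs(l1 - l2) > len_tol:
        return False
    # cross product near zero => the two directions are parallel
    cross = abs(d1[0] * d2[1] - d1[1] * d2[0])
    return cross <= ang_tol * l1 * l2

def parallel_groups(segments):
    """Combine short segments pairwise into parallel line groups."""
    groups = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if is_parallel_pair(segments[i], segments[j]):
                groups.append((segments[i], segments[j]))
    return groups
```

Closing two such groups into a quadrilateral (the parallelogram combination step) would then check that the groups are adjacent and that their endpoints can be joined.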
In this embodiment, filtering the suspected edge block diagram set by using a convolutional neural network to obtain a candidate edge block diagram set includes: for each suspected edge block diagram in the suspected edge block diagram set, converting the suspected edge block diagram into a picture vector; circularly executing the following steps with the picture vector until the last pooling layer: taking the picture vector as an input quantity, performing feature extraction on the picture vector through a first convolutional layer, and outputting a first feature map; taking the first feature map as an input quantity, down-sampling the first feature map in a first pooling layer, and outputting a dimension-reduced second feature map; determining the second feature map as the input quantity of the next convolutional layer; and inputting the target feature map output by the last pooling layer into a classification model, and outputting classification information of the suspected edge block diagram, where the classification information is used to indicate whether the corresponding suspected edge block diagram is a candidate edge block diagram.
In the above steps, the suspected edge block diagrams are filtered by the binary classification model of the convolutional neural network to obtain the candidate edge block diagram set. The binary classification model has 7 layers: the first layer is a 25 × 7 × 7 convolutional layer, the second a 25 × 7 × 7 pooling layer, the third a 75 × 9 × 9 convolutional layer, the fourth a 75 × 9 × 9 pooling layer, the fifth a 25 × 7 × 7 convolutional layer, the sixth a 25 × 7 × 7 pooling layer, and the last layer is a classification layer.
In the training process of the convolutional neural network binary classification model, the model is trained with two types of data until a model capable of distinguishing them is obtained. The two types of data are: a plurality of label pictures and local pictures of the carton. After the model is trained, a picture to be detected is input into the model, and the model judges whether the picture is a label or a local picture of the carton. Inside the model, after a picture is input, a dimension-reduced feature map is obtained through convolution and pooling, and the feature map is then sent to a classification layer, for example a softmax classifier, to determine which type the input picture belongs to, that is, whether it is a label picture or a local picture of the carton.
In this embodiment, the function of the convolutional layer is to perform feature extraction on the input to obtain a feature map. The function of the pooling layer is to down-sample the feature map produced by the previous convolutional layer to obtain a feature map of reduced dimension. The function of the classification layer is to classify the input dimension-reduced feature map, for example by softmax logistic regression.
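The classification layer described above can be sketched as a two-class softmax. The weight shapes and the class assignment (class 0 = carton background patch, class 1 = label patch) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()  # subtracting the max avoids overflow in exp
    e = np.exp(z)
    return e / e.sum()

def classify(feature_vec, W, b):
    """Two-class softmax layer applied to a flattened feature map.

    Returns (class probabilities, predicted class index); here class 0
    stands for a carton background patch and class 1 for a label patch.
    """
    probs = softmax(W @ feature_vec + b)
    return probs, int(np.argmax(probs))
```

In a trained model, `W` and `b` would be learned from the two types of training data described above.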
In this embodiment, performing feature extraction on the picture vector by using the picture vector as an input quantity through a first convolution layer, and outputting a first feature map includes: sequentially inputting the picture vectors into M sliding windows of a first convolution layer, and outputting M first feature maps, wherein the first convolution layer comprises M sliding windows with different weights, and the picture vectors execute the following steps in each sliding window: and multiplying the weight value of the current sliding window and the gray value of the suspected edge block diagram and summing to obtain a characteristic value, wherein M is an integer greater than 1.
In one example, M is 25. For the 25 × 7 × 7 convolutional layer, 7 × 7 indicates that a window with 7 × 7 weights slides over the input picture; each weight value in the window is multiplied by the corresponding picture gray value, and the sum is taken as the output. After the window has slid over the whole input picture, one feature map is obtained. 25 indicates 25 windows with different weights, so 25 different feature maps are finally obtained.
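The sliding-window convolution described above can be sketched as follows — a minimal "valid" convolution that produces one feature map per weight window, with M windows giving M feature maps. The implementation is illustrative; real systems use optimized library routines.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Slide one weight window over the image; at each position multiply
    the weights by the underlying gray values and sum, yielding one
    feature map ('valid' mode: no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def conv_layer(img, kernels):
    """M kernels with different weights -> M feature maps."""
    return [conv2d_valid(img, k) for k in kernels]
```

For a 9 × 9 input and a 7 × 7 window, each feature map is 3 × 3; with 25 kernels the layer outputs 25 feature maps.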
In this embodiment, the outputting a reduced-dimension second feature map by down-sampling the first feature map in a first pooling layer using the first feature map as an input amount includes: inputting the M first feature maps into M sliding windows of a first pooling layer, and outputting M second feature maps, wherein the first pooling layer comprises M sliding windows with the same weight, and the M first feature maps execute the following steps in each sliding window: and acquiring a first characteristic value of a first characteristic diagram in the current sliding window, comparing the first characteristic value with a second characteristic value of a second characteristic diagram in the previous sliding window, and determining the characteristic diagram with the maximum characteristic value as the second characteristic diagram after dimension reduction, wherein M is an integer greater than 1.
In the above steps, for the 25 × 7 × 7 pooling layer, 7 × 7 indicates that a 7 × 7 window slides over the input feature map and the maximum gray value in the window is taken as the output, yielding a down-sampled feature map; 25 indicates that the 25 feature maps of the previous layer are each down-sampled.
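The max-pooling step can be sketched in the same style. Window size and stride are parameters here; the 2 × 2 defaults are illustrative, while the example in the text uses a 7 × 7 window.

```python
import numpy as np

def max_pool(fmap, k=2, stride=2):
    """Slide a k x k window over a feature map and keep the maximum
    value in each window, producing a down-sampled feature map."""
    h, w = fmap.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    out = np.empty((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            ys, xs = y * stride, x * stride
            out[y, x] = fmap[ys:ys + k, xs:xs + k].max()
    return out
```

Applying this to each of the 25 feature maps from the previous layer down-samples all of them, as the text describes.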
In this embodiment, filtering the candidate edge block diagram set according to the prior conditions to obtain a plurality of target edge block diagrams includes: calculating, for each candidate edge block diagram in the candidate edge block diagram set, the label parameters of the candidate edge block diagram; judging whether the label parameters satisfy a prior condition, where the prior condition includes at least one of the following: the minimum width and height of the label, the maximum aspect ratio of the label, the minimum aspect ratio of the label, characters in the label, and the hue difference ratio between the overall hue of the label and that of the box; and if the prior condition is satisfied, judging that the candidate edge block diagram is a target edge block diagram.
In the above steps, for each candidate edge block diagram, whether it is a label edge block diagram is judged by calculating its label parameters and comparing them against the prior conditions; when at least one of the prior conditions is satisfied, the candidate edge block diagram is judged to be a label edge block diagram, that is, a target edge block diagram is obtained.
In one implementation of this embodiment, the suspected edge block diagram set obtained from the packaging picture of the target box contains various logo diagrams, barcode diagrams, product-caution diagrams, product marks and the like. After filtering through the convolutional neural network binary classification model, candidate edge block diagrams are obtained, but some non-label candidate edge block diagrams that do not meet the requirements still remain, and the label edge block diagrams that do meet the requirements are further screened out by judging the prior conditions. Different prior conditions can be set for different product packages; although labels are various, the label type of each product can be determined in advance before production on the packaging line, so the method can be directly applied to actual production.
In this embodiment, calculating the label parameter of the candidate edge block diagram includes: calculating a distance value of two parallel straight lines in the candidate edge frame diagram, determining a transverse distance as a width value, determining a longitudinal distance as a height value, and determining a ratio between the transverse distance and the longitudinal distance as an aspect ratio, wherein the label parameters include: width value, height value, aspect ratio; judging whether the candidate edge frame contains characters through Optical Character Recognition (OCR); calculating a first gray value of the candidate edge frame diagram, calculating a second gray value of the packaging picture, and determining a ratio between the first gray value and the second gray value as a hue difference ratio.
In the above steps, the width and height of the label can be obtained by calculating the distance between the two pairs of parallel straight lines of the candidate edge block diagram, and these values are compared with the minimum width and height, the maximum aspect ratio and the minimum aspect ratio of the label in the prior conditions. Optical character recognition (OCR) is then used to identify whether the candidate edge block diagram contains characters. In addition, the target box has a gray value and the candidate edge block diagram also has a gray value; comparing the two gray values gives the hue difference ratio, and a candidate edge block diagram whose hue difference ratio does not satisfy the prior condition is deleted.
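The prior-condition check can be sketched as a single predicate. All threshold values and parameter names below are hypothetical placeholders, since in practice they would be set per product package before production.

```python
def passes_prior(w, h, mean_gray_box, mean_gray_cand, text_found,
                 min_w=40, min_h=20, min_ratio=1.2, max_ratio=8.0,
                 min_hue_diff=1.5):
    """Check a candidate box against illustrative prior conditions:
    minimum width/height, aspect-ratio bounds, OCR text presence, and
    the hue (gray-level) difference ratio between label and carton."""
    if w < min_w or h < min_h:
        return False
    ratio = w / h
    if not (min_ratio <= ratio <= max_ratio):
        return False
    if not text_found:
        return False
    # a label is usually much lighter than the carton background,
    # so a candidate whose gray level is close to the box's is rejected
    hue_diff = (max(mean_gray_cand, mean_gray_box)
                / max(1e-6, min(mean_gray_cand, mean_gray_box)))
    return hue_diff >= min_hue_diff
```

A bright label patch on a darker carton passes; a patch with nearly the same gray level as the carton is filtered out.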
Fig. 3 is a schematic diagram of a scene according to an embodiment of the present invention: a picture of the box is captured by a camera, the suspected edge block diagrams carrying labels are uploaded to a computer for processing, the result is finally displayed on the display interface, and automatic modeling is completed.
Fig. 4 is a flowchart of the working process of the embodiment of the present invention: a picture is taken, suspected quadrilateral frames are obtained, non-label quadrilateral frames are filtered out by a deep learning model (the convolutional neural network binary classification model), quadrilateral frames that do not conform to the prior conditions of the label are filtered out, and the quadrilateral block diagrams of the labels are finally obtained.
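The workflow of Fig. 4 can be sketched as a small pipeline. The three stage functions here are assumed callables standing in for the detection, CNN filtering and prior-condition steps; the function and key names are illustrative.

```python
def build_label_template(photo, detect_boxes, cnn_filter, prior_filter):
    """Pipeline sketch of Fig. 4: photo -> suspected quadrilateral
    frames -> CNN-filtered candidates -> prior-condition-filtered
    target frames, collected into a template record."""
    suspected = detect_boxes(photo)                      # take picture, find frames
    candidates = [b for b in suspected if cnn_filter(b)]  # drop non-label frames
    targets = [b for b in candidates if prior_filter(b)]  # drop non-conforming frames
    return {"photo": photo, "labels": targets}
```

Each stage can be swapped independently, which mirrors how the embodiment separates edge detection, the binary classification model, and the prior conditions.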
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method according to the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is the better implementation in many cases. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes several instructions for enabling a device (e.g., an automatic label modeling apparatus) to execute the method according to the embodiments of the present invention.
Example 2
This embodiment also provides an automatic label modeling apparatus, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram illustrating a structure of an automatic tag modeling apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including: an acquisition module 50, a first filtering module 52, a second filtering module 54, a creation module 56, wherein,
an obtaining module 50, configured to obtain a suspected edge block set from a packaging image of a target box;
a first filtering module 52, configured to filter the suspected edge frame set by using a convolutional neural network to obtain a candidate edge frame set;
a second filtering module 54, configured to filter the candidate edge box set according to a priori condition to obtain a plurality of target edge boxes, where the target edge boxes include tag information of the target box;
a creating module 56, configured to create a label template of the target box by using the plurality of target edge diagrams and the package picture, where the label template is used to locate a pasting position of label information on the target box.
Optionally, the obtaining module includes: the acquisition unit is used for acquiring a packaging picture of the outer surface of the target box body; the processing unit is used for carrying out smooth filtering and noise reduction processing on the packaging picture to obtain a first intermediate picture; and the extracting unit is used for carrying out edge detection on the first intermediate picture and extracting a suspected edge block diagram set.
Optionally, the extracting unit includes: the detection subunit is used for detecting the edge straight line characteristic of the first intermediate picture; the searching subunit is configured to search a plurality of short straight lines in the first intermediate picture according to the edge straight line feature, where the short straight lines are straight lines with a length smaller than a preset length; the storage subunit is used for searching a second straight line segment which is parallel to the track of each first short straight line in the plurality of short straight lines and has the same length as the track of the first short straight line in other short straight lines, and storing the first short straight line and the second straight line segment into a parallel line group after pairwise combination; and the combination subunit is used for carrying out parallelogram combination on the straight lines in the parallel line group to obtain a suspected edge block diagram set.
Optionally, the first filtering module includes: a conversion unit, configured to convert, for each suspected edge block diagram in the suspected edge block diagram set, the suspected edge block diagram into a picture vector; an execution unit, configured to perform the following steps in a loop using the picture vector until the last pooling layer: taking the picture vector as an input quantity, performing feature extraction on the picture vector through a first convolution layer, and outputting a first feature map; taking the first feature map as an input quantity, performing down-sampling on the first feature map in a first pooling layer, and outputting a second feature map after dimension reduction; determining the second feature map as an input quantity of a next convolutional layer; and an output unit, configured to input the target feature map output by the last pooling layer into the classification model and output the classification information of the suspected edge block diagram, where the classification information is used to indicate whether the corresponding suspected edge block diagram is a candidate edge block diagram.
Optionally, the execution unit includes: a first output subunit, configured to sequentially input the picture vector into M sliding windows of a first convolution layer, and output M first feature maps, where the first convolution layer includes M sliding windows with different weights, and the picture vector performs the following steps in each sliding window: and multiplying the weight value of the current sliding window and the gray value of the suspected edge block diagram and summing to obtain a characteristic value, wherein M is an integer greater than 1.
Optionally, the execution unit further includes: a second output subunit, configured to input the M first feature maps into M sliding windows in a first pooling layer, and output M second feature maps, where the first pooling layer includes M sliding windows with the same weight, and the M first feature maps perform the following steps in each sliding window: and acquiring a first characteristic value of a first characteristic diagram in the current sliding window, comparing the first characteristic value with a second characteristic value of a second characteristic diagram in the previous sliding window, and determining the characteristic diagram with the maximum characteristic value as the second characteristic diagram after dimension reduction, wherein M is an integer greater than 1.
Optionally, the second filtering module includes: a calculating unit, configured to calculate, for each candidate edge block in the candidate edge block set, a label parameter of the candidate edge block; a judging unit, configured to judge whether the tag parameter satisfies a prior condition, where the prior condition includes at least one of: the minimum width and height of the label, the maximum width-height ratio of the label, the minimum width-height ratio of the label, characters in the label, and the color tone difference ratio of the whole color tone of the label and the box body; and the determining unit is used for judging that the candidate edge frame is the target edge frame if the prior condition is met.
Optionally, the computing unit includes: a first calculating subunit, configured to calculate a distance value between two parallel straight lines in the candidate edge block diagram, determine a transverse distance as a width value, determine a longitudinal distance as a height value, and determine a ratio between the transverse distance and the longitudinal distance as an aspect ratio, where the label parameter includes: width value, height value, aspect ratio; the judging subunit is used for judging whether the candidate edge frame contains characters or not through Optical Character Recognition (OCR); and the second calculating subunit is used for calculating a first gray value of the candidate edge frame diagram, calculating a second gray value of the packaging picture, and determining a ratio between the first gray value and the second gray value as a hue difference ratio.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
An embodiment of the present invention further provides a storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps in any of the method embodiments described above when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a suspected edge block diagram set from a packaging image of the target box;
s2, filtering the suspected edge block diagram set by adopting a convolutional neural network to obtain a candidate edge block diagram set;
s3, filtering the candidate edge frame diagram sets according to prior conditions to obtain a plurality of target edge frame diagrams, wherein the target edge frame diagrams include label information of the target box body;
s4, creating a label template of the target box body by adopting the target edge frame diagrams and the packaging pictures, wherein the label template is used for positioning the pasting position of label information on the target box body.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a suspected edge block diagram set from the packaging image of the target box;
s2, filtering the suspected edge block diagram set by adopting a convolutional neural network to obtain a candidate edge block diagram set;
s3, filtering the candidate edge frame diagram sets according to prior conditions to obtain a plurality of target edge frame diagrams, wherein the target edge frame diagrams include label information of the target box body;
s4, creating a label template of the target box body by adopting the target edge frame diagrams and the packaging pictures, wherein the label template is used for positioning the pasting position of label information on the target box body.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that, as will be apparent to those skilled in the art, numerous modifications and adaptations can be made without departing from the principles of the present application and such modifications and adaptations are intended to be considered within the scope of the present application.

Claims (11)

1. A method for automatic modeling of tags, comprising:
acquiring a suspected edge block diagram set from a packaging image of a target box;
filtering the suspected edge block diagram set by adopting a convolutional neural network to obtain a candidate edge block diagram set;
filtering the candidate edge frame diagram sets according to prior conditions to obtain a plurality of target edge frame diagrams, wherein the target edge frame diagrams comprise label information of the target box body;
and creating a label template of the target box body by adopting the plurality of target edge frame diagrams and the packaging pictures, wherein the label template is used for positioning the pasting position of label information on the target box body.
2. The method of claim 1, wherein obtaining the set of suspect edge frames from the packaging images of the target container comprises:
acquiring a packaging picture of the outer surface of the target box body;
performing smooth filtering and noise reduction processing on the packaging picture to obtain a first intermediate picture;
and carrying out edge detection on the first intermediate picture, and extracting a suspected edge block diagram set.
3. The method of claim 2, wherein performing edge detection on the first intermediate picture and extracting a set of suspected edge frames comprises:
detecting edge straight line characteristics of the first intermediate picture;
searching a plurality of short straight lines in the first intermediate picture according to the edge straight line characteristics, wherein the short straight lines are straight lines with the length smaller than a preset length;
aiming at each first short straight line in the short straight lines, searching a second straight line section which is parallel to the track of the first short straight line and has the same length with the track of the first short straight line in other short straight lines, combining the first short straight line and the second straight line section in pairs and storing the combined first short straight line and second straight line section in a parallel line group;
and carrying out parallelogram combination on the straight lines in the parallel line group to obtain a suspected edge block diagram set.
4. The method of claim 1, wherein filtering the suspected edge frame set using a convolutional neural network to obtain a candidate edge frame set comprises:
for each suspected edge block diagram in the suspected edge block diagram set, converting the suspected edge block diagram into a picture vector;
repeating the following steps with the picture vector until the last pooling layer is reached: taking the picture vector as the input, performing feature extraction on the picture vector through a first convolution layer and outputting a first feature map; taking the first feature map as the input, down-sampling the first feature map in a first pooling layer and outputting a dimension-reduced second feature map; and determining the second feature map as the input of the next convolution layer;
and inputting the target feature map output by the last pooling layer into a classification model and outputting classification information of the suspected edge block diagram, wherein the classification information indicates whether the corresponding suspected edge block diagram is a candidate edge block diagram.
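The conv–pool loop and final classification can be sketched as follows (a toy NumPy version; the kernel values and the sigmoid-threshold "classification model" are placeholders, not the patent's trained network):

```python
import numpy as np

def conv(x, k):
    # weighted sum of kernel k at every sliding-window position (valid mode)
    kh, kw = k.shape
    return np.array([[np.sum(x[i:i + kh, j:j + kw] * k)
                      for j in range(x.shape[1] - kw + 1)]
                     for i in range(x.shape[0] - kh + 1)])

def pool(x, s=2):
    # max pooling: keep the largest feature value in each s x s window
    return np.array([[x[i:i + s, j:j + s].max()
                      for j in range(0, x.shape[1] - s + 1, s)]
                     for i in range(0, x.shape[0] - s + 1, s)])

def classify(picture_vector, kernels, threshold=0.5):
    """One convolution layer then one pooling layer per kernel, then a
    placeholder sigmoid-threshold head standing in for the claimed model."""
    fmap = picture_vector
    for k in kernels:                      # conv layer -> pooling layer, repeated
        fmap = pool(conv(fmap, k))
    score = float(1.0 / (1.0 + np.exp(-fmap.mean())))
    return score > threshold               # True: keep as candidate edge block diagram
```

Each loop iteration halves the spatial dimensions, so a few layers reduce the picture vector to a small target feature map before classification.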
5. The method of claim 4, wherein taking the picture vector as the input, performing feature extraction on the picture vector through the first convolution layer and outputting the first feature map comprises:
sequentially inputting the picture vector into the M sliding windows of the first convolution layer and outputting M first feature maps, wherein the first convolution layer comprises M sliding windows with different weights, and the following step is performed on the picture vector in each sliding window: multiplying the weights of the current sliding window by the gray values of the suspected edge block diagram and summing the products to obtain a feature value, where M is an integer greater than 1.
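The weighted-sum sliding window recited above might look like this in NumPy (the kernels stand in for the M differently-weighted windows; their values are purely illustrative):

```python
import numpy as np

def conv_layer(picture_vector, kernels):
    """Apply M differently-weighted sliding windows to the picture vector;
    each window yields one first feature map. At every window position the
    kernel weights are multiplied by the underlying gray values and summed,
    as recited in the claim."""
    maps = []
    for k in kernels:                       # M kernels -> M first feature maps
        kh, kw = k.shape
        h = picture_vector.shape[0] - kh + 1
        w = picture_vector.shape[1] - kw + 1
        fmap = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                window = picture_vector[i:i + kh, j:j + kw]
                fmap[i, j] = np.sum(window * k)   # weighted sum = feature value
        maps.append(fmap)
    return maps
```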
6. The method of claim 4, wherein taking the first feature map as the input, down-sampling the first feature map in the first pooling layer and outputting the dimension-reduced second feature map comprises:
inputting the M first feature maps into the M sliding windows of the first pooling layer and outputting M second feature maps, wherein the first pooling layer comprises M sliding windows with identical weights, and the following steps are performed on the M first feature maps in each sliding window: obtaining a first feature value of the first feature map in the current sliding window, comparing it with a second feature value of the second feature map in the previous sliding window, and determining the feature map with the larger feature value as the dimension-reduced second feature map, where M is an integer greater than 1.
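Read as per-window max pooling over the M feature maps, the pooling step can be sketched as follows (the window size and stride of 2 are assumptions):

```python
import numpy as np

def pool_layer(feature_maps, size=2):
    """Max pooling applied to each of the M first feature maps; every window
    keeps only its largest feature value, halving the spatial dimensions."""
    pooled = []
    for fmap in feature_maps:
        h, w = fmap.shape[0] // size, fmap.shape[1] // size
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = fmap[i * size:(i + 1) * size,
                                 j * size:(j + 1) * size].max()
        pooled.append(out)
    return pooled
```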
7. The method of claim 1, wherein filtering the candidate edge block diagram set according to a prior condition to obtain a plurality of target edge block diagrams comprises:
for each candidate edge block diagram in the candidate edge block diagram set, calculating label parameters of the candidate edge block diagram;
judging whether the label parameters meet a prior condition, wherein the prior condition comprises at least one of the following: a minimum width and height of the label, a maximum aspect ratio of the label, a minimum aspect ratio of the label, characters in the label, and a hue difference ratio between the overall hue of the label and that of the box body;
and if the prior condition is met, determining that the candidate edge block diagram is a target edge block diagram.
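The prior-condition check can be sketched as a simple predicate (all threshold values below are illustrative placeholders, not values from the patent):

```python
def meets_prior_conditions(params,
                           min_w=30, min_h=15,
                           max_aspect=8.0, min_aspect=1.2,
                           min_hue_diff=1.3):
    """Check a candidate's label parameters against the claimed prior
    conditions: minimum width/height, aspect-ratio bounds, presence of
    characters, and hue difference ratio. Thresholds are hypothetical."""
    return (params["width"] >= min_w
            and params["height"] >= min_h
            and min_aspect <= params["aspect"] <= max_aspect
            and params["has_text"]
            and params["hue_diff"] >= min_hue_diff)
```

Candidates that pass every enabled check are promoted to target edge block diagrams; the rest are discarded.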
8. The method of claim 7, wherein calculating the label parameters of the candidate edge block diagram comprises at least one of:
calculating the distance values between the parallel straight lines in the candidate edge block diagram, determining the transverse distance as the width value, the longitudinal distance as the height value, and the ratio of the transverse distance to the longitudinal distance as the aspect ratio, wherein the label parameters comprise: the width value, the height value and the aspect ratio;
determining, through optical character recognition (OCR), whether the candidate edge block diagram contains characters;
and calculating a first gray value of the candidate edge block diagram, calculating a second gray value of the packaging picture, and determining the ratio of the first gray value to the second gray value as the hue difference ratio.
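The parameter computations of this claim can be sketched as follows (the OCR step is omitted, and `label_parameters` and its arguments are hypothetical names — a real system would call an OCR engine such as Tesseract for the character check):

```python
import numpy as np

def label_parameters(frame_pixels, package_pixels, horiz_dist, vert_dist):
    """Derive the claimed label parameters from a candidate edge block diagram:
    the transverse distance becomes the width, the longitudinal distance the
    height, their ratio the aspect ratio, and the ratio of the mean gray value
    of the frame to that of the whole packaging picture the hue difference."""
    width, height = horiz_dist, vert_dist            # distances between parallel sides
    aspect = width / height                          # width-to-height ratio
    hue_diff = frame_pixels.mean() / package_pixels.mean()  # first / second gray value
    return {"width": width, "height": height, "aspect": aspect,
            "hue_diff": hue_diff}
```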
9. An apparatus for automatic modeling of tags, comprising:
an acquisition module, configured to acquire a suspected edge block diagram set from a packaging picture of the target box body;
a first filtering module, configured to filter the suspected edge block diagram set using a convolutional neural network to obtain a candidate edge block diagram set;
a second filtering module, configured to filter the candidate edge block diagram set according to a prior condition to obtain a plurality of target edge block diagrams, wherein the target edge block diagrams comprise label information of the target box body;
and a creating module, configured to create a label template of the target box body using the plurality of target edge block diagrams and the packaging picture, wherein the label template is used for locating the pasting position of the label information on the target box body.
10. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program, when run, performs the method steps of any one of claims 1 to 7.
11. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus; and wherein:
the memory is configured to store a computer program;
and the processor is configured to perform the method steps of any one of claims 1 to 7 by executing the program stored in the memory.
CN202210248818.XA 2022-03-14 2022-03-14 Method and device for automatic label modeling, storage medium and electronic equipment Pending CN114723936A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210248818.XA CN114723936A (en) 2022-03-14 2022-03-14 Method and device for automatic label modeling, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210248818.XA CN114723936A (en) 2022-03-14 2022-03-14 Method and device for automatic label modeling, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114723936A 2022-07-08

Family

ID=82237531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210248818.XA Pending CN114723936A (en) 2022-03-14 2022-03-14 Method and device for automatic label modeling, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114723936A (en)

Similar Documents

Publication Publication Date Title
CN109447169A (en) The training method of image processing method and its model, device and electronic system
CN111095296A (en) Classifying character strings using machine learning
CN110008956B (en) Invoice key information positioning method, invoice key information positioning device, computer equipment and storage medium
CN111178355B (en) Seal identification method, device and storage medium
CN111415106A (en) Truck loading rate identification method, device, equipment and storage medium
CN101281597A (en) Apparatus and method for on-line real time capturing and recognizing product package pattern identification information
CN106203539B (en) Method and device for identifying container number
CN111460927B (en) Method for extracting structured information of house property evidence image
CN111428682B (en) Express sorting method, device, equipment and storage medium
CN111738245A (en) Commodity identification management method, commodity identification management device, server and readable storage medium
CN110909743B (en) Book checking method and book checking system
CN115131283A (en) Defect detection and model training method, device, equipment and medium for target object
CN112883926B (en) Identification method and device for form medical images
CN107403179B (en) Registration method and device for article packaging information
CN111598076B (en) Method and device for detecting and processing date in label image
CN103886319A (en) Intelligent held board recognizing method based on machine vision
CN113505780A (en) Two-dimensional code-based intelligent detection maintenance method and equipment
CN113591850A (en) Two-stage trademark detection method based on computer vision robustness target detection
CN113496212A (en) Text recognition method and device for box-type structure and electronic equipment
CN112257506A (en) Fruit and vegetable size identification method and device, electronic equipment and computer readable medium
CN112560718A (en) Method and device for acquiring material information, storage medium and electronic device
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
CN114723936A (en) Method and device for automatic label modeling, storage medium and electronic equipment
CN110610177A (en) Training method of character recognition model, character recognition method and device
CN114240924A (en) Power grid equipment quality evaluation method based on digitization technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination