CN115330869A - Visual modeling method, device, equipment and storage medium - Google Patents

Visual modeling method, device, equipment and storage medium

Info

Publication number
CN115330869A
CN115330869A (application number CN202210966923.7A)
Authority
CN
China
Prior art keywords
determining, carton, template image, position information, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210966923.7A
Other languages
Chinese (zh)
Inventor
汪二虎
李飞
陈然然
吴海涛
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LCFC Hefei Electronics Technology Co Ltd
Original Assignee
LCFC Hefei Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LCFC Hefei Electronics Technology Co Ltd filed Critical LCFC Hefei Electronics Technology Co Ltd
Priority to CN202210966923.7A priority Critical patent/CN115330869A/en
Publication of CN115330869A publication Critical patent/CN115330869A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 - Determination of colour characteristics
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30144 - Printing quality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For And Details Of Packaging Control (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a visual modeling method, device, equipment, and storage medium, wherein the method comprises the following steps: acquiring a template image, wherein the template image comprises a printing icon; acquiring a carton type; determining position information of the printing icon according to the carton type; determining, based on the position information, feature information corresponding to the position information, wherein the feature information is used to represent the features of the printing icon; and generating a visual model according to the position information and the feature information. A visual model can thus be generated quickly from the carton type and the template image, the generated visual model matches various carton types, and production efficiency is improved.

Description

Visual modeling method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of visual modeling, and in particular, to a visual modeling method, apparatus, device, and storage medium.
Background
In industrial production, large numbers of packing boxes are used, and visual inspection is required at each stage of packing-box production to determine whether the printing on the boxes is qualified. Visual inspection requires a visual model corresponding to the packing box at each stage. Existing modeling methods are usually either manual modeling based on captured images or automatic modeling based on per-stage template files; both take a lot of time to prepare templates, and the many generated images or templates increase management cost.
Disclosure of Invention
The present disclosure provides a visual modeling method, apparatus, device, and storage medium to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a visual modeling method, the method comprising: acquiring a template image, wherein the template image comprises a printing icon; acquiring the type of a carton; determining the position information of the printing icon according to the carton type; determining feature information corresponding to the position information based on the position information, wherein the feature information is used for representing the features of the printed icon; and generating a visual model according to the position information and the characteristic information.
In an implementation manner, the obtained carton type is a printing carton, and correspondingly, determining the position information of the printing icon according to the carton type includes: binarizing the template image to obtain a binarized image; and determining first position information of the printing icon relative to the template image according to the binarized image.
In an implementation manner, the obtained carton type is a die-cut carton, and correspondingly, determining the position information of the printed icon according to the carton type includes: determining a die cutting line corresponding to the template image according to the template image; determining an effective printing area corresponding to the die cutting line according to the die cutting line; and determining second position information of the printed icon relative to the effective printing area according to the effective printing area.
In an implementation manner, the obtained carton type is a fitting carton, and correspondingly, determining the position information of the printed icon according to the carton type includes: determining a die cutting line corresponding to the template image according to the template image; determining an effective printing area corresponding to the die cutting line according to the die cutting line; determining a folding line corresponding to the template image according to the template image; cutting the effective printing area into a plurality of cut images according to the folding line; determining a folding direction corresponding to the folding line, and determining a position relation between the cut images according to the folding direction and the cut images; splicing the cut images based on the position relation to obtain an attached carton image; and determining third position information of the printing icon relative to the attached carton image according to the attached carton image.
In one implementation, determining a die cutting line corresponding to the template image according to the template image includes: obtaining line information in the template image, wherein the line information at least comprises the die cutting line; and extracting the die cutting line corresponding to a first color in the line information based on the first color corresponding to the die cutting line.
In one embodiment, determining a folding line corresponding to the template image according to the template image includes: obtaining line information in a template image, wherein the line information at least comprises a folding line; and extracting a folding line corresponding to a second color in the line information based on the second color corresponding to the folding line, wherein the second color is different from the first color.
In an embodiment, determining, based on the position information, the feature information corresponding to the position information includes: determining a feature extraction mode corresponding to the position information based on the position information and the printing icon; and extracting the feature information corresponding to the printing icon according to the feature extraction mode; the feature extraction modes at least comprise: a first feature extraction mode for extracting line features; and a second feature extraction mode for extracting texture features.
According to a second aspect of the present disclosure, there is provided a visual modeling apparatus comprising: an acquisition module, configured to acquire a template image, wherein the template image comprises a printing icon, and further configured to acquire the carton type; a determining module, configured to determine the position information of the printing icon according to the carton type, and further configured to determine, based on the position information, the feature information corresponding to the position information, wherein the feature information is used to represent the features of the printing icon; and a generating module, configured to generate a visual model according to the position information and the feature information.
In an implementation manner, the obtained carton type is a printing carton, and the determining module is further configured to binarize the template image to obtain a binarized image; and determining first position information of a printing icon relative to the template image according to the binary image.
In one embodiment, the obtained carton type is a die-cut carton, and correspondingly, the determining module is further configured to: determine a die cutting line corresponding to the template image according to the template image; determine an effective printing area corresponding to the die cutting line according to the die cutting line; and determine second position information of the printed icon relative to the effective printing area according to the effective printing area.
In an implementation manner, the obtained carton type is a fit carton, and correspondingly, the determining module is further configured to determine a die cutting line corresponding to the template image according to the template image; determining an effective printing area corresponding to the die cutting line according to the die cutting line; determining a folding line corresponding to the template image according to the template image; cutting the effective print area into a plurality of cut images according to the folding lines; determining a folding direction corresponding to the folding line according to the folding line, and determining a position relation between the cutting images according to the folding direction and the cutting images; splicing the cut images based on the position relation to obtain a joint carton image; and determining third position information of the printing icon relative to the attached carton image according to the attached carton image.
In an implementation manner, the determining module is further configured to obtain line information in the template image, where the line information at least includes a die cut line; and extracting a die cutting line corresponding to the first color in the line information based on the first color corresponding to the die cutting line.
In an implementation manner, the determining module is further configured to acquire line information in the template image, where the line information at least includes a folding line; and extracting a folding line corresponding to a second color in the line information based on the second color corresponding to the folding line, wherein the second color is different from the first color.
In an implementation manner, the determining module is further configured to determine, based on the position information and the print icon, a feature extraction mode corresponding to the position information, and to extract the feature information corresponding to the printing icon according to the feature extraction mode; the feature extraction modes at least comprise: a first feature extraction mode for extracting line features; and a second feature extraction mode for extracting texture features.
According to a third aspect of the present disclosure, there is provided an electronic apparatus, characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions, wherein the computer instructions are for causing the computer to perform the method according to the present disclosure.
A template image and a carton type are acquired, and the position information of the printing icon in the template image is determined according to the carton type; based on the position information, the feature information corresponding to the position information is determined, and a visual model is generated from the position information and the feature information. In this way a visual model can be generated quickly from the carton type and the template image, the generated visual model matches various carton types, and production efficiency is improved.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a first flowchart illustrating a first implementation of a visual modeling method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a second implementation flow of the visual modeling method according to the embodiment of the disclosure;
FIG. 3 shows a third flow chart of the implementation of the visual modeling method according to the embodiment of the present disclosure;
FIG. 4 shows a fourth flow chart for implementing the visual modeling method according to the embodiment of the disclosure;
FIG. 5 shows a fifth implementation flow diagram of a visual modeling method according to an embodiment of the disclosure;
FIG. 6 shows a schematic diagram of a visual modeling apparatus according to an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
FIG. 1 is a first flow chart illustrating a first implementation of a visual modeling method according to an embodiment of the present disclosure; FIG. 2 is a schematic diagram illustrating a second implementation flow of the visual modeling method according to the embodiment of the disclosure; please refer to fig. 1 and fig. 2;
according to a first aspect of the present disclosure, there is provided a visual modeling method, the method comprising: step 101, obtaining a template image, wherein the template image comprises a printing icon; 102, acquiring a carton type; 103, determining position information of a printing icon according to the carton type; 104, determining characteristic information corresponding to the position information based on the position information, wherein the characteristic information is used for representing the characteristics of the printed icon; and 105, generating a visual model according to the position information and the characteristic information.
A template image and a carton type are acquired, and the position information of the printing icon in the template image is determined according to the carton type; based on the position information, the feature information corresponding to the position information is determined, and a visual model is generated from the position information and the feature information. In this way a visual model can be generated quickly from the carton type and the template image, the generated visual model matches various carton types, and production efficiency is improved.
In steps 101-102 of the present disclosure, the template image refers to a template image used for printing in industrial production, and it includes information such as the printing icon, folding lines, and die cutting lines. The initial printed part is obtained by printing from the template image, and the required printed product, specifically the carton, is obtained by subsequent processing. The template image may be input in advance by a worker to accommodate flexible industrial production. The carton type refers to the type of the carton at its production stage and at least comprises: a printing carton, a die-cut carton, and a fitting (attached) carton, corresponding respectively to the printing, die cutting, and attaching processes.
In steps 103-104 of the present disclosure, the position information of the printing icon is determined according to the carton type; the position information refers to the position of the printing icon on the carton. Information such as the shape and size of the corresponding carton can be determined from the carton type; the position of the printing icon on the carton at the current stage is determined from its position in the template image, and the position information is generated. Feature extraction is then performed on the printing icon corresponding to the position information to obtain the feature information, which specifically represents the features of the printing icon. The feature may be an ORB feature, a SIFT feature, a color feature, a corner feature, a texture feature, a barcode feature, or the like. Further, extracting the features may include: determining a feature extraction mode corresponding to the position information based on the position information and the printing icon, and extracting the feature information corresponding to the printing icon according to that mode; the feature extraction modes at least comprise a first mode for extracting line features and a second mode for extracting texture features. Different extraction methods may be adopted for different features: for example, a barcode is identified based on icon line features, and the barcode information features are then extracted; simple-texture icons are distinguished from complex-texture icons based on texture features, Canny edge information is extracted for simple-texture icons, ORB features are extracted for complex textures, and so on.
Therefore, the method is suitable for visual detection of various products, and can improve the precision and accuracy of printing detection.
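As a concrete illustration, the mode selection described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the `icon` dictionary, its keys, and the 0.5 complexity cutoff are hypothetical stand-ins for however an icon's kind and texture complexity are actually represented.

```python
def choose_extraction_mode(icon):
    """Pick a feature-extraction mode for one printed icon.

    `icon` is a dict with hypothetical keys `kind` and
    `texture_complexity` (a value in [0, 1])."""
    if icon.get("kind") == "barcode":
        # First mode: line features, after which the barcode is decoded.
        return "line_features"
    if icon.get("texture_complexity", 0.0) < 0.5:
        # Simple texture: extract Canny edge information.
        return "canny_texture"
    # Complex texture: extract ORB descriptors.
    return "orb_features"
```

A caller would run this per icon at each position and record the chosen mode alongside the position information.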
In step 105, the position information and the feature information corresponding to it are merged and stored to obtain the visual model; when visual inspection is required, the corresponding visual model is selected according to the template image to inspect the captured picture. The inspection process specifically comprises: acquiring an original image, preprocessing it to obtain an image to be inspected, performing visual inspection on that image with the visual model, and judging the inspection as qualified when the feature information of the image to be inspected is consistent with the features of the visual model.
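The merge-and-compare logic of step 105 can be sketched as below. The `VisualModel` container and the exact-equality consistency check are illustrative assumptions; the disclosure fixes neither a storage format nor a consistency metric.

```python
from dataclasses import dataclass

@dataclass
class VisualModel:
    positions: list   # position information, one entry per printing icon
    features: list    # feature information aligned with positions

def build_model(positions, features):
    # Step 105: merge each position with its feature information
    # and store the pair as the visual model.
    if len(positions) != len(features):
        raise ValueError("each position needs matching feature information")
    return VisualModel(list(positions), list(features))

def inspect(model, extracted):
    # An image to be inspected is qualified when its extracted feature
    # information is consistent with the features stored in the model.
    return len(extracted) == len(model.features) and all(
        a == b for a, b in zip(model.features, extracted)
    )
```

In practice the comparison would be a similarity measure on descriptors rather than exact equality.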
FIG. 3 shows a third flow chart of the implementation of the visual modeling method according to the embodiment of the present disclosure; please refer to fig. 3;
in an implementation manner, the obtained carton type is a printing carton, and correspondingly, step 103 of determining the position information of the printing icon according to the carton type includes: step 201, binarizing the template image to obtain a binarized image; and step 202, determining first position information of the printing icon relative to the template image according to the binarized image.
In steps 201-202, a method of determining the position information when the carton type is a printing carton is provided. The template image is binarized to obtain a binarized image, specifically using the maximum between-class variance (Otsu) method; based on the template image, the position of the printing icon is then extracted with Blob analysis and morphological image processing and determined as the first position information. Features are extracted from the printing icon using the corresponding feature extraction method. A printing carton refers to a carton obtained by printing according to the printed image.
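A minimal sketch of steps 201-202, assuming a grayscale `uint8` template image with a dark icon on a light background; the single bounding box stands in for the fuller Blob analysis and morphological processing mentioned above.

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum between-class variance (Otsu) threshold for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                      # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))     # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def icon_position(gray):
    """First position information: bounding box (x0, y0, x1, y1) of the
    dark icon pixels, relative to the template image."""
    t = otsu_threshold(gray)
    ys, xs = np.nonzero(gray <= t)   # icon assumed darker than background
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Production code could equally call `cv2.threshold(..., cv2.THRESH_OTSU)` and connected-component analysis; the manual version above just keeps the sketch self-contained.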
FIG. 4 shows a fourth flow chart for implementing the visual modeling method according to the embodiment of the disclosure; refer to FIG. 4;
in an implementation, the obtained carton type is a die-cut carton, and correspondingly, step 103, determining the position information of the printed icon according to the carton type includes: step 301, determining a die cutting line corresponding to the template image according to the template image; step 302, determining an effective printing area corresponding to a die cutting line according to the die cutting line; step 303, determining second position information of the print icon relative to the effective print area according to the effective print area.
In steps 301-303, the die cutting line in the template image is obtained; the die cutting line refers to the line used for the die cutting process. Specifically, the die cutting line corresponds to a specified color, for example black, so its features can be acquired from the template image by color extraction: the line information in the template image is obtained, where the line information at least comprises the die cutting line, and the die cutting line corresponding to the first color is extracted based on that color. The line information further comprises a joint line, an inner folding line, an outer folding line, and the like; the first color may be black. Further, the die cutting line features may be used as feature information to generate the model information, and an offset detection algorithm may be applied to the image to be inspected according to the die cutting line features to detect whether the image shifted during printing; if an offset is detected, the image is judged unqualified. The maximum circumscribed rectangle corresponding to the die cutting line, parallel to the template image boundary, is then acquired; the area corresponding to this rectangle is the effective printing area, the remaining areas are redundant image areas, and the redundant areas are removed.
The second position information corresponding to the printing icon, i.e., its position within the effective printing area, is determined according to the effective printing area; the feature information corresponding to the die-cut carton is determined according to the second position information, and the model information is generated from the feature information and the position information.
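The die-cut-line steps can be sketched as below, assuming the first color is pure black and the die cutting line is the only black content in the image; the color tolerance is an illustrative choice.

```python
import numpy as np

def effective_print_area(rgb, line_color=(0, 0, 0), tol=10):
    """Locate the die cutting line by its assigned color (black here, by
    assumption) and return its maximum circumscribed rectangle
    (x0, y0, x1, y1), axis-aligned with the template image.
    Everything outside this rectangle is the redundant image area."""
    diff = np.abs(rgb.astype(int) - np.asarray(line_color)).sum(axis=-1)
    ys, xs = np.nonzero(diff <= tol)
    if xs.size == 0:
        raise ValueError("no die cutting line found in the given color")
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The returned rectangle would then be used both to crop the effective printing area and, at inspection time, as the reference for offset detection.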
FIG. 5 shows a fifth implementation flow diagram of a visual modeling method according to an embodiment of the disclosure; please refer to fig. 5;
in an implementation manner, the obtained carton type is a fitting carton, and correspondingly, step 103 of determining the position information of the print icon according to the carton type includes: step 401, determining a die cutting line corresponding to the template image according to the template image; step 402, determining an effective printing area corresponding to the die cutting line according to the die cutting line; step 403, determining a folding line corresponding to the template image according to the template image; step 404, cutting the effective printing area into a plurality of cut images according to the folding lines; step 405, determining a folding direction corresponding to the folding line, and determining the position relation between the cut images according to the folding direction and the cut images; step 406, splicing the cut images based on the position relation to obtain an attached carton image; step 407, determining third position information of the print icon relative to the attached carton image according to the attached carton image.
In steps 401-402, the template image is segmented according to the die cutting line to obtain an effective printing area and a redundant image area, and the redundant image area is removed; the die cutting line can be a maximum external rectangle which is extracted according to the color extraction and corresponds to the die cutting line so as to determine the position of the die cutting line, is parallel to the edge of the template image and is externally connected with the die cutting line according to the position of the die cutting line; the area corresponding to the maximum circumscribed rectangle is the effective printing area.
In steps 403-407, the folding line is determined from the template image: the line information in the template image is obtained, where it at least comprises the folding line, and the folding line corresponding to a second color, different from the first color, is extracted based on that color. The second color may include red and green; specifically, the folding lines include an inner folding line and an outer folding line, and the folding line of each color is determined by color extraction; for example, red corresponds to the inner folding line and green to the outer folding line, so the folding-line information can be obtained by identifying the positions of the red and green lines. The effective printing area is cut into a plurality of cut images along the folding lines, where each cut image represents a face of the folded carton. The folding direction of each folding line is judged from its color, the position relation between the cut images is determined, and the cut images are spliced according to that relation to acquire the image corresponding to the attached-carton state, i.e., the attached carton image; the printing icon is then identified in the attached carton image, and the third position information corresponding to it is acquired.
Specifically, the attached carton may refer to a carton with two attached faces; correspondingly, the number of cut images may be two or three. When there are two cut images, they correspond respectively to the front and back faces of the attached carton; when there are three, one cut image corresponds to the front face, the combined image of the other two forms the back face, and those two cut images lie on either side of the front-face image. The feature information corresponding to the third position information is acquired based on the third position information, and the visual model is generated from the third position information and the feature information; further, the feature information includes fitting-line features and folding-line features.
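The fold-line steps above can be sketched for the simple case of vertical fold lines only. The red/green color assignment follows the example in the text; the tolerance and the column-based line representation are illustrative assumptions.

```python
import numpy as np

def vertical_fold_columns(rgb, tol=40):
    """Classify vertical fold lines by color: red columns are taken as
    inner folds and green columns as outer folds."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    inner = (r >= 255 - tol) & (g <= tol) & (b <= tol)   # red pixels
    outer = (g >= 255 - tol) & (r <= tol) & (b <= tol)   # green pixels
    inner_cols = sorted(set(np.nonzero(inner)[1].tolist()))
    outer_cols = sorted(set(np.nonzero(outer)[1].tolist()))
    return inner_cols, outer_cols

def cut_images(area, fold_cols):
    """Cut the effective printing area into pieces at the fold columns;
    each piece corresponds to one face of the folded carton."""
    pieces, start = [], 0
    for c in sorted(fold_cols):
        pieces.append(area[:, start:c])
        start = c + 1
    pieces.append(area[:, start:])
    return pieces
```

A real template also has horizontal folds and uses the inner/outer distinction to decide the folding direction before stitching; this sketch covers only the segmentation step.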
FIG. 6 shows a schematic structural diagram of a visual modeling apparatus according to an embodiment of the present disclosure; please refer to fig. 6;
according to a second aspect of the present disclosure, there is provided a visual modeling apparatus comprising: an obtaining module 501, configured to obtain a template image, where the template image includes a print icon; the obtaining module 501 is further configured to obtain a carton type; a determining module 502, configured to determine position information of the print icon according to the carton type; the determining module 502 is further configured to determine, based on the position information, feature information corresponding to the position information, where the feature information is used to represent features of the printed icon; a generating module 503, configured to generate a visual model according to the position information and the feature information.
In an implementation manner, the obtained carton type is a printing carton, and correspondingly, the determining module 502 is further configured to binarize the template image to obtain a binarized image; first position information of the print icon relative to the template image is determined from the binarized image.
In an embodiment, the obtained carton type is a die-cut carton, and correspondingly, the determining module 502 is further configured to: determine a die cutting line corresponding to the template image according to the template image; determine an effective printing area corresponding to the die cutting line according to the die cutting line; and determine second position information of the print icon relative to the effective print area according to the effective print area.
In an implementation manner, the obtained carton type is an attached carton, and correspondingly, the determining module 502 is further configured to: determine a die cutting line corresponding to the template image according to the template image; determine an effective printing area corresponding to the die cutting line; determine a folding line corresponding to the template image; cut the effective printing area into a plurality of cut images according to the folding line; determine a folding direction corresponding to the folding line, and determine a position relation between the cut images according to the folding direction and the cut images; splice the cut images based on the position relation to obtain an attached carton image; and determine third position information of the printed icon relative to the attached carton image according to the attached carton image.
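The splicing step for the attached carton can be illustrated with a minimal sketch, assuming the position relation reduces to a left-to-right order of the two cut images that flank the front side; the function name and this simplification are assumptions, not part of the disclosure.

```python
# Illustrative sketch (not from the patent): the two cut images flanking the
# front side are concatenated side by side to reconstruct the back side,
# assuming the folding direction yields a simple left-to-right order.
import numpy as np

def stitch_back_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Join the two flanking cut images into one attached carton back-side image."""
    if left.shape[0] != right.shape[0]:
        raise ValueError("cut images must share the same height to be spliced")
    return np.hstack([left, right])
```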
In an implementation, the determining module 502 is further configured to obtain line information in the template image, where the line information at least includes a die cut line; and extracting the die cutting line corresponding to the first color in the line information based on the first color corresponding to the die cutting line.
In an implementation, the determining module 502 is further configured to obtain line information in the template image, where the line information at least includes a folding line; and extracting a folding line corresponding to a second color in the line information based on the second color corresponding to the folding line, wherein the second color is different from the first color.
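Both extraction steps above key on a line color, so a single color-matching sketch can illustrate them; the concrete RGB values assigned to the first and second colors below are assumptions (the disclosure only requires that the two colors differ).

```python
# Illustrative sketch (not from the patent): pixels whose RGB value matches a
# line's designated color are kept as a binary mask. Red stands in for the
# "first color" (die cutting line) and blue for the "second color" (folding
# line); the real colors are not specified by the disclosure.
import numpy as np

DIE_CUT_COLOR = (255, 0, 0)  # assumed first color, for die cutting lines
FOLD_COLOR = (0, 0, 255)     # assumed second color, for folding lines

def extract_lines(image: np.ndarray, color: tuple) -> np.ndarray:
    """Binary mask of pixels exactly matching the given (R, G, B) color."""
    return np.all(image == np.array(color, dtype=image.dtype), axis=-1).astype(np.uint8)
```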
In an implementation manner, the determining module 502 is further configured to: determine, based on the position information and the printed icon, a feature extraction manner corresponding to the position information; and extract feature information corresponding to the printed icon according to the feature extraction manner. The feature extraction manner includes at least a first feature extraction manner for extracting line features and a second feature extraction manner for extracting texture features.
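The selection between the two extraction manners can be sketched as a simple dispatch; the concrete descriptors used below (gradient fraction for line features, intensity variance for texture features) are illustrative stand-ins, since the disclosure does not specify how either manner is implemented.

```python
# Illustrative sketch (not from the patent): dispatch a region of the template
# image to one of two feature extraction manners and return a scalar descriptor.
import numpy as np

def line_features(region: np.ndarray) -> float:
    """Fraction of pixels with a strong horizontal intensity gradient."""
    grad = np.abs(np.diff(region.astype(np.int16), axis=1))
    return float((grad > 50).mean())

def texture_features(region: np.ndarray) -> float:
    """Intensity variance as a crude texture descriptor."""
    return float(region.astype(np.float64).var())

def extract(region: np.ndarray, manner: str) -> float:
    if manner == "line":
        return line_features(region)      # first feature extraction manner
    if manner == "texture":
        return texture_features(region)   # second feature extraction manner
    raise ValueError(f"unknown feature extraction manner: {manner}")
```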
According to embodiments of the present disclosure, an electronic device and a readable storage medium are also provided.
FIG. 7 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 600 comprises a computing unit 601, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the visual modeling method. For example, in some embodiments, the visual modeling method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the visual modeling method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the visual modeling method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; the present disclosure is not limited in this respect.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of visual modeling, the method comprising:
acquiring a template image, wherein the template image comprises a printing icon;
acquiring the type of a carton; determining the position information of the printing icon according to the carton type;
determining feature information corresponding to the position information based on the position information, wherein the feature information is used for representing the features of the printed icon;
and generating a visual model according to the position information and the characteristic information.
2. The method of claim 1, wherein the obtained carton type is a printed carton, and correspondingly, determining the position information of the printed icon according to the carton type comprises:
binarizing the template image to obtain a binarized image;
and determining first position information of the printed icon relative to the template image according to the binarized image.
3. The method of claim 1, wherein the obtained carton type is a die-cut carton, and correspondingly, determining the position information of the printed icon according to the carton type comprises:
according to the template image, determining a die cutting line corresponding to the template image;
determining an effective printing area corresponding to the die cutting line according to the die cutting line;
and determining second position information of the printed icon relative to the effective printing area according to the effective printing area.
4. The method of claim 1, wherein the obtained carton type is an attached carton, and correspondingly, determining the position information of the printed icon according to the carton type comprises:
determining a die cutting line corresponding to the template image according to the template image;
determining an effective printing area corresponding to the die cutting line according to the die cutting line;
determining a folding line corresponding to the template image according to the template image;
cutting the effective print area into a plurality of cut images according to the folding lines;
determining a folding direction corresponding to the folding line according to the folding line, and determining a position relation between the cutting images according to the folding direction and the cutting images;
splicing the cut images based on the position relation to obtain an attached carton image;
and determining third position information of the printing icon relative to the attached carton image according to the attached carton image.
5. The method according to claim 3 or 4, wherein determining, from the template image, die-cut lines corresponding to the template image comprises:
obtaining line information in a template image, wherein the line information at least comprises a die cutting line;
and extracting a die cutting line corresponding to the first color in the line information based on the first color corresponding to the die cutting line.
6. The method of claim 4, wherein determining, from the template image, a fold line corresponding to the template image comprises:
obtaining line information in a template image, wherein the line information at least comprises a folding line;
and extracting a folding line corresponding to a second color in the line information based on the second color corresponding to the folding line, wherein the second color is different from the first color.
7. The method of claim 1, wherein the determining feature information corresponding to the location information based on the location information comprises:
determining a feature extraction mode corresponding to the position information based on the position information and the printing icon;
extracting feature information corresponding to the printed icon according to the feature extraction mode;
the feature extraction mode at least comprises: a first feature extraction mode for extracting line features; and a second feature extraction mode for extracting texture features.
8. A visual modeling apparatus, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a template image which comprises a printing icon;
the acquisition module is also used for acquiring the type of the carton;
the determining module is used for determining the position information of the printing icon according to the carton type;
the determining module is further used for determining feature information corresponding to the position information based on the position information, and the feature information is used for representing the features of the printed icon;
and the generating module is used for generating a visual model according to the position information and the characteristic information.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of claims 1-7.
Application CN202210966923.7A (filed 2022-08-11): Visual modeling method, device, equipment and storage medium; published as CN115330869A; legal status: Pending.


Publications (1)

Publication Number Publication Date
CN115330869A 2022-11-11



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination