CN113172986A - Network layout generation method, production system, image processing method and device - Google Patents
- Publication number
- CN113172986A (application number CN202110327794.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- layers
- color
- trapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B41—PRINTING; LINING MACHINES; TYPEWRITERS; STAMPS
- B41C—PROCESSES FOR THE MANUFACTURE OR REPRODUCTION OF PRINTING SURFACES
- B41C1/00—Forme preparation
- B41C1/14—Forme preparation for stencil-printing or silk-screen printing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
Abstract
The application discloses a screen plate image generation method, a production system, an image processing method, and a device. The screen plate image generation method comprises the following steps: acquiring an image and production parameters; performing color separation on the image to obtain at least two color layers, where each color layer contains a subset of the pixels in the image; performing trapping on some of the at least two color layers so that trapping is added to each processed color layer; and generating a screen plate image for each color layer according to the production parameters, the color layers that underwent trapping, and the color layers that did not. The technical solution provided by the embodiments of the application removes the need, found in the related art, to separate image colors by manually tracing and matting in design software; it simplifies the generation of screen plate images and thereby helps improve the efficiency of producing screen printing plates.
Description
Technical Field
The present application relates to the field of computer technology, and in particular to a screen plate image generation method and system, and an image processing method and device.
Background
In garment printing production, when a pattern is printed by screen printing, the colors in the pattern must be separated into layers, and a print designer must spend considerable time preparing the screen plate drawings, including expanding or contracting the edges of the priming (bottom) layer, separating the colors into layers, and adding trapping where colors overlap.

For most garment factories, a print order arrives as a print image provided by the merchant, and color layering can only be achieved by a print designer manually tracing and matting the image in design software. Moreover, during actual printing the trapping may need to be modified again; the whole process is manual and time-consuming.
Disclosure of Invention
In view of the above, the present application is directed to a screen plate image generation method, a production system, and an electronic device that solve, or at least partially solve, the above problems.
Accordingly, in one embodiment of the present application, a screen plate image generation method is provided. The method comprises the following steps:
acquiring an image and production parameters;
performing color separation on the image to obtain at least two color layers, where each color layer contains a subset of the pixels in the image;
performing trapping on some of the at least two color layers so that trapping is added to each processed color layer;
and generating a screen plate image for each color layer according to the production parameters, the color layers that underwent trapping, and the color layers that did not.
In another embodiment of the present application, a production system is provided. The system comprises:
a client device for acquiring an image and production parameters; performing color separation on the image to obtain at least two color layers, where each color layer contains a subset of the pixels in the image; performing trapping on some of the at least two color layers so that trapping is added to each processed color layer; and generating a screen plate image for each color layer according to the production parameters, the color layers that underwent trapping, and the color layers that did not;
and a plate making device, in communication with the client device, for receiving the screen plate image for each color layer sent by the client device, and for processing a different silk screen according to each received screen plate image to obtain a plurality of screen printing plates for printing.
In yet another embodiment of the present application, a screen plate image generation method is provided. The method comprises the following steps:
acquiring an image;
dividing the image into a plurality of layers;
performing trapping on some of the layers so that trapping is added to each processed layer;
and generating a screen plate image for each layer according to the layers that underwent trapping and the layers that did not.
In yet another embodiment of the present application, a production system is provided. The production system includes:
a client device that acquires an image; divides the image into a plurality of layers; performs trapping on some of the layers so that trapping is added to each processed layer; and generates a screen plate image for each layer according to the layers that underwent trapping and the layers that did not;
and a plate making device, in communication with the client device, for receiving the screen plate image for each layer sent by the client device, and for processing a different silk screen according to each received screen plate image to obtain a plurality of screen printing plates for printing.
In yet another embodiment of the present application, an image processing method is provided. The image processing method comprises the following steps:
in response to an input operation triggered by a user through an interactive interface, acquiring an image input by the user;
dividing the image into a plurality of layers;
performing trapping on some of the layers so that trapping is added to each processed layer;
generating a screen plate image for each layer according to the layers that underwent trapping and the layers that did not;
and displaying the screen plate image for each layer.
In yet another embodiment of the present application, an electronic device is provided. The electronic device includes a memory and a processor, wherein
the memory is used to store a program;
the processor, coupled to the memory, is used to execute the program stored in the memory to:
acquire an image and production parameters;
perform color separation on the image to obtain at least two color layers, where each color layer contains a subset of the pixels in the image;
perform trapping on some of the at least two color layers so that trapping is added to each processed color layer;
and generate a screen plate image for each color layer according to the production parameters, the color layers that underwent trapping, and the color layers that did not.
In yet another embodiment of the present application, an electronic device is provided. The electronic device includes a memory and a processor, wherein
the memory is used to store a program;
the processor, coupled to the memory, is used to execute the program stored in the memory to:
acquire an image;
divide the image into a plurality of layers;
perform trapping on some of the layers so that trapping is added to each processed layer;
and generate a screen plate image for each layer according to the layers that underwent trapping and the layers that did not.
In yet another embodiment of the present application, an electronic device is provided. The electronic device includes a memory, a processor, and a display, wherein
the memory is used to store a program;
the processor, coupled to the memory, is used to execute the program stored in the memory to:
in response to an input operation triggered by a user through an interactive interface, acquire an image input by the user;
divide the image into a plurality of layers;
perform trapping on some of the layers so that trapping is added to each processed layer;
generate a screen plate image for each layer according to the layers that underwent trapping and the layers that did not;
and control the display to show the screen plate image for each layer.
In yet another embodiment of the present application, a plate making device is provided. The plate making device includes a memory, a processor, and an execution assembly, wherein
the memory is used to store a program;
the processor, coupled to the memory, is used to execute the program stored in the memory to:
acquire an image;
divide the image into a plurality of layers;
perform trapping on some of the layers so that trapping is added to each processed layer;
generate a screen plate image for each layer according to the layers that underwent trapping and the layers that did not;
and control the action of the execution assembly based on the screen plate image for each layer, so as to process different silk screens and obtain a plurality of screen printing plates for printing.
According to the technical solution provided by the embodiments of the application, color separation is performed on the image to obtain layers corresponding to different colors, which removes the need, found in the related art, to separate image colors by manually tracing and matting in design software, and improves the efficiency of generating the layers. In addition, the embodiments of the application automatically perform trapping on some of the at least two color layers, with the trapping superposed directly in each processed layer. Furthermore, a screen plate image meeting production requirements can be generated automatically for each color layer according to the production parameters, without manual adjustment or setting; the operation is simpler, the generation of screen plate images is simplified, and the efficiency of producing screen printing plates is improved.

According to another technical solution provided by the embodiments of the application, the image is divided into a plurality of layers without manual division; trapping is then performed on some of the layers so that trapping is added to each processed layer; a screen plate image is then generated for each layer. The whole process requires little manual participation and simplifies the generation of screen plate images.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a screen plate image generation method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of two color layers obtained by color separation of an image according to an embodiment of the present application, where one of the color layers has undergone trapping;
Fig. 3 shows a priming (bottom) layer of an image according to an embodiment of the present application;
Fig. 4 is a flow chart of a screen plate image generation method according to another embodiment of the present application;
Fig. 5a is a schematic view of an image according to an embodiment of the present application;
Fig. 5b is a schematic diagram of a plurality of color layers obtained by color separation of the image shown in Fig. 5a, and of a priming layer obtained from the outline of the image;
Fig. 5c is a schematic diagram of the screen plate image generated for each layer shown in Fig. 5b;
Fig. 6 is a block diagram of a production system according to an embodiment of the present application;
Fig. 7 is a flow chart of a screen plate image generation method according to yet another embodiment of the present application;
Fig. 8a is a flow chart of a screen plate image generation method according to yet another embodiment of the present application;
Fig. 8b is a diagram illustrating the user interface displayed after the method of Fig. 8a is employed;
Fig. 9 is a block diagram of a screen plate image generation apparatus according to an embodiment of the present application;
Fig. 10 is a block diagram of a screen plate image generation apparatus according to another embodiment of the present application;
Fig. 11 is a block diagram of a screen plate image generation apparatus according to yet another embodiment of the present application;
Fig. 12 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
At present, screen plate image production in the printing industry mainly concerns the garment printing industry and the product outer-packaging printing industry.

In the prior art, the software used to produce screen printing plates in the garment printing industry mainly has the following problems. One type of software, such as dedicated color separation software, must extract each color separately and then build the layers, and places high demands on pixel resolution; moreover, trapping on a layer must be drawn manually. Another type of software, such as CorelDRAW, requires a new layer to be created after manual matting; this software does offer automatic color separation, but again the trapping on each layer must be drawn manually.

The screen plate drawing software commonly used in the product outer-packaging printing industry can perform color separation directly on vector images, but for bitmaps it can only split the colors into the four process primaries C, M, Y, K. It also cannot automatically generate a priming layer at an arbitrary scale; to produce one, the edge of the priming image must be acquired from the pattern and then drawn outward (edge expansion) or erased inward (edge contraction) at the actual scale. In addition, by setting trap widths at the overlaps of different colors, this software can generate all the traps for the whole image at once, but the generated traps are all stored in a single layer, and the final screen plate image can only be obtained through further rounds of color separation and plate drawing. Since the screen plate image for outer packaging differs essentially from that for garment printing, such software cannot produce the screen plate images required for garment production in one step.
The present application provides the following embodiments to solve, or partially solve, the above problems. To help those skilled in the art better understand the technical solutions, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings.
Some of the flows described in the specification, claims, and figures of the present application include operations that occur in a particular order, but those operations may be performed out of the order in which they appear herein, or in parallel. Sequence numbers such as 101 and 102 merely distinguish the operations and do not themselves imply any order of execution. The flows may also include more or fewer operations, and the operations may be performed sequentially or in parallel. Note that the terms "first", "second", and so on herein distinguish different messages, devices, modules, etc.; they neither imply an order nor require that "first" and "second" be of different types. The embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the embodiments herein without creative effort fall within the protection scope of the present application.
Fig. 1 illustrates a flow chart of a screen plate image generation method according to an embodiment of the present application. The method is applicable to a client or to a plate making device. The client may be hardware with an embedded program integrated on a terminal device, application software installed on the terminal, or tool software embedded in the terminal device's operating system; the embodiments of the present application place no limitation on this. The terminal device may be any device such as a mobile phone, a tablet computer, a wearable device, or an AR device. The plate making device can automatically generate screen plate images and produce the corresponding screen printing plates from the images it generates. As shown in Fig. 1, the method includes:
101. acquiring an image and production parameters;
102. performing color separation on the image to obtain at least two color layers, where each color layer contains a subset of the pixels in the image;
103. performing trapping on some of the at least two color layers so that trapping is added to each processed color layer;
104. generating a screen plate image for each color layer according to the production parameters, the color layers that underwent trapping, and the color layers that did not.
The technical solution provided by this embodiment can be used in various printing industries, such as garment printing. When the method is applied in the garment printing industry, the production parameters may include, but are not limited to, at least one of a reduction or enlargement size, screen naming information, and the like. In actual printing, for example when the printing substrate is a garment fabric, after the corresponding pattern is printed on the fabric with screen printing plates produced from the screen plate images, the fabric must be dried at high temperature. Because of the fabric's material, different shrinkage rates after heating cause inconsistent deformation, and the screen plate image must then be adjusted to the actual conditions. Therefore, the production parameters may be set in advance; they may be set by staff who know the properties of the printing substrate beforehand, or they may be generated automatically by the executing entity of this embodiment based on the category of the printing substrate, and so on; no specific limitation is placed here.
In one implementation, step 101, "acquiring an image and production parameters", may include:
1011. displaying an interactive interface;
1012. in response to an input operation triggered by the user through the interactive interface, acquiring the image and production parameters input by the user.
In step 102, the pixels of the image belonging to the same color class may be clustered with a clustering algorithm to obtain the color layer for that class. That is, step 102, "performing color separation on the image to obtain at least two color layers", may include the following steps:
1021. acquiring the pixel information of the pixels in the image;
1022. clustering the pixels belonging to the same color class according to the pixel information of the pixels in the image, to obtain the at least two color layers.
For example, in a specific implementation, the user may input, through the above interactive interface, the color number information (e.g., Lab color numbers) for all the colors contained in the image. That is, step 1012 may specifically be: in response to an input operation triggered by the user through the interactive interface, acquiring the image input by the user, the color number information for the at least two colors contained in the image, and the production parameters.

Here, a Lab color number is the Lab value corresponding to a color. The Lab color model is based on human perception of color, and Lab values describe all the colors that a person with normal vision can see. The model consists of three components: the lightness L and the two color components a and b. L represents lightness, a represents the red-green axis, and b represents the blue-yellow axis. In addition, there is a one-to-one correspondence between Lab color numbers and the PANTONE color numbers used in printing; the PANTONE color card is an internationally used standard color reference.
Therefore, in a specific implementation, step 1022 may be implemented as follows:
acquiring the color number information for the at least two colors contained in the image;
and, based on the color number information for the at least two colors and the pixel information of the pixels in the image, performing clustered color separation of the pixels using a Lab color-space similarity algorithm to obtain the at least two color layers.
In a specific implementation, the formula for the Lab color-space similarity algorithm is:

ΔE = √(ΔL² + Δa² + Δb²)

where ΔE denotes the color difference, and ΔL, Δa, Δb denote the differences between the two colors in the respective components. The color attribute of each pixel can be determined by computing the ΔE value between that pixel and the color number information of each color input by the user. In this embodiment, the pixel information of each pixel of the image includes the Lab value of that pixel; denote the value for pixel 1 as Lab1. Suppose the image contains 3 colors, whose color number information is Lab01, Lab02, and Lab03. The values ΔE1, ΔE2, ΔE3 between Lab1 and Lab01, Lab02, Lab03 respectively can be computed, and the color number information giving the minimum of ΔE1, ΔE2, ΔE3 is taken as the color class to which pixel 1 belongs.
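The minimum-ΔE classification just described can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patent's code: `classify_pixels`, `delta_e`, and the toy palette are hypothetical names and data, and the ΔE used is the plain CIE76 Euclidean distance given by the formula above.

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in Lab space."""
    dL, da, db = lab1[0] - lab2[0], lab1[1] - lab2[1], lab1[2] - lab2[2]
    return (dL ** 2 + da ** 2 + db ** 2) ** 0.5

def classify_pixels(lab_image, palette):
    """Assign each pixel to the palette color with the smallest ΔE.

    lab_image: (H, W, 3) array of Lab values.
    palette:   list of (L, a, b) tuples entered by the user.
    Returns an (H, W) array of palette indices (one index per color layer).
    """
    pal = np.asarray(palette, dtype=float)               # (K, 3)
    diff = lab_image[:, :, None, :] - pal[None, None]    # (H, W, K, 3)
    de = np.sqrt((diff ** 2).sum(axis=-1))               # (H, W, K)
    return de.argmin(axis=-1)

# Toy 2x2 image containing (approximately) 3 user-specified colors.
img = np.array([[[50, 60, 30], [52, 58, 31]],
                [[30, -40, 20], [90, 0, 5]]], dtype=float)
palette = [(50, 60, 30), (30, -40, 20), (90, 0, 0)]
labels = classify_pixels(img, palette)   # one label per pixel -> color layers
```

Each label value then selects the subset of pixels that forms one color layer.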
Steps 1021 and 1022 presuppose that the image is a digital image from which the pixel information of each pixel can be extracted. For a paper design image, the user may use a photograph taken with a camera, a picture downloaded from the network, etc.; pixel information cannot be extracted directly from such a picture. For this case, the method provided by this embodiment may include:

recognizing the image to identify the color block regions it contains and the color number information for each color block region;

and generating the color layer for each color block region based on the identified color block regions and their color number information.

In a specific implementation, the image may be recognized with an image recognition technique, such as a machine learning algorithm (e.g., a neural network model), identifying not only the color number information of the colors in the image but also the outlines of the regions of the same color (i.e., the color block regions).
In step 103, trapping, also known as spread-and-choke, is mainly used to compensate for gaps between two adjacent colors caused by imprecise printing and misregistration. Corresponding traps therefore need to be added on some layers to avoid white leaks during printing. The trap width can be determined from the production parameters, such as the spread size. In this embodiment, one of two color layers that share a common boundary may undergo trapping; for example, the lighter-colored layer is processed, and the trap is added to that layer. Referring to the example shown in Fig. 2, image 4 is decomposed into two color layers, color layer 1 and color layer 2, where color layer 2 is the lighter of the two. Where color layer 2 shares a boundary with color layer 1, an outward spread is added so that trap 3 overlaps the pattern in color layer 1 (the trap is essentially the same color as color layer 2, but is darkened in Fig. 2 for visibility). The shape of trap 3 follows the shape of the boundary region where it lies, and the trap amount (such as the spread width) is related to the production parameters, the material properties of the printing substrate, the registration accuracy of the printing system, and so on.
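The spread just described, growing the lighter layer outward where it meets a darker one, can be pictured with binary layer masks. This is a minimal sketch under assumed conventions, not the patent's algorithm: `dilate` and `trap_spread` are hypothetical helpers, and the trap width here is simply the dilation radius in pixels.

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation by a (2r+1)x(2r+1) square element, without wraparound."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            src = mask[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            out[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)] |= src
    return out

def trap_spread(light, dark, width=1):
    """Trap pixels for the light layer: its dilation clipped to the dark layer."""
    return dilate(light, width) & dark

# Light layer on the left, dark layer on the right, sharing a vertical boundary.
light = np.zeros((4, 6), dtype=bool)
light[:, :3] = True
dark = ~light
trap = trap_spread(light, dark, width=1)   # a 1-pixel spread along the boundary
trapped_light = light | trap               # light layer with the trap added
```

The trap is stored directly in the processed (light) layer, matching the behavior described for Fig. 2.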
In step 104, after obtaining the color layers that underwent trapping and the color layers that did not, the screen plate image for each color layer may be generated based on the production parameters.

In generating the screen plate images, the production parameters serve to adjust the scale of each layer, for example to the size required by actual production. This embodiment places no particular limitation on how each color layer is turned into its screen plate image; the specific process is the same as in the prior art, to which reference may be made.
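Adjusting a layer's scale from the production parameters can be pictured as a plain rescale of the layer mask. The helper below is a hypothetical nearest-neighbor sketch, not the patent's procedure; real plate making would use proper resampling and physical units.

```python
import numpy as np

def scale_layer(layer, sx, sy):
    """Nearest-neighbor rescale of a layer mask by factors (sx, sy),
    e.g. enlarging a layer to compensate for fabric shrinkage after drying."""
    h, w = layer.shape
    nh, nw = round(h * sy), round(w * sx)
    rows = np.minimum((np.arange(nh) / sy).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / sx).astype(int), w - 1)
    return layer[np.ix_(rows, cols)]

layer = np.eye(4, dtype=bool)            # toy 4x4 layer mask
bigger = scale_layer(layer, 1.5, 1.5)    # enlarged to 6x6
```

A production parameter such as a 1.5x enlargement would be applied uniformly to every layer before plate output.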
According to the technical scheme provided by this embodiment, the image is subjected to color separation processing to obtain the layers corresponding to different colors, which solves the problem in the related art that color layering of an image can be realized only by manually tracing and matting with design software, and improves the generation efficiency of the layers; in addition, the embodiment of the application automatically performs trapping processing on part of the at least two color layers and superposes the traps directly in the processed color layers; furthermore, the screen layout corresponding to each color layer that meets the production requirements can be generated automatically according to the production parameters, without manual adjustment or setting, so the operation is simpler, the generation process of the screen layout is simplified, and the generation efficiency of the screen printing plate is improved.
The specific implementation of step 103 in this embodiment is further described below. That is, in an implementation solution, the step 103 "performing a trapping process on a part of the at least two color layers" in the above step may include:
1031. determining the layer relation of at least two color layers;
1032. according to the layer relation, a first type layer needing to be subjected to trapping and a second type layer needing not to be subjected to trapping are determined from the at least two color layers;
1033. determining a boundary area of the first type of layer to be subjected to trapping;
1034. and performing trapping compensation on the boundary area in the first type of layer according to the trapping size corresponding to the boundary area.
The layer relationship of the at least two color layers may be a relationship in which a common boundary exists and a relationship in which a common boundary does not exist. In specific implementation, the layer relationship between any two color layers can be determined based on a two-dimensional graphics algorithm, and when a common boundary exists between the two color layers, the common boundary area is determined, and the common boundary area is used as a boundary area of one color layer to be subjected to trapping.
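As a minimal sketch of the boundary determination described above (an illustrative interpretation, not the patent's actual algorithm), each color layer can be represented as a binary mask, and a pixel of one layer belongs to the common boundary if any of its 4-neighbours belongs to the other layer:

```python
def common_boundary(layer_a, layer_b):
    """Pixels of layer_a that touch layer_b (4-neighbourhood).
    An empty result means the two layers share no boundary,
    so no trapping is needed for this pair."""
    h, w = len(layer_a), len(layer_a[0])
    boundary = set()
    for y in range(h):
        for x in range(w):
            if not layer_a[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and layer_b[ny][nx]:
                    boundary.add((y, x))
                    break
    return boundary

# Two layers that adjoin along a vertical seam:
a = [[1, 1, 0, 0],
     [1, 1, 0, 0]]
b = [[0, 0, 1, 1],
     [0, 0, 1, 1]]
print(sorted(common_boundary(a, b)))  # [(0, 1), (1, 1)]
```

The returned pixel set is the boundary area of one layer on which a trap of the configured width would then be expanded.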
The trapping principle is to overlap the foreground color and the background color by spreading or choking, so that white edges between two different colors can be avoided. Generally, trapping is needed wherever at least two color blocks with a large color difference adjoin. A trap can also be understood as an extended color band. Adjacent color layers are overprinted by the spread-and-choke trapping technique: the size of the color block in one layer is unchanged, while the size of the color block in the other layer is changed. Specifically, spreading enlarges the foreground color object while the background color block is kept unchanged, so that the outer edge of the foreground block is overprinted on the background color. Choking keeps the foreground color block unchanged and shrinks the hollow part of the background color block, so that the foreground block is overprinted on the edge of the shrunken hollow. Whether to choke or to spread generally depends on the contrast between the foreground and background colors, and spreading usually follows these principles: spread the background color rather than the foreground, spread the light color rather than the dark, and spread halftone tints rather than solids. The reason is that background colors, light colors, and tints have less visual impact than foreground colors and dark colors; otherwise, one could easily perceive a change in the shape of the object.
In an implementation solution, the trap between two layers of different colors can be fixed in one of the color layers, for example, the trap between two layers of different colors can be fixed in a light color layer. That is, in step 1032, "determining a first type of layer that needs to be subjected to trapping and a second type of layer that does not need to be subjected to trapping from the at least two color layers according to the layer relationship," may specifically be:
10321. according to the layer relation, two target layers with a common boundary are found out from the at least two color layers;
10322. and determining a target image layer with light color as a first type of image layer needing to be subjected to trapping compensation, and determining a target image layer with dark color as a second type of image layer needing not to be subjected to trapping compensation.
More specifically, step 10322 may include the steps of:
acquiring brightness values corresponding to the two target layers respectively;
determining the target layer with the larger brightness value as a first type of layer to be subjected to trapping compensation;
and determining the target layer with the smaller brightness value as a second type of layer that does not need trapping.
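The comparison of brightness values above can be sketched as follows. The patent does not specify how brightness is computed; the Rec.601 luma weights used here are an assumption for illustration only:

```python
def luminance(rgb):
    """Approximate brightness of a layer colour (Rec.601 weights — an
    assumed formula; the patent only mentions 'brightness values')."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def choose_trap_layer(color_a, color_b):
    """Return 'a' or 'b', naming the lighter layer — the one that
    receives the trap (first type); the darker layer is left unchanged."""
    return 'a' if luminance(color_a) >= luminance(color_b) else 'b'

light_blue = (170, 200, 255)
dark_red = (120, 20, 20)
print(choose_trap_layer(light_blue, dark_red))  # a
```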
In practical application, after the two color layers are overprinted by trapping, sometimes an additional line that does not exist in the pattern is generated, and for this reason, the following steps are added in this embodiment:
105. obtaining the first type of layer after the trapping processing;
106. and smoothing the supplementary trapping in the first type of layer.
In this embodiment, the step of smoothing the trapping is to generate a gradient effect of the color of the trapping so as to weaken lines caused by the overprinting. In specific implementation, the color of the trapping can be smoothed by using a corresponding image processing method, wherein the smoothing process is not specifically limited in this embodiment, and may be implemented by referring to related contents in the prior art.
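The smoothing step is left open above; as one illustrative interpretation (a sketch, not the patent's method), the trap band can be given a linear colour gradient from the light layer's colour to the dark layer's colour across its width, so the overprinted edge does not read as a hard extra line:

```python
def trap_gradient(light_color, dark_color, width_px):
    """Linear colour ramp across the trap band, one RGB tuple per pixel
    of trap width; the ends match the two adjoining layer colours."""
    steps = []
    for i in range(width_px):
        t = i / max(width_px - 1, 1)
        steps.append(tuple(
            round(l + (d - l) * t) for l, d in zip(light_color, dark_color)
        ))
    return steps

band = trap_gradient((200, 200, 0), (0, 0, 100), 5)
print(band[0], band[-1])  # (200, 200, 0) (0, 0, 100)
```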
Since the color layers are split by pixel color clustering in this embodiment, the obtained color layers have jagged (mosaic) edges as shown in fig. 2. In most cases, patterns created by designers have smooth edges, apart from some designs deliberately made with a mosaic effect. Therefore, the present embodiment further adds the following step:
107. performing edge mosaic processing on the at least two color layers to obtain the at least two color layers with smooth edges, and executing the trapping processing step and the screen printing plate image generating step based on the at least two color layers with smooth edges.
This step 107 may precede steps 103 and 104 so that the color layers participating in steps 103 and 104 are both color layers having smooth edges after edge mosaic processing. In specific implementation, the edge mosaic problem existing after bitmap color separation can be improved based on two-dimensional graphics and image processing. Similarly, the specific implementation of the edge mosaic processing in this embodiment is not limited, and all schemes that can perform the edge mosaic processing in the prior art can be applied to this embodiment.
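Since the edge-smoothing scheme is deliberately unspecified above, here is one minimal stand-in (an assumption, not the patent's implementation): a single pass of a 3x3 majority filter over a layer's binary mask, which removes isolated stray pixels left by pixel clustering:

```python
def majority_smooth(mask):
    """One pass of a 3x3 majority filter over a binary mask — a simple
    stand-in for the unspecified edge mosaic smoothing step."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            ones = total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += 1
                        ones += mask[ny][nx]
            out[y][x] = 1 if ones * 2 > total else 0
    return out

# An isolated stray pixel produced by clustering is removed:
mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(majority_smooth(mask))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

A production implementation would more likely use contour extraction plus polygon approximation, but the filter illustrates where step 107 sits in the pipeline: before trapping (step 103) and screen layout generation (step 104).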
Further, the technical solution provided by the embodiment of the present application further includes:
108. generating a bottoming layer according to the image;
109. and determining a screen printing plate corresponding to the priming image layer based on the priming image layer and the production parameters.
In the above 108, "generating a priming image layer according to an image" may specifically include the following steps:
1081. acquiring outline information of a pattern in an image;
1082. and adjusting the outline information according to the production parameters to obtain a bottoming layer.
In 1081, the outline information of the pattern in the image can be obtained by using an image recognition technology, wherein the outline information at least includes the size and shape of the image.
In 1082, the height and width in the outline information are adjusted based on the production parameters entered by the user. Specifically, the outline information is adjusted according to the size of the reduction or enlargement in the production parameters.
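As a trivial sketch of step 1082 (assuming, for illustration, that the outline is characterised by its bounding width and height in millimetres, and that the enlargement parameter applies per side, as in the patent's 0.1 mm example):

```python
def adjust_outline(width_mm, height_mm, delta_mm):
    """Resize the pattern outline by the shrink/enlarge production
    parameter. Positive delta enlarges each side; negative shrinks."""
    return (width_mm + 2 * delta_mm, height_mm + 2 * delta_mm)

# A 50 x 30 mm outline enlarged by 0.1 mm on each side:
w, h = adjust_outline(50.0, 30.0, 0.1)
print(round(w, 2), round(h, 2))
```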
In 109, after the underlying layer is obtained, the mesh pattern corresponding to the underlying layer is generated. In one case, the underlying layer is directly used as the mesh map.
In the actual production process, annotation information, such as header information, footer information, and screen positioning symbols, needs to be marked on each screen layout. The annotation information is generated on the screen printing plate corresponding to each screen layout, and the screen printing plate is used in the printing process. Through the annotation information on a screen printing plate, a worker can determine the number of screen printing plates corresponding to a printed image, the image names, and the like; the printing equipment can also perform positioning operations through the screen positioning symbols on the screen printing plate. That is, the present embodiment further includes the following steps:
110. determining the margin of the net layout corresponding to the bottoming layer;
111. acquiring screen plate naming information;
112. and determining a header for the screen image corresponding to the priming image layer based on the screen naming information.
In the above step 110, the margins of the screen layout corresponding to the bottoming layer may be determined based on two-dimensional graphics and image processing in the prior art. The margins may include margins in four directions, that is, the distances from the pattern in the screen layout to its upper, lower, left, and right boundaries.
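A minimal sketch of the four-direction margin computation (assuming the layout is a binary mask in which nonzero pixels belong to the pattern):

```python
def layout_margins(mask):
    """Distances from the pattern's bounding box to the layout edges,
    in pixels, for all four directions."""
    h, w = len(mask), len(mask[0])
    rows = [y for y in range(h) if any(mask[y])]
    cols = [x for x in range(w) if any(mask[y][x] for y in range(h))]
    return {
        "top": rows[0],
        "bottom": h - 1 - rows[-1],
        "left": cols[0],
        "right": w - 1 - cols[-1],
    }

mask = [[0, 0, 0, 0, 0, 0],
        [0, 0, 1, 1, 1, 0],
        [0, 0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]
print(layout_margins(mask))  # {'top': 1, 'bottom': 2, 'left': 2, 'right': 1}
```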
In the above 111, the screen naming information may be included in the production parameters input by the user. The user can input by himself through the interactive interface. Screen naming information may include, but is not limited to: the type of the clothes, the serial number of the clothes, the printing position, the color of the layer corresponding to the screen image and the like.
In addition to marking a header on the screen layout corresponding to the bottoming layer, headers are also marked on the screen layouts corresponding to the at least two color layers. Meanwhile, footers can also be marked on the screen layout corresponding to the bottoming layer and the screen layouts corresponding to the at least two color layers. That is, the technical solution provided in the embodiment of the present application further includes:
113. determining page headers for the screen images corresponding to the at least two color image layers respectively according to the screen naming information;
114. and marking footers on the screen layouts corresponding to the at least two color layers and the screen layout corresponding to the priming layer.
Wherein, the header of the screen layout corresponding to each color layer can be the same as the header of the screen layout corresponding to the bottoming layer. For the footer, the process sequence of the screen printing plate corresponding to each screen layout in the printing process can be determined; for example, when printing, the base color of the pattern needs to be printed first using the screen printing plate of the layout corresponding to the priming layer, and then printing is performed on the base color using the screen printing plates of the layouts corresponding to each color layer. Therefore, the footer of the layout corresponding to the bottoming layer can be marked as 1, and the footers of the layouts corresponding to the remaining color layers are numbered from 2 onward according to the printing process sequence. Specifically, the footer may take a form such as 2/10, meaning that there are 10 screen layouts in total and the current layout is the second one.
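The footer-numbering convention above can be sketched as follows (the layer names are hypothetical; the bottoming layer is always printed first, so it gets footer 1):

```python
def make_footers(color_layer_names):
    """Number layouts in printing order using the 'n/total' footer style:
    the bottoming layer first, then the colour layers in process order."""
    layouts = ["bottoming"] + list(color_layer_names)
    total = len(layouts)
    return {name: f"{i}/{total}" for i, name in enumerate(layouts, start=1)}

print(make_footers(["red", "blue", "yellow"]))
# {'bottoming': '1/4', 'red': '2/4', 'blue': '3/4', 'yellow': '4/4'}
```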
After the header and the footer are arranged, the screen printing plate corresponding to the screen printing plate can be positioned. However, the headers and the footers are all characters, and deviation is easy to occur in the actual positioning process, so that screen positioning symbols can be marked on the screen image. That is, the technical solution provided in the embodiment of the present application further includes:
115. and according to the header and the footer of the screen layout, marking screen positioning symbols on the screen layout corresponding to the bottoming layer and the screen layouts corresponding to the at least two color layers, respectively.
The screen positioning symbols can be used for mounting the screen printing plate corresponding to the screen layout. Referring to FIG. 3, the screen layout 5 of a primed image layer carries a header 6, a footer 7, and screen positioning symbols 8.
The following describes the flow of the method provided in the embodiments of the present application with reference to specific examples.
Referring to the flowchart shown in fig. 4, the screen map generating method includes:
and S1, acquiring images and production parameters.
And S2, generating a bottoming layer according to the contour information of the image.
S3, carrying out color separation processing on the image to obtain at least two color layers;
s4, performing edge mosaic processing on the at least two color layers to smooth edges;
and S5, performing trapping process on part of the at least two color layers to add trapping in the corresponding color layers.
And S6, smoothing the trapping added in the color layer to prevent the appearance of unexpected lines in the pattern.
And S7, generating a network layout corresponding to each layer based on the production parameters.
Wherein, each map layer includes: the color layer added with the trap, the color layer without the trap and the priming layer.
For example, the user enters the image shown in FIG. 5a, as well as production parameters. Wherein, the production parameters include but are not limited to: the size of the reduction printing or the enlargement printing (such as 0.1mm), the name information of the screen printing plate, the color number information of all colors contained in the image, and the like. Then, based on the outline information of the image, a bottom layer 11 as shown in fig. 5b is obtained; the color separation processing in step S3 described above results in a plurality of color layers 12, 13, 14, 15, 16, 17, and 18. After the above processing of steps S4, S5, and S6, a plurality of reticles 21, 22, 23, 24, 25, 26, 27, and 28 as shown in fig. 5c are generated through step S7. The screen layouts are added with page headers, page footers and screen positioning symbols.
Further, the method provided by this embodiment may further include the following steps:
displaying the net layout corresponding to the bottoming layer and the net layout corresponding to each color layer;
and responding to a virtual trial printing instruction triggered by a user, starting a virtual trial printing module, and generating a virtual trial printing effect graph by the virtual trial printing module according to the net pattern corresponding to the bottoming layer and the net pattern corresponding to each color layer.
After the virtual trial printing module is started, it can generate a corresponding printing animation and display it on the user interface, which is intuitive and convenient for the user to watch. In addition to displaying the printing animation and the virtual trial printing effect graph, the virtual trial printing module may also provide error detection to detect errors in the respective screen layouts. After the virtual trial printing effect graph is generated, the user may trigger the error detection function to obtain the erroneous screen layout and the cause of the error (e.g., an outline error of the bottoming layer, a production parameter error, etc.).
The user can check the printing effect based on the virtual trial printing effect graph, and when the effect is unsatisfactory, can modify the screen layout corresponding to each layer or modify the production parameters. That is, the method provided in this embodiment may further include the following steps:
and responding to modification operation triggered by a user aiming at the screen image of one image layer, modifying the screen image and displaying the modified screen image.
Alternatively, trial printing equipment, such as plate making equipment and printing equipment, may be available on site. The user can send the screen layout corresponding to the bottoming layer and the screen layouts corresponding to each color layer to the plate making equipment, which processes different silk screens respectively to obtain a plurality of screen printing plates for printing. Then, the screen printing plates are placed on the corresponding placement positions of the printing equipment so as to print the patterns of the corresponding layers on the printing stock in sequence, and finally the patterns on the printing stock are fixed through processes such as drying. The user can adjust the screen layout and/or the production parameters of each layer in time based on the trial printing effect. Once the trial printing effect is satisfactory, the screen layout corresponding to the bottoming layer and the screen layouts corresponding to each color layer can be sent to equipment in an intelligent processing factory for large-scale production.
In addition, production parameters (also referred to as process parameters) for fabrics of different materials, such as the shrinkage of the fabric (which is related to the size of the reduction or enlargement set by the user), can be pre-configured locally as user-selectable options. For example, the corresponding production parameters can be generated automatically when the user selects the material of the printing stock, without the user having to input them manually.
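Such pre-configured, material-keyed parameters might be stored as a simple lookup table; the materials and values below are placeholders invented for illustration, not figures from the patent:

```python
# Hypothetical per-fabric process presets (values are illustrative only).
FABRIC_PRESETS = {
    "cotton":    {"shrinkage": 0.05, "enlarge_mm": 0.15},
    "polyester": {"shrinkage": 0.01, "enlarge_mm": 0.10},
}

def params_for(material):
    """Return the pre-configured production parameters for a fabric,
    so the user only picks a material instead of typing values."""
    return FABRIC_PRESETS[material]

print(params_for("cotton")["enlarge_mm"])
```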
Fig. 6 shows a schematic structural diagram of a production system provided in an embodiment of the present application. As shown in fig. 6, the production system includes:
a client device 201 for acquiring images and production parameters; carrying out color separation processing on the image to obtain at least two color layers, wherein one layer contains partial pixel points in the image; performing trapping processing on part of the at least two color layers to add trapping on the processed color layer; generating a network layout corresponding to each color layer according to the production parameters, the part of the at least two color layers which is subjected to the trapping treatment and the other part of the at least two color layers which is not subjected to the trapping treatment;
and the plate making device 202 is in communication connection with the client device 201, and is configured to receive the halftone maps corresponding to the color layers sent by the client device 201, and process different silk screens respectively according to the received halftone maps corresponding to the color layers to obtain a plurality of screen plates for printing.
Further, the system provided by the embodiment may further include a printing device. The printing equipment is provided with a first placing position for placing the at least two screen printing plates and a second placing position for placing a printing stock, and the first placing position and the second placing position are used for performing printing operation by using the screen printing plates in corresponding sequence one by one according to the printing sequence of the at least two screen printing plates so as to print the patterns of the corresponding image layers on the printing stock.
The client device 201 may be any device having computing capability, such as a desktop computer, a notebook computer, a smart phone, a smart wearable device, a tablet computer, and the like.
Here, it should be noted that: the client device in the production system provided in the above embodiments may implement the technical solutions described in the above method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the above method embodiments, and is not described herein again.
Fig. 7 is a flowchart illustrating a screen map generating method according to another embodiment of the present application. As shown in fig. 7, the method includes:
301. acquiring an image;
302. dividing the image layer to obtain a plurality of image layers;
303. performing trapping processing on part of the layers to add trapping on the processed layer;
304. and generating a network layout corresponding to each layer according to the part of the layers subjected to the trapping processing in the plurality of layers and the other part of the layers which are not subjected to the trapping processing in the plurality of layers.
The image layer division performed on the image in 302 above may be implemented by the following steps:
3021. determining a bottoming layer based on the contour information of the pattern in the image;
3022. carrying out color separation processing on the image to obtain a plurality of color layers; and one image layer contains part of pixel points in the image.
For the contents of the above steps 3021 and 3022, reference may be made to the corresponding contents in the above, and details are not described here.
The step 303 "generating a screen printing plate image corresponding to each image layer according to the part of the image layers after the trapping processing in the plurality of image layers and another part of the image layers not subjected to the trapping processing in the plurality of image layers" may specifically include:
3031. obtaining production parameters;
3032. generating a net layout corresponding to the backing layer according to the production parameters and the backing layer;
3033. performing trapping processing on part of the color layers in the plurality of color layers so as to add trapping on the processed color layers;
3034. and generating a network layout corresponding to each color layer according to the production parameters, the part of the color layers subjected to the trapping processing in the plurality of color layers and the other part of the color layers which are not subjected to the trapping processing in the plurality of color layers.
The production parameters described above may include, but are not limited to: size of reduction or enlargement, screen name information, color number information of all colors contained in the image, and the like.
Based on the above method embodiments, the present embodiment provides another production system. The structure of the production system is the same as that of fig. 6 described above, except for the function of the client device. Specifically:
the client device acquires an image; dividing the image layer to obtain a plurality of image layers; performing trapping processing on part of the layers to add trapping on the processed layer; generating a network layout corresponding to each layer according to the part of the layers subjected to the trapping processing in the plurality of layers and the other part of the layers not subjected to the trapping processing in the plurality of layers;
and the plate making equipment is in communication connection with the client equipment, and is used for receiving the screen layout corresponding to each layer sent by the client equipment and processing different silk screens respectively according to the received screen layouts to obtain a plurality of screen printing plates for printing.
Similarly, the present embodiment may also include a printing device. The printing equipment is provided with a first placing position for placing the at least two screen printing plates and a second placing position for placing a printing stock, and the first placing position and the second placing position are used for executing printing operation by utilizing the screen printing plates in corresponding sequence one by one according to the printing sequence of the at least two screen printing plates so as to print the patterns of corresponding image layers on the printing stock.
The technical scheme provided by each embodiment of the application aims to let a worker directly input images and related parameters, so that the computing device can automatically generate the screen layout of each layer meeting the actual production requirements, which improves the generation efficiency of the screen layout, simplifies the workload, and improves the production efficiency of the whole printing process. That is, the present application also provides an embodiment, such as the flowchart of the image processing method shown in fig. 8a and 8 b. As shown in fig. 8a and 8b, the method comprises:
401. responding to an input operation triggered by a user through an interactive interface, and acquiring an image input by the user;
402. dividing the image layer to obtain a plurality of image layers;
403. performing trapping processing on part of the layers to add trapping on the processed layer;
404. generating a network layout corresponding to each layer according to the part of the layers subjected to the trapping processing in the plurality of layers and the other part of the layers not subjected to the trapping processing in the plurality of layers;
405. and displaying the network map corresponding to each layer.
For the above steps 402-404, reference is made to the corresponding contents above, which are not described herein again.
Furthermore, the user can also modify the generated network layout and the like. Namely, the method provided by the embodiment may further include:
406. and in response to a modification operation triggered by the user for the screen layout corresponding to one layer, modifying the screen layout corresponding to that layer.
What needs to be added here is: the technical scheme provided by the above embodiments of the present application is applicable to the pattern printing scenes of various types of materials. Such as a garment printing scene, a product overwrap printing scene, and the like. The printing screen type that the technical scheme provided by each embodiment of the application can cover can include: offset printing, watermarking, foam printing, screen printing, digital printing, and the like.
After installing the application program implementing the technical scheme provided by the embodiment of the application, a user can open the application program, input images and related production parameters (certainly not limited to the parameters shown in fig. 8b) through the interactive interface, and then click the "generation" control to see the generated screen layout corresponding to each layer. For the user, after inputting the image and the production parameters, the screen layout corresponding to each layer required for production can be generated with one click. If a production parameter needs to be changed, the user only needs to return to the corresponding page, modify the parameter, and regenerate with one click to quickly complete the modification of the layout.
In fact, the technical solution provided by this embodiment can be used as an optimization module of an existing computer design tool (such as Photoshop, etc.), for example, a module capable of implementing the method function provided by this embodiment is loaded in the existing computer design tool as an additional module. When the user needs to use, the module for realizing the function of the method provided by the embodiment is loaded in the calculation design tool, so that a designer does not need to develop a new application program, and the design difficulty is greatly simplified.
Fig. 9 is a schematic structural diagram of a screen map generating apparatus according to an embodiment of the present application. The screen image generating device includes: an acquisition module 31, a color separation module 32, a trapping module 33, and a generation module 34. The obtaining module 31 is used for obtaining images and production parameters. The color separation module 32 is configured to perform color separation processing on the image to obtain at least two color layers; and one image layer contains part of pixel points in the image. The trapping module 33 is configured to perform a trapping process on a part of the at least two color layers, so as to add a trap on the processed color layer. The generating module 34 is configured to generate a mesh map corresponding to each color layer according to the production parameter, the part of the at least two color layers that is subjected to the trapping processing, and another part of the at least two color layers that is not subjected to the trapping processing.
Further, when the color separation module 32 performs color separation processing on the image to obtain at least two color layers, it is specifically configured to:
acquiring pixel information of pixel points in the image; and clustering the pixels belonging to the same color classification according to the pixel information of the pixels in the image to obtain the at least two color layers.
Further, the color separation module 32 is configured to cluster the pixels belonging to the same color classification according to the pixel information of the pixels in the image, and when obtaining the at least two color layers, specifically:
acquiring color number information corresponding to at least two colors contained in the image; and based on the color number information corresponding to the at least two colors and the pixel information of the pixel points in the image, performing clustering color separation on the pixel points in the image by using a Lab color space similarity algorithm to obtain at least two color layers.
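The clustering color separation above can be sketched as a nearest-colour assignment: each pixel is attached to the closest of the user-supplied colour numbers. The real scheme measures similarity in Lab space; the plain Euclidean distance on the given colour triples below is a stand-in for that conversion, and the palette names are hypothetical:

```python
def separate_colors(pixels, palette):
    """Assign each pixel (by index) to the nearest palette colour.
    Euclidean distance stands in for the Lab similarity measure."""
    layers = {name: [] for name in palette}
    for i, px in enumerate(pixels):
        nearest = min(
            palette,
            key=lambda name: sum((a - b) ** 2 for a, b in zip(px, palette[name])),
        )
        layers[nearest].append(i)
    return layers

palette = {"red": (200, 30, 30), "blue": (30, 30, 200)}
pixels = [(190, 40, 35), (25, 20, 210), (205, 25, 28)]
print(separate_colors(pixels, palette))  # {'red': [0, 2], 'blue': [1]}
```

Each resulting index list corresponds to one color layer containing part of the pixel points in the image, as step 102/3022 describes.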
Further, the color separation module 32 is specifically configured to display an interactive interface when color number information corresponding to at least two colors included in the image is acquired; and responding to an input operation triggered by a user through the interactive interface, and acquiring the image input by the user, color number information corresponding to at least two colors contained in the image and the production parameters.
Further, when the trapping module 33 performs trapping processing on a part of the at least two color layers, the trapping module is specifically configured to:
determining the layer relation of the at least two color layers; according to the layer relation, a first type layer needing to be subjected to trapping and a second type layer needing not to be subjected to trapping are determined from the at least two color layers; determining a boundary area of the first type of layer to be subjected to trapping; and performing trapping compensation treatment on the boundary area in the first type of layer according to the trapping size corresponding to the trapping area.
Further, when the trapping module 33 determines a first type of layer that needs to be subjected to trapping and a second type of layer that does not need to be subjected to trapping from the at least two color layers according to the layer relationship, the trapping module is specifically configured to:
finding, according to the layer relationship, two target layers having a common boundary among the at least two color layers; and determining the lighter-colored target layer as the first type of layer that needs trapping compensation, and the darker-colored target layer as the second type of layer that does not.
Further, when determining, of the two target layers, the lighter-colored one as the first type of layer to be trapped and the darker-colored one as the second type of layer not to be trapped, the trapping module 33 is specifically configured to:
acquire the luminance values corresponding to the two target layers respectively; determine the target layer with the larger luminance value as the first type of layer to be subjected to trapping compensation; and determine the target layer with the smaller luminance value as the second type of layer that requires no trapping.
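The spread-the-lighter-color rule can be sketched as follows. The patent does not specify a luminance formula or a dilation method; Rec. 709 luma weights and a square dilation built from `np.roll` (which wraps at image borders — acceptable for a sketch, not for production) are assumptions here.

```python
import numpy as np

def dilate(mask, radius):
    """Binary dilation with a square structuring element via array shifts.
    Note: np.roll wraps around the image border; pad real inputs first."""
    out = mask.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def trap(light_mask, dark_mask, trap_px=1):
    """Spread the lighter layer under the darker one along their shared
    boundary; the dark layer is left untouched."""
    # Grow the light layer, but only into pixels covered by the dark layer,
    # so the overlap (the trap) hides small registration errors on press.
    grown = dilate(light_mask, trap_px)
    return light_mask | (grown & dark_mask)

def choose_light_dark(mask_a, color_a, mask_b, color_b):
    """Pick which of two layers is trapped: the layer whose colour has the
    higher luminance (Rec. 709 weights) is the 'light' layer that spreads."""
    def luma(c):
        r, g, b = c
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    if luma(color_a) >= luma(color_b):
        return (mask_a, mask_b)
    return (mask_b, mask_a)
```

Spreading the lighter color is the conventional choice because a sliver of light ink overlapping a dark area is far less visible than the reverse.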
Further, the obtaining module 31 is further configured to obtain the first type of layer after the trapping processing, and the trapping module 33 is further configured to smooth the trap areas added in the first type of layer.
Further, the apparatus provided in this embodiment may further include an edge processing module. The edge processing module is configured to perform edge de-jagging processing on the at least two color layers to obtain the at least two color layers with smooth edges, and to execute the trapping processing step and the screen layout generating step based on the at least two color layers with smooth edges.
Further, in the apparatus provided in this embodiment, the generating module 34 is further configured to: generate a priming layer according to the image; and determine a screen layout corresponding to the priming layer based on the priming layer and the production parameters.
Still further, when generating the priming layer according to the image, the generating module 34 is specifically configured to: acquire outline information of the pattern in the image; and adjust the outline information according to the production parameters to obtain the priming layer.
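One way to build the priming (underbase) layer from the pattern outline is sketched below: mask every pixel that differs from a known background color, then optionally choke the mask inward so the underbase never peeks out past the colors printed over it. The `background` argument, the tolerance, and the 4-neighbour erosion are assumptions; the patent only says the contour information is "adjusted according to the production parameters".

```python
import numpy as np

def priming_layer(image, background, tol=1e-6, choke_px=0):
    """Build an underbase ('priming') mask covering every pattern pixel,
    optionally choked inward by choke_px pixels."""
    diff = np.abs(np.asarray(image, float) - np.asarray(background, float))
    mask = diff.max(axis=-1) > tol            # any channel differs from background
    for _ in range(choke_px):                 # erode by 1 px per iteration
        shrunk = mask.copy()
        # 4-neighbour erosion; out-of-bounds neighbours are treated as True,
        # so pixels on the image border are only constrained by in-bounds ones.
        shrunk[1:, :] &= mask[:-1, :]
        shrunk[:-1, :] &= mask[1:, :]
        shrunk[:, 1:] &= mask[:, :-1]
        shrunk[:, :-1] &= mask[:, 1:]
        mask = shrunk
    return mask
```

On a white substrate the priming layer is typically unnecessary; on dark textiles it is printed first so the color layers stay vivid.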
Further, in the apparatus provided in this embodiment, the generating module 34 is further configured to:
determining the margins of the screen layout corresponding to the priming layer; acquiring screen plate naming information; and determining a header for the screen layout corresponding to the priming layer based on the screen plate naming information.
Further, the generating module 34 is further configured to:
determining headers for the screen layouts corresponding to the at least two color layers respectively according to the screen plate naming information; and marking footers on the screen layouts corresponding to the at least two color layers and on the screen layout corresponding to the priming layer.
Further, the generating module 34 is further configured to: mark screen positioning symbols, according to the headers and footers, on the screen layout corresponding to the priming layer and on the screen layouts corresponding to the at least two color layers respectively.
Further, the production parameters include at least one of: the reduction or enlargement size for printing, the screen plate naming information, and the color number information of all colors in the image.
Here, it should be noted that: the screen layout generating apparatus provided in the above embodiment may implement the technical solutions described in the above method embodiments; for the specific implementation principle of each module or unit, reference may be made to the corresponding content in the above method embodiments, which is not repeated here.
Fig. 10 is a schematic structural diagram of a screen layout generating apparatus according to an embodiment of the present application. As shown in fig. 10, the apparatus includes: an acquisition module 41, a dividing module 42, a trapping module 43, and a generating module 44. The acquisition module 41 is configured to acquire an image. The dividing module 42 is configured to perform layer division on the image to obtain a plurality of layers. The trapping module 43 is configured to perform trapping processing on part of the layers, so as to add traps on the processed layers. The generating module 44 is configured to generate a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing.
Further, when the dividing module 42 performs layer division on the image to obtain a plurality of layers, the dividing module is specifically configured to:
determine a priming layer based on the contour information of the pattern in the image; and perform color separation processing on the image to obtain a plurality of color layers, where one layer contains part of the pixel points in the image.
Further, when generating the screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part not subjected to the trapping processing, the generating module is specifically configured to:
obtain production parameters; generate a screen layout corresponding to the priming layer according to the production parameters and the priming layer; perform trapping processing on part of the plurality of color layers so as to add traps on the processed color layers; and generate a screen layout corresponding to each color layer according to the production parameters, the part of the color layers subjected to the trapping processing, and the other part of the color layers not subjected to the trapping processing.
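Applying the reduce/enlarge production parameter when rendering a layer's screen layout can be sketched as below. Nearest-neighbour resampling is an assumption; the patent only names "the size of the reduction printing or the enlargement printing" as a parameter.

```python
import numpy as np

def make_plate(mask, scale=1.0):
    """Render one screen-layout bitmap from a boolean layer mask, applying
    the reduce/enlarge production parameter by nearest-neighbour resampling."""
    h, w = mask.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # Map each output pixel back to its nearest source pixel.
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return mask[np.ix_(ys, xs)]
```

In the flow above, this step would run once per layer: once for the priming layer and once for each (trapped or untrapped) color layer, each output becoming one screen layout.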
Further, the production parameters include the reduction or enlargement size for printing, the screen plate naming information, and the color number information of all colors contained in the image.
Here, it should be noted that: the screen layout generating apparatus provided in the above embodiment may implement the technical solutions described in the above method embodiments; for the specific implementation principle of each module or unit, reference may be made to the corresponding content in the above method embodiments, which is not repeated here.
Fig. 11 shows a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 11, the image processing apparatus includes: an acquisition module 51, a dividing module 52, a trapping module 53, a generating module 54 and a display module 55. The acquisition module 51 is configured to acquire, in response to an input operation triggered by a user through an interactive interface, the image input by the user. The dividing module 52 is configured to perform layer division on the image to obtain a plurality of layers. The trapping module 53 is configured to perform trapping processing on part of the layers, so as to add traps on the processed layers. The generating module 54 is configured to generate a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part not subjected to the trapping processing. The display module 55 is configured to display the screen layout corresponding to each layer.
Further, the apparatus provided in this embodiment may further include a modification module. The modification module is configured to, in response to a modification operation triggered by a user on the screen layout corresponding to one of the layers, modify that screen layout accordingly.
Here, it should be noted that: the image processing apparatus provided in the foregoing embodiment may implement the technical solutions described in the foregoing corresponding method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, and is not described herein again.
Fig. 12 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device comprises: a memory 61 and a processor 62; wherein,
a memory 61 for storing a program;
a processor 62, coupled to the memory 61, for executing programs stored in the memory 61 for:
acquiring an image and production parameters;
carrying out color separation processing on the image to obtain at least two color layers, where one layer contains part of the pixel points in the image;
performing trapping processing on part of the at least two color layers to add traps on the processed color layers;
and generating a screen layout corresponding to each color layer according to the production parameters, the part of the at least two color layers subjected to the trapping processing, and the other part of the at least two color layers not subjected to the trapping processing.
The memory 61 described above may be configured to store various other data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device. The memory 61 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The processor 62 may also implement other functions besides the above functions when executing the program in the memory 61, and specifically refer to the description of the foregoing embodiments.
Further, as shown in fig. 12, the electronic device also includes: a communication component 63, a display 64, a power component 65, an audio component 66, and the like. Only some components are schematically shown in fig. 12; this does not mean that the electronic device includes only the components shown there.
Another embodiment of the present application provides an electronic device, which has a structure similar to that of fig. 12. Specifically, the electronic device includes: a memory and a processor; wherein,
a memory for storing a program;
a processor, coupled to the memory, for executing the program stored in the memory to:
acquiring an image;
performing layer division on the image to obtain a plurality of layers;
performing trapping processing on part of the layers to add traps on the processed layers;
and generating a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing.
The memory may be configured to store various other data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device. The memory may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
When the processor executes the program in the memory, the processor may implement other functions in addition to the above functions, which may be specifically referred to the description of the foregoing embodiments.
The present application also provides an electronic device having a structure similar to that of fig. 12 above. The electronic device comprises a memory, a processor and a display; wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
responding to an input operation triggered by a user through an interactive interface, and acquiring an image input by the user;
performing layer division on the image to obtain a plurality of layers;
performing trapping processing on part of the layers to add traps on the processed layers;
generating a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing;
and controlling the display to display the screen layout corresponding to each layer.
When the processor executes the program in the memory, the processor may implement other functions in addition to the above functions, which may be specifically referred to the description of the foregoing embodiments.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing a computer program; when executed by a computer, the computer program can implement the steps or functions of the screen layout generation methods and the image processing method provided in the foregoing embodiments.
Further, this embodiment also provides a plate making apparatus. The plate making apparatus includes: a memory, a processor, and an execution component; wherein the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
acquiring an image;
performing layer division on the image to obtain a plurality of layers;
performing trapping processing on part of the layers to add traps on the processed layers;
generating a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing;
and controlling the action of the execution component based on the screen layout corresponding to each layer, so as to process different silk screens respectively and obtain a plurality of screen printing plates for printing.
When the processor executes the program in the memory, the processor may implement other functions in addition to the above functions, which may be specifically referred to the description of the foregoing embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above-described technical solutions and/or portions thereof that contribute to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein (including but not limited to disk storage, CD-ROM, optical storage, etc.).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (16)
1. A screen layout generation method, characterized by comprising the following steps:
acquiring an image and production parameters;
carrying out color separation processing on the image to obtain at least two color layers; one layer contains part of pixel points in the image;
performing trapping processing on part of the at least two color layers to add trapping on the processed color layer;
and generating a screen layout corresponding to each color layer according to the production parameters, the part of the at least two color layers subjected to the trapping processing, and the other part of the at least two color layers not subjected to the trapping processing.
2. The method of claim 1, wherein the color separating the image to obtain at least two color layers comprises:
acquiring pixel information of pixel points in the image;
and clustering the pixels belonging to the same color classification according to the pixel information of the pixels in the image to obtain the at least two color layers.
3. The method according to claim 2, wherein clustering the pixel points belonging to the same color classification according to the pixel information of the pixel points in the image to obtain the at least two color layers comprises:
acquiring color number information corresponding to at least two colors contained in the image;
and based on the color number information corresponding to the at least two colors and the pixel information of the pixel points in the image, performing clustering color separation on the pixel points in the image by using a Lab color space similarity algorithm to obtain at least two color layers.
4. The method according to any of claims 1 to 3, wherein the step of performing a trapping process on a part of the at least two color layers comprises:
determining the layer relation of the at least two color layers;
according to the layer relation, a first type layer needing to be subjected to trapping and a second type layer needing not to be subjected to trapping are determined from the at least two color layers;
determining a boundary area of the first type of layer to be subjected to trapping;
and performing trapping compensation on the boundary area in the first type of layer according to the trap size corresponding to the boundary area.
5. The method according to claim 4, wherein determining, from the at least two color layers according to the layer relationship, a first type of layer that needs trapping and a second type of layer that does not comprises:
according to the layer relation, two target layers with a common boundary are found out from the at least two color layers;
and determining the lighter-colored target layer as the first type of layer that needs trapping compensation, and the darker-colored target layer as the second type of layer that does not.
6. The method of claim 4, further comprising:
obtaining the first type of layer after the trapping processing; smoothing the traps added in the first type of layer;
performing edge de-jagging processing on the at least two color layers to obtain the at least two color layers with smooth edges, and executing the trapping processing step and the screen layout generating step based on the at least two color layers with smooth edges.
7. The method of any of claims 1 to 3, further comprising:
acquiring outline information of a pattern in the image;
adjusting the outline information according to the production parameters to obtain a priming layer;
and determining a screen layout corresponding to the priming layer based on the priming layer and the production parameters.
8. The method of claim 7, further comprising:
determining the margins of the screen layout corresponding to the priming layer;
acquiring screen plate naming information;
determining a header for the screen layout corresponding to the priming layer based on the screen plate naming information;
determining headers for the screen layouts corresponding to the at least two color layers respectively according to the screen plate naming information;
marking footers on the screen layouts corresponding to the at least two color layers and on the screen layout corresponding to the priming layer;
and marking screen positioning symbols, according to the headers and footers, on the screen layout corresponding to the priming layer and on the screen layouts corresponding to the at least two color layers respectively.
9. A production system, comprising:
the client device, configured to acquire an image and production parameters; carry out color separation processing on the image to obtain at least two color layers, where one layer contains part of the pixel points in the image; perform trapping processing on part of the at least two color layers to add traps on the processed color layers; and generate a screen layout corresponding to each color layer according to the production parameters, the part of the at least two color layers subjected to the trapping processing, and the other part not subjected to the trapping processing;
and the plate making equipment, in communication connection with the client device, configured to receive the screen layout corresponding to each color layer sent by the client device and to process different silk screens respectively according to the received screen layouts, so as to obtain a plurality of screen printing plates for printing.
10. The system of claim 9, further comprising:
the printing equipment, which is provided with a first placing position for the screen printing plates and a second placing position for a printing stock, and which is configured to perform printing operations with the screen printing plates one by one according to their printing sequence, so as to print the pattern of the corresponding layer onto the printing stock.
11. A screen layout generation method, characterized by comprising the following steps:
acquiring an image;
performing layer division on the image to obtain a plurality of layers;
performing trapping processing on part of the layers to add traps on the processed layers;
and generating a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing.
12. The method according to claim 11, wherein performing layer division on the image to obtain the plurality of layers comprises:
determining a priming layer based on the contour information of the pattern in the image;
and carrying out color separation processing on the image to obtain a plurality of color layers, where one layer contains part of the pixel points in the image.
13. A production system, comprising:
the client device, configured to acquire an image; perform layer division on the image to obtain a plurality of layers; perform trapping processing on part of the layers to add traps on the processed layers; and generate a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part not subjected to the trapping processing;
and the plate making equipment, in communication connection with the client device, configured to receive the screen layout corresponding to each layer sent by the client device and to process different silk screens respectively according to the received screen layouts, so as to obtain a plurality of screen printing plates for printing.
14. An image processing method, comprising:
responding to an input operation triggered by a user through an interactive interface, and acquiring an image input by the user;
performing layer division on the image to obtain a plurality of layers;
performing trapping processing on part of the layers to add traps on the processed layers;
generating a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing;
and displaying the screen layout corresponding to each layer.
15. An electronic device, comprising: a memory and a processor; wherein,
the memory is used for storing programs;
the processor, coupled to the memory, for executing the program stored in the memory to implement the steps of the method of any one of the preceding claims 1 to 8; or to carry out the steps of the method according to claim 11 or 12; or to carry out the steps of the method of claim 14.
16. A plate making apparatus, comprising: a memory, a processor, and an execution component; wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
acquiring an image;
performing layer division on the image to obtain a plurality of layers;
performing trapping processing on part of the layers to add traps on the processed layers;
generating a screen layout corresponding to each layer according to the part of the layers subjected to the trapping processing and the other part of the layers not subjected to the trapping processing;
and controlling the action of the execution component based on the screen layout corresponding to each layer, so as to process different silk screens respectively and obtain a plurality of screen printing plates for printing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110327794.2A CN113172986B (en) | 2021-03-26 | 2021-03-26 | Network layout generation method, production system, image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110327794.2A CN113172986B (en) | 2021-03-26 | 2021-03-26 | Network layout generation method, production system, image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113172986A true CN113172986A (en) | 2021-07-27 |
CN113172986B CN113172986B (en) | 2022-12-06 |
Family
ID=76922429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110327794.2A Active CN113172986B (en) | 2021-03-26 | 2021-03-26 | Network layout generation method, production system, image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113172986B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115147297A (en) * | 2022-06-09 | 2022-10-04 | 浙江华睿科技股份有限公司 | Image processing method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5625766A (en) * | 1995-05-11 | 1997-04-29 | Creo Products Inc. | Software based proofing method for double sided printing |
CN1857923A (en) * | 2006-06-06 | 2006-11-08 | 赛勒斯·沙纳德 | Four color dyeing and printing process |
CN101969518A (en) * | 2009-07-28 | 2011-02-09 | 方正国际软件(北京)有限公司 | Method and system for previewing trapping region |
CN102568020A (en) * | 2012-01-11 | 2012-07-11 | 广东壮丽彩印股份有限公司 | Overprint plate making method |
CN105630431A (en) * | 2014-11-20 | 2016-06-01 | Heidelberger Druckmaschinen AG | Method for generating PDF trapping objects without knowing the description of the contours
- 2021-03-26: Application CN202110327794.2A filed in China; granted as patent CN113172986B (status: Active)
Non-Patent Citations (1)
Title |
---|
CHEN Yanqiu et al.: "丝网印刷工艺与制作" [Screen Printing Process and Production], 28 February 2012, Hunan University Press * |
Also Published As
Publication number | Publication date |
---|---|
CN113172986B (en) | 2022-12-06 |
Similar Documents
Publication | Title |
---|---|
CN107507217B (en) | Method and device for making certificate photo and storage medium |
TWI225624B (en) | Method and apparatus for digital image segmentation using an iterative method |
KR20200014842A (en) | Image illumination methods, devices, electronic devices and storage media |
JP2001101437A (en) | Device and method for processing image |
JP4142614B2 (en) | Trapping method, trapping program, trapping apparatus, and printing system |
TWI766303B (en) | Measurement system, method for generating a learning model used in image measurement of a semiconductor including a specific structure, and recording medium storing a program for a computer to generate such a learning model |
CN110782466B (en) | Picture segmentation method, device and system |
US20220180122A1 (en) | Method for generating a plurality of sets of training image data for training machine learning model |
US10013784B2 (en) | Generating an assembled group image from subject images |
CN113172986B (en) | Network layout generation method, production system, image processing method and device |
CN114332895A (en) | Text image synthesis method, text image synthesis device, text image synthesis equipment, storage medium and program product |
JP2004199248A (en) | Image layout device, method and program |
US20040169664A1 (en) | Method and apparatus for applying alterations selected from a set of alterations to a background scene |
CN107590776A (en) | Image processing apparatus and image processing method |
CN117911444A (en) | Edge processing-based matting method and system |
JP2011061860A (en) | Image-data processor, medium with image-data set recorded, medium with image-data processing program recorded and method for processing image data |
RU2626661C1 (en) | Method and subsystem of determining digital images of fragments containing documents |
KR102157005B1 (en) | Method of improving precision of deep learning resultant image by using image filtering technique |
CN112215920A (en) | Personalized card display and manufacturing method and equipment |
RU2636097C1 (en) | Method and system of preparing text-containing images for optical recognition of symbols |
JP2005149439A (en) | Image processor and image processing method |
JP6544004B2 (en) | Color sample creating apparatus and color sample creating method, and image processing system using color sample |
CN117351032B (en) | Seal removing method and system |
JP7197875B1 (en) | Program, image processing method and image processing apparatus |
JP2001101436A (en) | Device and method for processing image |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | Address after: Room 554, 5/F, Building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province; Applicant after: Alibaba (China) Co., Ltd. Address before: Room 508, 5th Floor, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310052; Applicant before: Alibaba (China) Co., Ltd. |
GR01 | Patent grant | |