CN116958296A - Method, device, equipment and storage medium for generating a poster image


Info

Publication number
CN116958296A
Authority
CN
China
Prior art keywords
image
mask
generating
code
initial
Prior art date
Legal status
Pending
Application number
CN202310834333.3A
Other languages
Chinese (zh)
Inventor
郭文超
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd
Priority to CN202310834333.3A
Publication of CN116958296A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/001 - Texturing; Colouring; Generation of texture or colour
    • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 - Drawing of charts or graphs
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/90 - Determination of colour characteristics

Abstract

The embodiment of the invention provides a method, a device, equipment and a storage medium for generating a poster image, and relates to the field of financial technology. Initial generation materials including initial position data, an initial image and a portrait image are obtained; an image code is generated from the initial position data; a base map template is obtained and synthesized with the initial image to obtain a first poster image; a mask image is generated according to a preset masking rule; the image code is synthesized on the mask image to obtain a second poster image; and finally a target poster image is obtained from the portrait image, the first poster image and the second poster image. Because the initial generation materials are not uploaded to a synthesis server, the low poster synthesis efficiency caused by transmitting large files over the network is avoided. When a large number of poster composition operations are required, the poster generation process is distributed to terminal devices, and poster images are generated using the image synthesis function interfaces of the terminal, which improves the efficiency of poster image composition.

Description

Method, device, equipment and storage medium for generating a poster image
Technical Field
The present invention relates to the field of financial technology, and in particular to a method, an apparatus, a device, and a storage medium for generating a poster image.
Background
A visual poster is a common promotional poster in the financial field. It generally contains text and picture information, such as links, personal images, material pictures and promotional content, and its use in a financial institution can cover many purposes, such as product promotion and brand publicity. Financial institutions use professional, recognizable visual posters to promote their brand image and attract the attention of target customer groups. By creating attractive visual effects, visual posters help increase the number of users accessing financial business scenarios, as well as metrics such as the duration and frequency of a single user's stay in those scenarios, and therefore play an important role in the marketing activities of financial institutions.
In the related art, a poster is made by using a terminal device to upload the promotional material information to a composition server that composes the poster, where the composition server may be a service system server running a poster composition service or an algorithm server executing a poster composition algorithm. When a large number of poster composition operations are required, the composition server becomes overloaded, so this approach suffers from low poster composition efficiency. It is therefore necessary to provide a method for generating a poster image that improves poster composition efficiency.
Disclosure of Invention
The main purpose of the embodiments of the application is to provide a method, a device, equipment and a storage medium for generating a poster image that do not require a composition server to compose the poster, thereby improving poster composition efficiency.
To achieve the above object, a first aspect of an embodiment of the present application provides a method for generating a pictorial image, which is applied to a mobile terminal, where the mobile terminal includes an image code generating interface, and the method includes:
acquiring initial generation materials, wherein the initial generation materials comprise: initial position data, an initial image, and a portrait image;
performing character image conversion on the initial position data by using the image code generation interface to generate an image code;
carrying out first image synthesis processing according to a preset base map template and the initial image to obtain a first poster image;
generating a mask image according to a preset masking rule;
synthesizing the image code on the mask image to obtain a second poster image;
and carrying out second image synthesis processing according to the portrait image, the first poster image and the second poster image to obtain a target poster image.
In some embodiments, the image code generation interface comprises: a two-dimensional code filter interface and a color filter interface; the initial position data comprises: an initial position link; and the performing character image conversion on the initial position data by using the image code generation interface of the terminal to generate an image code comprises the following steps:
acquiring the character string content of the initial position link;
inputting the character string content into the two-dimensional code filter interface to generate a two-dimensional code, so as to obtain a first image code;
and inputting the first image code into the color filter interface to perform color synthesis to obtain the image code.
In some embodiments, the image code generation interface comprises: a character string conversion interface; the initial position data includes: an initial position program interface; the step of performing character image conversion on the initial position data to generate an image code comprises the following steps:
acquiring a link character string of the initial position program interface;
and inputting the linked character string into the character string conversion interface to perform character image conversion, and generating the image code.
In some embodiments, the performing a first image synthesis process according to a preset base map template and the initial image to obtain a first poster image includes:
acquiring the image size and the image color style of the initial image;
acquiring the base map template from a preset resource file according to the image size and the image color style, wherein the base map template comprises a first preset area;
and synthesizing the initial image in the first preset area to obtain the first poster image.
In some embodiments, the preset masking rules include: presetting mask size rules and preset mask transparency; the generating a mask image according to a preset masking rule includes:
acquiring first size information of the first pictorial image, wherein the first size information comprises: first height information and first width information;
generating mask height of the mask image according to the preset mask size rule and the first height information;
generating a mask width of the mask image according to the preset mask size rule and the first width information;
generating the transparency of the mask image according to the preset mask transparency;
and generating the mask image according to the mask height, the mask width and the transparency.
In some embodiments, the synthesizing the image code on the mask image to obtain a second poster image includes:
acquiring position information of a second preset area of the mask image;
generating a synthetic region of the image code according to the position information of the second preset region;
and synthesizing the image code in the synthesis area to obtain the second pictorial image.
In some embodiments, the second image synthesis process includes: performing image stitching according to a preset stitching rule or performing image lamination according to a preset lamination rule;
and the performing second image synthesis processing according to the portrait image, the first poster image and the second poster image to obtain a target poster image comprises the following steps:
acquiring the human image size of the human image, wherein the human image size comprises human image width information and human image height information;
adjusting the portrait width information according to the first pictorial image to obtain target width information;
adjusting the portrait height information according to the target width information to obtain target height information;
performing size adjustment on the portrait image according to the target height information and the target width information to obtain a preliminary portrait;
and carrying out second image synthesis processing on the preliminary portrait, the first pictorial image and the second pictorial image to obtain the target pictorial image.
To achieve the above object, a second aspect of an embodiment of the present application provides a pictorial image generating device applied to a mobile terminal, where the mobile terminal includes an image code generating interface, the device includes:
and a material acquisition module is generated: the method is used for acquiring initial generation materials, and the initial generation materials comprise the following steps: initial position data, an initial image, and a portrait image;
An image code generation module: the initial position data is used for carrying out character image conversion to generate an image code;
the first drawing image synthesis module: the first image synthesis processing is used for carrying out first image synthesis processing according to a preset base map template and the initial image to obtain a first drawing image;
a mask image generation module: the method comprises the steps of generating a mask image according to a preset masking rule;
the second drawing image synthesis module: the image code is used for synthesizing the image code on the mask image to obtain a second painting image;
the target drawing image synthesis module: and the target drawing image is obtained by performing second image synthesis processing according to the portrait image, the first drawing image and the second drawing image.
To achieve the above object, a third aspect of the embodiments of the present application proposes an electronic device, including a memory storing a computer program and a processor implementing the method according to the first aspect when the processor executes the computer program.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method according to the first aspect.
The method, device, equipment and storage medium for generating a poster image provided by the embodiments of the application are applied to the terminal. The method comprises: acquiring initial generation materials including initial position data, an initial image and a portrait image; generating an image code from the initial position data; obtaining a base map template and synthesizing it with the initial image to obtain a first poster image; generating a mask image according to a preset masking rule; synthesizing the image code on the mask image to obtain a second poster image; and finally obtaining a target poster image from the portrait image, the first poster image and the second poster image. The embodiment of the application avoids uploading the initial generation materials to a synthesis server, and thereby avoids the low poster image synthesis efficiency caused by transmitting large files over the network. When a large number of poster composition operations are required, the poster generation process is distributed to terminal devices, and poster images are generated using the image synthesis function interfaces of the terminal, which improves the efficiency of poster image composition.
Drawings
Fig. 1 is a flowchart of a method for generating a drawing image according to an embodiment of the present application.
Fig. 2 is a schematic diagram of program software or applet provided in an embodiment of the application.
Fig. 3 is a flowchart of step S120 in fig. 1.
Fig. 4 is a further flowchart of step S120 in fig. 1.
Fig. 5 is a flowchart of step S130 in fig. 1.
Fig. 6 a-6 c are schematic diagrams of a base pattern template provided by an embodiment of the present invention.
Fig. 7 is a flowchart of step S140 in fig. 1.
Fig. 8 is a schematic diagram of a mask image according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a mask image according to still another embodiment of the present invention.
Fig. 10 is a flowchart of step S150 in fig. 1.
Fig. 11 is a flowchart of step S160 in fig. 1.
Fig. 12 is a schematic view of a first poster image according to yet another embodiment of the present invention.
Fig. 13 a-13 b are schematic diagrams of a target poster image according to embodiments of the present invention.
Fig. 14 is a flowchart of a method for generating a poster image according to still another embodiment of the present invention.
Fig. 15 is a block diagram showing a structure of a poster image generating apparatus according to still another embodiment of the present invention.
Fig. 16 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
First, several terms involved in the present invention are explained:
the iOS system is an operating system applied to mobile electronic devices.
Two-dimensional code: a two-dimensional bar code is an encoding scheme that records data symbol information in a pattern of black and white cells distributed on a plane (in two dimensions) according to a certain rule. Its encoding uses the concept of the "0" and "1" bit streams that form the internal logical basis of computers, representing textual and numerical information with geometric shapes corresponding to binary values; the code is read automatically by an image input device or a photoelectric scanning device so that the information can be processed automatically. Two-dimensional codes share some characteristics of bar code technology, for example each code system has a specific character set, each character occupies a certain width, and there are certain verification functions; in addition, they can automatically identify information in different rows and handle rotation of the graphic.
Applet (mini program): an application that can be used without being downloaded and installed, and that does not need to be uninstalled.
Uniform resource locator (URL): a notation used by web service programs on the Internet to specify the location of information. It was originally invented by Tim Berners-Lee as the address of a web page and has since been adopted as the Internet standard RFC 1738.
CoreGraphics library: a fairly large graphics API library, also called "Quartz", comprising basic geometric elements (points, sizes, vectors, rectangles, etc.) that can be manipulated to render elements in an image. Quartz is the basis of 2-D picture rendering; with the CoreGraphics library, various graphics, shapes and fills can be drawn, shadow effects can be applied, images can be composited, and PDF files can be created.
A visual poster is a promotional poster that generally contains text and picture information, such as links, personal images, material pictures and promotional content; making a suitable visual poster for the promotional content can improve the promotional effect.
In the related art, a poster is made by using a terminal device to upload the promotional material information to a composition server that composes the poster, where the composition server may be a service system server running a poster composition service or an algorithm server executing a poster composition algorithm. When a large number of poster composition operations are required, the composition server becomes overloaded, so this approach suffers from low poster composition efficiency. It is therefore necessary to provide a method for generating a poster image that improves poster composition efficiency.
Based on the above, the embodiments of the invention provide a method, a device, equipment and a storage medium for generating a poster image, which avoid uploading the initial generation materials to a synthesis server and thereby avoid the low poster image synthesis efficiency caused by transmitting large files over the network. When a large number of poster composition operations are required, the poster generation process is distributed to terminal devices, and poster images are generated using the image synthesis function interfaces of the terminal, which improves the efficiency of poster image composition.
The embodiment of the invention provides a method, a device, equipment and a storage medium for generating a drawing image, and specifically, the method for generating the drawing image in the embodiment of the invention is described firstly by describing the following embodiment.
The invention provides a method for generating a drawing image, and relates to the technical field of computer software. The method for generating the drawing image provided by the embodiment of the invention can be applied to the terminal and can also be a computer program running on the terminal. For example, the computer program may be a native program or a software module in an operating system; the Application may be a local (Native) Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a client that supports generation of a drawing image, or an applet, i.e. a program that only needs to be downloaded to a browser environment to run; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in. Wherein the terminal communicates with the server through a network. In some embodiments, the terminal may be a smart phone, tablet, notebook, desktop, or smart watch, or the like.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In order to facilitate understanding of the embodiments of the present application, the concept of generating a pictorial image will be briefly described below with reference to an example of a specific application scenario.
A visual poster serves as a promotional picture containing a good deal of text and picture information; for example, an original picture, a link address, a personal image, two-dimensional code information and other product promotional content are dynamically combined into a promotional visual poster that includes the two-dimensional code information. The user shares the visual poster with a target object through social tools, leaflets or the like; the target object gains a preliminary understanding of the product from the received poster and then enters the corresponding program or applet via the two-dimensional code on the poster to learn more detailed product information. Making a suitable visual poster for the promotional content therefore improves the promotional effect.
The embodiment of the application is applied to a mobile terminal: the mobile terminal is used to generate the visual poster, the various materials uploaded by the user for generating the poster are acquired, the materials are combined using the image-synthesis-related interfaces on the mobile terminal to generate the corresponding visual poster, and the user then uses the poster for subsequent operations.
The method for generating a pictorial image in the embodiment of the present application will be described first.
Fig. 1 is an optional flowchart of a method for generating a poster image according to an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S110 to S160. It should be understood that the order of steps S110 to S160 in fig. 1 is not particularly limited, and the order of steps may be adjusted, or some steps may be reduced or increased according to actual requirements.
Step S110: and acquiring initial generation materials.
In one embodiment, initially generating the material includes: initial position data, an initial image, and a portrait image. The initial position data is data representing a link address in the visual poster, the initial image is an image used for propaganda in the visual poster, for example, a product image needing propaganda or a background image of the poster, and the portrait image is a personal image needing to be displayed in the visual poster.
In an embodiment, referring to fig. 2, program software or an applet for generating visual posters is installed on the mobile terminal; the software is named xxx. The user clicks the software or applet to open the interface shown in fig. 2, uploads an initial image and a portrait image to the software backend by clicking the "upload" icon on the interface, uploads the initial position data to the software backend by clicking the "address" icon, and triggers poster generation by clicking the "one-key generation" icon. The software backend generates the visual poster from the received initial position data, initial image and portrait image, outputs it as the target poster image to the front-end interface, and the user obtains the target poster image from the front-end interface; the target poster image can then be used further via screenshot, sharing or downloading, which improves the convenience of user interaction.
The following steps describe the specific process of generating the visual poster from the received initial position data, initial image and portrait image.
Step S120: and performing character image conversion on the initial position data by using an image code generation interface of the mobile terminal to generate an image code.
In an embodiment, the system running in the mobile terminal is an iOS system, and the mobile terminal of the iOS system integrates an interface related to the image synthesis function, so that the related function can be realized by calling the corresponding interface in the background. For example, in one embodiment, an image code generation interface of the mobile terminal is invoked to generate an image code.
In one embodiment, the initial position data includes: the initial position link is a jump link of the visual picture which the user wants to set, namely, the user can jump to a website corresponding to the initial position link through the visual picture, so that the initial position link can be link information in a URL format. The initial position program interface can be an applet entry which is wanted to jump into in the social software, and the initial position program interface is character string information corresponding to the applet entry.
In one embodiment, when the initial position data is an initial position link, the image code generation interface includes: two-dimensional code filter interface and color filter interface, refer to fig. 3, which is a flowchart showing a specific implementation of step S120 in an embodiment, where the step of performing character image conversion on initial position data by using an image code generating interface of a mobile terminal in this embodiment, the step of generating an image code includes:
Step S121: and acquiring the character string content linked at the initial position.
In an embodiment, the initial position links the corresponding URL format link information, and acquires the character string content corresponding to the link information.
Step S122: and inputting the character string content into a two-dimensional code filter interface to generate a two-dimensional code, so as to obtain a first image code.
In an embodiment, the two-dimensional code filter interface is implemented by calling the CIFilter class in the iOS system: a filter related to two-dimensional codes, such as the CIQRCodeGenerator filter, is invoked, and the first image code is then generated using the CoreGraphics library.
In one embodiment, using the CIQRCodeGenerator filter involves three classes: the CIContext class, the CIImage class, and the CIFilter class. The CIContext class is used for processing pictures; the CIImage class is used for obtaining picture-related data, and a CIImage object can be created from a UIImage; the CIFilter class holds a dictionary of parameters, and the parameter-setting methods it provides can be used to set filter parameters through this dictionary.
In an embodiment, the specific steps of inputting the character string content into the two-dimensional code filter interface to generate the two-dimensional code and obtain the first image code are as follows: 1) create a CIImage object with the CIImage class and initialize it, for example through the CIImage(contentsOf:) initializer; 2) create a CIQRCodeGenerator filter, set the corresponding filter parameters, and obtain the generated two-dimensional code image as a CIImage object; 3) create a CPU- or GPU-backed CIContext object and output the processed two-dimensional code image through it to obtain the first image code.
According to the method, the two-dimensional code filter interface in the mobile terminal iOS system is called, and the two-dimensional code can be generated to obtain the first image code, wherein the first image code is the two-dimensional code.
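As an illustration of the steps above, the following Swift sketch generates a two-dimensional code from the link string with CIFilter and CIContext; the function name, the correction level "M" and the 8x scale factor are assumptions used only for illustration, not part of this embodiment.

```swift
import UIKit
import CoreImage

// Illustrative sketch: link string -> CIQRCodeGenerator -> CIContext -> UIImage.
func makeQRCode(from urlString: String) -> UIImage? {
    guard let data = urlString.data(using: .utf8),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    filter.setValue("M", forKey: "inputCorrectionLevel")
    guard let output = filter.outputImage else { return nil }

    // Scale the small native output so it stays sharp in the poster.
    let scaled = output.transformed(by: CGAffineTransform(scaleX: 8, y: 8))

    // A CIContext (CPU- or GPU-backed) renders the CIImage into a CGImage.
    let context = CIContext()
    guard let cgImage = context.createCGImage(scaled, from: scaled.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```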
Step S123: and inputting the first image code into a color filter interface for color synthesis to obtain the image code.
In an embodiment, to enhance the display effect of the first image code, the color filter interface performs color synthesis on the first image code to obtain an image code containing color information. In an embodiment, the color filter interface is also a filter in the CIFilter class, for example the CIRadialGradient filter; in this embodiment, the outputs of the CIQRCodeGenerator filter and the CIRadialGradient filter are superimposed to obtain the image code.
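The colour-synthesis step could, for example, be sketched as compositing a CIRadialGradient over the first image code; the blend filter (CIMultiplyCompositing), the colours and the radii below are assumptions chosen only to illustrate the superimposition.

```swift
import CoreImage

// Illustrative sketch: generate a radial gradient over the QR code's extent
// and blend it with the code to produce a coloured image code.
func colorize(qrImage: CIImage) -> CIImage? {
    guard let gradient = CIFilter(name: "CIRadialGradient"),
          let blend = CIFilter(name: "CIMultiplyCompositing") else { return nil }
    let center = CIVector(x: qrImage.extent.midX, y: qrImage.extent.midY)
    gradient.setValue(center, forKey: "inputCenter")
    gradient.setValue(0, forKey: "inputRadius0")
    gradient.setValue(qrImage.extent.width, forKey: "inputRadius1")
    gradient.setValue(CIColor(red: 0.1, green: 0.3, blue: 0.8), forKey: "inputColor0")
    gradient.setValue(CIColor(red: 0.5, green: 0.1, blue: 0.6), forKey: "inputColor1")
    guard let gradientImage = gradient.outputImage?.cropped(to: qrImage.extent) else { return nil }

    blend.setValue(gradientImage, forKey: "inputImage")
    blend.setValue(qrImage, forKey: "inputBackgroundImage")
    return blend.outputImage?.cropped(to: qrImage.extent)
}
```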
In one embodiment, when the initial position data is the initial position program interface, the image code generation interface includes: the character string conversion interface, referring to fig. 4, is a flowchart of a specific implementation of step S120 shown in an embodiment, where the step of performing character image conversion on the initial position data by using the image code generation interface of the mobile terminal in this embodiment, the step of generating the image code includes:
step S124: and acquiring a link character string of the initial position program interface.
In an embodiment, the initial position program interface may be a link string corresponding to an applet entry in the social software, where the applet entry is to be jumped into, and the link string is string information corresponding to the applet entry, i.e. the applet entry can be jumped into by the link string. In this embodiment, the link string may be user input or may be automatically recognized by an action to jump to the target applet.
Step S125: and inputting the linked character string into a character string conversion interface to perform character image conversion, and generating an image code.
In an embodiment, the image code is a two-dimensional code, and a character string conversion interface integrated in the iOS system is called to perform the character image conversion. The character string conversion interface may be a Base64 string-to-picture interface, which performs character image conversion on the input link character string to generate the image code. It can be understood that, in an embodiment, the Base64 string-to-picture interface decodes the link string into a byte array using a Base64 decoding method (Base64 being a scheme for transmitting 8-bit byte data), and then writes the decoded byte array out as a picture in two-dimensional code format, for example through a FileOutputStream class, thereby obtaining the image code.
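For comparison, a minimal Swift sketch of such a Base64 string-to-picture conversion (the FileOutputStream mentioned above is a Java-style class) could decode the string into bytes and build the image in memory; the data is assumed to already encode an image in two-dimensional-code form.

```swift
import UIKit

// Minimal sketch: Base64 text -> Data -> UIImage.
func imageCode(fromBase64 base64String: String) -> UIImage? {
    guard let data = Data(base64Encoded: base64String,
                          options: .ignoreUnknownCharacters) else { return nil }
    return UIImage(data: data)
}
```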
The steps obtain the two-dimensional code information in the visual picture report, and the synthesis process of the initial image is described below.
Step S130: and carrying out first image synthesis processing according to the preset base pattern template and the initial image to obtain a first drawing image.
In an embodiment, a plurality of base map templates can form a base map template database stored on a server; the mobile terminal sends a request to the server for base map templates according to the synthesis requirements of the visual poster, obtains the corresponding base map templates from the database, and performs the next image synthesis. It will be appreciated that the base map template database may also be stored in the mobile terminal, in which case the mobile terminal can retrieve the corresponding base map template directly from its own storage.
In one embodiment, the base map template database is stored as a resource file of the iOS system, for example in the installation package of the program software for generating visual posters. In general, an installation package includes executable binary files as well as picture, audio and other resource files, and when the installation package of the relevant iOS software is downloaded, these resource files are downloaded to the mobile terminal. Therefore, in one embodiment, clicking the program software for generating visual posters starts the APP; at this point the executable binary file is loaded into memory, while the picture, audio and other resource files remain stored on disk.
In an embodiment, referring to fig. 5, which is a flowchart showing a specific implementation of step S130, the step of obtaining a base map template and performing a first image synthesis process with the initial image to obtain a first poster image includes:
step S131: the image size and image color style of the initial image are obtained.
Step S132: and obtaining a base map template from a preset resource file according to the image size and the image color style.
In one embodiment, the initial image is an image used for promotion in the visual poster, such as an image of the product to be promoted or a background image of the poster. Because the base map templates suited to different initial image sizes differ, and for aesthetic reasons, the image size and image color style of the initial image need to be matched to the base map template.
In an embodiment, the base pattern template includes a first preset area for placing the initial image, and in fig. 6a to 6c, the first preset area is indicated by a dashed box. Referring to fig. 6a, the image size of the initial image is smaller compared to the size of the first preset area in the base template 1, and thus the initial image and the base template 1 are not adapted. Referring to fig. 6b, the image size of the initial image exceeds the first preset area of the base template 2, and thus the initial image and the base template 2 are not adapted. Referring to fig. 6c, the image size of the initial image is just adapted to the first preset area of the base pattern template 3. Referring to fig. 6 a-6 c, the base template 3 is the base template that needs to be selected for this initial image.
In the above embodiment, the size relation between each base pattern template and the first preset area thereof is stored in the base pattern template database, and after the image size of the initial image is obtained, a suitable base pattern template is selected in the base pattern template database according to the image size. It will be appreciated that the selection criteria may include two points: 1) The initial image can be completely displayed in a first preset area in the base map template; 2) The distance between the upper, lower, left and right edge lines of the initial image and the upper, lower, left and right edge lines of the first preset area is smaller than the corresponding preset distance.
Referring to fig. 6c, a distance d1 between a top line of the initial image and a top line of the first preset area is smaller than the top line preset distance c1; the distance d2 between the lower edge line of the initial image and the lower edge line of the first preset area is smaller than the preset distance c2 of the lower edge line; the distance d3 between the left line of the initial image and the left line of the first preset area is smaller than the preset distance c3 between the left line and the left line; the distance d4 between the right line of the initial image and the right line of the first preset area is smaller than the preset distance c4 between the right line and the right line. It will be appreciated that c1, c2, c3 and c4 may be different or the same, as desired.
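The fit test implied by the two selection criteria above can be sketched as follows; the function assumes the initial image is centred in the first preset area, and the margin thresholds stand in for the preset distances c1 to c4.

```swift
import UIKit

// Sketch: does a template's first preset area suit the initial image?
func templateFits(imageSize: CGSize, presetArea: CGRect, maxMargins: UIEdgeInsets) -> Bool {
    // 1) The initial image must be fully displayable inside the preset area.
    guard imageSize.width <= presetArea.width,
          imageSize.height <= presetArea.height else { return false }
    // 2) Each gap between an image edge and the corresponding area edge
    //    must stay below its preset distance (image assumed centred).
    let horizontalGap = (presetArea.width - imageSize.width) / 2
    let verticalGap = (presetArea.height - imageSize.height) / 2
    return horizontalGap <= maxMargins.left && horizontalGap <= maxMargins.right
        && verticalGap <= maxMargins.top && verticalGap <= maxMargins.bottom
}
```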
In an embodiment, the image color style of the initial image may be obtained, and the initial image may be subjected to color style classification according to a preset style classification rule, for example, different color styles of red, green, yellow, blue or violet. Correspondingly, the base pattern templates are subjected to color style division according to the same preset style division rules, each base pattern template and the color style thereof are stored in a base pattern template database, and after the image color style of the initial image is acquired, the appropriate base pattern template is selected in the base pattern template database according to the image color style.
In an embodiment, the color of each pixel of the initial image is counted, the proportion of pixels of each color among all pixels is obtained, and the color family corresponding to the color with the largest proportion is taken as the color style of the initial image.
It can be understood that a base map template of the same color family as the initial image may be selected, for example a red-family template for a red-family initial image, or a template of the complementary color family may be selected, for example a red-family template for a green-family initial image, to achieve a clashing-color effect. The manner of color style selection is not limited in this embodiment.
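One way to realise the pixel-counting statistic described above is sketched below; the sample size, the hue-bucket boundaries and the assumption of RGBA byte order are all illustrative choices, not part of this embodiment.

```swift
import UIKit

// Sketch: downsample the initial image, bucket every pixel by coarse hue
// family, and return the family with the largest pixel share.
func dominantColorStyle(of image: UIImage, sampleSize: Int = 32) -> String? {
    let size = CGSize(width: sampleSize, height: sampleSize)
    let small = UIGraphicsImageRenderer(size: size).image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
    guard let cgImage = small.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }

    var counts: [String: Int] = [:]
    let bytesPerPixel = cgImage.bitsPerPixel / 8
    for y in 0..<cgImage.height {
        for x in 0..<cgImage.width {
            let offset = y * cgImage.bytesPerRow + x * bytesPerPixel
            // Assumes RGBA byte order for the rendered bitmap.
            let color = UIColor(red: CGFloat(bytes[offset]) / 255,
                                green: CGFloat(bytes[offset + 1]) / 255,
                                blue: CGFloat(bytes[offset + 2]) / 255,
                                alpha: 1)
            var hue: CGFloat = 0, sat: CGFloat = 0, bri: CGFloat = 0
            _ = color.getHue(&hue, saturation: &sat, brightness: &bri, alpha: nil)
            let family: String
            switch hue {
            case 0..<0.08, 0.92...1: family = "red"
            case 0.08..<0.20:        family = "yellow"
            case 0.20..<0.45:        family = "green"
            case 0.45..<0.75:        family = "blue"
            default:                 family = "violet"
            }
            counts[family, default: 0] += 1
        }
    }
    return counts.max { $0.value < $1.value }?.key
}
```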
Step S133: and synthesizing the initial image in a first preset area to obtain a first pictorial image.
In an embodiment, after the above process obtains the base image template from the preset resource file according to the image size and the image color style of the initial image, the initial image is synthesized in the first preset area of the base image template, so as to obtain the base image template including the initial image as the first drawing image.
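A minimal sketch of this first synthesis step, assuming the first preset area of the selected template is known as a rectangle, is shown below.

```swift
import UIKit

// Sketch: draw the selected base map template, then the initial image
// inside its first preset area, to obtain the first poster image.
func makeFirstPosterImage(template: UIImage, initialImage: UIImage, presetArea: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: template.size)
    return renderer.image { _ in
        template.draw(in: CGRect(origin: .zero, size: template.size))
        initialImage.draw(in: presetArea)
    }
}
```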
After the first poster image is obtained, masking operation is needed by the following steps to improve the attractiveness of the visual poster.
Step S140: and generating a mask image according to a preset masking rule.
In an embodiment, the mobile terminal may obtain the preset masking rule from the server, or the preset masking rule is stored in a resource file of program software of the mobile terminal for generating the visual sketch, or stored in a memory space of the mobile terminal itself.
In one embodiment, the predetermined masking rules include a predetermined mask size rule and a predetermined mask transparency. The mask size rule is preset to limit the size of the mask image, and the mask transparency is preset to limit the transparency information of different areas in the mask image.
In an embodiment, referring to fig. 7, which is a flowchart showing a specific implementation of step S140, the step of obtaining a preset masking rule and generating a mask image according to the preset masking rule in this embodiment includes:
step S141: and acquiring first size information of the first pictorial image.
In one embodiment, the first size information of the first poster image includes first height information and first width information; that is, the height and width of the first poster image are acquired, and a suitable mask layer is generated according to this height and width.
Step S142: and generating the mask height of the mask image according to the preset mask size rule and the first height information.
Step S143: and generating the mask width of the mask image according to the preset mask size rule and the first width information.
In an embodiment, referring to fig. 8, according to the preset mask size rule, the mask width of the mask image may be set to match the first width information of the first poster image, and the mask height of the mask image may be set according to the first height information of the first poster image. For example, the mask width of the mask image and the first width information of the first poster image are both x1, and the mask extends from 1/4 of the height of the first poster image down to its bottom, so the mask height h is 3/4 of the first height information h1.
Step S144: generating mask image transparency according to the preset mask transparency.
In one embodiment, to enhance the masking effect of the mask, different transparency is set at different positions of the mask image.
For example, referring to fig. 9, in one embodiment the mask height is divided into three regions: the first region runs from the top of the mask image to the 1/3 mask-height position; the second region from the 1/3 to the 2/3 mask-height position; and the third region from the 2/3 mask-height position to the bottom of the mask image. The transparency value of the first region is set below 50%, that of the second region above 50%, and that of the third region to 100%, that is, the third region is a solid color, which may for example be set to white. In other words, the effect to be achieved in this embodiment is that the mask image changes gradually from transparent to a solid color from top to bottom.
It will be appreciated that, for a better masking effect, a gradual change may be applied within the first and second regions; for example, the first region may be graded linearly from 0% to 50% and the second region from 50% to 100%, with the same linear grading relationship used in both regions.
Step S145: a mask image is generated based on the mask height, the mask width, and the transparency of the different regions.
In one embodiment, after the mask height, mask width and different region transparency are obtained according to the above procedure, a corresponding mask image is generated according to these parameters.
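A sketch of this mask generation under the example rule above follows: a mask as wide as the first poster image, 3/4 of its height, fading from fully transparent at the top to solid white at the bottom (the stated region values are interpreted as opacity so the fade matches the described effect). The exact gradient stops are assumptions consistent with the example regions.

```swift
import UIKit

// Sketch: build the gradient mask image from the first poster image's size.
func makeMaskImage(firstPosterSize: CGSize) -> UIImage {
    let maskSize = CGSize(width: firstPosterSize.width, height: firstPosterSize.height * 0.75)
    let renderer = UIGraphicsImageRenderer(size: maskSize)
    return renderer.image { context in
        let colors = [UIColor.white.withAlphaComponent(0.0).cgColor,  // top: transparent
                      UIColor.white.withAlphaComponent(0.5).cgColor,  // 1/3 height
                      UIColor.white.withAlphaComponent(1.0).cgColor,  // 2/3 height
                      UIColor.white.cgColor]                          // bottom: solid
        let locations: [CGFloat] = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
        guard let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                        colors: colors as CFArray,
                                        locations: locations) else { return }
        context.cgContext.drawLinearGradient(gradient,
                                             start: .zero,
                                             end: CGPoint(x: 0, y: maskSize.height),
                                             options: [])
    }
}
```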
Step S150: and synthesizing the image code on the mask image to obtain a second picture report image.
In one embodiment, the image code obtained in the above step is synthesized on the mask image, so as to obtain a second pictorial image containing the image code. Referring to fig. 10, a flowchart showing a specific implementation of step S150 is shown, where the step of synthesizing the image code on the mask image to obtain the second pictorial image includes:
step S151: and acquiring the position information of a second preset area of the mask image.
In an embodiment, there is a second preset area in the mask image for placing the image code, and the setting may be performed, for example, a right middle position, a right lower position, a left upper position, etc. of the mask image, where the position information of the second preset area is not limited.
Step S152: and generating a synthesized region of the image code according to the position information of the second preset region.
In an embodiment, assuming that the second preset area is located in the middle of the mask image, four angular coordinates of a rectangular frame of the second preset area are determined according to the size of the mask image, and a synthesized area is generated according to the four angular coordinates. It will be appreciated that the size of the composite region needs to be at least greater than the size of the image code.
Step S153: and synthesizing the image codes in the synthesis area to obtain a second pictorial image.
In one embodiment, after the information of the synthesized area is obtained, the image code is synthesized in the synthesized area, so as to obtain the second pictorial image.
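A sketch of this step, assuming the second preset area is the centre of the mask and the image code side length is a quarter of the mask width (both assumptions), is shown below.

```swift
import UIKit

// Sketch: compute the composite region from the mask size and draw the
// image code into it to obtain the second poster image.
func makeSecondPosterImage(mask: UIImage, imageCode: UIImage) -> UIImage {
    let side = mask.size.width / 4
    let compositeRegion = CGRect(x: (mask.size.width - side) / 2,
                                 y: (mask.size.height - side) / 2,
                                 width: side, height: side)
    let renderer = UIGraphicsImageRenderer(size: mask.size)
    return renderer.image { _ in
        mask.draw(at: .zero)
        imageCode.draw(in: compositeRegion)
    }
}
```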
Step S160: and carrying out second image synthesis processing according to the portrait image, the first poster image and the second poster image to obtain a target poster image.
In an embodiment, the second image synthesis process includes: and performing image stitching according to a preset stitching rule or performing image lamination according to a preset lamination rule. Referring to fig. 11, a flowchart showing a specific implementation of step S160 is shown in an embodiment, where in this embodiment, the step of performing the second image synthesis process according to the portrait image, the first poster image, and the second poster image to obtain the target poster image includes:
Step S161: and acquiring the size of the portrait image.
In one embodiment, the portrait size of the portrait image includes portrait width information and portrait height information.
Step S162: and obtaining target width information according to the preliminary portrait width information of the first pictorial image.
In an embodiment, if the human image needs to be synthesized, when the base map template is selected, the size information of the human image needs to be combined, the selected base map template includes a third preset area for placing the human image, that is, the first report image includes a third preset area, and the human image is synthesized in the third preset area in the first report image. It is therefore necessary to preliminary portrait width information according to size information of the third preset area, that is, to enable a portrait image to be synthesized in the third preset area, where the target width information may be a scaling, for example, n% reduction or m% enlargement, where n and m are calculated according to size information and portrait width information of the third preset area.
For example, referring to fig. 12, a schematic diagram of a first pictorial image including a third preset area is shown, in which an initial image is synthesized in the first preset area based on a base pattern template, and a portrait image is synthesized in the third preset area, and the width of the third preset area is half of that of the portrait image w, so that it is necessary to reduce the portrait width information by half to obtain a preliminary portrait, and the preliminary portrait is synthesized in the first pictorial image, and the target width information at this time is reduced by 50%.
Step S163: and obtaining target height information according to the target width information and the preliminary human image height information of the preset adjustment proportion.
Step S164: and carrying out size adjustment on the portrait image according to the target height information and the target width information to obtain a preliminary portrait.
In an embodiment, if the portrait width information of the portrait image is changed, then to prevent the portrait image from being distorted the portrait height information is adjusted by the same scaling ratio; that is, the scaling in the target height information is the same as in the target width information, so that the aspect ratio of the portrait image (portrait width information / portrait height information) remains unchanged and the adjusted portrait image is scaled up or down proportionally. Referring to fig. 12, the target height information of the portrait image is likewise reduced by 50%: assuming the aspect ratio of the original portrait image is 3:4, after the 50% reduction the aspect ratio of the resulting preliminary portrait is still 3:4.
step S165: and carrying out second image synthesis processing on the preliminary portrait, the first pictorial image and the second pictorial image to obtain a target pictorial image.
In one embodiment, the preliminary portrait is merged into the first pictorial image, and then the new first pictorial image is merged with the second pictorial image to obtain the target pictorial image. It can be understood that the portrait image, the initial image and the image code are not overlapped in the synthesized target pictorial image.
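A sketch of steps S161 to S165, assuming the third preset area rectangle and the placement origin of the second poster image are known, is given below: the portrait is scaled by one factor derived from the area's width (so its aspect ratio is preserved), drawn into the first poster image, and the second poster image is then laminated on top.

```swift
import UIKit

// Sketch: proportional portrait resize followed by the final merge.
func makeTargetPosterImage(portrait: UIImage, firstPoster: UIImage, secondPoster: UIImage,
                           portraitArea: CGRect, secondPosterOrigin: CGPoint) -> UIImage {
    // One scale factor keeps the width/height ratio unchanged.
    let scale = portraitArea.width / portrait.size.width
    let portraitRect = CGRect(x: portraitArea.minX, y: portraitArea.minY,
                              width: portrait.size.width * scale,
                              height: portrait.size.height * scale)
    let renderer = UIGraphicsImageRenderer(size: firstPoster.size)
    return renderer.image { _ in
        firstPoster.draw(in: CGRect(origin: .zero, size: firstPoster.size))
        portrait.draw(in: portraitRect)
        secondPoster.draw(at: secondPosterOrigin)   // laminated on top
    }
}
```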
In an embodiment, the merging of the new first poster image and the second poster image includes two ways, that is, image stitching is performed according to a preset stitching rule or image lamination is performed according to a preset lamination rule.
In an embodiment, the preset stitching rule may specify a stitching arrangement, for example the new first poster image on top and the second poster image below, or the second poster image on top and the first poster image below. It will be appreciated that if top-bottom stitching is performed, the second poster image may be given an additional base color, such as a color in the same color family as the first poster image, so that the problem of a transparent area showing in the second poster image is avoided.
In an embodiment, the preset lamination rule may specify a layering arrangement, for example the new first poster image as the upper layer and the second poster image as the lower layer, or the second poster image as the upper layer and the first poster image as the lower layer.
In one embodiment, the mask width of the mask image is set to be consistent with the first width information of the first poster image, the mask height of the mask image starts at 1/4 of the height of the first poster image to the bottom of the first poster image, that is, the mask height of the mask image occupies 3/4 of the first height information. At this time, the second preset area is located in the middle of the mask image, and the image code is synthesized in the mask image to obtain a second poster image.
Referring to fig. 13a, the new first poster image and the second poster image are stitched top and bottom with the same width, the new first poster image on top and the second poster image below, so the image code lies in the middle of the target poster image, offset downward. In this embodiment the image code in the figure is square with a side length of 1/4 of the second poster image, so the top edge of the image code lies at 1/2 of the height of the second poster image and its bottom edge at 1/4 of that height.
Referring to fig. 13b, the new first poster image and the second poster image are laminated, the new first poster image as the upper layer and the second poster image as the lower layer, so the image code lies in the middle of the target poster image. In this embodiment the image code in the figure is square with a side length of 1/4 of the target poster image, so the top edge of the image code lies at 1/2 of the height of the target poster image and its bottom edge at 1/4 of that height, and the image code, the portrait image and the initial image in the figure do not overlap. It will be appreciated that the upper part of the mask image is transparent or nearly transparent, so even if the top edge of the second poster image lies over the portrait image or the initial image, no occlusion results.
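For the stitching variant, a short sketch under the assumption that both posters share the same width is shown below; the canvas is as tall as the two posters combined.

```swift
import UIKit

// Sketch: up-and-down stitching of the new first poster and the second poster.
func stitchVertically(top: UIImage, bottom: UIImage) -> UIImage {
    let size = CGSize(width: max(top.size.width, bottom.size.width),
                      height: top.size.height + bottom.size.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        top.draw(at: .zero)
        bottom.draw(at: CGPoint(x: 0, y: top.size.height))
    }
}
```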
From the above, in the embodiment of the present application, initial generation materials including initial position data, an initial image, and a portrait image are obtained; generating an image code according to the initial position data, acquiring a base map template, synthesizing the base map template with the initial image to obtain a first pictorial image, generating a mask image according to a preset masking rule, synthesizing the image code on the mask image to obtain a second pictorial image, and finally obtaining a target pictorial image according to the portrait image, the first pictorial image and the second pictorial image, wherein the target pictorial image can be an image pictorial.
Referring to FIG. 14, a flowchart of the program software for generating visual paintings in one embodiment is shown.
Step S1410: clicking on the icon begins generating the visual poster.
In one embodiment, corresponding initial position data, initial images, and portrait images are acquired. The acquiring process may also be to acquire the downloaded link of the portrait image, the downloaded link of the initial image, etc., and acquire the portrait image and the initial image through a network downloading mode.
Next, an image code generation process is performed, including steps S1421 to S1423.
Step S1421: judging the type of the initial position data, judging whether the initial position data is an initial position link, if so, entering step S1422, executing a first image code generation process, otherwise, entering step S1423, executing a second image code generation process.
Step S1422: and acquiring the character string content linked at the initial position, then calling a CIFilter filter class in the iOS system, and generating an image code by utilizing the technology related to the CoreGrahic library.
Step S1423: and acquiring a link character string of the initial position program interface, inputting the link character string into a Base64 character string-to-picture interface for character image conversion, and generating an image code.
The process of generating the first poster image is performed next, including the following steps S1431 to S1434.
Step S1431: if the portrait image is checked, the step S1432 is performed to select the base pattern template including the third preset area, otherwise the step S1433 is performed to select the base pattern template not including the third preset area.
Step S1434: and synthesizing the initial image in a first preset area of the base map template to obtain a first drawing image.
A process of generating a second poster image is performed next, including steps S1441 to S1443.
In step S1441, a mask image is generated.
Step S1442: and synthesizing the image code on the mask image to obtain a second picture report image.
Step S1443: if the portrait image is checked, the step S1444 is performed, the target picture image is obtained according to the portrait image, the first picture image and the second picture image, otherwise the step S1445 is performed, and the target picture image is obtained according to the first picture image and the second picture image.
Step S1450: obtain the visual poster and end the operation, where the visual poster is the target poster image obtained in the previous step.
Step S1460: ending the operation.
As can be seen from the above embodiments, with this one-click generation mode the user only needs to select the desired content in advance and click "generate a poster" to complete the composition of the image poster, which improves the convenience of user interaction. Meanwhile, the positions of the initial image, the image code and the portrait image are obtained through algorithmic analysis rather than by simply splicing pictures at fixed positions and sizes: reasonable composition positions and image sizes are worked out for the portrait image, the initial image, the two-dimensional-code image code, the mask image and the other picture elements, the result is returned to the front end of the mobile terminal, and the front end then completes the final image composition according to this information.
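To make that division of labour concrete, the analysis result handed to the mobile front end could be modelled roughly as below; every field name here is hypothetical and only illustrates the kind of information the front end needs for the final composition.

```swift
import UIKit

// Hypothetical layout payload returned by the analysis step; all field names are assumptions.
struct PosterLayout {
    let targetSize: CGSize         // overall size of the target poster image
    let initialImageFrame: CGRect  // where the initial image is placed
    let portraitFrame: CGRect      // where the portrait image is placed
    let imageCodeFrame: CGRect     // where the two-dimensional code is placed
    let maskFrame: CGRect          // area covered by the nearly transparent mask
}
```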
According to the technical scheme provided by the embodiment of the invention, the initial generation materials including the initial position data, the initial image and the portrait image are obtained; generating an image code according to the initial position data, acquiring a base map template, synthesizing the base map template with the initial image to obtain a first pictorial image, generating a mask image according to a preset masking rule, synthesizing the image code on the mask image to obtain a second pictorial image, and finally obtaining a target pictorial image according to the portrait image, the first pictorial image and the second pictorial image.
In one embodiment, the method for generating a poster image is used in a financial scenario. Since the image poster is a common promotional poster in the financial field, its use by a financial institution can cover many aspects, such as product promotion and brand promotion. Financial institutions use visual posters that are professional and recognizable to enhance their brand image and attract the attention of target customer groups. In the financial scenario, poster image generation is performed from initial generation materials, including initial position data, an initial image and a portrait image related to a specific financial business, and finally the target poster image is obtained. The target poster image helps improve indicators such as the number of users accessing the financial business scenario and the dwell time and visit frequency of a single user in that scenario, and plays an important role in the marketing activities of financial institutions.
The embodiment of the application avoids uploading the initial generation materials to the synthesis server, which removes the low drawing-image synthesis efficiency caused by transmitting large files over the network. When a large number of drawing synthesis operations are needed, the drawing generation process is distributed to the mobile terminal devices, and drawing image generation is performed using the image synthesis function interface of the mobile terminal, so that the synthesis efficiency of drawing images is improved.
The embodiment of the invention further provides a device for generating a drawing image, which can implement the above method for generating a drawing image. Referring to fig. 15, the device comprises:
a generation material acquisition module 1510, configured to acquire initial generation materials, the initial generation materials comprising: initial position data, an initial image, and a portrait image;
an image code generation module 1520, configured to perform character image conversion on the initial position data to generate an image code;
a first drawing image synthesis module 1530, configured to perform first image synthesis processing according to a preset base map template and the initial image to obtain a first drawing image;
a mask image generation module 1540, configured to generate a mask image according to a preset masking rule;
a second drawing image synthesis module 1550, configured to synthesize the image code onto the mask image to obtain a second drawing image;
and a target drawing image synthesis module 1560, configured to perform second image synthesis processing according to the portrait image, the first drawing image and the second drawing image to obtain a target drawing image.
The specific implementation of the drawing image generating device in this embodiment is basically the same as the specific implementation of the above-mentioned drawing image generating method, and will not be described in detail here.
The embodiment of the invention also provides electronic equipment, which comprises:
at least one memory;
at least one processor;
at least one program;
the program is stored in the memory, and the processor executes the at least one program to implement the method for generating a drawing image according to the present invention. The electronic equipment can be any intelligent terminal including a mobile phone, a tablet personal computer, a personal digital assistant (Personal Digital Assistant, PDA for short), a vehicle-mounted computer and the like.
Referring to fig. 16, fig. 16 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 1601 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solution provided by the embodiments of the present invention;
the memory 1602 may be implemented in the form of a ROM (Read Only Memory), a static storage device, a dynamic storage device, or a RAM (Random Access Memory). The memory 1602 may store an operating system and other application programs; when the technical solutions provided in the embodiments of the present disclosure are implemented by software or firmware, the relevant program code is stored in the memory 1602 and called by the processor 1601 to perform the method for generating a drawing image of the embodiments of the present disclosure;
An input/output interface 1603 for implementing information input and output;
the communication interface 1604 is configured to implement communication interaction between this device and other devices, where communication may be implemented in a wired manner (e.g., USB, network cable) or in a wireless manner (e.g., mobile network, Wi-Fi, Bluetooth); and
a bus 1605 for transferring information between various components of the device (e.g., processor 1601, memory 1602, input/output interface 1603, and communication interface 1604);
wherein the processor 1601, the memory 1602, the input/output interface 1603 and the communication interface 1604 enable communication connection with each other inside the device via a bus 1605.
The embodiment of the application also provides a storage medium, which is a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above method for generating a drawing image.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The method, the device, the electronic equipment and the storage medium for generating the drawing image are characterized in that initial generation materials including initial position data, initial images and portrait images are obtained; generating an image code according to the initial position data, acquiring a base map template, synthesizing the base map template with the initial image to obtain a first pictorial image, generating a mask image according to a preset masking rule, synthesizing the image code on the mask image to obtain a second pictorial image, and finally obtaining a target pictorial image according to the portrait image, the first pictorial image and the second pictorial image. The embodiment of the application avoids uploading the initial generation materials to the synthesis server, thereby reducing the problem of low efficiency of synthesizing the drawing image caused by the transmission of the network layer large file data. When a large number of drawing synthesizing operations are needed, the process of drawing generation is dispersed to terminal equipment, and drawing image generation is performed by utilizing an image synthesizing function interface of the terminal, so that the synthesizing efficiency of the drawing images is improved.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the application are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing a program.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and are not thereby limiting the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A method for generating a drawing image, applied to a mobile terminal, wherein the mobile terminal comprises an image code generation interface, and the method comprises the following steps:
acquiring initial generation materials, wherein the initial generation materials comprise: initial position data, an initial image, and a portrait image;
performing character image conversion on the initial position data by using the image code generation interface to generate an image code;
carrying out first image synthesis processing according to a preset base map template and the initial image to obtain a first drawing image;
generating a mask image according to a preset masking rule;
synthesizing the image code onto the mask image to obtain a second poster image;
and carrying out second image synthesis processing according to the portrait image, the first poster image and the second poster image to obtain a target poster image.
2. The method for generating a pictorial image as defined in claim 1, wherein the image code generation interface comprises: a two-dimensional code filter interface and a color filter interface; the initial position data comprises: an initial position link; and the performing character image conversion on the initial position data by using the image code generation interface to generate an image code comprises the following steps:
acquiring the character string content of the initial position link;
inputting the character string content into the two-dimensional code filter interface to generate a two-dimensional code, so as to obtain a first image code;
and inputting the first image code into the color filter interface to perform color synthesis to obtain the image code.
3. The method for generating a pictorial image as defined in claim 1 wherein the image code generating interface comprises: a character string conversion interface; the initial position data includes: an initial position program interface; the step of performing character image conversion on the initial position data to generate an image code comprises the following steps:
acquiring a link character string of the initial position program interface;
and inputting the linked character string into the character string conversion interface to perform character image conversion, and generating the image code.
4. The method for generating a drawing image according to claim 1, wherein the performing a first image synthesis process according to a preset base pattern template and the initial image to obtain a first drawing image includes:
acquiring the image size and the image color style of the initial image;
acquiring the base map template from a preset resource file according to the image size and the image color style, wherein the base map template comprises a first preset area;
and synthesizing the initial image in the first preset area to obtain the first drawing image.
5. The method for generating a pictorial image according to any one of claims 1 to 4, wherein the preset masking rule includes: presetting mask size rules and preset mask transparency; the generating a mask image according to a preset masking rule includes:
acquiring first size information of the first pictorial image, wherein the first size information comprises: first height information and first width information;
generating mask height of the mask image according to the preset mask size rule and the first height information;
generating a mask width of the mask image according to the preset mask size rule and the first width information;
Generating the transparency of the mask image according to the preset mask transparency;
and generating the mask image according to the mask height, the mask width and the transparency.
6. The method for generating a pictorial image according to claim 5, wherein the synthesizing the image code onto the mask image to obtain a second pictorial image comprises:
acquiring position information of a second preset area of the mask image;
generating a synthetic region of the image code according to the position information of the second preset region;
and synthesizing the image code in the synthesis area to obtain the second pictorial image.
7. The method of claim 5, wherein the second image synthesis processing comprises: performing image stitching according to a preset stitching rule or performing image layering according to a preset layering rule;
and the performing second image synthesis processing according to the portrait image, the first poster image and the second poster image to obtain a target poster image comprises the following steps:
acquiring the portrait size of the portrait image, wherein the portrait size comprises portrait width information and portrait height information;
Adjusting the portrait width information according to the first pictorial image to obtain target width information;
adjusting the portrait height information according to the target width information to obtain target height information;
performing size adjustment on the portrait image according to the target height information and the target width information to obtain a preliminary portrait;
and carrying out second image synthesis processing on the preliminary portrait, the first pictorial image and the second pictorial image to obtain the target pictorial image.
8. A poster image generating apparatus for use with a mobile terminal, the mobile terminal including an image code generating interface, the apparatus comprising:
a generation material acquisition module, configured to acquire initial generation materials, the initial generation materials comprising: initial position data, an initial image, and a portrait image;
an image code generation module, configured to perform character image conversion on the initial position data to generate an image code;
a first poster image synthesis module, configured to perform first image synthesis processing according to a preset base map template and the initial image to obtain a first poster image;
a mask image generation module, configured to generate a mask image according to a preset masking rule;
a second poster image synthesis module, configured to synthesize the image code onto the mask image to obtain a second poster image;
and a target poster image synthesis module, configured to perform second image synthesis processing according to the portrait image, the first poster image and the second poster image to obtain a target poster image.
9. An electronic device comprising a memory storing a computer program and a processor that when executing the computer program implements the method of generating a pictorial image as claimed in any of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of generating a poster image according to any of claims 1 to 7.