CN111489419B - Poster generation method and system - Google Patents

Info

Publication number
CN111489419B
Authority
CN
China
Prior art keywords
biological
poster
image information
filling
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010594003.8A
Other languages
Chinese (zh)
Other versions
CN111489419A (en)
Inventor
陈万锋
李韶辉
谢统玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kuaizi Information Technology Co ltd
Original Assignee
Guangzhou Kuaizi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kuaizi Information Technology Co ltd
Priority to CN202010594003.8A
Publication of CN111489419A
Application granted
Publication of CN111489419B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training

Abstract

The embodiments of this specification provide a poster generation method and system. The method comprises the following steps: obtaining a poster template; obtaining one or more filling preset conditions corresponding to the poster template; acquiring image information of one or more biological objects; determining pose information of a biological object in the image information based on a pose recognition model; screening the image information based on the filling preset conditions and the pose information to determine filling image information; and filling at least part of the filling image information into the poster template based on the filling preset conditions to generate the poster.

Description

Poster generation method and system
Technical Field
The present disclosure relates to the field of poster generation technology, and in particular, to a method and a system for poster generation.
Background
A poster is a medium for publicizing and displaying the products or services of merchants or platforms, and its design affects how effectively those products or services are presented. For example, the proportion of the primary content relative to the poster background, and the location of that content within the poster, can affect the poster's promotional effect. A poster may be generated by combining an image of the object to be shown with a poster template; the size, dimensions, content, and the like of the image all affect the generated poster.
It is therefore desirable to provide a method and system for poster generation.
Disclosure of Invention
One aspect of the present description provides a method of poster generation. The method comprises the following steps: obtaining a poster template; obtaining one or more filling preset conditions corresponding to the poster template; acquiring image information of one or more biological objects; determining pose information of a biological object in the image information based on a pose recognition model; screening the image information based on the filling preset conditions and the pose information to determine filling image information; and filling at least part of the filling image information into the poster template based on the filling preset conditions to generate the poster.
In some embodiments, the determining pose information of the biological object in the image information based on the pose recognition model includes: determining biological key elements of the biological object in the image information based on the pose recognition model; and determining the pose information based on the biological key elements.
In some embodiments, the pose recognition model comprises a human skeleton key point recognition model, and the biological key elements comprise human skeleton key points.
In some embodiments, the filling preset conditions comprise a biological motion pose condition. The determining pose information of the biological object in the image information based on the pose recognition model comprises: determining the motion pose of the human body according to the human skeleton key points, wherein the motion pose comprises at least one of running, jumping, standing, lying, and sitting. The screening the image information based on the filling preset conditions and the pose information to determine filling image information includes: screening the image information based on the biological motion pose condition and the motion pose to determine the filling image information.
In some embodiments, the filling preset conditions comprise a biological emotional condition. The method further comprises: processing the image information based on a machine learning classification model to determine a biological emotional characteristic of a biological object in the image information; and screening the image information based on the biological emotional condition and the biological emotional characteristic to determine the filling image information.
In some embodiments, the filling preset condition further includes position information and/or display information of the image to be filled in the poster template.
In some embodiments, the filling at least part of the filling image information into the poster template based on the filling preset conditions to generate the poster includes: cropping the filling image information based on the filling preset conditions, and filling the cropped filling image information into the poster template.
Another aspect of the present description provides a poster generation system. The system comprises: a first acquisition module, configured to obtain a poster template and one or more filling preset conditions corresponding to the poster template; a second acquisition module, configured to acquire image information of one or more biological objects; a first determination module, configured to determine pose information of a biological object in the image information based on a pose recognition model; a second determination module, configured to screen the image information based on the filling preset conditions and the pose information to determine filling image information; and a poster generation module, configured to fill at least part of the filling image information into the poster template based on the filling preset conditions to generate the poster.
Another aspect of the specification provides a poster generation apparatus comprising a processor for performing a method as described above.
Another aspect of the present specification provides a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method as described above.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a poster generation system shown in accordance with some embodiments of the present description;
FIG. 2 is a schematic diagram of a poster generation method shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a poster generation method shown in accordance with some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Although various references are made herein to certain modules or units in a system according to embodiments of the present description, any number of different modules or units may be used and run on the client and/or server. The modules are merely illustrative and different aspects of the systems and methods may use different modules.
Flow charts are used in this description to illustrate the operations performed by a system according to embodiments of the present description. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Instead, the various steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
The embodiments of this specification provide a poster generation method. The method may be used to generate posters containing biological objects for the promotion or display of relevant information. For example, posters containing animals may be generated to promote animal protection. As another example, posters containing people may be generated to publicize a movie. In some embodiments, a poster may be generated by combining an image of the desired biological object with a corresponding poster template. However, the size, motion pose, visible biological part, and the like of the biological object in an image may not match those required by the portion of the poster template to be filled. The poster generation method provided by the embodiments of this specification can screen fillable images from images of biological objects based on one or more filling preset conditions corresponding to the poster template, and can further process the selected images by cropping, scaling, and the like to generate a higher-quality poster.
Fig. 1 is a schematic diagram of an application scenario of a poster generation system shown in accordance with some embodiments of the present description.
The poster generation system 100 may be an online platform that includes a server 110, a to-be-processed image 120, a poster 130, and a user terminal 140. The poster generation system 100 can be applied in a variety of scenarios to generate posters that meet users' requirements.
In a typical application scenario, the server 110 may obtain the to-be-processed image 120 and, after further processing it, generate the poster 130 that meets the user's requirements. For more details on poster generation, reference may be made to fig. 2 and fig. 3 and their associated descriptions, which are not repeated herein. In some embodiments, the server 110 may obtain the to-be-processed image 120 from the user terminal 140 and/or transmit the generated poster 130 to the user terminal 140 so that the user can obtain the corresponding poster. In some embodiments, the server 110 may display the generated poster 130 on the user terminal 140 so that more users can see the poster.
The server 110 may be used to process the to-be-processed image. In some embodiments, the server 110 may be a single server or a group of servers. The group of servers can be centralized or distributed (e.g., the server 110 can be a distributed system); it can be dedicated or shared with other devices or systems. In some embodiments, the server 110 may be local or remote. In some embodiments, the server 110 may be implemented on a cloud platform or provided in a virtual manner. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.
In some embodiments, the server 110 may include a processing device. The processing device may process data and/or information obtained from other devices or system components, and may execute program instructions based on such data, information, and/or processing results to perform one or more of the functions described herein. In some embodiments, the processing device may include one or more sub-processing devices (e.g., a single-core or multi-core processing device). By way of example only, the processing device may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The to-be-processed image 120 refers to an image that can be used to generate a poster. In some embodiments, the to-be-processed image 120 may include a biological image 120-1 and/or a poster template 120-2. The biological image 120-1 refers to an image containing biological objects of various types and in various poses. The format of the biological image 120-1 may include, but is not limited to, bmp, jpg, png, tif, gif, cdr, and the like. The poster template 120-2 may be an image embodying the user's requirements for the poster. From the poster template, information such as the position of the material in the poster, which part of the material is displayed (e.g., the part of a living being above the waist or above the neck), and the pose, expression, hue, saturation, and the like of the displayed material can be obtained. In some embodiments, the poster template may include a background portion and/or a foreground portion. Further description of the biological image and the poster template can be found in other parts of this specification (for example, fig. 2, fig. 3, and the related descriptions) and is not repeated herein.
The poster 130 is an image generated according to the user's needs. The poster 130 may be composed of elements such as pictures, text, colors, and spaces. The types of objects displayed in a poster may include, but are not limited to, living beings (e.g., humans, animals, plants, etc.) and non-living objects (e.g., calligraphy posters, text posters, etc.). For clarity, the embodiments of this specification take a biological object poster, and in particular a person poster, as an example.
The user terminal 140 may be used to input and/or receive information and/or data. For example, the user may input the to-be-processed image 120 and/or receive the generated poster 130 through the user terminal 140. The user terminal 140 refers to one or more terminal devices or software used by a user. In some embodiments, the user terminal 140 may be used by one or more users, which may include users who directly use the service as well as other related users. In some embodiments, the user terminal 140 may include one or any combination of a mobile device 140-1, a tablet computer 140-2, a laptop computer 140-3, or other devices having input and/or output capabilities. In some embodiments, other devices with input and/or output capabilities may include a dedicated terminal located in a public place. The above examples are intended only to illustrate the breadth of devices that can serve as the user terminal 140, not to limit its scope.
Fig. 2 is a schematic diagram of a poster generation method, shown in accordance with some embodiments of the present description. As shown in fig. 2, a poster generation method 200 may be implemented at a processing device, comprising:
step 210, obtaining the poster template and one or more filling preset conditions corresponding to the poster template. In some embodiments, step 210 may be performed by the first obtaining module.
The poster template may be synthesized with an image containing a biological object to generate a corresponding poster. In some embodiments, the processing device may retrieve the poster template from a database. The database may include various open source databases of web pages, social media, and the like. In some embodiments, the processing device may obtain the poster template from a user terminal or other data source, or generate the poster template in real-time in any feasible manner, which is not limited by this specification.
The filling preset conditions reflect the requirements on the biological object to be filled into the poster template. In some embodiments, the filling preset conditions may include a biological part condition and/or a biological motion pose condition for the content to be filled into the poster template. In some embodiments, the biological part may include, but is not limited to, one or more of the head, neck, shoulders, waist, limbs, etc. of the organism. The biological part condition reflects the requirement on which part of the biological object is contained in the poster to be generated. For example, the biological part condition may be that the poster contains the part of a person above the waist, the part of a person above the neck, a person's limbs, a person's whole body, or the like. The biological motion pose condition reflects the action requirement on the biological object in the poster to be generated. In some embodiments, the biological motion pose may include, but is not limited to, one or more of running, jumping, standing, lying, sitting, squatting, and the like. In some embodiments, the filling preset conditions may also include a biological emotional condition for the biological object to be filled into the poster template. The biological emotional condition reflects the emotional requirement on the biological object in the poster to be generated. In some embodiments, the biological emotion may include, but is not limited to, one or more of smiling, laughing, meditation, depression, coolness, and the like. In some embodiments, the filling preset conditions may further include position information and/or display information (such as hue, saturation, etc.) of the image to be filled in the poster template.
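Purely as an illustration (the record layout and field names below are hypothetical, not part of the disclosed system), the filling preset conditions described above can be gathered into a single record:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FillPresetCondition:
    """Hypothetical record of the requirements a filled image must satisfy."""
    biological_part: Optional[str] = None   # e.g. "above_waist", "above_neck"
    motion_pose: Optional[str] = None       # e.g. "running", "jumping", "sitting"
    emotion: Optional[str] = None           # e.g. "smiling", "meditation"
    position: Optional[Tuple[int, int, int, int]] = None  # target box (x, y, w, h)
    hue: Optional[float] = None             # display information such as hue
    saturation: Optional[float] = None      # and saturation

# A template that wants a running person shown from the waist up
cond = FillPresetCondition(biological_part="above_waist", motion_pose="running")
```

An unset field would simply mean the template imposes no requirement on that attribute, so the screening step can skip it.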
In some embodiments, the fill preset condition may be obtained simultaneously with the poster template, or separately from the poster template. For example, the processing device may acquire one or more filling preset conditions corresponding to the poster template based on the poster template after acquiring the poster template. In some embodiments, the preset filling condition may be obtained in any feasible manner, and the present specification is not limited thereto.
At step 220, image information of one or more biological objects is acquired. In some embodiments, step 220 may be performed by the second acquisition module.
In some embodiments, the image information of the biological object includes at least an image of the biological object, i.e. a biological image. In some embodiments, the image information of the biological object may further include one or more of a photographing time of the biological image, a photographing position of the biological image, format information of the biological image, pixel information of the biological image, size information of the biological image, and the like. In some embodiments, the processing device may obtain image information of the biological object from one or more of a database, a user terminal, social media, etc., or other open database. In some alternative embodiments, the processing device may obtain the image information of the biological object in any feasible manner or channel, which is not limited by this specification.
In step 230, pose information of the biological object in the image information is determined based on the pose recognition model. In some embodiments, step 230 may be performed by the first determining module.
The pose information may reflect the various poses of the biological object represented in the image, which may include, but are not limited to, a motion pose, a body posture, and the like. By determining the pose information of the biological object in the image information, the processing device may screen out images that meet the filling preset conditions for use in generating the corresponding poster.
In some embodiments, the processing device may determine the pose information of the biological object in the image information by information matching. In some embodiments, the processing device may determine the pose information based on the pose recognition model, where the pose recognition model comprises a machine learning model. In some embodiments, the processing device may determine biological key elements of the biological object in the image information based on the pose recognition model, and determine the pose information based on the biological key elements.
In some embodiments, when the biological object is a person, the biological image may be a person image, the pose recognition model may include a human skeleton key point recognition model, and the biological key elements may include human skeleton key points. When the biological object is a person, the specific process of determining the pose information may be as follows:
step 233, determining the human skeleton key points in the image information based on the human skeleton key point identification model.
The input of the human skeleton key point recognition model may be a person image, and the output may be the positions of the person's skeletal key points in the image. In some embodiments, the human skeleton key point recognition model may comprise a machine learning model. In some embodiments, the machine learning model may include, but is not limited to, a Convolutional Pose Machines (CPM) network, a DeeperCut network, an AlphaPose network, and the like.
In some embodiments, the human skeleton key point recognition model may be generated by training. For example, a plurality of person images can be obtained, the skeletal key points of the persons in the images can be labeled to generate training data, and the training data can be input into an initial human skeleton key point recognition model for supervised training to obtain the trained model.
Step 235, determining the motion pose of the human body according to the human skeleton key points.
In some embodiments, the processing device may determine the motion pose of the human body from the human skeleton key points in the image information, based on their position information in the person image. For example, the distance and/or angle between related skeletal key points can be determined from the position of each key point, and the motion pose of the human body in the biological image can then be inferred. The related skeletal key points may be points on the same limb of the human body. For a leg, for example, the related key points may be those corresponding to the ankle, knee, calf, and thigh; when the angle at the knee formed by the key points corresponding to the ankle and the thigh is 45 degrees or less, the motion pose of the human body may be considered to be running.
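The angle test described above can be sketched as follows (a minimal illustration: the keypoint coordinates are invented, and the 45-degree threshold is the one given in this paragraph):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by the segments b-a and b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))  # clamp rounding error

# Hypothetical thigh, knee, and ankle keypoints for a sharply bent leg
thigh, knee, ankle = (0.0, 0.0), (1.0, 1.0), (0.5, 0.2)
knee_angle = joint_angle(thigh, knee, ankle)
is_running = knee_angle <= 45  # threshold taken from the description above
```

A straight leg gives an angle near 180 degrees at the knee, so such a sample would not be classified as running by this test.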
In some alternative embodiments, the processing device may determine the motion pose of the human body in other reasonable manners; for example, the motion pose of the human body in the person image may be recognized directly by a machine learning model, which is not limited in this specification.
Step 240, screening the image information based on the filling preset conditions and the pose information. In some embodiments, step 240 may be performed by the second determination module.
The processing device may screen the acquired image information based on the filling preset conditions and the pose information of the biological object, and determine the filling image information. In some embodiments, the processing device may select image information that satisfies one or more filling preset conditions as the filling image information. For example, the processing device may select image information whose pose information satisfies the biological motion pose condition as the filling image information; alternatively, candidate image information whose pose information satisfies the biological motion pose condition may be screened out first, and the filling image information satisfying the biological part condition may then be determined from the candidates.
In some embodiments, the biological motion pose condition may include one or more requirements on the motion pose, the magnitude of the pose, and the like of the biological object.
In some embodiments, the processing device may determine the filling image information by matching the pose information of the biological object in the image information against the filling preset conditions. For example, based on the biological motion pose condition, image information whose pose information matches or is close to the preset biological motion pose may be screened out as the filling image information most suitable for generating the poster. By screening the images against the filling preset conditions, the generated poster can better meet the requirements and be more attractive.
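The screening described above amounts to a plain filter over the acquired image information. A sketch, assuming a hypothetical layout in which each entry carries the pose recognized in step 230:

```python
def screen_by_pose(image_infos, required_pose):
    """Keep only the entries whose recognized pose matches the condition."""
    return [info for info in image_infos if info.get("pose") == required_pose]

# Made-up candidate entries; "pose" would come from the pose recognition model
candidates = [
    {"path": "a.jpg", "pose": "running"},
    {"path": "b.jpg", "pose": "sitting"},
    {"path": "c.jpg", "pose": "running"},
]
selected = screen_by_pose(candidates, "running")  # keeps a.jpg and c.jpg
```

The same shape of filter works for any of the other conditions (biological part, emotion, and so on) by comparing a different key.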
In some alternative embodiments, the processing device may further screen the filling image information based on other preset conditions, for example, display information or biological emotional conditions (see fig. 3 and the related description for details), which is not limited by the present specification.
Step 250, generating the poster based on the filling preset conditions and the filling image information. In some embodiments, step 250 may be performed by the poster generation module.
The processing device may fill at least part of the filling image information into the poster template based on the filling preset conditions to generate the corresponding poster. For example, the screened filling image information can be filled into the poster template in its entirety; alternatively, the corresponding portion of the image information may be selected based on the biological part condition or other preset conditions and filled into the poster template. In some embodiments, the processing device may determine a filling image satisfying the filling preset conditions based on the images obtained in step 233, step 235, or step 240, and fill that image into the poster template to generate the corresponding poster. For example, a biological image may be filled into the foreground portion of the poster template. In some embodiments, the processing device may match the filling image information to the poster template by post-processing it and filling the processed result into the poster template. Post-processing may include, but is not limited to, one or more of rotating, cropping, zooming in, zooming out, and the like.
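As a sketch of one common cover-and-crop strategy (the disclosure names rotating, cropping, and zooming but does not fix an algorithm, so the arithmetic below is an illustrative assumption), a filler image can be scaled until it covers the template's target region and the overflow center-cropped:

```python
def fit_cover(img_w, img_h, box_w, box_h):
    """Return (scale, crop_x, crop_y) that make an image cover a target box.

    Scale the img_w x img_h image by `scale`, then crop a box_w x box_h
    window whose top-left corner is at (crop_x, crop_y) in scaled pixels.
    """
    scale = max(box_w / img_w, box_h / img_h)  # cover the box, no letterboxing
    new_w, new_h = img_w * scale, img_h * scale
    crop_x = (new_w - box_w) / 2               # center the crop horizontally
    crop_y = (new_h - box_h) / 2               # and vertically
    return scale, crop_x, crop_y

# A hypothetical 4000x3000 photo filled into a 1000x1500 poster region
scale, cx, cy = fit_cover(4000, 3000, 1000, 1500)
```

Using `max` of the two ratios guarantees both dimensions of the region are covered; using `min` instead would letterbox the image inside the region.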
FIG. 3 is an exemplary flow chart of a poster generation method shown in accordance with further embodiments of the present description.
At step 310, a biological emotional condition is obtained.
A biological emotion is an outward expression of the organism's subjective experience and can reflect its emotional state. For example, a person's emotions may include happiness, sadness, anger, low spirits, surprise, and the like. In some embodiments, a biological emotion may convey the mental state of the organism and affect those who see it. A biological emotional condition refers to a requirement on the biological emotion. For example, if a poster calls for a person with an excited emotion, the excited emotion is the poster's requirement on the biological emotion, i.e., the poster's biological emotional condition. In some embodiments, the processing device may obtain the biological emotional condition from the user terminal. In some embodiments, the processing device may obtain the biological emotional condition from a web page, from the poster template, or in any other feasible manner, which is not limited by this specification. For more on biological emotional conditions, see fig. 2 and the related description, which are not repeated herein.
At step 320, a biological emotional characteristic of the biological object is determined based on the machine learning classification model.
In some embodiments, the machine learning classification model may include, but is not limited to, a Logistic Regression (LR) model, a Decision Tree (DT) model, a Support Vector Machine (SVM) model, a Naive Bayes Model (NBM), and the like. Preferably, the machine learning classification model may be a convolutional neural network. In some embodiments, the input of the machine learning classification model may be the image information of the biological object, and the output may be the biological emotional characteristic of the biological object. The biological emotional characteristic may reflect the emotion type of the biological object, e.g., crying, smiling, or laughing. In some embodiments, the emotion type of the biological object may be obtained directly by inputting the biological image into the machine learning classification model.
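Whatever classifier family is chosen, its raw class scores must ultimately be mapped to one of the emotion types. A minimal sketch of that last step (the label set and the scores below are invented for illustration):

```python
import math

EMOTIONS = ["crying", "smiling", "laughing"]  # hypothetical label set

def softmax(scores):
    """Turn raw classifier scores into probabilities that sum to 1."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(scores):
    """Return the emotion label with the highest probability."""
    probs = softmax(scores)
    return EMOTIONS[probs.index(max(probs))]

label = predict_emotion([0.3, 2.1, -0.5])  # highest score -> "smiling"
```

The predicted label is what the screening in step 330 compares against the biological emotional condition.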
The machine learning classification model may be obtained by training on labeled samples. For example, images showing various biological emotions may be used as sample data; the biological emotional features in the images are labeled to generate training data, and the labeled training data are input into an initial machine learning model for training, yielding the trained machine learning classification model.
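The training step above can be sketched with a toy stand-in for the CNN: a from-scratch logistic regression fitted on labeled emotion feature vectors. The feature names, labels, and hyperparameters below are illustrative assumptions, not specified by the patent.

```python
import math

def train_emotion_classifier(samples, lr=0.5, epochs=200):
    """Fit a tiny from-scratch logistic regression on labeled feature
    vectors (label 1 = "smiling", 0 = "crying")."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_emotion(model, x):
    """Map a feature vector to an emotion label with the fitted model."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "smiling" if z > 0 else "crying"

# Hypothetical features: (mouth_curvature, eye_openness), hand-labeled.
data = [([0.9, 0.7], 1), ([0.8, 0.9], 1), ([-0.8, 0.2], 0), ([-0.9, 0.1], 0)]
model = train_emotion_classifier(data)
print(predict_emotion(model, [0.85, 0.8]))   # → smiling
```

A production system would replace this with a convolutional network trained on raw images, as the text describes; only the label-and-fit workflow is common to both.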
Step 330: determining the filling image information based on the biological emotional condition and the biological emotional characteristic.
The processing device may screen, from the acquired plurality of image information and based on the biological emotional characteristic of the biological object, the image information whose biological emotional characteristic satisfies the preset biological emotional condition as the filling image information. For example, the biological emotional characteristic in a piece of image information may be compared with the preset biological emotional requirement, and if the two match, that image information is determined to be filling image information. By screening the image information in this way, image information whose biological emotion is expressed more accurately can be obtained, and a more attractive poster that better meets the requirements can be generated.
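The screening step reduces to matching each image's detected emotional characteristic against the preset condition. A minimal sketch, with hypothetical field names (the patent does not specify a data schema):

```python
def filter_by_emotion(images, required_emotion):
    """Keep only images whose detected emotional characteristic matches
    the poster's biological emotional condition.

    `images` is a list of dicts like {"id": ..., "emotion": ...}, where
    "emotion" is the label produced by the classification model.
    """
    return [img for img in images if img["emotion"] == required_emotion]

candidates = [
    {"id": "a.jpg", "emotion": "smiling"},
    {"id": "b.jpg", "emotion": "crying"},
    {"id": "c.jpg", "emotion": "smiling"},
]
fills = filter_by_emotion(candidates, "smiling")
print([img["id"] for img in fills])   # → ['a.jpg', 'c.jpg']
```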
It should be noted that the above-mentioned embodiments are only described for the purpose of illustration and description, and do not limit the applicable scope of the present application. Various modifications and alterations to methods 200 and 300 will be apparent to those skilled in the art in light of the present disclosure. However, such modifications and variations are intended to be within the scope of the present application.
In some embodiments, a poster generation system (such as poster generation system 100) may include a first acquisition module, a second acquisition module, a first determination module, a second determination module, and a poster generation module.
The first acquisition module may be configured to acquire the poster template and to acquire one or more filling preset conditions corresponding to the poster template. In some embodiments, the filling preset conditions may include one or more of a biological motion posture condition, a biological position condition, a biological emotional condition, and the like. In some embodiments, the filling preset conditions may further include position information and/or display information of the image to be filled in the poster template.
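One way to represent such a filling preset condition is a small record type. The schema below is an assumption for illustration only, since the patent specifies the kinds of conditions but not a data format:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FillPresetCondition:
    """One fill slot of a poster template (illustrative field names)."""
    motion_pose: Optional[str] = None    # e.g. "running", "jumping"
    emotion: Optional[str] = None        # e.g. "smiling"
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h) in template
    display_size: Optional[Tuple[int, int]] = None      # target pixel size

slot = FillPresetCondition(motion_pose="jumping", emotion="smiling",
                           region=(40, 60, 300, 400))
print(slot.motion_pose, slot.emotion)   # → jumping smiling
```

Unset fields default to `None`, matching the "one or more of" phrasing: a template slot may constrain pose, emotion, both, or neither.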
The second acquisition module may be for acquiring image information of one or more biological objects.
The first determination module may be configured to determine posture information of the biological object in the image information based on a gesture recognition model. In some embodiments, the first determination module may determine a biological key element of the biological object in the image information based on the gesture recognition model and determine the posture information of the biological object based on the biological key element. In some embodiments, the gesture recognition model may include a human skeleton key point recognition model, and the biological key elements may include human skeleton key points. In some embodiments, the first determination module may determine the motion posture of the human body from the human skeleton key points. The motion posture may include at least one of running, jumping, standing, lying, and sitting.
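A crude, purely geometric stand-in for the keypoint-to-pose step might compare the on-screen extents of a few skeleton joints. The joint names and thresholds below are assumptions for illustration; a real system would use a trained key point recognition model as described above:

```python
def classify_motion_pose(kp):
    """Heuristically map skeleton keypoints to a motion posture.

    `kp` maps joint names to (x, y) image coordinates, y increasing
    downward. Joint names and ratio thresholds are illustrative.
    """
    head_x, head_y = kp["head"]
    hip_x, hip_y = kp["hip"]
    ankle_x, ankle_y = kp["ankle"]
    vertical = ankle_y - head_y          # body height on screen
    horizontal = abs(ankle_x - head_x)   # body width on screen
    if horizontal > vertical:
        return "lying"                   # body axis closer to horizontal
    torso = hip_y - head_y
    legs = ankle_y - hip_y
    if legs < 0.6 * torso:
        return "sitting"                 # legs foreshortened on screen
    return "standing"

standing = {"head": (100, 10), "hip": (100, 60), "ankle": (100, 120)}
lying = {"head": (10, 100), "hip": (70, 100), "ankle": (130, 102)}
print(classify_motion_pose(standing), classify_motion_pose(lying))  # → standing lying
```

Distinguishing running and jumping would additionally require limb angles or motion cues, which is why the patent delegates this step to a learned model.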
The second determination module may be configured to filter the image information based on the filling preset conditions and the posture information to determine the filling image information. In some embodiments, the second determination module may filter the image information to determine the filling image information based on the biological motion posture condition and the motion posture of the human body. In some embodiments, the second determination module may process the image information based on the machine learning classification model to determine a biological emotional characteristic of the biological object in the image information, and filter the image information based on the biological emotional condition and the biological emotional characteristic to determine the filling image information.
The poster generation module may be configured to fill at least part of the filling image information into the poster template based on the filling preset conditions to generate the poster. In some embodiments, the poster generation module may crop the filling image information based on the filling preset conditions and fill the cropped filling image information into the poster template to generate the poster.
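The cropping step can be sketched as pure coordinate math: compute the largest centered crop that matches the template slot's aspect ratio, then crop and paste with an image library (e.g. Pillow's `Image.crop`/`Image.paste`). Only the box computation is shown here; the function name and (left, top, right, bottom) convention are illustrative:

```python
def center_crop_box(img_w, img_h, slot_w, slot_h):
    """Largest centered crop of an (img_w x img_h) image matching the
    aspect ratio of a (slot_w x slot_h) template slot, as
    (left, top, right, bottom)."""
    target = slot_w / slot_h
    if img_w / img_h > target:           # image too wide: trim the sides
        crop_w = round(img_h * target)
        left = (img_w - crop_w) // 2
        return (left, 0, left + crop_w, img_h)
    crop_h = round(img_w / target)       # image too tall: trim top/bottom
    top = (img_h - crop_h) // 2
    return (0, top, img_w, top + crop_h)

# A 1200x800 photo destined for a 300x400 (portrait) slot loses its sides.
print(center_crop_box(1200, 800, 300, 400))   # → (300, 0, 900, 800)
```

After cropping, the region would be resized to the slot's display size and composited into the template at the slot's position information.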
More descriptions about the modules can be found in other places of this specification (for example, fig. 2, fig. 3 and their related descriptions), and are not repeated here. It should be noted that the above description of the poster generation system and its modules is merely for convenience of description and does not limit the present description to the scope of the illustrated embodiments.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present specification may be illustrated and described in terms of several patentable species or situations, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of this specification may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present specification may be embodied as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python; a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; a dynamic programming language such as Python, Ruby, or Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single disclosed embodiment.
Some embodiments use numerals to describe quantities of components, attributes, and the like. It should be understood that such numerals used in the description of the embodiments are, in some instances, qualified by the modifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiment. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
All patents, patent applications, patent application publications, and other materials, such as articles, books, specifications, publications, and documents, cited in this specification are hereby incorporated by reference in their entirety. Excepted are application history documents that are inconsistent with or conflict with the contents of this specification, as well as documents (currently or later appended to this specification) that would limit the broadest scope of the claims of this specification. It should be understood that if the descriptions, definitions, and/or use of terms in the materials accompanying this specification are inconsistent with or contrary to the contents of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (6)

1. A method of poster generation, the method being performed by at least one processor, the method comprising:
obtaining a poster template;
acquiring one or more filling preset conditions corresponding to the poster template, wherein the filling preset conditions comprise biological motion posture conditions and biological emotion conditions;
acquiring image information of one or more biological objects;
determining biological key elements of the biological object in the image information based on a gesture recognition model; determining pose information based on the bio-key element; the gesture recognition model comprises a human skeleton key point recognition model; the biological key elements comprise human skeleton key points; determining the motion posture of the human body according to the human skeleton key points, wherein the motion posture comprises at least one of running, jumping, standing, lying and sitting;
processing the image information based on a machine learning classification model to determine a biological emotional characteristic of a biological object in the image information;
screening the image information based on the biological motion posture condition, the posture information, the biological emotion condition and the biological emotion characteristic to determine filled image information;
and filling at least part of the filling image information into the poster template based on the filling preset condition to generate the poster.
2. The method of claim 1, wherein the filling preset conditions further comprise position information and/or display information of an image to be filled in the poster template.
3. The method of claim 1, wherein the populating at least a portion of the populated image information into the poster template based on the populated preset conditions to generate the poster includes:
and cutting the filling image information based on the filling preset condition, and filling the cut filling image information into the poster template.
4. A system for poster generation, the system comprising:
the device comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is used for obtaining a poster template and obtaining one or more preset filling conditions corresponding to the poster template, and the preset filling conditions comprise biological motion posture conditions and biological emotion conditions;
a second acquisition module for acquiring image information of one or more biological objects;
a first determination module, configured to determine a biological key element of a biological object in the image information based on a gesture recognition model; determining pose information based on the bio-key element; the gesture recognition model comprises a human skeleton key point recognition model; the biological key elements comprise human skeleton key points; determining the motion posture of the human body according to the human skeleton key points, wherein the motion posture comprises at least one of running, jumping, standing, lying and sitting;
a first determination module, further configured to process the image information based on a machine learning classification model to determine a biological emotional characteristic of a biological object in the image information;
the second determination module is used for screening the image information based on the biological motion posture condition, the posture information, the biological emotion condition and the biological emotion characteristic to determine filled image information;
and the poster generating module is used for filling at least part of the filling image information into the poster template based on the filling preset condition to generate the poster.
5. An apparatus for poster generation comprising a processor for performing the method of any of claims 1-3.
6. A computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method of any one of claims 1 to 3.
CN202010594003.8A 2020-06-28 2020-06-28 Poster generation method and system Active CN111489419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010594003.8A CN111489419B (en) 2020-06-28 2020-06-28 Poster generation method and system


Publications (2)

Publication Number Publication Date
CN111489419A CN111489419A (en) 2020-08-04
CN111489419B true CN111489419B (en) 2021-01-08

Family

ID=71813431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010594003.8A Active CN111489419B (en) 2020-06-28 2020-06-28 Poster generation method and system

Country Status (1)

Country Link
CN (1) CN111489419B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112349096A (en) * 2020-10-28 2021-02-09 厦门博海中天信息科技有限公司 Method, system, medium and equipment for intelligently identifying pedestrians on road
CN112541954B (en) * 2020-12-22 2024-03-26 畅捷通信息技术股份有限公司 Method for intelligently producing poster according to graphic features
CN117689782A (en) * 2024-02-02 2024-03-12 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for generating poster image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976203B (en) * 2016-04-28 2020-10-30 广州筷子信息科技有限公司 Automatic generation method and device of internet advertisement originality
CN109903076A (en) * 2019-01-16 2019-06-18 北京三快在线科技有限公司 A kind of ad data generation method, system, electronic equipment and storage medium
CN110598017B (en) * 2019-08-29 2020-08-14 杭州光云科技股份有限公司 Self-learning-based commodity detail page generation method
CN111242691A (en) * 2020-01-14 2020-06-05 广东博智林机器人有限公司 Method and device for generating advertisement poster, storage medium and terminal equipment

Also Published As

Publication number Publication date
CN111489419A (en) 2020-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method and system for generating posters

Effective date of registration: 20220826

Granted publication date: 20210108

Pledgee: Guangzhou Ti Dong Technology Co.,Ltd.

Pledgor: GUANGZHOU KUAIZI INFORMATION TECHNOLOGY Co.,Ltd.

Registration number: Y2022440000222
