CN111768478A - Image synthesis method and device, storage medium and electronic equipment - Google Patents

Image synthesis method and device, storage medium and electronic equipment

Info

Publication number
CN111768478A
CN111768478A (application CN202010668393.9A)
Authority
CN
China
Prior art keywords
image
player
face
face image
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010668393.9A
Other languages
Chinese (zh)
Other versions
CN111768478B (en)
Inventor
覃诗晴
王高垒
肖裕鑫
曾鸿苹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010668393.9A priority Critical patent/CN111768478B/en
Publication of CN111768478A publication Critical patent/CN111768478A/en
Application granted granted Critical
Publication of CN111768478B publication Critical patent/CN111768478B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/70 Game security or game management aspects
    • A63F 13/71 Game security or game management aspects using secure communication between game devices and game servers, e.g. by encrypting game data or authenticating players
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the application disclose an image synthesis method and apparatus, a storage medium, and an electronic device. The method relates to the deep learning direction in the field of artificial intelligence and includes: displaying a game task interface; when a trigger operation on a game task trigger control is detected, displaying a virtual object template selection interface corresponding to the game task, the interface including a plurality of candidate object templates; when a template selection operation on the virtual object template selection interface is detected, determining a target object template from the candidate object templates; displaying a facial image uploading interface and acquiring a player face image through it; and displaying a plurality of synthesized images, each of which combines a virtual object image corresponding to the player face image with a preset object image. With this scheme, multiple synthesized images containing different preset object images can be generated from a single player face image.

Description

Image synthesis method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image synthesis method, an image synthesis apparatus, a storage medium, and an electronic device.
Background
An online game is a sustainable multiplayer game that uses the internet as the transmission medium, the game operator's servers and players' computers as processing terminals, and the game client software as the window for information interaction, with the aim of providing entertainment, leisure, communication, and virtual achievement. To make a game more engaging, game tasks can be set up in the game to guide players through game activities in a purposeful way and to give players certain rewards.
Disclosure of Invention
The embodiment of the application provides an image synthesis method, an image synthesis device, a storage medium and electronic equipment.
The embodiment of the application provides an image synthesis method, which comprises the following steps:
displaying a game task interface, wherein the game task interface comprises a game task trigger control;
when the triggering operation aiming at the game task triggering control is detected, displaying a virtual object template selection interface corresponding to the game task, wherein the virtual object template selection interface comprises a plurality of candidate object templates;
determining a target object template from the plurality of candidate object templates when a template selection operation for the virtual object template selection interface is detected;
displaying a facial image uploading interface, and acquiring a facial image of a player through the facial image uploading interface;
and displaying a plurality of synthesized images, wherein each synthesized image is a combination of a virtual object image corresponding to the player face image and a preset object image.
Correspondingly, an embodiment of the present application further provides an image synthesizing apparatus, including:
the display module is used for displaying a game task interface, and the game task interface comprises a game task trigger control;
the game task triggering control comprises a triggering module, a display module and a control module, wherein the triggering module is used for displaying a virtual object template selection interface corresponding to a game task when triggering operation aiming at the game task triggering control is detected, and the virtual object template selection interface comprises a plurality of candidate object templates;
a selection module, configured to determine a target object template from the plurality of candidate object templates when a template selection operation for the virtual object template selection interface is detected;
the uploading module is used for displaying a facial image uploading interface and acquiring a facial image of a player through the facial image uploading interface;
and the presentation module is used for displaying a plurality of synthesized images, wherein each synthesized image is a combination of a virtual object image corresponding to the player face image and a preset object image.
At this time, the selection module may be specifically configured to display a plurality of candidate object templates corresponding to the selected gender when a gender selection operation on the virtual object template selection interface is detected, and to determine the target object template from the plurality of candidate object templates when a template selection operation on the plurality of candidate object templates is detected.
Optionally, in some embodiments, the uploading module may include a first display sub-module, an acquisition sub-module, and a generation sub-module, as follows:
the first display sub-module is used for displaying a facial image uploading interface;
the acquisition submodule is used for acquiring an initial face image uploaded by the player when it is detected that the player has authorized the image uploading function;
a generation submodule for generating a player face image based on the initial face image.
At this time, the generating submodule may be specifically configured to perform image compression processing on the initial face image to obtain a compressed image, perform image clipping processing on the compressed image to obtain a clipped image, and perform image effect processing on the clipped image to obtain a player face image.
Optionally, in some embodiments, the image synthesizing apparatus may further include a fusion module and a generation module, as follows:
the fusion module is used for carrying out face image fusion on the player face image and the template face image in the target object template to obtain a virtual object image;
and the generating module is used for generating a plurality of synthesized images based on the virtual object image and a plurality of preset object images.
Optionally, in some embodiments, the fusion module may include a feature point identification submodule and a pixel point fusion submodule, as follows:
the feature point identification submodule is used for performing feature point identification on the player face image to obtain player face feature points when it is detected that the proportion of effective face pixels among the face pixels in the player face image satisfies a preset condition;
and the pixel point fusion submodule is used for carrying out pixel point fusion on the player face image and the template face image in the target object template based on the player face characteristic point to obtain a virtual object image.
At this time, the feature point identification sub-module may be specifically configured to, when it is detected that the proportion of effective face pixels among the face pixels in the player face image satisfies a preset condition, perform feature point identification on the player face image to obtain initial face feature points, and perform feature point compensation on the player face image based on a preset face template and the initial face feature points to obtain the player face feature points.
In addition, the embodiment of the present application further provides a computer storage medium, where a plurality of instructions are stored, and the instructions are suitable for being loaded by a processor to execute the steps in any one of the image synthesis methods provided by the embodiments of the present application.
In addition, an electronic device is further provided in an embodiment of the present application, and includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps in any one of the image synthesis methods provided in the embodiment of the present application when executing the program.
According to the embodiments of the application, a game task interface can be displayed; when a trigger operation on the game task trigger control is detected, a virtual object template selection interface corresponding to the game task is displayed, the interface including a plurality of candidate object templates; when a template selection operation on the virtual object template selection interface is detected, a target object template is determined from the candidate object templates; a facial image uploading interface is displayed and a player face image is collected through it; and a plurality of synthesized images are displayed, each of which combines a virtual object image corresponding to the player face image with a preset object image. With this scheme, multiple synthesized images containing different preset object images can be generated from the player's face image.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic view of a scene of an image composition system provided in an embodiment of the present application;
FIG. 2 is a first flowchart of an image synthesis method provided in an embodiment of the present application;
FIG. 3 is a second flowchart of an image synthesis method provided by an embodiment of the present application;
FIG. 4 is a game task interface provided by an embodiment of the present application;
FIG. 5 is a virtual object template selection interface provided by an embodiment of the present application;
FIG. 6 is a facial image uploading interface provided by an embodiment of the present application;
FIG. 7 is an image confirmation interface provided by an embodiment of the present application;
FIG. 8 is an image sharing interface provided in an embodiment of the present application;
FIG. 9 is a first post-synthesis image provided by embodiments of the present application;
FIG. 10 is a second post-synthesis image provided by embodiments of the present application;
FIG. 11 is a third post-synthesis image provided by embodiments of the present application;
FIG. 12 is an image sharing interface provided in accordance with an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of a facial feature point template provided by an embodiment of the present application;
FIG. 14 is a third flowchart of an image synthesis method provided by an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an image synthesis apparatus provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides an image synthesis method, an image synthesis device, a storage medium and electronic equipment. Specifically, the image synthesis method according to the embodiment of the present application may be executed by an electronic device, where the electronic device may be a terminal or a server, and the terminal may be a mobile phone, a tablet Computer, a notebook Computer, a smart television, a wearable smart device, a Personal Computer (PC), and other devices. The terminal may include a client, which may be a video client or a browser client, and the server may be a single server or a server cluster formed by multiple servers.
For example, referring to fig. 1, in an example where the image synthesis method is executed by an electronic device, the electronic device may display a game task interface, and when a trigger operation for a game task trigger control is detected, display a virtual object template selection interface corresponding to the game task, where the virtual object template selection interface includes a plurality of candidate object templates, and when a template selection operation for the virtual object template selection interface is detected, determine a target object template from the plurality of candidate object templates, display a face image upload interface, collect a player face image through the face image upload interface, and display a plurality of synthesized images, where the synthesized images are synthesized images of a virtual object image corresponding to the player face image and a preset object image.
The image synthesis method provided by the embodiments of the application relates to the machine learning direction in the field of artificial intelligence. The embodiments of the application can use face image fusion technology from the field of machine learning to fuse the player face image with the template face image in the target object template, thereby improving both the quality and the efficiency of face image synthesis.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines have the capabilities of perception, reasoning, and decision making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level techniques. Artificial intelligence software technology mainly includes computer vision technology, machine learning/deep learning, and so on.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers can simulate or implement human learning behaviour so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and formal education learning.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiment of the application provides an image synthesis method, which can be executed by a terminal or a server, or can be executed by the terminal and the server together; in the embodiment of the present application, an image synthesis method is described as an example executed by a server, and as shown in fig. 2, a specific flow of the image synthesis method may be as follows:
201. and displaying the game task interface.
A game task is a means of purposefully guiding a player to play the game and giving the player certain rewards; game tasks can help players learn the game content, participate in game behaviours, experience game functions, complete game events, have fun, and so on. For example, a player can complete a group photo task by uploading a photo to be combined with a specific star, and obtain a corresponding game reward.
For example, as shown in fig. 4, the game task interface includes a game task trigger control for the "group photo reward"; through this control a player can enter and experience the game task, and the game task interface may also present other game rules or ways of playing the task.
In practical applications, for example, a game task interface such as that shown in fig. 4 may be displayed, including a "group photo reward" game task trigger control, which may take the form of a button that a player can click to enter the corresponding game task. The game task interface may also include instructions on how to earn chances to win a prize: "visiting the landing page earns 1 sharing chance", "generating a group photo earns 2 sharing chances", and "viewing a group photo shared by a friend earns 2 sharing chances"; the player can claim the corresponding reward by clicking the "claim" button next to each item. The game task interface can also include a sharing reward display: when the accumulated number of sharing chances reaches a specified amount, the player can receive the corresponding reward by clicking the "claim" button in the sharing reward display area of the game task interface.
202. And when the triggering operation aiming at the game task triggering control is detected, displaying a virtual object template selection interface corresponding to the game task.
The electronic device can generate the result the player wants based on the file uploaded by the player and the selected virtual object template. For example, the game task in this application may be a group photo task between a player and a star, i.e. a group photo of the player and the star is generated from the player face image uploaded by the player. The player's part of the group photo is obtained by fusing the player face image into a preset virtual object template, and the virtual object template provides information such as clothing, actions, and expressions for the player's part of the group photo. The game task can include a plurality of virtual object templates, and different virtual object templates correspond to different clothing, actions, expressions, or other attributes. As shown in fig. 5, the virtual object template selection interface includes three candidate object templates whose clothing, actions, and expressions are not all the same. The player can choose a desired target object template from the three candidate object templates through a template selection operation.
In practical applications, for example, when it is detected that the player clicks the "group photo reward" button in the game task interface shown in fig. 4, indicating that the player has performed a trigger operation on the game task trigger control, a virtual object template selection interface as shown in fig. 5 may be displayed, where the interface includes three candidate object templates corresponding to models with different costumes, actions, and expressions.
203. When a template selection operation for the virtual object template selection interface is detected, a target object template is determined from a plurality of candidate object templates.
In practical applications, for example, when it is detected that the player performs a template selection operation on the virtual object template selection interface shown in fig. 5, the target object template selected by the player may be determined from a plurality of candidate object templates.
In one embodiment, to increase the interest of the game, the player may first select the gender and then select the virtual object template according to the selected gender. Specifically, the step "determining a target object template from the plurality of candidate object templates when a template selection operation for the virtual object template selection interface is detected" may include:
when a gender selection operation for the virtual object template selection interface is detected, displaying a plurality of candidate object templates corresponding to the selected gender;
when a template selection operation for the plurality of candidate object templates is detected, a target object template is determined from the plurality of candidate object templates.
In practical applications, for example, when the game task is a group photo task, the player needs to upload a facial image, and the system fuses the facial image uploaded by the player into the target object template selected by the player. Because men and women differ in clothing, actions, and facial features, and face image fusion for a male face image differs from that for a female face image, letting the player select a gender first can, on the one hand, produce an image fusion result that better matches reality and, on the other hand, give the player a more personalized in-game experience.
As shown in fig. 5, the virtual object template selection interface includes two buttons, namely a "male" button and a "female" button, and when it is detected that the player clicks the "male" button, it is indicated that the gender selected by the player is male; when the player is detected to click on the "woman" button, it is indicated that the gender selected by the player is female. And if the gender selected by the player is male, displaying a plurality of candidate object templates corresponding to the male in the virtual object template selection interface. As shown in FIG. 5, when it is detected that the player clicks on the area corresponding to one of the candidate object templates, the selected candidate object template may be determined as the target object template.
In an embodiment, the user may also perform a gender selection operation by means of a pop-up window, for example, a pop-up window including a "male" button and a "female" button may be popped up when a trigger operation for the game task trigger control is detected, and the user may select gender by clicking the buttons.
204. And displaying a facial image uploading interface, and acquiring a facial image of the player through the facial image uploading interface.
For example, as shown in fig. 6, the facial image uploading interface includes three buttons, "upload", "self-timer", and "confirm". When it is detected that the player clicks the "upload" button, the player face image can be obtained by opening a local file picker or the like; when it is detected that the player clicks the "self-timer" button, the player face image is captured through the camera; when it is detected that the player clicks the "confirm" button, it indicates that the player has decided which image to upload.
In practical applications, for example, after the player selects the target object template, a facial image uploading interface as shown in fig. 6 may be displayed, and the player may upload the facial image of the player in various ways, such as image uploading or real-time shooting.
In one embodiment, to ensure the security of the game mission, the player may first be allowed to perform the setting of the image upload function. Specifically, the step of "displaying a facial image uploading interface and acquiring a facial image of a player through the facial image uploading interface" may include:
displaying a facial image uploading interface;
when a player is detected to authorize an image uploading function, acquiring an initial facial image uploaded by the player;
a player face image is generated based on the initial face image.
In practical applications, for example, after the player selects the target object template, a facial image uploading interface as shown in fig. 6 may be displayed. To ensure the security of the game task, before the player uploads an image, the player may be asked, for example through a pop-up window, to configure the image uploading authorization. If the player authorizes the image uploading function, the image uploading step can continue; if the player does not authorize it, indicating that the player does not wish the game to collect images, the player may be prompted that the game task cannot be continued.
If it is detected that the player has authorized the image uploading function, the player can upload an unprocessed initial face image in various ways, such as uploading a picture or shooting one in real time. After the initial face image has been uploaded successfully, the player can decide whether to use it: if the player uploaded a wrong image by mistake, or captured an image the player is not satisfied with, the player can simply choose not to use it and return to the image uploading step until a satisfactory image is uploaded. If the player uploads a satisfactory image, the initial face image uploaded by the player can be optimized to obtain the processed player face image. This approach improves the player's information security (a player who does not want the game to collect images can simply withhold authorization) and also safeguards the game experience (a player who is not satisfied with an uploaded image can choose not to use it until a satisfactory image has been uploaded).
In an embodiment, for example, the system may also review the initial face image uploaded by the player. If the system requires a frontal face image and detects that the player has uploaded a profile (side face) image, the system can mark the image as unqualified, prompt the player accordingly, and guide the player to upload an image again. Likewise, if the system requires an unobstructed image and detects that the uploaded face is covered by a mask, or if the system requires a clear image and detects that the uploaded image is blurred or unrecognizable, the system can mark the image as unqualified, prompt the player, and guide the player to repeat the image uploading step.
In one embodiment, since the image uploaded by the player may be an unprocessed image, which is not suitable for the direct face image fusion step, the initial face image may be subjected to image processing to obtain a usable player face image. Specifically, the step of "generating a player face image based on the initial face image" may include:
carrying out image compression processing on the initial face image to obtain a compressed image;
carrying out image interception processing on the compressed image to obtain an intercepted image;
and carrying out image effect processing on the intercepted image to obtain a face image of the player.
In practical applications, for example, after a player has determined to use an initial face image, the initial face image may be first subjected to image compression processing to obtain a compressed image, then the compressed image is subjected to size clipping processing according to a preset size to obtain a clipped image with a preset size, and then the clipped image is subjected to image effect processing to obtain a player face image, where the image effect processing may include image processing methods such as image enhancement, image restoration, image matching, image segmentation, and the like. The face image of the player is obtained after image processing, and the face image of the player can be ensured to be the face image capable of completing normal functional experience.
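The three processing steps above (compression, interception/cropping, and effect processing) can be illustrated with a short sketch. The following is a minimal example assuming the Pillow library; the target size, JPEG quality, and enhancement factors are illustrative assumptions and are not specified by this application.

```python
# Illustrative sketch of the preprocessing described above (compress, crop,
# apply image effects). Library choice (Pillow) and all numeric values are
# assumptions for demonstration only.
from io import BytesIO
from PIL import Image, ImageEnhance

TARGET_SIZE = (512, 512)  # assumed preset size

def preprocess_initial_face_image(path: str) -> Image.Image:
    image = Image.open(path).convert("RGB")

    # 1. Image compression: re-encode as JPEG at a lower quality.
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=75)
    buffer.seek(0)
    compressed = Image.open(buffer).convert("RGB")

    # 2. Image interception (cropping): centre-crop to a square, then
    #    resize to the preset size.
    w, h = compressed.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    cropped = compressed.crop((left, top, left + side, top + side))
    cropped = cropped.resize(TARGET_SIZE)

    # 3. Image effect processing: simple enhancement as a stand-in for the
    #    enhancement/restoration steps mentioned in the text.
    enhanced = ImageEnhance.Contrast(cropped).enhance(1.1)
    enhanced = ImageEnhance.Sharpness(enhanced).enhance(1.2)
    return enhanced

# Usage (hypothetical file name):
# player_face = preprocess_initial_face_image("initial_face.jpg")
```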
205. And displaying a plurality of synthesized images, wherein the synthesized images are synthesized images of the virtual object image and the preset object image corresponding to the face image of the player.
The preset object image is an image generated with reference to a preset object. For example, to meet players' wishes, a game task of taking a group photo with a star can be set up; the star's part of the final group photo is a preset object image based on a preset star. The preset object is not necessarily a real person: it can also be a character in a film or television series, an animated character, or even an animal, an object, a background, and so on, and the preset object image can be a photo, an animation, a video, or the like of the preset object.
The virtual object image corresponding to the player face image is an object image obtained by fusion using the virtual object template as a reference. For example, in the game task of a group photo between a player and a star, a group photo such as the one shown in fig. 9 is finally obtained; the right-hand part of that group photo can be called the virtual object image, which is based on the virtual object template but whose face is derived from the player face image.
In practical applications, for example, after the player face image has been successfully acquired, a synthesized image such as the one shown in fig. 9 can be displayed directly, and its format may be PNG24. The synthesized image includes a virtual object image corresponding to the player face image: the clothing, action, and expression of the virtual object image are determined by the virtual object template, and its face is the result of fusing the player face image with the face image of the virtual object template. Different synthesized images include different preset object images; fig. 9 includes the preset object image corresponding to preset object star 1, fig. 10 includes the preset object image corresponding to preset object star 2, and so on. In other words, different synthesized images can be regarded as group photos of the player with different stars, and different synthesized images can be matched with template animation effects of different styles. After uploading a single player face image, the player can directly obtain several group photos with different stars, select the group photo with a favourite star, and save or share it.
In one embodiment, for example, as shown in fig. 7, the generated synthesized image may be displayed on the interface, where the synthesized image is the combined image of the player and star 1, and if the player is not satisfied with the synthesized image, the "change star" button may be clicked, and at this time, another synthesized image is displayed on the interface, where the synthesized image is the combined image of the player and star 2, and if the player is not satisfied with the synthesized image, the "change star" button may be clicked continuously, and so on, until the synthesized image that the player is satisfied with is displayed, and at this time, the player may click the "ok" button, which indicates that the player has selected the final synthesized image.
In an embodiment, for example, the portions of the virtual object image and the preset object image in the synthesized image are not limited to static images, that is, the portions of the virtual object image and the preset object image in the synthesized image may also be dynamic images, or images with corresponding animation effects added, etc.
In an embodiment, specifically, the image synthesis method may further include:
carrying out face image fusion on the player face image and a template face image in the target object template to obtain a virtual object image;
and generating a plurality of synthesized images based on the virtual object image and a plurality of preset object images.
In practical applications, for example, after the player face image is acquired, the player face image and the template face image in the target object template may be subjected to face image fusion to obtain a virtual object image, so that the dress, movement and expression of the virtual object image are determined according to the virtual object template, and the face image of the virtual object image is the result of the face image of the player and the face image of the virtual object template being fused. And then generating a synthesized image according to the virtual object image and the preset object image, wherein the image positions of the virtual object image and the preset object image in the synthesized image, the background effect of the synthesized image and the like can be adjusted according to the preference of the user.
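As an illustration of how one virtual object image and several preset object images might be combined into synthesized images, the sketch below assumes a simple side-by-side RGBA composition with Pillow; the layout, file names, and the use of transparent PNG layers are assumptions, not details given in this application.

```python
# Illustrative sketch of building several synthesized images from one virtual
# object image and multiple preset object images. Positions and file names
# are assumed for demonstration.
from PIL import Image

def compose(virtual_object: Image.Image, preset_object: Image.Image,
            background: Image.Image) -> Image.Image:
    canvas = background.convert("RGBA").copy()
    preset_rgba = preset_object.convert("RGBA")
    virtual_rgba = virtual_object.convert("RGBA")
    # Assumed layout: preset object (the star) on the left half,
    # virtual object (the player's fused character) on the right half.
    canvas.paste(preset_rgba, (0, 0), preset_rgba)
    canvas.paste(virtual_rgba, (canvas.width // 2, 0), virtual_rgba)
    return canvas

# Hypothetical usage:
# virtual = Image.open("virtual_object.png")
# presets = [Image.open(f"star_{i}.png") for i in range(1, 4)]
# background = Image.open("template_background.png")
# synthesized = [compose(virtual, p, background) for p in presets]
```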
In an embodiment, for example, when facial image fusion is required, the client may request a facial image fusion interface, perform a facial image fusion step using a corresponding AI technique, generate a post-synthesis image, store the facial image synthesis result in a database, and then obtain the facial image synthesis result from the database.
In an embodiment, the face image fusion may be performed according to facial feature points, and specifically, the step of performing face image fusion on the player face image and a template face image in the target object template to obtain a virtual object image may include:
when detecting that the proportion of effective face pixel points in the face image of the player to the face pixel points meets a preset condition, carrying out feature point identification on the face image of the player to obtain face feature points of the player;
and based on the player face feature points, carrying out pixel point fusion on the player face image and the template face image in the target object template to obtain a virtual object image.
A face image, i.e. an image containing a face, typically includes both the face itself and background regions without the face; the pixels belonging to the face in the face image may be called face pixels.
Effective face pixels are the face pixels that can actually be used for face image fusion. For example, if the player uploads an image in which a mask is worn, the masked part of the image cannot be used for face image fusion and its pixels are not effective face pixels, whereas the eyes and eyebrows that are not covered by the mask can be used for face image fusion, and those pixels are called effective face pixels.
The feature points are feature points obtained by positioning key parts in the face image, for example, the feature points may include a plurality of eye feature points obtained by positioning eyes, a plurality of nose feature points obtained by positioning a nose, and the like, and the positions of the key parts such as the five sense organs of the face may be determined by using the feature points.
In practical application, the system needs to detect the face image of the player, and the implementation of the image synthesis method can be supported only when the proportion of effective face pixels in the face image of the player to the face pixels reaches a certain degree. For example, it may be preset that the ratio of the effective face pixels to the face pixels in the player face image reaches 50% to satisfy a preset condition, and when it is detected that the ratio reaches 50%, feature point recognition may be performed on the player face image to obtain a plurality of player face feature points. Because the template face image in the target object template also corresponds to a plurality of feature points, pixel point fusion can be carried out on the player face image and the template face image in the target object template according to the feature points of the two face images to obtain a virtual object image. As shown in fig. 13, the facial feature points may be marked and stored by tfjs-facemesh, so as to facilitate invocation.
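The gating logic described above (only proceeding to feature point identification when the proportion of effective face pixels meets the preset condition) can be sketched as follows. The 50% threshold follows the example in the text; the segmentation masks and the detect_face_landmarks helper are hypothetical placeholders for whatever face segmentation and landmark model is actually used (the text mentions tfjs-facemesh on the web side).

```python
# Sketch of the gating check: only run landmark detection when the proportion
# of effective (un-occluded) face pixels among all face pixels reaches the
# threshold. Masks and the landmark detector are assumed inputs/stand-ins.
import numpy as np

VALID_RATIO_THRESHOLD = 0.5  # the 50% example used in the text

def detect_face_landmarks(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: in practice this would call a face landmark
    # model and return an (N, 2) array of (x, y) feature points.
    raise NotImplementedError

def recognize_player_face_points(face_mask: np.ndarray,
                                 effective_mask: np.ndarray,
                                 image: np.ndarray) -> np.ndarray:
    """face_mask: boolean array, True where a pixel belongs to the face.
    effective_mask: boolean array, True where a face pixel is usable for
    fusion (e.g. not covered by a mask or sunglasses)."""
    face_pixels = int(face_mask.sum())
    effective_pixels = int((face_mask & effective_mask).sum())
    if face_pixels == 0 or effective_pixels / face_pixels < VALID_RATIO_THRESHOLD:
        raise ValueError("image not qualified: too much of the face is occluded")
    return detect_face_landmarks(image)
```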
In an embodiment, if it is detected that the proportion of the effective face pixels in the face image of the player does not reach 50%, it is indicated that the face image uploaded by the player is too much missing, and the subsequent face fusion operation cannot be performed, at this time, a prompt that the image is not qualified can be performed on the user, and the user is guided to perform the step of uploading the image again.
In an embodiment, the face image fusion may be performed by using a triangulation method, and specifically, the step "performing pixel point fusion on the player face image and the template face image in the target object template based on the player face feature point to obtain a virtual object image" may include:
based on the player face feature points, carrying out region division on the player face image to obtain a plurality of face regions;
determining a template face region corresponding to each face region in a template face image of the target object template;
and carrying out pixel point fusion on pixel points in corresponding areas in the face image of the player and the face image of the template of the target object template to obtain a virtual object image.
In practical applications, for example, the player face image may be divided into a plurality of triangular face regions according to a plurality of player face feature points, and correspondingly, the template face image of the target object template may be divided into a plurality of triangular template face regions, where the template face regions of the template face image and the face regions of the player face image are in one-to-one correspondence. Then, the pixel points in the corresponding face area can be fused according to a certain proportion to obtain a virtual object image.
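A common way to implement the triangulation-based fusion just described is to triangulate the player's feature points, map each triangle to the corresponding triangle of the template face, and mix the pixels inside each triangle at a fixed ratio. The sketch below assumes OpenCV and SciPy and a 50/50 mixing ratio; none of these specifics are mandated by this application.

```python
# Minimal sketch: divide both faces into corresponding triangles via the
# feature points, warp each template triangle onto the player's triangle,
# and mix pixels at a fixed ratio. Not optimized (warps the full image per
# triangle); blend ratio and libraries are assumptions.
import cv2
import numpy as np
from scipy.spatial import Delaunay

BLEND_RATIO = 0.5  # assumed mixing proportion

def fuse_faces(player_img: np.ndarray, player_pts: np.ndarray,
               template_img: np.ndarray, template_pts: np.ndarray) -> np.ndarray:
    """player_pts and template_pts are (N, 2) arrays of corresponding
    facial feature points in the two images."""
    fused = player_img.astype(np.float32).copy()
    # Triangulate once on the player's points; the same vertex indices define
    # the corresponding triangles in the template face image.
    triangles = Delaunay(player_pts).simplices
    for tri in triangles:
        src = template_pts[tri].astype(np.float32)  # triangle in template
        dst = player_pts[tri].astype(np.float32)    # matching triangle in player
        # Skip degenerate (zero-area) triangles.
        if abs(np.cross(dst[1] - dst[0], dst[2] - dst[0])) < 1e-6:
            continue
        # Warp the template triangle into the player's triangle location.
        warp_mat = cv2.getAffineTransform(src, dst)
        warped = cv2.warpAffine(template_img.astype(np.float32), warp_mat,
                                (player_img.shape[1], player_img.shape[0]))
        # Mask limited to this triangle.
        mask = np.zeros(player_img.shape[:2], dtype=np.float32)
        cv2.fillConvexPoly(mask, dst.astype(np.int32), 1.0)
        mask = mask[..., None]
        # Mix template and player pixels inside the triangle.
        fused = fused * (1 - mask * BLEND_RATIO) + warped * (mask * BLEND_RATIO)
    return fused.astype(np.uint8)
```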
In an embodiment, if the face image of the player is not a complete face, the feature points of the face image of the player may be supplemented, specifically, the step "when it is detected that a proportion of effective face pixels in the face image of the player to face pixels satisfies a preset condition, performing feature point recognition on the face image of the player to obtain the feature points of the face of the player" may include:
when detecting that the proportion of effective face pixel points in the face image of the player to the face pixel points meets a preset condition, carrying out feature point identification on the face image of the player to obtain initial face feature points;
and performing feature point compensation on the face image of the player based on a preset face template and the initial face feature points to obtain face feature points of the player.
In practical applications, for example, if the player uploads an image in which a mask is worn, complete feature points cannot be identified from the player face image. If it is detected that the proportion of effective face pixels among the face pixels in the player face image reaches 50%, feature point identification can still be performed on the player face image, yielding a number of initial face feature points. Because of the mask, the feature points of the mouth cannot be identified, i.e. the initial face feature points are incomplete. Then, based on the preset face template and the obtained initial face feature points, the missing feature points in the player face image can be filled in to obtain the complete set of player face feature points. With this feature point compensation, the game task can still proceed smoothly even when the player uploads an image in which the face is partially covered by sunglasses, a mask, or the like.
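One simple way to realize the feature point compensation described above is to fit a transform from the preset face template's points to the feature points that were detected, and then map the template's remaining points through that transform to fill the gaps. The sketch below uses a least-squares affine fit as an assumed model; the actual compensation method is not specified here.

```python
# Sketch of feature-point compensation: fill in the points that could not be
# detected (e.g. a mouth hidden by a mask) by fitting an affine transform from
# the preset face template's points to the detected points, then mapping the
# template's remaining points through it. The affine model and the data
# layout are assumptions for illustration.
import numpy as np

def compensate_feature_points(initial_pts: np.ndarray,
                              detected: np.ndarray,
                              template_pts: np.ndarray) -> np.ndarray:
    """initial_pts:  (N, 2) detected coordinates (rows for missing points are ignored)
    detected:      (N,) boolean mask, True where a point was actually detected
    template_pts:  (N, 2) coordinates of the same points in the preset face template
    Returns a complete (N, 2) set of player face feature points.
    Requires at least three non-collinear detected points for the fit."""
    src = template_pts[detected]
    dst = initial_pts[detected]
    # Fit x' = a*x + b*y + tx, y' = c*x + d*y + ty by least squares.
    A = np.hstack([src, np.ones((src.shape[0], 1))])          # (M, 3)
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)          # (3, 2)
    mapped = np.hstack([template_pts,
                        np.ones((template_pts.shape[0], 1))]) @ params
    full = initial_pts.copy()
    full[~detected] = mapped[~detected]  # fill only the missing points
    return full
```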
In an embodiment, in order to improve the spread of the game, a shareable identification code may also be provided and used to share the game. For example, the system may generate a synthesized image carrying an identification code; when a player shares the synthesized image, other players can learn the final effect of the game task and find the corresponding game task from the identification code on the synthesized image, thereby achieving game sharing.
For example, a player can long-press the synthesized image to save it to an album, and then click the "share and claim prize" button in the interface shown in fig. 8 to share the generated synthesized image. The shared image may include a game task control such as "I also want a group photo", which may be designed as a button or the like and which can guide other players to the home page of the game task, i.e. the interface shown in fig. 4.
In one embodiment, for example, as shown in fig. 12, the player may share the game task without generating the synthesized image, so that the shared image may also include an identification code through which other players may directly enter the home page shown in fig. 4.
In one embodiment, for example, as shown in fig. 14, if a player shares the game after a synthesized image has been generated, the sharing link carries the parameters of the face synthesis special-effect page; other players can view the synthesized image through the link and enter the home page of the game task directly through the "I also want a group photo" game task control to start the game task. If the player has not generated a synthesized image, the sharing link carries no parameters, and other players can enter the home page of the game task directly through the link and start the game task. In this way, a closed loop can be formed with the whole marketing experience scheme, so that other players can quickly see the page effect through the sharing link and are attracted to join the game experience.
With the image synthesis method described above, the player face image and the template face image in the target object template can be fused through machine learning, so that the synthesized image matches the scene and atmosphere of the target object template, the synthesized image suits the product being promoted, and the face in the synthesized image can show the expression and animation effects defined by the target object template. With this image synthesis method, a single face image fusion operation yields multiple synthesized images with different effects, and players can save or share their favourite synthesized images according to their own preferences.
As can be seen from the above, in the embodiments of the application a game task interface can be displayed; when a trigger operation on the game task trigger control is detected, a virtual object template selection interface corresponding to the game task is displayed, the interface including a plurality of candidate object templates; when a template selection operation on the virtual object template selection interface is detected, a target object template is determined from the candidate object templates; a facial image uploading interface is displayed, a player face image is acquired through it, and a plurality of synthesized images are displayed, each of which combines a virtual object image corresponding to the player face image with a preset object image. With this scheme, the player face image and the template face image in the target object template are fused through machine learning, so that the synthesized image matches the scene and atmosphere of the target object template, suits the product being promoted, and shows the expression and animation effects defined by the target object template. A single face image fusion operation yields multiple synthesized images with different effects, and players can save or share their favourite synthesized images. The game task can also be shared by means such as an identification code, so that other players can enter the game task from the identification code and start the game.
The method described in the foregoing embodiment will be described in further detail below by way of example with the image synthesis apparatus being specifically integrated in an electronic device.
Referring to fig. 3, a specific flow of the image synthesis method according to the embodiment of the present application may be as follows:
301. and displaying the game task interface.
In practical applications, for example, a player can enter the game task through the game application, so that a game task interface as shown in FIG. 4 is displayed; the player can also enter the game task interface through the "I also want a group photo" button in the interface shown in FIG. 11 that another player has shared; the player can also enter the game task interface by scanning the two-dimensional code shared by other players in the interface shown in FIG. 12.
302. When it is detected that the player clicks the "group photo reward" button, a virtual object template selection interface is displayed.
303. When it is detected that the player clicks the "male" button, three different male candidate object templates are displayed.
304. And when the condition that the player clicks the area corresponding to the first male candidate object template is detected, displaying a facial image uploading interface.
In practical applications, for example, the three different male candidate object templates may be a first male candidate object template, a second male candidate object template, and a third male candidate object template, each of the candidate object templates corresponds to a corresponding region, and a click on a region corresponding to which of the candidate object templates is clicked by the player indicates that the player has selected the candidate object template.
305. When the player authorizes the photographing function and the uploading image function, an initial facial image uploaded by the player is collected.
In practical applications, for example, if a player agrees to authorize the photographing function and the uploading image function, a corresponding image acquisition step can be performed; if the player does not agree with the authorized photographing function and the image uploading function, the game task cannot be completed.
In practical applications, for example, after the image uploaded by the player is acquired, whether the image is used or not may be confirmed to the player, and if the player determines to use the image, the image may be determined as an initial facial image; if the player does not determine to use the image, the player can return to display the facial image uploading interface to guide the player to upload the image again.
306. When it is detected that the player determines to use the initial face image, a plurality of synthesized images are generated from the initial face image.
In practical applications, for example, since the initial face image uploaded by the player may not meet the standard of face image synthesis, the initial face image may be subjected to image beautification to obtain the player face image, where the step of performing image beautification on the initial face image may be: and carrying out compression processing, size interception processing and photo beautification processing on the initial face image.
After the face image of the player is obtained, the face image of the player and the template face image in the first male candidate object template can be subjected to cross fusion, face pixel points of the two face images are mixed according to a certain proportion to obtain a fused face image, and the fused face image is combined with the first male candidate object template to obtain a part of a virtual object image in the synthesized image.
In order to realize the desire of the player to be combined with the stars, photos of a plurality of stars can be obtained in advance to serve as preset object images, then, based on the virtual object images and the preset object images, synthesized images are generated, and the synthesized images are the combination of the player and a certain star.
In an embodiment, for example, during face image fusion the player may upload an image in which the face is partially blocked by a mask, sunglasses, a scarf, or a hat, or in which part of the face cannot be recognized. As long as the recognizable face region in the acquired player face image accounts for more than 50% of the whole face, the face image fusion step is not affected: the feature points of the unrecognizable part can be filled in from the recognized part, and the face images can still be fused.
307. And determining a final synthesized image according to the image selection operation of the player, and displaying the synthesized image.
In practical applications, for example, the system may generate a plurality of synthesized images from the player face image, that is, group photos of the player with different stars, and the player can browse and select among them to determine the synthesized image he or she is satisfied with.
308. And sharing the synthesized image.
In practical application, for example, after the synthesized image that satisfies the player is determined, the player can long-press the page to save the synthesized image and can share it. Other players can view the final effect of the photo through the two-dimensional code on the synthesized image, enter the first page shown in fig. 4 through the "I also need to take the photo" button, and perform the photo task themselves. This sharing mode forms a closed loop with the whole marketing experience scheme, so that other players can conveniently and quickly learn about the page effect, which in turn attracts more players to participate in the experience.
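Purely as an illustration of the sharing step, the sketch below stamps a scannable two-dimensional code linking back to the activity page onto the final composite; the qrcode package, the page URL handling, and the corner placement are assumptions and are not prescribed by this embodiment.

```python
# Illustrative only: put a two-dimensional code that links back to the activity
# page onto the composite image that the player shares.
import qrcode
from PIL import Image

def stamp_share_code(composite_path, page_url, out_path, qr_tmp="share_qr.png"):
    qrcode.make(page_url).save(qr_tmp)   # write the code, then reload it with Pillow
    composite = Image.open(composite_path).convert("RGB")
    code = Image.open(qr_tmp).convert("RGB").resize((composite.width // 5,) * 2)
    margin = composite.width // 40
    composite.paste(code, (composite.width - code.width - margin,
                           composite.height - code.height - margin))
    composite.save(out_path)
```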
As can be seen from the above, in the embodiment of the present application, a game task interface may be displayed through an electronic device; when it is detected that the player clicks the "close-up reward" button, a virtual object template selection interface is displayed; when it is detected that the player clicks the "male" button, three different male candidate object templates are displayed; when it is detected that the player clicks the region corresponding to the first male candidate object template, a face image uploading interface is displayed; when the player authorizes the photographing function and the image uploading function, the initial face image uploaded by the player is collected; when it is detected that the player confirms use of the initial face image, a plurality of synthesized images are generated from the initial face image; a final synthesized image is determined according to the image selection operation of the player, the synthesized image is displayed, and the synthesized image is shared. The scheme can fuse the player face image with the template face image in the target object template through machine learning, so that the synthesized image fits the scene and atmosphere of the target object template and matches the product being promoted and publicized, and the face in the synthesized image can show the expression and dynamic effect corresponding to the target object template. With this image synthesis method, a plurality of synthesized images with different effects can be obtained from a single face image fusion operation, and players can save or share the synthesized images they like according to their own preferences. The game task can also be shared in modes such as an identification code, so that other players can enter the game task and start the game from the identification code.
In order to better implement the above method, accordingly, the present embodiment also provides an image synthesis apparatus, which may be integrated in an electronic device, and referring to fig. 15, the image synthesis apparatus includes a display module 151, a trigger module 152, a selection module 153, an upload module 154, and a presentation module 155, as follows:
the display module 151 is configured to display a game task interface, where the game task interface includes a game task trigger control;
a triggering module 152, configured to display a virtual object template selection interface corresponding to the game task when a triggering operation for the game task trigger control is detected, where the virtual object template selection interface includes a plurality of candidate object templates;
a selecting module 153, configured to determine, when a template selection operation for the virtual object template selection interface is detected, a target object template from the plurality of candidate object templates;
an upload module 154, configured to display a facial image upload interface, and acquire a player facial image through the facial image upload interface;
a presentation module 155, configured to display a plurality of synthesized images, where the synthesized images are synthesized images of a virtual object image corresponding to the player face image and a preset object image.
In an embodiment, the selecting module 153 may be specifically configured to:
when a gender selection operation for the virtual object template selection interface is detected, displaying a plurality of candidate object templates corresponding to the selected gender;
when a template selection operation for the plurality of candidate object templates is detected, a target object template is determined from the plurality of candidate object templates.
In one embodiment, the upload module 154 may include a first display sub-module, an acquisition sub-module, and a generation sub-module, as follows:
the first display sub-module is used for displaying a facial image uploading interface;
the acquisition submodule is used for acquiring an initial face image uploaded by the player when it is detected that the player has authorized the image uploading function;
a generation submodule for generating a player face image based on the initial face image.
In an embodiment, the generation submodule may be specifically configured to:
carrying out image compression processing on the initial face image to obtain a compressed image;
carrying out image interception processing on the compressed image to obtain an intercepted image;
and carrying out image effect processing on the intercepted image to obtain a face image of the player.
In an embodiment, the image synthesis apparatus may further include a fusion module and a generation module, as follows:
the fusion module is used for carrying out face image fusion on the player face image and the template face image in the target object template to obtain a virtual object image;
and the generating module is used for generating a plurality of synthesized images based on the virtual object image and a plurality of preset object images.
In an embodiment, the fusion module may include a feature point identification submodule and a pixel point fusion submodule, as follows:
the feature point identification submodule is used for performing feature point identification on the player face image when it is detected that the proportion of effective face pixel points to face pixel points in the player face image meets a preset condition, so as to obtain the player face feature points;
and the pixel point fusion submodule is used for carrying out pixel point fusion on the player face image and the template face image in the target object template based on the player face characteristic point to obtain a virtual object image.
In an embodiment, the feature point identifying submodule may be specifically configured to:
when detecting that the proportion of effective face pixel points in the face image of the player to the face pixel points meets a preset condition, carrying out feature point identification on the face image of the player to obtain initial face feature points;
and performing feature point compensation on the face image of the player based on a preset face template and the initial face feature points to obtain face feature points of the player.
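A hedged sketch of this feature point compensation is given below: landmarks the detector missed (marked as NaN) are filled in from a canonical preset face template after fitting an affine transform on the landmarks that were detected. The affine fit is an assumed realization; the embodiment does not prescribe a specific fitting method.

```python
# Sketch: fill in missing landmarks by mapping the corresponding points of a
# preset face template through an affine fit to the landmarks that were found.
import numpy as np

def compensate_landmarks(detected, template):
    """detected, template: N x 2 arrays; rows of `detected` that are NaN are missing."""
    found = ~np.isnan(detected).any(axis=1)

    # Fit template -> detected as an affine map on the visible landmarks.
    src = np.hstack([template[found], np.ones((found.sum(), 1))])   # K x 3
    M, *_ = np.linalg.lstsq(src, detected[found], rcond=None)       # 3 x 2

    filled = detected.copy()
    missing = ~found
    filled[missing] = np.hstack([template[missing],
                                 np.ones((missing.sum(), 1))]) @ M
    return filled
```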
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the embodiment of the present application, a game task interface may be displayed through the display module 151; when a triggering operation for the game task trigger control is detected, a virtual object template selection interface corresponding to the game task is displayed through the triggering module 152, the virtual object template selection interface including a plurality of candidate object templates; when a template selection operation for the virtual object template selection interface is detected, a target object template is determined from the plurality of candidate object templates through the selection module 153; a facial image uploading interface is displayed through the uploading module 154, and the player face image is collected through the facial image uploading interface; and a plurality of synthesized images are displayed through the presentation module 155, the synthesized images being synthesized images of the virtual object image corresponding to the player face image and the preset object images. The scheme can fuse the player face image with the template face image in the target object template through machine learning, so that the synthesized image fits the scene and atmosphere of the target object template and matches the product being promoted and publicized, and the face in the synthesized image can show the expression and dynamic effect corresponding to the target object template. With this image synthesis method, a plurality of synthesized images with different effects can be obtained from a single face image fusion operation, and players can save or share the synthesized images they like according to their own preferences. The game task can also be shared in modes such as an identification code, so that other players can enter the game task and start the game from the identification code.
The embodiment of the application also provides electronic equipment which can integrate any image synthesis device provided by the embodiment of the application.
For example, as shown in fig. 16, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown, specifically:
the electronic device may include components such as a processor 161 of one or more processing cores, memory 162 of one or more computer-readable storage media, a power supply 163, and an input unit 164. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 16 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 161 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 162 and calling data stored in the memory 162, thereby performing overall monitoring of the electronic device. Optionally, processor 161 may include one or more processing cores; preferably, the processor 161 may integrate an application processor, which primarily handles operating systems, player interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 161.
The memory 162 may be used to store software programs and modules, and the processor 161 executes various functional applications and data processing by running the software programs and modules stored in the memory 162. The memory 162 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the electronic device, and the like. Further, the memory 162 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 162 may also include a memory controller to provide the processor 161 with access to the memory 162.
The electronic device further comprises a power supply 163 for supplying power to the various components. Preferably, the power supply 163 is logically connected to the processor 161 via a power management system, so that charging, discharging, and power consumption management functions are implemented through the power management system. The power supply 163 may also include one or more of a DC or AC power source, a recharging system, power failure detection circuitry, a power converter or inverter, a power status indicator, and the like.
The electronic device may further include an input unit 164, and the input unit 164 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to player settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 161 in the electronic device loads the executable files corresponding to the processes of one or more application programs into the memory 162 according to the following instructions, and the processor 161 runs the application programs stored in the memory 162, so as to implement various functions as follows:
the method comprises the steps of displaying a game task interface, displaying a virtual object template selection interface corresponding to a game task when a triggering operation for triggering a control for the game task is detected, wherein the virtual object template selection interface comprises a plurality of candidate object templates, determining a target object template from the candidate object templates when the template selection operation for the virtual object template selection interface is detected, displaying a face image uploading interface, acquiring a face image of a player through the face image uploading interface, displaying a plurality of synthesized images, and the synthesized images are synthesized images of the virtual object image corresponding to the face image of the player and preset object images.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, the embodiment of the application may display a game task interface; when a triggering operation for the game task trigger control is detected, display a virtual object template selection interface corresponding to the game task, the virtual object template selection interface including a plurality of candidate object templates; when a template selection operation for the virtual object template selection interface is detected, determine a target object template from the plurality of candidate object templates; display a face image uploading interface and acquire the face image of the player through it; and display a plurality of synthesized images, the synthesized images being synthesized images of the virtual object image corresponding to the face image of the player and preset object images. The scheme can fuse the player face image with the template face image in the target object template through machine learning, so that the synthesized image fits the scene and atmosphere of the target object template and matches the product being promoted and publicized, and the face in the synthesized image can show the expression and dynamic effect corresponding to the target object template. With this image synthesis method, a plurality of synthesized images with different effects can be obtained from a single face image fusion operation, and players can save or share the synthesized images they like according to their own preferences. The game task can also be shared in modes such as an identification code, so that other players can enter the game task and start the game from the identification code.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application further provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the image synthesis methods provided in the embodiments of the present application. For example, the instructions may perform the steps of:
displaying a game task interface; displaying a virtual object template selection interface corresponding to the game task when a triggering operation for the game task trigger control is detected, wherein the virtual object template selection interface comprises a plurality of candidate object templates; determining a target object template from the candidate object templates when a template selection operation for the virtual object template selection interface is detected; displaying a face image uploading interface and acquiring the face image of the player through the face image uploading interface; and displaying a plurality of synthesized images, wherein the synthesized images are synthesized images of the virtual object image corresponding to the face image of the player and preset object images.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided in the various alternative implementations of the image composition aspect described above.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image synthesis method provided in the embodiments of the present application, the beneficial effects that can be achieved by any image synthesis method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image synthesis method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and the core idea of the present application. Meanwhile, for those skilled in the art, changes may be made to the specific implementation and the scope of application according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image synthesis method, comprising:
displaying a game task interface, wherein the game task interface comprises a game task trigger control;
when the triggering operation aiming at the game task triggering control is detected, displaying a virtual object template selection interface corresponding to the game task, wherein the virtual object template selection interface comprises a plurality of candidate object templates;
determining a target object template from the plurality of candidate object templates when a template selection operation for the virtual object template selection interface is detected;
displaying a facial image uploading interface, and acquiring a facial image of a player through the facial image uploading interface;
and displaying a plurality of synthesized images, wherein the synthesized images are synthesized images of the virtual object image and the preset object image corresponding to the face image of the player.
2. The image synthesis method according to claim 1, wherein determining a target object template from the plurality of candidate object templates when a template selection operation for the virtual object template selection interface is detected comprises:
when a gender selection operation for the virtual object template selection interface is detected, displaying a plurality of candidate object templates corresponding to the selected gender;
when a template selection operation for the plurality of candidate object templates is detected, a target object template is determined from the plurality of candidate object templates.
3. The image synthesis method according to claim 1, wherein displaying a facial image upload interface and capturing a player facial image through the facial image upload interface includes:
displaying a facial image uploading interface;
when a player is detected to authorize an image uploading function, acquiring an initial facial image uploaded by the player;
a player face image is generated based on the initial face image.
4. The image synthesis method of claim 3, wherein generating a player face image based on the initial face image comprises:
carrying out image compression processing on the initial face image to obtain a compressed image;
carrying out image interception processing on the compressed image to obtain an intercepted image;
and carrying out image effect processing on the intercepted image to obtain a face image of the player.
5. The image synthesis method according to claim 1, further comprising:
carrying out face image fusion on the player face image and a template face image in the target object template to obtain a virtual object image;
and generating a plurality of synthesized images based on the virtual object image and a plurality of preset object images.
6. The image synthesis method according to claim 5, wherein the obtaining of the virtual object image by face image fusion of the player face image and the template face image in the target object template includes:
when detecting that the proportion of effective face pixel points in the face image of the player to the face pixel points meets a preset condition, carrying out feature point identification on the face image of the player to obtain face feature points of the player;
and based on the player face feature points, carrying out pixel point fusion on the player face image and the template face image in the target object template to obtain a virtual object image.
7. The image synthesis method according to claim 6, wherein when it is detected that a ratio of valid face pixels to face pixels in the player face image satisfies a preset condition, performing feature point recognition on the player face image to obtain player face feature points, includes:
when detecting that the proportion of effective face pixel points in the face image of the player to the face pixel points meets a preset condition, carrying out feature point identification on the face image of the player to obtain initial face feature points;
and performing feature point compensation on the face image of the player based on a preset face template and the initial face feature points to obtain face feature points of the player.
8. An image synthesis apparatus, comprising:
the display module is used for displaying a game task interface, and the game task interface comprises a game task trigger control;
a triggering module, configured to display a virtual object template selection interface corresponding to the game task when a triggering operation for the game task trigger control is detected, wherein the virtual object template selection interface comprises a plurality of candidate object templates;
a selection module, configured to determine a target object template from the plurality of candidate object templates when a template selection operation for the virtual object template selection interface is detected;
the uploading module is used for displaying a facial image uploading interface and acquiring a facial image of a player through the facial image uploading interface;
and the display module is used for displaying a plurality of synthesized images, and the synthesized images are synthesized images of the virtual object image and the preset object image corresponding to the face image of the player.
9. A computer storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the image synthesis method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method according to any of claims 1 to 7 are implemented when the program is executed by the processor.
CN202010668393.9A 2020-07-13 2020-07-13 Image synthesis method and device, storage medium and electronic equipment Active CN111768478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010668393.9A CN111768478B (en) 2020-07-13 2020-07-13 Image synthesis method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010668393.9A CN111768478B (en) 2020-07-13 2020-07-13 Image synthesis method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111768478A true CN111768478A (en) 2020-10-13
CN111768478B CN111768478B (en) 2023-05-30

Family

ID=72725136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010668393.9A Active CN111768478B (en) 2020-07-13 2020-07-13 Image synthesis method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111768478B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001216531A (en) * 2000-02-02 2001-08-10 Nippon Telegr & Teleph Corp <Ntt> Method for displaying participant in three-dimensional virtual space and three-dimensional virtual space display device
JP2005196670A (en) * 2004-01-09 2005-07-21 Sony Corp Mobile terminal system and method for generating object
US20070082729A1 (en) * 2005-10-07 2007-04-12 Howard Letovsky Player skill equalizer for video games
CN101098241A (en) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
JP2014155564A (en) * 2013-02-14 2014-08-28 Namco Bandai Games Inc Game system and program
US20170304732A1 (en) * 2014-11-10 2017-10-26 Lego A/S System and method for toy recognition
CN105184249A (en) * 2015-08-28 2015-12-23 百度在线网络技术(北京)有限公司 Method and device for processing face image
CN105447480A (en) * 2015-12-30 2016-03-30 吉林纪元时空动漫游戏科技集团股份有限公司 Face recognition game interactive system
CN107680167A (en) * 2017-09-08 2018-02-09 郭睿 A kind of three-dimensional (3 D) manikin creation method and system based on user image
CN108771868A (en) * 2018-06-14 2018-11-09 广州市点格网络科技有限公司 Game virtual role construction method, device and computer readable storage medium
CN109675315A (en) * 2018-12-27 2019-04-26 网易(杭州)网络有限公司 Generation method, device, processor and the terminal of avatar model
CN109865283A (en) * 2019-03-05 2019-06-11 网易(杭州)网络有限公司 Virtual role face method of adjustment, device, electronic equipment and medium in game
CN110152308A (en) * 2019-06-27 2019-08-23 北京乐动派软件有限公司 A kind of more personages' group photo methods of game virtual image
CN110610127A (en) * 2019-08-01 2019-12-24 平安科技(深圳)有限公司 Face recognition method and device, storage medium and electronic equipment
CN110917612A (en) * 2019-11-13 2020-03-27 芯海科技(深圳)股份有限公司 Game interaction method and device, electronic equipment and storage medium
CN110992493A (en) * 2019-11-21 2020-04-10 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DESHPANDE, V.J. 等: "Augmented reality: Technology merging computer vision and image processing by experimental techniques(Article)", 《INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY AND EXPLORING ENGINEERING》 *
昔克等: "基于人脸检测的多媒体互动游戏系统的研究", 《电子设计工程》 *
李亚辉: "真实感人脸建模和动画研究概述", 《衡水学院学报》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022147774A1 (en) * 2021-01-08 2022-07-14 浙江大学 Object pose recognition method based on triangulation and probability weighted ransac algorithm
CN113476834A (en) * 2021-07-06 2021-10-08 网易(杭州)网络有限公司 Method and device for executing tasks in game, electronic equipment and storage medium
CN113694517A (en) * 2021-08-11 2021-11-26 网易(杭州)网络有限公司 Information display control method and device and electronic equipment
CN113642481A (en) * 2021-08-17 2021-11-12 百度在线网络技术(北京)有限公司 Recognition method, training method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111768478B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
KR102296906B1 (en) Virtual character generation from image or video data
CN111768478B (en) Image synthesis method and device, storage medium and electronic equipment
US11052321B2 (en) Applying participant metrics in game environments
US20220410007A1 (en) Virtual character interaction method and apparatus, computer device, and storage medium
US10632372B2 (en) Game content interface in a spectating system
US8012023B2 (en) Virtual entertainment
US20210001223A1 (en) Method and Apparatus for Displaying Virtual Pet, Terminal, and Storage Medium
CN110809175B (en) Video recommendation method and device
US20090202114A1 (en) Live-Action Image Capture
US11596872B2 (en) Automated player sponsorship system
CN107210949A (en) User terminal using the message service method of role, execution methods described includes the message application of methods described
CN113507621A (en) Live broadcast method, device, system, computer equipment and storage medium
KR102619465B1 (en) Confirm consent
CN112287848A (en) Live broadcast-based image processing method and device, electronic equipment and storage medium
KR20230148239A (en) Robust facial animation from video using neural networks
JP2022184845A (en) Moving image distribution system, moving image distribution method, and moving image distribution program
TW202123128A (en) Virtual character live broadcast method, system thereof and computer program product
KR20200085029A (en) Avatar virtual pitting system
CN116437137B (en) Live broadcast processing method and device, electronic equipment and storage medium
CN113610953A (en) Information processing method and device and computer readable storage medium
CN115222406A (en) Resource distribution method based on business service account and related equipment
CN115734017A (en) Video playing method, video generating method and related device
US20220053227A1 (en) Video distribution system, video distribution method, and video distribution program
US10092844B2 (en) Generation of vision recognition references from user selected content
CN111659114B (en) Interactive game generation method and device, interactive game processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40030750
Country of ref document: HK
GR01 Patent grant