CN113643417A - Image adjusting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113643417A
CN113643417A
Authority
CN
China
Prior art keywords
face image
image
face
attribute
game
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110944503.4A
Other languages
Chinese (zh)
Other versions
CN113643417B (en)
Inventor
周红花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110944503.4A priority Critical patent/CN113643417B/en
Publication of CN113643417A publication Critical patent/CN113643417A/en
Application granted granted Critical
Publication of CN113643417B publication Critical patent/CN113643417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • G06T3/02
    • G06T3/10
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image adjusting method and device, an electronic device and a storage medium. The method includes: acquiring a first face image of a target virtual object in a target game; performing face image frame coordinate detection and face image key point detection on the first face image to obtain a face image frame and face image key points corresponding to the first face image; determining, based on the face image frame and the face image key points, an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image; triggering a face attribute model and standardizing the attribute value parameter to obtain a face attribute score corresponding to the first face image; and adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image. In this way, the first face image can be processed effectively and accurately, and adjusted based on its face attribute score to obtain the second face image, without relying on a neural network model; the method can therefore adapt to different types of virtual environments while reducing the occupation of hardware resources during image processing, lowering the cost of hardware equipment, and improving the user experience.

Description

Image adjusting method and device, electronic equipment and storage medium
Technical Field
The present invention relates to information processing technologies, and in particular, to an image adjustment method and apparatus, an electronic device, and a storage medium.
Background
Enriching the diversity of characters in a game greatly increases player interest, and the design and production of the face models of game characters is an important part of game design. However, because face model design usually requires substantial art costs, the common way to improve character richness at present is to enhance the diversity of face images through the face image transformation of a face-pinching system. In the related art, however, this transformation process is cumbersome: face attribute values must be computed by hand-defined algorithm logic rather than from attribute values defined on statistical data, which usually yields exaggerated game characters that do not meet users' needs; meanwhile, manually adjusting the parameters of the face-pinching system is tedious, unfriendly to novice users, and inaccurate, so the richness of game characters cannot truly be improved.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image adjustment method, an image adjustment device, an electronic device, and a storage medium, which can effectively improve efficiency and accuracy of face image transformation in a virtual scene.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an image adjusting method, which comprises the following steps:
acquiring a first face image of a target virtual object in a target game;
carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image;
determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key points;
triggering a face attribute model, and carrying out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image;
and adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image.
An embodiment of the present invention further provides an image processing apparatus, including:
the information transmission module is used for acquiring a first face image of a target virtual object in a target game;
the information processing module is used for carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image;
the information processing module is used for determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key point;
the information processing module is used for triggering a face attribute model and carrying out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image;
and the information processing module is used for adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image.
In the above scheme,
the information processing module is used for responding to a face image acquisition instruction, determining a face region in the face image acquisition environment of the target object through a face acquisition model, and determining illumination information corresponding to the face region;
the information processing module is used for determining the positions of key points of the face image matched in the face region according to the illumination information;
the information processing module is used for collecting a first face image of the target object in the face area based on the positions of the key points of the face image.
In the above scheme,
the information processing module is used for determining key points of the first face image according to the type of the target game;
the information processing module is used for carrying out image augmentation processing on the face image;
the information processing module is used for determining a face image frame corresponding to the first face image through a face image frame coordinate detection algorithm based on the processing result of image augmentation and obtaining a corresponding face position;
and the information processing module is used for calculating face image key points corresponding to the first face image according to face image key point detection based on the processing result of image augmentation.
In the above scheme,
the information processing module is used for determining the coordinates of key points of the face image of the first face image based on the face image frame;
the information processing module is used for carrying out normalization processing on the coordinates of the key points of the face image to obtain normalized coordinates of the key points of the face image;
the information processing module is used for determining an attribute value parameter corresponding to the first face image based on the normalized face image key point coordinates and the attribute category matched with the first face image.
In the above scheme,
the information processing module, configured to determine the attribute category matching the first facial image, includes: left eye, right eye, left eyebrow, right eyebrow, nose, mouth, chin;
the information processing module is used for respectively determining key point abscissa and key point ordinate of the face image, which correspond to different organs in the attribute category matched with the first face image, based on the normalized key point coordinates of the face image; or
The information processing module is configured to, when the target game is a simulated strategy game, determine that the attribute category matched with the first face image includes: left eye, right eye, nose, mouth;
and the information processing module is used for respectively determining the difference value between the abscissa of the key point of the face image and the standard value and the difference value between the ordinate of the key point of the face image and the standard value, which are respectively corresponding to different organs in the attribute category matched with the first face image, based on the normalized coordinates of the key point of the face image.
In the above scheme,
the information processing module is used for determining an attribute average value parameter and an attribute standard deviation parameter corresponding to the attribute value parameter through the face attribute model;
the information processing module is used for standardizing the attribute value parameters based on the attribute average value parameters and the attribute standard deviation parameters to obtain standardized attribute parameters;
and the information processing module is used for performing linear mapping on the standardized attribute parameters to obtain a face attribute score corresponding to the first face image.
In the above scheme,
the information processing module is used for acquiring a game face attribute score corresponding to the target virtual object;
the information processing module is used for determining the coordinates of different key points in the second face image based on the game face attribute scores and the face attribute scores corresponding to the first face image;
and the information processing module is used for adjusting the coordinates of the key points in the first face image based on the coordinates of different key points in the second face image to obtain the second face image.
In the above scheme,
the information processing module is used for acquiring a game role image of a target virtual object in the target game;
the information processing module is used for detecting key points of the game role image to obtain the key points of the game role image;
and the information processing module is used for performing triangular affine transformation processing on the key points of the game role image and the key points of the face image corresponding to the first face image to obtain the game role image subjected to face pinching processing, wherein the game role image subjected to face pinching processing and the first face image have the same face attribute score.
In the above scheme,
the information processing module is used for adjusting, when the first face image is a game role image, the color mode in which the skin color texture features of the first face image and the texture features of the hair style of the face image are superposed and rendered, so that the face of the first face image matches the color of a standard object in a target image template; or
and the information processing module is used for adjusting a margin mode of superposition rendering of the skin color texture feature and the texture feature of the hair style of the face image when the first face image is a cartoon image, so that the hair style part of the first face image is matched with the facial feature of a standard object in the target image template.
In the above scheme,
the information processing module is used for responding to a viewing operation aiming at a face image adjusting function item, presenting a content page comprising the first face image and the image template, and presenting at least one interactive function item in the content page, wherein the interactive function item is used for realizing interaction with the first face image;
the information processing module is used for receiving the interaction operation aiming at the first face image triggered based on the interaction function item so as to execute a corresponding interaction instruction.
In the above scheme,
the information processing module is used for presenting first interaction prompt information in the content page, and the first interaction prompt information is used for prompting that the interaction content corresponding to the interaction operation can be presented in a view interface of a target game;
and responding to the operation of switching to the view interface, and switching a content page to the view interface.
In the above scheme,
the information processing module is used for presenting second interaction prompt information in the content page, and the second interaction prompt information is used for prompting that the interaction content corresponding to the interaction operation can be presented in a target image template library interface corresponding to a target image template;
and the information processing module is used for responding to an instruction of switching to the target image template library interface and switching the content page to the target image template library interface.
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the image adjusting method when the executable instructions stored in the memory are executed.
The embodiment of the invention also provides a computer-readable storage medium, which stores executable instructions, and the executable instructions are executed by a processor to realize the image adjusting method.
The embodiment of the invention has the following beneficial effects:
the method comprises the steps of obtaining a first face image of a target virtual object in a target game; carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image; determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key points; triggering a face attribute model, and carrying out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image; and adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image. Therefore, the first face image can be effectively and accurately processed, the first face image is adjusted and processed based on the face attribute score corresponding to the first face image to obtain the second face image, the neural network model is not depended on, the virtual environments of different types can be adapted, meanwhile, the occupation of hardware resources is reduced in the image processing process, the cost of hardware equipment is reduced, and the use experience of a user is improved.
Drawings
Fig. 1 is a schematic view of a usage scenario of an image adjustment method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a structure of an image processing apparatus according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an alternative image adjustment method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the effect of game data processing according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a face image acquisition process according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of face image acquisition according to an embodiment of the present invention;
FIG. 7A is a schematic diagram of an attribute value parameter determination process according to an embodiment of the present invention;
FIG. 7B is a schematic diagram of another attribute value parameter determination process according to an embodiment of the present invention;
fig. 8 is a schematic flow chart of an alternative image adjustment method according to an embodiment of the present invention;
fig. 9 is a schematic flow chart of an alternative image adjustment method according to an embodiment of the present invention;
FIG. 10 is a schematic process diagram of triangular affine transformation processing in an embodiment of the present invention;
FIG. 11A is a schematic diagram of a triangular affine transformation in an embodiment of the present invention;
FIG. 11B is a diagram illustrating the effect of triangular affine transformation in the embodiment of the present invention;
FIG. 12 is a schematic view of an alternative display of an image adjustment method according to an embodiment of the present invention;
FIG. 13 is a schematic view of an alternative display of an image adjustment method according to an embodiment of the present invention;
fig. 14 is an alternative display diagram of the image adjustment method according to the embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without inventive work shall fall within the scope of protection of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before further detailed description of the embodiments of the present invention, the terms and expressions mentioned in the embodiments are explained; the following explanations apply to them.
1) In response to: indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Based on: indicates the condition or state on which an operation to be performed depends. When that condition or state is satisfied, the operation or operations may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) Normal distribution (also known as Gaussian distribution): if a random variable X follows a distribution with mathematical expectation μ and variance σ², this is denoted X ~ N(μ, σ²). The expected value μ determines the position of the probability density function, and the standard deviation σ determines the spread of the distribution. The normal distribution with μ = 0 and σ = 1 is the standard normal distribution.
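For reference, the probability density function of N(μ, σ²) (a standard fact added here, not quoted from the original text) is:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```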
4) Affine transformation, also called affine mapping, refers to a geometric transformation in which one vector space is linearly transformed and then translated into another vector space.
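In coordinates, an affine map composes a linear transformation with a translation (standard notation added for clarity, not taken from the original text):

```latex
y = A x + b, \qquad A \in \mathbb{R}^{m \times n}, \quad b \in \mathbb{R}^{m}
```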
5) Virtual scene: a scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated, semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment.
The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene is also used for virtual scene engagement between at least two virtual objects, for example a virtual firearm fight between them. The virtual scene may be, but is not limited to, that of a gunfight game, a parkour game, a Racing Game (RCG), a Multiplayer Online Battle Arena game (MOBA), or a Sports Game (SPG). The trained face attribute model can be deployed in the game servers corresponding to these game scenes and used to generate a real-time virtual scene advancing route, present it in the game interface, execute corresponding actions in the corresponding games, simulate the operations of virtual users, and complete different types of games in the virtual scenes together with the users who actually participate.
6) Virtual objects: the images of the various interactable people and objects in the virtual scene, or movable objects in the virtual scene. A movable object can be a virtual character, a virtual animal, an animation character, etc., such as the characters, animals, plants, oil drums, walls and stones displayed in the virtual scene. A virtual object may be an avatar representing the user in the virtual scene. The virtual scene may include a plurality of virtual objects, each having its own shape and volume and occupying a part of the space in the virtual scene.
Fig. 1 is a schematic view of an implementation scenario of an image adjustment method according to an embodiment of the present invention. Referring to fig. 1, to support an exemplary application, the terminals include a terminal 10-1 and a terminal 10-2, connected through a network 300 to a server 200 running an image processing apparatus; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless or wired links for data transmission.
The terminal (e.g., terminal 10-2) is located on the user side and is used for issuing an image processing request for acquiring a new game face image, wherein the target object can be a different game character in various types of game processes. The terminals (including the terminal 10-1 and the terminal 10-2) can acquire data of a virtual scene from the corresponding virtual scene server 200 through the network 300 and present the virtual scene in the display area of the terminal, and the image processing apparatus provided in the terminal can execute the following scheme: acquiring a first face image of a target virtual object in a target game; carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image; determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key points; triggering a face attribute model, and carrying out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image; and adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image.
In some embodiments, the terminal 10-1 may install and run applications that support virtual scenes. The application program can be a virtual reality application, a three-dimensional map program, a military simulation program, a First-Person Shooting game (FPS), a Multiplayer Online Battle Arena game (MOBA), or another application with a virtual scene. Taking a role-playing game as an example, after selecting a game character a user can upload a first face image, or select a corresponding historical face image in a template library (a face image generated by the image adjustment method of the present application and used in a game process), and adjust the face image with the face-pinching system. For example, in a cloud game, face pinching can be a way of generating, from an image containing a character, an avatar corresponding to that character's features: the cloud server can obtain the image containing the character from the terminal, pinch the face to generate the corresponding avatar, and can also generate a target image containing the avatar and send it to the terminal for storage. The image processing in the cloud game may also include, for example, the cloud server obtaining from the terminal an image of objects such as a gun, a knife or a backpack and processing it to generate game items for a player, or obtaining an image of a landscape (e.g., mountains, rivers, flowers and plants) and processing it to generate a game scene of the cloud game. Therefore, when the cloud game needs to process an image stored in the terminal, it needs to access the album storing the image and acquire the image from it. Based on this, the method provided by the embodiments of the present application can detect an access event to the terminal's album while the cloud game is running; if such an event is detected, the cloud server is triggered to access the album and obtain the first face image from it, so that the first face image can be processed in the cloud game and adjusted based on its corresponding face attribute score to obtain the second face image.
An image processing apparatus for implementing the image adjustment method according to an embodiment of the present invention will be described below. The image processing apparatus may be implemented in various forms, such as a terminal with an image processing apparatus processing function, or a server provided with an image processing apparatus processing function, such as the server 200 in the foregoing fig. 1. Fig. 2 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present invention, and it is understood that fig. 2 only shows an exemplary structure of the image processing apparatus, and not a whole structure, and a part of or the whole structure shown in fig. 2 may be implemented as needed.
The image processing apparatus provided by the embodiment of the invention comprises: at least one processor 201, memory 202, user interface 203, and at least one network interface 204. The various components in the image processing apparatus are coupled together by a bus system 205. It will be appreciated that the bus system 205 is used to enable communications among the components. The bus system 205 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 205 in fig. 2.
The user interface 203 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 202 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The memory 202 in embodiments of the present invention is capable of storing data to support operation of the terminal (e.g., 10-1). Examples of such data include: any computer program, such as an operating system and application programs, for operating on a terminal (e.g., 10-1). The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program may include various application programs.
In some embodiments, the image processing apparatus provided in the embodiments of the present invention may be implemented by a combination of hardware and software, and by way of example, the image processing apparatus provided in the embodiments of the present invention may be a processor in the form of a hardware decoding processor, which is programmed to execute the image adjusting method provided in the embodiments of the present invention. For example, a processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
As an example of the image processing apparatus provided by the embodiment of the present invention implemented by combining software and hardware, the image processing apparatus provided by the embodiment of the present invention may be directly embodied as a combination of software modules executed by the processor 201, where the software modules may be located in a storage medium located in the memory 202, and the processor 201 reads executable instructions included in the software modules in the memory 202, and completes the image adjusting method provided by the embodiment of the present invention in combination with necessary hardware (for example, including the processor 201 and other components connected to the bus 205).
By way of example, the processor 201 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, discrete gate or transistor logic, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
As an example of the image processing apparatus provided by the embodiment of the present invention implemented by hardware, the apparatus provided by the embodiment of the present invention may be implemented by directly using the processor 201 in the form of a hardware decoding processor, for example, by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components, to implement the image adjusting method provided by the embodiment of the present invention.
The memory 202 in the embodiment of the present invention is used to store various types of data to support the operation of the image processing apparatus. Examples of such data include any executable instructions for operating on the image processing apparatus; a program implementing the image adjusting method of the embodiments of the present invention may be contained in these executable instructions.
In other embodiments, the image processing apparatus provided by the embodiment of the present invention may be implemented by software, and fig. 2 shows the image processing apparatus stored in the memory 202, which may be software in the form of programs, plug-ins, and the like, and includes a series of modules, and as an example of the programs stored in the memory 202, the image processing apparatus may include the following software modules:
the information transmission module 2081, configured to obtain a first face image of a target virtual object in a target game;
the information processing module 2082 is configured to perform face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and a face image key point corresponding to the first face image;
the information processing module 2082 is configured to determine an attribute value parameter corresponding to the first face image according to an attribute category matched with the first face image based on the face image frame and the face image key point;
the information processing module 2082 is configured to trigger a face attribute model, and perform normalization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image;
the information processing module 2082 is configured to perform adjustment processing on the first face image based on the attribute value parameter corresponding to the first face image, so as to obtain a second face image.
In some embodiments, the image processing apparatus may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform. The terminal (e.g., terminal 10-1) may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present invention.
According to the image processing apparatus shown in fig. 2, in one aspect of the present application, the present application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the different embodiments, and combinations of embodiments, provided in the various alternative implementations of the image adjusting method described above.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
To overcome the defects that manual adjustment of the parameters of a face-pinching system makes the processing process cumbersome and unfriendly to novice users, the image adjustment method provided by the present invention may be executed by a game accelerator running in the terminal 10-1 or 10-2 shown in fig. 1, so as to improve the processing efficiency of face images, process complex face images more quickly, and simplify the operating steps for game users while ensuring the accuracy of face image generation in the virtual scene. Fig. 3 is an optional schematic flow chart of the image adjustment method provided by the embodiment of the present invention, and the steps shown in fig. 3 may be executed by various electronic devices running the image processing apparatus, such as a terminal equipped with the image processing apparatus; for example, a motion sensing game machine, or game accelerator software in a mobile phone, can also execute the image adjusting method provided by the present application.
The following specifically describes, by taking the image processing apparatus as an example to implement the image adjusting method provided by the embodiment of the present invention, with reference to the steps shown in fig. 3.
Step 301: the image processing apparatus acquires a first face image of a target virtual object in a target game.
In some embodiments of the present invention, referring to fig. 4, fig. 4 is a schematic diagram illustrating the effect of game data processing in an embodiment of the present invention. A game acceleration identifier may be disposed at an edge position of a game identifier (including but not limited to the upper left or upper right corner of the game identifier). Although the game screens presented in a game client vary with the type of target game, for any type of game the accelerator display position can be adjusted according to the user's habits. Taking the case where the game terminal is a motion sensing game machine as an example, clicking the acceleration identifier starts the acceleration process and executes the image adjustment method of the present application, while the first face image uploaded by the user is received, or a previously saved first face image is obtained according to the user's instruction.
Taking the first face image as the face image of a game user as an example, when the first face image is acquired, a face region in the face image acquisition environment of the target object can be determined through a face acquisition model in response to a face image acquisition instruction, and the illumination information corresponding to the face region is determined; the positions of the matched face image key points in the face region are determined according to the illumination information; and the first face image of the target object is collected in the face region based on those key point positions. Specifically, when a real-time image of the game user is obtained, the light intensity of the face region can also be determined from the illumination information. When the light intensity of the face region is smaller than a light threshold, the game terminal is triggered to adjust the light intensity of the face region so that it becomes greater than or equal to the light threshold, for example when triggering the game indicated by the game identifier 1 shown in fig. 4. Meanwhile, when the light intensity of the face region is below the threshold, the flash lamp of the game terminal is triggered to supplement the light of the face region, enhancing the light intensity so that clear face image key points are obtained. In this way, the situation in which the game terminal fails to collect a complete and clear face image due to insufficient light can be avoided.
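A minimal sketch of the light-intensity check described above; the threshold value, function names and luma weighting are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

LIGHT_THRESHOLD = 80  # illustrative gray-level threshold, not from the disclosure

def face_region_brightness(frame: np.ndarray, box: tuple) -> float:
    """Mean gray level of the face region; frame is an RGB uint8 image,
    box is (x, y, w, h)."""
    x, y, w, h = box
    region = frame[y:y + h, x:x + w].astype(np.float32)
    # Luma approximation of light intensity.
    gray = 0.299 * region[..., 0] + 0.587 * region[..., 1] + 0.114 * region[..., 2]
    return float(gray.mean())

def needs_fill_light(frame: np.ndarray, box: tuple) -> bool:
    # Below the threshold, the terminal would turn on the flash to
    # supplement light before capturing the first face image.
    return face_region_brightness(frame, box) < LIGHT_THRESHOLD
```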
In some embodiments of the present invention, referring to fig. 5, fig. 5 is a schematic diagram of an acquisition process of a face image in an embodiment of the present invention, when a shooting environment of an image acquisition device is dark, dark channel defogging processing may be performed on the face image to form an enhanced image, where the enhanced image may include a face feature and/or a limb feature, and the specific steps include:
determining a dark channel value, a gray value and a defogging adjustment value of the face image; determining an atmospheric light value of the face image based on the dark channel value, the defogging adjustment value and the gray value; and processing the face image according to the atmospheric light value and the light adjustment value to form the enhanced image. The dark channel is obtained by taking, for each pixel of the collected face image, the minimum of its R, G and B channels to form a gray-scale image, and then applying minimum-value filtering to that image; the defogging adjustment value can be obtained by analyzing the image parameters of the face image collected by the game terminal; and after the collected face image is converted into a gray-scale image, its gray value and dark channel value can be obtained. Denote the dark channel value as Dark_channel, the gray values of the face image as Mean_H and Mean_V, the atmospheric light value as AirLight, the defogging adjustment value as P, the light adjustment value as A, the face image to be enhanced as Input, and the inverted image as IR. For any input image, take the M% of pixel points with the largest gray values in its dark channel image and average the gray values of each channel over those pixels, where M ranges from 0.1 to 0.3; the atmospheric light value AirLight computed this way is a three-element vector, one element per color channel. Accordingly, in some embodiments of the present invention, when the face image is collected, the minimum of the three channels of each pixel point can be determined and assigned to the corresponding pixel point of the dark channel image, i.e. Dark_channel = min(Input_R, Input_G, Input_B); the dark channel value of the face image can thus be determined, and the collected face image is adjusted through the atmospheric light value and the light adjustment value to obtain a clearer acquisition result.
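A self-contained sketch of the dark-channel and atmospheric-light computation described above, assuming an RGB image normalized to [0, 1]; the patch size and the exact M value (taken here as 0.1%, within the stated 0.1-0.3% range) are illustrative choices:

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """img: (H, W, 3) float RGB array in [0, 1]. Per-pixel channel minimum
    followed by a minimum filter over a patch x patch window."""
    min_rgb = img.min(axis=2)  # Dark_channel = min(Input_R, Input_G, Input_B)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def atmospheric_light(img: np.ndarray, dark: np.ndarray, m: float = 0.001) -> np.ndarray:
    """AirLight: average each channel over the m fraction of pixels with the
    largest dark-channel values; returns a three-element vector."""
    n = max(1, int(dark.size * m))
    idx = np.argsort(dark.ravel())[-n:]
    flat = img.reshape(-1, 3)
    return flat[idx].mean(axis=0)
```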
Step 302: and the image processing device carries out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image.
In some embodiments of the present invention, when the face image frame and the face image key points corresponding to the first face image are obtained, the key points of the first face image may be determined according to the type of the target game; image augmentation processing is performed on the face image; based on the augmentation result at a preset magnification, the face image frame corresponding to the first face image is determined through a face image frame coordinate detection algorithm and the corresponding face position is obtained; and the face image key points corresponding to the first face image are calculated through face image key point detection based on the augmentation result.
Referring to fig. 6, fig. 6 is a schematic diagram of face image acquisition according to an embodiment of the present invention. Because the position of the image acquisition device is fixed while the heights of game users differ, the completeness of the collected face images also differs (a target object that is too short or too tall may prevent an accurate face image from being acquired). To acquire a more complete face image, the collected image may be augmented. Specifically, after the first face image collected through the terminal is obtained, the region where the user's face is located is first framed by a face detection technique, and the region is enlarged N times about its center (as shown in fig. 6, the detection area of the detection frame 601 is adjusted to the detection area of the detection frame 602), where N ranges from 1.8 to 2.0, so that more content is captured; for example, the enlarged image may include the complete face plus some background content. The face image including background content is then cropped to delete the redundant background and keep only the complete face image. For example, the following may be used: a face detection algorithm selects the corresponding face positions; in the augmentation result, the coordinates of the face image frame corresponding to the first face image are determined (such as the coordinates of the face detection frame 602 in fig. 6); and face image key point coordinate matching is performed on the face image contained in each frame based on those coordinates, the corresponding face position being obtained when the key point coordinates coincide with the face image frame coordinates.
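A sketch of the center enlargement of the detection frame described above (N = 1.8 here, within the stated 1.8-2.0 range); clipping to the image bounds is an added safeguard, not part of the disclosure:

```python
import numpy as np

def expand_face_box(box, img_shape, n: float = 1.8):
    """Enlarge (x, y, w, h) by factor n about its center, clipped to the image."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * n, h * n
    x0 = int(max(0, cx - new_w / 2.0))
    y0 = int(max(0, cy - new_h / 2.0))
    x1 = int(min(img_shape[1], cx + new_w / 2.0))
    y1 = int(min(img_shape[0], cy + new_h / 2.0))
    return x0, y0, x1 - x0, y1 - y0

def crop_box(img: np.ndarray, box):
    """Cut the (possibly enlarged) face region out of the frame."""
    x, y, w, h = box
    return img[y:y + h, x:x + w]
```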
Then, a facial-feature (five sense organs) positioning algorithm is used to mark the key points of the eyes, mouth, nose, etc., and a face image including background content is intercepted according to the detected face position. After that image is acquired, the background content still needs to be clipped; a pre-trained deep processing network may be triggered for this, which may include but is not limited to: LeNet, AlexNet, VGG, Inception series networks, and ResNet networks. Features of the face image are extracted (for example, gray-scale-based features such as mean and variance, distribution-histogram features, correlation-matrix features such as GLCM and GLRLM, or signal features after an image Fourier transform), and background cleaning is performed based on the extracted features to obtain the complete face image cropped by the deep processing network.
Step 303: the image processing device determines an attribute value parameter corresponding to the first face image by an attribute category matched with the first face image based on the face image frame and the face image key point.
When the face-pinching effect is achieved in the target game, the face-pinching function lets a game user freely create the appearance of a favorite game character. In this process, the attribute value parameters corresponding to the face image represent results of different spatial-attribute dimensions of the face image; the dimension categories include but are not limited to area, horizontal attributes and vertical attributes. In this embodiment, the attribute value parameters of the different attribute categories may specifically include: the organ size score of the face image, the score of an organ of the face image in the horizontal direction, and the score of an organ of the face image in the vertical direction.
Since games are of various types and the spatial attributes of face images differ across dimensions, when the target game is a role-playing game the face image of the target virtual object to be processed is more complex owing to the complex game environment, so more attribute categories can be determined when obtaining the attribute value parameters. For example, it can be determined that the different organs corresponding to the attribute category matched with the first face image include: left eye, right eye, left eyebrow, right eyebrow, nose, mouth and chin; and the face image key point abscissas and ordinates corresponding to these organs are determined respectively based on the normalized face image key point coordinates.
Referring to fig. 7A, fig. 7A is a process schematic of attribute value parameter determination in an embodiment of the present invention. When the target game is a simulated strategy game, the game environment is simple and the face image of the target virtual object to be processed is also simpler, so fewer attribute categories may be determined when obtaining the attribute value parameters, to increase the processing speed of the face image and reduce the waiting time of the game user. For example, it can be determined that the different organs corresponding to the attribute category matched with the first face image include: left eye (key points 42-46 shown in fig. 7A), right eye (key points 36-42), nose (key points 27-35) and mouth (key points 47-58); and, based on the normalized face image key point coordinates, the differences between the key point abscissas and the standard values, and between the key point ordinates and the standard values, are determined for each of these organs. Determining the attribute value parameter corresponding to the first face image includes: determining the face image key point coordinates of the first face image based on the face image frame; normalizing those coordinates to obtain normalized face image key point coordinates; and determining the attribute value parameter corresponding to the first face image based on the normalized coordinates and the attribute category matched with the first face image. In the normalization process, the key point coordinates of the first face image may first be processed into coordinate data with mean 0 and variance 1, and the normalization results of the individual key point coordinates are then combined to obtain the normalized key point list of the first face image. The mean and variance of the face image key points are then calculated (alternatively, the standard deviation of the key point coordinates can be obtained first and the variance calculated from it).
In some embodiments of the present invention, normalizing the coordinates of the face image key points comprises: subtracting the mean coordinate from each key point coordinate and dividing the result by the standard deviation. The normalization process refers to formula 1:
value_norm = (value - v) / a        (formula 1)
where value is an original key point coordinate of the first face image, v is the coordinate mean, and a is the coordinate standard deviation. When calculating the spatial attributes of the first face image, they can be represented by the score of the organ size (occupied area) of the five sense organs, the score of an organ in the horizontal direction, and the score of an organ in the vertical direction; for example, the organs whose attributes need to be calculated in this embodiment include those shown in fig. 7A: left eye, right eye, left eyebrow, right eyebrow, nose, mouth and chin. In the specific calculation, the area ratio of a face spatial attribute may be the proportion, within the first face image, of the area of the largest rectangle formed by all key points corresponding to the organ; the horizontal direction of a face spatial attribute may take the normalized abscissa of a key point of the first face image as the attribute value; and the vertical direction may take the normalized ordinate of a key point of the first face image as the attribute value.
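A sketch of the key point normalization (formula 1) and the three spatial attributes described above. The organ-to-index mapping follows the key point ranges quoted for fig. 7A, and using the mean of an organ's normalized coordinates as its horizontal/vertical attribute is an illustrative interpretation:

```python
import numpy as np

# Illustrative organ -> key point index ranges, per the fig. 7A text.
ORGANS = {
    "right_eye": list(range(36, 43)),  # key points 36-42
    "left_eye": list(range(42, 47)),   # key points 42-46
    "nose": list(range(27, 36)),       # key points 27-35
    "mouth": list(range(47, 59)),      # key points 47-58
}

def normalize_keypoints(pts: np.ndarray) -> np.ndarray:
    """pts: (K, 2) array of key point coordinates.
    Formula 1: (value - mean) / standard deviation, per axis."""
    return (pts - pts.mean(axis=0)) / pts.std(axis=0)

def organ_attributes(pts: np.ndarray, face_area: float) -> dict:
    """Area ratio plus horizontal/vertical attribute values per organ."""
    norm = normalize_keypoints(pts)
    out = {}
    for name, idx in ORGANS.items():
        raw, n = pts[idx], norm[idx]
        # Largest rectangle spanned by the organ's key points.
        w = raw[:, 0].max() - raw[:, 0].min()
        h = raw[:, 1].max() - raw[:, 1].min()
        out[name] = {
            "area_ratio": float(w * h) / face_area,
            "horizontal": float(n[:, 0].mean()),  # illustrative aggregation
            "vertical": float(n[:, 1].mean()),
        }
    return out
```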
It should be noted that, when the image processing method provided by the present application processes a face image, it can be applied not only to human face images but also to the face images of cartoon animations. Referring to fig. 7B, fig. 7B is a process schematic of attribute value parameter determination in an embodiment of the present invention in which the face image to be processed is a cartoon face image. After the user triggers the game acceleration flag shown in fig. 4 and uploads the cartoon face image, it can be determined that the different organs corresponding to the attribute category matched with the first face image include: left eye (key points 44-46 shown in fig. 7B), right eye (key points 35-41), nose (key points 27-35) and mouth (key points 48-57); and the differences between the face image key point abscissas corresponding to the left eye, right eye, nose and mouth and the standard values, and between the corresponding ordinates and the standard values, are determined respectively based on the normalized face image key point coordinates. Because game users use different cartoon images, when the uploaded face image of the cartoon image is a side face that only includes the left eye (key points 44-46 shown in fig. 7B), nose (key points 27-35) and mouth (key points 48-57), the differences between the key point abscissas corresponding to the left eye, nose and mouth and the standard values can still be determined, and prompt information is also presented in the interface shown in fig. 4 to remind the user to upload a complete cartoon face image, thereby avoiding distortion in image processing and preventing the user experience from being affected.
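A sketch of the completeness check implied above: if required organ key points are missing (for example, only a side face was uploaded), a prompt is returned instead of a distorted result. The index ranges follow the fig. 7B text, and the prompt wording is hypothetical:

```python
from typing import List, Optional, Set

# Illustrative required organs and fig. 7B key point index ranges.
REQUIRED = {
    "left_eye": set(range(44, 47)),   # key points 44-46
    "right_eye": set(range(35, 42)),  # key points 35-41
    "nose": set(range(27, 36)),       # key points 27-35
    "mouth": set(range(48, 58)),      # key points 48-57
}

def missing_organs(detected: Set[int]) -> List[str]:
    """Organs whose key points were not all detected."""
    return [name for name, idx in REQUIRED.items() if not idx <= detected]

def check_cartoon_face(detected: Set[int]) -> Optional[str]:
    missing = missing_organs(detected)
    if missing:
        # Shown in the interface to ask for a complete cartoon face image.
        return "Please upload a complete cartoon face image (missing: %s)" % ", ".join(missing)
    return None
```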
Step 304: the image processing apparatus triggers the face attribute model and carries out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image.
In some embodiments of the present invention, obtaining the face attribute score corresponding to the first face image may be implemented by:
determining an attribute mean parameter and an attribute standard deviation parameter corresponding to the attribute value parameter through the triggered face attribute model; carrying out standardization processing on the attribute value parameter based on the attribute mean parameter and the attribute standard deviation parameter to obtain a standardized attribute parameter; and performing linear mapping processing on the standardized attribute parameter into the score range (0-100), so as to obtain the face attribute score corresponding to the first face image. It should be noted that the face attribute model used in the present application is a statistical model based on the normal distribution: for example, in a role-playing game, face feature extraction and multi-attribute numerical calculation may be performed on 10000 face pictures (or different sample sets may be selected according to the game type), the mean v and standard deviation a of each attribute over all face pictures are counted, and the resulting normal-distribution statistical model containing the attribute mean v and the standard deviation a is deployed as the face attribute model in the corresponding server. When the attribute value parameter is standardized based on the attribute mean parameter and the attribute standard deviation parameter, the mean-and-standard-deviation standardization method based on the normal distribution may be used, according to formula 2:
$$data_{std} = \frac{data - v}{a} \qquad \text{(formula 2)}$$
where v represents the mean of the face attribute in the face attribute model, and a represents the standard deviation of the face attribute in the face attribute model. After standardization, the attribute score is obtained by linear mapping: the standardized attribute value data_std in the interval [-2.5, 2.5] is mapped to the score interval [0, 100]; when data_std is less than -2.5 the score is taken as 0, and when data_std is greater than 2.5 the score is taken as 100. The linear mapping process refers to formula 3:
$$score = \frac{data_{std} + 2.5}{5} \times 100 \qquad \text{(formula 3)}$$
It should be noted that, in the processing of formula 3, since the area of an interval on the horizontal axis under the normal curve (the difference of the error function between the upper and lower limits of the interval) reflects the percentage of instances falling in that interval out of the total number of instances, i.e. the probability that the variable value falls in that interval, the following can be determined in combination with the standard normal distribution table:
Under the normal curve, the interval (μ - 0.67448975σ, μ + 0.67448975σ) on the horizontal axis covers a probability of 50%. The area in the interval (μ - σ, μ + σ) is 68.268949%. The area in the interval (μ - 2σ, μ + 2σ) is 95.449974%. The area in the interval (μ - 2.5σ, μ + 2.5σ) is approximately 99%. The area in the interval (μ - 3σ, μ + 3σ) is 99.730020%.
Therefore, when a face image is processed by the face attribute model provided by the present application in a game environment, the area in the horizontal axis interval (μ - 2.5σ, μ + 2.5σ) is about 99%, so the mapping covers 99% of the actual attribute data in the statistics. Linearly mapping this interval to the score range (0-100) yields scores that are comprehensive, non-redundant, and strongly interpretable. The interval [-2.5, 2.5] of data_std that is mapped to the score interval [0, 100] is a threshold that can be flexibly adjusted according to the usage environment in different game environments.
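A short Python sketch of this standardization and clamped linear mapping (the function name is illustrative; the last lines check the roughly 99% coverage of the (μ - 2.5σ, μ + 2.5σ) interval via the error function):

```python
import math

def attribute_score(value, v, a):
    """Map a raw attribute value to a 0-100 score per formulas 2 and 3:
    z-score standardize with mean v and standard deviation a, clamp the
    result to [-2.5, 2.5], then map linearly onto [0, 100]."""
    data_std = (value - v) / a                   # formula 2
    data_std = max(-2.5, min(2.5, data_std))     # clamp the tails
    return (data_std + 2.5) / 5.0 * 100.0        # formula 3

# Coverage of (mu - 2.5*sigma, mu + 2.5*sigma) under the normal curve,
# expressed with the error function.
coverage = math.erf(2.5 / math.sqrt(2.0))
print(f"{coverage:.4%}")   # ~98.76%, i.e. roughly the 99% cited above
```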
Certainly, the image processing apparatus provided by the present invention may configure different face attribute models for different types of virtual scenes, and the face attribute model configured for the same user may also be called by other application programs (for example, a game simulator or a motion-sensing game device).
Step 305: the image processing apparatus adjusts the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image.
In some embodiments of the invention, the second face image may be acquired in at least one of the following ways: 1) image processing using face key points uploaded by the user; 2) image-processing-based face pinching using the face attribute score; 3) face pinching by locally adjusting face attributes using an existing face attribute score. Different embodiments are described below.
Before adjusting the first face image to obtain the second face image, the acquisition manner of the second face image is determined first. Referring to the terminals 10-1 and 10-2 shown in fig. 1, the usage habits of the users of terminal 10-1 and terminal 10-2 differ, so when executing the same game process, the acquisition manner of the second face image can differ to suit the habits of different users. Specifically, taking an FPS game as an example of the game scene, a user may operate on the terminal in advance; after detecting the operation, the terminal may download a game configuration file of the electronic game, which may include an application program, interface display data, virtual scene data, and the like, so that the user can call the game configuration file when logging in to the electronic game on the terminal and render and display the electronic game interface. The user may perform a touch operation on the terminal; after detecting the touch operation, the terminal may determine the game data corresponding to it and render and display that data. The game data may include virtual scene data, behavior data of virtual objects in the virtual scene, and the like, and the corresponding acquisition manner of the second face image is selected according to the user's operation (for example, uploading the user's avatar, capturing a real-time image of the user, or selecting an image template).
In some embodiments, referring to fig. 8, fig. 8 is an optional flowchart of the image adjustment method provided in the embodiments of the present invention. It can be understood that the steps shown in fig. 8 may be executed by various electronic devices running the image processing apparatus, for example, a game terminal equipped with the image processing apparatus, or a server cluster of a game operator that encapsulates the face attribute model in corresponding game accelerator software to provide a "face pinching" service for game users. The face-pinching system is a standard feature of most RPG (role-playing game) titles; through it, a player can create the appearance of a favorite game character with a high degree of freedom. The process specifically includes:
step 801: and acquiring the game face attribute score corresponding to the target virtual object.
Step 802: and determining the coordinates of different key points in the second face image based on the game face attribute scores and the face attribute scores corresponding to the first face image.
When determining the coordinates of different key points in the second face image, the area of each organ in the game face may be calculated based on the coordinates of the key points corresponding to different organs in the game face; calculating the area of each organ in the first face image based on the key point coordinates respectively corresponding to different organs in the first face image; determining key point coordinates respectively corresponding to different organs in the first face image based on the attribute value parameters corresponding to the first face image; and adjusting the coordinates of key points respectively corresponding to different organs in the first face image based on the ratio of the area of each organ in the game face to the area of each organ in the first face image to obtain the coordinates of different key points in the second face image.
Step 803: and adjusting the coordinates of the key points in the first face image based on the coordinates of different key points in the second face image to obtain a second face image.
During adjustment, because the second face image is obtained by adjusting the coordinates of the key points in the first face image, the skin color and hair style contour of the second face image are prone to large deformation, producing a large difference and reducing the recognizability of the image. Therefore, when the first face image is a game character image, the color mode in which the skin color texture features of the first face image and the texture features of the face image's hair style are overlay-rendered can be adjusted, so that the face of the first face image matches the color of the standard object in the target image template. By adjusting the color mode of the overlay rendering of the skin color texture features and the hair style texture features, distortion caused by an excessive difference between the face and hair style colors of the generated first face image and the colors of the second face image can be avoided.
When the first face image is a cartoon image, the margin mode in which the texture features of the face image's hair style are overlay-rendered is adjusted so that the hair style part of the first face image matches the facial features of the standard object in the target image template. By adjusting the margin mode of the overlay rendering of the hair style texture features, distortion caused by the hair style failing to fit during face image adjustment can be avoided.
Referring to fig. 9, fig. 9 is an optional flowchart of an image adjustment method according to an embodiment of the present invention. It can be understood that the steps shown in fig. 9 and the steps shown in fig. 8 may be executed selectively to suit different user habits; the process specifically includes:
step 901: and acquiring a game role image of a target virtual object in the target game.
Step 902: and detecting key points of the game role image to obtain the key points of the game role image.
Step 903: and performing triangular affine transformation processing on the key points of the game role image and the key points of the face image corresponding to the first face image to obtain the game role image subjected to face pinching processing.
The following describes the process of performing triangular affine transformation on the key points of the face image corresponding to the first face image with reference to fig. 10, where fig. 10 is a schematic diagram of the triangular affine transformation process in an embodiment of the present invention; the process specifically includes the following steps:
step 1001: three vertices of a first triangle are determined in the keypoints of the first face image, and the first triangle is constructed based on the determined three vertices.
Step 1002: constructing a first triangle group from the first triangle by Delaunay triangulation.
Wherein the first triangle group covers all key points of the first face image.
Referring to fig. 11A, fig. 11A is a schematic diagram illustrating the principle of triangular affine transformation in an embodiment of the present invention; Delaunay triangulation is implemented by incremental point insertion to generate a triangular mesh. Specifically, a rectangle can first be created, as shown by a in fig. 11A, enclosing all the key points A/B/C/D and forming 2 triangles, and each key point is then inserted step by step in a loop. As shown by b in fig. 11A, a point p is inserted into the existing triangular mesh and connected to the three vertices of the triangle containing p, forming three new triangles. Then, as shown by c in fig. 11A, empty-circumcircle detection is performed on the new triangles in sequence, and all triangles whose circumcircles contain the point p are determined; all such detected triangles are deleted, forming a polygonal cavity. Finally, as shown by d in fig. 11A, p is connected to each vertex of the resulting polygonal cavity to form a new triangular mesh, which ensures that the mesh remains a Delaunay triangulation, forming the triangle group.
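This incremental insert-and-retriangulate procedure is the classic Bowyer-Watson algorithm. As a minimal sketch (not the patent's own implementation), an equivalent Delaunay mesh over the face key points can be obtained with SciPy; the sample coordinates are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

# Assumed: keypoints is an (N, 2) array of face key point coordinates.
keypoints = np.array([[30.0, 40.0], [60.0, 38.0], [45.0, 70.0],
                      [20.0, 80.0], [70.0, 82.0], [45.0, 55.0]])

tri = Delaunay(keypoints)    # Delaunay triangulation of the key points
triangles = tri.simplices    # (M, 3) vertex indices, one row per triangle
print(triangles)
```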
Step 1003: three vertices of a second triangle are determined in the keypoints of the original game character image, and the second triangle is constructed based on the determined three vertices.
Referring also to the process shown in FIG. 11A, a second triangle may be constructed from the original game character image, and an affine matrix may be calculated using the second triangle and the first triangle created in the preceding step.
Step 1004: and determining a corresponding affine matrix through the first triangle and the second triangle, wherein the affine matrix comprises parameters for affine transformation of the first triangle and the second triangle.
When calculating the affine matrix, assume that the three vertices of the original triangle are A0, B0, and C0, and that the three vertices of the target triangle are A1, B1, and C1. The affine transformation matrix T can then be obtained from the coordinates of these 6 points using formula 4 (the transformation matrix calculation formula):
$$T = \begin{bmatrix} x_{A1} & x_{B1} & x_{C1} \\ y_{A1} & y_{B1} & y_{C1} \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_{A0} & x_{B0} & x_{C0} \\ y_{A0} & y_{B0} & y_{C0} \\ 1 & 1 & 1 \end{bmatrix}^{-1} \qquad \text{(formula 4)}$$
then, affine transformation is performed according to formula 5 to perform corresponding affine transformation processing on the pixels of the target triangle, and a new triangle is generated.
Figure BDA0003216297000000242
Corresponding matrix representation
Figure BDA0003216297000000243
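The following NumPy sketch illustrates formulas 4 and 5 as reconstructed above (the function names and example coordinates are ours): the affine matrix T is solved from the three vertex pairs in homogeneous coordinates and then applied to points of the triangle.

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Formula 4: find T with dst_h = T @ src_h, where src and dst are
    (3, 2) arrays of triangle vertices, stacked in homogeneous form."""
    src_h = np.vstack([np.asarray(src, float).T, np.ones(3)])  # 3x3
    dst_h = np.vstack([np.asarray(dst, float).T, np.ones(3)])  # 3x3
    return dst_h @ np.linalg.inv(src_h)

def apply_affine(T, points):
    """Formula 5: transform an (N, 2) array of points by T."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])     # N x 3
    return (pts_h @ T.T)[:, :2]

# Example: map one triangle onto another.
A0, B0, C0 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
A1, B1, C1 = (2.0, 1.0), (4.0, 1.0), (2.0, 3.0)
T = affine_from_triangles([A0, B0, C0], [A1, B1, C1])
print(apply_affine(T, np.array([[0.5, 0.5]])))   # -> [[3. 2.]]
```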
Step 1005: and carrying out affine processing on each first triangle in the first triangle group through the affine matrix to obtain each third triangle corresponding to the target game character image.
Referring to fig. 11B, fig. 11B is a schematic diagram illustrating the effect of triangular affine transformation in an embodiment of the present invention: each triangle in the first triangle group of the first face image is transformed by the affine matrix into the corresponding triangle formed by key points in the target game character image, thereby implementing the triangular affine transformation.
Step 1006: and determining the key points of the target game character image through the vertex of each third triangle corresponding to the target game character image.
Step 1007: and obtaining the target game role image subjected to face pinching processing through the key points of the target game role image.
The game character image subjected to face pinching and the first face image have the same face attribute score, so that through the face-pinching service a game user can create the appearance of a favorite game character with a high degree of freedom, meeting the usage needs of different users.
In some embodiments, referring to fig. 12, fig. 12 is an optional display schematic diagram of an image adjustment method provided by an embodiment of the present invention. In response to a viewing operation for the face image adjustment function item, a content page including the first face image and the image template may be presented, and at least one interactive function item may be presented in the content page, the interactive function item being used to implement interaction with the first face image; an interaction operation for the first face image, triggered via the interactive function item, is then received so as to execute the corresponding interaction instruction. When a user triggers an applet with the face-pinching function and the first face image presented in the viewing interface is not suitable, the user terminal can re-capture a first face image from the real-time environment. Through first interaction prompt information, the user confirms that the re-captured first face image can be presented in the viewing interface for use in the face-pinching process, and the content page is switched to the viewing interface. The first face image to be processed is thus adjusted according to the user's real-time environment, enriching the user's choices.
Referring to fig. 13, fig. 13 is an optional display schematic diagram of the image adjustment method according to an embodiment of the present invention. Second interaction prompt information may further be presented in the content page, the second interaction prompt information being used to prompt that the interaction content corresponding to the interaction operation can be presented in the target image template library interface corresponding to a target image template; in response to an instruction to switch to the target image template library interface, the content page is switched to that interface. The different target image templates in the target image template library used by the face-pinching process are preprocessed to determine the texture feature sets corresponding to different target parts, and the template types are fixed. Through the second interaction prompt information, the user can confirm that the second face image re-captured by the terminal from the real-time environment can be presented in the target image template library interface, so that the user can configure the image templates in the target image template library according to his or her own real-time environment.
Next, taking a role-playing game as an example, the process of adjusting the first face image based on its corresponding face attribute score to obtain the second face image is described further with reference to fig. 14, where fig. 14 is an optional flowchart of an image adjustment method provided in an embodiment of the present invention; the process specifically includes the following steps:
step 1401: and acquiring the face attribute score corresponding to the first face image and the face attribute score of the game role image.
Step 1402: adjusting the key point coordinates of the first face image according to the area scores.
The coordinate adjustment of key points is performed in units of organ point sets; when the score of one organ is adjusted, all key points corresponding to that organ are adjusted. For example, let the set of key points be A = [a_0, a_1, a_2, ..., a_n], where each key point consists of a horizontal coordinate and a vertical coordinate, denoted a_i = (x_i, y_i). The original area score is denoted score_origin, and the target area score is denoted score_target.
The original area attribute value is area_origin and the target attribute value is area_target, and the key point coordinate values are modified according to the area ratio. The new coordinate of each key point is computed per formula 6:
$$a_i' = \sqrt{\frac{area_{target}}{area_{origin}}}\,(a_i - \bar{a}) + \bar{a}' \qquad \text{(formula 6)}$$

where $\bar{a}$ represents the mean of the organ point set, and $\bar{a}'$ is the key point coordinate mean of the second face image.
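A minimal NumPy sketch of formula 6 as reconstructed above; the square-root factor is our reading of "modifying the coordinate values according to the area proportion", since area scales with the square of linear size, and the function and parameter names are illustrative:

```python
import numpy as np

def adjust_organ_by_area(organ_pts, area_origin, area_target, target_mean):
    """Formula 6: scale an organ's key points about their own mean by
    sqrt(area_target / area_origin), then recenter them on the key point
    coordinate mean of the second face image."""
    organ_pts = np.asarray(organ_pts, dtype=float)      # (n, 2)
    scale = np.sqrt(area_target / area_origin)
    organ_mean = organ_pts.mean(axis=0)
    return scale * (organ_pts - organ_mean) + np.asarray(target_mean)
```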
Step 1403: and adjusting the key point coordinates of the first face image according to the horizontal scores.
Referring to formula 7, the original attribute and the new attribute are compared to obtain the difference between the attribute values:

$$value_{diff} = value_2 - value_1 \qquad \text{(formula 7)}$$
where score_2 and score_1 are the target score and the original score respectively, and value_2 and value_1 are the attribute value of the face attribute of the game character image and the attribute value of the face attribute corresponding to the first face image respectively; they are used to calculate the attribute difference of the game character image's face attribute score relative to the face attribute score corresponding to the first face image. The horizontal movement distance may be calculated first; based on formula 8, the difference between the attributes is converted into an actual coordinate movement distance:

$$distance = value_{diff} \times a_{face} \qquad \text{(formula 8)}$$

where avg_face is the mean of the face image key point coordinates and a_face is their standard deviation (denormalizing an attribute value gives value·a_face + avg_face, so avg_face cancels when two such values are differenced); distance is a coordinate offset, comprising a horizontal offset distance_x and a vertical offset distance_y. The target key point coordinates are then calculated by adding the offset distance to each key point of the organ to obtain the new key point coordinates. The calculation of the new horizontal and vertical coordinates of a single key point refers to formula 9:
$$new\_x = x + distance_x \qquad \text{(formula 9)}$$

where x is the original key point horizontal coordinate and new_x is the new key point horizontal coordinate; together with formula 10, this yields the new coordinate (new_x, new_y).
Step 1404: and adjusting the key point coordinates of the first face image according to the vertical scores.
Wherein the vertical movement distance is first calculated, converting the attribute difference into an actual coordinate movement distance, and the vertical coordinates of the new key points are then calculated by formula 10:
$$new\_y = y + distance_y \qquad \text{(formula 10)}$$
where y is the original key point vertical coordinate and new_y is the new key point vertical coordinate.
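Formulas 7-10 together convert a score difference into key point offsets. The following Python sketch traces that pipeline under the same reconstruction assumptions (in particular, recovering attribute values from scores by inverting the formula 3 mapping; names are illustrative):

```python
import numpy as np

def score_to_value(score):
    """Invert the formula 3 mapping: a 0-100 score back to a
    standardized attribute value in [-2.5, 2.5]."""
    return score / 100.0 * 5.0 - 2.5

def offset_keypoints(organ_pts, score_target, score_origin, a_face):
    """Formulas 7-10: turn the score difference into a coordinate
    offset and shift every key point of the organ by it."""
    value_diff = score_to_value(score_target) - score_to_value(score_origin)
    distance = value_diff * a_face          # formula 8, per axis (x, y)
    return np.asarray(organ_pts, float) + distance  # formulas 9 and 10

# Example: raise an organ's horizontal score from 40 to 60; the vertical
# component of a_face is zeroed so only the x coordinates move.
pts = np.array([[100.0, 120.0], [110.0, 118.0]])
print(offset_keypoints(pts, 60.0, 40.0, np.array([12.0, 0.0])))
# -> [[112. 120.], [122. 118.]]
```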
Step 1405: and obtaining and displaying a second face image.
The resulting second face image may also be stored in the game terminal; when the game user triggers the game acceleration flag shown in fig. 4, the second face image is presented to the game user, so that the game user can select the face image to be used more quickly.
The beneficial technical effects are as follows:
The method comprises: obtaining a first face image of a target virtual object in a target game; carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image; determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key points; triggering a face attribute model and carrying out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image; and adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image. In this way, the first face image can be processed effectively and accurately, and adjusted based on its corresponding face attribute score to obtain the second face image without depending on a neural network model; the method adapts to virtual environments of different types while reducing the occupation of hardware resources during image processing, lowering the cost of hardware equipment and improving the user experience.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (15)

1. An image processing method, characterized in that the method comprises:
acquiring a first face image of a target virtual object in a target game;
carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image;
determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key points;
and adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image.
2. The method according to claim 1, wherein the performing a face image frame coordinate detection process and a face image key point detection process on the first face image to obtain a face image frame and a face image key point corresponding to the first face image comprises:
determining key points of the first face image according to the type of the target game;
carrying out image augmentation processing on the first face image to obtain image augmentation processing results of a preset number of times;
determining a face image frame corresponding to the first face image through a face image frame coordinate detection algorithm based on the processing result of image augmentation, and obtaining a corresponding face position;
and calculating face image key points corresponding to the first face image by face image key point detection in the face position based on the processing result of image augmentation.
3. The method of claim 2, wherein the determining a face image frame corresponding to the first face image by a face image frame coordinate detection algorithm based on the processing result of the image augmentation and obtaining a corresponding face position comprises:
determining coordinates of a face image frame corresponding to the first face image in the processing result of the image augmentation through a face image frame coordinate detection algorithm;
and performing face image key point coordinate matching on the face image contained in each face image frame based on the coordinates of the face image frame, and obtaining a corresponding face position when the coordinates of the face image key points coincide with the coordinates of the face image frame.
4. The method of claim 1, wherein determining an attribute value parameter corresponding to the first facial image by an attribute class matching the first facial image based on the facial image frame and facial image keypoints comprises:
determining coordinates of facial image key points of the first facial image based on the facial image frame;
carrying out normalization processing on the coordinates of the key points of the face image to obtain normalized coordinates of the key points of the face image;
and determining an attribute value parameter corresponding to the first face image based on the normalized face image key point coordinates and the attribute category matched with the first face image.
5. The method of claim 4, further comprising:
when the target virtual object is a virtual object of a role-playing game,
determining the attribute class that matches the first facial image comprises: left eye, right eye, left eyebrow, right eyebrow, nose, mouth, chin;
and respectively determining the face image key point abscissa and the face image key point ordinate which respectively correspond to different organs in the attribute category matched with the first face image based on the normalized face image key point coordinate.
6. The method of claim 4, further comprising:
when the target virtual object is a virtual object of a simulation strategy game,
determining the attribute class that matches the first facial image comprises: left eye, right eye, nose, mouth;
respectively determining the difference value between the abscissa and the abscissa standard value of the key point of the face image respectively corresponding to different organs in the attribute category matched with the first face image and the difference value between the ordinate and the ordinate standard value of the key point of the face image based on the normalized coordinates of the key point of the face image;
determining the face image key point abscissas corresponding to different organs in the attribute category matched with the first face image respectively based on the difference value between the face image key point abscissas and the abscissa standard value;
and determining the longitudinal coordinates of the key points of the face image respectively corresponding to different organs in the attribute category matched with the first face image based on the difference value between the longitudinal coordinate of the key points of the face image and the standard value of the longitudinal coordinate.
7. The method according to claim 1, wherein the adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image comprises:
triggering a face attribute model, and carrying out standardization processing on the attribute value parameters through the face attribute model to obtain a face attribute score corresponding to the first face image;
acquiring a game face attribute score corresponding to a target virtual object;
determining coordinates of different key points in the second face image based on the game face attribute scores and the face attribute scores corresponding to the first face image;
and adjusting the coordinates of the key points in the first face image based on the coordinates of different key points in the second face image to obtain a second face image.
8. The method of claim 7, wherein the triggering the face attribute model and normalizing the attribute value parameter by the face attribute model to obtain a face attribute score corresponding to the first face image comprises:
determining an attribute average value parameter and an attribute standard deviation parameter corresponding to the attribute value parameter through the face attribute model, wherein the attribute average value parameter is an attribute average value of each face image in a face image sample set;
based on the attribute mean value parameter and the attribute standard deviation parameter, performing Z-score standardization processing on the attribute value parameter, and mapping the attribute value parameter to a target interval to obtain a standardized attribute parameter;
and performing linear mapping processing on the standardized attribute parameters to obtain a face attribute score corresponding to the first face image.
9. The method of claim 7, wherein determining coordinates of different keypoints in the second facial image based on the game face attribute score and a face attribute score corresponding to the first facial image comprises:
calculating the area of each organ in the game face based on the key point coordinates respectively corresponding to different organs in the game face;
calculating the area of each organ in the first face image based on the key point coordinates respectively corresponding to different organs in the first face image;
determining key point coordinates respectively corresponding to different organs in the first face image based on the attribute value parameters corresponding to the first face image;
and adjusting the coordinates of key points respectively corresponding to different organs in the first face image based on the ratio of the area of each organ in the game face to the area of each organ in the first face image to obtain the coordinates of different key points in the second face image.
10. The method of claim 1, further comprising:
acquiring an original game role image of a target virtual object in the target game;
detecting key points of the original game role image to obtain the key points of the original game role image;
and performing triangular affine transformation processing on the key points of the original game role image and the key points of the face image corresponding to the first face image to obtain a target game role image subjected to face pinching processing, wherein the target game role image and the first face image have the same face attribute score.
11. The method according to claim 10, wherein the obtaining of the target game character image subjected to the face-pinching processing by performing triangular affine transformation processing based on the key points of the original game character image and the key points of the face image corresponding to the first face image comprises:
determining three vertices of a first triangle in the keypoints of the first face image, and constructing the first triangle based on the determined three vertices;
constructing a first triangle group by using a Delaunay triangulation process through the first triangle, wherein the first triangle group comprises all key points of the first face image;
determining three vertices of a second triangle among the key points of the original game character image, and constructing the second triangle based on the determined three vertices;
determining a corresponding affine matrix through the first triangle and the second triangle, wherein the affine matrix comprises parameters for affine transformation of the first triangle and the second triangle;
carrying out affine processing on each first triangle in the first triangle group through the affine matrix to obtain each third triangle corresponding to the target game character image;
determining key points of the target game role image through the vertexes of each third triangle corresponding to the target game role image;
and obtaining the target game role image subjected to face pinching processing through the key points of the target game role image.
12. The method of claim 1, further comprising:
in response to a viewing operation for the face image adjustment function item, presenting a content page including the first face image and the image template, and presenting at least one interactive function item in the content page, the interactive function item being used for realizing interaction with the first face image;
and receiving an interaction operation aiming at the first face image and triggered based on the interaction function item so as to execute a corresponding interaction instruction.
13. An image processing apparatus, characterized in that the apparatus comprises:
the information transmission module is used for acquiring a first face image of a target virtual object in a target game;
the information processing module is used for carrying out face image frame coordinate detection processing and face image key point detection processing on the first face image to obtain a face image frame and face image key points corresponding to the first face image;
the information processing module is used for determining an attribute value parameter corresponding to the first face image through an attribute category matched with the first face image based on the face image frame and the face image key point;
the information processing module is used for triggering a face attribute model and carrying out standardization processing on the attribute value parameters to obtain a face attribute score corresponding to the first face image;
and the information processing module is used for adjusting the first face image based on the attribute value parameter corresponding to the first face image to obtain a second face image.
14. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the image adjustment method of any one of claims 1 to 12 when executing the executable instructions stored by the memory.
15. A computer-readable storage medium storing executable instructions, wherein the executable instructions when executed by a processor implement the image adjustment method of any one of claims 1-12.