CN116433852B - Data processing method, device, equipment and storage medium - Google Patents

Data processing method, device, equipment and storage medium

Info

Publication number
CN116433852B
CN116433852B (application CN202310707673.XA)
Authority
CN
China
Prior art keywords: head, target, grid, initial, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310707673.XA
Other languages
Chinese (zh)
Other versions
CN116433852A (en)
Inventor
李文娟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310707673.XA priority Critical patent/CN116433852B/en
Publication of CN116433852A publication Critical patent/CN116433852A/en
Application granted granted Critical
Publication of CN116433852B publication Critical patent/CN116433852B/en

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N 3/00 — Computing arrangements based on biological models; G06N 3/02 — Neural networks; G06N 3/08 — Learning methods
    • G06V 10/20 — Image preprocessing; G06V 10/255 — Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/54 — Extraction of image or video features relating to texture
    • G06V 10/7715 — Feature extraction, e.g. by transforming the feature space (multi-dimensional scaling [MDS], subspace methods)
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06T 2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles; Y02T 10/40 — Engine management systems

Abstract

The embodiment of the application discloses a data processing method, device, equipment and storage medium. The method comprises: acquiring an initial object mesh model to be optimized and K images that include a target object; performing head feature extraction on the K images to obtain K candidate head shape data and K candidate head texture data of the target object; correcting the initial head shape data according to the K candidate head shape data to obtain target head shape data of the target object; correcting the initial head texture data according to the K candidate head texture data to obtain target head texture data; and optimizing and updating the M head meshes in the initial object mesh model of the target object according to the target head shape data and the target head texture data, to obtain an optimized target object mesh model. The application improves the accuracy of the head mesh model (the head meshes in the target object mesh model).

Description

Data processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a data processing method, apparatus, device, and storage medium.
Background
With the development of Internet technology, head reconstruction has been widely applied in scenes such as games, animation and medicine. Head reconstruction refers to constructing, in the virtual world, a head mesh model of an object (such as a virtual object, or a user in the real world), from which realistic images can be rendered. At present, head mesh models must be constructed manually; because of their high complexity, manual construction is prone to error, and the resulting head mesh model has low accuracy.
Disclosure of Invention
The embodiments of the application provide a data processing method, apparatus, device and storage medium, which can improve the accuracy of a head mesh model (the head meshes in a target object mesh model).
An aspect of an embodiment of the present application provides a data processing method, including:
acquiring an initial object mesh model to be optimized and K images that include a target object, where the initial object mesh model includes M head meshes carrying initial head shape data and initial head texture data of the target object, and K and M are both integers greater than 1;
performing head feature extraction on the K images to obtain K candidate head shape data and K candidate head texture data of the target object;
correcting the initial head shape data of the target object according to the K candidate head shape data to obtain target head shape data of the target object;
correcting the initial head texture data of the target object according to the K candidate head texture data to obtain target head texture data of the target object;
and optimizing and updating the M head meshes in the initial object mesh model of the target object according to the target head shape data and the target head texture data, to obtain an optimized target object mesh model.
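A minimal Python sketch of the five steps above. All function names are illustrative, and the averaging-based "correction" is one plausible fusion strategy rather than the patent's actual algorithm:

```python
def extract_head_features(image):
    # Placeholder for the head-feature extraction network: returns
    # (candidate_shape_data, candidate_texture_data) for one image.
    return image["shape"], image["texture"]

def correct(initial, candidates):
    # Fuse the K candidates with the initial data; plain averaging is
    # used here purely as an illustrative correction strategy.
    n = len(candidates) + 1
    return [(init + sum(c[i] for c in candidates)) / n
            for i, init in enumerate(initial)]

def optimize_mesh_model(initial_model, shape, texture):
    # Write the corrected shape/texture back into the M head meshes.
    return {"head_meshes": [{"shape": shape, "texture": texture}
                            for _ in initial_model["head_meshes"]]}

# Toy example with K = 2 images and 2-dimensional shape/texture data.
images = [{"shape": [1.0, 2.0], "texture": [0.5, 0.5]},
          {"shape": [3.0, 0.0], "texture": [0.7, 0.3]}]
initial_model = {"head_meshes": [{}, {}],          # M = 2 head meshes
                 "shape": [2.0, 1.0], "texture": [0.6, 0.4]}

cand = [extract_head_features(im) for im in images]
target_shape = correct(initial_model["shape"], [c[0] for c in cand])
target_texture = correct(initial_model["texture"], [c[1] for c in cand])
target_model = optimize_mesh_model(initial_model, target_shape, target_texture)
print(target_shape)  # → [2.0, 1.0]
```

In this sketch the "corrected" value is simply the mean of the initial value and the K candidates; the patent leaves the concrete correction operation to the claimed model.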
An aspect of an embodiment of the present application provides a data processing method, including:
acquiring labeled head contour data of a sample object, an initial object mesh model to be optimized, and P images that include the sample object, where the initial object mesh model includes Q head meshes carrying initial head shape data and initial head texture data of the sample object, P and Q are both integers greater than 1, and the training data includes labeled head shape data of the sample object;
invoking an initial head reconstruction model to perform head feature extraction on the P images, obtaining P candidate head shape data and P candidate head texture data of the sample object;
correcting the initial head shape data of the sample object according to the P candidate head shape data to obtain predicted head shape data of the sample object;
correcting the initial head texture data of the sample object according to the P candidate head texture data to obtain predicted head texture data of the sample object;
optimizing and updating the Q head meshes in the initial object mesh model of the sample object according to the predicted head texture data and the predicted head shape data, to obtain an optimized predicted object mesh model;
training the initial head reconstruction model according to the labeled head contour data, the predicted head texture data, the predicted object mesh model and the predicted head shape data to obtain a target head reconstruction model; the target head reconstruction model, when invoked, is used to perform the method described above.
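The training step above compares predictions against the labels. The specific loss terms below (L2 on shape and contour plus a small texture regularizer) are assumptions for illustration only — the patent does not fix a loss function at this point:

```python
def l2(a, b):
    # Sum of squared differences between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def training_loss(pred_shape, pred_texture, pred_contour,
                  label_shape, label_contour):
    shape_loss = l2(pred_shape, label_shape)        # vs. labeled head shape
    contour_loss = l2(pred_contour, label_contour)  # vs. labeled head contour
    # A hypothetical regularizer keeping predicted texture values bounded.
    texture_reg = sum(t * t for t in pred_texture)
    return shape_loss + contour_loss + 0.01 * texture_reg

# Toy 2-dimensional predictions and labels.
loss = training_loss([1.0, 2.0], [0.5, 0.5], [0.2, 0.8],
                     [1.0, 1.5], [0.0, 1.0])
print(round(loss, 4))  # → 0.335
```

An optimizer would then update the initial head reconstruction model's parameters to reduce this loss, yielding the target head reconstruction model.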
An aspect of an embodiment of the present application provides a data processing apparatus, including:
an acquisition module, configured to acquire an initial object mesh model to be optimized and K images that include a target object, where the initial object mesh model includes M head meshes carrying initial head shape data and initial head texture data of the target object, and K and M are both integers greater than 1;
an extraction module, configured to perform head feature extraction on the K images to obtain K candidate head shape data and K candidate head texture data of the target object;
a first correction module, configured to correct the initial head shape data of the target object according to the K candidate head shape data to obtain target head shape data of the target object;
a second correction module, configured to correct the initial head texture data of the target object according to the K candidate head texture data to obtain target head texture data of the target object;
and an updating module, configured to optimize and update the M head meshes in the initial object mesh model of the target object according to the target head shape data and the target head texture data, to obtain an optimized target object mesh model.
An aspect of an embodiment of the present application provides a data processing apparatus, including:
an acquisition module, configured to acquire labeled head contour data of a sample object, an initial object mesh model to be optimized, and P images that include the sample object, where the initial object mesh model includes Q head meshes carrying initial head shape data and initial head texture data of the sample object, P and Q are both integers greater than 1, and the training data includes labeled head shape data of the sample object;
an extraction module, configured to invoke an initial head reconstruction model to perform head feature extraction on the P images, obtaining P candidate head shape data and P candidate head texture data of the sample object;
a first correction module, configured to correct the initial head shape data of the sample object according to the P candidate head shape data to obtain predicted head shape data of the sample object;
a second correction module, configured to correct the initial head texture data of the sample object according to the P candidate head texture data to obtain predicted head texture data of the sample object;
an updating module, configured to optimize and update the Q head meshes in the initial object mesh model of the sample object according to the predicted head texture data and the predicted head shape data, to obtain an optimized predicted object mesh model;
and a training module, configured to train the initial head reconstruction model according to the labeled head contour data, the predicted head texture data, the predicted object mesh model and the predicted head shape data to obtain a target head reconstruction model, where the target head reconstruction model, when invoked, is used to perform the method described above.
In one aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
In one aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, the computer program implementing the steps of the method described above when executed by a processor.
In one aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described above.
In the present application, the K candidate head shape data and the K candidate head texture data are extracted from K images that include the target object, so they provide more detailed information about the head of the target object. Correcting the initial head shape data in the initial object mesh model with the K candidate head shape data to obtain the target head shape data, and correcting the initial head texture data with the K candidate head texture data to obtain the target head texture data, therefore improves the accuracy of both. Further, optimizing and updating the head meshes in the initial object mesh model according to the target head shape data and the target head texture data improves the accuracy of the head meshes in the optimized target object mesh model. In addition, because this optimization and update process requires no manual participation, labor cost is saved and the efficiency of obtaining the object mesh model is improved.
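A quick numeric check of the intuition that K independent estimates beat one (an illustrative experiment, not from the patent): averaging K noisy estimates of the same quantity shrinks the expected error by roughly a factor of the square root of K.

```python
import random

random.seed(0)
TRUE_VALUE = 1.0  # the quantity being estimated, e.g. one shape parameter

def mean_abs_error(k, trials=2000):
    # Average absolute error of the mean of k noisy estimates
    # (Gaussian noise with standard deviation 0.2).
    total = 0.0
    for _ in range(trials):
        est = sum(TRUE_VALUE + random.gauss(0, 0.2) for _ in range(k)) / k
        total += abs(est - TRUE_VALUE)
    return total / trials

err_1, err_9 = mean_abs_error(1), mean_abs_error(9)
print(err_1 > 2 * err_9)  # averaging 9 estimates cuts the error ~3x
```

This is why fusing K per-image candidates, rather than trusting a single image, makes the corrected target head data more accurate.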
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a data processing system provided by the present application;
FIG. 2 is a schematic diagram of an interaction scenario of a data processing method provided by the present application;
FIG. 3 is a schematic view of an interaction scenario of another data processing method provided by the present application;
FIG. 4 is a flow chart of a data processing method according to the present application;
FIG. 5 is a schematic view of a scene for acquiring target head depth data of a target image according to the present application;
FIG. 6 is a flow chart of another data processing method according to the present application;
FIG. 7 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another data processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Embodiments of the present application may relate to fields such as artificial intelligence, autonomous driving and intelligent transportation. Artificial intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technology mainly includes computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout the fields of AI. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
For example, in the present application, machine learning is used to construct a target head reconstruction model. The target head reconstruction model is invoked to perform head feature extraction on the K images, obtaining K candidate head shape data and K candidate head texture data of the target object; the initial head shape data are corrected according to the K candidate head shape data to obtain the target head shape data, and the initial head texture data are corrected according to the K candidate head texture data to obtain the target head texture data. Further, the target head reconstruction model is invoked to optimize and update the M head meshes in the initial object mesh model according to the target head shape data and the target head texture data, obtaining an optimized target object mesh model. The head meshes in the initial object mesh model can thus be rapidly optimized and updated, improving both the accuracy of the head mesh model and the efficiency of the optimization and update.
For a clearer understanding of the present application, a data processing system implementing it is first described. As shown in FIG. 1, the system comprises a server 10 and a terminal cluster; the cluster may comprise one or more terminals, and the number of terminals is not limited here. As shown in FIG. 1, the terminal cluster may specifically include terminal 1, terminal 2, ..., terminal n; it will be appreciated that terminal 1 through terminal n may all be network-connected to the server 10, so that each terminal can interact with the server 10 via the network connection.
Each terminal is provided with one or more target applications, where a target application is an application with a head-mesh-model reconstruction function; for example, it may be an independent application program, a web application, or an applet within a host application, and may specifically be an application for producing animated videos or games. The server 10 is a device that provides back-end services for the target application in a terminal. In one embodiment, the server 10 may obtain an initial object mesh model of a target object, correct the initial head shape data and initial head texture data in the initial object mesh model according to multiple images to obtain target head shape data and target head texture data, and optimize and update the head meshes in the initial object mesh model according to the target head shape data and target head texture data to obtain an optimized target object mesh model. Further, the server may render the optimized target object mesh model to obtain a target image that includes the target object, and send the target image to a terminal, which displays it.
In the application, the target object is an object whose head is to be reconstructed, and a sample object is an object whose head data (such as labeled head contour data) are used for training the initial head reconstruction model. An object may be a user, an animal, or the like in the real world, or a virtual object in a virtual world, such as a game character or an animated character; such virtual objects include virtual persons, virtual animals, and other virtual objects with heads.
In the game and three-dimensional modeling industries, an object can be approximately represented by a mesh model (also called a three-dimensional mesh or 3D model) composed of meshes such as triangle meshes or quadrilateral meshes. The process of using meshes to build an object is called 3D modeling. The most basic primitive in the 3D world is the triangle. The 3D models we usually see are hollow: essentially, each is only a closed surface. From a storage perspective, a mesh model consists only of vertices, with neither "faces" nor "volumes" stored explicitly: since a plane can be defined by 3 points, and a solid can be defined by a closed surface, no additional information needs to be stored, which maximizes compression. So although a 3D mesh model appears to be composed of triangles, in storage these are all just points. An object mesh model reflects information such as the body shape, pose, expression, facial color and clothing color of an object. The initial object mesh model is an object mesh model whose accuracy is below an accuracy threshold, i.e., the head meshes in the initial object mesh model generally lack part of the object's detail information. The initial object mesh model may be created manually, obtained by scanning the object (e.g., laser scanning), or generated from a small number of images that include the object. The target object mesh model in the application is obtained by optimizing and updating the head meshes in the initial object mesh model of the target object, and the predicted object mesh model is obtained by optimizing and updating the head meshes in the initial object mesh model of the sample object.
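The storage argument above — a mesh stores only vertices plus index triples, with no explicit face geometry — can be illustrated with a minimal tetrahedron (a hypothetical example, not the patent's data format):

```python
# A closed surface made of 4 triangles over 4 shared vertices.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
# Each face is just 3 indices into the vertex list: no face geometry
# is stored, and shared vertices are never duplicated.
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Numbers stored with indexing: 3 coordinates per vertex + 3 indices per face.
stored_indexed = 3 * len(vertices) + 3 * len(faces)
# Numbers stored without indexing: 3 full vertices (9 coordinates) per face.
stored_unindexed = 9 * len(faces)
print(stored_indexed, stored_unindexed)  # → 24 36
```

Even for this tiny model, the indexed representation stores 24 numbers instead of 36; the saving grows with mesh size because real meshes share each vertex among many triangles.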
In the present application, the head reconstruction model is an algorithm for optimizing and updating the head meshes in an initial object mesh model; it may be a multi-layer perceptron model, a support vector machine (SVM) model, a convolutional neural network (CNN) model, a fully convolutional network (FCN) model, or the like. The initial head reconstruction model is the head reconstruction model to be trained, and the target head reconstruction model is obtained by training the initial head reconstruction model.
It can be understood that the server may be an independent physical server, a server cluster or distributed system formed by at least two physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms. A terminal may specifically be a vehicle-mounted terminal, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a screen speaker, a smart watch, or the like, but is not limited thereto. Terminals and servers may be connected directly or indirectly through wired or wireless communication, and the number of terminals and the number of servers may each be one or at least two; neither is limited here.
The data processing system in FIG. 1 may be used to implement the data processing method of the present application. As shown in FIGS. 2 and 3, the server 20a in FIGS. 2 and 3 may be the server 10 in FIG. 1, and the terminal 21a in FIG. 3 may be any terminal in the terminal cluster in FIG. 1. FIGS. 2 and 3 illustrate, as an example, the optimization and update of the head meshes in the initial object mesh model of an object A (i.e., a target object); object A may be a game character in a game scene, a character in an animated video, or an actor in a movie. As shown in FIG. 2, the server 20a may acquire K images including object A, captured from different perspectives of object A and labeled image 1, image 2, ..., image K. Further, the server 20a may acquire an initial object mesh model of object A, which may be obtained by scanning object A, constructed from a small number of images including object A, or created manually.
In particular, when object A is a game character or an animated character in a game scene, the server 20a may obtain an object model of object A (for example, produced by the server 20a through three-dimensional modeling based on object features of A such as size and color) and perform operations such as rotation on the object model to obtain K images including object A, i.e., K images captured from different perspectives of object A. When object A is an actor in a movie, the computer device may photograph object A from different perspectives in the real world to obtain the K images.
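The "rotate the object model to obtain K views" step can be sketched as follows (a hypothetical helper; a real renderer would consume each rotated copy of the model to produce one of the K images):

```python
import math

def rotate_y(point, angle):
    # Rotate a 3D point about the y (vertical) axis.
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def capture_views(model_points, k):
    # One rotated copy of the model per view, with the k view angles
    # spread evenly around the full circle.
    views = []
    for i in range(k):
        angle = 2 * math.pi * i / k
        views.append([rotate_y(p, angle) for p in model_points])
    return views

# Four viewpoints 90 degrees apart around a single-point "model".
views = capture_views([(1.0, 0.0, 0.0)], k=4)
print(views[2][0][0])  # → -1.0 (the 180-degree view flips x)
```

Capturing evenly spaced angles this way is one simple policy; in practice the views could also include different elevations or poses, as the embodiments describe.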
Then, the server 20a may perform head feature extraction on image 1 to obtain candidate head shape feature 1 and candidate head texture feature 1 of object A, perform head feature extraction on image 2 to obtain candidate head shape feature 2 and candidate head texture feature 2, and so on, until a candidate head texture feature and a candidate head shape feature have been obtained for each of the K images. Because the K images are captured from different perspectives of object A, they capture the head features of object A under different views; that is, the K candidate head shape features and K candidate head texture features reflect the head features of object A more comprehensively and accurately. Based on this, the server 20a may correct the initial head texture feature according to the K candidate head texture features to obtain the target head texture feature of object A, and similarly correct the initial head shape feature according to the K candidate head shape features to obtain the target head shape feature of object A. Correcting the initial features based on the K candidates improves the accuracy of the corrected target head shape feature and target head texture feature, which together provide rich head features of object A.
As shown in FIG. 3, after obtaining the target head shape feature and the target head texture feature, the server 20a may generate a target head mesh of object A from them, and optimize and update the head meshes in the initial object mesh model according to the target head mesh, obtaining an optimized target object mesh model. The target object mesh model reflects the head features of object A more accurately; that is, it reflects the detail features of A's head. After obtaining the target object mesh model, the computer device may render it to obtain an image 22a including object A, and send image 22a to the terminal 21a, which may display it. Comparing any of the K images in FIG. 2 with image 22a in FIG. 3, image 22a provides more detailed features of object A's face, improving the fidelity of the image.
Further, please refer to FIG. 4, which is a flowchart of a data processing method according to an embodiment of the present application. As shown in FIG. 4, the method may be performed by any terminal in the terminal cluster in FIG. 1, by the server in FIG. 1, or cooperatively by a terminal and the server; the apparatus performing the data processing method in the present application is collectively referred to as a computer device. The method may comprise the following steps:
S101: acquire an initial object mesh model to be optimized and K images including a target object, where the initial object mesh model includes M head meshes carrying initial head shape data and initial head texture data of the target object, and K and M are both integers greater than 1.
In the application, the computer device can acquire an initial object grid model to be optimized for reflecting the target object. The initial object grid model of the target object is generated based on a small number of images comprising the target object, or is artificially created, or is obtained by scanning the target object; usually, the detail information of the head of the target object is absent in the initial object grid model. For example, the computer device may scan the target object from different perspectives for a preset period of time, resulting in tens of thousands, hundreds of thousands, or even more scan points for the target object, such as 100000 to 250000 scan points, from which an initial object grid model for the target object is derived. Thus, the initial object mesh model may be referred to as the object mesh model to be optimized. The initial object mesh model may include M head meshes of the target object, where the M head meshes have the initial head shape data and the initial head texture data of the target object, that is, the M head meshes are used to reflect the initial head shape data and the initial head texture data of the target object. The initial head shape data reflects the head pose, facial expression, size, shape, and the like of the target object, and the initial head texture data reflects the head color information of the target object.
Further, the computer device may acquire K images including the target object, where the K images may be obtained by photographing the target object from a plurality of different angles, for example, the K images include images obtained by photographing the front, side, back, and the like of the target object; or, the K images are obtained by shooting the target object in different postures, for example, the K images are obtained by shooting the target object in the postures of head elevation, head lowering and the like, and richer detail information can be provided for the head of the target object through the K images.
It should be noted that the M head grids may be triangular, quadrilateral, or the like in shape, and each head grid includes a plurality of head vertices: if a certain head grid is a triangle, the head grid has three head vertices, and if a certain head grid is a quadrilateral, the head grid has four head vertices. At the same time, any two adjacent head grids of the M head grids share one or more head vertices.
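As a hypothetical illustration of vertex sharing (the face and vertex ids below are made up), the number of head grids sharing each head vertex, which is later used to build the grid number matrix, can be counted as:

```python
# Hypothetical sketch: count how many head grids (faces) share each head
# vertex in a tiny triangular mesh. Face/vertex ids are illustrative only.
from collections import Counter

def shared_grid_counts(faces):
    """Return {vertex_id: number of faces that share this vertex}."""
    counts = Counter()
    for face in faces:
        for v in face:
            counts[v] += 1
    return dict(counts)

# Two adjacent triangles sharing the edge between vertices 1 and 2.
faces = [(0, 1, 2), (1, 2, 3)]
counts = shared_grid_counts(faces)
# Vertices 1 and 2 lie on both triangles, vertices 0 and 3 on one each.
```

Adjacent grids sharing one or more vertices is exactly what makes these counts greater than 1.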
Optionally, the head mesh in the initial object mesh model is obtained based on principal component analysis (Principal Component Analysis, PCA), so that the number of vertices of the head mesh in the initial object mesh model can be reduced, for example, the number of vertices in the initial object mesh model is reduced to 236, so that the calculation amount in the subsequent processing process can be reduced.
Optionally, the computer device may shoot the target object to obtain a plurality of initial images including the target object, where the number of initial images is greater than K, and input the plurality of initial images into a face recognition model to obtain the face integrity of the target object in each initial image. The face integrity reflects whether the face of the target object is occluded by objects such as hair or earrings, or affected by a poor shooting angle; that is, the higher the face integrity, the more head (i.e., face) information is presented in the initial image, and the lower the face integrity, the less head (i.e., face) information is presented in the initial image. Thus, the computer device may screen out, from the plurality of initial images, K initial images whose face integrity is greater than an integrity threshold, as the K images comprising the target object; typically, the K images may have a resolution greater than a resolution threshold, such as 300x300.
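The screening step can be sketched as follows; this is a hypothetical illustration in which the integrity scores, image names, and sizes are made up rather than produced by a real face recognition model:

```python
# Hypothetical sketch of the screening step: keep initial images whose
# face-integrity score exceeds a threshold and whose resolution exceeds
# 300x300. All scores and sizes below are illustrative.

def screen_images(images, integrity_threshold=0.8, min_size=(300, 300)):
    """images: list of dicts with 'integrity' and 'size' (width, height)."""
    return [
        img for img in images
        if img["integrity"] > integrity_threshold
        and img["size"][0] > min_size[0]
        and img["size"][1] > min_size[1]
    ]

initial_images = [
    {"name": "front", "integrity": 0.95, "size": (512, 512)},
    {"name": "occluded", "integrity": 0.40, "size": (512, 512)},  # hair over face
    {"name": "low_res", "integrity": 0.90, "size": (128, 128)},   # below threshold
]
k_images = screen_images(initial_images)
# Only the "front" image survives both filters.
```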
S102, extracting head features of the K images to obtain K candidate head shape data and K candidate head texture data of the target object.
In the application, the computer device can call a target head reconstruction model to respectively extract head features of the K images to obtain K candidate head shape data and K candidate head texture data of the target object. Specifically, the computer device can call a feature processing network in the target head reconstruction model to respectively extract head features of the K images to obtain the K candidate head shape data and the K candidate head texture data of the target object.
It should be noted that, the target head reconstruction model is obtained by training an initial head reconstruction model, the target head reconstruction model includes a feature processing network and an optimization updating network, the feature processing network is used for extracting head features in an image, and correcting initial head shape features and initial head texture features; the optimization updating network is used for carrying out optimization updating on the head grids in the initial object grid model.
It is to be understood that each of the K candidate head shape data includes candidate mesh shape data of a plurality of candidate head meshes; the candidate mesh shape data is the three-dimensional position information of the head vertices of the candidate head meshes, noted as candidate three-dimensional position information, and is used to reflect the head pose, facial expression, size, shape, and the like of the target object. The numbers of candidate head meshes reflected by the K candidate head shape data may be the same or different. Since the K candidate head shape data are obtained based on K images obtained by photographing the target object from different viewpoints, or K images obtained by photographing the target object in different postures, the K candidate head shape data can more comprehensively reflect the head shape features of the target object. The present application is mainly described by taking the case that the number of candidate head meshes reflected by the K candidate head shape data is the same, and the number of head meshes in the initial head shape data is the same as the number of candidate head meshes.
S103, correcting the initial head shape data of the target object according to the K candidate head shape data to obtain target head shape data of the target object.
In the application, since the K candidate head shape data are extracted from the K images, the K candidate head shape data can provide more shape detail information about the head of the target object, so that the computer equipment can correct the initial head shape data of the target object according to the K candidate head shape data to obtain the target head shape data of the target object, and the accuracy of the target head shape data can be improved.
Alternatively, the computer device may perform linear combination (e.g., weighted summation) on the K candidate head shape data and the initial head shape data to obtain target head shape data of the target object.
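A minimal sketch of this linear combination, assuming weights that sum to 1 and toy (V, 3) vertex-position arrays (the weight values are illustrative; the patent leaves them open):

```python
# Hypothetical sketch of the linear combination: weighted summation of the
# K candidate head shape data and the initial head shape data.
import numpy as np

def combine_shape_data(initial, candidates, w_init, w_cands):
    """All inputs are (V, 3) vertex-position arrays; weights must sum to 1."""
    assert abs(w_init + sum(w_cands) - 1.0) < 1e-9
    out = w_init * initial
    for w, cand in zip(w_cands, candidates):
        out = out + w * cand
    return out

initial = np.zeros((4, 3))                       # toy initial head shape data
candidates = [np.ones((4, 3)), 3 * np.ones((4, 3))]  # K=2 candidates
target = combine_shape_data(initial, candidates, w_init=0.5, w_cands=[0.25, 0.25])
# Each vertex coordinate becomes 0.5*0 + 0.25*1 + 0.25*3 = 1.0.
```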
Optionally, the initial head shape data includes initial mesh shape data of a head mesh n of the M head meshes, the initial mesh shape data of the head mesh n including initial three-dimensional position information of head vertices of the head mesh n of the target object, and n is a positive integer less than or equal to M. The K candidate head shape data include candidate mesh shape data of a plurality of candidate head meshes, respectively, the candidate mesh shape data including candidate three-dimensional position information of head vertices of the candidate head meshes of the target object.
Optionally, the correcting the initial head shape data of the target object according to the K candidate head shape data to obtain target head shape data of the target object includes: the computer equipment can respectively acquire candidate grid shape data of candidate head grids with position matching relation with the head grid n from the K candidate head shape data to acquire K candidate grid shape data; the candidate head mesh having a positional matching relationship with the head mesh n may be referred to herein as: the candidate head mesh, of the plurality of candidate head meshes, having a distance to the head mesh n smaller than a distance threshold, may be determined from candidate three-dimensional position information of the candidate head mesh and initial three-dimensional position information of the head mesh n. Further, the K candidate mesh shape data are subjected to an averaging process to obtain average head shape data corresponding to the head mesh n, wherein the average head shape data reflect three-dimensional position information of center points corresponding to the L vertices in the head mesh n, that is, the average head shape data includes position values of the center points corresponding to the L vertices of the head mesh n in the X, Y, Z axis direction. Further, the mesh number of the head meshes shared by the L head vertices, respectively, is obtained, the L head vertices being head vertices for constituting the head mesh n, L being an integer greater than 1, and the shape offset of the head mesh n is determined from the mesh number of the head meshes shared by the L head vertices, respectively, and the initial mesh shape data of the head mesh n, the shape offset of the head mesh n being a positional offset between the position of the head vertex of the head mesh n and the position of the center point of the head mesh n. 
The computer device may then determine target mesh shape data for the head mesh n based on the shape offset for the head mesh n and the average head shape data corresponding to the head mesh n. For example, the sum of the shape offset of the head mesh n and the average shape data corresponding to the head mesh n may be used to determine the target head shape data of the head mesh n, that is, the updated three-dimensional position information of the head mesh n is obtained according to the sum of the corresponding position offsets of the head vertices of the head mesh n and the three-dimensional position information of the center point indicated by the average head shape data, and the updated three-dimensional position information of the head mesh n is determined as the target mesh shape data of the head mesh n. Repeating the steps until the target grid shape data corresponding to the M head grids respectively are obtained, and taking the target grid shape data corresponding to the M head grids respectively as target head shape data of the target object. The K candidate head shape data are subjected to averaging processing to obtain target head shape data of the target object, so that the problem that the accuracy of the obtained target head shape data is low due to errors of single candidate head shape data can be avoided, and the accuracy of obtaining the target head shape data is improved.
For example, the computer device may calculate the average head shape data corresponding to the head mesh n in the following manner: assuming that the L head vertices of the head mesh n include head vertices A, B, C, the computer device may obtain, from the K candidate mesh shape data, the candidate head vertices having a position matching relationship with the head vertex A, obtaining K first candidate head vertices; a candidate head vertex having a position matching relationship with the head vertex A may refer to a head vertex, in the candidate head mesh indicated by the candidate mesh shape data, whose distance from the head vertex A is smaller than the distance threshold. The computer device performs an averaging operation on the candidate three-dimensional positions of the K first candidate head vertices in the X-axis direction to obtain the position value of the center point corresponding to the head vertex A in the X-axis direction, performs an averaging operation on the candidate three-dimensional positions of the K first candidate head vertices in the Y-axis direction to obtain the position value of the center point corresponding to the head vertex A in the Y-axis direction, and performs an averaging operation on the candidate three-dimensional positions of the K first candidate head vertices in the Z-axis direction to obtain the position value of the center point corresponding to the head vertex A in the Z-axis direction; the position values in the three axis directions are determined as the three-dimensional position information of the center point corresponding to the head vertex A.
And by analogy, calculating the three-dimensional position information of the center point corresponding to the head vertex B and the three-dimensional position information of the center point corresponding to the head vertex C, and determining the three-dimensional position information of the center points corresponding to the head vertices A, B, C as the average head shape data corresponding to the head grid n.
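The per-axis averaging over the K matched candidate vertices can be sketched as follows (a hypothetical example with toy coordinates; the position-matching step is assumed to have been done already):

```python
# Hypothetical sketch of the averaging step: the centre point for each head
# vertex of grid n is the per-axis mean of the K matched candidate vertices.
import numpy as np

def average_head_shape(matched_candidates):
    """matched_candidates: (K, L, 3) array holding the positions of the K
    candidate grids' vertices matched to the L vertices of head grid n."""
    return matched_candidates.mean(axis=0)  # (L, 3) centre points

# K=2 candidate grids, L=3 vertices (A, B, C), toy coordinates in mm.
matched = np.array([
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
    [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0]],
])
centres = average_head_shape(matched)
# Vertex A's centre point is ((1+3)/2, (2+4)/2, (3+5)/2) = (2, 3, 4).
```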
Optionally, the initial mesh shape data of the head mesh n includes the initial three-dimensional position information of the L head vertices; the determining a shape offset of the head mesh n according to the mesh number of the head meshes respectively shared by the L head vertices and the initial mesh shape data of the head mesh n includes: the computer device may generate a mesh number matrix for reflecting the mesh number of the head meshes to which the L head vertices of the head mesh n are respectively shared. Then, the computer device may generate an initial position matrix for reflecting the initial three-dimensional position information of the L head vertices, the initial position matrix including the initial three-dimensional position information of the L head vertices, and generate a covariance matrix corresponding to the head mesh n according to the initial position matrix; the covariance matrix is used to reflect a linear relationship between the initial three-dimensional positions of the L head vertices, the linear relationship comprising a linear positive correlation and a linear negative correlation. Then, transformation processing is carried out on the covariance matrix corresponding to the head grid n to obtain an eigenvalue pair corresponding to the head grid n; the eigenvalue pair corresponding to the head grid n comprises an eigenvalue and an eigenvector reflecting the shape change feature of the head grid n. The eigenvalue here reflects the degree of the shape offset of the head grid n, and the eigenvector reflects the offset direction of the head grid n, which is the direction of the normal line of the head grid n.
Product processing is then carried out on the eigenvalue and the eigenvector in the eigenvalue pair corresponding to the head grid n and the grid number matrix, to obtain the shape offset of the head grid n.
For example, if the L head vertices include head vertices A, B, C, all belonging to the vertices of the head grid n, where head vertex A is shared by 3 head grids (i.e., head vertex A belongs to the vertices of 3 head grids, one of which is the head grid n), head vertex B is shared by 2 head grids, and head vertex C is shared by 4 head grids, then the grid number matrix corresponding to the head grid n is N = [3, 2, 4]. Suppose the initial three-dimensional position information of the head vertices A, B, C is (1, 2, 3), (4, 5, 6), (7, 8, 9), respectively, in mm; the initial position matrix corresponding to the head grid n is then P = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]. The computer device can obtain the mean of each dimension, mean = [4, 5, 6], and center the initial position matrix, i.e., subtract the mean of the corresponding dimension from each element of the initial position matrix, to obtain the processed matrix centered_P = [[-3, -3, -3], [0, 0, 0], [3, 3, 3]]. From this processed matrix, the covariance matrix can be calculated as C = (1/L) · centered_P^T · centered_P, where centered_P^T is the transpose of the processed matrix; here C is a 3×3 matrix whose entries are all 6. Since the elements in the covariance matrix are positive numbers, there is a linear positive correlation between the initial three-dimensional position information of the head vertices A, B, C. The computer device can then solve C · v = λ · v to calculate the eigenvalues and eigenvectors of the covariance matrix, where v is the eigenvector of the covariance matrix corresponding to the head grid n and λ is the eigenvalue of the covariance matrix corresponding to the head grid n. Product processing is carried out on the eigenvalue and the eigenvector in the eigenvalue pair corresponding to the head grid n and the grid number matrix, to obtain the shape offset of the head grid n. For example, the computer device may calculate the shape offset of the head grid n using the following formula (1):
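The worked example above can be checked numerically as follows (the 1/L normalisation of the covariance matrix is an assumption recovered from context; NumPy's `eigh` returns eigenvalues in ascending order):

```python
# Numerical check of the worked example: centre the initial position matrix,
# form the covariance matrix, and take its eigen-decomposition.
import numpy as np

P = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])        # rows: head vertices A, B, C (mm)
mean = P.mean(axis=0)                   # per-dimension mean: [4, 5, 6]
centered_P = P - mean                   # centred position matrix
C = centered_P.T @ centered_P / P.shape[0]   # 3x3 covariance matrix (1/L factor assumed)
eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
lam = eigvals[-1]                       # dominant eigenvalue
v = eigvecs[:, -1]                      # corresponding eigenvector
# All entries of C are non-negative here, indicating a linear positive
# correlation between the vertices' initial positions.
```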
Δn = λ · (v ⊙ N)    (1)

wherein, in formula (1), Δn is the shape offset of the head grid n, v is the eigenvector of the covariance matrix corresponding to the head grid n, N is the grid number matrix corresponding to the L head vertices of the head grid n, λ is the eigenvalue of the covariance matrix corresponding to the head grid n, and ⊙ denotes the element-wise product.
For example, the computer device may calculate target head shape data of the target object using the following equation (2):
S′n = S̄n + Δn    (2)

wherein, in formula (2), S′n represents the target mesh shape data of the head mesh n, that is, the data reflecting the head shape of the target object, S̄n represents the average shape data corresponding to the head mesh n, and Δn is the shape offset of the head mesh n obtained by formula (1).
It should be noted that, when the covariance matrix has a plurality of eigenvalue pairs, the computer device may calculate a shape offset for each eigenvalue pair according to formula (1), and perform weighted summation processing on the shape offsets corresponding to the eigenvalue pairs to obtain the shape offset of the head grid n.
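A hedged sketch of formulas (1) and (2) with one or more eigenvalue pairs (the element-wise product, the uniform weight, and all numeric values are assumptions for illustration):

```python
# Hypothetical sketch: each eigenvalue pair contributes lam * (v ⊙ N); the
# per-pair offsets are combined by weighted summation, then added to the
# average shape data per formula (2).
import numpy as np

def shape_offset(eig_pairs, grid_counts, weights):
    """eig_pairs: list of (eigenvalue, eigenvector); grid_counts: matrix N."""
    total = np.zeros_like(grid_counts, dtype=float)
    for w, (lam, v) in zip(weights, eig_pairs):
        total += w * lam * v * grid_counts   # formula (1) for one pair
    return total

N = np.array([3.0, 2.0, 4.0])            # grid number matrix for vertices A, B, C
pairs = [(18.0, np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0))]  # from the worked example
offset = shape_offset(pairs, N, weights=[1.0])
avg_shape = np.array([2.0, 3.0, 4.0])    # toy average shape data for grid n
target_shape = avg_shape + offset        # formula (2)
```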
S104, correcting the initial head texture data of the target object according to the K candidate head texture data to obtain target head texture data of the target object.
In the application, since the K candidate head texture data are extracted from the K images, the K candidate head texture data can provide more texture detail information about the head of the target object, so that the computer equipment can correct the initial head texture data of the target object according to the K candidate head texture data to obtain the target head texture data of the target object, and the accuracy of the target head texture data can be improved.
Optionally, step S104 includes: the computer device may invoke the feature processing network to linearly combine the K candidate head texture data and the initial head texture data to obtain target head texture data for the target object. For example, the computer device may invoke the feature processing network to obtain weights corresponding to the K candidate head texture data and the initial texture data, and perform weighted summation processing on the K candidate head texture data and the initial head texture data according to the weights, so as to obtain target head texture data of the target object. The weights corresponding to the K candidate head texture data may be determined by image attributes of the corresponding image, where the image attributes include at least one or more of image sharpness, face integrity, and the like, for example, the higher the face integrity is, the higher the weights corresponding to the candidate head texture data extracted from the image are; the lower the facial integrity, the lower the weight corresponding to the candidate head texture data extracted from the image. The weight corresponding to the initial head texture data may be determined according to the accuracy of the initial object mesh model, i.e. the higher the accuracy of the initial object mesh model, the higher the weight corresponding to the initial head texture data; conversely, the lower the accuracy of the initial object mesh model, the lower the weight corresponding to the initial head texture data. The sum between the weights corresponding to the K candidate head texture data and the weights corresponding to the initial head texture data is 1. 
Alternatively, the computer device may perform an averaging process on the K candidate head texture data to obtain average texture data, and perform a summation process (or a weighted summation process) on the average texture data and the initial head texture data to obtain the target head texture data of the target object.
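The weighting scheme of step S104 can be sketched as follows; this is a hypothetical illustration in which the fixed weight 0.2 for the initial texture data and the integrity scores are made-up values, with the candidate weights derived from face integrity so that all weights sum to 1:

```python
# Hypothetical sketch: derive weights for the K candidate head texture data
# from each image's face integrity, reserve a fixed weight for the initial
# head texture data, and take the weighted sum.
import numpy as np

def combine_textures(initial_tex, candidate_texs, integrities, w_init=0.2):
    integ = np.asarray(integrities, dtype=float)
    w_cands = (1.0 - w_init) * integ / integ.sum()  # candidate weights sum to 1 - w_init
    out = w_init * initial_tex
    for w, tex in zip(w_cands, candidate_texs):
        out = out + w * tex
    return out

initial_tex = np.full((2, 2, 3), 0.5)            # toy RGB texture
cands = [np.zeros((2, 2, 3)), np.ones((2, 2, 3))]  # K=2 candidate textures
target_tex = combine_textures(initial_tex, cands, integrities=[0.9, 0.9])
# Every channel becomes 0.2*0.5 + 0.4*0 + 0.4*1 = 0.5.
```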
For example, as shown in FIG. 5, the target head reconstruction model may include a feature processing network and an optimization update network; the feature processing network may include a multi-layer perceptron encoder, and the optimization update network may include a decoder. The multi-layer perceptron encoder may include a plurality of neural network layers and convolutional layers; as shown in fig. 5, it includes a neural network layer 50a, a first intermediate layer 51a, a second intermediate layer 52a, a third intermediate layer 53a, and a fourth intermediate layer 54a, each intermediate layer including a neural network layer and a convolutional layer. The computer device may obtain the target head shape feature and the target head texture feature of the target object through the multi-layer perceptron encoder. Taking the target object as a virtual character b as an example, the computer device may obtain the image p1 of the virtual character b and the initial object grid model p2 of the virtual character b. The image p1 and the initial object grid model p2 may be input to the neural network layer 50a, and the neural network layer 50a performs head feature extraction on the image p1 to obtain a head feature map b1 of size (H/2) × (W/2) × 64, where H/2 is the height of the head feature map b1, W/2 is its width, and 64 is its number of channels, i.e., the dimension of the vector describing the color of one pixel; (H/2) × (W/2) describes the resolution of the head feature map b1, that is, its number of pixels, and each pixel's color is described by a 64-dimensional vector. At the same time, the width and height of the head feature map b1 are half the width and height of the image p1, respectively. Further, the first intermediate layer 51a is called to perform head feature extraction on the head feature map b1 to obtain a head feature map b2, whose width and height are one quarter of the width and height of the image p1. The second intermediate layer 52a is called to extract head features from the head feature map b2 to obtain the head feature map b3, whose width and height are one eighth of those of the image p1. The third intermediate layer 53a is called to extract head features from the head feature map b3 to obtain the head feature map b4, whose width and height are one sixteenth of those of the image p1. The head shape feature in the head feature map b4 may be referred to as a candidate head shape feature corresponding to the image p1, and the head texture feature in the head feature map b4 may be referred to as a candidate head texture feature corresponding to the image p1. Repeating the above steps for the K images gives candidate head shape features and candidate head texture features corresponding to each image; the fourth intermediate layer 54a is then called to correct the initial head shape features and initial head texture features of the initial object grid model p2 according to the head feature maps b4 of the K images, obtaining a head feature map b5 that includes the target head shape features and target head texture features, whose width and height are one thirty-second of those of the image p1.
It should be noted that, the neural network layer in the multi-layer perceptron encoder is fully connected with the first intermediate layer and each intermediate layer, wherein each intermediate layer can also be called a perceptron, and the output of each perceptron can be represented by the following formula (3):
o_x = f(w_x · o_{x−1} + b_x)    (3)

wherein, in formula (3), o_x represents the output of the x-th perceptron, o_{x−1} represents the output of the (x−1)-th perceptron, f is an activation function and may be a nonlinear activation function, w_x represents the weight, and b_x represents the bias; the weight and the bias belong to the model parameters of the x-th layer perceptron, and these model parameters are obtained by adjustment during the training process of the initial head reconstruction model. For example, the weight of each layer perceptron can be calculated according to the following formula (4):
w = argmin_w L(w)    (4)

wherein, in formula (4), L(w) represents the loss function of the initial head reconstruction model; formula (4) represents that the weight w can be calculated by minimizing this loss function.
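As a hedged illustration of formula (3), a single perceptron layer might be sketched as follows; using ReLU as the nonlinear activation and the toy weights below are assumptions, not values taken from the patent:

```python
# Hypothetical sketch of formula (3): the output of the x-th perceptron is a
# nonlinear activation applied to the weighted previous output plus a bias.
import numpy as np

def perceptron(prev_output, weight, bias):
    """o_x = f(w @ o_{x-1} + b), with f = ReLU (an assumed activation)."""
    return np.maximum(0.0, weight @ prev_output + bias)

o_prev = np.array([1.0, -2.0])        # toy output of the previous perceptron
w = np.array([[1.0, 0.0],
              [0.0, 1.0]])            # toy weight matrix
b = np.array([0.5, 0.5])              # toy bias
o_x = perceptron(o_prev, w, b)
# w @ o_prev + b = [1.5, -1.5]; ReLU clamps the negative entry to 0.
```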
S105, optimizing and updating the M head grids in the initial object grid model of the target object according to the target head shape data and the target head texture data to obtain an optimized target object grid model.
In the application, the computer equipment can optimize and update M head grids in the initial object grid model of the target object according to the target head shape data and the target head texture data to obtain an optimized target object grid model, and the target object grid model can provide more detail information about the head of the target object relative to the initial object grid model, so the target object grid model can also be called as an optimized object grid model.
Optionally, the optimizing updating is performed on the M head grids in the initial object grid model according to the target head shape data and the target head texture data to obtain an optimized target object grid model, which includes: the computer equipment can call an optimization updating network of the target head reconstruction model, and fusion is carried out on the target head shape data and the target head texture data to obtain target fusion characteristics, namely the target head shape data and the target head texture data are added into the same matrix to obtain the target fusion characteristics. Further, up-sampling is performed on the target fusion feature to obtain target head depth data of the target object, wherein the target head depth data is used for reflecting the vertical distance from the head of the target object to the camera imaging plane. Further, the optimization updating network is called, and a target head grid reflecting the head of the target object is generated according to the target head shape data, the target head texture data and the target head depth data, namely, the target head shape data is used for limiting the shape, the size and the position of the target head grid, the target head texture data is used for limiting the type of the color of the target head grid, such as red, yellow and green, and the target head depth data is used for limiting the brightness of the color of the target head grid. Further, the M head grids in the initial object grid model are replaced by the target head grid, and an optimized target object grid model is obtained. And optimizing and updating the head grids in the initial object grid model through the target head shape data, the target head texture data and the target head depth data, so that the accuracy of the head grids in the optimized target object grid model is improved, and the method is favorable for subsequent rendering to obtain a vivid image.
It is understood that the optimization update network may include a decoder, where the decoder may include X upsampling layers, and the computer device may obtain a head feature map including the target fusion feature, and splice and upsample this head feature map with the head feature maps in the encoder through the X upsampling layers to obtain a depth feature map including the target head depth data of the target object.
For example, as shown in fig. 5, the optimization update network may include a decoder that may include an upsampling layer 55a, an upsampling layer 56a, an upsampling layer 57a, an upsampling layer 58a, and an upsampling layer 59a. The computer device may invoke the upsampling layer 55a to upsample the head feature map b5 comprising the target head shape feature and the target head texture feature, obtaining an upsampled feature map c1 whose width and height are one sixteenth of those of the image p1. The upsampling layer 56a is invoked to splice the upsampled feature map c1 and the head feature map b4 to obtain a spliced feature map d1, and to upsample the spliced feature map d1 to obtain an upsampled feature map c2 whose width and height are one eighth of those of the image p1. The upsampling layer 57a is invoked to splice the upsampled feature map c2 and the head feature map b3 to obtain a spliced feature map d2, and to upsample the spliced feature map d2 to obtain an upsampled feature map c3 whose width and height are one quarter of those of the image p1. The upsampling layer 58a is invoked to splice the upsampled feature map c3 and the head feature map b2 to obtain a spliced feature map d3, and to upsample the spliced feature map d3 to obtain an upsampled feature map c4 whose width and height are one half of those of the image p1. The upsampling layer 59a is invoked to splice the upsampled feature map c4 and the head feature map b1 to obtain a spliced feature map d4, and the spliced feature map d4 is subjected to maximum pooling processing to obtain a depth feature map p3 comprising the target head depth data of the target object. The length and width of the depth feature map p3 are the same as those of the original image (such as the image p1), that is, the depth feature map p3 is restored to the resolution of the original image, avoiding the information loss caused by the feature extraction process and improving the accuracy of acquiring the target head depth data.
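The decoder's splice-and-upsample pattern can be sketched as follows; this is a hypothetical illustration in which nearest-neighbour upsampling, the channel counts, and the toy 128x128 base resolution are assumptions (the real layers would also apply learned convolutions):

```python
# Hypothetical sketch of one decoder step: concatenate the current map with
# the matching encoder map along the channel axis, then double the spatial
# resolution. Arrays are (H, W, C) toys; no learned weights are modelled.
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def splice_and_upsample(current, encoder_map):
    spliced = np.concatenate([current, encoder_map], axis=2)  # channel concat
    return upsample2x(spliced)

b5 = np.zeros((4, 4, 8))    # deepest encoder map: 1/32 of a toy 128x128 image
c1 = upsample2x(b5)          # 8x8, spatially matches b4
b4 = np.zeros((8, 8, 8))     # encoder skip connection at 1/16 resolution
c2 = splice_and_upsample(c1, b4)   # concat -> (8, 8, 16), upsample -> (16, 16, 16)
```

Splicing with the encoder maps is what lets the decoder recover spatial detail lost during downsampling.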
Optionally, replacing the M head grids in the initial object grid model with the target head grid to obtain an optimized target object grid model includes: the computer device may replace the M head grids in the initial object grid model with the target head grid to obtain a candidate object grid model of the target object, the computer device may further optimize the candidate object grid model, and specifically, the computer device may perform crack detection on the head and the neck of the target object according to the target head grid and the neck grid in the candidate object grid model to obtain a detection result. The detection result reflects that a crack exists between the head and the neck of the target object, or the detection result reflects that no crack exists between the head and the neck of the target object, wherein the crack exists between the head and the neck of the target object, which means that a crack exists at the edge between the head and the neck of the target object, and the no crack exists between the head and the neck of the target object, which means that a crack does not exist at the edge between the head and the neck of the target object. If a crack exists between the head and the neck of the target object, namely the detection result reflects the crack exists between the head and the neck of the target object, the target head grid in the candidate object grid model is corrected, and an optimized target object grid model is obtained. If no crack exists between the head and the neck of the target object, namely the detection result reflects that no crack exists between the head and the neck of the target object, the candidate object grid model is determined to be the optimized target object grid model. 
When a crack is detected between the head and the neck of the target object, the fitting degree between the head and the neck of the target object is improved by correcting the target head grids in the candidate object grid model, so that the object grid model of the target object is more accurate and lifelike.
Optionally, performing crack detection on the head and the neck of the target object according to the target head grid and the neck grid in the candidate object grid model includes: the computer device may perform two-dimensional projection on the head edge grids in the target head grid to obtain a two-dimensional plane comprising the head edge grids. A head edge grid is a grid in the target head grid whose distance from the neck grid is smaller than a first distance threshold, i.e., a head grid adjacent to the neck grid. Further, two-dimensional projection is performed on the neck edge grids in the neck grid of the candidate object grid model to obtain a two-dimensional plane comprising the neck edge grids. A neck edge grid is a grid in the neck grid whose distance from the target head grid is smaller than a second distance threshold, i.e., a neck grid adjacent to the head edge grids. If the two-dimensional plane comprising the head edge grids does not match the two-dimensional plane comprising the neck edge grids, it is determined that a crack exists between the head and the neck of the target object; if the two planes match, it is determined that no crack exists between the head and the neck of the target object.
It should be noted that, the computer device may obtain a plane angle between the two-dimensional plane including the head edge mesh and the two-dimensional plane including the neck edge mesh, and if the plane angle is smaller than the angle threshold value, determine that the two-dimensional plane including the head edge mesh matches the two-dimensional plane including the neck edge mesh; if the plane angle is greater than or equal to the angle threshold, it is determined that the two-dimensional plane including the head edge grid does not match the two-dimensional plane including the neck edge grid. The plane angle between the two-dimensional plane comprising the head edge mesh and the two-dimensional plane comprising the neck edge mesh may refer to: an acute angle between a normal vector of a two-dimensional plane comprising the head edge grid and a normal vector of a two-dimensional plane comprising the neck edge grid.
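The plane-matching test described above can be sketched as follows. This is an illustrative reconstruction in Python, not the patent's implementation, and the default `angle_threshold_deg` value is an assumed placeholder:

```python
import math

def unit(v):
    """Normalize a 3-D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def acute_plane_angle_deg(n1, n2):
    """Acute angle (degrees) between two plane normal vectors."""
    a, b = unit(n1), unit(n2)
    dot = abs(sum(x * y for x, y in zip(a, b)))  # abs() folds an obtuse angle to its acute counterpart
    return math.degrees(math.acos(min(1.0, dot)))

def planes_match(n1, n2, angle_threshold_deg=10.0):
    """The planes match when the angle between their normals is below the threshold."""
    return acute_plane_angle_deg(n1, n2) < angle_threshold_deg
```

Taking the absolute value of the dot product implements the patent's requirement that the plane angle is the acute angle between the two normal vectors, regardless of the normals' orientation.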
Optionally, if a crack is detected between the head and the neck of the target object, correcting the target head mesh in the candidate object mesh model to obtain an optimized target object mesh model includes: if a crack is detected between the head and the neck of the target object, the computer device may acquire the side lengths of the head edge grids in the target head grid, where a head edge grid is a grid in the target head grid whose distance from the neck grid is smaller than a first distance threshold. The computer device may adjust the head edge grids whose side lengths are larger than a side-length threshold to obtain the optimized target object grid model; for example, it may merge head edge grids whose side lengths exceed the threshold, or add new grids inside such head edge grids. In this way, the minimum interior angle of the grids in the target object grid model is maximized, avoiding grids with long sides and small interior angles that cause cracks between the head and the neck of the target object, and improving the smoothness between the head and the neck of the target object.
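The side-length check that decides which head edge grids need adjustment can be sketched as below, assuming triangular grids described by three 3-D vertices; the representation and threshold are illustrative assumptions, not the patent's data structures:

```python
import math

def long_sides(triangle, side_threshold):
    """Return the indices of a triangular grid's sides whose length exceeds the threshold.

    triangle: three 3-D vertices (a, b, c); side i connects vertex i and vertex (i + 1) % 3.
    """
    return [i for i in range(3)
            if math.dist(triangle[i], triangle[(i + 1) % 3]) > side_threshold]

def needs_adjustment(triangle, side_threshold):
    """A head edge grid is adjusted (merged or subdivided) when any side is too long."""
    return bool(long_sides(triangle, side_threshold))
```

Grids flagged by `needs_adjustment` would then be merged with neighbors or subdivided, per the correction step described above.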
Optionally, the computer device may render a target image including the target object according to the target object grid model. Specifically, the computer device may obtain the three-dimensional coordinates of the vertices in the target object grid model and map them to a texture space to obtain the texture coordinates of the vertices. Then, according to the texture data of the grids in the target object grid model, a texture map corresponding to each grid is determined; the size of a texture map matches that of its corresponding grid, and the texture map carries the color of the corresponding grid. Further, according to the texture coordinates of the vertices and the texture maps corresponding to the grids, mapping processing is performed on the grids in the target object grid model: each texture map is aligned with its corresponding grid to obtain the target object grid model after mapping processing. Finally, the computer device may render the mapped target object grid model to obtain a target image including the target object, i.e., the computer device may fine-tune color, brightness, and the like in the mapped model to obtain the target image.
Optionally, mapping the three-dimensional coordinates of the vertices in the target object mesh model to a texture space to obtain texture coordinates of the vertices in the target object mesh model includes: the computer device may obtain three-dimensional coordinates of a target vertex in the target object mesh model, and three-dimensional coordinates corresponding to K adjacent vertices having an adjacent positional relationship with the target vertex, where the target vertex may refer to any vertex in the target object mesh model, and the three-dimensional coordinates of the target vertex may refer to a position of the target vertex in a three-dimensional coordinate system created by using one vertex in the target object mesh model as a coordinate origin. Then, the computer device may determine distances between the target vertex and the K adjacent vertices according to the three-dimensional coordinates corresponding to the K adjacent vertices and the three-dimensional coordinates of the target vertex, and determine texture coordinates of the target vertex according to the distances between the target vertex and the K adjacent vertices, and the three-dimensional coordinates of the K adjacent vertices.
For example, the computer device may calculate texture coordinates of vertices in the target object mesh model using equation (5) as follows:
$$T_v = \sum_{i=1}^{K} \frac{d_i}{\sum_{j=1}^{K} d_j}\, x_i \tag{5}$$

wherein, in formula (5), $v$ refers to the target vertex in the target object mesh model, $T_v$ denotes the texture coordinates of the target vertex, $d_i$ denotes the distance between the target vertex and its $i$-th adjacent vertex, $d_j$ denotes the distance between the target vertex and its $j$-th adjacent vertex, and $x_i$ denotes the three-dimensional coordinates of the $i$-th adjacent vertex of the target vertex.
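Formula (5) can be sketched in Python as a distance-weighted combination of the adjacent vertices' coordinates. This is an illustrative reading of the reconstructed formula, not the patent's code:

```python
def texture_coords(distances, neighbor_coords):
    """Weighted combination per formula (5): each neighbor i gets weight d_i / sum_j d_j.

    distances: distances from the target vertex to each of its K adjacent vertices.
    neighbor_coords: the K adjacent vertices' three-dimensional coordinates.
    """
    total = sum(distances)
    return tuple(
        sum(d / total * coord[axis] for d, coord in zip(distances, neighbor_coords))
        for axis in range(3)
    )
```

With equal distances the result reduces to the centroid of the adjacent vertices, which matches the intuition that texture coordinates of a vertex are interpolated from its neighborhood.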
In the present application, since the K candidate head shape data and the K candidate head texture data are extracted from K images including the target object, i.e., the K candidate head shape data and the K candidate head texture data can provide more detailed information about the head of the target object. Therefore, the initial head shape data in the initial object grid model is corrected through the K candidate head shape data to obtain the target head shape data, and the initial head texture data in the initial object grid model is corrected through the K candidate head texture data to obtain the target head texture data, so that the accuracy of the target head shape data and the target head texture data can be improved. Further, the head grids in the initial object grid model are optimized and updated according to the target head shape data and the target head texture data, so that the accuracy of the head grids in the optimized target object grid model can be improved; in addition, the optimization updating process of the head grid model in the initial object grid model does not need to be manually participated, so that the labor cost can be saved, and the efficiency of acquiring the object grid model is improved.
Further, please refer to fig. 6, which is a flowchart illustrating a data processing method according to an embodiment of the present application. As shown in fig. 6, the method may be performed by any terminal in the terminal cluster in fig. 1, may be performed by a server in fig. 1, or may be performed cooperatively by a terminal and a server in the terminal cluster in fig. 1, and the apparatus for performing the data processing method in the present application may be collectively referred to as a computer apparatus. Wherein, the method can comprise the following steps:
S201, obtaining labeled head contour data of a sample object, an initial object grid model to be optimized for reflecting the sample object, and P images comprising the sample object; the initial object mesh model includes Q head meshes having initial head shape data and initial head texture data of the sample object, P and Q each being an integer greater than 1; the labeled head contour data of the sample object serves as the training supervision.
In the application, the computer device can acquire the labeled head contour data of the sample object, where the labeled head contour data is obtained by manually labeling an image comprising the sample object and reflects the real head contour of the sample object; the head contour refers to a rectangle formed by the head width (the distance between the left and right zygomatic arches) and the head height. Further, an initial object mesh model to be optimized for reflecting the sample object is obtained; the initial object mesh model of the sample object may be generated based on a small number of images comprising the sample object, manually created, or obtained by scanning the sample object. The initial object mesh model typically lacks detailed information about the head of the sample object, and thus may be referred to as the object mesh model to be optimized. The initial object grid model may include Q head grids of the sample object, where the Q head grids have the initial head shape data and initial head texture data of the sample object, i.e., the Q head grids reflect the initial head shape data and initial head texture data of the sample object. The initial head shape data reflects the head pose, facial expression, size, shape, etc. of the sample object, and the initial head texture data reflects the head color information of the sample object. Further, the computer device may acquire P images including the sample object; the P images may be obtained by photographing the sample object from a plurality of different angles (for example, images of the front, side, and back of the sample object) and can provide richer detailed information about the head of the sample object.
S202, calling an initial head reconstruction model, and extracting head characteristics of the P images to obtain P candidate head shape data and P candidate head texture data of the sample object.
In the application, the computer equipment can call an initial head reconstruction model, and head characteristic extraction is carried out on the P images to obtain P candidate head shape data and P candidate head texture data of the sample object, namely, the computer equipment can call a characteristic processing network of the initial head reconstruction model, and head characteristic extraction is carried out on the P images to obtain P candidate head shape data and P candidate head texture data of the sample object.
S203, correcting the initial head shape data of the sample object according to the P candidate head shape data to obtain the predicted head shape data of the sample object.
In the application, since the P candidate head shape data are extracted from the P images, they can provide more shape detail information about the head of the sample object. The computer device can therefore call the feature processing network of the initial head reconstruction model to correct the initial head shape data of the sample object according to the P candidate head shape data, obtaining the predicted head shape data of the sample object and improving its accuracy.
S204, correcting the initial head texture data of the sample object according to the P candidate head texture data to obtain the predicted head texture data of the sample object.
In the application, since the P candidate head texture data are extracted from the P images, they can provide more texture detail information about the head of the sample object. The computer device can therefore call the feature processing network of the initial head reconstruction model to correct the initial head texture data of the sample object according to the P candidate head texture data, obtaining the predicted head texture data of the sample object and improving its accuracy.
And S205, optimizing and updating the Q head grids of the initial object grid model of the sample object according to the prediction head texture data and the prediction head shape data to obtain an optimized prediction object grid model.
In the application, the computer equipment can call an optimization updating network of the initial head reconstruction model, and according to the prediction head shape data and the prediction head texture data, the Q head grids in the initial object grid model of the sample object are optimized and updated to obtain an optimized prediction object grid model. The prediction object mesh model can provide more detailed information about the head of the sample object than the initial object mesh model of the sample object, and thus, the prediction object mesh model may also be referred to as an optimized object mesh model.
S206, training the initial head reconstruction model according to the marked head outline data, the predicted head texture data, the predicted object grid model and the predicted head shape data to obtain a target head reconstruction model; the target head reconstruction model, when invoked, is used to perform the method as described previously.
According to the method, the computer device can train the initial head reconstruction model according to the labeled head contour data, the predicted head texture data, the predicted object grid model, and the predicted head shape data until the trained model meets the training stop condition, thereby obtaining a target head reconstruction model; the target head reconstruction model, when invoked, is used to perform the method described above. The accuracy of the updated object mesh model of the sample object obtained by the target head reconstruction model is greater than that of the predicted object mesh model obtained by the initial head reconstruction model, where the updated object mesh model is obtained by the target head reconstruction model in the same manner as steps S201-S205. The accuracy of the updated object mesh model may be determined based on the updated head contour data determined from the updated object mesh model and the labeled head contour data; the accuracy of the predicted object mesh model may be determined based on the predicted head contour data determined from the predicted object mesh model and the labeled head contour data.
Optionally, training the initial head reconstruction model according to the labeled head contour data, the predicted head texture data, the predicted object grid model, and the predicted head shape data to obtain a target head reconstruction model includes: the computer device may determine the predicted head contour data of the sample object from the predicted object grid model; for example, it may obtain the rectangle formed by the head width (the distance between the left and right zygomatic arch points) and the head height of the sample object from the edge head grids in the predicted object grid model and determine that rectangle as the predicted head contour data of the sample object. Further, a small difference between the labeled head contour data and the predicted head contour data indicates that the predicted object grid model reconstructed by the initial head reconstruction model accurately reflects the head contour of the sample object; conversely, a large difference indicates that the predicted object grid model cannot accurately reflect the head contour of the sample object. Thus, the computer device may obtain the distance between the labeled head contour data and the predicted head contour data and determine a contour prediction error of the initial head reconstruction model from this distance; the contour prediction error reflects the accuracy with which the initial head reconstruction model predicts the head contour of the sample object, i.e., the accuracy of the predicted object grid model obtained based on the initial head reconstruction model.
The distance between the predicted head contour data and the marked head contour data and the contour prediction error have positive correlation, namely the larger the distance is, the larger the contour prediction error is; the smaller the distance, the smaller the contour prediction error. For example, the labeling head profile data includes three-dimensional coordinates of 4 labeling vertices of a rectangle formed by the head width and the head height of the sample object, the predicting head profile data includes three-dimensional coordinates of 4 predicting vertices of a rectangle formed by the head width and the head height of the sample object, and the computer device may calculate a profile prediction error of the initial head reconstruction model by the following formula (6):
$$\mathcal{L}_{con} = \sum_{i=1}^{4} \left\| y_i - \hat{y}_i \right\|_2 \tag{6}$$

wherein, in formula (6), $\mathcal{L}_{con}$ represents the contour prediction error of the initial head reconstruction model, $y_i$ represents the three-dimensional coordinates of the $i$-th labeled vertex, $\hat{y}_i$ represents the three-dimensional coordinates of the $i$-th predicted vertex, and $\|y_i - \hat{y}_i\|_2$ represents the distance between the $i$-th labeled vertex and the $i$-th predicted vertex. Further, the computer device may generate a shape prediction error of the initial head reconstruction model from the predicted head shape data; the shape prediction error reflects the accuracy with which the initial head reconstruction model obtains the predicted head shape data of the sample object. The predicted head texture data comprises texture data corresponding to each grid in the predicted object grid model; the difference between the texture data of every two grids in the predicted object grid model is obtained, and these differences are accumulated to obtain a texture prediction error of the initial head reconstruction model. The texture prediction error reflects the smoothness of the predicted head texture data output by the initial head reconstruction model: the greater the texture prediction error, the lower the smoothness, and the smaller the texture prediction error, the higher the smoothness. The computer device may then adjust the model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error, and the texture prediction error to obtain the target head reconstruction model.
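A minimal sketch of the contour prediction error in formula (6), summing the Euclidean distances over the four corners of the head-contour rectangle (illustrative only, not the patent's implementation):

```python
import math

def contour_prediction_error(labeled_vertices, predicted_vertices):
    """Sum of Euclidean distances between corresponding labeled and predicted
    three-dimensional vertices of the head-contour rectangle (4 corners)."""
    return sum(math.dist(y, y_hat)
               for y, y_hat in zip(labeled_vertices, predicted_vertices))
```

The error is zero when the predicted rectangle coincides with the labeled one and grows linearly with the per-vertex displacement, matching the stated positive correlation between distance and contour prediction error.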
Optionally, the number of sample objects is S, where S is an integer greater than 1; generating a shape prediction error of the initial head reconstruction model according to the predicted head shape data includes: the computer device may determine the similarity between the predicted head shape data of every two of the S sample objects according to a similarity algorithm, such as a cosine similarity algorithm or a Euclidean distance based similarity. Further, the similarities between the predicted head shape data of every two of the S sample objects are accumulated to obtain a similarity sum, and the shape prediction error of the initial head reconstruction model is generated according to the similarity sum. Typically, two different sample objects differ considerably in head shape; thus, the greater the similarity between the predicted head shape data of two different sample objects, the lower the accuracy of the predicted head shape data output by the initial head reconstruction model, and the smaller the similarity, the higher the accuracy. In other words, the similarity sum and the shape prediction error have a positive correlation: the larger the similarity sum, the larger the shape prediction error, and the smaller the similarity sum, the smaller the shape prediction error. Determining the shape prediction error of the initial head reconstruction model through the similarity between the predicted shape data of different sample objects allows a high-accuracy target head reconstruction model to be obtained based on training with the shape prediction error and the other errors.
For example, taking the similarity algorithm as the cosine similarity algorithm as an example, the computer device may calculate the shape prediction error of the initial head reconstruction model by the following formula (7):
$$\mathcal{L}_{shape} = \sum_{i=1}^{S} \sum_{j=i+1}^{S} \frac{s_i \cdot s_j}{\|s_i\| \, \|s_j\|} \tag{7}$$

wherein $\mathcal{L}_{shape}$ represents the shape prediction error of the initial head reconstruction model, $s_i$ represents the predicted head shape data of the $i$-th sample object, $s_j$ represents the predicted head shape data of the $j$-th sample object, and $S$ represents the number of sample objects. Optionally, adjusting the model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error, and the texture prediction error to obtain a target head reconstruction model includes: the computer device may sum the shape prediction error, the contour prediction error, and the texture prediction error to obtain a total prediction error of the initial head reconstruction model, and determine the convergence state of the initial head reconstruction model according to the total prediction error. The convergence state is either converged or unconverged: converged means that the total prediction error of the initial head reconstruction model is smaller than an error threshold, and unconverged means that the total prediction error is greater than or equal to the error threshold; the error threshold may be the minimum prediction error of the initial head reconstruction model, or it may be a preset prediction error threshold. If the initial head reconstruction model is in a converged state, it is determined to be the target head reconstruction model; if it is in an unconverged state, its model parameters are adjusted according to the total prediction error until the adjusted initial head reconstruction model is in a converged state, and the adjusted model is determined to be the target head reconstruction model.
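The shape prediction error of formula (7) — accumulated pairwise cosine similarity between the sample objects' predicted head shape data — can be sketched as follows; representing each object's shape data as a flat numeric vector is an assumption for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flat shape vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def shape_prediction_error(shape_vectors):
    """Accumulate cosine similarity over every pair of the S sample objects."""
    s = len(shape_vectors)
    return sum(cosine_similarity(shape_vectors[i], shape_vectors[j])
               for i in range(s) for j in range(i + 1, s))
```

As the document notes, dissimilar predicted shapes for different sample objects drive this error toward zero, while near-identical predictions inflate it.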
It should be noted that, the predicted total error of the initial head reconstruction model may be calculated based on the loss function in the above formula (4).
According to the application, the accuracy of the target head reconstruction model obtained by training is improved by training the initial head reconstruction model, so that the head grid in the initial object grid model can be automatically optimized and updated through the target head reconstruction model, and the method can be widely applied to scenes such as games, animation videos and the like, and the application range of the method is improved.
Fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 7, the data processing apparatus may include:
an acquisition module 711, configured to acquire an initial object mesh model to be optimized and K images including the target object; the initial object mesh model includes M head meshes having the initial head shape data and initial head texture data of the target object, K and M being integers greater than 1;
an extraction module 712, configured to perform head feature extraction on the K images to obtain K candidate head shape data and K candidate head texture data of the target object;
a first correction module 713, configured to correct the initial head shape data of the target object according to the K candidate head shape data, to obtain target head shape data of the target object;
A second correction module 714, configured to correct the initial header texture data of the target object according to the K candidate header texture data, so as to obtain target header texture data of the target object;
and the updating module 715 is configured to perform optimization updating on the M head grids in the initial object grid model of the target object according to the target head shape data and the target head texture data, so as to obtain an optimized target object grid model.
Optionally, the updating module 715 includes a fusion unit 7111a, a sampling unit 7112a, a first generating unit 7113a, and an updating unit 7114a;
a fusion unit 7111a, configured to fuse the target head shape data and the target head texture data to obtain a target fusion feature;
a sampling unit 7112a, configured to perform upsampling processing on the target fusion feature to obtain target head depth data of the target object;
a first generation unit 7113a that generates a target head mesh reflecting a head of the target object from the target head shape data, the target head texture data, and the target head depth data;
and an updating unit 7114a, configured to replace the M head grids in the initial object grid model of the target object with the target head grids, to obtain an optimized target object grid model.
Optionally, the updating unit 7114a replaces the M head grids in the initial object grid model of the target object with the target head grids to obtain an optimized target object grid model, including:
replacing the M head grids in the initial object grid model of the target object with the target head grids to obtain candidate object grid models of the target object;
performing crack detection on the head and neck of the target object according to the target head grids and neck grids in the candidate object grid model;
and if the crack exists between the head and the neck of the target object, correcting the target head grid in the candidate object grid model to obtain an optimized target object grid model.
Optionally, the updating unit 7114a performs crack detection on the head and the neck of the target object according to the target head mesh and the neck mesh in the candidate object mesh model, including:
performing two-dimensional projection on a head edge grid in the target head grid to obtain a two-dimensional plane comprising the head edge grid; the head edge grid is a grid, in the target head grid, of which the distance from the neck grid is smaller than a first distance threshold;
Performing two-dimensional projection on neck edge grids in the neck grids of the candidate object grid model to obtain a two-dimensional plane comprising the neck edge grids; the neck edge grid is a grid, in the neck grid, of which the distance from the target head grid is smaller than a second distance threshold;
if the two-dimensional plane including the head edge mesh does not match the two-dimensional plane including the neck edge mesh, determining that a crack exists between the head and the neck of the target object.
Optionally, if the updating unit 7114a detects that a crack exists between the head and the neck of the target object, the updating unit 7114a performs correction processing on the target head mesh in the candidate object mesh model to obtain an optimized target object mesh model, including:
if a crack is detected to exist between the head and the neck of the target object, acquiring the side length of a head edge grid in the target head grid; the head edge grid is a grid, in the target head grid, of which the distance from the neck grid is smaller than a first distance threshold;
and adjusting the head edge grid with the edge length larger than the edge length threshold value to obtain an optimized target object grid model.
Optionally, the initial head shape data includes initial mesh shape data of a head mesh n of the M head meshes, n being a positive integer less than or equal to M; the K candidate head shape data respectively comprise candidate grid shape data of a plurality of candidate head grids;
the first correction module 713 includes a processing unit 7115b, a second generation unit 7116b, an acquisition unit 7117b, and a third generation unit 7118b;
an obtaining unit 7117b, configured to obtain candidate mesh shape data of candidate head meshes having a position matching relationship with the head mesh n from the K candidate head shape data, respectively, to obtain K candidate mesh shape data;
a processing unit 7115b, configured to perform an averaging process on the K candidate mesh shape data, to obtain average head shape data corresponding to the head mesh n;
an acquisition unit 7117b for acquiring the mesh number of the head meshes in which the L head vertices are respectively shared; the L head vertexes are head vertexes used for forming the head grid n, and L is an integer greater than 1;
a second generating unit 7116b for determining a shape offset of the head mesh n according to the mesh number of the head meshes in which the L head vertices are respectively shared, and the initial mesh shape data of the head mesh n;
A third generating unit 7118b, configured to determine target mesh shape data of the head mesh n according to the shape offset of the head mesh n and average head shape data corresponding to the head mesh n, until target mesh shape data corresponding to the M head meshes respectively are acquired, and use the target mesh shape data corresponding to the M head meshes respectively as target head shape data of the target object.
Optionally, the initial mesh shape data of the head mesh n includes initial three-dimensional position information of the L head vertices; the third generating unit 7118b determines a shape offset of the head mesh n according to the mesh number of the head meshes in which the L head vertices are respectively shared, and the initial mesh shape data of the head mesh n, including:
generating a grid number matrix for reflecting the grid number of the head grids respectively shared by the L head vertexes, and generating an initial position matrix for reflecting initial three-dimensional position information of the L head vertexes;
generating a covariance matrix corresponding to the head grid n according to the initial position matrix; the covariance matrix is used for reflecting the linear relation between the initial three-dimensional position information of the L head vertexes;
performing transformation processing on the covariance matrix corresponding to the head grid n to obtain a characteristic value pair corresponding to the head grid n; the characteristic value pair corresponding to the head grid n comprises a characteristic value and a characteristic vector reflecting the shape change characteristics of the head grid n;
and carrying out product processing on the characteristic values and the characteristic vectors in the characteristic value pairs corresponding to the head grids n and the grid quantity matrix to obtain the shape offset of the head grids n.
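As an illustrative, non-limiting sketch of the shape-offset steps above (the patent does not fix the eigen-solver or the exact weighting, so taking the principal eigen-pair and dividing by each vertex's share count are assumptions made here for concreteness, using numpy):

```python
import numpy as np

def shape_offset(vertex_positions, shared_counts):
    """Sketch of the shape-offset computation for one head mesh n.

    vertex_positions: (L, 3) initial 3D positions of the L head vertices.
    shared_counts:    (L,) number of head meshes each vertex is shared by.
    """
    # Covariance matrix reflecting the linear relation between the
    # three position components of the L vertices (3 x 3).
    cov = np.cov(vertex_positions, rowvar=False)

    # Eigen-decomposition yields the (eigenvalue, eigenvector) pairs
    # characterising the shape-variation directions of the mesh.
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Principal variation: largest eigenvalue times its eigenvector
    # (eigh returns eigenvalues in ascending order).
    principal = eigvals[-1] * eigvecs[:, -1]          # (3,)

    # Weight by the per-vertex grid-number matrix: a vertex shared by
    # many meshes receives a smaller offset (assumed rule).
    counts = shared_counts.reshape(-1, 1).astype(float)
    return principal / counts                          # (L, 3)

rng = np.random.default_rng(0)
offsets = shape_offset(rng.standard_normal((5, 3)), np.array([2, 3, 2, 4, 1]))
print(offsets.shape)  # (5, 3)
```

The function name, array shapes, and the count-based weighting are hypothetical; the claims only require that the offset be derived from the eigen-pairs and the grid-number matrix.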
The apparatus further includes a rendering module 716; rendering module 716, configured to perform the following steps:
acquiring three-dimensional coordinates of vertexes in the target object grid model;
mapping the three-dimensional coordinates of the vertexes in the target object grid model to a texture space to obtain texture coordinates of the vertexes in the target object grid model;
determining a texture map corresponding to the grid in the target object grid model according to the texture data of the grid in the target object grid model;
mapping the grids in the target object grid model according to the texture coordinates of the vertexes in the target object grid model and the texture mapping corresponding to the grids in the target object grid model;
And rendering the target object grid model subjected to mapping processing to obtain a target image comprising the target object.
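The mapping of three-dimensional vertex coordinates to texture space described above can be sketched as follows; a simple spherical projection stands in for whatever UV mapping the renderer actually uses, and all names and shapes here are illustrative assumptions:

```python
import numpy as np

def spherical_uv(vertices):
    """Map (N, 3) vertex coordinates to (N, 2) texture (UV) coordinates.

    Spherical projection is an assumed stand-in for the patent's
    unspecified three-dimensional-to-texture-space mapping.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = 0.5 + np.arctan2(z, x) / (2 * np.pi)   # azimuth -> [0, 1]
    v = 0.5 - np.arcsin(y / np.linalg.norm(vertices, axis=1)) / np.pi
    return np.stack([u, v], axis=1)

verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
uv = spherical_uv(verts)
print(uv.shape)  # (3, 2)
```

Each grid's texture map would then be sampled at these coordinates during the mapping and rendering steps.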
According to the application, training the initial head reconstruction model improves the accuracy of the resulting target head reconstruction model, so that the head grids in the initial object grid model can be automatically optimized and updated through the target head reconstruction model. The method can therefore be widely applied to scenes such as games and animation videos, broadening its application range.
Fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 8, the data processing apparatus may include:
an obtaining module 811, configured to obtain labeling head contour data of a sample object, an initial object grid model of the sample object to be optimized, and P images comprising the sample object; the initial object grid model comprises Q head grids of initial head shape data and initial head texture data of the sample object, P, Q are integers greater than 1, and the training data comprises the labeling head contour data of the sample object;
an extracting module 812, configured to invoke an initial head reconstruction model, and perform head feature extraction on the P images to obtain P candidate head shape data and P candidate head texture data of the sample object;
The first correction module 813 is configured to correct the initial head shape data of the sample object according to the P candidate head shape data, so as to obtain predicted head shape data of the sample object;
a second correction module 814, configured to correct the initial header texture data of the sample object according to the P candidate header texture data, so as to obtain predicted header texture data of the sample object;
an updating module 815, configured to perform optimization updating on the Q head grids in the initial object grid model of the sample object according to the predicted head texture data and the predicted head shape data, to obtain an optimized predicted object grid model;
the training module 816 is configured to train the initial head reconstruction model according to the labeling head contour data, the predicted head texture data, the predicted object grid model, and the predicted head shape data, to obtain a target head reconstruction model; the target head reconstruction model, when invoked, is used to perform the method as previously described.
The training module 816 includes a determination unit 8111a, a generation unit 8112a, and an adjustment unit 8113a;
A determining unit 8111a, configured to determine predicted head contour data of the sample object according to the prediction object mesh model, and generate a contour prediction error of the initial head reconstruction model according to the labeling head contour data and the predicted head contour data;
a generating unit 8112a for generating a shape prediction error of the initial head reconstruction model from the predicted head shape data; generating texture prediction errors of the initial head reconstruction model according to the prediction head texture data;
and an adjusting unit 8113a, configured to adjust model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error, and the texture prediction error, so as to obtain a target head reconstruction model.
Optionally, the number of the sample objects is S, and S is a positive integer greater than 1; the generating unit 8112a generates a shape prediction error of the initial head reconstruction model according to the predicted head shape data, including:
determining a similarity between predicted head shape data between each two of the S sample objects;
accumulating the similarity between the predicted head shape data between every two sample objects in the S sample objects to obtain a similarity sum;
And generating a shape prediction error of the initial head reconstruction model according to the similarity sum.
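The pairwise-similarity accumulation above can be sketched as follows; cosine similarity is an assumed measure (the patent leaves the similarity function unspecified), and the shapes and names are hypothetical:

```python
import numpy as np

def shape_prediction_error(predicted_shapes):
    """Shape error as the accumulated pairwise similarity over S samples.

    predicted_shapes: (S, D) flattened predicted head-shape vectors.
    A high inter-subject similarity sum indicates the model is failing
    to keep different sample objects' reconstructed heads distinct.
    """
    normed = predicted_shapes / np.linalg.norm(
        predicted_shapes, axis=1, keepdims=True)
    sim = normed @ normed.T                    # (S, S) cosine similarities
    s = predicted_shapes.shape[0]
    # Accumulate the similarity over every unordered pair (i < j).
    return sim[np.triu_indices(s, k=1)].sum()

err = shape_prediction_error(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
print(round(float(err), 4))  # 1.4142
```

With the three example vectors, only the two pairs involving the third vector contribute, giving a similarity sum of √2.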
Optionally, the adjusting unit 8113a adjusts model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error, and the texture prediction error, to obtain a target head reconstruction model, including:
determining a predicted total error of the initial head reconstruction model according to the shape prediction error, the contour prediction error and the texture prediction error;
determining a convergence state of the initial head reconstruction model according to the predicted total error;
and if the initial head reconstruction model is in an unconverged state, adjusting model parameters of the initial head reconstruction model according to the prediction total error to obtain a target head reconstruction model.
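A minimal sketch of the total-error and convergence check above; equal weighting of the three errors and a fixed tolerance are assumptions, since the patent only states that the total error is determined from the three errors:

```python
def train_step(shape_err, contour_err, texture_err, prev_total, tol=1e-4):
    """Combine the three prediction errors and test convergence.

    Returns the predicted total error and whether the model is
    considered converged (total error change below `tol`).
    """
    total = shape_err + contour_err + texture_err  # assumed equal weights
    converged = abs(prev_total - total) < tol
    return total, converged

total, done = train_step(0.2, 0.1, 0.05, prev_total=0.36)
print(total, done)  # 0.35 False -> parameters would be adjusted again
```

When `done` is false, the model parameters would be adjusted according to the total error and another iteration performed.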
In the present application, since the K candidate head shape data and the K candidate head texture data are extracted from K images comprising the target object, they can provide more detailed information about the head of the target object. Therefore, correcting the initial head shape data in the initial object grid model through the K candidate head shape data to obtain the target head shape data, and correcting the initial head texture data through the K candidate head texture data to obtain the target head texture data, improves the accuracy of the target head shape data and the target head texture data. Further, optimizing and updating the head grids in the initial object grid model according to the target head shape data and the target head texture data improves the accuracy of the head grids in the optimized target object grid model. In addition, the optimization and updating of the head grids in the initial object grid model requires no manual intervention, which saves labor cost and improves the efficiency of acquiring the object grid model.
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 9, the computer device 1000 may be the first device in the above method, and may specifically be a terminal or a server, comprising: a processor 1001, a network interface 1004, and a memory 1005; in addition, the above-described computer device 1000 may further comprise: a user interface 1003 and at least one communication bus 1002, wherein the communication bus 1002 is used to enable communication connections between these components. In some embodiments, the user interface 1003 may include a display (Display) and a keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one magnetic disk memory. The memory 1005 may also optionally be at least one storage device remote from the processor 1001. As shown in fig. 9, the memory 1005, which is a type of computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and computer applications.
In the computer device 1000 shown in fig. 9, the network interface 1004 may provide network communication functions; while user interface 1003 is primarily used as an interface to provide input; and the processor 1001 may be used to invoke computer applications stored in the memory 1005 to implement:
acquiring an initial object grid model to be optimized of a target object, and including K images of the target object; the initial object mesh model includes M head meshes of initial head shape data and initial head texture data of the target object, K, M are integers greater than 1;
extracting head characteristics of the K images to obtain K candidate head shape data and K candidate head texture data of the target object;
correcting the initial head shape data of the target object according to the K candidate head shape data to obtain target head shape data of the target object;
correcting the initial head texture data of the target object according to the K candidate head texture data to obtain target head texture data of the target object;
and according to the target head shape data and the target head texture data, optimizing and updating the M head grids in the initial object grid model of the target object to obtain an optimized target object grid model.
Optionally, the processor 1001 invokes a computer application stored in the memory 1005 to implement the optimizing update on the M head grids in the initial object grid model of the target object according to the target head shape data and the target head texture data, to obtain an optimized target object grid model, including:
fusing the target head shape data and the target head texture data to obtain target fusion characteristics;
performing up-sampling processing on the target fusion characteristics to obtain target head depth data of the target object;
generating a target head mesh reflecting the head of the target object according to the target head shape data, the target head texture data and the target head depth data;
and replacing the M head grids in the initial object grid model of the target object with the target head grids to obtain an optimized target object grid model.
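The fuse-then-upsample sequence above can be sketched as follows; concatenation for fusion, a mean for the fused feature, and nearest-neighbour upsampling are placeholder choices, since the patent does not fix the operators:

```python
import numpy as np

def fuse_and_upsample(shape_feat, texture_feat, scale=2):
    """Fuse shape and texture features, then upsample to head depth data.

    shape_feat, texture_feat: (H, W) feature maps (assumed layout).
    Returns an (H*scale, W*scale) depth map.
    """
    # Target fusion feature: channel-wise concatenation (assumed).
    fused = np.concatenate([shape_feat[..., None],
                            texture_feat[..., None]], axis=-1)  # (H, W, 2)
    depth = fused.mean(axis=-1)                                  # (H, W)
    # Nearest-neighbour upsampling by `scale` in both dimensions.
    return np.kron(depth, np.ones((scale, scale)))

depth = fuse_and_upsample(np.ones((4, 4)), np.zeros((4, 4)))
print(depth.shape)  # (8, 8)
```

The resulting depth data would then be combined with the target head shape and texture data to generate the target head mesh that replaces the M head meshes.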
Optionally, the processor 1001 invokes a computer application stored in the memory 1005 to replace the M header grids in the initial object grid model of the target object with the target header grid, to obtain an optimized target object grid model, including:
Replacing the M head grids in the initial object grid model of the target object with the target head grids to obtain candidate object grid models of the target object;
performing crack detection on the head and neck of the target object according to the target head grids and neck grids in the candidate object grid model;
and if the crack exists between the head and the neck of the target object, correcting the target head grid in the candidate object grid model to obtain an optimized target object grid model.
Optionally, the processor 1001 invokes a computer application program stored in the memory 1005 to implement crack detection on the head and neck of the target object according to the target head mesh and neck mesh in the candidate object mesh model, including:
performing two-dimensional projection on a head edge grid in the target head grid to obtain a two-dimensional plane comprising the head edge grid; the head edge grid is a grid, in the target head grid, of which the distance from the neck grid is smaller than a first distance threshold;
performing two-dimensional projection on neck edge grids in the neck grids of the candidate object grid model to obtain a two-dimensional plane comprising the neck edge grids; the neck edge grid is a grid, in the neck grid, of which the distance from the target head grid is smaller than a second distance threshold;
If the two-dimensional plane including the head edge mesh does not match the two-dimensional plane including the neck edge mesh, determining that a crack exists between the head and the neck of the target object.
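The projection-and-compare crack detection above can be sketched as follows; projecting onto the XY plane and point-wise comparison are minimal stand-ins for the patent's plane-matching step, and all names and tolerances are assumptions:

```python
import numpy as np

def crack_between(head_edge_verts, neck_edge_verts, tol=1e-3):
    """Detect a head-neck crack by comparing 2D projections.

    Both inputs are (N, 3) vertex arrays of the respective edge
    meshes. Returns True when the projected planes do not match.
    """
    head_2d = head_edge_verts[:, :2]   # drop the depth axis (assumed)
    neck_2d = neck_edge_verts[:, :2]
    if head_2d.shape != neck_2d.shape:
        return True
    # The planes "match" when corresponding projected points coincide.
    return not np.allclose(head_2d, neck_2d, atol=tol)

head = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
neck = np.array([[0.0, 0.0, 0.0], [1.0, 0.05, 0.0]])
print(crack_between(head, neck))  # True: projections diverge by 0.05
```

When the function returns true, the target head mesh in the candidate object mesh model would be corrected as described next.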
Optionally, the processor 1001 invokes a computer application program stored in the memory 1005 to implement, if a crack is detected between the head and the neck of the target object, correction processing on the target head mesh in the candidate object mesh model to obtain an optimized target object mesh model, including:
if a crack is detected to exist between the head and the neck of the target object, acquiring the side length of a head edge grid in the target head grid; the head edge grid is a grid, in the target head grid, of which the distance from the neck grid is smaller than a first distance threshold;
and adjusting the head edge grid with the edge length larger than the edge length threshold value to obtain an optimized target object grid model.
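One plausible form of the edge-length adjustment above is to rescale each over-long edge about its midpoint; the patent requires only that edges above the threshold be adjusted, so this shrink-to-threshold rule is an assumption:

```python
import numpy as np

def shrink_long_edges(vertices, edges, max_len):
    """Rescale head-edge segments longer than `max_len`.

    vertices: (N, 3) vertex positions; edges: list of (i, j) index
    pairs. Each over-long edge is shrunk about its midpoint to
    exactly `max_len` (assumed adjustment rule).
    """
    v = vertices.copy()
    for i, j in edges:
        length = np.linalg.norm(v[i] - v[j])
        if length > max_len:
            mid = (v[i] + v[j]) / 2
            direction = (v[i] - v[j]) / length
            v[i] = mid + direction * max_len / 2
            v[j] = mid - direction * max_len / 2
    return v

verts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
fixed = shrink_long_edges(verts, [(0, 1)], max_len=1.0)
print(np.linalg.norm(fixed[0] - fixed[1]))  # 1.0
```

After this adjustment the candidate object mesh model becomes the optimized target object mesh model.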
Optionally, the initial head shape data includes initial mesh shape data of a head mesh n of the M head meshes, n being a positive integer less than or equal to M; the K candidate head shape data respectively comprise candidate grid shape data of a plurality of candidate head grids;
optionally, the processor 1001 invokes a computer application stored in the memory 1005 to implement correcting the initial head shape data of the target object according to the K candidate head shape data to obtain target head shape data of the target object, including:
respectively acquiring candidate grid shape data of candidate head grids with a position matching relation with the head grid n from the K candidate head shape data to obtain K candidate grid shape data;
averaging the K candidate grid shape data to obtain average head shape data corresponding to the head grid n;
acquiring the grid number of head grids with which the L head vertexes are respectively shared; the L head vertexes are head vertexes used for forming the head grid n, and L is an integer greater than 1;
determining the shape offset of the head grids n according to the grid number of the head grids respectively shared by the L head vertexes and the initial grid shape data of the head grids n;
Determining target grid shape data of the head grid n according to the shape offset of the head grid n and average head shape data corresponding to the head grid n;
and until the target grid shape data corresponding to the M head grids respectively are obtained, taking the target grid shape data corresponding to the M head grids respectively as target head shape data of the target object.
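The average-then-offset combination in the steps above can be sketched as follows; gathering the K position-matched candidates into one array and adding the offset to the average are assumed conventions (the patent states only that the target shape is determined from the two quantities):

```python
import numpy as np

def target_mesh_shape(candidate_shapes, offset):
    """Average K candidate mesh shapes, then apply the shape offset.

    candidate_shapes: (K, L, 3) candidate positions for the L vertices
    of head mesh n, gathered from the K candidate head shape data.
    offset: (L, 3) shape offset of head mesh n.
    """
    avg = candidate_shapes.mean(axis=0)   # average head shape data
    return avg + offset                   # target mesh shape data (assumed sum)

cands = np.stack([np.zeros((4, 3)), np.full((4, 3), 2.0)])  # K = 2
target = target_mesh_shape(cands, np.full((4, 3), 0.5))
print(target[0])  # [1.5 1.5 1.5]
```

Repeating this for each of the M head meshes yields the target head shape data of the target object.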
Optionally, the initial mesh shape data of the head mesh n includes initial three-dimensional position information of the L head vertices;
optionally, the processor 1001 invokes a computer application stored in the memory 1005 to implement determining a shape offset of the head mesh n according to the mesh number of the head meshes respectively shared by the L head vertices and the initial mesh shape data of the head mesh n, including:
generating a grid number matrix for reflecting the grid number of the head grids respectively shared by the L head vertexes, and generating an initial position matrix for reflecting initial three-dimensional position information of the L head vertexes;
generating a covariance matrix corresponding to the head grid n according to the initial position matrix; the covariance matrix is used for reflecting the linear relation between the initial three-dimensional position information of the L head vertexes;
performing transformation processing on the covariance matrix corresponding to the head grid n to obtain a characteristic value pair corresponding to the head grid n; the characteristic value pair corresponding to the head grid n comprises a characteristic value and a characteristic vector reflecting the shape change characteristics of the head grid n;
and carrying out product processing on the characteristic values and the characteristic vectors in the characteristic value pairs corresponding to the head grids n and the grid quantity matrix to obtain the shape offset of the head grids n.
Optionally, the processor 1001 invokes a computer application stored in the memory 1005 to implement:
acquiring three-dimensional coordinates of vertexes in the target object grid model;
mapping the three-dimensional coordinates of the vertexes in the target object grid model to a texture space to obtain texture coordinates of the vertexes in the target object grid model;
determining a texture map corresponding to the grid in the target object grid model according to the texture data of the grid in the target object grid model;
mapping the grids in the target object grid model according to the texture coordinates of the vertexes in the target object grid model and the texture mapping corresponding to the grids in the target object grid model;
And rendering the target object grid model subjected to mapping processing to obtain a target image comprising the target object.
Optionally, the processor 1001 invokes a computer application stored in the memory 1005 to implement:
acquiring annotation head contour data of a sample object, an initial object grid model to be optimized and P images comprising the sample object; the initial object grid model comprises Q head grids of initial head shape data and initial head texture data of the sample object, P, Q are integers greater than 1, and the training data comprises labeling head shape data of the sample object;
invoking an initial head reconstruction model, and extracting head characteristics of the P images to obtain P candidate head shape data and P candidate head texture data of the sample object;
correcting the initial head shape data of the sample object according to the P candidate head shape data to obtain predicted head shape data of the sample object;
correcting the initial head texture data of the sample object according to the P candidate head texture data to obtain predicted head texture data of the sample object;
Optimizing and updating the Q head grids in the initial object grid model of the sample object according to the predicted head texture data and the predicted head shape data to obtain an optimized predicted object grid model;
training the initial head reconstruction model according to the labeling head outline data, the prediction head texture data, the prediction object grid model and the prediction head shape data to obtain a target head reconstruction model; the target head reconstruction model, when invoked, is used to perform the method as described previously.
Optionally, the processor 1001 invokes a computer application program stored in the memory 1005 to train the initial head reconstruction model to obtain a target head reconstruction model according to the labeling head contour data, the prediction head texture data, the prediction object mesh model, and the prediction head shape data, including:
determining predicted head contour data of the sample object according to the predicted object grid model, and generating a contour prediction error of the initial head reconstruction model according to the marked head contour data and the predicted head contour data;
Generating a shape prediction error of the initial head reconstruction model according to the predicted head shape data;
generating texture prediction errors of the initial head reconstruction model according to the prediction head texture data;
and adjusting model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error and the texture prediction error to obtain a target head reconstruction model.
Optionally, the number of the sample objects is S, and S is a positive integer greater than 1; optionally, the processor 1001 invokes a computer application stored in the memory 1005 to implement generating a shape prediction error of the initial head reconstruction model from the predicted head shape data, including:
determining a similarity between predicted head shape data between each two of the S sample objects;
accumulating the similarity between the predicted head shape data between every two sample objects in the S sample objects to obtain a similarity sum;
and generating a shape prediction error of the initial head reconstruction model according to the similarity sum.
Optionally, the processor 1001 invokes a computer application program stored in the memory 1005 to implement adjusting model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error, and the texture prediction error to obtain a target head reconstruction model, including:
Determining a predicted total error of the initial head reconstruction model according to the shape prediction error, the contour prediction error and the texture prediction error;
determining a convergence state of the initial head reconstruction model according to the predicted total error;
and if the initial head reconstruction model is in an unconverged state, adjusting model parameters of the initial head reconstruction model according to the prediction total error to obtain a target head reconstruction model.
In the present application, since the K candidate head shape data and the K candidate head texture data are extracted from K images comprising the target object, they can provide more detailed information about the head of the target object. Therefore, correcting the initial head shape data in the initial object grid model through the K candidate head shape data to obtain the target head shape data, and correcting the initial head texture data through the K candidate head texture data to obtain the target head texture data, improves the accuracy of the target head shape data and the target head texture data. Further, optimizing and updating the head grids in the initial object grid model according to the target head shape data and the target head texture data improves the accuracy of the head grids in the optimized target object grid model. In addition, the optimization and updating of the head grids in the initial object grid model requires no manual intervention, which saves labor cost and improves the efficiency of acquiring the object grid model.
It should be understood that the computer device described in the embodiments of the present application may perform the description of the data processing method in the foregoing corresponding embodiments, or may perform the description of the data processing apparatus in the foregoing corresponding embodiments, which is not repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that: the embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores the computer program executed by the aforementioned data processing apparatus, and the computer program includes program instructions which, when executed by a processor, can perform the description of the data processing method in the corresponding embodiment; therefore, a detailed description will not be given here. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium according to the present application, please refer to the description of the method embodiments of the present application.
As an example, the above-described program instructions may be executed on one computer device or at least two computer devices disposed at one site, or alternatively, at least two computer devices distributed at least two sites and interconnected by a communication network, which may constitute a blockchain network.
The computer-readable storage medium may be the data processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the foregoing computer device, such as a hard disk or memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the computer device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
The terms "first", "second", and the like in the description, claims, and drawings of the embodiments of the application are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the listed steps or modules but may, in the alternative, include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
In the application, the collection and processing of related data (such as driving track data) in the embodiments should strictly comply with the requirements of relevant national laws and regulations, obtain the informed consent or independent consent of the personal information subject (such as the user corresponding to the driving track data), and carry out subsequent data use and processing within the scope authorized by the laws, regulations, and the personal information subject.
The embodiments of the present application further provide a computer program product, which includes a computer program/instructions; the computer program/instructions, when executed by a processor, implement the description of the data processing method in the foregoing corresponding embodiments, and therefore will not be repeated herein. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer program product according to the present application, reference is made to the description of the method embodiments according to the present application.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the elements and steps of the examples have been described above generally in terms of their functions. Whether such functions are implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functions in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and related apparatus provided in the embodiments of the present application are described with reference to the flowchart and/or schematic structural diagrams of the method provided in the embodiments of the present application, and each flow and/or block of the flowchart and/or schematic structural diagrams of the method may be implemented by computer program instructions, and combinations of flows and/or blocks in the flowchart and/or block diagrams. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable network connection device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable network connection device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable network connection device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable network connection device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or structures.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (15)

1. A data processing method, comprising:
acquiring an initial object mesh model to be optimized and K images comprising a target object, wherein the initial object mesh model comprises M head meshes, and initial head shape data and initial head texture data of the target object; K and M are integers greater than 1; and the initial head shape data comprises initial mesh shape data of a head mesh n among the M head meshes, n being a positive integer less than or equal to M;
extracting head features from the K images to obtain K candidate head shape data and K candidate head texture data of the target object, wherein each of the K candidate head shape data comprises candidate mesh shape data of a plurality of candidate head meshes;
acquiring, from each of the K candidate head shape data, the candidate mesh shape data of the candidate head mesh having a position matching relationship with the head mesh n, to obtain K pieces of candidate mesh shape data;
averaging the K pieces of candidate mesh shape data to obtain average head shape data corresponding to the head mesh n;
acquiring, for each of L head vertices, the number of head meshes sharing that vertex, wherein the L head vertices are the head vertices forming the head mesh n, and L is an integer greater than 1;
determining a shape offset of the head mesh n according to the numbers of head meshes sharing the L head vertices, respectively, and the initial mesh shape data of the head mesh n;
determining target mesh shape data of the head mesh n according to the shape offset of the head mesh n and the average head shape data corresponding to the head mesh n, until target mesh shape data corresponding to each of the M head meshes are obtained, and taking the target mesh shape data corresponding to the M head meshes as target head shape data of the target object;
correcting the initial head texture data of the target object according to the K candidate head texture data to obtain target head texture data of the target object; and
optimizing and updating the M head meshes in the initial object mesh model of the target object according to the target head shape data and the target head texture data to obtain an optimized target object mesh model.
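As a rough illustration of the averaging and offset steps recited in claim 1, the following Python sketch averages K candidate shape-data arrays for one head mesh and applies a shape offset. It is not the patented implementation: the function name, the array shapes, and the simplified offset formula are all assumptions (the claim derives the offset from vertex-sharing counts, as detailed in claim 6).

```python
import numpy as np

def target_mesh_shape(candidates, initial, offset_scale=1.0):
    """Illustrative sketch only: average K candidate shape-data arrays for
    one head mesh, then add a placeholder shape offset derived from the
    initial shape data. `offset_scale` is a hypothetical parameter."""
    avg = np.mean(np.stack(candidates), axis=0)      # average head shape data
    offset = offset_scale * (initial - avg)          # placeholder shape offset
    return avg + offset                              # target mesh shape data
```

With `offset_scale=1.0` and `initial` equal to the candidate mean, the offset vanishes and the average shape is returned unchanged, which makes the role of the offset term easy to see.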
2. The method of claim 1, wherein the optimizing and updating the M head meshes in the initial object mesh model of the target object according to the target head shape data and the target head texture data to obtain the optimized target object mesh model comprises:
fusing the target head shape data and the target head texture data to obtain a target fusion feature;
up-sampling the target fusion feature to obtain target head depth data of the target object;
generating a target head mesh reflecting the head of the target object according to the target head shape data, the target head texture data and the target head depth data; and
replacing the M head meshes in the initial object mesh model of the target object with the target head mesh to obtain the optimized target object mesh model.
3. The method of claim 2, wherein the replacing the M head meshes in the initial object mesh model of the target object with the target head mesh to obtain the optimized target object mesh model comprises:
replacing the M head meshes in the initial object mesh model of the target object with the target head mesh to obtain a candidate object mesh model of the target object;
performing crack detection on the head and neck of the target object according to the target head mesh and a neck mesh in the candidate object mesh model; and
if a crack exists between the head and the neck of the target object, correcting the target head mesh in the candidate object mesh model to obtain the optimized target object mesh model.
4. The method of claim 3, wherein the performing crack detection on the head and neck of the target object according to the target head mesh and the neck mesh in the candidate object mesh model comprises:
performing two-dimensional projection on head edge meshes in the target head mesh to obtain a two-dimensional plane comprising the head edge meshes, the head edge meshes being meshes in the target head mesh whose distance from the neck mesh is smaller than a first distance threshold;
performing two-dimensional projection on neck edge meshes in the neck mesh of the candidate object mesh model to obtain a two-dimensional plane comprising the neck edge meshes, the neck edge meshes being meshes in the neck mesh whose distance from the target head mesh is smaller than a second distance threshold; and
if the two-dimensional plane comprising the head edge meshes does not match the two-dimensional plane comprising the neck edge meshes, determining that a crack exists between the head and the neck of the target object.
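The crack test of claim 4 can be pictured with a small sketch: project head-edge and neck-edge vertices to 2D and report a crack when the projected point sets fail to match. The particular projection (dropping the Z coordinate), the nearest-point matching, and the tolerance used here are illustrative assumptions, not the claimed method's exact matching rule.

```python
import numpy as np

def crack_between(head_edge_verts, neck_edge_verts, tol=1e-3):
    """Illustrative crack test: project both vertex sets onto the XY plane
    by dropping Z, then declare a crack if the two 2D point sets do not
    match each other within `tol` (all names are assumptions)."""
    head2d = np.asarray(head_edge_verts)[:, :2]   # two-dimensional projection
    neck2d = np.asarray(neck_edge_verts)[:, :2]
    if len(head2d) != len(neck2d):
        return True                               # point counts differ: crack
    # distance from each projected head point to its nearest neck point
    d = np.linalg.norm(head2d[:, None, :] - neck2d[None, :, :], axis=-1)
    return bool(d.min(axis=1).max() > tol)
```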
5. The method of claim 3, wherein the correcting the target head mesh in the candidate object mesh model to obtain the optimized target object mesh model if a crack is detected between the head and the neck of the target object comprises:
if a crack is detected between the head and the neck of the target object, acquiring the edge lengths of head edge meshes in the target head mesh, the head edge meshes being meshes in the target head mesh whose distance from the neck mesh is smaller than a first distance threshold; and
adjusting the head edge meshes whose edge lengths are larger than an edge-length threshold to obtain the optimized target object mesh model.
6. The method of claim 1, wherein the initial mesh shape data of the head mesh n comprises initial three-dimensional position information of the L head vertices; and
the determining the shape offset of the head mesh n according to the numbers of head meshes sharing the L head vertices, respectively, and the initial mesh shape data of the head mesh n comprises:
generating a mesh-number matrix reflecting the numbers of head meshes sharing the L head vertices, respectively, and generating an initial position matrix reflecting the initial three-dimensional position information of the L head vertices;
generating a covariance matrix corresponding to the head mesh n according to the initial position matrix, the covariance matrix reflecting the linear relationship among the initial three-dimensional position information of the L head vertices;
transforming the covariance matrix corresponding to the head mesh n to obtain an eigenvalue pair corresponding to the head mesh n, the eigenvalue pair comprising an eigenvalue and an eigenvector that reflect shape change features of the head mesh n; and
multiplying the eigenvalue and the eigenvector in the eigenvalue pair corresponding to the head mesh n with the mesh-number matrix to obtain the shape offset of the head mesh n.
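Claim 6's offset computation can be sketched as follows: build a covariance matrix from the L initial vertex positions, eigendecompose it, and weight the principal eigenvalue-eigenvector product by the per-vertex mesh-sharing counts. The claim does not fix the exact numerical form of the product, so the weighting below is an assumption, as are all names.

```python
import numpy as np

def shape_offset(vertex_positions, share_counts):
    """Illustrative sketch of the eigendecomposition step: the covariance
    of the L x 3 vertex positions yields eigenvalue/eigenvector pairs, and
    the principal pair is scaled per vertex by its mesh-sharing count."""
    X = np.asarray(vertex_positions, dtype=float)   # L x 3 initial positions
    cov = np.cov(X, rowvar=False)                   # 3 x 3 covariance matrix
    vals, vecs = np.linalg.eigh(cov)                # ascending eigenvalues
    principal = vals[-1] * vecs[:, -1]              # eigenvalue * eigenvector
    counts = np.asarray(share_counts, dtype=float)  # L mesh-sharing counts
    return counts[:, None] * principal[None, :]     # L x 3 per-vertex offset
```

`numpy.linalg.eigh` is used because a covariance matrix is symmetric; it returns eigenvalues in ascending order, so the last pair is the principal one.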
7. The method of claim 1, further comprising:
acquiring three-dimensional coordinates of vertices in the target object mesh model;
mapping the three-dimensional coordinates of the vertices in the target object mesh model to a texture space to obtain texture coordinates of the vertices in the target object mesh model;
determining texture maps corresponding to meshes in the target object mesh model according to texture data of the meshes in the target object mesh model;
applying the texture maps to the meshes in the target object mesh model according to the texture coordinates of the vertices in the target object mesh model and the texture maps corresponding to the meshes in the target object mesh model; and
rendering the texture-mapped target object mesh model to obtain a target image comprising the target object.
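Claim 7's mapping from 3D vertex coordinates to texture space could take many forms; the sketch below uses a simple spherical (equirectangular) projection purely as a stand-in, since the claim does not fix the mapping. Everything here, including the function name, is an assumption for illustration.

```python
import numpy as np

def spherical_uv(vertices):
    """Hypothetical stand-in for the texture-coordinate step: normalize each
    3D vertex onto the unit sphere, then map longitude/latitude to (u, v)
    texture coordinates in [0, 1]."""
    v = np.asarray(vertices, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)     # unit directions
    u = 0.5 + np.arctan2(v[:, 2], v[:, 0]) / (2 * np.pi)  # longitude -> u
    w = 0.5 - np.arcsin(v[:, 1]) / np.pi                  # latitude  -> v
    return np.stack([u, w], axis=1)                       # per-vertex (u, v)
```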
8. A data processing method, comprising:
acquiring annotated head contour data, training data, an initial object mesh model to be optimized, and P images comprising a sample object, wherein the initial object mesh model comprises Q head meshes, and initial head shape data and initial head texture data of the sample object; P and Q are integers greater than 1; and the training data comprises annotated head shape data of the sample object;
invoking an initial head reconstruction model to extract head features from the P images to obtain P candidate head shape data and P candidate head texture data of the sample object;
correcting the initial head shape data of the sample object according to the P candidate head shape data to obtain predicted head shape data of the sample object;
correcting the initial head texture data of the sample object according to the P candidate head texture data to obtain predicted head texture data of the sample object;
optimizing and updating the Q head meshes in the initial object mesh model of the sample object according to the predicted head texture data and the predicted head shape data to obtain an optimized predicted object mesh model; and
training the initial head reconstruction model according to the annotated head contour data, the predicted head texture data, the predicted object mesh model and the predicted head shape data to obtain a target head reconstruction model, the target head reconstruction model being configured, when invoked, to perform the method of any one of claims 1 to 7.
9. The method of claim 8, wherein the training the initial head reconstruction model according to the annotated head contour data, the predicted head texture data, the predicted object mesh model and the predicted head shape data to obtain the target head reconstruction model comprises:
determining predicted head contour data of the sample object according to the predicted object mesh model, and generating a contour prediction error of the initial head reconstruction model according to the annotated head contour data and the predicted head contour data;
generating a shape prediction error of the initial head reconstruction model according to the predicted head shape data;
generating a texture prediction error of the initial head reconstruction model according to the predicted head texture data; and
adjusting model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error and the texture prediction error to obtain the target head reconstruction model.
10. The method of claim 9, wherein the number of sample objects is S, S being an integer greater than 1; and
the generating the shape prediction error of the initial head reconstruction model according to the predicted head shape data comprises:
determining the similarity between the predicted head shape data of each pair of the S sample objects;
accumulating the similarities between the predicted head shape data of every pair of the S sample objects to obtain a similarity sum; and
generating the shape prediction error of the initial head reconstruction model according to the similarity sum.
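A hedged sketch of claim 10's error term, assuming cosine similarity (the claim does not name the similarity measure): sum the pairwise similarities between the S predicted head-shape vectors to form the similarity sum that drives the shape prediction error.

```python
import numpy as np

def shape_prediction_error(pred_shapes):
    """Illustrative only: accumulate cosine similarity over every pair of
    the S predicted head-shape arrays. A large sum means the predictions
    collapse toward one shape, so it serves as an error term."""
    S = len(pred_shapes)
    total = 0.0
    for i in range(S):
        for j in range(i + 1, S):
            a = np.ravel(pred_shapes[i])
            b = np.ravel(pred_shapes[j])
            total += float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return total
```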
11. The method of claim 9, wherein the adjusting the model parameters of the initial head reconstruction model according to the shape prediction error, the contour prediction error and the texture prediction error to obtain the target head reconstruction model comprises:
determining a total prediction error of the initial head reconstruction model according to the shape prediction error, the contour prediction error and the texture prediction error;
determining a convergence state of the initial head reconstruction model according to the total prediction error; and
if the initial head reconstruction model is in an unconverged state, adjusting the model parameters of the initial head reconstruction model according to the total prediction error to obtain the target head reconstruction model.
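Claim 11's control flow can be sketched minimally, assuming uniform weighting of the three errors and a simple change-in-loss convergence test (neither is fixed by the claim; the names and the `eps` threshold are assumptions):

```python
def training_step(shape_err, contour_err, texture_err, prev_total, eps=1e-4):
    """Illustrative sketch: sum the three errors into a total prediction
    error, and treat the model as converged when the total stops changing
    by more than `eps` between iterations."""
    total = shape_err + contour_err + texture_err          # total prediction error
    converged = prev_total is not None and abs(prev_total - total) < eps
    return total, converged
```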
12. A data processing apparatus, comprising:
an acquisition module, configured to acquire an initial object mesh model to be optimized and K images comprising a target object, wherein the initial object mesh model comprises M head meshes, and initial head shape data and initial head texture data of the target object; K and M are integers greater than 1; and the initial head shape data comprises initial mesh shape data of a head mesh n among the M head meshes, n being a positive integer less than or equal to M;
an extraction module, configured to extract head features from the K images to obtain K candidate head shape data and K candidate head texture data of the target object, each of the K candidate head shape data comprising candidate mesh shape data of a plurality of candidate head meshes;
a first correction module, comprising:
an obtaining unit, configured to acquire, from each of the K candidate head shape data, the candidate mesh shape data of the candidate head mesh having a position matching relationship with the head mesh n, to obtain K pieces of candidate mesh shape data;
a processing unit, configured to average the K pieces of candidate mesh shape data to obtain average head shape data corresponding to the head mesh n;
an acquisition unit, configured to acquire, for each of L head vertices, the number of head meshes sharing that vertex, the L head vertices being the head vertices forming the head mesh n, and L being an integer greater than 1;
a second generating unit, configured to determine a shape offset of the head mesh n according to the numbers of head meshes sharing the L head vertices, respectively, and the initial mesh shape data of the head mesh n; and
a third generating unit, configured to determine target mesh shape data of the head mesh n according to the shape offset of the head mesh n and the average head shape data corresponding to the head mesh n, until target mesh shape data corresponding to each of the M head meshes are obtained, and to take the target mesh shape data corresponding to the M head meshes as target head shape data of the target object;
a second correction module, configured to correct the initial head texture data of the target object according to the K candidate head texture data to obtain target head texture data of the target object; and
an updating module, configured to optimize and update the M head meshes in the initial object mesh model of the target object according to the target head shape data and the target head texture data to obtain an optimized target object mesh model.
13. A data processing apparatus, comprising:
an acquisition module, configured to acquire annotated head contour data, training data, an initial object mesh model to be optimized, and P images comprising a sample object, wherein the initial object mesh model comprises Q head meshes, and initial head shape data and initial head texture data of the sample object; P and Q are integers greater than 1; and the training data comprises annotated head shape data of the sample object;
an extraction module, configured to invoke an initial head reconstruction model to extract head features from the P images to obtain P candidate head shape data and P candidate head texture data of the sample object;
a first correction module, configured to correct the initial head shape data of the sample object according to the P candidate head shape data to obtain predicted head shape data of the sample object;
a second correction module, configured to correct the initial head texture data of the sample object according to the P candidate head texture data to obtain predicted head texture data of the sample object;
an updating module, configured to optimize and update the Q head meshes in the initial object mesh model of the sample object according to the predicted head texture data and the predicted head shape data to obtain an optimized predicted object mesh model; and
a training module, configured to train the initial head reconstruction model according to the annotated head contour data, the predicted head texture data, the predicted object mesh model and the predicted head shape data to obtain a target head reconstruction model, the target head reconstruction model being configured, when invoked, to perform the method of any one of claims 1 to 7.
14. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 11 when executing the computer program.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 11.
CN202310707673.XA 2023-06-15 2023-06-15 Data processing method, device, equipment and storage medium Active CN116433852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310707673.XA CN116433852B (en) 2023-06-15 2023-06-15 Data processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN116433852A (en) 2023-07-14
CN116433852B (en) 2023-09-12

Family ID: 87084082

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049896A (en) * 2012-12-27 2013-04-17 浙江大学 Automatic registration algorithm for geometric data and texture data of three-dimensional model
CN115512073A (en) * 2022-09-19 2022-12-23 南京信息工程大学 Three-dimensional texture grid reconstruction method based on multi-stage training under differentiable rendering
CN116109798A (en) * 2023-04-04 2023-05-12 腾讯科技(深圳)有限公司 Image data processing method, device, equipment and medium




Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code: country code HK, legal event code DE, document number 40090380