CN114693872A - Eyeball data processing method and device, computer equipment and storage medium - Google Patents

Eyeball data processing method and device, computer equipment and storage medium

Info

Publication number
CN114693872A
Authority
CN
China
Prior art keywords
virtual
eyeball
point cloud
key point
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210298578.4A
Other languages
Chinese (zh)
Inventor
田泽藩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210298578.4A priority Critical patent/CN114693872A/en
Publication of CN114693872A publication Critical patent/CN114693872A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses an eyeball data processing method and device, a computer device and a storage medium, which can be applied to scenes such as cloud technology, computer vision, intelligent transportation and assisted driving. The method comprises the following steps: acquiring a first key point set corresponding to an eye socket of the first virtual face from the point cloud of the first virtual face; acquiring a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball; and determining a first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball matches the eye socket of the first virtual face. By adaptively adjusting the point cloud of the virtual eyeball in this way, the virtual eyeball fits the eye socket of the virtual face more closely, and the matching efficiency of the eyeball and the eye socket is improved.

Description

Eyeball data processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an eyeball data processing method and apparatus, a computer device, and a storage medium.
Background
With the continuous development of network technology, virtual human technology is widely applied to face reconstruction in fields such as 3D games, 3D film and television works, and short videos. However, in virtual human technology the reconstructed face does not include eyeballs, so the realism of the virtual human still needs to be improved. To improve the realism of the virtual human and produce a more appealing result, a virtual eyeball needs to be mounted on the reconstructed 3D face. That is, given a three-dimensional model of a virtual face and a three-dimensional model of a virtual eyeball, matching between the virtual eyeball and the eye socket of the virtual face must be realized.
At present, the main method for matching the virtual eyeball with the eye socket of the virtual face is to approximate the position of the virtual eyeball on the x and y coordinate axes from the 3D coordinates of the eye corners and of the points on the upper and lower eye sockets of the virtual face, and then to repeatedly adjust the value in the z direction until clipping (the eyeball model penetrating the face mesh) occurs. However, repeatedly trying z values in this way is inefficient, and the fit between the virtual eyeball and the eye socket of the virtual face is poor, which reduces the realism of the virtual human.
Disclosure of Invention
The embodiment of the application provides an eyeball data processing method and apparatus, a computer device, and a storage medium, which can quickly and accurately match a virtual eyeball with the eye socket of a virtual face, thereby improving the realism of the virtual human.
In a first aspect, an embodiment of the present application provides an eyeball data processing method, where the method includes:
acquiring a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face;
acquiring a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball;
determining a first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual human face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the first virtual human face.
In a second aspect, an embodiment of the present application provides an eyeball data processing apparatus, including:
the acquisition module is used for acquiring a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face;
the acquisition module is further used for acquiring a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball;
a processing module, configured to determine a first adjustment parameter based on coordinates of each key point in the first set of key points in the point cloud of the first virtual face and coordinates of each key point in the second set of key points in the point cloud of the virtual eyeball, where the first adjustment parameter is used to adjust the point cloud of the virtual eyeball so that the virtual eyeball matches with the orbit of the first virtual face.
In a third aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor, a communication interface, and a memory, where the processor, the communication interface, and the memory are connected to each other, where the memory stores a computer program, and the processor is configured to call the computer program to execute the eyeball data processing method provided in the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements an eyeball data processing method provided by an embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the eyeball data processing method provided by the embodiment of the application.
In the embodiment of the application, a computer device acquires a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face; acquires a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball; and determines a first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball matches the eye socket of the first virtual face. By adaptively adjusting the point cloud of the virtual eyeball according to the adjustment parameter, the matching efficiency of the eyeball and the eye socket is improved, the virtual eyeball fits the eye socket of the virtual face better, and the realism of the virtual human is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an eyeball data processing scheme provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of an eyeball data processing method provided in an embodiment of the present application;
fig. 3 is a schematic diagram of a first virtual face according to an embodiment of the present application;
fig. 4 is a schematic view of a virtual eyeball according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating the matching effect between the eyeball and the orbit provided by the embodiment of the application;
fig. 6 is a schematic diagram of an eyeball data processing device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to facilitate understanding of the embodiments of the present application, the eyeball data processing method of the present application is described below.
In order to improve the realism of the virtual human, the embodiment of the application provides an eyeball data processing scheme. Referring to fig. 1, fig. 1 is a schematic diagram of the eyeball data processing scheme provided in the embodiment of the present application; the general implementation process of the scheme is described below with reference to fig. 1. First, the computer device 101 acquires a first virtual face 102 and a virtual eyeball 103, and mounts the virtual eyeball 103 in the eye socket of the first virtual face 102 in response to an operation instruction of a user for the virtual eyeball 103 and the first virtual face 102. Second, the computer device samples points around the eye socket of the first virtual face 102, acquires a first key point set (i.e., the key point set on the eye socket, also called the target) corresponding to the eye socket of the first virtual face 102 from the point cloud of the first virtual face 102, and acquires a second key point set (i.e., the key point set on the virtual eyeball, also called the source) corresponding to the first key point set from the point cloud of the virtual eyeball 103. Then, the computer device 101 determines adjustment parameters such as the rotation amount R, the translation amount t, and the scaling amount s that transform the point cloud of the virtual eyeball 103 to the point cloud of the eye socket of the first virtual face, based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face 102 and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball 103. Finally, each point cloud point included in the point cloud of the virtual eyeball 103 is adjusted according to R, t, and s, so that the point cloud of the virtual eyeball 103 is aligned with the point cloud of the eye socket of the first virtual face 102.
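For orientation only, the overall flow can be sketched as the following Python pipeline. The helper names (find_corresponding_keypoints, estimate_similarity_transform) are illustrative stand-ins for the steps detailed later in this description (and sketched further below), not functions defined by the patent.

```python
import numpy as np

def mount_eyeball(face_points, eye_points, orbit_index):
    """Illustrative end-to-end flow: align an eyeball point cloud to a face's eye socket."""
    # Step 1: key points on the eye socket of the face (the "target" set).
    target = face_points[orbit_index]                       # shape (N, 3)
    # Step 2: for each orbit key point, pick the closest eyeball vertex (the "source" set).
    source_index = find_corresponding_keypoints(eye_points, target)
    source = eye_points[source_index]                       # shape (N, 3)
    # Step 3: rotation R, translation t and scaling s mapping source onto target.
    s, R, t = estimate_similarity_transform(source, target)
    # Step 4: apply the adjustment to every point of the eyeball point cloud.
    return (s * (R @ eye_points.T)).T + t
```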
Practice shows that the eyeball data processing scheme provided by the embodiment of the application can have the following beneficial effects. First, by acquiring the first key point set corresponding to the eye socket of the first virtual face and the second key point set in the virtual eyeball corresponding to the first key point set, and determining a first adjustment parameter based on the coordinates of each key point in the first key point set and the coordinates of each key point in the second key point set, a virtual eyeball at any position can be combined seamlessly with the eye socket of a virtual face at any position according to the first adjustment parameter, so the clipping problem is avoided, the realism of the virtual human is improved, and the user experience is better. Second, because the first adjustment parameter includes one or more of a rotation amount, a translation amount, and a scaling amount, the scheme can cope with eye sockets of different sizes across different virtual humans, and the virtual eyeball can fit the eye socket of the virtual human closely based on the scaling amount. The matching of the virtual eyeball with the eye socket of the virtual face can thus be completed more efficiently and quickly and produces a more attractive result, so the scheme is easier to use in corresponding products and improves their user experience.
It should be noted that: in a specific implementation, the above scheme can be executed by a computer device, and the computer device can be a terminal or a server; among others, the terminals mentioned herein may include but are not limited to: smart phones, tablet computers, notebook computers, desktop computers, smart watches, smart televisions, smart vehicle terminals, and the like; various clients (APPs) can be operated in the terminal, such as a video playing client, a social client, a browser client, an information flow client, an education client, and the like. The server mentioned here may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, a cloud server providing basic cloud computing services such as cloud service, cloud database, cloud computing, cloud function, cloud storage, web service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data and artificial intelligence platform, and the like. Moreover, the computer device mentioned in the embodiment of the present application may be located outside the blockchain network, or may be located inside the blockchain network, which is not limited to this; the blockchain network is a network formed by a peer-to-peer network (P2P network) and blockchains, and a blockchain is a novel application model of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanism, encryption algorithm, etc., and is essentially a decentralized database, which is a string of data blocks (or called blocks) associated by using cryptography.
The eyeball data processing method provided by the embodiment of the application can be implemented based on Artificial Intelligence (AI) technology. Artificial intelligence is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic AI technologies generally include technologies such as sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
The eyeball data processing method provided by the embodiment of the application mainly relates to the Computer Vision (CV) technology within AI. Computer vision is the science of studying how to make machines "see"; more specifically, it uses cameras and computers instead of human eyes to identify, track and measure targets, and further performs image processing so that the result becomes an image more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping.
It should be noted that the present application can be applied to various scenarios, including but not limited to cloud technology, artificial intelligence, smart traffic, driving assistance, and the like.
Based on the above scheme, an embodiment of the present application provides an eyeball data processing method, please refer to fig. 2, and fig. 2 is a schematic flow chart of the eyeball data processing method provided in the embodiment of the present application. The method is executed by a computer device, as shown in fig. 2, the eyeball data processing method may include the following steps S201 to S203:
S201, acquiring a first key point set corresponding to an eye socket of the first virtual face from the point cloud of the first virtual face.
A point cloud is a massive set of points describing the surface characteristics of a target; the point cloud of the first virtual face is the set of all point cloud points on the surface of the first virtual face.
The first key point set is the set of key points corresponding to the eye socket of the first virtual face. Key points, combined with local feature descriptors, form key point descriptors, which are usually used to represent the original data and are representative and descriptive. Key points can be used to speed up subsequent data processing.
In an alternative embodiment, the computer device obtaining a first set of key points corresponding to the eye socket of the first virtual face from the point cloud of the first virtual face includes: acquiring identification information of key points corresponding to the eye socket of the first virtual face from a target topological structure corresponding to the first virtual face, wherein the target topological structure comprises identification information of key points corresponding to each part of the first virtual face; and acquiring a first key point set corresponding to the eye socket from the point cloud of the first virtual face based on the identification information of the key points corresponding to the eye socket, wherein the first key point set comprises the identification information of the key points corresponding to the eye socket.
It is understood that topology in 3D modeling refers to the point-line-surface layout, structure, and connection of the polygonal mesh model. The target topology structure refers to the point-line-surface layout, structure, and connection condition of the first virtual face model, and indicates identification information of key points corresponding to each part in a virtual face (such as the first virtual face, the second virtual face, and the like), such as identification information of eye socket key points, identification information of nose key points, identification information of mouth key points, and the like.
Taking the above-mentioned first virtual face as an example, assume that the target topology corresponding to the first virtual face includes 100 key points, denoted a1, a2, a3, …, a100, where 1, 2, …, 100 is the identification information of each key point (also called the subscript of the key point). The computer device may obtain the subscripts (e.g., 51, 52, …, 70) of the N (e.g., 20) key points corresponding to the eye socket of the first virtual face from these 100 key points. Based on the subscripts of the 20 key points corresponding to the eye socket, a set of these 20 key points, which includes their subscripts and is denoted index-T, may be obtained from the point cloud of the first virtual face. Referring to fig. 3, fig. 3 is a schematic view of a first virtual face according to an embodiment of the present disclosure; in fig. 3, the black dots are the key points of the eye socket of the first virtual face.
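A minimal sketch of this subscript-based lookup, assuming the face point cloud is stored as a NumPy array whose row order matches the topology's key-point subscripts; all names and values below are illustrative, not taken from the patent.

```python
import numpy as np

# Hypothetical data: a face point cloud whose rows are ordered by the topology's key-point subscripts.
face_points = np.random.rand(100, 3)        # (x, y, z) per point, stand-in for the real face mesh

# Identification information (subscripts) of the eye-socket key points read from the target topology.
index_T = np.arange(51, 71)                 # e.g. 20 eye-socket key points, treated as array indices here

# First key point set: the eye-socket key points looked up directly in the face point cloud.
first_keypoint_set = face_points[index_T]   # shape (20, 3)
```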
In an alternative embodiment, the computer device may obtain the first key point set corresponding to the eye socket of the first virtual face from the point cloud of the first virtual face through a point cloud key point extraction algorithm. Point cloud key point extraction algorithms include, but are not limited to, the Scale-Invariant Feature Transform (SIFT) algorithm, the Harris key point extraction algorithm, the Intrinsic Shape Signatures (ISS) algorithm, and the like.
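As an illustration of this alternative, the sketch below extracts ISS key points with the Open3D library; the radii are arbitrary example values that would need tuning to the mesh scale, and whether such detector-based key points line up with the eye socket depends on the data, so this only sketches the extraction step, not the topology-based selection described above.

```python
import numpy as np
import open3d as o3d

# Wrap an (M, 3) face point cloud (illustrative random data) in an Open3D point cloud.
face_points = np.random.rand(5000, 3)
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(face_points))

# ISS key point detection; salient_radius and non_max_radius are example values only.
keypoints = o3d.geometry.keypoint.compute_iss_keypoints(
    pcd, salient_radius=0.02, non_max_radius=0.02)
print(np.asarray(keypoints.points).shape)   # coordinates of the detected key points
```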
S202, acquiring a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball.
The second key point set is a key point set corresponding to the virtual eyeball, wherein the key points corresponding to the virtual eyeball correspond to the key points corresponding to the eye socket of the first virtual face.
Optionally, before acquiring the second key point set corresponding to the first key point set from the point cloud of the virtual eyeball, the user may manually mount the virtual eyeball on the eye socket of the first virtual human face. Alternatively, the computer device may obtain an initial position of the virtual eyeball in the orbit of the first virtual face and store the initial position.
In an optional implementation, the obtaining, by the computer device, a second set of keypoints corresponding to the first set of keypoints from the point cloud of the virtual eyeball includes: acquiring at least one vertex included in a point cloud of a virtual eyeball; and determining a second key point set corresponding to the first key point set from the at least one vertex according to the distance between the at least one vertex and the key points included in the first key point set.
In an optional embodiment, the computer device determining, from the at least one vertex, a second key point set corresponding to the first key point set includes: for any key point in the first key point set, determining the distance between each vertex and that key point based on the coordinates of each vertex in the at least one vertex and the coordinates of that key point; acquiring the target vertex with the minimum distance to that key point; and taking the target vertex as the key point corresponding to that key point in the point cloud of the virtual eyeball, and adding the target vertex to the second key point set corresponding to the first key point set.
Optionally, the computer device may use a K-nearest neighbor (KNN) algorithm to find, in the point cloud of the virtual eyeball, the second key point set corresponding to the N orbit key points, denoted as set S.
The process of searching for the key points of the virtual eyeball corresponding to the key points of the orbit with the KNN algorithm is as follows: for each orbit key point in the first key point set, query its nearest vertex in the point cloud of the virtual eyeball, take that vertex as the corresponding eyeball key point, and record its subscript.
Through this search, the computer device obtains the subscript set of the key points in the point cloud of the virtual eyeball that correspond to the orbit key points, denoted index-S. Please refer to fig. 4; fig. 4 is a schematic diagram of the virtual eyeball key points according to an embodiment of the present disclosure. In fig. 4, the black dots are the key points of the virtual eyeball.
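A compact sketch of this nearest-vertex search, assuming NumPy arrays and SciPy's k-d tree; variable names such as eye_points and orbit_keypoints are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_corresponding_keypoints(eye_points, orbit_keypoints):
    """For each orbit key point, return the index of the closest vertex of the eyeball point cloud."""
    tree = cKDTree(eye_points)                        # k-d tree over the eyeball vertices
    _, index_S = tree.query(orbit_keypoints, k=1)     # 1-nearest-neighbour query per orbit key point
    return index_S                                    # subscripts of the second key point set

# Illustrative usage with stand-in data.
eye_points = np.random.rand(800, 3)                   # virtual eyeball point cloud
orbit_keypoints = np.random.rand(20, 3)               # first key point set (index-T coordinates)
index_S = find_corresponding_keypoints(eye_points, orbit_keypoints)
```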
S203, determining a first adjusting parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjusting parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the first virtual face.
In an alternative embodiment, the computer device may obtain, according to index-S, the coordinates of the key points corresponding to the virtual eyeball from the point cloud of the virtual eyeball; these represent the initial 3D coordinates (aj, bj, cj) of the eyeball and are denoted Source. According to index-T, it may obtain the coordinates of the key points corresponding to the eye socket from the point cloud of the first virtual face; these represent the initial 3D coordinates (xi, yi, zi) of the eye socket of the virtual face and are denoted Target.
In an optional implementation, the computer device determining the first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball includes: calling the iterative closest point method to align the coordinates of each key point in the first key point set in the point cloud of the first virtual face with the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball to obtain the first adjustment parameter, wherein the first adjustment parameter comprises one or more of a rotation amount R, a translation amount t, and a scaling amount s.
The Iterative Closest Point (ICP) algorithm is the most classical point cloud registration algorithm. It comprises two parts: searching for corresponding points and solving for the pose. Using this algorithm, a matching relationship between the point sets can be sought, and solving it yields the translation and rotation between the two point sets. Optionally, the computer device may use the ICP algorithm to find k pairs of corresponding points between the virtual eyeball point cloud and the orbit point cloud of the virtual face and calculate the rigid transformation Ti that minimizes the sum of the distances of the k pairs of matching points. The virtual eyeball point cloud is then transformed into the coordinate system of the orbit point cloud of the virtual face using the obtained matrix, and the error between the transformed virtual eyeball point cloud and the orbit point cloud of the virtual face is estimated. If the error is greater than the threshold, iteration continues until the given error requirement is met. That is, in each iteration the whole model moves a little closer: the closest points are found again, the rotation and translation matrices are recalculated, and the variance error is compared; if the requirement is not satisfied, the iteration continues. Finally, the adjustment parameters that transform the virtual eyeball point cloud to the orbit point cloud of the virtual face are obtained.
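With the correspondences already fixed by index-T and index-S, a single closed-form similarity estimate (Umeyama-style) is one common way to recover R, t and s; an ICP loop would simply repeat the correspondence search and this estimate. The sketch below shows that single step under this assumption and is not necessarily the exact solver used in the patent.

```python
import numpy as np

def estimate_similarity_transform(source, target):
    """Estimate scale s, rotation R and translation t with s * R @ source_i + t ~= target_i."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    src, tgt = source - mu_s, target - mu_t

    # Cross-covariance and SVD give the optimal rotation (with a reflection guard).
    H = tgt.T @ src / len(source)
    U, D, Vt = np.linalg.svd(H)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt

    # Scale from the singular values and the source variance; translation from the centroids.
    s = np.trace(np.diag(D) @ S) / (src ** 2).sum(axis=1).mean()
    t = mu_t - s * R @ mu_s
    return s, R, t
```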
In an alternative embodiment, after obtaining the first adjustment parameter, the computer device may adjust each point cloud point included in the point cloud of the virtual eyeball according to the first adjustment parameter, so that the point cloud of the virtual eyeball is aligned with the point cloud of the eye socket of the first virtual human face.
In an alternative embodiment, one virtual eyeball corresponds to one set of first adjustment parameters (i.e., R, t, s). Optionally, adjusting each point cloud point included in the point cloud of the virtual eyeball according to the first adjustment parameter may be implemented as multiplying each point cloud point of the virtual eyeball by the first adjustment parameter, i.e., applying the corresponding transformation to each point.
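Under the common convention that the adjustment acts as the similarity transform p' = s * R * p + t, applying it to every point of the eyeball point cloud can be sketched as follows; the exact form of the multiplication is not spelled out in the patent, so this is an assumption.

```python
import numpy as np

def adjust_eyeball(eye_points, s, R, t):
    """Apply the first adjustment parameter to every point cloud point of the virtual eyeball."""
    # p' = s * R @ p + t for each point p, vectorised over the whole (M, 3) array.
    return (s * (R @ eye_points.T)).T + t
```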
In the embodiment of the application, because the adjustment parameters include the scaling amount, even when the eye sockets of different virtual faces have inconsistent sizes, the point cloud of the virtual eyeball can be adjusted based on the scaling amount so that it is better aligned with the point cloud of the eye socket of the virtual face. Moreover, aligning the point cloud of the virtual eyeball with the point cloud of the eye socket of the virtual face using the adjustment parameters can effectively avoid clipping between the virtual eyeball and the eye socket of the virtual face.
In an optional embodiment, the computer device may further obtain a second virtual face; and if the second virtual face and the first virtual face correspond to the same topological structure, determining a second adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the second virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball. The second adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the second virtual face.
Because 3D points with the same identification information in the same topology represent the same semantic information, when the second virtual face and the first virtual face correspond to the same topology, the first key point set obtained from the point cloud of the first virtual face for the eye socket of the first virtual face may be directly used as the key point set corresponding to the eye socket of the second virtual face. For example, take key point A30: if point A30 in the topology corresponding to the first virtual face represents the left eye corner of the first virtual face, then, when the second virtual face corresponds to the same topology as the first virtual face, point A30 also represents the left eye corner of the second virtual face. That is to say, for any virtual face having the same topology as the first virtual face, the identification information of the key points corresponding to the eye socket does not change.
In this embodiment, the computer device determining the second adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the second virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball includes: calling the iterative closest point method to align the coordinates of each key point in the first key point set in the point cloud of the second virtual face with the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball to obtain the second adjustment parameter.
In this embodiment, the computer device may further adjust each point cloud point included in the point cloud of the virtual eyeball according to the second adjustment parameter, so that the point cloud of the virtual eyeball is aligned with the point cloud of the eye socket of the second virtual human face.
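Continuing the earlier sketches, reusing the same subscript sets for a second face with the same topology might look like this; face2_points stands in for the second face's point cloud and, like the helper names, is illustrative rather than taken from the patent.

```python
# Same topology: the eye-socket subscripts index_T and the eyeball subscripts index_S are reused as-is.
target2 = face2_points[index_T]                              # key points of the second face's eye socket
source = eye_points[index_S]                                 # key points of the virtual eyeball
s2, R2, t2 = estimate_similarity_transform(source, target2)  # second adjustment parameter
adjusted_eye = adjust_eyeball(eye_points, s2, R2, t2)        # align the eyeball with the second face
```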
In an alternative embodiment, the computer device may further obtain an initial position of the virtual eye in the eye socket of the first virtual face prior to adjusting the point cloud of the virtual eye based on the first adjustment parameter; adding the virtual eyeball to the orbit of the second virtual face based on the initial position.
From the foregoing, the computer device has stored an initial position of the virtual eye in the eye socket of the first virtual face. Then, the computer device may invoke the stored initial position when the second virtual face corresponds to the same topology as the first virtual face; the virtual eyeball is directly added to the orbit of the second virtual face based on the initial position. Therefore, the user does not need to manually mount the virtual eyeballs in the second virtual face again, and the efficiency of mounting the virtual eyeballs can be improved.
It can be understood that, when a plurality of virtual faces correspond to the same topology, the computer device only needs to perform step S201 and step S202 once. That is, for any other virtual face having the same topology as the first virtual face and any virtual eyeball having the same topology as the virtual eyeball, the computer device may determine different adjustment parameters based on the coordinates of each key point in the first key point set in the point cloud of that other virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball. Then, according to those adjustment parameters, each point cloud point included in the point cloud of the virtual eyeball is adaptively adjusted, so that the point cloud of the virtual eyeball is aligned with the point cloud of the eye socket of that virtual face.
In order to verify the accuracy of the eyeball data processing method provided by the embodiment of the application, tests were performed on virtual faces with different face shapes using this method. Please refer to fig. 5; fig. 5 is a schematic diagram of the matching effect between the eyeball and the orbit according to the embodiment of the present application. In fig. 5, pictures 501, 504, and 507 are schematic diagrams of virtual face 1, virtual face 2, and virtual face 3 without mounted virtual eyeballs; pictures 502, 505, and 508 are front views of virtual face 1, virtual face 2, and virtual face 3 after mounting the virtual eyeballs; and pictures 503, 506, and 509 are side views of virtual face 1, virtual face 2, and virtual face 3 after mounting the virtual eyeballs. As shown in fig. 5, the eyeball data processing method provided by the application can seamlessly combine eyes at any position with a virtual human at any position, no clipping occurs, and no defect is visible to the naked eye from either the front or the side view. Therefore, the eyeball data processing method provided by the application can improve the realism of the virtual human and gives a better user experience.
In the embodiment of the application, a computer device acquires a first key point set corresponding to the eye socket of a first virtual face from the point cloud of the first virtual face; acquires a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball; and determines a first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball matches the eye socket of the first virtual face. By adaptively adjusting the point cloud of the virtual eyeball according to the adjustment parameter, the matching efficiency of the eyeball and the eye socket is improved, the virtual eyeball fits the eye socket of the virtual face better, and the realism of the virtual human is improved.
In addition, in the embodiment of the present application, because the adjustment parameter is determined based on the coordinates of the key points corresponding to the virtual eyeball and the coordinates of the key points corresponding to the eye socket of the virtual face, even if different virtual faces are located in different coordinate systems, the computer device can determine the adjustment parameter by acquiring the coordinates of the key points corresponding to the eye socket of the virtual face and the coordinates of the key points corresponding to the virtual eyeball. Therefore, the method provided by the embodiment of the application can be applied to face reconstruction in many scenes such as 3D games, 3D film and television works, and short videos. Compared with the 3D Morphable Face Model (3DMM), the eyeball data processing method provided by the application can mount the virtual eyeball in the eye socket of the virtual face more efficiently and quickly and produce a more attractive result, so it is easier to use in corresponding products and improves their user experience.
It should be noted that, when the embodiment of the present application is applied to a specific product or technology, the first virtual face, the virtual eyeball, the second virtual face, and the like related to the embodiment of the present application are obtained after obtaining permission or approval of a user; and the collection, use and handling of the first virtual face, the virtual eyeball, the second virtual face, etc. are required to comply with relevant laws and regulations and standards of relevant countries and regions.
Based on the description of the related embodiments of the eyeball data processing method, the present application also proposes an eyeball data processing apparatus, which may be a computer program (including program code) running in a computer device. The eyeball data processing device can execute the eyeball data processing method shown in fig. 2; referring to fig. 6, fig. 6 is a schematic diagram of an eyeball data processing apparatus according to an embodiment of the present disclosure, where the eyeball data processing apparatus may include the following modules:
an obtaining module 601, configured to obtain a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face;
the obtaining module 601 is further configured to obtain a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball;
the processing module 602 is configured to determine a first adjustment parameter based on coordinates of each key point in the first set of key points in the point cloud of the first virtual human face and coordinates of each key point in the second set of key points in the point cloud of the virtual eyeball, where the first adjustment parameter is used to adjust the point cloud of the virtual eyeball so that the virtual eyeball is matched with the orbit of the first virtual human face.
In an optional embodiment, the obtaining module 601, when configured to obtain a first set of key points corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face, is specifically configured to:
acquiring identification information of key points corresponding to eye sockets of a first virtual face from a target topological structure corresponding to the first virtual face, wherein the target topological structure comprises identification information of key points corresponding to all parts in the first virtual face;
acquiring a first key point set corresponding to the eye socket from the point cloud of the first virtual face based on the identification information of the key points corresponding to the eye socket, wherein the first key point set comprises the identification information of the key points corresponding to the eye socket.
In an optional implementation manner, when the obtaining module 601 is configured to obtain, from a point cloud of a virtual eyeball, a second key point set corresponding to a first key point set, specifically:
acquiring at least one vertex included in a point cloud of a virtual eyeball;
and determining a second key point set corresponding to the first key point set from the at least one vertex according to the distance between the at least one vertex and the key points included in the first key point set.
In an optional implementation manner, the obtaining module 601, when configured to determine, according to the distance between at least one vertex and the key points included in the first key point set, a second key point set corresponding to the first key point set from the at least one vertex, is specifically configured to:
for any key point in the first key point set, determining the distance between each vertex and any key point based on the coordinates of each vertex in the at least one vertex and the coordinates of any key point;
acquiring a target vertex with the minimum distance to any key point;
and taking the target vertex as a corresponding key point of any key point in the point cloud of the virtual eyeball, and adding the target vertex into a second key point set corresponding to the first key point set.
In an optional implementation manner, the processing module 602, when configured to determine the first adjustment parameter based on coordinates of each key point in the first key point set in the point cloud of the first virtual face and coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, is specifically configured to:
and calling an iteration closest point method to align the coordinates of each key point in the first key point set in the point cloud of the first virtual face with the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball to obtain a first adjustment parameter.
In an alternative embodiment, the processing module 602 is further configured to adjust each point cloud point included in the point cloud of the virtual eyeball according to the first adjustment parameter, so that the point cloud of the virtual eyeball is aligned with the point cloud of the eye socket of the first virtual face, where the first adjustment parameter includes one or more of a rotation amount, a translation amount, and a scaling amount.
In an optional embodiment, the obtaining module 601 is further configured to obtain a second virtual face.
In an optional implementation manner, if the second virtual face corresponds to the same topology as the first virtual face, the processing module 602 is further configured to determine a second adjustment parameter based on coordinates of each key point in the first key point set in the point cloud of the second virtual face and coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, where the second adjustment parameter is used to adjust the point cloud of the virtual eyeball so that the virtual eyeball matches with the eye socket of the second virtual face.
In an alternative embodiment, the obtaining module 601 is further configured to obtain an initial position of the virtual eyeball in the eye socket of the first virtual human face before adjusting the point cloud of the virtual eyeball based on the first adjustment parameter.
In an alternative embodiment, the processing module 602 is further configured to add the virtual eyeball to the orbit of the second virtual face based on the initial position.
According to an embodiment of the present application, each step involved in the method shown in fig. 2 can be performed by each module in the eyeball data processing device shown in fig. 6. For example, steps S201, S202 shown in fig. 2 may be performed by the acquisition module 601 shown in fig. 6, and step S203 may be performed by the processing module 602 shown in fig. 6.
According to another embodiment of the present application, the eyeball data processing apparatus shown in fig. 6 may be constructed, and the eyeball data processing method of the embodiment of the present application realized, by running a computer program (including program code) capable of executing the steps involved in the corresponding method shown in fig. 2 on a general-purpose computer device that includes processing elements such as a Central Processing Unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM). The computer program may, for example, be recorded on a computer-readable storage medium, loaded into the computer device described above via the storage medium, and executed therein.
It can be understood that, for specific implementation and achievable beneficial effects of each module in the eyeball data processing apparatus provided in the embodiment of the present application, reference may be made to the description of the foregoing eyeball data processing method embodiment, and details are not described herein again.
Based on the description of the method embodiment and the device embodiment, the embodiment of the application further provides a computer device. Referring to fig. 7, the computer device at least includes a processor 701, a memory 702, and a communication interface 703. The processor 701, the memory 702 and the communication interface 703 may be connected by a bus 704 or by other means, and the embodiment of the present application is exemplified by being connected by the bus 704.
The processor 701 (or Central Processing Unit, CPU) is the computing core and control core of the computer device; it can parse various instructions in the computer device and process various data of the computer device. For example, the CPU can parse a power-on/power-off instruction sent to the computer device by a user and control the computer device to perform the power-on or power-off operation; as another example, the CPU can transfer various types of interactive data between the internal structures of the computer device, and so on. The communication interface 703 may optionally include a standard wired interface or a wireless interface (e.g., Wi-Fi, a mobile communication interface, etc.), and is controlled by the processor 701 to transmit and receive data. The memory 702 is the memory device of the computer device and is used to store computer programs and data. It is understood that the memory 702 here may include the built-in memory of the computer device and, of course, the expansion memory supported by the computer device. The memory 702 provides storage space that stores the operating system of the computer device, which may include, but is not limited to: a Windows system, a Linux system, an Android system, an iOS system, etc., which is not limited in this application. In an alternative implementation, the processor 701 according to the embodiment of the present application may perform the following operations by executing the computer program stored in the memory 702:
acquiring a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face;
acquiring a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball;
and determining a first adjusting parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjusting parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the first virtual face.
In an alternative embodiment, the processor 701, when configured to obtain the first set of key points corresponding to the eye socket of the first virtual face from the point cloud of the first virtual face, is specifically configured to:
acquiring identification information of key points corresponding to eye sockets of a first virtual face from a target topological structure corresponding to the first virtual face, wherein the target topological structure comprises identification information of key points corresponding to all parts in the first virtual face;
acquiring a first key point set corresponding to the eye socket from the point cloud of the first virtual face based on the identification information of the key points corresponding to the eye socket, wherein the first key point set comprises the identification information of the key points corresponding to the eye socket.
In an optional implementation manner, when the processor 701 is configured to obtain, from the point cloud of the virtual eyeball, the second keypoint set corresponding to the first keypoint set, the processor is specifically configured to:
acquiring at least one vertex included in a point cloud of a virtual eyeball;
and determining a second key point set corresponding to the first key point set from the at least one vertex according to the distance between the at least one vertex and the key points included in the first key point set.
In an alternative embodiment, the processor 701, when being configured to determine a second keypoint set corresponding to the first keypoint set from the at least one vertex according to a distance between the at least one vertex and the keypoints included in the first keypoint set, is configured to:
for any key point in the first key point set, determining the distance between each vertex and any key point based on the coordinates of each vertex in the at least one vertex and the coordinates of any key point;
acquiring a target vertex with the minimum distance to any key point;
and taking the target vertex as a corresponding key point of any key point in the point cloud of the virtual eyeball, and adding the target vertex into a second key point set corresponding to the first key point set.
In an optional implementation manner, the processor 701, when configured to determine the first adjustment parameter based on coordinates of each key point in the first key point set in the point cloud of the first virtual face and coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, is specifically configured to:
and calling an iteration closest point method to align the coordinates of each key point in the first key point set in the point cloud of the first virtual face with the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball to obtain a first adjustment parameter.
In an alternative embodiment, the processor 701 is further configured to adjust each point cloud point included in the point cloud of the virtual eyeball according to a first adjustment parameter, so that the point cloud of the virtual eyeball is aligned with the point cloud of the eye socket of the first virtual human face, where the first adjustment parameter includes one or more of a rotation amount, a translation amount, and a scaling amount.
In an alternative embodiment, the processor 701 is further configured to:
acquiring a second virtual face;
if the second virtual face and the first virtual face correspond to the same topological structure, determining a second adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the second virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the second adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the second virtual face.
In an alternative embodiment, the processor 701 is further configured to:
acquiring an initial position of the virtual eyeball in the eye socket of the first virtual face before adjusting the point cloud of the virtual eyeball based on the first adjustment parameter;
adding the virtual eyeball to the orbit of the second virtual face based on the initial position.
In a specific implementation, the processor 701, the memory 702, and the communication interface 703 described in this embodiment may execute an implementation manner of the computer device described in the eyeball data processing method provided in this embodiment, and may also execute an implementation manner described in the eyeball data processing apparatus provided in this embodiment, which is not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is enabled to execute the eyeball data processing method according to any one of the foregoing possible implementation manners. For specific implementation, reference may be made to the foregoing description, which is not repeated herein.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the eyeball data processing method of any one of the possible implementation manners. For specific implementation, reference may be made to the foregoing description, which is not repeated herein.
It should be noted that, for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The above disclosure describes only some embodiments of the present application and certainly is not intended to limit the scope of the claims. Equivalent changes made according to the claims of the present application therefore still fall within the scope of the present application.

Claims (12)

1. A method of eyeball data processing, the method comprising:
acquiring a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face;
acquiring a second key point set corresponding to the first key point set from a point cloud of a virtual eyeball;
determining a first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the first virtual face.
2. The method of claim 1, wherein the acquiring the first key point set corresponding to the eye socket of the first virtual face from the point cloud of the first virtual face comprises:
acquiring identification information of key points corresponding to the eye socket of the first virtual face from a target topological structure corresponding to the first virtual face, wherein the target topological structure comprises identification information of key points corresponding to each part of the first virtual face;
acquiring the first key point set corresponding to the eye socket from the point cloud of the first virtual face based on the identification information of the key points corresponding to the eye socket, wherein the first key point set comprises the identification information of the key points corresponding to the eye socket.
3. The method according to claim 1, wherein the acquiring the second key point set corresponding to the first key point set from the point cloud of the virtual eyeball comprises:
acquiring at least one vertex included in the point cloud of the virtual eyeball;
and determining the second key point set corresponding to the first key point set from the at least one vertex according to distances between the at least one vertex and the key points included in the first key point set.
4. The method according to claim 3, wherein the determining the second key point set corresponding to the first key point set from the at least one vertex according to the distances between the at least one vertex and the key points included in the first key point set comprises:
for any key point in the first key point set, determining a distance between each vertex of the at least one vertex and the key point based on the coordinates of the vertex and the coordinates of the key point;
acquiring a target vertex with the minimum distance to the key point;
and taking the target vertex as the key point in the point cloud of the virtual eyeball that corresponds to the key point, and adding the target vertex to the second key point set corresponding to the first key point set.
5. The method according to any one of claims 1 to 4, wherein the determining the first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball comprises:
calling an iterative closest point (ICP) method to align the coordinates of each key point in the first key point set in the point cloud of the first virtual face with the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, so as to obtain the first adjustment parameter.
6. The method of claim 1, further comprising:
adjusting each point cloud point included in the point cloud of the virtual eyeball according to the first adjustment parameter, so as to align the point cloud of the virtual eyeball with the point cloud of the eye socket of the first virtual face, wherein the first adjustment parameter comprises one or more of a rotation amount, a translation amount, and a scaling amount.
7. The method of claim 1, further comprising:
acquiring a second virtual face;
if the second virtual face and the first virtual face correspond to the same topological structure, determining a second adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the second virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the second adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the second virtual face.
8. The method of claim 7, further comprising:
acquiring an initial position of the virtual eyeball in the eye socket of the first virtual face before adjusting the point cloud of the virtual eyeball based on the first adjustment parameter;
adding the virtual eyeball to the eye socket of the second virtual face based on the initial position.
9. An eyeball data processing apparatus, characterized in that the apparatus comprises:
an acquisition module, used for acquiring a first key point set corresponding to an eye socket of a first virtual face from a point cloud of the first virtual face;
the acquisition module is further used for acquiring a second key point set corresponding to the first key point set from the point cloud of the virtual eyeball;
a processing module, used for determining a first adjustment parameter based on the coordinates of each key point in the first key point set in the point cloud of the first virtual face and the coordinates of each key point in the second key point set in the point cloud of the virtual eyeball, wherein the first adjustment parameter is used for adjusting the point cloud of the virtual eyeball so that the virtual eyeball is matched with the eye socket of the first virtual face.
10. A computer-readable storage medium, characterized in that a computer program is stored therein, which when executed by a processor implements the eyeball data processing method according to any one of claims 1 to 8.
11. A computer device, comprising a memory, a communication interface, and a processor, wherein the memory, the communication interface, and the processor are interconnected; the memory stores a computer program, and the processor calls the computer program stored in the memory to implement the eyeball data processing method according to any one of claims 1 to 8.
12. A computer program product, characterized in that it comprises a computer program or computer instructions which, when executed by a processor, implement the eyeball data processing method according to any one of claims 1 to 8.
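To make the key point selection described in claims 2 to 4 above concrete, the following NumPy sketch indexes the eye-socket key points by their identifiers and pairs each with its nearest eyeball vertex. All names (match_keypoints, face_pts, orbit_ids, eyeball_vertices) are illustrative assumptions, not terms from the patent.

    import numpy as np

    def match_keypoints(face_pts, orbit_ids, eyeball_vertices):
        """face_pts: (P, 3) point cloud of the virtual face;
        orbit_ids: identifiers (indices) of the eye-socket key points in the topology;
        eyeball_vertices: (V, 3) vertices of the virtual eyeball point cloud.
        Returns the first key point set and the corresponding second key point set."""
        first_set = face_pts[orbit_ids]                    # claim 2: look up key points by identifier
        # claims 3-4: for every eye-socket key point, take the eyeball vertex at minimum distance
        diffs = first_set[:, None, :] - eyeball_vertices[None, :, :]
        dists = np.linalg.norm(diffs, axis=2)              # (N, V) table of pairwise distances
        nearest = dists.argmin(axis=1)                     # index of the target vertex per key point
        second_set = eyeball_vertices[nearest]
        return first_set, second_set

The resulting pairs could then be fed to a rigid or similarity alignment, such as the Umeyama-style helper sketched earlier in the description, to obtain the first adjustment parameter.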
CN202210298578.4A 2022-03-24 2022-03-24 Eyeball data processing method and device, computer equipment and storage medium Pending CN114693872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210298578.4A CN114693872A (en) 2022-03-24 2022-03-24 Eyeball data processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210298578.4A CN114693872A (en) 2022-03-24 2022-03-24 Eyeball data processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114693872A true CN114693872A (en) 2022-07-01

Family

ID=82138457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210298578.4A Pending CN114693872A (en) 2022-03-24 2022-03-24 Eyeball data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114693872A (en)

Similar Documents

Publication Publication Date Title
CN110717977B (en) Method, device, computer equipment and storage medium for processing game character face
US20210334942A1 (en) Image processing method and apparatus, device, and storage medium
WO2021052375A1 (en) Target image generation method, apparatus, server and storage medium
JP6046501B2 (en) Feature point output device, feature point output program, feature point output method, search device, search program, and search method
CN109754464B (en) Method and apparatus for generating information
CN111583399A (en) Image processing method, device, equipment, medium and electronic equipment
CN111754622B (en) Face three-dimensional image generation method and related equipment
US20230100427A1 (en) Face image processing method, face image processing model training method, apparatus, device, storage medium, and program product
US20230401799A1 (en) Augmented reality method and related device
WO2023184817A1 (en) Image processing method and apparatus, computer device, computer-readable storage medium, and computer program product
CN112085835A (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN114202615A (en) Facial expression reconstruction method, device, equipment and storage medium
CN115984447A (en) Image rendering method, device, equipment and medium
CN113902848A (en) Object reconstruction method and device, electronic equipment and storage medium
CN113822114A (en) Image processing method, related equipment and computer readable storage medium
CN111814811A (en) Image information extraction method, training method and device, medium and electronic equipment
CN117011449A (en) Reconstruction method and device of three-dimensional face model, storage medium and electronic equipment
CN114693872A (en) Eyeball data processing method and device, computer equipment and storage medium
CN113223137B (en) Generation method and device of perspective projection human face point cloud image and electronic equipment
CN111738087B (en) Method and device for generating face model of game character
CN115965839A (en) Image recognition method, storage medium, and apparatus
CN115708135A (en) Face recognition model processing method, face recognition method and device
CN115760888A (en) Image processing method, image processing device, computer and readable storage medium
WO2024104144A1 (en) Image synthesis method and apparatus, storage medium, and electrical device
CN111222448A (en) Image conversion method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination