CN112562048A - Control method, device and equipment of three-dimensional model and storage medium - Google Patents

Control method, device and equipment of three-dimensional model and storage medium

Info

Publication number
CN112562048A
Authority
CN
China
Prior art keywords
pose
dimensional model
corrected
parameters
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011485517.6A
Other languages
Chinese (zh)
Inventor
彭昊天 (Peng Haotian)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011485517.6A
Publication of CN112562048A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a control method, apparatus, device and storage medium for a three-dimensional model, relating to the fields of computer vision, augmented reality, deep learning and the like. The specific implementation scheme is as follows: acquire the positions of the feature points of a target person; determine pose conversion parameters of the three-dimensional model using the initialized pose of the three-dimensional model and the positions of the feature points of the target person; and adjust the initialized pose using the pose conversion parameters to obtain the actual pose. With this scheme, the three-dimensional model can be controlled with relatively little computation.

Description

Control method, device and equipment of three-dimensional model and storage medium
Technical Field
The present application relates to the field of image processing, and more particularly to the fields of computer vision, augmented reality, and deep learning.
Background
When a personalized three-dimensional model is generated from a user image and controlled for dynamic display, the complex algorithms involved introduce a large amount of data redundancy, making applications that generate personalized three-dimensional animation excessively heavy.
As a result, such applications are difficult to deploy on mobile terminals.
Disclosure of Invention
The application provides a control method, apparatus, device and storage medium for a three-dimensional model.
According to one aspect of the present application, a method of controlling a three-dimensional model is provided, which may include the following steps:
acquiring the positions of the feature points of a target person;
determining pose conversion parameters of the three-dimensional model using the initialized pose of the three-dimensional model and the positions of the feature points of the target person;
and adjusting the initialized pose using the pose conversion parameters to obtain the actual pose.
According to another aspect of the present application, a control apparatus for a three-dimensional model is provided, which may include the following components:
a target person feature point position acquisition module, configured to acquire the positions of the feature points of a target person;
a pose conversion parameter determination module, configured to determine pose conversion parameters of the three-dimensional model using the initialized pose of the three-dimensional model and the positions of the feature points of the target person;
and an actual pose generation module, configured to adjust the initialized pose using the pose conversion parameters to obtain the actual pose.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method provided by any one of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method provided by any one of the embodiments of the present application.
According to another aspect of the application, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the method of any of the embodiments of the application.
With this scheme, the three-dimensional model is adjusted using the difference between the feature points of the target person and the pose of the three-dimensional model, so the model can be controlled with relatively little computation. This makes it practical to move the execution of three-dimensional model control to a mobile terminal and to deploy such control in real products.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of a method of controlling a three-dimensional model according to the present application;
FIG. 2 is a schematic diagram of a face model according to the present application;
FIG. 3 is a schematic illustration of the interaction of nodes of a face region according to the present application;
FIG. 4 is a flow chart of determining pose transition parameters according to the present application;
FIG. 5 is a schematic diagram of three axes of a corresponding coordinate system of an object pose matrix according to the present application;
FIG. 6 is a comparison of coordinate systems containing shear information according to the present application;
FIG. 7 is a comparison of coordinate systems with the shear information corrected according to the present application;
FIG. 8 is a schematic diagram of a control device according to the three-dimensional model of the present application;
fig. 9 is a block diagram of an electronic device for implementing the method for controlling a three-dimensional model according to the embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the purpose of understanding, which are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, an embodiment of the present application provides a method for controlling a three-dimensional model, which may include the following steps:
s101: acquiring the position of the characteristic point of the target person;
s102: determining pose conversion parameters of the three-dimensional model by using the initialized pose of the three-dimensional model and the positions of the characteristic points of the target character;
s103: and adjusting the initialized pose by using the pose conversion parameters to obtain the actual pose.
The method may be executed by a device with a screen, such as a smartphone or a smart speaker.
The positions of the feature points of the target person can be obtained from an image. In the embodiments of the present application, only the face of the target person is used as an example; in an actual scene, other parts, such as the torso of the target person, may also be included.
For example, an image of the target person may be acquired in advance by downloading, shooting, or the like. The feature points may be points that characterize locations such as the facial contour and the facial features of the target person. The positions of the feature points are the coordinates of each of these points on the facial contour and the facial features.
Alternatively, as shown in fig. 2, the three-dimensional model may include a skinned skeleton model and a patch model of the target person.
The skinned skeleton model is composed of tree-structured nodes (Nodes) in a hierarchy and the skinning regions attached to those nodes. Each Node stores a local rigid pose (TRS: Translation, Rotation and Scale).
The local rigid pose is represented as a 4 x 4 matrix. The TRS information of the nodes is propagated layer by layer from the upper layer (root node) to the lower layers (child nodes), and the global rigid pose of a face region can be computed from the local rigid pose of each node.
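The layer-by-layer propagation of TRS information described above can be sketched as follows (the node dictionary layout, the use of a 3 x 3 rotation matrix, and the function names are assumptions for illustration):

```python
import numpy as np

def trs_matrix(t, r, s):
    # Build a 4x4 local rigid pose from a translation vector, a 3x3 rotation
    # matrix, and per-axis scale values.
    m = np.eye(4)
    m[:3, :3] = r @ np.diag(s)
    m[:3, 3] = t
    return m

def global_pose(node, parent_global=np.eye(4), out=None):
    # Propagate poses from the root down: each node's global rigid pose is
    # its parent's global pose times its own local pose.
    if out is None:
        out = {}
    g = parent_global @ node["local"]
    out[node["name"]] = g
    for child in node.get("children", []):
        global_pose(child, g, out)
    return out
```

A node's global translation accumulates its ancestors' translations, which is how modifying an upper-layer node affects all of its descendants.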
Each node contains a skinning region covering three-dimensional vertices (Vertex). Each three-dimensional vertex is controlled by at least one node, so that its position (three-dimensional coordinates) can be changed.
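One common way to realize this kind of vertex control is linear blend skinning; the sketch below assumes per-vertex weights and inverse bind matrices, neither of which is specified in the text:

```python
import numpy as np

def skin_vertices(rest_vertices, weights, node_globals, inv_binds):
    # Linear blend skinning sketch: each vertex is moved by a weighted sum of
    # its controlling nodes' global poses. rest_vertices is (n, 3), weights is
    # (n, num_nodes), node_globals and inv_binds are lists of 4x4 matrices.
    n = rest_vertices.shape[0]
    homog = np.hstack([rest_vertices, np.ones((n, 1))])   # homogeneous coords
    out = np.zeros((n, 3))
    for j, (g, ib) in enumerate(zip(node_globals, inv_binds)):
        out += weights[:, j:j+1] * (homog @ (g @ ib).T)[:, :3]
    return out
```

With identity node poses the vertices stay at their rest positions; moving a node's global pose moves every vertex weighted to it.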
Referring to fig. 3, Node_Root in fig. 3 represents the root node. Starting from Node_Root, the hierarchy of the nodes decreases from left to right. For example, when Node_F (corresponding to the nose region) is modified, its child node Node_G (corresponding to the nose-tip region) and Node_G's child node Node_H (corresponding to the triangular region of the nose tip) are affected as well.
Each triangle of the face model in fig. 2 is a patch, and all patches together form the patch model. The patch model and the skinned skeleton model have the same topology, i.e., the number of vertices, the relative positions of the vertices and the connection order of the vertices are the same.
The initialized pose of the three-dimensional model may include the skinned skeleton model, i.e., it is represented by 4 x 4 matrices. In addition, the initialized pose also includes the positions of the three-dimensional vertices in each node. The three-dimensional vertices of each node serve as the feature points of the three-dimensional model.
Based on these positions, the difference between the positions of the feature points of the three-dimensional model and the positions of the feature points of the target person is calculated, and the pose conversion parameters can be obtained from this difference. For example, using the coordinate differences of the feature points, the pose conversion parameters can adjust the initialized pose of the three-dimensional model to obtain the actual pose.
In addition, when the feature points are points on the facial contour and facial features of the target person, the initialized pose of the three-dimensional model may be the coordinates of the corresponding feature points of the three-dimensional model, such as its facial contour and facial features. The coordinate differences between the feature points of the three-dimensional model and those of the target person can then be used as the pose conversion parameters to adjust the initialized pose of the three-dimensional model and obtain the actual pose.
With this scheme, the three-dimensional model is adjusted using the difference between the feature points of the target person and the pose of the three-dimensional model, so the model can be controlled with relatively little computation. This makes it practical to move the execution of three-dimensional model control to a mobile terminal and to deploy such control in real products.
As shown in fig. 4, in one embodiment, step S102 may include the following sub-steps:
s1021: acquiring a rigid pose parameter corresponding to the initialized pose and the position of a feature point of the three-dimensional model;
s1022: calculating the difference between the positions of the characteristic points of the three-dimensional model and the positions of the characteristic points of the target person;
s1023: obtaining pose conversion parameters to be corrected by using the difference and the rigid pose parameters;
s1024: and carrying out error correction on the pose conversion parameters to be corrected to obtain the pose conversion parameters of the three-dimensional model.
The initialized pose of the three-dimensional model may take the form of the skinned skeleton model, i.e., it is represented by a 4 x 4 matrix. This 4 x 4 matrix serves as the initialized rigid pose of the three-dimensional model.
In addition, the initialization pose also includes the position of the three-dimensional vertex in each node, i.e., the initialization position of the three-dimensional vertex. In the present embodiment, the three-dimensional vertex of each node may be used as a feature point of the three-dimensional model.
Based on the positions of the feature points of the target person and the positions of the feature points of the three-dimensional model, the difference between the positions of the feature points of the three-dimensional model and the positions of the feature points of the target person can be calculated.
A transfer matrix is calculated from the computed difference and the rigid pose parameters. The rigid pose parameters are multiplied by the transfer matrix to obtain the target pose matrix.
The target pose matrix can be used as pose conversion parameters to be corrected.
The target pose matrix is then converted into bone driving coefficients (the bone driving coefficients can likewise be converted back into a pose matrix). The bone driving coefficients consist of 9 values: 3 translation values along the x, y and z axes, 3 Euler rotation angles, and 3 scaling values.
Once the bone driving coefficients are obtained, the initialized rigid pose of the three-dimensional model is adjusted with them to obtain the actual pose.
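A hedged sketch of converting a 4 x 4 pose matrix into the 9 bone driving coefficients; the XYZ Euler convention and a shear-free upper-left block are assumptions, since the text does not fix a convention:

```python
import numpy as np

def matrix_to_bone_coeffs(m):
    # Decompose a 4x4 target pose matrix into 3 translation values, 3 Euler
    # rotation angles, and 3 scale values. Assumes the upper-left 3x3 block is
    # rotation * scale with no shear (i.e., after the correction step).
    t = m[:3, 3].copy()
    scale = np.linalg.norm(m[:3, :3], axis=0)   # column norms give the scale
    r = m[:3, :3] / scale                       # pure rotation once scale is removed
    # Euler angles for R = Rz @ Ry @ Rx extracted from the rotation matrix.
    ry = np.arcsin(-r[2, 0])
    rx = np.arctan2(r[2, 1], r[2, 2])
    rz = np.arctan2(r[1, 0], r[0, 0])
    return t, np.array([rx, ry, rz]), scale
```

Rebuilding the matrix from the returned coefficients recovers the original only when the shear described below has already been removed, which is exactly why the correction step matters.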
The target pose matrix calculated by using the calculated difference and the rigid pose parameters is also a 4 x 4 matrix. The structure of the matrix is as follows:
    [ m00  m01  m02  Tx ]
    [ m10  m11  m12  Ty ]
    [ m20  m21  m22  Tz ]
    [  0    0    0    1 ]
The matrix contains rotation, displacement, scaling and shear information. Tx, Ty and Tz in the matrix represent the displacement along the three axes, while the rotation, scaling and shear information is coupled in the entries m00, m01, m02, m10, m11, m12, m20, m21 and m22.
As shown in fig. 5, the three lines indicated by solid arrows in the figure are the three axes of the coordinate system corresponding to the matrix. When the target pose matrix is converted into bone driving coefficients, the coefficients carry only rotation, scaling and displacement information. Because shear information is coupled into the matrix, the bone driving coefficients obtained by converting an uncorrected target pose matrix contain a large error, and this error is due to the shear information.
Referring to fig. 6, in a comparison test, the target pose matrix containing shear information (to be corrected) was converted into bone driving coefficients, and those coefficients were converted back into a target pose matrix, so that the difference between the two can be observed visually. The three solid-arrow lines in fig. 6 are the same as those in fig. 5: the three axes of the coordinate system corresponding to the target pose matrix. The three dotted-arrow lines in fig. 6 are the three axes of the coordinate system corresponding to the matrix obtained from this round-trip conversion. Comparing fig. 6 with fig. 5, although one axis can be made to coincide, the error in the other two axes is large.
In the current embodiment, the shear information can be removed from the target pose matrix by decoupling, thereby performing error correction on the pose conversion parameters to be corrected and obtaining the pose conversion parameters of the three-dimensional model.
With this scheme, correcting the pose conversion parameters to be corrected reduces the interference of shear information with the pose conversion, thereby reducing the error.
In one embodiment, step S1024 may specifically adopt the following method for error correction:
and carrying out singular value decomposition processing on the pose conversion parameters to be corrected to obtain a correction result.
By using a singular value decomposition algorithm, the pose conversion parameters (target pose matrix) to be corrected can be decomposed to obtain three decomposition matrices of U, sigma and V.
The corrected pose conversion parameters, i.e., the corrected target pose matrix MatTRS, may then be expressed as MatTRS = U × V^T, where V^T denotes the transpose of the factor matrix V.
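A minimal sketch of this SVD-based correction with NumPy; restricting the decomposition to the upper-left 3 x 3 block is an assumption, since the text does not state which entries are decomposed:

```python
import numpy as np

def remove_shear_svd(m):
    # Correct the pose matrix via singular value decomposition: replacing the
    # singular values with ones (i.e., MatTRS = U @ V^T) keeps the rotational
    # part and drops the shear/scale coupling in the 3x3 block.
    out = m.copy()
    u, _, vt = np.linalg.svd(m[:3, :3])
    out[:3, :3] = u @ vt
    return out
```

The corrected block is orthogonal, so its transpose equals its inverse, matching the criterion used by the alternative correction below.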
In another mode, the error correction may be performed as follows:
the pose conversion parameters are adjusted so that the difference between their inverse matrix and their transposed matrix falls within an allowable range.
The pose conversion parameters to be corrected (the target pose matrix) may be denoted M.
The transpose of M is computed and denoted M^T; the inverse of M is computed and denoted M^-1.
The pose conversion parameters to be corrected are adjusted so that M^T = M^-1. The adjustment may proceed as follows:
    M ← (M + (M^-1)^T) / 2
In this embodiment, an allowable difference range may be set: the adjustment ends when the error between the inverse matrix and the transposed matrix of the pose conversion parameters falls within this range. Alternatively, the number of iterations may be preset; for example, after N rounds of computation (N adjustments of the pose conversion parameters to be corrected), the error between the inverse matrix and the transposed matrix of the pose conversion parameters can be considered within tolerance, where N is a positive integer.
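The adjustment loop with both stopping rules can be sketched as follows; the averaging update M ← (M + (M^-1)^T) / 2 is a standard orthogonalization iteration, and treating it as the intended adjustment (along with the parameter names and defaults) is an assumption:

```python
import numpy as np

def orthogonalize_iterative(m, tol=1e-9, max_iter=20):
    # Adjust the 3x3 block until its transpose matches its inverse, stopping
    # either when the difference is within the tolerance or after a fixed
    # number of iterations N (max_iter), as described in the text.
    r = m[:3, :3].copy()
    for _ in range(max_iter):
        if np.allclose(r.T, np.linalg.inv(r), atol=tol):
            break
        r = 0.5 * (r + np.linalg.inv(r).T)
    out = m.copy()
    out[:3, :3] = r
    return out
```

For a nonsingular starting block this iteration converges quickly, so a small fixed N is usually enough in practice.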
As shown in fig. 7, the three solid-arrow lines in fig. 7 are the same as those in fig. 5: the three axes of the coordinate system corresponding to the target pose matrix. The three dotted-arrow lines in fig. 7 are the three axes of the coordinate system obtained by converting the corrected target pose matrix into bone driving coefficients and then converting those coefficients back into a target pose matrix. Comparing fig. 7 with fig. 6, the corrected pose conversion parameters (target pose matrix) are balanced across the three axes, and the error is reduced.
In one embodiment, step S102 may further include the following sub-steps:
determining, among the pose conversion parameters, the parameters that affect displacement, and clearing those parameters (setting them to zero); and/or
determining, among the pose conversion parameters, the parameters that affect scaling, and normalizing those parameters.
As mentioned above, the pose conversion parameters form a 4 × 4 matrix containing rotation, displacement, scaling and shear information. Tx, Ty and Tz in the matrix represent displacement, so Tx, Ty and Tz can be determined as the parameters that affect displacement.
In the matrix, the effect of the displacement on the subsequent calculation can be eliminated by setting Tx, Ty, Tz to 0.
In addition, based on the geometric interpretation of the matrix, the first three columns of the matrix can be regarded as three column vectors. By normalizing these three column vectors, the effect of scaling on subsequent calculations can be eliminated.
For example, the normalized result of the first column of parameters may be written as:
    m_i0 / sqrt(m00^2 + m10^2 + m20^2), for i = 0, 1, 2
similarly, the parameters in the second and third columns are normalized.
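A small sketch of clearing the displacement entries and normalizing the columns with NumPy (working on a copy of the matrix is an implementation choice):

```python
import numpy as np

def drop_translation_and_scale(m):
    # Zero the displacement entries Tx, Ty, Tz and normalize each of the
    # first three columns of the 3x3 block, so that later steps are not
    # affected by displacement or scale.
    out = m.copy()
    out[:3, 3] = 0.0                       # clear Tx, Ty, Tz
    norms = np.linalg.norm(out[:3, :3], axis=0)
    out[:3, :3] = out[:3, :3] / norms      # unit-length columns
    return out
```

After this step each of the first three columns has unit length and the translation entries are zero, so only rotation (and any residual shear) remains in the matrix.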
By the scheme, the influence of displacement and scaling on subsequent calculation can be eliminated, so that the calculation accuracy can be improved.
As shown in fig. 8, the present application provides a control apparatus of a three-dimensional model, which may include the following components:
a target person feature point position obtaining module 801, configured to obtain a position of a target person feature point;
a pose conversion parameter determining module 802, configured to determine a pose conversion parameter of the three-dimensional model by using the initialized pose of the three-dimensional model and the positions of the feature points of the target person;
and an actual pose generating module 803, configured to adjust the initialized pose by using the pose conversion parameter, so as to obtain an actual pose.
In one embodiment, the pose transformation parameter determination module 802 may further include:
a pose information acquisition submodule, configured to acquire the rigid pose parameters corresponding to the initialized pose and the positions of the feature points of the three-dimensional model;
a difference calculation submodule, configured to calculate the difference between the positions of the feature points of the three-dimensional model and the positions of the feature points of the target person;
a to-be-corrected pose conversion parameter generation submodule, configured to obtain the pose conversion parameters to be corrected using the difference and the rigid pose parameters;
and a pose conversion parameter correction submodule, configured to perform error correction on the pose conversion parameters to be corrected to obtain the pose conversion parameters of the three-dimensional model.
In one embodiment, the pose transformation parameter modification submodule is specifically configured to:
and carrying out singular value decomposition processing on the pose conversion parameters to be corrected to obtain a correction result.
In one embodiment, the pose transformation parameter modification sub-module may further include:
the displacement correction unit is used for determining a parameter influencing displacement in the pose conversion parameters to be corrected and clearing the parameter influencing displacement; and/or
And the scaling correction unit is used for determining parameters influencing scaling in the pose conversion parameters to be corrected and normalizing the parameters influencing scaling.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 shows a block diagram of an electronic device for the method of controlling a three-dimensional model according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 910, a memory 920, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 910 is illustrated in fig. 9.
The memory 920 is a non-transitory computer readable storage medium provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to execute the method for controlling a three-dimensional model provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the control method of a three-dimensional model provided by the present application.
The memory 920 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the control method of the three-dimensional model in the embodiment of the present application (for example, the target person feature point position acquisition module 801, the pose conversion parameter determination module 802, and the actual pose generation module 803 shown in fig. 8). The processor 910 executes various functional applications of the server and data processing, i.e., implements the control method of the three-dimensional model in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 920.
The memory 920 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the control method of the three-dimensional model, and the like. Further, the memory 920 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 920 may optionally include a memory remotely located from the processor 910, and these remote memories may be connected to the electronics of the control method of the three-dimensional model through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the control method of a three-dimensional model may further include: an input device 930 and an output device 940. The processor 910, the memory 920, the input device 930, and the output device 940 may be connected by a bus or other means, and fig. 9 illustrates an example of a connection by a bus.
The input device 930 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the control method of the three-dimensional model, such as an input device of a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 940 may include a display device, an auxiliary lighting device (e.g., an LED), a haptic feedback device (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability in traditional physical host and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (11)

1. A method of controlling a three-dimensional model, comprising:
acquiring positions of feature points of a target character;
determining pose conversion parameters of the three-dimensional model by using an initialized pose of the three-dimensional model and the positions of the feature points of the target character;
and adjusting the initialized pose by using the pose conversion parameters to obtain an actual pose.
2. The method of claim 1, wherein determining pose transition parameters for the three-dimensional model using the initialized pose of the three-dimensional model and the positions of the target character feature points comprises:
acquiring a rigid pose parameter corresponding to the initialized pose and the position of a feature point of the three-dimensional model;
calculating the difference between the positions of the feature points of the three-dimensional model and the positions of the feature points of the target character;
obtaining a pose conversion parameter to be corrected by using the difference and the rigid pose parameter;
and carrying out error correction on the pose conversion parameter to be corrected to obtain the pose conversion parameter of the three-dimensional model.
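Read together, the steps of claims 1 and 2 amount to fitting a transform from feature-point correspondences and applying it to the initialized pose. A minimal illustrative sketch, not the patent's exact algorithm: the least-squares fit, the homogeneous row-vector layout (translation in the last row), and all names here are assumptions.

```python
import numpy as np

def estimate_conversion(model_pts: np.ndarray, target_pts: np.ndarray) -> np.ndarray:
    """Fit a 4x4 transform T (pose conversion parameter to be corrected)
    so that, in homogeneous row-vector form, model_pts @ T ~= target_pts."""
    n = model_pts.shape[0]
    src = np.hstack([model_pts, np.ones((n, 1))])   # N x 4 homogeneous model points
    dst = np.hstack([target_pts, np.ones((n, 1))])  # N x 4 homogeneous target points
    # Least-squares solve driven by the difference between the two point sets
    T, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return T

# Toy usage: four model feature points shifted by a pure translation
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
target = model + np.array([0.5, 0.2, 0.0])
T = estimate_conversion(model, target)              # last row carries the translation
```

The recovered transform can then be applied to every vertex of the initialized pose to obtain the actual pose.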
3. The method according to claim 2, wherein the error correction of the pose conversion parameters to be corrected comprises:
and carrying out singular value decomposition processing on the pose conversion parameters to be corrected to obtain a correction result.
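The SVD-based correction of claim 3 can be read as projecting a noisy rotation block onto the nearest proper rotation matrix. A hedged sketch of that standard projection (the 3x3 layout and the name `svd_correct` are assumptions, not the patent's notation):

```python
import numpy as np

def svd_correct(R_noisy: np.ndarray) -> np.ndarray:
    """Replace a noisy 3x3 transform block by the closest rotation matrix,
    obtained by dropping the singular values from its SVD."""
    U, _, Vt = np.linalg.svd(R_noisy)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # flip one axis to avoid a reflection
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Toy usage: a z-axis rotation contaminated by a uniform scale of 1.1
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
R_fixed = svd_correct(1.1 * Rz)    # scale error removed, rotation preserved
```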
4. The method according to claim 2 or 3, wherein the error correction of the pose conversion parameters to be corrected comprises:
determining a parameter influencing displacement in the pose conversion parameters to be corrected, and zeroing the parameter influencing displacement; and/or
determining a parameter influencing scaling in the pose conversion parameters to be corrected, and normalizing the parameter influencing scaling.
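Claim 4's two corrections can be illustrated on a homogeneous transform: zero the entries that produce displacement, and rescale the rotation-scale block so each column has unit length. This is only a sketch under assumptions not stated in the claims (translation stored in the last column of a 4x4 matrix, per-column scale normalization):

```python
import numpy as np

def correct_translation_and_scale(T: np.ndarray) -> np.ndarray:
    """Zero the displacement terms of a 4x4 pose conversion parameter and
    normalize away any scaling carried by the 3x3 block's columns."""
    T = T.copy()
    T[:3, 3] = 0.0                                  # zero the displacement terms
    norms = np.linalg.norm(T[:3, :3], axis=0)
    T[:3, :3] = T[:3, :3] / norms                   # unit-length columns: scale becomes 1
    return T

# Toy usage: a transform with an unwanted uniform scale of 2 and a translation
T = np.eye(4)
T[:3, :3] = 2.0 * np.eye(3)
T[:3, 3] = [1.0, 2.0, 3.0]
T_fixed = correct_translation_and_scale(T)
```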
5. A control apparatus of a three-dimensional model, comprising:
the target character feature point position acquisition module is used for acquiring positions of feature points of a target character;
the pose conversion parameter determining module is used for determining pose conversion parameters of the three-dimensional model by utilizing the initialized pose of the three-dimensional model and the positions of the characteristic points of the target character;
and the actual pose generation module is used for adjusting the initialized pose by using the pose conversion parameters to obtain an actual pose.
6. The apparatus of claim 5, wherein the pose transformation parameter determination module comprises:
the pose information acquisition submodule is used for acquiring a rigid pose parameter corresponding to the initialized pose and the position of a feature point of the three-dimensional model;
the difference calculation submodule is used for calculating the difference between the positions of the feature points of the three-dimensional model and the positions of the feature points of the target character;
the pose conversion parameter generation submodule to be corrected is used for obtaining pose conversion parameters to be corrected by utilizing the difference and the rigid pose parameters;
and the pose conversion parameter correction submodule is used for carrying out error correction on the pose conversion parameters to be corrected to obtain the pose conversion parameters of the three-dimensional model.
7. The apparatus according to claim 6, wherein the pose transformation parameter modification submodule is specifically configured to:
and carrying out singular value decomposition processing on the pose conversion parameters to be corrected to obtain a correction result.
8. The apparatus according to claim 6 or 7, wherein the pose conversion parameter correction submodule comprises:
the displacement correction unit is used for determining a parameter influencing displacement in the pose conversion parameters to be corrected and zeroing the parameter influencing displacement; and/or
the scaling correction unit is used for determining a parameter influencing scaling in the pose conversion parameters to be corrected and normalizing the parameter influencing scaling.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 4.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 4.
11. A computer program product comprising computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 4.
CN202011485517.6A 2020-12-16 2020-12-16 Control method, device and equipment of three-dimensional model and storage medium Pending CN112562048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011485517.6A CN112562048A (en) 2020-12-16 2020-12-16 Control method, device and equipment of three-dimensional model and storage medium

Publications (1)

Publication Number Publication Date
CN112562048A true CN112562048A (en) 2021-03-26

Family

ID=75064140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011485517.6A Pending CN112562048A (en) 2020-12-16 2020-12-16 Control method, device and equipment of three-dimensional model and storage medium

Country Status (1)

Country Link
CN (1) CN112562048A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160088223A (en) * 2015-01-15 2016-07-25 삼성전자주식회사 Method and apparatus for pose correction on face image
US20160217318A1 (en) * 2013-08-29 2016-07-28 Nec Corporation Image processing device, image processing method, and program
US9648303B1 (en) * 2015-12-15 2017-05-09 Disney Enterprises, Inc. Systems and methods for facilitating three-dimensional reconstruction of scenes from videos
CN108170282A (en) * 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 For controlling the method and apparatus of three-dimensional scenic
KR20190069750A (en) * 2017-12-12 2019-06-20 왕한호 Enhancement of augmented reality using posit algorithm and 2d to 3d transform technique
CN110648361A (en) * 2019-09-06 2020-01-03 深圳市华汉伟业科技有限公司 Real-time pose estimation method and positioning and grabbing system of three-dimensional target object
CN111145339A (en) * 2019-12-25 2020-05-12 Oppo广东移动通信有限公司 Image processing method and device, equipment and storage medium
CN111639567A (en) * 2020-05-19 2020-09-08 广东小天才科技有限公司 Interactive display method of three-dimensional model, electronic equipment and storage medium
CN111783820A (en) * 2020-05-08 2020-10-16 北京沃东天骏信息技术有限公司 Image annotation method and device
CN111844130A (en) * 2020-06-22 2020-10-30 深圳市智流形机器人技术有限公司 Method and device for correcting pose of robot end tool
CN112562047A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Control method, device and equipment of three-dimensional model and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHANG Yang; SUN Xiaoliang; ZHANG Yueqiang; LI You; YU Qifeng: "Three-dimensional target pose tracking and model correction", Acta Geodaetica et Cartographica Sinica (测绘学报), no. 06, 15 June 2018 (2018-06-15), pages 113 - 122 *
ZHANG Huizhi; GAO Zhen; ZHOU Jian: "Pose measurement and error analysis of moving targets based on laser vision technology", Laser Journal (激光杂志), no. 04, 25 April 2020 (2020-04-25), pages 85 - 89 *
ZHAN Hongyan; ZHANG Lei; TAO Peiya: "Three-dimensional face reconstruction from a single image based on pose estimation", Microelectronics & Computer (微电子学与计算机), no. 09, 5 September 2015 (2015-09-05), pages 101 - 105 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562047A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Control method, device and equipment of three-dimensional model and storage medium
CN112562047B (en) * 2020-12-16 2024-01-19 北京百度网讯科技有限公司 Control method, device, equipment and storage medium for three-dimensional model
CN113610992A (en) * 2021-08-04 2021-11-05 北京百度网讯科技有限公司 Bone driving coefficient determining method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
JP7227292B2 (en) Virtual avatar generation method and device, electronic device, storage medium and computer program
US20210383605A1 (en) Driving method and apparatus of an avatar, device and medium
CN112509099B (en) Avatar driving method, apparatus, device and storage medium
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
KR20210103435A (en) Method and apparatus for synthesizing virtual object image, electronic device and storage medium
CN112785674A (en) Texture map generation method, rendering method, device, equipment and storage medium
CN112819971B (en) Method, device, equipment and medium for generating virtual image
JP2021108206A (en) Image adjustment method, apparatus, electronic device, storage medium and program
CN111968203B (en) Animation driving method, device, electronic equipment and storage medium
CN112862933B (en) Method, apparatus, device and storage medium for optimizing model
CN112614213A (en) Facial expression determination method, expression parameter determination model, medium and device
CN111291218B (en) Video fusion method, device, electronic equipment and readable storage medium
CN112562048A (en) Control method, device and equipment of three-dimensional model and storage medium
CN113409454B (en) Face image processing method and device, electronic equipment and storage medium
CN112184851B (en) Image editing method, network training method, related device and electronic equipment
CN112330805A (en) Face 3D model generation method, device and equipment and readable storage medium
CN112509098B (en) Animation image generation method and device and electronic equipment
CN111599002A (en) Method and apparatus for generating image
CN112562047B (en) Control method, device, equipment and storage medium for three-dimensional model
CN111523467A (en) Face tracking method and device
CN112562043B (en) Image processing method and device and electronic equipment
CN112465985A (en) Mesh model simplification method and device
CN111833391A (en) Method and device for estimating image depth information
JP7419226B2 (en) Image conversion method and device, image conversion model training method and device
CN114078184A (en) Data processing method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination