CN114037814B - Data processing method, device, electronic equipment and medium - Google Patents


Info

Publication number
CN114037814B
CN114037814B
Authority
CN
China
Prior art keywords
data
model
subdata
determining
sub
Prior art date
Legal status
Active
Application number
CN202111336046.7A
Other languages
Chinese (zh)
Other versions
CN114037814A (en)
Inventor
刘豪杰
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111336046.7A
Publication of CN114037814A
Application granted
Publication of CN114037814B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a data processing method, apparatus, device, medium, and product, relating to the technical field of artificial intelligence, specifically to the technical field of augmented/virtual reality, computer vision, and image processing. The data processing method comprises the following steps: segmenting the model data of the object to be evaluated to obtain corresponding first model subdata; determining second model subdata corresponding to the first model subdata from the target object model data; determining transformation data between the first model subdata and the second model subdata; and determining the similarity between the model data of the object to be evaluated and the model data of the target object based on the transformation data.

Description

Data processing method, device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, in particular to the field of augmented/virtual reality, computer vision, and image processing technologies, and more particularly, to a data processing method, apparatus, electronic device, medium, and program product.
Background
In the related art, an object model, which is a kind of virtual 3D model, may generally be constructed based on an image. Object models include, for example, a user head model, a face model, and the like. Related techniques often struggle to accurately assess the quality of an object model, including, for example, the model's style, the model's aesthetics, and the like.
Disclosure of Invention
The present disclosure provides a data processing method, apparatus, electronic device, storage medium, and program product.
According to an aspect of the present disclosure, there is provided a data processing method including: carrying out segmentation processing on model data of an object to be evaluated to obtain corresponding first model subdata; determining second model subdata corresponding to the first model subdata from target object model data; determining transformation data between the first model subdata and the second model subdata; and determining the similarity between the object model data to be evaluated and the target object model data based on the transformation data.
According to another aspect of the present disclosure, there is provided a data processing apparatus including: the device comprises a processing module, a first determining module, a second determining module and a third determining module. The processing module is used for segmenting the model data of the object to be evaluated to obtain corresponding first model subdata; the first determining module is used for determining second model subdata corresponding to the first model subdata from target object model data; a second determining module, configured to determine transformation data between the first model sub-data and the second model sub-data; and the third determining module is used for determining the similarity between the object model data to be evaluated and the target object model data based on the transformation data.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data processing method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described data processing method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the data processing method described above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically illustrates an application scenario of a data processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of a data processing method according to an embodiment of the present disclosure;
FIG. 3 schematically shows an object model segmentation schematic according to an embodiment of the present disclosure;
FIG. 4 schematically shows a schematic diagram of data processing according to an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure; and
fig. 6 is a block diagram of an electronic device for performing data processing used to implement an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
An embodiment of the present disclosure provides a data processing method, including: segmenting the object model data to be evaluated to obtain corresponding first model sub-data. Then, second model sub-data corresponding to the first model sub-data is determined from the target object model data, and transformation data between the first model sub-data and the second model sub-data is determined. Next, based on the transformation data, the similarity between the object model data to be evaluated and the target object model data is determined.
Fig. 1 schematically illustrates an application scenario of a data processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, according to the application scenario 100 of the present disclosure, the object model data 110 to be evaluated is, for example, a 3D avatar, the object model data 110 to be evaluated may be input into the electronic device 120, and the electronic device 120 performs data processing on the object model data 110 to be evaluated.
Illustratively, the electronic device 120 includes, for example, a smartphone, a computer, or the like. The electronic device 120 has a data processing function.
For example, the electronic device 120 processes the object model data 110 to be evaluated to obtain an evaluation result 130 for the object model data 110 to be evaluated, where the evaluation result 130 represents, for example, the style, the beauty, and the like of the object model corresponding to the object model data 110 to be evaluated.
The data processing method according to the exemplary embodiment of the present disclosure is described below with reference to fig. 2 to fig. 4 in conjunction with the application scenario of fig. 1.
Fig. 2 schematically shows a flow chart of a data processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the data processing method 200 of the embodiment of the present disclosure may include, for example, operations S210 to S240.
In operation S210, the object model data to be evaluated is segmented to obtain corresponding first model sub-data.
In operation S220, second model sub-data corresponding to the first model sub-data is determined from the target object model data.
In operation S230, transformation data between the first model sub-data and the second model sub-data is determined.
In operation S240, a similarity between the object model data to be evaluated and the target object model data is determined based on the transformation data.
The target object model data is used, for example, for evaluating the object model data to be evaluated.
For example, when the object model corresponding to the target object model data is an object model of a certain style, whether the object model corresponding to the object model data to be evaluated is of that style is determined by comparing the object model data to be evaluated with the target object model data.
For example, when the object model corresponding to the target object model data is a standard beauty object model, by comparing the object model data to be evaluated and the target object model data, the beauty of the object model corresponding to the object model data to be evaluated is determined.
For example, the model data to be evaluated may be segmented to obtain a plurality of first model sub-data. For each first model sub-data, second model sub-data corresponding to the first model sub-data may be determined from the target object model data. Next, transformation data between the first model sub-data and the second model sub-data is determined, the transformation data characterizing, for example, a difference between the first model sub-data and the second model sub-data.
After the transformation data for each piece of first model sub-data is obtained, the similarity between the object model data to be evaluated and the target object model data may be determined based on the transformation data. A higher similarity indicates that the style of the object model data to be evaluated is closer to that of the target object model data, or that the object model data to be evaluated has a higher aesthetic degree.
According to the embodiment of the disclosure, a plurality of first model subdata is obtained by dividing the object model data to be evaluated, and then second model subdata corresponding to each first model subdata is determined from the target object model data serving as a reference. Next, transformation data between the first model sub-data and the second model sub-data is determined, and the degree of similarity between the object model data to be evaluated and the target object model data is evaluated based on the transformation data, so that the style or beauty of the object model data to be evaluated is evaluated based on the degree of similarity. Therefore, through the technical scheme of the embodiment of the disclosure, objective evaluation of the model data of the object to be evaluated is realized, the evaluation accuracy is high, the evaluation effect is good, manual evaluation is not needed, and the labor cost is reduced.
Fig. 3 schematically shows an object model segmentation schematic according to an embodiment of the present disclosure.
As shown in fig. 3, the object model data 310 to be evaluated includes, for example, head model data, and the head model data is segmented based on the features of the five sense organs to obtain a plurality of first model sub-data 320.
Illustratively, the plurality of first model sub-data 320 includes, for example: the model sub-data for the left eyebrow, the model sub-data for the right eyebrow, the model sub-data for the left eye, the model sub-data for the right eye, the model sub-data for the nose, the model sub-data for the mouth, the model sub-data for the cheek, the model sub-data for the forehead, and the model sub-data for the neck.
Illustratively, segmentation is based on the features of the five sense organs because the expressive power of an object model is mainly conveyed by the characteristics of these facial-feature regions. For example, a person's look is usually characterized as having large eyes, a tall and straight nose, an oval ("melon-seed") face, and so on, all of which are defined by the characteristics of the five sense organs. Therefore, segmenting the head model data based on the features of the five sense organs allows the segmented result to better exhibit the features of each part.
According to the embodiment of the disclosure, data segmentation is carried out according to the features of the five sense organs, so that the style or the attractiveness of the model data of the object to be evaluated is evaluated based on the features of the five sense organs, and the evaluation accuracy is improved.
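As a concrete illustration, the segmentation step above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the per-vertex region labels are a hypothetical input, since the disclosure does not specify how region membership of mesh points is represented.

```python
import numpy as np

# Regions named in the disclosure; the label encoding (one integer per vertex)
# is an assumption made for this sketch.
REGIONS = ["left_eyebrow", "right_eyebrow", "left_eye", "right_eye",
           "nose", "mouth", "cheek", "forehead", "neck"]

def segment_model(vertices: np.ndarray, region_labels: np.ndarray) -> dict:
    """Split an (n, 3) vertex array into per-region first model sub-data."""
    return {name: vertices[region_labels == i] for i, name in enumerate(REGIONS)}

# Toy head model: 90 vertices, 10 per region.
verts = np.random.rand(90, 3)
labels = np.repeat(np.arange(9), 10)
sub = segment_model(verts, labels)
assert len(sub) == 9 and sub["nose"].shape == (10, 3)
```

Each entry of `sub` then plays the role of one piece of first model sub-data in the subsequent operations.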
After the object model data to be evaluated is segmented into a plurality of first model sub-data, for each piece of first model sub-data, the corresponding second model sub-data is determined from the target object model data.
For example, the object model data to be evaluated includes a first topological relation, the target object model data includes a second topological relation, and the first topological relation and the second topological relation are associated. The topological relation represents, for example, a mesh structure of the 3D model, and the first topological relation and the second topological relation are associated to represent that the number and the connection relation of mesh points in the mesh structure corresponding to the object model data to be evaluated are consistent with those of mesh points in the mesh structure corresponding to the target object model data, but the positions of the mesh points corresponding to the object model data to be evaluated are not consistent with those of the mesh points corresponding to the target object model data.
For example, for each first model sub-data, a first feature point set corresponding to the first model sub-data is determined from the object model data to be evaluated. Then, based on the first topological relation and the second topological relation, a second feature point set corresponding to the first feature point set is determined from the target object model data, and the second feature point set is determined as second model subdata.
After the object model data to be evaluated is segmented according to the features of the five sense organs, the correspondence between each piece of first model sub-data and the whole object model data needs to be determined. The first topological relation about points, lines, and planes in the object model data to be evaluated is uniquely determined, but after segmentation the local topological relation of each piece of first model sub-data no longer matches the first topological relation of the whole model. It is therefore necessary to find which feature points (grid points) in the whole object model data correspond to the feature points of each piece of first model sub-data. Since the object model data is three-dimensional, the search can be performed with a K-nearest-neighbor method: for each feature point in a piece of first model sub-data, the closest feature point in the whole object model data is found, yielding the first feature point set.
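The nearest-neighbor lookup just described can be sketched as a brute-force search; for large meshes a KD-tree (e.g. `scipy.spatial.cKDTree`) would be the usual choice. The arrays and function name here are illustrative, not from the patent.

```python
import numpy as np

def nearest_indices(sub_points: np.ndarray, all_points: np.ndarray) -> np.ndarray:
    """For each point of a first-model sub-datum, return the index of the
    closest feature point (grid point) in the whole model."""
    # (k, n) matrix of pairwise Euclidean distances
    d = np.linalg.norm(sub_points[:, None, :] - all_points[None, :, :], axis=-1)
    return d.argmin(axis=1)

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])  # whole model
query = np.array([[0.9, 0.1, 0.0]])                          # one sub-datum point
assert nearest_indices(query, pts).tolist() == [1]
```

The indices obtained this way form the first feature point set; because the two models share an associated topology, the same indices select the second feature point set from the target object model data.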
Since the first topological relation corresponding to the model data of the object to be evaluated is consistent with the second topological relation corresponding to the model data of the target object, after the first feature point set aiming at the model data of the object to be evaluated is obtained, the second feature point set corresponding to the first feature point set can be determined from the model data of the target object based on the first topological relation and the second topological relation, and the second feature point set is determined as the second model subdata.
In the embodiment of the disclosure, after the model data of the object to be evaluated is segmented, the second model subdata corresponding to the first model subdata is searched based on the topological relation, so that the accuracy of data search is improved, and the accuracy of subsequent processing based on the first model subdata and the second model subdata is ensured.
After the second model sub-data corresponding to each piece of first model sub-data is determined, for each piece of first model sub-data and its corresponding second model sub-data, the rotation data and translation data by which the second model sub-data is obtained from the first model sub-data are determined. Then, the transformation data is determined based on the rotation data and the translation data.
Illustratively, the rotation data and the translation data represent, for example, a difference between the first model sub-data and the second model sub-data, such as representing that the second model sub-data is obtained after the first model sub-data is rotated and translated.
For example, an objective function is first constructed, the objective function being associated with, for example, first model sub-data, second model sub-data, rotation data, and translation data. The rotation data for example comprises a rotation matrix R and the translation data for example comprises a translation matrix t. The objective function is shown in equation (1).
E(R, t) = Σ_{i=1}^{n} || q_i - (R p_i + t) ||^2        (1)
Wherein the first model sub-data p = {p_1, p_2, p_3, ..., p_n} and the second model sub-data q = {q_1, q_2, q_3, ..., q_n}, where n is an integer greater than 1 denoting the number of feature points (grid points) in the first model sub-data and the second model sub-data. Each p_i and q_i is, for example, a three-dimensional coordinate value. The rotation matrix R has a dimension of, for example, m × m, and the translation matrix t has a dimension of, for example, m × 1, where m is, for example, an integer greater than 1; in one example, m is 3.
Next, the objective function is minimized based on a Singular Value Decomposition (SVD) algorithm to obtain the rotation matrix R and the translation matrix t, thereby obtaining the rotation data and the translation data.
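Objective (1) has a well-known closed-form SVD solution (the Kabsch / orthogonal Procrustes construction); the sketch below shows one way it might look, under the assumption that this standard solution matches the patent's intent. The function name is illustrative.

```python
import numpy as np

def rigid_fit(p: np.ndarray, q: np.ndarray):
    """Return (R, t) minimising sum_i || q_i - (R p_i + t) ||^2 for (n, 3) p, q."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 90-degree rotation about the z-axis plus a translation.
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([1., 2., 3.])
p = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
q = p @ R_true.T + t_true
R, t = rigid_fit(p, q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

Since the points are related by an exact rigid motion here, the minimum of (1) is zero and the true rotation and translation are recovered.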
After obtaining the rotation data and the translation data, a difference between the rotation data and the first reference data may be determined, and a product between the translation data and the second parameter data may be determined, and then transformation data may be obtained based on the difference and the product.
For example, the first reference data is an identity matrix and the second parameter data is a coefficient. The modulus of the difference between the rotation matrix and the identity matrix is calculated, and the modulus of the translation matrix is multiplied by the coefficient to obtain the product. The modulus of the difference is then added to the product, giving the transformation data for each piece of first model sub-data.
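The transformation-data computation just described can be sketched as follows, using the Frobenius norm as the modulus. The coefficient value 0.1 is purely illustrative; the patent only calls it "a coefficient".

```python
import numpy as np

def transform_degree(R: np.ndarray, t: np.ndarray, coeff: float = 0.1) -> float:
    """||R - I|| + coeff * ||t||: zero for the identity transform, growing with
    the amount of rotation and translation between the two sub-data."""
    return float(np.linalg.norm(R - np.eye(len(R))) + coeff * np.linalg.norm(t))

# Identity transform -> degree 0; a 90-degree z-rotation -> ||R - I||_F = 2.
R90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
assert transform_degree(np.eye(3), np.zeros(3)) == 0.0
assert abs(transform_degree(R90, np.zeros(3)) - 2.0) < 1e-12
```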
After the transformation data for each piece of first model sub-data is obtained, weighted average processing is performed on the plurality of transformation data corresponding one-to-one to the plurality of first model sub-data, yielding a weighted average value. Then, based on the weighted average, the similarity between the object model data to be evaluated and the target object model data is determined: the smaller the weighted average, the greater the similarity; the larger the weighted average, the smaller the similarity. In an example, the weights corresponding to the plurality of transformation data may be the same or different.
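The aggregation step can be sketched as below. The mapping from weighted average to a similarity score in (0, 1] is a hypothetical choice made for this sketch; the patent only states that a smaller weighted average means a greater similarity.

```python
import numpy as np

def similarity(degrees, weights=None) -> float:
    """Weighted average of per-part transformation degrees, mapped to a score
    that decreases monotonically as the average degree grows (assumed mapping)."""
    avg = np.average(np.asarray(degrees, dtype=float), weights=weights)
    return float(1.0 / (1.0 + avg))

# Uniform weights by default; a smaller average degree yields a higher score.
assert similarity([0.0, 0.0]) == 1.0
assert similarity([0.697814]) > similarity([0.738627])
```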
According to the embodiment of the disclosure, an objective function is constructed based on the first model subdata and the second model subdata, the objective function is solved to obtain a rotation matrix and a translation matrix, and then the similarity between the first model subdata and the second model subdata is obtained based on the rotation matrix and the translation matrix. Therefore, the style or the attractiveness of the model data of the object to be evaluated can be evaluated based on the similarity, the evaluation accuracy is high, the evaluation effect is good, manual evaluation is not needed, and the labor cost is reduced.
FIG. 4 schematically shows a schematic diagram of data processing according to an embodiment of the present disclosure.
As shown in fig. 4, for a plurality of user images 410 to 419, image processing, three-dimensional reconstruction, and the like are performed for each user image, and a plurality of object model data 420 to 429 to be evaluated, which correspond to the plurality of user images 410 to 419 one by one, are obtained.
In the embodiment of the present disclosure, the style or beauty of the object model corresponding to each of the plurality of object model data to be evaluated 420 to 429 needs to be evaluated, and therefore the target object model data 430 is used as an evaluation reference.
Transformation data between each piece of object model data to be evaluated and the target object model data 430 is determined; the transformation data is expressed, for example, as a transformation degree. The transformation degree represents how much the object model data to be evaluated must be transformed to obtain the target object model data 430, i.e., the distance between the object model data to be evaluated and the target object model data 430. The smaller the transformation degree, the greater the similarity between the object model data to be evaluated and the target object model data 430; the larger the transformation degree, the smaller the similarity. For example, the transformation degree corresponding to the object model data 420 to be evaluated is 0.697814 and that corresponding to the object model data 422 to be evaluated is 0.738627, so the similarity between the object model data 420 to be evaluated and the target object model data 430 is high, while the similarity between the object model data 422 to be evaluated and the target object model data 430 is low.
When the object model corresponding to the target object model data 430 belongs to a style, if the degree of transformation between the object model data to be evaluated and the target object model data 430 is smaller, it indicates that the style between the object model data to be evaluated and the target object model data 430 is closer. Styles include, for example, but are not limited to, cartoon styles, anthropomorphic styles, and realistic styles.
When the object model corresponding to the target object model data 430 is regarded as a standard beauty, if the transformation degree between the object model data to be evaluated and the target object model data 430 is smaller, the beauty degree of the object model data to be evaluated is larger, that is, the object model data to be evaluated is more beautiful.
Fig. 5 schematically shows a block diagram of a data processing device according to an embodiment of the present disclosure.
As shown in fig. 5, the data processing apparatus 500 of the embodiment of the present disclosure includes, for example, a processing module 510, a first determining module 520, a second determining module 530, and a third determining module 540.
The processing module 510 may be configured to perform segmentation processing on the model data of the object to be evaluated to obtain corresponding first model sub-data. According to the embodiment of the present disclosure, the processing module 510 may perform, for example, the operation S210 described above with reference to fig. 2, which is not described herein again.
The first determination module 520 may be configured to determine second model sub-data corresponding to the first model sub-data from the target object model data. According to the embodiment of the present disclosure, the first determining module 520 may perform, for example, operation S220 described above with reference to fig. 2, which is not described herein again.
The second determination module 530 may be used to determine transformation data between the first model subdata and the second model subdata. According to an embodiment of the present disclosure, the second determining module 530 may perform, for example, the operation S230 described above with reference to fig. 2, which is not described herein again.
The third determining module 540 may be configured to determine a similarity between the object model data to be evaluated and the target object model data based on the transformation data. According to an embodiment of the present disclosure, the third determining module 540 may, for example, perform operation S240 described above with reference to fig. 2, which is not described herein again.
According to an embodiment of the present disclosure, the second determining module 530 includes: a first determination submodule and a second determination submodule. The first determining submodule is used for determining the rotation data and the translation data of the second model subdata obtained by the first model subdata; a second determination submodule for determining transformation data based on the rotation data and the translation data.
According to an embodiment of the present disclosure, the first determination submodule includes: a construction unit and a first obtaining unit. The construction unit is used for constructing an objective function, wherein the objective function is associated with the first model sub-data, the second model sub-data, the rotation data, and the translation data; the first obtaining unit is used for obtaining the rotation data and the translation data when the objective function is minimized.
According to an embodiment of the present disclosure, the second determination submodule includes: the device comprises a first determining unit, a second determining unit and a second obtaining unit. A first determination unit for determining a difference between the rotation data and the first reference data; a second determination unit configured to determine a product between the translation data and the second parameter data; a second obtaining unit for obtaining the transformation data based on the difference and the product.
According to the embodiment of the disclosure, the object model data to be evaluated comprises a first topological relation, the target object model data comprises a second topological relation, and the first topological relation is associated with the second topological relation; wherein the first determining module 520 includes: a third determination submodule, a fourth determination submodule, and a fifth determination submodule. The third determining submodule is used for determining a first feature point set corresponding to the first model subdata from the model data of the object to be evaluated based on the first model subdata; the fourth determining submodule is used for determining a second feature point set corresponding to the first feature point set from the target object model data based on the first topological relation and the second topological relation; and the fifth determining submodule is used for determining the second feature point set as second model subdata.
According to an embodiment of the present disclosure, the first model sub-data includes a plurality of first model sub-data; the third determining module 540 includes: a processing sub-module and a sixth determining sub-module. The processing submodule is used for performing weighted average processing on a plurality of transformation data in one-to-one correspondence with the plurality of first model subdata to obtain a weighted average value; and the sixth determining submodule is used for determining the similarity between the model data of the object to be evaluated and the model data of the target object based on the weighted average value.
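The weighted-average step above can be sketched as follows. The final mapping from weighted average to a similarity score (here 1/(1+x), so smaller transformation magnitudes yield similarities closer to 1) is an illustrative assumption; the disclosure only requires that the similarity be determined based on the weighted average value:

```python
def similarity_from_parts(part_scores, weights):
    """Weighted average of per-part transformation data, mapped to a similarity.

    part_scores: scalar transformation data, one per first model sub-data
                 (e.g. one per facial feature).
    weights:     per-part weights (e.g. emphasizing eyes and mouth).
    """
    total_w = sum(weights)
    weighted_avg = sum(s * w for s, w in zip(part_scores, weights)) / total_w
    return 1.0 / (1.0 + weighted_avg)  # assumed monotone mapping to [0, 1]
```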
According to an embodiment of the present disclosure, the object model data to be evaluated includes head model data; the processing module 510 is further configured to: segmenting the head model data based on the features of the five sense organs to obtain corresponding first model subdata; wherein the first model subdata comprises at least one of: the model sub-data for the left eyebrow, the model sub-data for the right eyebrow, the model sub-data for the left eye, the model sub-data for the right eye, the model sub-data for the nose, the model sub-data for the mouth, the model sub-data for the cheek, the model sub-data for the forehead, and the model sub-data for the neck.
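The segmentation into per-feature sub-data (left eyebrow, right eyebrow, eyes, nose, mouth, cheek, forehead, neck) can be sketched with a per-vertex label array; the label array and label-based grouping are assumptions, since the disclosure does not specify how the five-sense-organ segmentation is performed:

```python
from collections import defaultdict

def segment_by_label(vertices, labels):
    """Split head-model vertices into per-feature first model sub-data.

    vertices: iterable of vertex coordinates of the head model data.
    labels:   one feature label per vertex (e.g. "nose", "mouth", "neck"),
              assumed to come from an upstream labeling step.
    """
    parts = defaultdict(list)
    for v, lab in zip(vertices, labels):
        parts[lab].append(v)
    return dict(parts)
```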
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 6 is a block diagram of an electronic device for performing the data processing method according to an embodiment of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. The electronic device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the respective methods and processes described above, such as the data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (10)

1. A method of data processing, comprising:
segmenting the model data of the object to be evaluated to obtain corresponding first model subdata;
determining second model subdata corresponding to the first model subdata from the target object model data;
determining transformation data between the first model subdata and the second model subdata; and
determining the similarity between the model data of the object to be evaluated and the model data of the target object based on the transformation data;
the object model data to be evaluated comprises a first topological relation, the target object model data comprises a second topological relation, and the first topological relation is associated with the second topological relation; the determining, from the target object model data, second model sub-data corresponding to the first model sub-data includes:
determining a first feature point set corresponding to the first model subdata from the model data of the object to be evaluated based on the first model subdata;
determining a second feature point set corresponding to the first feature point set from the target object model data based on the first topological relation and the second topological relation; and
determining the second feature point set as the second model subdata;
wherein the determining transformation data between the first model subdata and the second model subdata comprises:
determining rotation data and translation data for transforming the first model subdata into the second model subdata; and
determining the transformation data based on the rotation data and the translation data;
wherein the determining rotation data and translation data for transforming the first model subdata into the second model subdata comprises:
constructing an objective function, wherein the objective function is associated with the first model sub-data, the second model sub-data, the rotation data, and the translation data; and
obtaining the rotation data and the translation data with the objective function minimized.
2. The method of claim 1, wherein the determining the transformation data based on the rotation data and the translation data comprises:
determining a difference between the rotation data and first reference data;
determining a product between the translation data and second parameter data; and
obtaining the transformation data based on the difference and the product.
3. The method of claim 1 or 2, wherein the first model subdata includes a plurality of first model subdata; the determining the similarity between the object model data to be evaluated and the target object model data based on the transformation data comprises:
performing weighted average processing on a plurality of transformation data in one-to-one correspondence with the plurality of first model subdata to obtain a weighted average value; and
determining the similarity between the object model data to be evaluated and the target object model data based on the weighted average value.
4. The method according to claim 1, wherein the object model data to be evaluated includes head model data; the segmenting the model data of the object to be evaluated to obtain corresponding first model subdata comprises:
segmenting the head model data based on features of the five sense organs to obtain corresponding first model subdata;
wherein the first model subdata comprises at least one of:
the model sub-data for the left eyebrow, the model sub-data for the right eyebrow, the model sub-data for the left eye, the model sub-data for the right eye, the model sub-data for the nose, the model sub-data for the mouth, the model sub-data for the cheek, the model sub-data for the forehead, and the model sub-data for the neck.
5. A data processing apparatus comprising:
the processing module is used for carrying out segmentation processing on the model data of the object to be evaluated to obtain corresponding first model subdata;
the first determining module is used for determining second model subdata corresponding to the first model subdata from target object model data;
a second determining module, configured to determine transformation data between the first model sub-data and the second model sub-data; and
a third determining module, configured to determine, based on the transformation data, a similarity between the object model data to be evaluated and the target object model data;
the object model data to be evaluated comprises a first topological relation, the target object model data comprises a second topological relation, and the first topological relation is associated with the second topological relation; the first determining module includes:
a third determining submodule, configured to determine, based on the first model subdata, a first feature point set corresponding to the first model subdata from the model data of the object to be evaluated;
a fourth determining submodule, configured to determine, based on the first topological relation and the second topological relation, a second feature point set corresponding to the first feature point set from the target object model data; and
a fifth determining submodule, configured to determine the second feature point set as the second model subdata;
wherein the second determining module comprises:
the first determining submodule is used for determining rotation data and translation data for transforming the first model subdata into the second model subdata; and
a second determination submodule for determining the transformation data based on the rotation data and the translation data;
wherein the first determination submodule includes:
a construction unit configured to construct an objective function, wherein the objective function is associated with the first model sub-data, the second model sub-data, the rotation data, and the translation data; and
a first obtaining unit, configured to obtain the rotation data and the translation data when the objective function is minimized.
6. The apparatus of claim 5, wherein the second determination submodule comprises:
a first determination unit for determining a difference between the rotation data and first reference data;
a second determining unit for determining a product between the translation data and second parameter data; and
a second obtaining unit configured to obtain the transform data based on the difference and the product.
7. The apparatus of claim 5 or 6, wherein the first model subdata comprises a plurality of first model subdata; the third determining module includes:
the processing submodule is used for performing weighted average processing on a plurality of transformation data in one-to-one correspondence with the plurality of first model subdata to obtain a weighted average value; and
and the sixth determining submodule is used for determining the similarity between the object model data to be evaluated and the target object model data based on the weighted average value.
8. The apparatus according to claim 5, wherein the object model data to be evaluated includes head model data; the processing module is further configured to:
segmenting the head model data based on the features of the five sense organs to obtain corresponding first model subdata;
wherein the first model subdata comprises at least one of:
model sub-data for the left eyebrow, model sub-data for the right eyebrow, model sub-data for the left eye, model sub-data for the right eye, model sub-data for the nose, model sub-data for the mouth, model sub-data for the cheek, model sub-data for the forehead, model sub-data for the neck.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-4.
CN202111336046.7A 2021-11-11 2021-11-11 Data processing method, device, electronic equipment and medium Active CN114037814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111336046.7A CN114037814B (en) 2021-11-11 2021-11-11 Data processing method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111336046.7A CN114037814B (en) 2021-11-11 2021-11-11 Data processing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114037814A CN114037814A (en) 2022-02-11
CN114037814B true CN114037814B (en) 2022-12-23

Family

ID=80144178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111336046.7A Active CN114037814B (en) 2021-11-11 2021-11-11 Data processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114037814B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121969A (en) * 2017-12-22 2018-06-05 百度在线网络技术(北京)有限公司 For handling the method and apparatus of image
CN109636886A (en) * 2018-12-19 2019-04-16 网易(杭州)网络有限公司 Processing method, device, storage medium and the electronic device of image
WO2019079766A1 (en) * 2017-10-20 2019-04-25 Alibaba Group Holding Limited Data processing method, apparatus, system and storage media
CN111325851A (en) * 2020-02-28 2020-06-23 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112148907A (en) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 Image database updating method and device, electronic equipment and medium
CN112330756A (en) * 2021-01-04 2021-02-05 中智行科技有限公司 Camera calibration method and device, intelligent vehicle and storage medium
CN112837391A (en) * 2021-03-04 2021-05-25 北京柏惠维康科技有限公司 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium
CN113255484A (en) * 2021-05-12 2021-08-13 北京百度网讯科技有限公司 Video matching method, video processing device, electronic equipment and medium
CN113269719A (en) * 2021-04-16 2021-08-17 北京百度网讯科技有限公司 Model training method, image processing method, device, equipment and storage medium
CN113313053A (en) * 2021-06-15 2021-08-27 北京百度网讯科技有限公司 Image processing method, apparatus, device, medium, and program product
CN113327193A (en) * 2021-05-27 2021-08-31 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN113362231A (en) * 2021-07-23 2021-09-07 百果园技术(新加坡)有限公司 Interpolation method and device for key points of human face, computer equipment and storage medium
CN113378696A (en) * 2021-06-08 2021-09-10 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN113409454A (en) * 2021-07-14 2021-09-17 北京百度网讯科技有限公司 Face image processing method and device, electronic equipment and storage medium
WO2021196548A1 (en) * 2020-04-01 2021-10-07 北京迈格威科技有限公司 Distance determination method, apparatus and system
CN113569912A (en) * 2021-06-28 2021-10-29 北京百度网讯科技有限公司 Vehicle identification method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907550B (en) * 2021-03-01 2024-01-19 创新奇智(成都)科技有限公司 Building detection method and device, electronic equipment and storage medium
CN112862813B (en) * 2021-03-04 2021-11-05 北京柏惠维康科技有限公司 Mark point extraction method and device, electronic equipment and computer storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019079766A1 (en) * 2017-10-20 2019-04-25 Alibaba Group Holding Limited Data processing method, apparatus, system and storage media
CN108121969A (en) * 2017-12-22 2018-06-05 百度在线网络技术(北京)有限公司 For handling the method and apparatus of image
CN109636886A (en) * 2018-12-19 2019-04-16 网易(杭州)网络有限公司 Processing method, device, storage medium and the electronic device of image
CN111325851A (en) * 2020-02-28 2020-06-23 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
WO2021196548A1 (en) * 2020-04-01 2021-10-07 北京迈格威科技有限公司 Distance determination method, apparatus and system
CN112148907A (en) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 Image database updating method and device, electronic equipment and medium
CN112330756A (en) * 2021-01-04 2021-02-05 中智行科技有限公司 Camera calibration method and device, intelligent vehicle and storage medium
CN112837391A (en) * 2021-03-04 2021-05-25 北京柏惠维康科技有限公司 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium
CN113269719A (en) * 2021-04-16 2021-08-17 北京百度网讯科技有限公司 Model training method, image processing method, device, equipment and storage medium
CN113255484A (en) * 2021-05-12 2021-08-13 北京百度网讯科技有限公司 Video matching method, video processing device, electronic equipment and medium
CN113327193A (en) * 2021-05-27 2021-08-31 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN113378696A (en) * 2021-06-08 2021-09-10 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN113313053A (en) * 2021-06-15 2021-08-27 北京百度网讯科技有限公司 Image processing method, apparatus, device, medium, and program product
CN113569912A (en) * 2021-06-28 2021-10-29 北京百度网讯科技有限公司 Vehicle identification method and device, electronic equipment and storage medium
CN113409454A (en) * 2021-07-14 2021-09-17 北京百度网讯科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113362231A (en) * 2021-07-23 2021-09-07 百果园技术(新加坡)有限公司 Interpolation method and device for key points of human face, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Registration Methods for 3D Point Cloud Data; Chen Yang; China Masters' Theses Full-text Database, Information Science and Technology; 20150515 (No. 5); full text *
Research on Robust Spatial Pose Estimation Methods Based on Visual Feature Point Clouds; Xue Yanlong; China Masters' Theses Full-text Database, Information Science and Technology; 20200515 (No. 5); full text *

Also Published As

Publication number Publication date
CN114037814A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN114140603B (en) Training method of virtual image generation model and virtual image generation method
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
CN115147265B (en) Avatar generation method, apparatus, electronic device, and storage medium
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
US20230206578A1 (en) Method for generating virtual character, electronic device and storage medium
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN112581573A (en) Avatar driving method, apparatus, device, medium, and program product
CN115222879A (en) Model surface reduction processing method and device, electronic equipment and storage medium
CN114092673B (en) Image processing method and device, electronic equipment and storage medium
CN114708374A (en) Virtual image generation method and device, electronic equipment and storage medium
CN113052962B (en) Model training method, information output method, device, equipment and storage medium
CN114120413A (en) Model training method, image synthesis method, device, equipment and program product
CN114078184B (en) Data processing method, device, electronic equipment and medium
CN113177466A (en) Identity recognition method and device based on face image, electronic equipment and medium
CN113380269A (en) Video image generation method, apparatus, device, medium, and computer program product
CN115393488B (en) Method and device for driving virtual character expression, electronic equipment and storage medium
CN113781653B (en) Object model generation method and device, electronic equipment and storage medium
CN114037814B (en) Data processing method, device, electronic equipment and medium
CN115906987A (en) Deep learning model training method, virtual image driving method and device
CN113608615B (en) Object data processing method, processing device, electronic device, and storage medium
CN113610992B (en) Bone driving coefficient determining method and device, electronic equipment and readable storage medium
CN116206035B (en) Face reconstruction method, device, electronic equipment and storage medium
CN116229214B (en) Model training method and device and electronic equipment
CN116524165B (en) Migration method, migration device, migration equipment and migration storage medium for three-dimensional expression model
CN116229008B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant