CN116758124A - 3D model correction method and terminal equipment

3D model correction method and terminal equipment

Info

Publication number
CN116758124A
Authority
CN
China
Prior art keywords
image
correction
target object
model
parameters
Prior art date
Legal status
Pending
Application number
CN202310720903.6A
Other languages
Chinese (zh)
Inventor
王晓见
Current Assignee
Beijing Codespace Technology Co ltd
Original Assignee
Beijing Codespace Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Codespace Technology Co ltd filed Critical Beijing Codespace Technology Co ltd
Priority to CN202310720903.6A
Publication of CN116758124A
Status: Pending

Classifications

    • G06T 7/344: Image analysis; determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving models (G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 19/20: Manipulating 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2207/30196: Indexing scheme for image analysis; subject of image: human being; person

Abstract

The application provides a 3D model correction method and terminal equipment. The method comprises the following steps: S10, acquiring a 3D model to be corrected of a target object; S20, acquiring a second 2D image of the target object; S30, setting photographing parameters for the 3D model to be corrected and virtually photographing the 3D model to be corrected to obtain a virtual image; S40, correcting the 3D model to be corrected and calculating at least one of the following correction parameters: position matching parameters of at least part of the key points of the target object in the virtual image and in the second 2D image; outer contour matching parameters of the target object in the virtual image and the second 2D image; the proportion of target object key points of the second 2D image located within the target object outer contour of the virtual image; S50, repeating steps S30-S40 until the number of repetitions reaches a set number or a convergence condition is triggered. The method can produce a final 3D model of the target object with high accuracy, avoiding a large difference from the real state of the target object.

Description

3D model correction method and terminal equipment
Technical Field
The application relates to the technical field of 3D model correction, in particular to a 3D model correction method and terminal equipment.
Background
Current 3D reconstruction and stereo modeling of the human body mainly follows three technical routes: the first is based on infrared technology, the second on laser radar (lidar) technology, and the third on computer vision. The computer-vision route in turn covers two cases: the first uses photographs taken with a depth camera, which carry depth information; the second uses photographs taken with an ordinary camera, which carry no depth information, and reconstructs the human body 3D model mainly through artificial intelligence pose capturing technology. Equipment based on infrared technology, lidar technology or industrial depth cameras is expensive, and most of it is large apparatus with a large footprint, so it is generally applied in high-end business (to-B) scenarios and can hardly enter households and daily life. Meanwhile, the human body 3D model obtained by 3D reconstruction from ordinary visual photographs is not accurate enough and differs considerably from the real state of the human body at the moment of photographing. The present application therefore provides a 3D model correction method and terminal equipment.
Disclosure of Invention
To solve the above problems, the present application provides a 3D model correction method and a terminal device.
In a first aspect, the present application provides a 3D model correction method, the method comprising the steps of:
s10, acquiring a 3D model to be corrected of a target object;
s20, acquiring a second 2D image of the target object, wherein the second 2D image can be one, two or more images with different angles;
s30, setting photographing parameters of the 3D model to be corrected, virtually photographing the 3D model to be corrected to obtain a virtual image, wherein the photographing parameters are obtained based on the second 2D image;
s40, correcting the 3D model to be corrected, and calculating at least one of the following correction parameters:
a. position matching parameters of at least part of the key points of the target object in the virtual image and in the second 2D image;
b. the target object outline matching parameters of the virtual image and the second 2D image;
c. the proportion of the target object key points of the second 2D image located within the target object outer contour of the virtual image;
S50, repeating steps S30-S40 until the number of repetitions reaches a set number or a convergence condition is triggered.
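For orientation only, the following minimal Python sketch traces one possible reading of the S10-S50 loop; the helper functions (correct_model, derive_shoot_params, virtual_photograph, compute_correction_params, converged) are hypothetical stand-ins for the operations detailed in the embodiments below, not functions provided by this application.

```python
def correct_3d_model(model, second_2d_images, max_iters=500):
    """Sketch of steps S10-S50: iterate until the set number of repetitions
    is reached or a convergence condition is triggered (hypothetical helpers)."""
    for _ in range(max_iters):                        # S50: bounded repetition
        model = correct_model(model)                  # S40: affine / pose transformation
        params = derive_shoot_params(second_2d_images)          # S30
        virtual = [virtual_photograph(model, p) for p in params]
        corr = compute_correction_params(virtual, second_2d_images)  # a / b / c
        if converged(corr):                           # thresholds or judgment parameter
            break
    return model
```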
According to the technical solutions provided in some embodiments of the present application, setting the photographing parameters of the 3D model to be corrected at least includes one of the following steps:
Setting photographing parameters based on photographing parameters of the second 2D image;
and iterating the photographing parameters so that the virtual image is matched with at least part of the key point positions of the second 2D image.
According to the technical solutions provided in some embodiments of the present application, the position matching parameters of at least part of the key points of the target object in the virtual image and the second 2D image are function values of a loss function, where the function value of the loss function is the distance between a first data set and a second data set calculated according to a set algorithm;
the first data set is: a set of location information of at least a portion of the keypoints of the target object in the virtual image;
the second data set is: a set of location information of at least part of the keypoints of the target object in the second 2D image;
the setting algorithm is at least one of the following algorithms:
the sum A of the distances between corresponding points in the first data set and the second data set;
the sum B of the squared distances between corresponding points in the first data set and the second data set;
the weighted sum C of the distances between corresponding points in the first data set and the second data set;
the weighted sum D of the squared distances between corresponding points in the first data set and the second data set.
According to some embodiments of the present application, the distance between the corresponding points in the first data set and the second data set is any one of the following:
euclidean distance, manhattan distance, chebyshev distance, minkowski distance, normalized euclidean distance, mahalanobis distance, angle cosine, hamming distance, correlation coefficient, correlation distance, information entropy.
According to some embodiments of the present application, the target object outline matching parameter of the virtual image and the second 2D image is an area proportion of the target object in the virtual image within the target object outline of the second 2D image.
According to some embodiments of the present application, the number of the second 2D images is one;
when at least two of the correction parameters are calculated in step S40, the convergence condition is: each correction parameter reaches a corresponding threshold value, or the judgment parameter reaches a set threshold value;
the judging parameter is a weighted sum of the correction parameters;
the weight of each of the correction parameters may be set to the same or different values in each correction step.
According to some embodiments of the present application, the number of the second 2D images is a plurality;
When at least two of the correction parameters are calculated in step S40, the convergence condition is: each comprehensive correction parameter respectively reaches a corresponding threshold value, or the judgment parameter reaches a set threshold value;
the comprehensive correction parameters are weighted sums of correction parameters corresponding to a plurality of second 2D images;
the judging parameter is a weighted sum of the comprehensive correction parameters;
the weight of each of the correction parameters or each of the comprehensive correction parameters may be set to the same or different values in each correction step.
According to some embodiments of the present application, the photographing parameter includes at least one of a focal length of the photographing apparatus and relative position information of the photographing apparatus and the target object.
According to the technical solutions provided in some embodiments of the present application, the 3D model to be corrected is corrected through affine transformation or pose transformation;
the affine transformation includes: translation transformation, rotation transformation, scaling transformation, shearing transformation, mirror transformation, and any combination of the above;
the pose transformation includes: basic shape transformation based on a linear matrix, local shape transformation, and skinning transformation formed by movement relative to the joints.
According to the technical scheme provided by some embodiments of the present application, the 3D model to be corrected is generated based on the first 2D image; the first 2D image and the second 2D image are acquired at the same time.
In a second aspect, the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the 3D model correction method described above.
Compared with the prior art, the application has the following beneficial effects. The 3D model correction method provided by the application iteratively corrects the 3D model to be corrected of the target object to obtain a more accurate 3D model: after each correction, photographing parameters are set based on a second 2D image of the target object, the 3D model to be corrected is virtually photographed to obtain a virtual image, the virtual image is compared with the second 2D image to compute correction parameters, the corrected 3D model is checked with those correction parameters, and whether correction needs to continue is decided according to the check result. The method can produce a final 3D model of the target object with high accuracy and avoid a large difference from the real state of the target object; moreover, the second 2D image used in the correction can be taken with a smartphone or an ordinary camera, so the cost is low, acceptable to ordinary households, and suitable for daily life.
Drawings
Fig. 1 is a schematic flowchart of the 3D model correction method according to embodiment 1 of the present application;
Fig. 2 is an imaging schematic diagram of a virtual photograph;
Fig. 3 is a schematic diagram of a human body 3D model and the skeletal nodes distributed on it;
Fig. 4 is a schematic diagram of a computer system of an electronic device according to the present application.
The text labels in the figures are expressed as:
400. a computer system; 401. a CPU; 402. a ROM; 403. a RAM; 404. a bus; 405. an I/O interface; 406. an input section; 407. an output section; 408. a storage section; 409. a communication section; 410. a driver; 411. removable media.
Detailed Description
In order that those skilled in the art may better understand the technical solutions of the present application, the following detailed description of the present application with reference to the accompanying drawings is provided for exemplary and explanatory purposes only and should not be construed as limiting the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Embodiment 1
The embodiment provides a 3D model correction method, fig. 1 is a flowchart of the 3D model correction method, and the method includes the following steps:
S10, acquiring a 3D model to be corrected of a target object;
the posture of the target object may be changed with time, such as a human body, an animal body, etc., and the posture of the target object may be unchanged with time, such as an artwork, etc., in this embodiment, the selected target object is a human body whose posture changes with time.
The 3D model to be corrected is a 3D model established using artificial intelligence technology; in this embodiment it specifically refers to a human body 3D model established, based on a first 2D image of the human body, using body shape and pose capturing technology, and the established human body 3D model includes the skeletal nodes of the human skeleton. The first 2D image is a photograph of the human body at a certain moment acquired by a smart mobile terminal (such as a mobile phone); it may be one, two or more photographs of the human body taken at the same moment from arbitrary viewing angles, such as side, front, back or other views.
it should be noted that, the 3D model to be corrected may be an unadjusted 3D model which is directly established by using an artificial intelligence technology based on the first 2D image, or may be a 3D model which is established based on the first 2D image and is subjected to certain adjustment.
S20, acquiring a second 2D image of the target object, wherein the second 2D image can be one, two or more images with different angles;
when the target object is an object whose posture does not change with time, the timings of acquiring the first 2D image and the second 2D image may be different; in the present embodiment, however, since the target object is an object whose posture changes with time, the first 2D image and the second 2D image are human body 2D images acquired at the same time; the same moment is in a time range in which the movement and deformation of the human body are negligible; the second 2D image is also a photograph of a human body acquired by an intelligent mobile terminal (such as a mobile phone), and the photograph can be one photograph, two photographs or a plurality of photographs of the human body at any visual angle at the same moment, each photograph records shooting parameters, and based on the shooting parameters, the shooting environment of the second 2D image can be simulated; the shooting parameters at least comprise one of the focal length of the shooting equipment and the relative position information of the shooting equipment and the target object; the relative position information of the shooting device and the target object comprises a distance between the shooting device and the target object, a shooting angle and the like, and the shooting angle refers to an azimuth angle of the shooting device for shooting the target object.
In this embodiment, only one second 2D image is acquired.
It should be noted that the second 2D image and the first 2D image may be the same photo or may be different photos, and in this embodiment, the second 2D image and the first 2D image are acquired at the same time but are different photos.
S30, setting photographing parameters of the 3D model to be corrected, virtually photographing the 3D model to be corrected to obtain a virtual image, wherein the photographing parameters are obtained based on the second 2D image;
preferably, setting the photographing parameters of the 3D model to be corrected at least includes one of the following steps:
setting photographing parameters based on photographing parameters of the second 2D image;
and iterating the photographing parameters so that the virtual image is matched with at least part of the key point positions of the second 2D image.
The recorded shooting parameters of the second 2D image fall into two cases:
The first is that the shooting parameters are relatively accurate and complete, i.e. they include the focal length of the shooting device, the distance between the shooting device and the target object, the shooting angle, and so on.
The second is that the shooting parameters are not accurate or complete enough; for example, they do not include the distance between the shooting device and the target object, or only give an approximate shooting angle (such as directly in front of the target object).
For the first case, the shooting parameters of the second 2D image may be adopted directly as the photographing parameters of the virtual photographing. For the second case, multiple fine adaptive adjustments can be made on the basis of the shooting parameters of the second 2D image, so that the virtual image obtained after virtual photographing matches at least part of the key points of the target object in the second 2D image. The number of selected key points can be determined according to actual requirements: the higher the accuracy requirement, the larger the number of selected key points, and the lower the accuracy requirement, the smaller the number. Generally, for the human body, four key points can be selected (such as two at the crotch and two at the shoulders). Key point matching is described in detail below and is therefore not repeated here.
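Purely as an illustration of the second case, the sketch below grid-searches small perturbations around the recorded shooting parameters and keeps the candidate whose virtual shot best matches the selected key points. The parameter fields (distance, azimuth) and the helper project_keypoints are assumptions made for this sketch, not names defined by the application.

```python
import itertools
import numpy as np

def refine_shoot_params(model, photo_keypoints, base_params,
                        deltas=(-0.05, 0.0, 0.05)):
    """Fine-tune recorded shooting parameters so the virtual image matches
    at least part of the key points (e.g. crotch and shoulders)."""
    best, best_err = base_params, np.inf
    for dd, da in itertools.product(deltas, deltas):
        cand = dict(base_params)
        cand["distance"] *= 1.0 + dd        # hypothetical field: camera distance
        cand["azimuth"] += da               # hypothetical field: shooting angle (rad)
        virt_kp = project_keypoints(model, cand)   # hypothetical projection helper
        err = np.linalg.norm(virt_kp - photo_keypoints, axis=1).sum()
        if err < best_err:
            best, best_err = cand, err
    return best
```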
Virtual photographing means simulating, on a computer, the imaging of the human body 3D model to be corrected on a physical imaging plane to obtain a virtual image; an imaging schematic of the virtual image is shown in Fig. 2.
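As a minimal sketch of such simulated imaging, the following projects model vertices with an ideal pinhole camera; the intrinsics and pose used here are assumptions standing in for the photographing parameters described above.

```python
import numpy as np

def virtual_photograph(vertices, focal_px, R, t, width, height):
    """Project (V, 3) model vertices onto a virtual image plane.
    R (3x3) and t (3,) place the model in the virtual camera frame;
    focal_px is the focal length expressed in pixels."""
    cam = vertices @ R.T + t                              # model -> camera coordinates
    x = focal_px * cam[:, 0] / cam[:, 2] + width / 2.0    # perspective projection
    y = focal_px * cam[:, 1] / cam[:, 2] + height / 2.0
    return np.stack([x, y], axis=1)                       # (V, 2) pixel coordinates
```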
S40, correcting the 3D model to be corrected, and calculating at least one of the following correction parameters:
a. position matching parameters of at least part of the key points of the target object in the virtual image and in the second 2D image;
b. the target object outer contour matching parameters of the virtual image and the second 2D image;
c. the proportion of the target object key points of the second 2D image located within the target object outer contour of the virtual image;
Wherein, the 3D model to be corrected is corrected through affine transformation or pose transformation;
the affine transformation includes: translation transformation, rotation transformation, scaling transformation, shearing transformation, mirror transformation, and any combination of the above. The pose transformation includes: basic shape transformation based on a linear matrix, local shape transformation, and skinning transformation formed by movement relative to the joints. The three kinds of pose transformation are explained below:
(1) Basic shape transformation
In actual operation, the basic shape of the target object, such as height and build, is represented by a vector; based on a learned parameter matrix, this vector undergoes a linear transformation to form a basic shape transformation matrix used to realize the basic shape transformation of the 3D model of the target object.
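A minimal sketch of such a linear shape transformation, in the spirit of SMPL-style shape blending; the array names and shapes are assumptions for illustration, not the learned matrices of this application.

```python
import numpy as np

def basic_shape_transform(template_vertices, shape_dirs, beta):
    """Linearly offset every vertex by a learned matrix applied to the
    basic-shape vector `beta` (e.g. encoding height and build).
    template_vertices: (V, 3); shape_dirs: (V, 3, K); beta: (K,)."""
    return template_vertices + shape_dirs @ beta   # (V, 3, K) @ (K,) -> (V, 3)
```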
(2) Local shape transformation
In actual operation, the local shape of the target object is represented by a vector; based on a learned parameter matrix, this vector undergoes a linear transformation to form a local shape transformation matrix used to realize the local shape transformation of the 3D model of the target object. Unlike the basic shape transformation described above, the basic shape transformation matrix transforms the global coordinate information of the 3D model of the target object, while the local shape transformation matrix transforms only local coordinate information of the 3D model.
(3) Skinning transformation relative to joint movement
When a skeletal node of the human body moves, the skin changes with the movement of the joints; where a skin vertex is associated with multiple joints, its final transformation equals the weighted sum of its transformations relative to each joint.
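This weighted sum is the classical linear blend skinning formulation; a minimal sketch follows, with array shapes assumed for illustration.

```python
import numpy as np

def linear_blend_skinning(vertices_h, joint_transforms, weights):
    """Skin each vertex as the weighted sum of its per-joint transforms.
    vertices_h: (V, 4) homogeneous rest-pose vertices;
    joint_transforms: (J, 4, 4); weights: (V, J) with rows summing to 1."""
    blended = np.einsum("vj,jab->vab", weights, joint_transforms)  # (V, 4, 4)
    skinned = np.einsum("vab,vb->va", blended, vertices_h)         # (V, 4)
    return skinned[:, :3]
```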
Next, three different kinds of correction parameters a, b, and c will be described in detail.
A: the correction parameter is the position matching parameter of at least part of the key points of the target object in the virtual image and the second 2D image. The key points are skeletal nodes of the human skeleton; Fig. 3 shows the skeletal nodes distributed on a certain human body 3D model, 24 skeletal nodes in total.
This correction parameter characterizes the position matching relationship between at least part of the key points of the human body in the virtual image and the corresponding key points of the human body in the second 2D image. In this embodiment, the position matching parameter of at least part of the key points of the target object in the virtual image and the second 2D image is the value of a loss function; that is, the correction parameter is the function value of the loss function.
The function value of the loss function is the distance between the first data set and the second data set calculated according to a set algorithm;
The first data set is: a set of location information of at least a portion of the keypoints of the target object in the virtual image;
the second data set is: and a set of positional information of at least part of the keypoints of the target object in the second 2D image.
In this embodiment, the first data set is a set of position information of at least part of the skeletal nodes of the human body in the virtual image. The number of skeletal-node positions included in the set can be determined according to actual requirements: the higher the accuracy requirement, the larger the number of selected key points, and the lower the accuracy requirement, the smaller the number; generally, for the human body, four key points can be selected. The second data set is a set of position information of at least part of the skeletal nodes of the human body in the second 2D image; the number of skeletal-node positions it contains is consistent with the number in the first data set, and the skeletal nodes contained in the two data sets correspond to each other. For example, if the first data set contains the position information of eight skeletal nodes on the two arms of the human body, the second data set should also contain the position information of those eight skeletal nodes.
Further, the setting algorithm is at least one of the following algorithms:
First kind: the sum A of the distances between corresponding points in the first data set and the second data set:

$A = \sum_{i=1}^{n} A_i$

where n is the number of corresponding key points in the first data set and the second data set, and $A_i$ is the distance between the i-th pair of corresponding key points.

Second kind: the sum B of the squared distances between corresponding points in the first data set and the second data set:

$B = \sum_{i=1}^{n} A_i^2$

Third kind: the weighted sum C of the distances between corresponding points in the first data set and the second data set:

$C = \sum_{i=1}^{n} a_i A_i$

where $a_i$ is the weight of the i-th pair of corresponding key points; the weights of the corresponding key points can be set to the same value or to different values.

Fourth kind: the weighted sum D of the squared distances between corresponding points in the first data set and the second data set:

$D = \sum_{i=1}^{n} a_i A_i^2$
preferably, the distance between each corresponding point in the first data set and the second data set is any one of the following:
euclidean distance, manhattan distance, chebyshev distance, minkowski distance, normalized euclidean distance, mahalanobis distance, angle cosine, hamming distance, correlation coefficient, correlation distance, information entropy.
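For concreteness, a sketch of the loss computation using Euclidean distance (any of the listed metrics could be substituted); illustrative only.

```python
import numpy as np

def keypoint_loss(kp_virtual, kp_photo, weights=None, squared=False):
    """Distance between the first data set (key points in the virtual image)
    and the second data set (key points in the second 2D image), here with
    Euclidean per-point distance. Covers the four set algorithms:
    A (plain sum), B (squared sum), C (weighted sum), D (weighted squared sum)."""
    d = np.linalg.norm(kp_virtual - kp_photo, axis=1)   # per-pair distance A_i
    if squared:
        d = d ** 2
    if weights is not None:
        d = np.asarray(weights) * d
    return float(d.sum())
```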
B: the correction parameter is the target object outer contour matching parameter of the virtual image and the second 2D image.
This correction parameter represents the matching relationship between the outer contour of the human body in the virtual image and the outer contour of the human body in the second 2D image. Because the second 2D image is obtained by directly photographing the human body, and the human body in it is wearing clothes, the outer contour of the human body in the second 2D image should theoretically be larger than the outer contour of the human body in the virtual image corresponding to the 3D model to be corrected. The optimal correction result is therefore that the outer contour of the human body in the virtual image lies completely within the outer contour of the human body in the second 2D image.
Preferably, the target object outer contour matching parameter of the virtual image and the second 2D image is the area proportion $R_1$ of the target object in the virtual image lying within the target object outer contour of the second 2D image:

$R_1 = S_1 / S_0$

where $S_1$ is the area of the target object in the virtual image lying within the target object outer contour of the second 2D image, and $S_0$ is the total area of the target object in the virtual image.
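A sketch of this ratio computed from two binary silhouette masks; the mask representation is an assumption made for illustration.

```python
import numpy as np

def contour_match_ratio(mask_virtual, mask_photo):
    """R1 = S1 / S0: fraction of the virtual silhouette's area that falls
    inside the photographed silhouette (both given as boolean masks)."""
    s0 = np.count_nonzero(mask_virtual)                 # total virtual area S0
    s1 = np.count_nonzero(mask_virtual & mask_photo)    # overlap area S1
    return s1 / s0 if s0 else 0.0
```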
C: the proportion of target object key points lying within the target object outer contour.

This correction parameter represents the position matching relationship between the key points of the human body in the virtual image and the outer contour of the human body in the second 2D image. Because the second 2D image is obtained by directly photographing the human body, and the human body in it is wearing clothes, the theoretically optimal correction result is that all key points of the human body in the virtual image lie within the outer contour of the human body in the second 2D image.

Specifically, this proportion $R_2$ can be expressed as:

$R_2 = N_1 / N_0$

where $N_1$ is the number of key points of the target object in the virtual image lying within the target object outer contour of the second 2D image, and $N_0$ is the total number of key points of the target object in the virtual image.
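A sketch of this proportion using a point-in-polygon test; matplotlib's Path is used here purely for illustration.

```python
import numpy as np
from matplotlib.path import Path

def keypoints_in_contour_ratio(keypoints, contour):
    """R2 = N1 / N0: fraction of (N0, 2) key points lying inside the outer
    contour, given as an (M, 2) closed polygon of 2D points."""
    inside = Path(contour).contains_points(keypoints)   # boolean per key point
    return float(inside.sum()) / len(keypoints)
```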
In this embodiment, only one of the three correction parameter types a, b and c is calculated for each correction of the 3D model to be corrected; since the number of second 2D images is one, one correction parameter value is obtained after each correction is completed.
S50, repeating steps S30-S40 until the number of repetitions reaches a set number or a convergence condition is triggered.
The set number of times is a manually set repetition count; for example, it can be set to 500. When the number of repetitions of steps S30-S40 reaches the set number but the convergence condition is still not satisfied, the repetition is not continued, and the 3D model after the last correction is taken as the final 3D model. "Reaching the set number" means that the current number of repetitions of steps S30-S40 equals the set number.
In this embodiment, the convergence condition is that the correction parameter calculated in step S40 reaches a corresponding threshold, and the value of the corresponding threshold is different according to the type of the correction parameter adopted, specifically as follows:
when the a-th correction parameter is calculated in the step S40, the corresponding threshold is a threshold of a matching parameter of at least part of the key points of the target objects of the virtual image and the second 2D image, the specific value of the threshold is different according to the different setting algorithms, and is also related to the actual requirement, the higher the requirement on the accuracy is, the smaller the value of the corresponding threshold is, and the lower the requirement on the accuracy is, the larger the value of the corresponding threshold is; for this kind of correction parameters, the corresponding trigger convergence condition means that the value of the correction parameter is less than or equal to the corresponding threshold value.
When the b-th correction parameter is calculated in the step S40, the corresponding threshold is a threshold of the target object outline matching parameters of the virtual image and the second 2D image, the specific value of the threshold is related to the actual requirement, the higher the requirement on the accuracy is, the larger the value of the corresponding threshold is, and the lower the requirement on the accuracy is, the smaller the value of the corresponding threshold is; for this kind of correction parameters, the corresponding trigger convergence condition means that the value of the correction parameter is greater than or equal to the corresponding threshold value.
When the c-th correction parameter is calculated in the step S40, the corresponding threshold is a proportional threshold of the target object key point of the second 2D image located in the target object outline of the virtual image, the specific value of the proportional threshold is related to the actual requirement, the higher the requirement on the accuracy is, the larger the value of the corresponding threshold is, and the lower the requirement on the accuracy is, the smaller the value of the corresponding threshold is; for this kind of correction parameters, the corresponding trigger convergence condition means that the value of the correction parameter is greater than or equal to the corresponding threshold value.
In step S40, a correction parameter is calculated after each correction of the 3D model to be corrected is finished. Once the correction parameter is obtained, it is judged whether it triggers the convergence condition, that is, whether its value reaches the corresponding threshold: if yes, steps S30-S40 are not repeated and the 3D model after the last correction is taken as the final 3D model; if not, steps S30-S40 continue to be repeated. In addition, while steps S30-S40 are repeated, the number of repetitions is recorded and updated each time a correction process is completed, and it is also judged whether the current count equals the set repetition number: if yes, steps S30-S40 are not repeated regardless of whether the correction parameter calculated after that correction reaches its threshold, and the 3D model after that correction is taken as the final 3D model.
The 3D model correction method provided in this embodiment can be used to iteratively correct the 3D model to be corrected of a target object to obtain a relatively accurate 3D model. After each correction, photographing parameters are set based on a second 2D image of the target object, the 3D model to be corrected is virtually photographed to obtain a virtual image, and the virtual image is compared with the second 2D image to compute a correction parameter, which is used to check the corrected 3D model; whether correction continues is decided according to the check result. The method can produce a final 3D model of the target object with high accuracy and avoid a large difference from the real state of the target object; moreover, the second 2D image used in the correction can be taken with a smartphone or an ordinary camera, so the cost is low, acceptable to ordinary households, and suitable for daily life.
Embodiment 2
The present embodiment provides a 3D model correction method; the points that are the same as in embodiment 1 are not described again. The difference is the following: in embodiment 1, only one of the three correction parameter types a, b and c is calculated for each correction of the 3D model to be corrected; in this embodiment, at least two of the three types a, b and c are calculated after each correction, that is, only a and b, a and c, b and c, or all three correction parameters a, b and c.
When at least two of the correction parameters are calculated in step S40, the convergence condition is: each correction parameter reaches a corresponding threshold value, or the judgment parameter reaches a set threshold value;
the judging parameter is a weighted sum of the correction parameters;
the weight of each of the correction parameters may be set to the same or different values in each correction step.
Specifically, since the number of second 2D images in this embodiment is one, one value of each kind of correction parameter is calculated after each correction is completed; that is, at least two correction parameters of different kinds are obtained, and each kind of correction parameter has its own corresponding threshold. In step S50, when judging whether the convergence condition is triggered, the convergence condition may be whether each correction parameter reaches its corresponding threshold.
Further, after each correction is completed and at least two kinds of correction parameters are calculated, the method further includes calculating a weighted sum of the correction parameters and obtaining a judgment parameter, and the convergence condition may further be that the judgment parameter reaches a set threshold.
In this embodiment, each time the correction of the 3D model to be corrected is completed, a judgment parameter can be calculated regardless of whether two or three types of correction parameters are calculated: when two types are calculated, the judgment parameter is the weighted sum of the two correction parameters; when three types are calculated, the judgment parameter is the weighted sum of the three. Depending on the types and number of correction parameters involved in computing the judgment parameter, the corresponding set thresholds can be the same or different and can be set according to actual needs.
When correction parameters of different types are weighted and summed, dimension unification processing is needed. This can be applied to the correction parameters themselves, i.e. preprocessing them before the weighted summation, or it can be embodied in the weight coefficients; if necessary, weight coefficients of different signs can be set.
When the correction parameters are weighted and summed, the weights of the correction parameters can be set to the same value or different values; in addition, in different correction steps, the weights of the same correction parameter may be set to the same value or may be set to different values.
In order that those skilled in the art can better understand the technical solution of the present embodiment, the following description will be given by way of example.
In the correction process of a certain human body 3D model, after a certain correction step, namely a certain repetition of steps S30-S40, only the type-a and type-b correction parameters are calculated, giving two correction parameter values, denoted below as $m_1$ and $m_2$. The convergence condition in step S50 may then be: the value $m_1$ of correction parameter a is less than or equal to its corresponding threshold $M_1$, and the value $m_2$ of correction parameter b is greater than or equal to its corresponding threshold $M_2$. When both conditions are met, the convergence condition is judged to be triggered, steps S30-S40 are not repeated, and the 3D model after this correction is taken as the final 3D model. When only one of the two values reaches its corresponding threshold, or neither does, steps S30-S40 are repeated, i.e. the next correction step is performed; after that correction, whether to end the correction may be determined using the same method as the previous step or a different one. In this example a different method is used: the type-a and type-b correction parameters are calculated after the correction, giving values $m_3$ and $m_4$, and then $m_3$ and $m_4$ are weighted and summed to compute the judgment parameter $m_5$. Because the dimensions of the two correction parameters differ (the dimension of $m_3$ is millimetres while that of $m_4$ is per cent), dimension unification is required; for example, it can be performed through the weight coefficients, adjusting the dimension of $m_3$ via its weight coefficient to match, and setting the weight coefficient corresponding to $m_4$ to a negative number. The convergence condition is then: the judgment parameter $m_5$ reaches a set threshold, specifically $m_5$ is less than or equal to the corresponding threshold $M_3$, where the specific value of $M_3$ can be set according to actual needs.
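A sketch of such a judgment parameter with dimension unification via the weight coefficients; the weight values are illustrative assumptions only.

```python
def judgment_parameter(m3_mm, m4_pct, w3=0.1, w4=-1.0):
    """m5 = w3*m3 + w4*m4: weighted sum of a key-point loss in millimetres
    and a contour-match ratio in per cent. w3 rescales the millimetre value
    toward the percentage scale; w4 is negative because a larger contour
    match is better, so m5 decreases as both parameters improve."""
    return w3 * m3_mm + w4 * m4_pct

# Convergence check against the set threshold M3 (illustrative):
# converged = judgment_parameter(m3, m4) <= M3
```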
According to the 3D model correction method provided by the embodiment, at least two correction parameters are calculated based on the virtual image and the target object in the second 2D image, the corrected 3D model is checked, and whether the correction is needed to be continued or not is determined according to the check result; the 3D model correction method can also obtain a final 3D model of the target object with high accuracy, and avoid large difference with the real state of the target object.
Embodiment 3
The present embodiment provides a 3D model correction method; the points that are the same as in embodiment 1 are not described again. The difference is the following: in embodiment 1, one second 2D image of the target object is acquired in step S20; in this embodiment, two or more second 2D images of the target object need to be acquired. Correspondingly, in step S30, the photographing parameters of the virtual photographing may be set based on the shooting parameters of the several second 2D images, and the 3D model to be corrected is virtually photographed to obtain several virtual images. Correspondingly, in step S40, only one kind of correction parameter is calculated each time the correction of the 3D model to be corrected is completed (i.e. only one of the three types a, b and c); several values of this same kind of correction parameter are obtained, corresponding to the several second 2D images and virtual images, and these values are weighted and summed to obtain a comprehensive correction parameter. The convergence condition is then: the comprehensive correction parameter reaches its corresponding threshold.
The detailed calculation of the correction parameter corresponding to each second 2D image and its corresponding virtual image is described in embodiment 1 and is not repeated here. In determining the comprehensive correction parameter, the weights of the correction parameters may be set to the same or different values in the respective correction steps.
Specifically, in step S40, several correction parameter values are calculated each time the correction of the 3D model to be corrected is completed, and a comprehensive correction parameter is calculated from them. It is then judged whether the comprehensive correction parameter triggers the convergence condition, that is, whether its value reaches the corresponding threshold: if yes, steps S30-S40 are not repeated and the 3D model after the last correction is taken as the final 3D model; if not, steps S30-S40 continue to be repeated. In addition, while steps S30-S40 are repeated, the number of repetitions is recorded and updated each time a correction process is completed, and it is also judged whether the current count equals the set repetition number: if yes, steps S30-S40 are not repeated regardless of whether the comprehensive correction parameter calculated after that correction reaches its threshold, and the 3D model after that correction is taken as the final 3D model.
The 3D model correction method provided in this embodiment can be used to iteratively correct the 3D model to be corrected of a target object to obtain a relatively accurate 3D model. After each correction, photographing parameters are set based on several second 2D images of the target object, the 3D model to be corrected is virtually photographed to obtain several virtual images, the virtual images are compared with the target object in the corresponding second 2D images to compute several correction parameters, and these are weighted and summed into a comprehensive correction parameter used to check the corrected 3D model; whether correction continues is decided according to the check result. The method can produce a final 3D model of the target object with high accuracy and avoid a large difference from the real state of the target object; moreover, the second 2D images used in the correction can be taken with a smartphone or an ordinary camera, so the cost is low, acceptable to ordinary households, and suitable for daily life.
Embodiment 4
The present embodiment provides a 3D model correction method; the points that are the same as in embodiment 3 are not described again. The difference is the following: in embodiment 3, each time the correction of the 3D model to be corrected is completed, only the several values of one kind of correction parameter, corresponding to the several second 2D images and virtual images, are calculated; in this embodiment, at least two kinds of correction parameters are calculated after each correction, that is, only a and b, a and c, b and c, or all three kinds a, b and c.
When at least two of the correction parameters are calculated in step S40, the convergence condition is: each comprehensive correction parameter respectively reaches a corresponding threshold value, or the judgment parameter reaches a set threshold value;
the comprehensive correction parameters are weighted sums of correction parameters corresponding to a plurality of second 2D images;
the judging parameter is a weighted sum of the comprehensive correction parameters;
the weight of each of the correction parameters or each of the comprehensive correction parameters may be set to the same or different values in each correction step.
Specifically, since the number of second 2D images in this embodiment is several, several values of each kind of correction parameter are obtained after each correction is completed, for at least two different kinds; the values of each kind are then weighted and summed to obtain the corresponding comprehensive correction parameter. In this embodiment, at least two comprehensive correction parameter values are therefore obtained, and each comprehensive correction parameter has its own corresponding threshold. In step S50, when judging whether the convergence condition is triggered, the convergence condition may be whether each comprehensive correction parameter reaches its corresponding threshold.
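A sketch of one comprehensive correction parameter as a weighted sum of the per-image values of a single kind; the uniform default weights are an illustrative assumption.

```python
import numpy as np

def comprehensive_parameter(per_image_values, weights=None):
    """Weighted sum of one kind of correction parameter over the several
    second 2D images / virtual images (defaults to a uniform average)."""
    v = np.asarray(per_image_values, dtype=float)
    w = np.full(len(v), 1.0 / len(v)) if weights is None else np.asarray(weights)
    return float(w @ v)
```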
Further, after each correction is completed and at least two kinds of comprehensive correction parameters are obtained through calculation, the method further comprises the step of calculating a weighted sum of all comprehensive correction parameters and obtaining a judgment parameter, and the convergence condition can further be that the judgment parameter reaches a set threshold value.
In this embodiment, each time the correction of the 3D model to be corrected is completed, a judgment parameter can be calculated regardless of whether two or three types of comprehensive correction parameters are calculated: when two types are calculated, the judgment parameter is the weighted sum of the two comprehensive correction parameters; when three types are calculated, the judgment parameter is the weighted sum of the three. Depending on the types and number of comprehensive correction parameters involved in computing the judgment parameter, the corresponding set thresholds can be the same or different and can be set according to actual needs.
When comprehensive correction parameters of different types are weighted and summed, dimension unification processing is required. This may be applied to the comprehensive correction parameters themselves, i.e. preprocessing them before the weighted summation, or it may be embodied in the weight coefficients; if necessary, weight coefficients of different signs can be set.
When the comprehensive correction parameters are weighted and summed, their weights can be set to the same value or to different values; in addition, in different correction steps, the weights of the same comprehensive correction parameter may be set to the same or different values.
In order that those skilled in the art can better understand the technical solution of the present embodiment, the following description will be given by way of example.
In the correction process of a certain human body 3D model, after a certain correction step, that is, a certain repetition of steps S30-S40, only the type-a and type-b correction parameters are calculated, giving two groups of correction parameter values, denoted below as the first group and the second group. The two groups contain equal numbers of correction parameters, and the number in each group equals the number of virtual shots. The weighted sum of the correction parameters within each group is calculated, giving two comprehensive correction parameters, denoted below as the first and the second comprehensive correction parameter. The convergence condition in step S50 may then be: the value of the first comprehensive correction parameter is less than or equal to its corresponding threshold $Z_1$, and the value of the second comprehensive correction parameter is greater than or equal to its corresponding threshold $Z_2$. When both conditions are met, the convergence condition is judged to be triggered, steps S30-S40 are not repeated, and the 3D model after this correction is taken as the final 3D model. When only one of the two comprehensive correction parameter values reaches its corresponding threshold, or neither does, steps S30-S40 are repeated, i.e. the next correction step is performed; after that correction, whether to end the correction may be determined using the same method as the previous step or a different one.
In this example a different method from the previous correction step is used. Specifically, the first and second comprehensive correction parameters after the correction are calculated, and the two are then weighted and summed to obtain the corresponding judgment parameter value. Because the dimensions of the two comprehensive correction parameters differ (the first is in millimetres, the second in per cent), dimension unification is required; for example, it can be performed through the weight coefficients, adjusting the dimension of the first comprehensive correction parameter via its weight coefficient to per cent as well, and setting the weight coefficient corresponding to the second comprehensive correction parameter to a negative number. The convergence condition is then: the judgment parameter reaches a set threshold, specifically is less than or equal to the set threshold, where the specific value can be set according to actual needs.
According to the 3D model correction method provided by the embodiment, at least two correction parameters are calculated based on the virtual image and the target object in the second 2D image, the corrected 3D model is checked, and whether the correction is needed to be continued or not is determined according to the check result; the 3D model correction method can also obtain a final 3D model of the target object with high accuracy, and avoid large difference with the real state of the target object.
Embodiment 5
This embodiment provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the 3D model correction method described in embodiment 1, 2, 3 or 4.
As shown in Fig. 4, the computer system 400 of the electronic device includes a CPU (central processing unit) 401 that can perform various appropriate actions and processes according to a program stored in a ROM (read-only memory) 402 or a program loaded from a storage section 408 into a RAM (random access memory) 403. The RAM 403 also stores various programs and data required for system operation. The CPU 401, ROM 402 and RAM 403 are connected to one another by a bus 404. An I/O (input/output) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a cathode ray tube or liquid crystal display, a speaker, and the like; a storage section 408 including a hard disk or the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. The drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 410 as needed, so that a computer program read from it can be installed into the storage section 408 as needed.
In particular, according to an embodiment of the present invention, the procedure of the 3D model correction method described above with reference to embodiment 1 may be implemented as a computer software program. For example, embodiment 1 of the present invention includes a computer program product comprising a computer program carried on a computer-readable storage medium, the computer program containing program code for performing the 3D model correction method described in embodiment 1. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 409 and/or installed from the removable medium 411. When executed by the CPU 401, the computer program performs the functions defined above in the computer system 400.
The principles and embodiments of the present application have been described herein with reference to specific examples; the description of the embodiments is intended only to facilitate understanding of the method of the present application and its core ideas. The foregoing is merely a description of preferred embodiments of the application. It is noted that numerous improvements and modifications can be made by those skilled in the art without departing from the principles of the application, and the above-described features can be combined in any suitable manner; such improvements, modifications and combinations, or the direct application of the concepts and solutions of the application to other scenarios without modification, are all regarded as falling within the scope of protection of the present application.

Claims (11)

1. A 3D model correction method, the method comprising the steps of:
s10, acquiring a 3D model to be corrected of a target object;
s20, acquiring a second 2D image of the target object, wherein the second 2D image can be one, two or more images with different angles;
s30, setting photographing parameters of the 3D model to be corrected, virtually photographing the 3D model to be corrected to obtain a virtual image, wherein the photographing parameters are obtained based on the second 2D image;
S40, correcting the 3D model to be corrected, and calculating at least one of the following correction parameters:
a. the virtual image and the target object of the second 2D image are at least partially key point position matched with parameters;
b. the target object outline matching parameters of the virtual image and the second 2D image;
c. the proportion of the target object key points of the second 2D image in the target object outline of the virtual image;
s50, repeating the steps S30-S40 until the repeated steps reach the set times or the convergence condition is triggered.
2. The method for 3D model correction according to claim 1, wherein
setting the photographing parameters of the 3D model to be corrected comprises at least one of the following:
setting the photographing parameters based on the photographing parameters of the second 2D image;
and iterating the photographing parameters so that the virtual image matches at least part of the keypoint positions of the second 2D image.
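As a non-authoritative sketch of the second option, the photographing parameters could be iterated with a derivative-free optimizer; render_keypoints is a hypothetical routine that projects the model's keypoints into 2D, and the parameter-vector layout is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def fit_photo_params(model, target_keypoints, x0):
    # x0 is an assumed layout, e.g. [focal_length, tx, ty, tz, rx, ry, rz]
    def residual(x):
        projected = render_keypoints(model, x)        # hypothetical projection
        return np.sum((projected - target_keypoints) ** 2)
    return minimize(residual, x0, method="Nelder-Mead").x  # iterated parameters
```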
3. The method for 3D model correction according to claim 1, wherein
the keypoint position matching parameters of the target object in the virtual image and the second 2D image are function values of a loss function, the function value being the distance between a first data set and a second data set calculated according to a set algorithm;
the first data set is: a set of position information of at least part of the keypoints of the target object in the virtual image;
the second data set is: a set of position information of at least part of the keypoints of the target object in the second 2D image;
the set algorithm is at least one of the following:
A. the sum of the distances between corresponding points of the first data set and the second data set;
B. the sum of the squared distances between corresponding points of the first data set and the second data set;
C. the weighted sum of the distances between corresponding points of the first data set and the second data set;
D. the weighted sum of the squared distances between corresponding points of the first data set and the second data set.
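The four set algorithms A-D admit a direct NumPy sketch (illustrative only; the (N, 2) keypoint arrays and the weight vector are assumptions about data layout):

```python
import numpy as np

def keypoint_losses(p, q, w=None):
    """p, q: (N, 2) corresponding keypoints (first and second data sets)."""
    d = np.linalg.norm(p - q, axis=1)        # per-point distance
    w = np.ones(len(d)) if w is None else w  # weights for algorithms C and D
    return {
        "A_sum": d.sum(),                          # sum of distances
        "B_sum_squares": (d ** 2).sum(),           # sum of squared distances
        "C_weighted_sum": (w * d).sum(),           # weighted sum of distances
        "D_weighted_squares": (w * d ** 2).sum(),  # weighted sum of squared distances
    }
```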
4. The method for 3D model correction according to claim 3, wherein
the distance between corresponding points of the first data set and the second data set is any one of the following:
Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, cosine of the included angle, Hamming distance, correlation coefficient, correlation distance, information entropy.
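A few of these distances, sketched in NumPy for a corresponding point pair u, v (illustrative only):

```python
import numpy as np

def euclidean(u, v):
    return np.linalg.norm(u - v)

def manhattan(u, v):
    return np.abs(u - v).sum()

def chebyshev(u, v):
    return np.abs(u - v).max()

def minkowski(u, v, p=3):
    return (np.abs(u - v) ** p).sum() ** (1.0 / p)

def angle_cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
```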
5. The method for 3D model correction according to claim 1, wherein
the outline matching parameter of the target object in the virtual image and the second 2D image is the proportion of the area of the target object in the virtual image that lies within the target object outline of the second 2D image.
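A minimal sketch of this parameter, assuming both silhouettes are available as boolean masks of identical shape (an assumption about the rendering pipeline, not something the claim specifies):

```python
import numpy as np

def outline_match(virtual_mask, image_mask):
    """Fraction of the virtual image's object area inside the 2D image's outline."""
    inside = np.logical_and(virtual_mask, image_mask).sum()
    return inside / max(int(virtual_mask.sum()), 1)  # guard against an empty mask
```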
6. The method for 3D model correction according to claim 1, wherein
the number of second 2D images is one;
when at least two of the correction parameters are calculated in step S40, the convergence condition is: each correction parameter reaches its corresponding threshold, or a judgment parameter reaches a set threshold;
the judgment parameter is a weighted sum of the correction parameters;
the weight of each correction parameter may be set to the same or different values in each correction step.
7. The method for 3D model correction according to claim 1, wherein
the number of second 2D images is more than one;
when at least two of the correction parameters are calculated in step S40, the convergence condition is: each comprehensive correction parameter reaches its corresponding threshold, or a judgment parameter reaches a set threshold;
each comprehensive correction parameter is a weighted sum of the correction parameters corresponding to the plurality of second 2D images;
the judgment parameter is a weighted sum of the comprehensive correction parameters;
the weight of each correction parameter or each comprehensive correction parameter may be set to the same or different values in each correction step.
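Claims 6 and 7 amount to two levels of weighted summation, sketched below; the array shapes are assumptions, and claim 6 is simply the single-image (single-row) case:

```python
import numpy as np

def judgment_parameter(correction_params, image_weights, param_weights):
    # correction_params: (n_images, n_param_types), one row per second 2D image
    comprehensive = image_weights @ correction_params  # claim 7: weighted over images
    return comprehensive @ param_weights               # weighted over parameter types

# e.g. two parameter types, one image (claim 6):
# judgment_parameter(np.array([[0.2, 0.4]]), np.array([1.0]), np.array([0.5, 0.5]))
```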
8. The method for 3D model correction according to claim 1, wherein
the photographing parameters include at least one of the focal length of the photographing device and the relative position information of the photographing device and the target object.
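For illustration, a virtual photograph under such parameters reduces to a pinhole projection; the rotation-matrix/translation-vector parameterization of the relative position is an assumption rather than the application's own formulation:

```python
import numpy as np

def project(points_3d, focal, R, t):
    """points_3d: (N, 3) model vertices; R: (3, 3) rotation; t: (3,) translation."""
    cam = points_3d @ R.T + t                # into the photographing device's frame
    return focal * cam[:, :2] / cam[:, 2:3]  # perspective divide by depth
```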
9. The method for 3D model correction according to claim 1, wherein
correcting the 3D model to be corrected is realized through affine transformation or pose transformation;
the affine transformation includes: translation, rotation, scaling, shearing, mirroring, and any combination of the above;
the pose transformation includes: skinning transformations formed from linear-matrix-based basic morphology transformations, local morphology transformations, and movements relative to the joints.
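A minimal sketch of one combined affine correction (a scaling, a rotation about the z-axis, and a translation, chosen here purely as an example combination) applied to the model's vertices:

```python
import numpy as np

def affine_correct(vertices, scale, angle_z, translation):
    """vertices: (N, 3) array; returns the transformed copy."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    Rz = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])          # rotation about the z-axis
    return (vertices * scale) @ Rz.T + translation
```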
10. The method for 3D model correction according to claim 1, wherein
the 3D model to be corrected is generated based on a first 2D image;
and the first 2D image and the second 2D image are acquired at the same time.
11. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method for 3D model correction according to any one of claims 1 to 10.
CN202310720903.6A 2023-06-16 2023-06-16 3D model correction method and terminal equipment Pending CN116758124A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310720903.6A CN116758124A (en) 2023-06-16 2023-06-16 3D model correction method and terminal equipment


Publications (1)

Publication Number Publication Date
CN116758124A true CN116758124A (en) 2023-09-15

Family

ID=87947355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310720903.6A Pending CN116758124A (en) 2023-06-16 2023-06-16 3D model correction method and terminal equipment

Country Status (1)

Country Link
CN (1) CN116758124A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143212A (en) * 2014-07-02 2014-11-12 惠州Tcl移动通信有限公司 Reality augmenting method and system based on wearable device
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
WO2021174939A1 (en) * 2020-03-03 2021-09-10 平安科技(深圳)有限公司 Facial image acquisition method and system
CN112016497A (en) * 2020-09-04 2020-12-01 王海 Single-view Taijiquan action analysis and assessment system based on artificial intelligence
CN112084453A (en) * 2020-09-22 2020-12-15 西京学院 Three-dimensional virtual display system, method, computer equipment, terminal and storage medium
KR20220074103A (en) * 2020-11-27 2022-06-03 주식회사 소프트그래피 Device and method of image registration for 3d models
CN112669436A (en) * 2020-12-25 2021-04-16 嘉兴恒创电力集团有限公司博创物资分公司 Deep learning sample generation method based on 3D point cloud
WO2023027712A1 (en) * 2021-08-26 2023-03-02 Innopeak Technology, Inc. Methods and systems for simultaneously reconstructing pose and parametric 3d human models in mobile devices

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHANG Yang et al., "3D Target Pose Tracking and Model Correction", Acta Geodaetica et Cartographica Sinica, vol. 47, no. 06 *
LIANG Yongwen, "Research on Virtual Tourism Display of Scenic Spots Based on Unity3D Technology", China New Telecommunications, vol. 25, no. 07 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination