CN115830245A - Model reconstruction method and device

Model reconstruction method and device

Info

Publication number
CN115830245A
CN115830245A (application CN202211730661.0A)
Authority
CN
China
Prior art keywords
detail
pixel
point
image
data point
Prior art date
Legal status
Pending
Application number
CN202211730661.0A
Other languages
Chinese (zh)
Inventor
范锡睿
赵亚飞
张世昌
陈毅
杜宗财
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd

Landscapes

  • Image Processing (AREA)

Abstract

The disclosure provides a model reconstruction method and a model reconstruction device, relates to the field of computers, and particularly relates to the field of data processing. The specific implementation scheme is as follows: acquiring a plurality of object images of a real object and a relative position relation of image acquisition equipment when each object image is acquired; generating an object point cloud of a real object according to each object image and the relative position relation; obtaining the image gradient of the detail pixel points; adjusting the position of a detail data point in the object point cloud according to the image gradient of the detail pixel point; and constructing an object model of the real object according to the adjusted object point cloud. By applying the model reconstruction scheme provided by the embodiment of the disclosure, the authenticity of the model can be improved.

Description

Model reconstruction method and device
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, in particular to the fields of digital human, metaverse, augmented reality, virtual reality, and mixed reality technologies, and specifically to a method and an apparatus for model reconstruction.
Background
Model reconstruction technology can be applied in various scenarios such as the metaverse, digital humans, AR (Augmented Reality), and VR (Virtual Reality). For example, in a digital human scenario, a virtual character such as a virtual customer service agent or a virtual anchor may be constructed using model reconstruction technology.
In the related technical scheme for reconstructing a model of a real object, images of the real object acquired from multiple angles are obtained together with the relative positional relationship of the image acquisition equipment when those images were acquired, and model reconstruction is carried out on the basis of the images and the relative positional relationship, with the goal of maximizing the overall similarity between the constructed model and the real object.
Disclosure of Invention
The disclosure provides a model reconstruction method and device.
According to an aspect of the present disclosure, there is provided a model reconstruction method including:
acquiring a plurality of object images of a real object and a relative position relation of image acquisition equipment when each object image is acquired, wherein the acquisition angles corresponding to the object images are different;
generating an object point cloud of the real object according to each object image and the relative position relation;
obtaining an image gradient of detail pixel points, wherein the detail pixel points are pixel points which represent detail information of the real object in the object image;
adjusting the position of a detail data point in the object point cloud according to the image gradient of the detail pixel point, wherein the detail data point is a data point corresponding to the detail pixel point;
and reconstructing an object model of the real object according to the adjusted object point cloud.
According to another aspect of the present disclosure, there is provided a model reconstruction apparatus including:
the information acquisition module is used for acquiring a plurality of object images of the real object and the relative position relation of the image acquisition equipment when acquiring each object image, wherein the acquisition angles corresponding to each object image are different;
the point cloud generating module is used for generating object point cloud of the real object according to each object image and the relative position relation;
a gradient obtaining module, configured to obtain an image gradient of a detail pixel, where the detail pixel is a pixel in the object image that represents detail information of the real object;
a position adjusting module, configured to adjust a position of a detail data point in the object point cloud according to an image gradient of the detail pixel point, where the detail data point is: a data point corresponding to the detail pixel point;
and the model reconstruction module is used for reconstructing an object model of the real object according to the adjusted object point cloud.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the model reconstruction method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described model reconstruction method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described model reconstruction method.
As can be seen from the above, when a model is constructed by applying the scheme provided by the embodiments of the present disclosure, the object point cloud is generated from the object images and the relative positional relationship, the image gradients of the detail pixel points in the object images are obtained, and the positions of the detail data points in the object point cloud are adjusted according to those gradients. The detail information of the object reflected by the data points in the adjusted object point cloud is therefore more accurate, so a model reconstructed from the adjusted object point cloud is more lifelike.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flowchart of a first model reconstruction method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a second model reconstruction method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a first model reconstruction apparatus provided in an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a second model reconstruction apparatus provided in the embodiment of the present disclosure;
FIG. 5 is a block diagram of an electronic device for implementing a model reconstruction method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Because the model reconstruction scheme in the related art reconstructs the model with the sole aim of maximizing the overall similarity between the constructed model and the real object, the constructed model often lacks the fine details of the real object. As a result, although the reconstruction targets maximum similarity, the similarity between the reconstructed model and the real object remains low, and the model looks unrealistic.
In order to solve the above problem, the embodiments of the present disclosure provide a model reconstruction method and apparatus. The following is a detailed description of specific examples.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first model reconstruction method according to an embodiment of the present disclosure, where the method includes the following steps S101 to S105 in this embodiment.
Step S101: the method comprises the steps of obtaining a plurality of object images of a real object and the relative position relationship of image acquisition equipment when each object image is acquired.
Wherein, the corresponding collection angles of each object image are different.
The real object can be a human, an animal, a building, or the like; it can also be a part of such an object, for example a human face or an animal body.
Specifically, the above-described object image and the relative positional relationship may be obtained by any of the following various implementations.
In a first implementation manner, a plurality of acquisition positions of the image acquisition device may be preset, a relative position relationship between the plurality of acquisition positions is obtained, and images of real objects acquired by the image acquisition device at the respective acquisition positions are obtained.
In a second implementation manner, when the image acquisition device acquires an image of a real object, the acquisition position of the image acquisition device may be recorded, the image acquired by the image acquisition device may be obtained, and the relative position relationship of the image acquisition device may be determined according to the recorded acquisition position.
In a third implementation manner, device parameters of the image acquisition device may be calibrated in advance; for example, the image acquisition device may be a camera and the device parameters may be the camera intrinsics. After the object images of the real object are obtained, image key points characterizing the image content are extracted from each object image, the key points of the different object images are matched to obtain matched key points representing the same image content, and the relative positional relationship of the image acquisition equipment when each object image was acquired can then be calculated, using existing techniques, from the positions of the matched key points and the device parameters.
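As a rough illustration of this third implementation, the sketch below recovers the relative pose between two views with OpenCV feature matching and essential-matrix decomposition. The ORB detector, the match count, and the calibration matrix K are assumptions for illustration, not details given by the disclosure.

```python
import cv2
import numpy as np

def relative_pose(img_a, img_b, K):
    # Detect and describe key points characterizing the image content.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors and keep the best correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:500]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the relative rotation and translation.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t, pts_a, pts_b  # pose plus the matched pixel coordinates
```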
Step S102: and generating an object point cloud of the real object according to the object images and the relative position relation.
Specifically, according to each object image and the relative position relationship, an object point cloud of a real object can be generated by using the existing point cloud generation technology, and details are not described here.
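For illustration only, a minimal two-view triangulation sketch follows; real point cloud generation pipelines (e.g. multi-view stereo) are considerably more involved. The projection-matrix setup simply reuses the pose and matches recovered in the previous sketch.

```python
import cv2
import numpy as np

def triangulate(K, R, t, pts_a, pts_b):
    # First camera at the origin, second camera at the recovered pose.
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t.reshape(3, 1)])
    # OpenCV expects 2xN pixel coordinates and returns 4xN homogeneous points.
    pts_h = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)
    return (pts_h[:3] / pts_h[3]).T  # (N, 3) object point cloud (up to scale)
```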
Step S103: and obtaining the image gradient of the detail pixel points.
The detail pixel points are pixel points which represent the detail information of the real object in the object image.
The real object has object details; in the object image, the detail pixel points representing the detail information of the real object are the pixel points corresponding to those object details. For example, the real object may be a face, whose details include wrinkles, pores, acne marks, skin lines, and the like, and the detail pixel points are the pixel points corresponding to such details.
Specifically, the image gradient of the detail pixel point can be obtained through any one of the following two implementation manners.
In a first implementation manner, the high-pass filtering processing may be performed on the object image to obtain detail pixel points in the object image and an image gradient of the detail pixel points.
The pixel values of the detail pixel points corresponding to object details differ markedly from those of the surrounding pixel points. For example, in a face image, the pixel values of the pixels on a wrinkle differ strongly from those of the surrounding skin. High-pass filtering extracts exactly this high-frequency information, that is, the edges in the image, and the edges are precisely the regions where pixel values change sharply. Applying high-pass filtering to the object image therefore accurately extracts the detail pixel points.
The high-pass filtering process for the object image can be implemented by using the existing filtering technology, and is not described in detail here. In addition, when the image is subjected to high-pass filtering, the image gradient of each pixel point in the image needs to be calculated, so that after the detail pixel points in the object image are obtained, the image gradient of the detail pixel points can be determined from the calculated image gradients of the pixel points.
In this implementation, high-pass filtering the object image yields the detail pixel points and their image gradients accurately, and using these accurate image gradients improves the accuracy of the subsequent processing.
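A minimal sketch of this first implementation follows, assuming a high-pass filter built as the image minus its Gaussian blur and Sobel operators for the gradients; the threshold and sigma values are illustrative, not values specified by the disclosure.

```python
import cv2
import numpy as np

def detail_pixels_and_gradients(image, threshold=20):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # High-pass response: the original minus its low-frequency content.
    high_pass = gray - cv2.GaussianBlur(gray, (0, 0), sigmaX=3)

    # Image gradients, which the filtering step already required.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)

    # Detail pixels are where the high-frequency response is strong.
    ys, xs = np.nonzero(np.abs(high_pass) > threshold)
    return np.stack([xs, ys], axis=1), gx[ys, xs], gy[ys, xs]
```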
In a second implementation manner, a detail detection model may be pre-trained to detect detail pixel points in an image and their image gradients; the object image is input into the detail detection model, and the positions and image gradients of the detail pixel points output by the model are obtained.
In addition, after the positions of the detail pixel points output by the model are obtained, each detail pixel point and the adjacent pixel points around it can be determined in the object image, and the image gradient, which represents the degree of change of the pixel value, can be calculated from the pixel values of the detail pixel point and its neighbors.
Step S104: and adjusting the positions of the detail data points in the object point cloud according to the image gradient of the detail pixel points.
Wherein the detail data points are: and data points corresponding to the detail pixel points.
Specifically, because the object point cloud is generated from the object images, a correspondence exists between the data points in the object point cloud and the pixel points in the object images. Once the image gradients of the detail pixel points are obtained, the detail pixel points in the object image are known, so the detail data points corresponding to them can be determined in the object point cloud from this correspondence, and the positions of these detail data points can be adjusted according to the image gradients of the detail pixel points. For example, the depth of the detail data points may be adjusted.
In an embodiment of the present disclosure, when the position of the detail data point is adjusted according to the image gradient of the detail pixel point, an adjustment amplitude for the detail data point may be determined according to the image gradient of the detail pixel point, and the position of the detail data point is then adjusted by the determined amplitude in a preset direction of the object point cloud.
The preset direction may be a depth direction, a height direction, and the like of the object point cloud.
The preset direction may be set manually.
In addition, the image gradient of a pixel point is a vector with both direction and magnitude: the magnitude represents the degree of change of the pixel value at that position, and the direction indicates whether the pixel value there changes from high to low or from low to high. Therefore, when the position of a detail data point is adjusted according to the image gradient, the preset direction can be determined from the direction of the image gradient of the detail pixel point.
The adjustment amplitude can be expressed as the distance between the positions of the data points before and after adjustment; the adjustment amplitude can also be represented by an adjustment angle formed by a first straight line and a second straight line, wherein the first straight line is a straight line where the data point and the origin of the object point cloud are located before adjustment, and the second straight line is a straight line where the data point and the origin are located after adjustment.
Specifically, when the adjustment amplitude is determined, the image gradient of the detail pixel point and the adjustment amplitude of the detail data point may be positively correlated: the larger the image gradient, the larger the adjustment amplitude; the smaller the image gradient, the smaller the adjustment amplitude. A proportionality coefficient may therefore be preset and multiplied by the magnitude of the image gradient, with the product used as the adjustment amplitude. Alternatively, a correspondence between image gradient ranges and adjustment amplitudes may be set, so that after the image gradient of a detail pixel point is obtained, the range it falls into is determined and the corresponding adjustment amplitude is looked up. After the adjustment amplitude is determined, the position of the detail data point can be adjusted by that amplitude in the preset direction.
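The sketch below illustrates the proportionality-coefficient variant under stated assumptions: the amplitude is the coefficient times the gradient magnitude, the preset direction is the depth axis of the point cloud, and the sign convention derived from the gradient direction is an assumption.

```python
import numpy as np

def adjust_by_gradient(points, grads, coeff=0.01, axis=2):
    """points: (N, 3) detail data points; grads: (N, 2) image gradients (gx, gy)."""
    magnitude = np.linalg.norm(grads, axis=1)  # gradient magnitude
    # Assumed convention: the gradient direction decides the shift sign
    # (pixel values changing high-to-low vs. low-to-high).
    direction = np.where(grads.sum(axis=1) >= 0, 1.0, -1.0)
    adjusted = points.copy()
    adjusted[:, axis] += coeff * magnitude * direction  # amplitude proportional to gradient
    return adjusted
```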
In this scheme, because the object details differ strongly from the other parts of the real object, they appear in the object image as detail pixel points with large image gradients. The adjustment amplitude for the detail data points can therefore be determined accurately from the image gradients of the detail pixel points, and the positions of the detail data points can be adjusted accurately in the preset direction.
In addition, after the above-mentioned detail data point is determined, other data points around the detail data point may be determined, and the other data points may also be used as the detail data points, so that when the position of the detail data point is adjusted according to the image gradient, the position of the other data points may also be adjusted. Specific implementations for adjusting the positions of other data points can be found in the following embodiments, and are not described in detail here.
Step S105: and reconstructing an object model of the real object according to the adjusted object point cloud.
Specifically, according to the adjusted object point cloud, an object model of the real object can be reconstructed by using the existing model reconstruction technology, and details are not described here.
For example, an object model of the real object may be reconstructed from the adjusted object point cloud using an existing surface reconstruction algorithm, such as Poisson reconstruction.
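As one concrete, assumed choice of such an algorithm, the sketch below runs Poisson surface reconstruction with the Open3D library; the disclosure does not name a specific library or algorithm, and the octree depth is illustrative.

```python
import open3d as o3d

def reconstruct_mesh(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)  # (N, 3) numpy array
    pcd.estimate_normals()  # Poisson reconstruction requires per-point normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh
```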
As can be seen from the above, when a model is constructed by applying the scheme provided by the embodiments of the present disclosure, the object point cloud is generated from the object images and the relative positional relationship, the image gradients of the detail pixel points in the object images are obtained, and the positions of the detail data points in the object point cloud are adjusted according to those gradients. The detail information of the object reflected by the data points in the adjusted object point cloud is therefore more accurate, so a model reconstructed from the adjusted object point cloud is more lifelike.
In addition, the model reconstruction scheme provided by the embodiments of the present disclosure places only modest requirements on the image acquisition equipment, so it can achieve high-precision model reconstruction at low cost.
In an embodiment of the present disclosure, after the positions of the detail data points in the object point cloud are adjusted and before the object model is reconstructed, smoothing may be performed on each of the detail data points, and then the object model is reconstructed according to the object point cloud after smoothing, so that the authenticity of the reconstructed object model can be further improved.
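The smoothing method is not specified by the disclosure; the sketch below assumes simple Laplacian smoothing of the detail data points over their k nearest neighbors, with hypothetical parameter values.

```python
import numpy as np
from scipy.spatial import cKDTree

def laplacian_smooth(points, detail_idx, k=8, lam=0.5):
    """points: (N, 3) point cloud; detail_idx: indices of the detail data points."""
    tree = cKDTree(points)
    smoothed = points.copy()
    for i in detail_idx:
        _, nbrs = tree.query(points[i], k=k + 1)  # the first neighbor is the point itself
        centroid = points[nbrs[1:]].mean(axis=0)
        # Blend each detail data point towards its neighborhood centroid.
        smoothed[i] = (1 - lam) * points[i] + lam * centroid
    return smoothed
```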
In an embodiment of the present disclosure, the real object is a human face, and the object image is a human face image.
In this case, an object point cloud is generated from each face image, the positions of the detail data points in the object point cloud are adjusted, and a face model can be reconstructed from the adjusted object point cloud of the face. Because the point cloud of the object used for reconstruction is adjusted, the detail information of the human face reflected by the data points in the point cloud is more accurate, and therefore, the human face model obtained by reconstruction is more vivid.
The following describes a specific implementation of adjusting the position of the detail data point and other data points around the detail data point.
In one embodiment of the present disclosure, when adjusting the position of the detail data point, a target data point corresponding to the detail pixel point may first be determined in the object point cloud; the data points whose distance to the target data point is smaller than a preset distance threshold are then determined in the object point cloud; both the target data point and these nearby data points are treated as detail data points; and the positions of the detail data points are adjusted according to the image gradient of the detail pixel point and the target distance.
Wherein the target distance is the distance between the detail data point and the target data point.
The distance threshold may be artificially set.
For the specific implementation of determining the target data point corresponding to the detail pixel point, refer to step S104 above; it is not repeated here.
Specifically, after the target data point is determined, a spherical range centered on the position of the target data point and having the distance threshold as its radius may be determined, and all data points within that spherical range are taken as detail data points. For each detail data point: if it is the target data point itself, the target distance is 0, and a target adjustment amplitude is determined from the image gradient of the detail pixel point; if it is not the target data point, its adjustment amplitude is determined from its target distance and the target adjustment amplitude, with a larger target distance giving a smaller adjustment amplitude, i.e. the farther a detail data point lies from the target data point, the less it is moved. After the adjustment amplitude of each detail data point has been determined, the position of each detail data point is adjusted accordingly.
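A sketch of this neighborhood adjustment follows. The linear falloff of the amplitude with the target distance is an assumption; the disclosure only requires the amplitude to shrink as the target distance grows.

```python
import numpy as np
from scipy.spatial import cKDTree

def adjust_neighborhood(points, target_idx, target_amp, radius, axis=2):
    tree = cKDTree(points)
    # All data points inside the spherical range are detail data points.
    nbrs = tree.query_ball_point(points[target_idx], r=radius)
    adjusted = points.copy()
    for i in nbrs:
        d = np.linalg.norm(points[i] - points[target_idx])  # target distance
        amp = target_amp * (1.0 - d / radius)  # amplitude decreases with distance
        adjusted[i, axis] += amp
    return adjusted
```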
As can be seen from the above, in the scheme provided by the embodiments of the present disclosure, both the target data point corresponding to the detail pixel point and the data points whose distance to it is smaller than the preset distance threshold are treated as detail data points, so the positions of multiple detail data points are adjusted. This makes the detail information of the object reflected by the adjusted object point cloud more accurate, and a model reconstructed from the adjusted object point cloud is therefore more lifelike.
In order to further improve the accuracy of model reconstruction, in an embodiment of the present disclosure, referring to fig. 2, a flowchart of a second model reconstruction method is provided, and in this embodiment, the method includes the following steps S201 to S207.
Step S201: the method comprises the steps of obtaining a plurality of object images of a real object and the relative position relationship of image acquisition equipment when each object image is acquired.
Step S202: and generating an object point cloud of the real object according to the object images and the relative position relation.
Step S203: and obtaining the image gradient of the detail pixel points.
Step S204: and adjusting the position of the detail data point in the object point cloud according to the image gradient of the detail pixel point.
The steps S201 to S204 are the same as the steps S101 to S104, respectively, and are not described again here.
Step S205: and determining, according to the adjusted position of the detail data point and the relative position relation, a mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image.
Specifically, according to the adjusted position of the detail data point and the above-mentioned relative position relationship, the existing mapping technology can be used to determine the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image, and details are not described here.
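For illustration, a pinhole-projection sketch of this mapping step follows; R and t describe the view pose derived from the relative positional relationship, K the camera intrinsics, and the point is assumed to project inside the image bounds.

```python
import numpy as np

def mapped_pixel_value(image, point_3d, K, R, t):
    cam = R @ point_3d + t.ravel()   # world coordinates -> camera coordinates
    uv = K @ (cam / cam[2])          # perspective projection onto the image plane
    u, v = int(round(uv[0])), int(round(uv[1]))
    return image[v, u]               # mapping pixel value at the projected pixel
```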
Step S206: and adjusting the position of the detail data point again according to the pixel value of the detail pixel point and the mapping pixel value, and returning to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image until the adjustment finishing condition is met.
The adjustment ending conditions may be various, and if the adjustment ending conditions are different, the execution flow of this step is also different.
In an embodiment of the present disclosure, the adjustment ending condition may be that the number of times of adjustment of the detail data point reaches a preset number.
In this case, the number of times the detail data points have been adjusted may be recorded. Each time the position of the detail data points is adjusted according to the pixel values of the detail pixel points and the mapping pixel values, the recorded number of adjustments is incremented by one and compared with the preset number: if the preset number has not been reached, the flow returns to the step of determining the mapping pixel values of the pixel points obtained by mapping the detail data points back to the object image; if it has, the following step S207 is executed.
In another embodiment of the present disclosure, the adjustment ending condition may be that a difference between a pixel value of the detail pixel and the re-determined mapped pixel value is smaller than a preset pixel value threshold.
In this case, after the position of the detail data point is adjusted and the mapping pixel value is obtained by re-mapping, the difference between the pixel value of the detail pixel point and the newly determined mapping pixel value may be calculated. If the difference is greater than or equal to the preset pixel value threshold, the position of the detail data point is adjusted again according to the pixel value of the detail pixel point and the newly determined mapping pixel value, and the flow returns to the step of determining the mapping pixel value; if the difference is smaller than the preset pixel value threshold, the subsequent step S207 is executed.
The adjustment ending condition may also be other conditions, and is not described in detail here.
In addition, specific implementation manners for adjusting the positions of the detail data points can be found in the following embodiments, and detailed descriptions thereof are omitted here.
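Putting steps S205 and S206 together, the sketch below shows the iterative loop under the pixel-difference ending condition. The `project` callable stands for the re-mapping of step S205 (the projection sketch above can be passed in), and the proportional update rule and its scale are assumptions anticipating the first implementation described later.

```python
import numpy as np

def refine_detail_point(point, pixel_value, image, K, R, t, project,
                        scale=0.05, threshold=2.0, max_iters=50, axis=2):
    point = np.asarray(point, dtype=float).copy()
    for _ in range(max_iters):
        mapped = float(project(image, point, K, R, t))  # step S205: re-map
        diff = pixel_value - mapped
        if abs(diff) < threshold:      # adjustment ending condition satisfied
            break
        point[axis] += scale * diff    # step S206: re-adjust from the difference
    return point
```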
Step S207: and reconstructing an object model of the real object according to the adjusted object point cloud.
This step is the same as step S105, and is not described herein again.
As can be seen from the above, when the scheme provided by the embodiments of the present disclosure is applied to model reconstruction, after the position of the detail data point is adjusted according to the image gradient, it can be adjusted again according to the pixel value of the detail pixel point and the mapping pixel value until the adjustment ending condition is satisfied. The adjusted detail data point therefore represents the detail information of the real object more accurately, so reconstructing the object model from the adjusted object point cloud improves both the accuracy of the reconstruction and the authenticity of the reconstructed object model.
Specific implementations for adjusting the position of the detail data points are described below.
In one embodiment of the present disclosure, the position of the detail data points may be adjusted by either of the following two implementations.
In a first implementation, the difference between the pixel value of the detail pixel point and the mapping pixel value may be calculated and multiplied by a preset adjustment scale coefficient; the product is used as the adjustment parameter for adjusting the position of the detail data point, and the position of the detail data point is adjusted according to this parameter.
For example, the adjustment scale coefficient may indicate that for each pixel value unit of difference between the pixel value of the detail pixel point and the mapping pixel value, the detail data point is moved m millimeters in the preset direction. If the pixel value difference is n, the adjustment parameter is then n × m millimeters, and the detail data point is moved n × m millimeters in the preset direction.
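A tiny numeric sketch of this rule, with illustrative numbers only: a scale of m = 0.1 mm per pixel-value unit and a difference of n = 5 units move the detail data point 0.5 mm in the preset direction.

```python
def adjustment_parameter(pixel_diff, mm_per_unit):
    return pixel_diff * mm_per_unit  # n * m millimetres

shift_mm = adjustment_parameter(5, 0.1)  # -> 0.5
```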
In a second implementation manner, a pixel value loss value between a pixel value of the detail pixel point and the mapped pixel value may be calculated, and the position of the detail data point may be adjusted according to the pixel value loss value.
The pixel value loss value can be calculated by using the existing loss function, model and the like, and is not described in detail here.
In this implementation, the pixel value loss between the pixel value of the detail pixel point and the mapping pixel value reflects the difference between the detail information of the real object represented by the detail pixel point and that represented by the detail data point. Adjusting the position of the detail data point according to this loss value drives the detail information represented by the adjusted detail data point towards that represented by the detail pixel point, so the adjusted object point cloud carries rich and accurate object details. Reconstructing the object model from the adjusted object point cloud therefore improves the accuracy of the reconstruction and the authenticity of the constructed model.
In an embodiment of the present disclosure, in the case of adjusting the position of the detail data point through the second implementation manner, the adjustment ending condition may be that the pixel value loss value is smaller than a preset loss value threshold.
The preset loss value threshold may be preset manually.
At this time, after the mapping pixel value is determined, the pixel value loss may first be calculated and compared with the preset loss value threshold. If it is not smaller than the threshold, the position of the detail data point is adjusted according to the loss value and the flow returns to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image; if it is smaller, step S207 is executed.
In this scheme, the pixel value loss reflects the difference between the detail information of the real object represented by the detail pixel points and that represented by the detail data points. If the loss value is smaller than the preset loss value threshold, the difference is small and the object point cloud already carries rich object details, so reconstructing the object model from the adjusted object point cloud improves the accuracy of the reconstruction and the authenticity of the constructed model.
Corresponding to the model reconstruction method, the embodiment of the disclosure also provides a model reconstruction device.
In an embodiment of the present disclosure, referring to fig. 3, a schematic structural diagram of a first model reconstruction apparatus is provided, in this embodiment, the apparatus includes:
an information obtaining module 301, configured to obtain multiple object images of a real object and a relative position relationship of an image acquisition device when acquiring each object image, where acquisition angles corresponding to each object image are different;
a point cloud generating module 302, configured to generate an object point cloud of the real object according to each object image and the relative position relationship;
a gradient obtaining module 303, configured to obtain an image gradient of a detail pixel, where the detail pixel is a pixel in the object image that represents detail information of the real object;
a position adjusting module 304, configured to adjust a position of a detail data point in the object point cloud according to an image gradient of the detail pixel point, where the detail data point is: a data point corresponding to the detail pixel point;
a model reconstructing module 305, configured to reconstruct an object model of the real object according to the adjusted object point cloud.
As can be seen from the above, when a model is constructed by applying the scheme provided by the embodiments of the present disclosure, the object point cloud is generated from the object images and the relative positional relationship, the image gradients of the detail pixel points in the object images are obtained, and the positions of the detail data points in the object point cloud are adjusted according to those gradients. The detail information of the object reflected by the data points in the adjusted object point cloud is therefore more accurate, so a model reconstructed from the adjusted object point cloud is more lifelike.
In addition, the model reconstruction scheme provided by the embodiments of the present disclosure places only modest requirements on the image acquisition equipment, so it can achieve high-precision model reconstruction at low cost.
In an embodiment of the present disclosure, referring to fig. 4, a schematic structural diagram of a second model reconstruction apparatus is provided, in this embodiment, the apparatus includes:
an information obtaining module 401, configured to obtain a plurality of object images of a real object and a relative position relationship of an image acquisition device when acquiring each object image, where acquisition angles corresponding to the object images are different;
a point cloud generating module 402, configured to generate an object point cloud of the real object according to each object image and the relative position relationship;
a gradient obtaining module 403, configured to obtain an image gradient of a detail pixel, where the detail pixel is a pixel in the object image that represents detail information of the real object;
a position adjusting module 404, configured to adjust a position of a detail data point in the object point cloud according to an image gradient of the detail pixel point, where the detail data point is: a data point corresponding to the detail pixel point;
a pixel mapping module 405, configured to determine, after the position of the detail data point in the object point cloud is adjusted according to the image gradient of the detail data point, a mapping pixel value of a pixel point obtained by mapping the detail data point back to the object image according to the adjusted position of the detail data point and the relative position relationship;
and an optimization adjustment module 406, configured to adjust the position of the detail data point again according to the pixel value of the detail pixel point and the mapping pixel value, and return to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image until an adjustment end condition is satisfied.
A model reconstructing module 407, configured to reconstruct an object model of the real object according to the adjusted object point cloud.
As can be seen from the above, when the scheme provided by the embodiments of the present disclosure is applied to model reconstruction, after the position of the detail data point is adjusted according to the image gradient, it can be adjusted again according to the pixel value of the detail pixel point and the mapping pixel value until the adjustment ending condition is satisfied. The adjusted detail data point therefore represents the detail information of the real object more accurately, so reconstructing the object model from the adjusted object point cloud improves both the accuracy of the reconstruction and the authenticity of the reconstructed object model.
In an embodiment of the present disclosure, the optimization adjusting module 406 is specifically configured to:
and calculating a pixel value loss value between the pixel value of the detail pixel point and the mapping pixel value, adjusting the position of the detail data point according to the pixel value loss value, and returning to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image until the adjustment ending condition is met.
According to this scheme, the pixel value loss between the pixel value of the detail pixel point and the mapping pixel value reflects the difference between the detail information of the real object represented by the detail pixel point and that represented by the detail data point. Adjusting the position of the detail data point according to this loss value drives the detail information represented by the adjusted detail data point towards that represented by the detail pixel point, so the adjusted object point cloud carries rich and accurate object details. Reconstructing the object model from the adjusted object point cloud therefore improves the accuracy of the reconstruction and the authenticity of the constructed model.
In an embodiment of the disclosure, the adjustment ending condition is that the pixel value loss value is smaller than a preset loss value threshold.
In this scheme, the pixel value loss reflects the difference between the detail information of the real object represented by the detail pixel points and that represented by the detail data points. If the loss value is smaller than the preset loss value threshold, the difference is small and the object point cloud already carries rich object details, so reconstructing the object model from the adjusted object point cloud improves the accuracy of the reconstruction and the authenticity of the constructed model.
In an embodiment of the present disclosure, the position adjusting module 304 is specifically configured to:
determining the adjustment amplitude for adjusting the detail data points in the object point cloud according to the image gradient of the detail pixel points;
and adjusting the position of the detail data point according to the determined adjustment amplitude in the preset direction of the object point cloud.
In this scheme, because the object details differ strongly from the other parts of the real object, they appear in the object image as detail pixel points with large image gradients. The adjustment amplitude for the detail data points can therefore be determined accurately from the image gradients of the detail pixel points, and the positions of the detail data points can be adjusted accurately in the preset direction.
In an embodiment of the present disclosure, the position adjusting module 304 is specifically configured to:
determining a target data point corresponding to the detail pixel point in the object point cloud;
determining the target data point and a data point with a distance smaller than a preset distance threshold value from the target data point as the detail data point;
and adjusting the position of the detail data point according to the image gradient of the detail pixel point and a target distance, wherein the target distance is the distance between the detail data point and the target data point.
As can be seen from the above, in the scheme provided by the embodiments of the present disclosure, both the target data point corresponding to the detail pixel point and the data points whose distance to it is smaller than the preset distance threshold are treated as detail data points, so the positions of multiple detail data points are adjusted. This makes the detail information of the object reflected by the adjusted object point cloud more accurate, and a model reconstructed from the adjusted object point cloud is therefore more lifelike.
In an embodiment of the present disclosure, the gradient obtaining module 303 is specifically configured to:
and carrying out high-pass filtering processing on the object image to obtain detail pixel points in the object image and the image gradient of the detail pixel points.
According to this scheme, high-pass filtering the object image yields the detail pixel points and their image gradients accurately, so using these accurate image gradients improves the accuracy of the subsequent processing.
In an embodiment of the present disclosure, the real object is a human face, and the object image is a human face image.
According to this scheme, an object point cloud is generated from the face images, the positions of the detail data points in the point cloud are adjusted, and a face model is reconstructed from the adjusted point cloud. Because the object point cloud used for reconstruction has been adjusted, the detail information of the face reflected by its data points is more accurate, and the reconstructed face model is therefore more lifelike.
It should be noted that the face model in this embodiment is not a face model for a specific user, and cannot reflect personal information of a specific user.
It should be noted that the object image in the present embodiment is from a public data set.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
In one embodiment of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the method embodiments described above.
In one embodiment of the present disclosure, a non-transitory computer readable storage medium is provided, having stored thereon computer instructions for causing a computer to perform any one of the model reconstruction methods of the preceding method embodiments.
In an embodiment of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements any of the model reconstruction methods of the preceding method embodiments.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501 which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The calculation unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 501 performs the respective methods and processes described above, such as the model reconstruction method. For example, in some embodiments, the model reconstruction method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the model reconstruction method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the model reconstruction method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (17)

1. A method of model reconstruction, comprising:
acquiring a plurality of object images of a real object and a relative position relation of image acquisition equipment when each object image is acquired, wherein the acquisition angles corresponding to the object images are different;
generating an object point cloud of the real object according to each object image and the relative position relation;
obtaining an image gradient of detail pixel points, wherein the detail pixel points are pixel points which represent detail information of the real object in the object image;
adjusting the position of a detail data point in the object point cloud according to the image gradient of the detail pixel point, wherein the detail data point is a data point corresponding to the detail pixel point;
and reconstructing an object model of the real object according to the adjusted object point cloud.
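For orientation, the flow of claim 1 can be outlined in a short Python sketch. This is a minimal illustrative outline, not the patented implementation: the helper names, the NumPy array shapes, and the grayscale-image assumption are all choices made for this sketch, the reconstruction and meshing steps are left as stubs, and the detail steps are fleshed out in the sketches following claims 4, 5, 6, and 7 below.

import numpy as np

def reconstruct_model(images, poses):
    # Illustrative outline of the claimed method; helper names are assumed.
    # Steps 1-2: build an object point cloud from the multi-angle object
    # images and the relative poses of the image acquisition device
    # (e.g., structure-from-motion followed by multi-view stereo).
    points = build_point_cloud(images, poses)          # (N, 3) data points

    for image, pose in zip(images, poses):
        # Step 3: locate detail pixels and obtain their image gradients
        # (see the sketch after claim 7 for one way to do this).
        detail_uv, grad_mag = detect_detail_pixels(image)

        # Step 4: adjust the positions of the corresponding detail data
        # points in the point cloud according to those gradients
        # (see the sketches after claims 5 and 6).
        points = adjust_detail_points(points, detail_uv, grad_mag, pose)

    # Step 5: reconstruct the object model from the adjusted point cloud.
    return build_mesh(points)

def build_point_cloud(images, poses):
    raise NotImplementedError  # placeholder: multi-view reconstruction

def detect_detail_pixels(image):
    raise NotImplementedError  # placeholder: see the claim 7 sketch

def adjust_detail_points(points, detail_uv, grad_mag, pose):
    raise NotImplementedError  # placeholder: see the claim 5 and 6 sketches

def build_mesh(points):
    raise NotImplementedError  # placeholder: e.g., Poisson surface meshing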
2. The method of claim 1, further comprising, after said adjusting the position of the detail data point in the object point cloud according to the image gradient of the detail pixel point:
determining a mapping pixel value of a pixel point obtained by mapping the detail data point back to the object image according to the adjusted position of the detail data point and the relative position relation;
and adjusting the position of the detail data point again according to the pixel value of the detail pixel point and the mapping pixel value, and returning to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image until the adjustment end condition is met.
3. The method of claim 2, wherein said adjusting the position of the detail data point again according to the pixel value of the detail pixel point and the mapping pixel value comprises:
calculating a pixel value loss value between the pixel value of the detail pixel point and the mapping pixel value;
adjusting the position of the detail data point according to the pixel value loss value.
4. The method of claim 3, wherein the adjustment end condition is that the pixel value loss value is less than a preset loss value threshold.
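Claims 2 through 4 describe an iterative refinement: the adjusted detail data points are mapped back into the object image, the re-sampled pixel values are compared with the original detail pixels, and the points are nudged until the pixel value loss falls below the threshold. The sketch below is one plausible reading, assuming a pinhole camera model, a grayscale image, and a crude update along the camera's viewing axis; none of these choices is prescribed by the claims.

import numpy as np

def project(points, K, R, t):
    # Pinhole projection of (M, 3) world points into (M, 2) pixel coordinates.
    cam = points @ R.T + t         # world frame -> camera frame
    uv = cam @ K.T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def refine_detail_points(points, detail_idx, image, detail_uv, K, R, t,
                         lr=0.05, loss_threshold=1.0, max_iter=100):
    # Iterative refinement in the spirit of claims 2-4 (illustrative only).
    # target: original pixel values of the detail pixels (grayscale assumed).
    target = image[detail_uv[:, 1], detail_uv[:, 0]].astype(np.float64)
    view_dir = R[2]                # camera viewing axis in world coordinates
    for _ in range(max_iter):
        # Map the detail data points back into the object image.
        uv = np.rint(project(points[detail_idx], K, R, t)).astype(int)
        uv[:, 0] = np.clip(uv[:, 0], 0, image.shape[1] - 1)
        uv[:, 1] = np.clip(uv[:, 1], 0, image.shape[0] - 1)
        mapped = image[uv[:, 1], uv[:, 0]].astype(np.float64)
        residual = target - mapped            # per-point pixel error
        loss = float(np.mean(residual ** 2))  # pixel value loss value
        if loss < loss_threshold:             # adjustment end condition
            break
        # Nudge each detail data point along the viewing axis in proportion
        # to its residual (a crude update rule chosen for illustration).
        points[detail_idx] += lr * residual[:, None] * view_dir
    return points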
5. The method of any of claims 1-4, wherein the adjusting the location of the detail data points in the object point cloud according to the image gradients of the detail pixel points comprises:
determining the adjustment amplitude for adjusting the detail data points in the object point cloud according to the image gradient of the detail pixel points;
and adjusting the position of the detail data point according to the determined adjustment amplitude in the preset direction of the object point cloud.
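As one concrete reading of claim 5, the adjustment amplitude can be taken proportional to the gradient magnitude of the detail pixel, with the move applied along a single preset direction. The proportionality constant `scale` and the caller-supplied unit direction are assumptions made for this sketch.

import numpy as np

def adjust_by_gradient(points, detail_idx, grad_mag, direction, scale=0.01):
    # Claim 5 sketch: larger image gradients produce larger adjustments.
    # points    : (N, 3) object point cloud
    # detail_idx: (M,) indices of the detail data points
    # grad_mag  : (M,) image gradient magnitudes of the detail pixels
    # direction : (3,) preset unit direction of the adjustment
    amplitude = scale * grad_mag                    # adjustment amplitude
    points[detail_idx] += amplitude[:, None] * direction
    return points

Tying the amplitude to the gradient magnitude means strong edges (e.g., wrinkles or seams on the real object) displace the point cloud more than smooth regions, which is what lets the reconstruction recover surface detail.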
6. The method of any of claims 1-4, wherein the adjusting the location of the detail data points in the object point cloud according to the image gradients of the detail pixel points comprises:
determining a target data point corresponding to the detail pixel point in the object point cloud;
determining, as the detail data points, the target data point and each data point whose distance from the target data point is smaller than a preset distance threshold;
and adjusting the position of the detail data point according to the image gradient of the detail pixel point and a target distance, wherein the target distance is the distance between the detail data point and the target data point.
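A minimal sketch of claim 6 follows: the detail data points are taken to be the target data point plus all data points within a preset radius of it, and each point's offset is attenuated by its target distance so that nearer points move more. The linear falloff and the numeric constants are illustrative assumptions, not claim requirements.

import numpy as np

def adjust_neighborhood(points, target_idx, grad_mag, direction,
                        radius=0.02, scale=0.01):
    # Claim 6 sketch for a single detail pixel and its target data point.
    # grad_mag is the (scalar) image gradient magnitude of that pixel.
    target = points[target_idx]                        # (3,)
    dist = np.linalg.norm(points - target, axis=1)     # target distances
    detail_mask = dist < radius                        # the detail data points
    falloff = 1.0 - dist[detail_mask] / radius         # nearer -> larger move
    points[detail_mask] += scale * grad_mag * falloff[:, None] * direction
    return points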
7. The method of any one of claims 1-4, wherein the obtaining image gradients of detail pixel points comprises:
and performing high-pass filtering on the object image to obtain the detail pixel points in the object image and the image gradients of the detail pixel points.
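Claim 7 does not prescribe a specific filter. The sketch below uses a Gaussian-residual high-pass to flag detail pixels and Sobel operators for the image gradient; both are common but assumed choices, and SciPy is an assumed dependency.

import numpy as np
from scipy import ndimage

def detect_detail_pixels(image, sigma=2.0, threshold=10.0):
    # Claim 7 sketch: high-pass filter the object image, keep pixels whose
    # high-frequency response is strong, and report their image gradients.
    img = image.astype(np.float64)                # grayscale image assumed
    low = ndimage.gaussian_filter(img, sigma)     # low-frequency content
    high = img - low                              # high-pass response
    detail_mask = np.abs(high) > threshold        # detail pixels

    gy = ndimage.sobel(img, axis=0)               # vertical gradient
    gx = ndimage.sobel(img, axis=1)               # horizontal gradient
    grad_mag = np.hypot(gx, gy)                   # gradient magnitude

    ys, xs = np.nonzero(detail_mask)
    return np.stack([xs, ys], axis=1), grad_mag[ys, xs]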
8. A model reconstruction apparatus comprising:
the information acquisition module is used for acquiring a plurality of object images of the real object and the relative position relation of the image acquisition equipment when acquiring each object image, wherein the acquisition angles corresponding to each object image are different;
the point cloud generating module is used for generating object point cloud of the real object according to each object image and the relative position relation;
a gradient obtaining module, configured to obtain an image gradient of a detail pixel, where the detail pixel is a pixel in the object image that represents detail information of the real object;
a position adjusting module, configured to adjust a position of a detail data point in the object point cloud according to an image gradient of the detail pixel point, wherein the detail data point is a data point corresponding to the detail pixel point;
and the model reconstruction module is used for reconstructing an object model of the real object according to the adjusted object point cloud.
9. The apparatus of claim 8, further comprising:
a pixel mapping module, configured to determine, after the position of the detail data point in the object point cloud is adjusted according to the image gradient of the detail pixel point, a mapping pixel value of a pixel point obtained by mapping the detail data point back to the object image according to the adjusted position of the detail data point and the relative position relation;
and the optimization adjusting module is used for adjusting the position of the detail data point again according to the pixel value of the detail pixel point and the mapping pixel value, and returning to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image until the adjustment end condition is met.
10. The apparatus of claim 9, wherein the optimization adjustment module is specifically configured to:
and calculating a pixel value loss value between the pixel value of the detail pixel point and the mapping pixel value, adjusting the position of the detail data point according to the pixel value loss value, and returning to the step of determining the mapping pixel value of the pixel point obtained by mapping the detail data point back to the object image until the adjustment end condition is met.
11. The apparatus of claim 10, wherein the adjustment end condition is that the pixel value loss value is less than a preset loss value threshold.
12. The apparatus according to any one of claims 8-11, wherein the position adjustment module is specifically configured to:
determining the adjustment amplitude for adjusting the detail data points in the object point cloud according to the image gradient of the detail pixel points;
and adjusting the position of the detail data point according to the determined adjustment amplitude in the preset direction of the object point cloud.
13. The apparatus according to any one of claims 8-11, wherein the position adjustment module is specifically configured to:
determining a target data point corresponding to the detail pixel point in the object point cloud;
determining, as the detail data points, the target data point and each data point whose distance from the target data point is smaller than a preset distance threshold;
and adjusting the position of the detail data point according to the image gradient of the detail pixel point and a target distance, wherein the target distance is the distance between the detail data point and the target data point.
14. The apparatus according to any one of claims 8-11, wherein the gradient obtaining module is specifically configured to:
and performing high-pass filtering on the object image to obtain the detail pixel points in the object image and the image gradients of the detail pixel points.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Priority Applications (1)

Application Number: CN202211730661.0A
Priority Date: 2022-12-30
Filing Date: 2022-12-30
Title: Model reconstruction method and device

Publications (1)

Publication Number: CN115830245A
Publication Date: 2023-03-21

Family ID: 85519737

Family Applications (1)

Application Number: CN202211730661.0A (status: Pending)
Publication: CN115830245A (en)
Title: Model reconstruction method and device

Country Status (1)

Country: CN
Publication: CN115830245A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination