WO2021139293A1 - Texture acquisition method for a three-dimensional model and related apparatus - Google Patents

Texture acquisition method for a three-dimensional model and related apparatus

Info

Publication number
WO2021139293A1
WO2021139293A1 (PCT/CN2020/120797)
Authority
WO
WIPO (PCT)
Prior art keywords
point
dimensional
network
offset
points
Prior art date
Application number
PCT/CN2020/120797
Other languages
English (en)
French (fr)
Inventor
林祥凯
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2022516349A priority Critical patent/JP7446414B2/ja
Priority to EP20912594.7A priority patent/EP3996042A4/en
Publication of WO2021139293A1 publication Critical patent/WO2021139293A1/zh
Priority to US17/579,072 priority patent/US11989894B2/en
Priority to US18/609,992 priority patent/US20240221193A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • This application relates to the field of electronic technology, and more specifically, to the texture acquisition of a three-dimensional model.
  • Face reconstruction technology is a technology that reconstructs a 3D face model from one or more 2D face images.
  • the user takes multiple 2D pictures with different angles according to the instructions of the terminal.
  • the 2D face images obtained include the color information and depth information of the pictures.
  • the 3D points are back-projected from the depth information.
  • the projected three-dimensional points have a corresponding relationship with the pixel points in the color information; after fusing the three-dimensional points projected from different 2D images, a reconstructed three-dimensional face model can be obtained.
  • Finally, according to the correspondence, the pixel points corresponding to each three-dimensional point in the three-dimensional face model are pasted onto the model, realizing texture mapping so that the three-dimensional model becomes colored.
  • an embodiment of the present application provides a method for acquiring a texture of a three-dimensional model.
  • the method is executed by a computer device, and the method includes:
  • acquiring at least two three-dimensional networks generated for a target object from multiple angles, where each three-dimensional network includes a first correspondence between point cloud information and color information of the target object, and a first camera pose of the target object, the first camera pose being used to represent the displacement of the target object relative to a reference position when the three-dimensional network was generated;
  • moving the at least two three-dimensional networks to the same angle according to their respective first camera poses;
  • acquiring a second point closest to a first point in a first network, the second point being in a second network, where the first network and the second network are different networks among the at least two three-dimensional networks;
  • acquiring the offset between the first point and the second point as the offset amount;
  • updating the first correspondence according to the offset to obtain a second correspondence between the point cloud information and the color information of the target object;
  • acquiring the surface color texture of the three-dimensional model of the target object according to the second correspondence.
  • an embodiment of the present application provides a texture acquiring device for a three-dimensional model, including:
  • a first acquisition unit, configured to acquire at least two three-dimensional networks generated for a target object from multiple angles, each three-dimensional network including a first correspondence between point cloud information and color information of the target object, and a first camera pose of the target object, where the first camera pose is used to represent the displacement of the target object relative to a reference position when the three-dimensional network is generated;
  • a second acquisition unit, configured to: move the at least two three-dimensional networks to the same angle according to their respective first camera poses; acquire a second point closest to a first point in a first network, the second point being in a second network, where the first network and the second network are different networks among the at least two three-dimensional networks; and acquire the offset between the first point and the second point as the offset amount;
  • An update unit configured to update the first correspondence relationship according to the offset acquired by the second acquisition unit to obtain a second correspondence relationship between the point cloud information and the color information of the target object;
  • the third acquiring unit is configured to acquire the surface color texture of the three-dimensional model of the target object according to the second correspondence obtained by the updating unit.
  • Optionally, the second acquisition unit is further configured to: traverse all points in the second network; acquire the three-dimensional coordinates of each point in the second network; calculate, from the three-dimensional coordinates, the distance between each point in the second network and the first point; and determine the point in the second network closest to the first point as the second point.
  • Optionally, the second acquisition unit is further configured to obtain the second point closest to the first point through the K-nearest-neighbor (KNN) algorithm.
  • Optionally, the update unit is further configured to: acquire, according to the first correspondence, the pixel points in the color information corresponding to the three-dimensional points in the point cloud information; offset the three-dimensional points in the point cloud information by the offset amount; and acquire the correspondence between the offset three-dimensional points and the pixel points as the second correspondence.
  • Optionally, the offset includes a rotation matrix R representing a rotation operation and a translation matrix T representing a translation operation, and the update unit is further configured to execute the formula D1 = (R|T) × D2, where D1 is the point cloud information in one three-dimensional network and D2 is the point cloud information in another three-dimensional network.
  • Optionally, the third acquisition unit is further configured to: acquire, according to the second correspondence, the pixel point corresponding to each three-dimensional point of the three-dimensional model; and cover each pixel point on the corresponding three-dimensional point to realize texture mapping on the surface of the three-dimensional model.
  • Optionally, the first acquisition unit is further configured to: acquire at least two initial images of the target object taken from different shooting angles, each initial image recording depth information of the target object, where the depth information records the distance between each point of the target object and the reference position, and the reference position is the position of the shooting lens that photographs the target object; back-project in three-dimensional space according to the depth information in each initial image to obtain the first point cloud information corresponding to that image, where each point in the first point cloud information is a three-dimensional point recording the target object; acquire the first correspondence between the three-dimensional points and the pixel points in the color information; and generate the three-dimensional network from the first point cloud information and the first correspondence.
  • an embodiment of the present application also provides a computer device.
  • the computer device includes: an interactive device, an input/output (I/O) interface, a processor, and a memory, where program instructions are stored;
  • the interactive device is used to obtain the operation instructions input by the user;
  • the processor is used to execute the program instructions stored in the memory, and execute the method as described in any one of the above items.
  • an embodiment of the present application provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method in the above aspect.
  • embodiments of the present application provide a computer program product including instructions, which when run on a computer, cause the computer to execute the method in the above aspect.
  • The method for obtaining the texture of the three-dimensional model includes: obtaining at least two three-dimensional networks generated for the target object from multiple angles, where each three-dimensional network includes a first correspondence between the point cloud information and the color information of the target object, and a first camera pose of the target object, the first camera pose representing the displacement of the target object relative to a reference position when the three-dimensional network was generated; obtaining, according to the first camera poses, the offset between three-dimensional points in the at least two three-dimensional networks that record the same position of the target object; updating the first correspondence according to the offset to obtain a second correspondence between the point cloud information and the color information of the target object; and obtaining the surface color texture of the three-dimensional model of the target object according to the second correspondence. By updating the correspondence between the point cloud information and the color information, a finer alignment between the three-dimensional points and the pixel points of the three-dimensional model is achieved, improving the effect of texture mapping.
  • FIG. 1 is a flowchart of an embodiment of a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 2 is a flowchart of another embodiment of a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of another embodiment of a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 4 is a schematic diagram of another embodiment of a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 5 is a flowchart of another embodiment of a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 6 is a flowchart of another embodiment of a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 7 is a schematic diagram of a color image and a depth image in a method for acquiring a texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 8 is a schematic diagram of the first point cloud information in the method for acquiring the texture of a three-dimensional model provided by an embodiment of the application;
  • FIG. 9 is an alignment effect diagram of a pixel network covering a user's face in the prior art.
  • FIG. 10 is an alignment effect diagram of the pixel point network covering the user's face in the method for acquiring the texture of the three-dimensional model provided by the embodiment of the application;
  • FIG. 11 is a schematic diagram of a computer device provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of a texture acquiring device of a three-dimensional model provided by an embodiment of the application.
  • Face reconstruction technology is a technology that reconstructs a 3D face model from one or more 2D face images.
  • the user takes multiple 2D pictures with different angles according to the instructions of the terminal.
  • the 2D face images obtained include the color information and depth information of the pictures.
  • the 3D points are back-projected from the depth information.
  • the projected three-dimensional points have a corresponding relationship with the pixel points in the color information; after fusing the three-dimensional points projected from different 2D images, a reconstructed three-dimensional face model can be obtained.
  • Finally, according to the correspondence, the pixel points corresponding to each three-dimensional point in the three-dimensional face model are pasted onto the model, realizing texture mapping so that the three-dimensional model becomes colored.
  • In the above texture-mapping process, after the three-dimensional points are fused, the correspondence between 3D points and pixel points is not necessarily accurate.
  • Moreover, even if the correspondence is highly accurate, the face is not a rigid body, so complete stillness cannot be guaranteed across images taken at different moments (blinking or mouth movement may occur); these errors are smoothed out during 3D reconstruction, so the 3D model cannot be aligned with the pixel points according to the correspondence.
  • If the correspondence deviates, for example, a point on the nose tip in the 3D model corresponds to a pixel point on the mouth in the color information, then during texture mapping the color of the mouth will be mapped onto the nose tip of the 3D model, producing a wrong texture in the 3D model.
  • an embodiment of the present application provides a texture acquisition method of a three-dimensional model, which can update the corresponding relationship between point cloud information and color information, so as to achieve finer alignment between the texture information and the three-dimensional model.
  • the methods provided in the embodiments of this application can be applied to various different targets, such as human faces, toys, cars, etc.
  • The embodiments of this application are not limited in this respect; for ease of understanding, the embodiments of this application take a human face as an example of the target object.
  • the first embodiment of the method for acquiring the texture of the three-dimensional model provided by the embodiment of the present application includes the following steps.
  • the initial image is an image of a target object taken from different angles, for example, a face image of a different angle taken by the user under the guidance of the terminal.
  • the shooting lens includes a depth lens, so that at least two initial images respectively record depth information of a human face, and the depth information is used to record the distance between each point of the target object and the shooting lens.
  • the initial image may also be obtained through other methods such as scanning, which is not limited in the embodiment of the present application.
  • each point in the first point cloud information is a three-dimensional point used to record the target object.
  • the target object is a human face
  • the first point cloud information records multiple three-dimensional points on the surface of the human face.
  • When the initial image is generated, there is a one-to-one correspondence between the points at which depth information is recorded and the pixel points.
  • When the depth information is projected, that is, when each point carrying depth information is projected as a three-dimensional point in three-dimensional space, the resulting three-dimensional point still maintains its correspondence with the pixel point; this is the first correspondence.
  • the generated three-dimensional network includes the first point cloud information and the first correspondence between the three-dimensional points and the pixel points in the first point cloud information.
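  • As an illustration of the back-projection step above, the following is a minimal Python sketch. It assumes a pinhole camera with intrinsics fx, fy, cx and cy and a depth map aligned pixel-for-pixel with the color image; these parameter names and the helper function are illustrative and not taken from the source.

      import numpy as np

      def back_project(depth, color, fx, fy, cx, cy):
          # Back-project a depth map into a 3D point cloud. Each valid depth pixel
          # (u, v) becomes one 3D point, and its flat index is kept so the point
          # keeps its correspondence with the color pixel (the "first correspondence").
          h, w = depth.shape
          us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
          valid = depth > 0                                  # ignore missing depth
          z = depth[valid]
          x = (us[valid] - cx) * z / fx
          y = (vs[valid] - cy) * z / fy
          points = np.stack([x, y, z], axis=1)               # (N, 3) point cloud
          pixel_index = np.flatnonzero(valid.ravel())        # point i <-> pixel_index[i]
          colors = color.reshape(-1, color.shape[-1])[pixel_index]
          return points, pixel_index, colors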
  • Therefore, in step 101, each of the acquired three-dimensional networks includes the first correspondence between the point cloud information and the color information of the target object (that is, the correspondence between three-dimensional points and pixel points) and the first camera pose of the target object, where the first camera pose represents the movement of the target object relative to the shooting lens when the different three-dimensional networks were generated.
  • each three-dimensional network is a three-dimensional point cloud collection generated based on an initial image, so it can support 360-degree rotation.
  • From the camera pose recorded in each three-dimensional network, the angle of each network can be determined; therefore, according to the first camera poses, the multiple three-dimensional networks are moved to the same angle before the subsequent steps are performed.
  • Specifically, one three-dimensional network can first be set as the first frame, for example, the network generated from the user's frontal face image, and the other three-dimensional networks are then uniformly moved to the angle of the first frame; the specific movement may be at least one of rotation or translation.
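  • A minimal sketch of moving one network to the reference (first-frame) angle is given below; it assumes each first camera pose is stored as a 4x4 homogeneous matrix mapping that frame's coordinates into a common world frame, which is an assumption since the source only states that a camera pose is recorded.

      import numpy as np

      def move_to_reference(points, pose, reference_pose):
          # Move one network's point cloud into the reference frame's angle.
          # points: (N, 3); pose, reference_pose: (4, 4) homogeneous matrices (assumed).
          relative = np.linalg.inv(reference_pose) @ pose    # this frame expressed in the reference frame
          homogeneous = np.hstack([points, np.ones((len(points), 1))])
          return (homogeneous @ relative.T)[:, :3]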
  • the first point and the second point are three-dimensional points in two different three-dimensional networks.
  • For example, the first point is a point in the first network and the second point is a point in the second network, where the first network and the second network are two different networks among the aforementioned at least two three-dimensional networks. Since all the three-dimensional networks have been moved to the same angle, the point in another network that is closest to a given point is very likely the three-dimensional point recording the same position on the target object.
  • When the second point is the point closest to the first point and the two do not coincide, the offset between the two points is obtained; the offset may consist of at least one of a relative rotation or a relative translation between the first point and the second point.
  • Because the initial images were taken from different angles when the three-dimensional networks were generated, three-dimensional points that record the same position on the target object may deviate from one another.
  • For example, point A records the tip of the user's nose in the network generated from the frontal face picture, and point B records the tip of the user's nose in the network generated from the profile picture; when both are rotated to the same angle, point A and point B may not coincide exactly, and the deviation between point A and point B is the offset between the two points.
  • Because there is an offset between two points that record the same position, the position of such a point changes during the subsequent point cloud fusion. Continuing the example above, points A and B, which record the tip of the user's nose in the networks generated from the frontal and profile pictures respectively, have an offset between them; after point cloud fusion, point A in the network generated from the frontal picture is shifted by this offset and may no longer lie on the tip of the user's nose, yet according to the original first correspondence, point A still corresponds to the pixel of the nose tip. Therefore, the pixel corresponding to point A needs to be updated according to the offset, yielding the second correspondence.
  • The second correspondence is the updated correspondence generated from the offsets of the three-dimensional points between different three-dimensional networks, and is closer to the actual correspondence between the three-dimensional model and the pixel points. The pixels on the surface of the three-dimensional model are then obtained according to the second correspondence, yielding the surface color texture of the three-dimensional model.
  • the three-dimensional point and the pixel point have a one-to-one correspondence. Therefore, according to the second correspondence relationship, the pixel point corresponding to each three-dimensional point in the three-dimensional network can be obtained.
  • Pasting the acquired pixels on the corresponding three-dimensional points can realize texture mapping on the surface of the three-dimensional model, so that the three-dimensional model has colors.
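  • A minimal sketch of this final pasting step is shown below; it assumes the second correspondence is stored as an (N, 2) array of pixel coordinates, one row per three-dimensional point, which is an illustrative storage format rather than the one used in the source.

      import numpy as np

      def apply_surface_texture(model_points, second_correspondence, color_image):
          # Color each 3D point of the model from its corresponding pixel.
          # second_correspondence[i] is assumed to hold the (row, col) pixel that
          # point i maps to under the updated (second) correspondence.
          rows = second_correspondence[:, 0]
          cols = second_correspondence[:, 1]
          vertex_colors = color_image[rows, cols]   # (N, 3) RGB per 3D point
          return vertex_colors                      # paste onto the model as per-vertex color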
  • The method for acquiring the texture of the three-dimensional model provided by this embodiment of the application includes: acquiring at least two three-dimensional networks generated for the target object from different shooting angles, where each network includes a first correspondence between the point cloud information and the color information of the target object and a first camera pose of the target object, the first camera pose indicating the movement of the target object relative to the shooting lens when the different networks were generated; acquiring, according to the first camera poses, the offset between three-dimensional points in different networks that record the same position of the target object; updating the first correspondence according to the offset to obtain a second correspondence between the point cloud information and the color information of the target object; and acquiring the surface color texture of the three-dimensional model of the target object according to the second correspondence. By updating the correspondence between the point cloud information and the color information, a more precise and subtle alignment between the three-dimensional points and the pixel points of the three-dimensional model is achieved, improving the effect of texture mapping.
  • It should be noted that the offsets described above need to be obtained because the three-dimensional networks generated from different shooting angles deviate from one another, and they are obtained by finding the nearest point. For the specific way of finding the nearest point, the embodiments of the present application provide more concrete implementations, described in detail below with reference to the accompanying drawings.
  • the second embodiment of the method for acquiring the texture of the three-dimensional model includes the following steps.
  • this step can refer to the above step 101, which will not be repeated here.
  • this step can refer to the above step 102, which will not be repeated here.
  • After step 202, the user can perform either of the following steps 203 or 204 as needed: in step 203, all coordinate points are traversed, distances are computed from the three-dimensional coordinates, and the results are sorted to find the second point closest to the first point; in step 204, the second point closest to the first point is found through the K-Nearest Neighbor (KNN) algorithm.
  • Steps 205 to 207 can refer to the above steps 104 to 106, which will not be repeated here.
  • The method provided in this embodiment finds, by searching for closest points, the points in different three-dimensional networks that record the same position of the target object, so the offset between different networks is obtained accurately; this gives a precise basis for subsequently updating the correspondence based on the offset, and thus improves the accuracy of the correspondence between the point cloud information and the color information.
  • the third embodiment of the method for obtaining the texture of the three-dimensional model includes the following steps.
  • Steps 301 to 302 can refer to the above steps 201 to 202, which will not be repeated here.
  • the second network is one of at least two three-dimensional networks, and the three-dimensional points in the second network are obtained one by one to realize traversal of all points in the second network.
  • the three-dimensional coordinate values (x, y, z) of each coordinate point are respectively acquired.
  • The three-dimensional network containing the first point is the first network, that is, the second point and the first point belong to two different three-dimensional networks. From the coordinate values, the distance between each point in the second network and the first point in the first network can be calculated.
  • The results are then sorted by distance, and the point in the second network closest to the first point is taken as the second point; in this way the point closest to the first point is found by traversal. Repeating the same operation for every point in the first network yields, for each of those points, the closest point in the second network.
  • Traversing the coordinate points and comparing coordinate values finds the closest point to a given point very accurately; because the coordinate values provide precise accuracy, the offset can be accurately calculated from them in the subsequent steps, so the correspondence between the point cloud information and the color information can be updated accurately.
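  • The exhaustive traversal described in this embodiment could look like the following sketch (variable and function names are illustrative, not from the source):

      import numpy as np

      def nearest_points_bruteforce(first_network, second_network):
          # For every point in the first network, find the closest point in the second.
          # first_network: (N, 3), second_network: (M, 3), both already moved to the same angle.
          diff = first_network[:, None, :] - second_network[None, :, :]
          dist_sq = np.einsum('nmk,nmk->nm', diff, diff)        # pairwise squared distances, (N, M)
          nearest_index = dist_sq.argmin(axis=1)                 # closest second-network point per first point
          nearest_dist = np.sqrt(dist_sq[np.arange(len(first_network)), nearest_index])
          return nearest_index, nearest_dist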
  • Although the method of the third embodiment is highly accurate, traversing the three-dimensional points places high demands on computing power and also takes more computation time; therefore, to address this, the embodiments of the present application provide another implementation.
  • the fourth embodiment of the method for acquiring the texture of the three-dimensional model includes the following steps.
  • Steps 401 to 402 can refer to the above steps 201 to 202, which will not be repeated here.
  • The core idea of the KNN algorithm is that if the majority of the k nearest samples of a sample in feature space belong to a certain category, the sample also belongs to that category and shares the characteristics of the samples in that category.
  • In making its classification decision, the algorithm relies only on the category of the one or few nearest samples to decide the category of the sample to be classified.
  • Thus the KNN method depends only on a very small number of neighboring samples when making category decisions. Since it relies mainly on the limited surrounding neighbors rather than on discriminating class domains, the KNN method is better suited than other methods to sample sets with heavily crossing or overlapping class domains, such as the point cloud data processed in the embodiments of this application.
  • the KNN algorithm is used to search for the closest point, and the second point that is the closest to the first point in the first network can be quickly found in the second network. This saves computing resources and improves the realization efficiency of the texture acquisition method of the three-dimensional model.
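  • The source only names the KNN algorithm; one possible realization of the nearest-point search, using a k-d tree from SciPy, is sketched below as an assumption rather than the mandated implementation:

      import numpy as np
      from scipy.spatial import cKDTree

      def nearest_points_knn(first_network, second_network):
          # Nearest-neighbor search via a k-d tree instead of exhaustive traversal.
          tree = cKDTree(second_network)                        # index the second network once
          dist, nearest_index = tree.query(first_network, k=1)  # closest point for every first-network point
          return nearest_index, dist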
  • It should be noted that, in the initial state, the method provided by the embodiments of this application mainly involves two sets of correspondences. The first is the first correspondence: in the initial image, the depth map corresponds to the color map, specifically, the points in the depth map correspond to the pixels in the color map; after the depth-map points are projected into three-dimensional space as point cloud information, the relationship between the three-dimensional points in the point cloud information and the pixel points in the color image is the first correspondence.
  • The second is the offset correspondence: for initial images of the target object taken from different angles, the resulting three-dimensional networks deviate from one another, and there is a certain offset between three-dimensional points in different networks that record the same position of the target object. For example, between the first network and the second network, point A and point B both record the tip of the user's nose but are offset from each other; thus there is a correspondence between point A and point B based on the offset.
  • the core idea of the method provided in the embodiments of the present application is to update the first correspondence relationship through the offset correspondence relationship, thereby improving the alignment of the pixel point and the three-dimensional point, and realizing more accurate texture mapping in the three-dimensional model.
  • the process of updating the first correspondence relationship through the offset correspondence relationship will be described in detail below in conjunction with the accompanying drawings.
  • the fifth embodiment of the method for acquiring the texture of the three-dimensional model provided by the embodiment of the present application includes the following steps.
  • Steps 501 to 504 may refer to the foregoing 201 to 204, and optionally, steps 501 to 504 may also be implemented by the methods of the foregoing Embodiment 3 or Embodiment 4, which is not limited in this embodiment of the present application.
  • step 506 can be specifically implemented in the following manner.
  • First, the rotation matrix R and translation matrix T are extracted from the offset: R represents the rotation operation between the three-dimensional networks and T represents the translation operation between them, so the offset between the three-dimensional points can be quantified by these two matrices.
  • Then the three-dimensional points are offset by executing the formula D1 = (R|T) × D2, where D1 is the point cloud information in one three-dimensional network and D2 is the point cloud information in the other; in this way the point cloud information of one network is shifted onto the other through the offset.
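  • A minimal sketch of applying the formula D1 = (R|T) × D2 with NumPy is given below; the row-vector convention and array shapes are assumptions, since the source only states the formula itself:

      import numpy as np

      def apply_offset(points_d2, rotation_r, translation_t):
          # Offset the point cloud D2 into D1 via D1 = (R|T) x D2.
          # points_d2: (N, 3) point cloud, rotation_r: (3, 3), translation_t: (3,).
          return points_d2 @ rotation_r.T + translation_t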
  • The offset point is the actual position of the three-dimensional point after the three-dimensional networks are fused into the three-dimensional model. Based on the offset three-dimensional point, the correspondence between the three-dimensional point and the pixel point at this position is re-determined, so the first correspondence is updated to the second correspondence and the three-dimensional points are aligned with the pixel points.
  • It should be noted that steps 503 to 507 need not be executed only once; they can be executed repeatedly in iterations, and each iteration makes the correspondence more accurate. The specific number of iterations depends on the desired accuracy.
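  • The source does not specify how R and T are computed from the matched nearest points; a common choice is the SVD-based (Kabsch) least-squares rigid alignment, and the sketch below shows that choice together with the iterative repetition of steps 503 to 507. It is one possible realization, not the method mandated by the patent:

      import numpy as np
      from scipy.spatial import cKDTree

      def estimate_rigid_offset(src, dst):
          # Estimate R, T aligning matched points src -> dst via the SVD (Kabsch) method.
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          h = (src - src_c).T @ (dst - dst_c)
          u, _, vt = np.linalg.svd(h)
          r = vt.T @ u.T
          if np.linalg.det(r) < 0:                  # avoid returning a reflection
              vt[-1] *= -1
              r = vt.T @ u.T
          t = dst_c - r @ src_c
          return r, t

      def iterate_alignment(first_network, second_network, iterations=5):
          # Repeat nearest-point matching and offset estimation (steps 503 to 507).
          moved = first_network.copy()
          tree = cKDTree(second_network)
          for _ in range(iterations):
              _, idx = tree.query(moved, k=1)       # nearest second-network point per point
              r, t = estimate_rigid_offset(moved, second_network[idx])
              moved = moved @ r.T + t               # apply the offset; the correspondence is updated alongside
          return moved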
  • the pixel point corresponding to each three-dimensional point in the three-dimensional model can be obtained.
  • the pixel points are pasted on the corresponding three-dimensional points to implement texture mapping on the surface of the three-dimensional model.
  • The specific technical details of implementing texture mapping belong to the prior art; based on the second correspondence described above, those skilled in the art can independently select a specific texture-mapping implementation, which is not limited in the embodiments of the present application.
  • the embodiment of the present application also provides a specific implementation manner of the texture acquisition method of the three-dimensional model in actual work.
  • the following describes in detail with reference to the accompanying drawings.
  • the sixth embodiment of the method for acquiring the texture of the three-dimensional model provided by the embodiment of the present application includes the following steps.
  • FIG. 7 shows the image of the user's profile among the at least two initial images obtained, comprising a color map 701 and a depth map 702, where the color map 701 records the pixel points expressing the color texture, and the depth map 702 records points representing depth information, that is, the distance between each point in the map and the shooting lens.
  • the first point cloud information 801 obtained by projecting in the three-dimensional space is as shown in FIG. 8.
  • The points in the first point cloud information 801 correspond one-to-one to the points in the depth map 702, and the points in the depth map 702 correspond one-to-one to the pixels in the color map 701; therefore, from these correspondences, the first correspondence between the three-dimensional points and the pixel points in the color information can be obtained.
  • the generated three-dimensional network is the network shown in FIG. 8, except that the first correspondence and the camera pose information of the target object are also recorded in the three-dimensional network.
  • the specific acquisition method of the offset can be referred to any of the foregoing implementation manners, which will not be repeated here.
  • the manner of updating based on the first correspondence to obtain the second correspondence may refer to any of the foregoing implementation manners, and details are not described herein again.
  • this step can refer to any of the foregoing implementation manners, and details are not described herein again.
  • FIG. 9 shows, for the prior art, the correspondence between the point cloud information and the pixel point network 901 formed by the pixels in the color map. As can be seen in FIG. 9, because the correspondence between the point cloud information and the color information has not been updated, and because the three-dimensional model was smoothed during modeling, a certain offset arises at positions such as the nose tip 902, so the pixel point network 901 cannot be aligned with these regions.
  • After processing with the method provided by the embodiment of this application, as shown in FIG. 10, the pixel point network 1001 completely covers the nose tip area 1002; with the correspondence updated, the pixel point network 1001 covers the extent of the human face more accurately.
  • The method for acquiring the texture of the three-dimensional model provided by this embodiment of the application includes: acquiring at least two three-dimensional networks generated for the target object from different shooting angles, where each network includes a first correspondence between the point cloud information and the color information of the target object and a first camera pose of the target object, the first camera pose indicating the movement of the target object relative to the shooting lens when the different networks were generated; acquiring, according to the first camera poses, the offset between three-dimensional points in different networks that record the same position of the target object; updating the first correspondence according to the offset to obtain a second correspondence between the point cloud information and the color information of the target object; and acquiring the surface color texture of the three-dimensional model of the target object according to the second correspondence. By updating the correspondence between the point cloud information and the color information, a more precise and subtle alignment between the three-dimensional points and the pixel points of the three-dimensional model is achieved, improving the effect of texture mapping.
  • a computer device includes hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software-driven hardware depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
  • the above method may be implemented by one physical device, or jointly implemented by multiple physical devices, or may be a logical function module in one physical device, which is not specifically limited in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of the hardware structure of a computer device provided by an embodiment of the application.
  • the computer device includes at least one processor 1101, a communication line 1102, a memory 1103, and at least one communication interface 1104.
  • The processor 1101 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application.
  • the communication line 1102 may include a path to transmit information between the aforementioned components.
  • The communication interface 1104 uses any transceiver-like device to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • The memory 1103 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
  • the memory may exist independently, and is connected to the processor through a communication line 1102. The memory can also be integrated with the processor.
  • the memory 1103 is used to store computer-executed instructions for executing the solution of the present application, and the processor 1101 controls the execution.
  • the processor 1101 is configured to execute computer-executable instructions stored in the memory 1103, so as to implement the method provided in the foregoing embodiment of the present application.
  • the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
  • the processor 1101 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 11.
  • the computer device may include multiple processors, such as the processor 1101 and the processor 1107 in FIG. 11.
  • Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • the computer device may further include an output device 1105 and an input device 1106.
  • the output device 1105 communicates with the processor 1101 and can display information in a variety of ways.
  • For example, the output device 1105 may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode ray tube (CRT) display device, a projector, or the like.
  • the input device 1106 communicates with the processor 1101 and can receive user input in a variety of ways.
  • the input device 1106 may be a mouse, a keyboard, a touch screen device, a sensor device, or the like.
  • the above-mentioned computer device may be a general-purpose device or a special-purpose device.
  • In a specific implementation, the computer device may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or a device with a structure similar to that in FIG. 11.
  • the embodiments of this application do not limit the type of computer equipment.
  • the embodiment of the present application may divide the storage device into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 12 shows a schematic diagram of a texture acquiring device of a three-dimensional model.
  • The first acquisition unit 1201 is configured to acquire at least two three-dimensional networks generated for a target object from multiple angles, each three-dimensional network including a first correspondence between point cloud information and color information of the target object, and a first camera pose of the target object, where the first camera pose is used to represent the displacement of the target object relative to a reference position when the three-dimensional network is generated.
  • The second acquisition unit 1202 is configured to: move the at least two three-dimensional networks acquired by the first acquisition unit 1201 to the same angle according to their respective first camera poses; acquire a second point closest to a first point in a first network, the second point being in a second network, where the first network and the second network are different networks among the at least two three-dimensional networks; and acquire the offset between the first point and the second point as the offset amount.
  • An update unit 1203, which is configured to update the first correspondence relationship according to the offset acquired by the second acquisition unit 1202 to obtain a second correspondence relationship between the point cloud information and the color information of the target object;
  • the third obtaining unit 1204 is configured to obtain the surface color texture of the three-dimensional model of the target object according to the second correspondence obtained by the updating unit 1203.
  • Optionally, the second acquisition unit 1202 is further configured to: traverse all points in the second network; acquire the three-dimensional coordinates of each point in the second network; calculate, from the three-dimensional coordinates, the distance between each point in the second network and the first point; and determine the point in the second network closest to the first point as the second point.
  • Optionally, the second acquisition unit 1202 is further configured to obtain the second point closest to the first point through the KNN algorithm.
  • Optionally, the updating unit 1203 is further configured to: acquire, according to the first correspondence, the pixel points in the color information corresponding to the three-dimensional points in the point cloud information; offset the three-dimensional points in the point cloud information by the offset amount; and acquire the correspondence between the offset three-dimensional points and the pixel points as the second correspondence.
  • Optionally, the offset includes a rotation matrix R representing a rotation operation and a translation matrix T representing a translation operation, and the update unit 1203 is further configured to execute the formula D1 = (R|T) × D2, where D1 is the point cloud information in one three-dimensional network and D2 is the point cloud information in another three-dimensional network.
  • Optionally, the third acquisition unit 1204 is further configured to: acquire, according to the second correspondence, the pixel point corresponding to each three-dimensional point of the three-dimensional model; and cover each pixel point on the corresponding three-dimensional point to realize texture mapping on the surface of the three-dimensional model.
  • Optionally, the first acquisition unit 1201 is further configured to: acquire at least two initial images of the target object taken from different shooting angles, each initial image recording depth information of the target object, where the depth information records the distance between each point of the target object and the reference position, and the reference position is the position of the shooting lens that photographs the target object; back-project in three-dimensional space according to the depth information in each initial image to obtain the first point cloud information corresponding to that image, where each point in the first point cloud information is a three-dimensional point recording the target object; acquire the first correspondence between the three-dimensional points and the pixel points in the color information; and generate the three-dimensional network from the first point cloud information and the first correspondence.
  • an embodiment of the present application also provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method provided in the foregoing embodiment.
  • the embodiments of the present application also provide a computer program product including instructions, which when run on a computer, cause the computer to execute the method provided in the above-mentioned embodiments.
  • the steps of the method or algorithm described in the embodiments disclosed in this document can be directly implemented by hardware, a software module executed by a processor, or a combination of the two.
  • The software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

A texture acquisition method, apparatus, device and medium for a three-dimensional model. The method includes: acquiring at least two three-dimensional networks generated for a target object from multiple angles, each three-dimensional network including a first correspondence between point cloud information and color information of the target object and a first camera pose of the target object; acquiring, according to the first camera poses included in the at least two three-dimensional networks, the offset between three-dimensional points in the at least two three-dimensional networks that record the same position of the target object; updating the first correspondence according to the offset to obtain a second correspondence between the point cloud information and the color information of the target object; and acquiring the surface color texture of the three-dimensional model of the target object according to the second correspondence. By updating the correspondence between the point cloud information and the color information, the method achieves a more precise and subtle alignment between the three-dimensional points and the pixel points in the three-dimensional model, improving the effect of texture mapping of the three-dimensional model.

Description

Texture acquisition method for a three-dimensional model and related apparatus
This application claims priority to Chinese Patent Application No. 202010027225.1, entitled "Texture acquisition method, apparatus, device and medium for a three-dimensional model", filed with the Chinese Patent Office on January 10, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of electronic technology, and more specifically, to texture acquisition for three-dimensional models.
Background
Face reconstruction technology is a technology that reconstructs a 3D face model from one or more 2D face images.
In actual use, the user takes multiple 2D pictures from different angles according to the instructions of the terminal; the captured 2D face images include the color information and depth information of the pictures. During three-dimensional reconstruction, three-dimensional points are back-projected from the depth information, and the projected three-dimensional points correspond to pixel points in the color information. After the three-dimensional points projected from different 2D images are fused, the reconstructed three-dimensional face model is obtained. Finally, according to the correspondence, the pixel points corresponding to each three-dimensional point in the three-dimensional face model are pasted onto the model to realize texture mapping, so that the three-dimensional model becomes colored.
发明内容
有鉴于此,为解决上述问题,本申请提供的技术方案如下:
一方面,本申请实施例提供了一种三维模型的纹理获取方法,所述方法由计算机设备执行,所述方法包括:
获取目标物基于多个角度生成的至少两个三维网络,该三维网络中包括该目标物的点云信息和色彩信息的第一对应关系,以及该目标物的第一相机位姿,该第一相机位姿用于表示生成该三维网络时该目标物相对参考位置的位移;
根据该至少两个三维网络分别包括的第一相机位姿,将该至少两个三维网络移动到同一角度;
获取距离第一网络中第一点最近的第二点,该第二点处于第二网络,所述第一网络和所述第二网络分别为所述至少两个三维网络中不同的三维网络;
获取该第一点与该第二点之间的偏移作为偏移量;
根据该偏移量更新该第一对应关系,得到该目标物的点云信息和色彩信息的第二对应关系;
根据该第二对应关系获取该目标物的三维模型的表面色彩纹理。
另一方面,本申请实施例提供了一种三维模型的纹理获取装置,包括:
第一获取单元,所述第一获取单元用于获取目标物基于多个角度生成的至少两个三维网络,所述三维网络中包括所述目标物的点云信息和色彩信息的第一对应关系,以及所述目标物的第一相机位姿,所述第一相机位姿用于表示生成所述三维网络时所述目标物相对参考位置的位移;
第二获取单元,所述第二获取单元用于:
根据所述至少两个三维网络分别包括的第一相机位姿,将所述至少两个三维网络移动到同一角度;
获取距离第一网络中第一点最近的第二点,所述第二点处于第二网络,所述第一网络和所述第二网络分别为所述至少两个三维网络中不同的三维网络;
获取所述第一点与所述第二点之间的偏移作为所述偏移量;
更新单元,所述更新单元用于根据所述第二获取单元获取的所述偏移量更新所述第一对应关系,得到目标物的点云信息和色彩信息的第二对应关系;
第三获取单元,所述第三获取单元用于根据所述更新单元得到的所述第二对应关系获取所述目标物的三维模型的表面色彩纹理。
可选地,该第二获取单元还用于:
遍历所述第二网络中的所有点;
分别获取该第二网络中所有点的三维坐标;
根据该三维坐标分别计算该第二网络中每个点距离该第一点的距离;
确定该第二网络中距离该第一点最近的点为该第二点。
可选地,该第二获取单元还用于:
通过邻近算法KNN获取与该第一点距离最近的该第二点。
可选地,该更新单元还用于:
根据该第一对应关系获取与该点云信息中的三维点对应的该色彩信息中的像素点;
通过该偏移量对该点云信息中的三维点进行偏移;
获取该点云信息中偏移后的三维点与该像素点的对应关系作为该第二对应关系。
可选地,该偏移量包括用于表示旋转操作的旋转矩阵R和用于表示平移操作的平移矩阵T,则该更新单元还用于:
执行以下公式:D1=(R|T)×D2;
其中,该D1为一个三维网络中的点云信息,该D2为另一个三维网络中的信息。
可选地,该第三获取单元还用于:
根据该第二对应关系获取该三维模型的该各个三维点所分别对应的该像素点;
将该各个像素点覆盖在对应的该三维点上,以在该三维模型的表面实现纹理贴图。
可选地,该第一获取单元还用于:
获取目标物在不同拍摄角度上的至少两个初始图像,该至少两个初始图像分别记录有该目标物的深度信息,该深度信息用于记录该目标物的各个点与该参考位置之间的距离,该参考位置为拍摄该目标物的拍摄镜头所在的位置;
根据每个初始图像中的深度信息在三维空间内反投影,得到该每个初始图像对应的第一点云信息,该第一点云信息中的各个点为用于记录该目标物的三维点;
获取该三维点与该色彩信息中的像素点的第一对应关系;
根据该第一点云信息与该第一对应关系生成该三维网络。
另一方面,本申请实施例还提供了一种计算机设备所述计算机设备,包括:交互装置、输入/输出(I/O)接口、处理器和存储器,该存储器中存储有程序指令;该交互装置用于获取用户输入的操作指令;该处理器用于执行存储器中存储的程序指令,执行如上述任意一项所述的方法。
又一方面,本申请实施例提供一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行以上方面的方法。
又一方面,本申请实施例提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行以上方面的方法。
本申请实施例所提供的三维模型的纹理获取方法,包括:获取目标物基于多个角度生成的至少两个三维网络,三维网络中包括目标物的点云信息和色彩 信息的第一对应关系,以及目标物的第一相机位姿,第一相机位姿用于表示生成三维网络时目标物相对参考位置的位移;根据第一相机位姿获取至少两个三维网络中用于记录目标物同一位置的三维点之间的偏移量;根据偏移量更新第一对应关系,得到目标物的点云信息和色彩信息的第二对应关系;根据第二对应关系获取目标物的三维模型的表面色彩纹理。通过对点云信息和色彩信息的对应关系进行更新,实现了三维模型中三维点和像素点之间更精确细微的对齐,提升了三维模型纹理贴图的效果。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据提供的附图获得其他的附图。
图1为本申请实施例所提供的三维模型的纹理获取方法的一个实施例的流程图;
图2为本申请实施例所提供的三维模型的纹理获取方法的另一个实施例的流程图;
图3为本申请实施例所提供的三维模型的纹理获取方法的另一个实施例的示意图;
图4为本申请实施例所提供的三维模型的纹理获取方法的另一个实施例的示意图;
图5为本申请实施例所提供的三维模型的纹理获取方法的另一个实施例的流程图;
图6为本申请实施例所提供的三维模型的纹理获取方法的另一个实施例的流程图;
图7为本申请实施例所提供的三维模型的纹理获取方法中色彩图像和深度图像的示意图;
图8为本申请实施例所提供的三维模型的纹理获取方法中第一点云信息的示意图;
图9为现有技术中像素点网络覆盖用户面部的对齐效果图;
图10为本申请实施例所提供的三维模型的纹理获取方法中像素点网络覆盖用户面部的对齐效果图;
图11为本申请实施例所提供的计算机设备的示意图;
图12为本申请实施例所提供的三维模型的纹理获取装置的示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
人脸重建技术,即通过一张或多张的2D人脸图像重建出人脸3D模型的技术。
具体工作时,用户根据终端指示拍摄多张不同角度的2D图片,拍摄得到的2D人脸图像包括图片的色彩信息和深度信息,在三维重建的过程中,从深度信息反投影出三维点,所投影出的三维点与色彩信息中的像素点具有对应关系;对来自不同2D图像投影出的三维点进行融合后,即可得到重建出的人脸三维模型,最后,根据对应关系,将人脸三维模型中各个三维点所对应的像素点贴在三维模型上,即可实现三维模型的纹理贴图,使得三维模型变为彩色。
在上述纹理贴图过程中,经过三维点融合的步骤后,三维点与像素点之间的对应关系不一定准确,其次,即使对应关系特别准,因为人脸不是一个刚体,在各个时刻拍摄的图像上不一定能保证完全静止(可能出现眨眼撇嘴),三维重建时会平滑掉这些误差,导致三维模型根据对应关系无法与像素点对齐。如 果对应关系出现了偏差,比如三维模型中鼻尖上的点对应到了色彩信息中嘴巴上的像素点,在纹理贴图过程中,那就会把嘴巴上的颜色对应到三维模型的鼻尖上,导致三维模型中错误的纹理。
因此,为了解决上述问题,本申请实施例提供一种三维模型的纹理获取方法,能够对点云信息和色彩信息的对应关系进行更新,从而实现纹理信息与三维模型之间更精细的对齐。为便于理解,以下结合附图,对本申请实施例所提供的方法进行详细说明。
需要说明的是,本申请实施例所提供的方法可以应用于各种不同的目标物,例如人脸、玩具或汽车等,本申请实施例并不进行限定,为便于理解,本申请实施例以目标物为人脸为例进行说明。
请参阅图1,如图1所示,本申请实施例所提供的三维模型的纹理获取方法的实施例一包括以下步骤。
101、获取目标物基于多个角度生成的至少两个三维网络。
本实施例中,三维网络的具体生成方式如下:
1、获取目标物在多个拍摄角度上的至少两个初始图像。
本实施例中,初始图像从不同角度拍摄的目标物的图像,例如用户在终端的指导下拍摄的不同角度的人脸图像。该拍摄镜头包括深度镜头,从而使得至少两个初始图像分别记录有人脸的深度信息,深度信息用于记录目标物的各个点与拍摄镜头之间的距离。
可选地,初始图像还可以是通过扫描等其他方式获得,对此本申请实施例并不进行限定。
2、根据每个初始图像中的深度信息在三维空间内反投影,得到每个初始图像对应的第一点云信息。
本实施例中,第一点云信息中的各个点为用于记录目标物的三维点,例如,目标物为人脸时,第一点云信息记录了人脸表面上的多个三维点。
3、获取三维点与色彩信息中的像素点的第一对应关系。
本实施例中,当生成初始图像时,记录有深度信息的点与像素点是一一对应的,当深度信息投影为三维点后,即记录有深度信息的点在三维空间内被投影为三维点,因此,三维点依然保持着与像素点的对应关系,即第一对应关系。
4、根据第一点云信息与第一对应关系生成三维网络。
本实施例中,生成的三维网络中,包括第一点云信息,以及第一点云信息中的三维点与像素点之间的第一对应关系。
因此,在步骤101中,所获取的至少两个三维网络中分别包括目标物的点云信息和色彩信息的第一对应关系(即三维点与像素点的对应关系),以及目标物的第一相机位姿,第一相机位姿用于表示生成不同的三维网络时目标物相对拍摄镜头的移动。
102、根据至少两个三维网络分别包括的第一相机位姿,将至少两个三维网络移动到同一角度。
本实施例中,每个三维网络都是基于一个初始图像生成的三维点云集合,因此能够支持360度转动,根据三维网络中所记录的相机位姿,能够知道每个三维网络的角度情况,因此,根据第一相机位姿,将多个三维网络移动到同一角度下,以执行后续步骤。具体地,可以先设置一个三维网络为第一帧,例如,将用户正脸图像所生成的三维网络作为第一帧,之后将其他三维网络统一移动到第一帧所在的角度,该移动的具体操作方式可以是旋转或平移中的至少一种。
103、获取距离第一网络中第一点最近的第二点。
本实施例中,第一点与第二点分别为两个不同三维网络中的三维点,例如,第一点为第一网络中的点,第二点为第二网络中的点,第一网络与第二网络为前述至少两个三维网络中两个不同的三维网络。由于所有三维网络都处于一个角度,那么距离一个点最近的属于另一个三维网络的点,就大概率可能是用于记录目标物同一位置的三维点。
104、获取第一点与第二点之间的偏移作为偏移量。
本实施例中,当第二点为距离第一点最近的点,且二者不重叠时,获取两个点之间的偏移,该偏移可以是第一点与第二点之间发生相对旋转或平移中的至少一种,获取第一点与第二点之间的偏移即可获得该偏移量。
本实施例中,由于生成三维网络时,初始图像的拍摄角度不同,因此,对于记录目标物同一位置的三维点,可能会具有一定的偏差。例如,正脸图片生成的三维网络中记录用户鼻尖的点A,和侧脸图片生成的三维网络中记录用户鼻尖的点B,当旋转到同一角度时,点A与点B之间由于偏差,可能不会完全重 合,此时,点A和点B之间的偏差即为两个点之间的偏移量。
105、根据偏移量更新第一对应关系,得到目标物的点云信息和色彩信息的第二对应关系。
本实施例中,由于记录同一位置的两个点之间存在偏移量,导致在后续点云融合的过程中,该点的位置会发生改变,例如,如上述例子所述,在正脸图片和侧脸图片所生成的三维网络中,分别用于记录用户鼻尖的点A和点B之间具有偏移量,则点云融合后,正脸图片所生成的三维网络中点A会因偏移量而发生改变,此时点A可能不再位于用户鼻尖的位置上,而根据原有的第一对应关系,点A仍然对应用户鼻尖的像素点,因此,需要根据偏移量,对点A所对应的像素点进行更新,得到第二对应关系。
106、根据第二对应关系获取目标物的三维模型的表面色彩纹理。
本实施例中,第二对应关系是根据不同三维网络之间三维点的偏移量,所生成的更新后的对应关系,更贴近三维模型和像素点的实际对应情况,此时,根据第二对应关系来获取三维模型表面的像素点,以得到三维模型的表面色彩纹理。具体过程可以为:
1、根据第二对应关系,获取三维模型的各个三维点所分别对应的像素点。
本实施例中,在第二对应关系中,三维点与像素点是一一对应的,因此根据第二对应关系,即可获得三维网络中每个三维点所对应的像素点。
2、将各个像素点覆盖在对应的三维点上,以在三维模型的表面实现纹理贴图。
将所获取到的像素点贴在对应的三维点上,即可实现三维模型表面的纹理贴图,使得三维模型具备色彩。
本申请实施例所提供的三维模型的纹理获取方法,包括:获取目标物基于不同拍摄角度生成的至少两个三维网络,至少两个三维网络中分别包括目标物的点云信息和色彩信息的第一对应关系,以及目标物的第一相机位姿,第一相机位姿用于表示生成不同的三维网络时目标物相对拍摄镜头的移动;根据第一相机位姿获取不同三维网络中用于记录目标物同一位置的三维点之间的偏移量;根据偏移量更新第一对应关系,得到目标物的点云信息和色彩信息的第二对应关系;根据第二对应关系获取目标物的三维模型的表面色彩纹理。通过对 点云信息和色彩信息的对应关系进行更新,实现了三维模型中三维点和像素点之间更精确细微的对齐,提升了三维模型纹理贴图的效果。
需要说明的是,由于不同拍摄角度下生成的三维网络之间有所偏差,因此才需要获取上述偏移量,上述通过找最近点的方式获取偏移量,对于找最近点的具体获取方式,本申请实施例提供一种更具体的实施方式,以下结合附图进行详细说明。
请参阅图2,如图2所示,本申请实施例所提供的三维模型的纹理获取方法的实施例二包括以下步骤。
201、获取目标物基于多个角度生成的至少两个三维网络。
本实施例中,本步骤可参阅上述步骤101,此处不再赘述。
202、根据至少两个三维网络分别包括的第一相机位姿,将至少两个三维网络移动到同一角度。
本实施例中,本步骤可参阅上述步骤102,此处不再赘述,在完成步骤202后,用户可根据需要,执行下述步骤203或204中的任意一个。
203、遍历所有坐标点,通过三维坐标计算距离后排序找到距离第一点最近的第二点。
204、通过邻近算法(K-Nearest Neighbor,KNN)找到距离第一点最近的第二点。
步骤205至207可参阅上述步骤104至106,此处不再赘述。
本实施例所提供的方法,通过寻找最近点的方式来获取不同三维网络之间记录目标物同一位置的点,从而精确地得到不同三维网络之间的偏移量,为后续基于该偏移量更新对应关系提供了精确的基础。从而能够提供点云信息和色彩信息对应关系的精确性。
需要说明的是,在上述实施例二中,对于获取距离第一点最近的第二点的具体实现方式,可以通过多种方法,包括但不限于:一、遍历所有坐标点,通过三维坐标计算距离后排序实现;二、通过邻近算法KNN实现,为便于,以下对此两种情况进行详细说明。
一、遍历所有坐标点,通过三维坐标计算距离后排序实现。
请参阅图3,如图3所示,本申请实施例所提供的三维模型的纹理获取方法 的实施例三包括以下步骤。
步骤301至302可参阅上述步骤201至202,此处不再赘述。
303、遍历第二网络中的所有点。
本实施例中,第二网络为至少两个三维网络中的一个,逐个获取第二网络中的三维点,以实现对第二网络中所有点的遍历。
304、分别获取第二网络中所有点的三维坐标。
本实施例中,对于所获取到的第二网络中的坐标点,分别获取每个坐标点的三维坐标值(x,y,z)。
305、根据三维坐标分别计算第二网络中每个点距离第一点的距离。
本实施例中,第一点所在的三维网络为第一网络,即第二点与第一点属于两个不同的三维网络中的点,根据三维坐标的坐标值,可以分别计算出第二网络中的每个点距离与第一网络中的第一点之间的距离。
306、确定第二网络中距离第一点最近的点为第二点。
本实施例中,根据距离计算结果进行排序,即可根据排序结果,获得第二网络中距离第一点最近的一个点作为第二点,从而通过遍历的方式找到了第一网络中距离第一点最近的点,循环以上方式,对第一网络中的每个点进行相同操作,即可获得第二网络中距离第一网络中每个点最近的点。
需要说明的是,上述步骤也可以针对多个三维网络并行实施的,上述实施例只是以其中一个三维网络(第一网络)为例子来进行说明。
后续步骤307至309可参阅上述步骤204至206,此处不再赘述。
本实施例中,通过遍历坐标点之后根据坐标值来获取最近点的方式,能够非常准确地找到距离一个点最近的另一个点,同时,由于坐标值能够提供准确的精度,后续步骤中,还可以根据坐标值准确地计算偏移量,从而实现了点云信息和色彩信息之间对应关系的准确更新。
需要说明的是,上述实施例三所提供的方式虽然精度较高,但是遍历三维点的方式对算力有较高要求,同时也会需要较高的计算时间,因此,为了解决此问题,本申请实施例提供另一种实现方式。
二、通过邻近算法KNN实现,为便于,以下对此两种情况进行详细说明。
请参阅图4,如图4所示,本申请实施例所提供的三维模型的纹理获取方法 的实施例四包括以下步骤。
步骤401至402可参阅上述步骤201至202,此处不再赘述。
403、通过邻近算法KNN获取与第一点距离最近的第二点。
本实施例中,KNN算法的核心思想是如果一个样本在特征空间中的k个最相邻的样本中的大多数属于某一个类别,则该样本也属于这个类别,并具有这个类别上样本的特性。该算法在确定分类决策上只依据最邻近的一个或者几个样本的类别来决定待分样本所属的类别。KNN方法在类别决策时,只与极少量的相邻样本有关。由于KNN方法主要靠周围有限的邻近的样本,而不是靠判别类域的方法来确定所属类别的,因此对于本申请实施例所涉及的点云信息处理这样的类域的交叉或重叠较多的待分样本集来说,KNN方法较其他方法更为适合。
后续步骤404至406可参阅上述步骤204至206,此处不再赘述。
本实施例中,通过KNN算法来执行最近点的查找,能够快速地在第二网络中找到与第一网络中第一点距离最近的第二点。从而节省了算力资源,提高了三维模型的纹理获取方法的实现效率。
需要说明的是,在本申请实施例所提供的方法中,在初始状态下,主要涉及以下两组对应关系。
一、第一对应关系。初始图像中,深度图与彩色图有对应关系,具体为深度图中记录有深度信息的点与彩色图中的像素点之间的对应关系,深度图中记录有深度信息的点在三维空间中投影为点云信息后,点云信息中的三维点与彩色图中的像素点之间的关系即为第一对应关系。
二、偏移量对应关系。对于目标物从不同角度拍摄的初始图像,最终得到的三维网络之间具有一定的偏差,不同三维网络之间记录目标物同一位置的三维点之间会具有一定的偏移量,例如第一网络与第二网络之间,点A与点B为记录用户鼻尖的点,但二者之间具有一定的偏移量,因此,点A和点B之间具有一个基于偏移量的对应关系。
本申请实施例所提供的方法的核心思路在于,通过偏移量对应关系去更新第一对应关系,从而提升了像素点与三维点的对齐,在三维模型中实现更精准的纹理贴图。为便于理解,以下结合附图,对通过偏移量对应关系更新第一对 应关系的过程,进行详细说明。
请参阅图5,如图5所示,本申请实施例所提供的三维模型的纹理获取方法的实施例五包括以下步骤。
步骤501至504可参阅上述201至204,可选地步骤501至504还可以是通过上述实施例三或实施例四种的方法实现的,对此本申请实施例并不进行限定。
505、根据第一对应关系获取与点云信息中的三维点对应的色彩信息中的像素点。
本实施例中,在获得初始的三维网络时,三维点与像素点之间就具备第一对应关系,此处直接获取即可。
506、通过偏移量对点云信息中的三维点进行偏移。
本实施例中,步骤506具体可以通过以下方式实现。
1、获取偏移量中的旋转矩阵R和平移矩阵T。
本实施例中,R用于表示三维网络之间的旋转操作,T用于表示三维网络之间的平移操作,从而可以通过该两个矩阵来量化三维点之间的偏移量。
2、通过执行公式D1=(R|T)×D2对三维点进行偏移。
本实施例中,D1为一个三维网络中的点云信息,D2为另一个三维网络中的信息,通过偏移量,实现两个三维网络之间点云信息的偏移。
507、获取点云信息中偏移后的三维点与像素点的对应关系为第二对应关系。
本实施例中,经过偏移后的点,就是三维网络融合为三维模型后,三维点实际处于的位置,此时基于偏移后的三维点重新确认在该位置下该三维点与像素点之间的对应关系,从而将第一对应关系更新为第二对应关系,从而实现了三维点与像素点的对齐。
需要说明的是,上述步骤503至507,可以不仅执行一次,还可以通过多次执行,反复迭代,其中,每迭代一次,对应关系就会更加准确,具体的迭代次数取决于用于所期望得到的精度。
508、根据第二对应关系获取三维模型的各个三维点所分别对应的像素点。
本实施例中,根据更新后的第二对应关系,即可获取三维模型中每个三维点所对应的像素点。
509、将各个像素点覆盖在对应的三维点上。
本实施例中,将像素点贴在对应的三维点上,即可在三维模型的表面实现纹理贴图,具体地,在实现纹理贴图过程中的具体技术细节,属于现有技术中技术方案,基于上述第二对应关系,本领域技术人员可以自主地选择纹理贴图的具体实现方式,对此本申请实施例并不进行限定。
更进一步地,本申请实施例还提供一种三维模型的纹理获取方法在实际工作中的具体实现方式,为便于理解,以下结合附图进行详细说明。
请参阅图6,如图6所示,本申请实施例所提供的三维模型的纹理获取方法的实施例六包括以下步骤。
601、获取目标物在不同拍摄角度上的至少两个初始图像。
本实施例中,请参阅图7,如图7展示的是获取的至少两个初始图像中用户侧脸的图像,分别包括彩色图701和深度图702,其中,彩色图701中记录有用于表现色彩纹理的像素点,深度图702中记录有用于表示深度信息的点,即图中的每个点距离拍摄镜头的距离。
602、根据每个初始图像中的深度信息在三维空间内反投影,得到每个初始图像对应的第一点云信息。
本实施例中,基于上述图7中的深度图702,在三维空间内投影得到第一点云信息801如图8所示。
603、获取三维点与色彩信息中的像素点的第一对应关系。
本实施例中,第一点云信息801与深度图702中的点是一一对应的,而深度图702中的点与彩色图701中的像素点又是一一对应的,因此根据该对应关系,即可获得三维点与色彩信息中的像素点的第一对应关系。
604、根据第一点云信息与第一对应关系生成三维网络。
本实施例中,所生成的三维网络即图8所示的网络,只不过该三维网络中还记录有第一对应关系和目标物的相机位姿信息。
605、根据第一相机位姿获取不同三维网络中用于记录目标物同一位置的三维点之间的偏移量。
本实施例中,偏移量的具体获取方式可参阅上述任意一种实施方式,此处不再赘述。
606、根据偏移量更新第一对应关系,得到目标物的点云信息和色彩信息的第二对应关系。
本实施例中,基于第一对应关系进行更新得到第二对应关系的方式可参阅上述任意一种实施方式,此处不再赘述。
607、根据第二对应关系获取目标物的三维模型的表面色彩纹理。
本实施例中,本步骤可参阅上述任意一种实施方式,此处不再赘述。如图9所示,图9示出了在现有技术中,点云信息与彩色图中像素点所组成的像素点网络901的对应情况,如图9可见,由于没有对点云信息和色彩信息的对应关系进行更新,在鼻尖902等位置,由于三维模型在建模过程中进行了平滑处理,产生了一定的偏移,导致像素点网络901无法与这些区域对齐。经过本申请实施例所提供的方法处理后,如图10所示,可见,对于鼻尖区域1002,像素点网络1001能够完整地覆盖,本申请实施例所提供的方法经过对应关系的更新后,像素点网络1001能够更准确地覆盖人脸的范围。
本申请实施例所提供的三维模型的纹理获取方法,包括:获取目标物基于不同拍摄角度生成的至少两个三维网络,至少两个三维网络中分别包括目标物的点云信息和色彩信息的第一对应关系,以及目标物的第一相机位姿,第一相机位姿用于表示生成不同的三维网络时目标物相对拍摄镜头的移动;根据第一相机位姿获取不同三维网络中用于记录目标物同一位置的三维点之间的偏移量;根据偏移量更新第一对应关系,得到目标物的点云信息和色彩信息的第二对应关系;根据第二对应关系获取目标物的三维模型的表面色彩纹理。通过对点云信息和色彩信息的对应关系进行更新,实现了三维模型中三维点和像素点之间更精确细微的对齐,提升了三维模型纹理贴图的效果。
上述对本申请实施例提供的方案进行了介绍。可以理解的是,计算机设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的模块及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
从硬件结构上来描述,上述方法可以由一个实体设备实现,也可以由多个实体设备共同实现,还可以是一个实体设备内的一个逻辑功能模块,本申请实施例对此不作具体限定。
例如,上述方法均可以通过图11中的计算机设备来实现。图11为本申请实施例提供的计算机设备的硬件结构示意图。该计算机设备包括至少一个处理器1101,通信线路1102,存储器1103以及至少一个通信接口1104。
The processor 1101 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application.
The communication line 1102 may include a path for transferring information between the foregoing components.
The communication interface 1104 is any transceiver-like apparatus configured to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 1103 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the communication line 1102, or the memory may be integrated with the processor.
The memory 1103 is configured to store computer-executable instructions for executing the solutions of this application, and the execution is controlled by the processor 1101. The processor 1101 is configured to execute the computer-executable instructions stored in the memory 1103, thereby implementing the methods provided in the foregoing embodiments of this application.
Optionally, the computer-executable instructions in the embodiments of this application may also be referred to as application program code, which is not specifically limited in the embodiments of this application.
In a specific implementation, in an embodiment, the processor 1101 may include one or more CPUs, for example, CPU0 and CPU1 in FIG. 11.
In a specific implementation, in an embodiment, the computer device may include multiple processors, for example, the processor 1101 and the processor 1107 in FIG. 11. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
In a specific implementation, in an embodiment, the computer device may further include an output device 1105 and an input device 1106. The output device 1105 communicates with the processor 1101 and can display information in multiple ways. For example, the output device 1105 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 1106 communicates with the processor 1101 and can receive user input in multiple ways. For example, the input device 1106 may be a mouse, a keyboard, a touchscreen device, or a sensing device.
The foregoing computer device may be a general-purpose device or a dedicated device. In a specific implementation, the computer device may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or a device with a structure similar to that in FIG. 11. The embodiments of this application do not limit the type of the computer device.
In the embodiments of this application, the device may be divided into functional units according to the foregoing method examples. For example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of this application is illustrative and is merely a logical functional division; there may be other division manners in actual implementation.
For example, in the case where the functional units are divided in an integrated manner, FIG. 12 is a schematic diagram of an apparatus for acquiring a texture of a 3D model.
As shown in FIG. 12, the apparatus for acquiring a texture of a 3D model provided in this embodiment of this application includes:
a first acquisition unit 1201, configured to acquire at least two 3D meshes of a target object generated from multiple angles, the 3D meshes including a first correspondence between point cloud information and color information of the target object and a first camera pose of the target object, the first camera pose representing the displacement of the target object relative to a reference position when the 3D meshes are generated;
a second acquisition unit 1202, configured to:
move, according to the first camera poses respectively included in the at least two 3D meshes, the at least two 3D meshes acquired by the first acquisition unit 1201 to the same angle;
acquire a second point closest to a first point in a first mesh, the second point being in a second mesh, and the first mesh and the second mesh being different 3D meshes among the at least two 3D meshes; and
acquire the deviation between the first point and the second point as the offset;
an updating unit 1203, configured to update the first correspondence according to the offset acquired by the second acquisition unit 1202, to obtain a second correspondence between the point cloud information and the color information of the target object; and
a third acquisition unit 1204, configured to acquire the surface color texture of the 3D model of the target object according to the second correspondence obtained by the updating unit 1203.
Optionally, the second acquisition unit 1202 is further configured to:
traverse all points in the second mesh;
acquire the three-dimensional coordinates of all the points in the second mesh;
calculate, according to the three-dimensional coordinates, the distance from each point in the second mesh to the first point; and
determine the point in the second mesh that is closest to the first point as the second point.
Optionally, the second acquisition unit 1202 is further configured to:
acquire, by using KNN, the second point closest to the first point.
Optionally, the updating unit 1203 is further configured to:
acquire, according to the first correspondence, the pixels in the color information that correspond to the 3D points in the point cloud information;
shift the 3D points in the point cloud information by the offset; and
acquire, as the second correspondence, the correspondence between the shifted 3D points in the point cloud information and the pixels.
Optionally, the offset includes a rotation matrix R representing a rotation operation and a translation matrix T representing a translation operation, and the updating unit 1203 is further configured to:
execute the following formula: D1 = (R|T) × D2,
where D1 is the point cloud information in one 3D mesh and D2 is the information in the other 3D mesh.
Optionally, the third acquisition unit 1204 is further configured to:
acquire, according to the second correspondence, the pixel corresponding to each 3D point of the 3D model; and
overlay each pixel on the corresponding 3D point, to implement texture mapping on the surface of the 3D model.
Optionally, the first acquisition unit 1201 is further configured to:
acquire at least two initial images of the target object at different shooting angles, the at least two initial images each recording depth information of the target object, the depth information recording the distance between each point of the target object and the reference position, and the reference position being the position of the camera lens that photographs the target object;
perform back-projection into three-dimensional space according to the depth information in each initial image, to obtain first point cloud information corresponding to each initial image, each point in the first point cloud information being a 3D point recording the target object;
acquire the first correspondence between the 3D points and the pixels in the color information; and
generate the 3D mesh according to the first point cloud information and the first correspondence.
In addition, an embodiment of this application further provides a storage medium configured to store a computer program, the computer program being used to perform the methods provided in the foregoing embodiments.
An embodiment of this application further provides a computer program product including instructions that, when run on a computer, cause the computer to perform the methods provided in the foregoing embodiments.
For a detailed description of the programs stored in the computer storage medium provided in the embodiments of this application, refer to the foregoing embodiments; details are not repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on what distinguishes it from the other embodiments, and for identical or similar parts the embodiments may be cross-referenced. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and for relevant parts refer to the description of the method.
A person skilled in the art may further appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of this application.
The steps of the methods or algorithms described in the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the disclosed embodiments enables a person skilled in the art to implement or use this application. Various modifications to these embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the core idea or scope of this application. Therefore, this application is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

  1. A method for acquiring a texture of a three-dimensional (3D) model, the method being performed by a computer device and comprising:
    acquiring at least two 3D meshes of a target object generated from multiple angles, the 3D meshes comprising a first correspondence between point cloud information and color information of the target object and a first camera pose of the target object, the first camera pose representing a displacement of the target object relative to a reference position when the 3D meshes are generated;
    moving, according to the first camera poses respectively comprised in the at least two 3D meshes, the at least two 3D meshes to a same angle;
    acquiring a second point closest to a first point in a first mesh, the second point being in a second mesh, and the first mesh and the second mesh being different 3D meshes among the at least two 3D meshes;
    acquiring a deviation between the first point and the second point as an offset;
    updating the first correspondence according to the offset, to obtain a second correspondence between the point cloud information and the color information of the target object; and
    acquiring a surface color texture of the 3D model of the target object according to the second correspondence.
  2. The method according to claim 1, wherein the acquiring a second point closest to a first point in a first mesh comprises:
    traversing all points in the second mesh;
    acquiring three-dimensional coordinates of all the points in the second mesh;
    calculating, according to the three-dimensional coordinates, a distance from each point in the second mesh to the first point; and
    determining a point in the second mesh that is closest to the first point as the second point.
  3. The method according to claim 1, wherein the acquiring a second point closest to a first point in a first mesh comprises:
    acquiring, by using a k-nearest neighbors (KNN) algorithm, the second point closest to the first point.
  4. The method according to claim 1, wherein the updating the first correspondence according to the offset, to obtain a second correspondence between the point cloud information and the color information of the target object comprises:
    acquiring, according to the first correspondence, pixels in the color information that correspond to 3D points in the point cloud information;
    shifting the 3D points in the point cloud information by the offset; and
    acquiring, as the second correspondence, a correspondence between the shifted 3D points in the point cloud information and the pixels.
  5. The method according to claim 4, wherein the offset comprises a rotation matrix R representing a rotation operation and a translation matrix T representing a translation operation, and the shifting the 3D points in the point cloud information by the offset comprises:
    executing the following formula: D1 = (R|T) × D2,
    wherein D1 is point cloud information in one 3D mesh and D2 is information in another 3D mesh.
  6. The method according to claim 4, wherein the acquiring the surface color texture of the 3D model of the target object according to the second correspondence comprises:
    acquiring, according to the second correspondence, the pixel corresponding to each 3D point of the 3D model; and
    overlaying each pixel on the corresponding 3D point, to implement texture mapping on a surface of the 3D model.
  7. The method according to any one of claims 1 to 6, wherein the acquiring at least two 3D meshes of a target object generated from multiple angles comprises:
    acquiring at least two initial images of the target object at multiple shooting angles, the at least two initial images each recording depth information of the target object, the depth information recording a distance between each point of the target object and the reference position, and the reference position being a position of a camera lens that photographs the target object;
    performing back-projection into three-dimensional space according to the depth information in each initial image, to obtain first point cloud information corresponding to each initial image, each point in the first point cloud information being a 3D point recording the target object;
    acquiring the first correspondence between the 3D points and pixels in the color information; and
    generating the 3D mesh according to the first point cloud information and the first correspondence.
  8. An apparatus for acquiring a texture of a three-dimensional (3D) model, comprising:
    a first acquisition unit, configured to acquire at least two 3D meshes of a target object generated from multiple angles, the 3D meshes comprising a first correspondence between point cloud information and color information of the target object and a first camera pose of the target object, the first camera pose representing a displacement of the target object relative to a reference position when the 3D meshes are generated;
    a second acquisition unit, configured to:
    move, according to the first camera poses respectively comprised in the at least two 3D meshes, the at least two 3D meshes acquired by the first acquisition unit to a same angle;
    acquire a second point closest to a first point in a first mesh, the second point being in a second mesh, and the first mesh and the second mesh being different 3D meshes among the at least two 3D meshes; and
    acquire a deviation between the first point and the second point as the offset;
    an updating unit, configured to update the first correspondence according to the offset acquired by the second acquisition unit, to obtain a second correspondence between the point cloud information and the color information of the target object; and
    a third acquisition unit, configured to acquire a surface color texture of the 3D model of the target object according to the second correspondence obtained by the updating unit.
  9. The apparatus according to claim 8, wherein the second acquisition unit is further configured to:
    traverse all points in the second mesh;
    acquire three-dimensional coordinates of all the points in the second mesh;
    calculate, according to the three-dimensional coordinates, a distance from each point in the second mesh to the first point; and
    determine a point in the second mesh that is closest to the first point as the second point.
  10. The apparatus according to claim 8, wherein the second acquisition unit is further configured to:
    acquire, by using a k-nearest neighbors (KNN) algorithm, the second point closest to the first point.
  11. The apparatus according to claim 8, wherein the updating unit is further configured to:
    acquire, according to the first correspondence, pixels in the color information that correspond to 3D points in the point cloud information;
    shift the 3D points in the point cloud information by the offset; and
    acquire, as the second correspondence, a correspondence between the shifted 3D points in the point cloud information and the pixels.
  12. A computer device, comprising: an interaction apparatus, an input/output (I/O) interface, a processor, and a memory, the memory storing program instructions;
    the interaction apparatus being configured to acquire an operation instruction input by a user; and
    the processor being configured to execute the program instructions stored in the memory to perform the method according to any one of claims 1 to 7.
  13. A storage medium, configured to store a computer program, the computer program being used to perform the method according to any one of claims 1 to 7.
  14. A computer program product comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 7.
PCT/CN2020/120797 2020-01-10 2020-10-14 Method for acquiring texture of three-dimensional model and related apparatus WO2021139293A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2022516349A JP7446414B2 (ja) 2020-01-10 2020-10-14 Method for acquiring texture of 3D model and related apparatus
EP20912594.7A EP3996042A4 (en) 2020-01-10 2020-10-14 METHOD FOR OBTAINING A THREE-DIMENSIONAL MODEL TEXTURE AND RELEVANT DEVICE
US17/579,072 US11989894B2 (en) 2020-01-10 2022-01-19 Method for acquiring texture of 3D model and related apparatus
US18/609,992 US20240221193A1 (en) 2020-01-10 2024-03-19 Method for acquiring texture of 3d model and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010027225.1A CN110895823B (zh) 2020-01-10 2020-01-10 Method, apparatus, device, and medium for acquiring texture of a three-dimensional model
CN202010027225.1 2020-01-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/579,072 Continuation US11989894B2 (en) 2020-01-10 2022-01-19 Method for acquiring texture of 3D model and related apparatus

Publications (1)

Publication Number Publication Date
WO2021139293A1 true WO2021139293A1 (zh) 2021-07-15

Family

ID=69787714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120797 WO2021139293A1 (zh) 2020-01-10 2020-10-14 Method for acquiring texture of three-dimensional model and related apparatus

Country Status (5)

Country Link
US (2) US11989894B2 (zh)
EP (1) EP3996042A4 (zh)
JP (1) JP7446414B2 (zh)
CN (1) CN110895823B (zh)
WO (1) WO2021139293A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110895823B (zh) 2020-01-10 2020-06-05 腾讯科技(深圳)有限公司 一种三维模型的纹理获取方法、装置、设备及介质
CN111753739B (zh) * 2020-06-26 2023-10-31 北京百度网讯科技有限公司 物体检测方法、装置、设备以及存储介质
CN113487729A (zh) * 2021-07-30 2021-10-08 上海联泰科技股份有限公司 三维模型的表面数据处理方法、系统及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180025540A1 (en) * 2016-07-19 2018-01-25 Usens, Inc. Methods and systems for 3d contour recognition and 3d mesh generation
CN109242961A (zh) * 2018-09-26 2019-01-18 北京旷视科技有限公司 一种脸部建模方法、装置、电子设备和计算机可读介质
CN110070598A (zh) * 2018-01-22 2019-07-30 宁波盈芯信息科技有限公司 用于3d扫描重建的移动终端及其进行3d扫描重建方法
CN110895823A (zh) * 2020-01-10 2020-03-20 腾讯科技(深圳)有限公司 一种三维模型的纹理获取方法、装置、设备及介质

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015153405A (ja) * 2014-02-19 2015-08-24 株式会社リコー 情報処理装置、情報処理プログラム、および情報処理システム
GB2538751A (en) * 2015-05-27 2016-11-30 Imp College Of Science Tech And Medicine Modelling a three-dimensional space
JP6397386B2 (ja) * 2015-08-24 2018-09-26 日本電信電話株式会社 領域分割処理装置、方法、及びプログラム
CN109325990B (zh) * 2017-07-27 2022-11-29 腾讯科技(深圳)有限公司 图像处理方法及图像处理装置、存储介质
CN109979013B (zh) * 2017-12-27 2021-03-02 Tcl科技集团股份有限公司 三维人脸贴图方法及终端设备
TWI634515B (zh) * 2018-01-25 2018-09-01 廣達電腦股份有限公司 三維影像處理之裝置及方法
CN109409335B (zh) 2018-11-30 2023-01-20 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机可读介质及电子设备
CN109949412B (zh) * 2019-03-26 2021-03-02 腾讯科技(深圳)有限公司 一种三维对象重建方法和装置
CN110119679B (zh) * 2019-04-02 2021-12-10 北京百度网讯科技有限公司 物体三维信息估计方法及装置、计算机设备、存储介质
CN109978931B (zh) * 2019-04-04 2021-12-31 中科海微(北京)科技有限公司 三维场景重建方法及设备、存储介质
CN110189400B (zh) * 2019-05-20 2023-04-14 深圳大学 一种三维重建方法、三维重建系统、移动终端及存储装置
CN110349251B (zh) * 2019-06-28 2020-06-16 深圳数位传媒科技有限公司 一种基于双目相机的三维重建方法及装置
CN110570368B (zh) * 2019-08-21 2020-09-25 贝壳技术有限公司 深度图像的畸变矫正方法、装置、电子设备及存储介质
CN110599546A (zh) * 2019-08-28 2019-12-20 贝壳技术有限公司 一种获取三维空间数据的方法、系统、装置和存储介质
CN111325823B (zh) 2020-02-05 2022-09-27 腾讯科技(深圳)有限公司 人脸纹理图像的获取方法、装置、设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180025540A1 (en) * 2016-07-19 2018-01-25 Usens, Inc. Methods and systems for 3d contour recognition and 3d mesh generation
CN110070598A (zh) * 2018-01-22 2019-07-30 宁波盈芯信息科技有限公司 用于3d扫描重建的移动终端及其进行3d扫描重建方法
CN109242961A (zh) * 2018-09-26 2019-01-18 北京旷视科技有限公司 一种脸部建模方法、装置、电子设备和计算机可读介质
CN110895823A (zh) * 2020-01-10 2020-03-20 腾讯科技(深圳)有限公司 一种三维模型的纹理获取方法、装置、设备及介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3996042A4 *

Also Published As

Publication number Publication date
JP7446414B2 (ja) 2024-03-08
US20240221193A1 (en) 2024-07-04
EP3996042A1 (en) 2022-05-11
US11989894B2 (en) 2024-05-21
US20220138974A1 (en) 2022-05-05
CN110895823B (zh) 2020-06-05
JP2022548608A (ja) 2022-11-21
EP3996042A4 (en) 2022-11-02
CN110895823A (zh) 2020-03-20

Similar Documents

Publication Publication Date Title
WO2021139293A1 (zh) 一种三维模型的纹理获取方法和相关装置
WO2021164150A1 (zh) 一种结合光线跟踪的Web端实时混合渲染方法、装置及计算机设备
US9117267B2 (en) Systems and methods for marking images for three-dimensional image generation
Yue et al. WireDraw: 3D Wire Sculpturing Guided with Mixed Reality.
US11568601B2 (en) Real-time hand modeling and tracking using convolution models
US8436853B1 (en) Methods and systems for acquiring and ranking image sets
US11551388B2 (en) Image modification using detected symmetry
US11727632B2 (en) Shader binding management in ray tracing
WO2023284713A1 (zh) 一种三维动态跟踪方法、装置、电子设备和存储介质
WO2022142783A1 (zh) 一种图像处理方法以及相关设备
CN113793387A (zh) 单目散斑结构光系统的标定方法、装置及终端
CN108655571A (zh) 一种数控激光雕刻机、控制系统及控制方法、计算机
Zhang et al. Point cloud computing algorithm on object surface based on virtual reality technology
WO2023010565A1 (zh) 单目散斑结构光系统的标定方法、装置及终端
WO2024125350A1 (zh) 图像处理方法、装置、设备及介质
CN113705379A (zh) 一种手势估计方法、装置、存储介质及设备
Li et al. Fast 3D texture-less object tracking with geometric contour and local region
CN115690359B (zh) 一种点云处理方法、装置、电子设备及存储介质
Kumara et al. Real-time 3D human objects rendering based on multiple camera details
CN118149840B (zh) 车道线生成方法、装置、电子设备、存储介质和程序产品
CN114781642B (zh) 一种跨媒体对应知识的生成方法和装置
Fan et al. Geometry Calibration Control Method with 3D Sensors of Large Screen Interactive Projection Imaging System
PIÑONES ZULETA Overcoming mixed reality adoption barriers in design through a computer-vision-based approach for content authoring
Putra et al. DGONN: Depthwise Dynamic Graph Overparameterized Neural Network for 3D Point Cloud Object Recognition
CN116152464A (zh) 三维人脸模型的生成方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20912594

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020912594

Country of ref document: EP

Effective date: 20220207

ENP Entry into the national phase

Ref document number: 2022516349

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE