CN117974868A - Texture mapping method and device for a mesh model, and three-dimensional scanning system

Info

Publication number: CN117974868A
Application number: CN202311845412.0A
Authority: CN (China)
Prior art keywords: texture, texture image, image, image set, transformation matrix
Legal status: Pending
Inventors: 陈尚俭, 张立旦, 王江峰, 郑俊
Applicant/Assignee: Scantech Hangzhou Co Ltd
Original language: Chinese (zh)
Classification: Image Generation


Abstract

The application relates to a texture mapping method and device for a mesh model, and a three-dimensional scanning system. The texture mapping method for the mesh model comprises the following steps: acquiring a first mesh model of a target object; acquiring a first texture image set and a second texture image set of the target object, the first texture image set and the second texture image set being sets of texture images acquired by different devices; performing image matching between the first texture image set and the second texture image set to determine a first transformation matrix set, the first transformation matrix set being the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model; and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set. According to the application, absolute orientation is achieved without manual operation, and the efficiency of texture mapping of the mesh model is improved.

Description

Texture mapping method and device for a mesh model, and three-dimensional scanning system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a texture mapping method and apparatus for a mesh model, and a three-dimensional scanning system.
Background
In the prior art, a third-party camera is used to capture a series of third-party images of an object to be mapped; sparse reconstruction is performed on the third-party images, and absolute orientation is then carried out by manual alignment, rotation, and similar operations, so as to obtain a transformation matrix between the third-party images and the mesh model; mapping is performed according to that transformation matrix, thereby achieving texture mapping of the mesh model. Absolute orientation here refers to computing the transformation matrix between the third-party images and the mesh model.
Current absolute orientation methods require manual participation to align the object or its mesh model through certain operations before the transformation matrix between the third-party images and the mesh model can be computed, which makes texture mapping of the mesh model inefficient.
Disclosure of Invention
In this embodiment, a texture mapping method and apparatus for a mesh model, and a three-dimensional scanning system are provided to solve the problem of the low efficiency of mesh-model texture mapping in the prior art.
In a first aspect, a texture mapping method for a mesh model is provided in this embodiment, the method comprising:
acquiring a first mesh model of a target object;
acquiring a first texture image set and a second texture image set of the target object; the first texture image set and the second texture image set are sets of texture images acquired by different devices;
performing image matching between the first texture image set and the second texture image set to determine a first transformation matrix set; the first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model;
and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set.
In some of these embodiments, the performing image matching on the first texture image set and the second texture image set and determining a first transformation matrix set comprises:
performing image matching between the first texture image set and the second texture image set, and determining a first target image in the second texture image set;
acquiring a first transformation matrix between the first target image and the first mesh model;
and determining the first transformation matrix set according to the first transformation matrix.
In some of these embodiments, the acquiring a first transformation matrix between the first target image and the first mesh model comprises:
determining a second target image in the first texture image set; the second target image is an image matched with the first target image;
acquiring a second transformation matrix between the second target image and the first mesh model;
and determining the first transformation matrix according to the second transformation matrix.
In some of these embodiments, the determining the first transformation matrix according to the second transformation matrix comprises:
generating a matching point set according to the feature points of the first target image and the feature points of the second target image;
determining a three-dimensional coordinate point set of the matching point set in the world coordinate system according to the matching point set and the second transformation matrix;
and determining the first transformation matrix according to the three-dimensional coordinate point set and the feature points of the first target image.
In some of these embodiments, the determining the first transformation matrix according to the second transformation matrix comprises:
generating a matching point set according to the feature points of the first target image and the feature points of the second target image;
determining a three-dimensional coordinate point set of the matching point set in the world coordinate system according to the matching point set, the second transformation matrix, and first camera parameters; the first camera parameters are camera parameters of the device that acquired the first texture image set;
and determining the first transformation matrix and second camera parameters according to the three-dimensional coordinate point set and the feature points of the first target image; the second camera parameters are camera parameters of the device that acquired the second texture image set.
In some embodiments, the mesh model to be mapped is the first mesh model, and the performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set comprises:
performing texture mapping on the first mesh model according to the first transformation matrix set, the second texture image set, and the second camera parameters.
In some of these embodiments, the determining the first transformation matrix set according to the first transformation matrix comprises:
performing sparse reconstruction on the second texture images in the second texture image set with the first target image as a reference, to obtain the first transformation matrix set.
In some embodiments, the mesh model to be mapped is a second mesh model, and the performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set comprises:
acquiring a second mesh model of the target object;
acquiring a third transformation matrix between the first mesh model and the second mesh model;
and performing texture mapping on the second mesh model according to the first transformation matrix set, the second texture image set, and the third transformation matrix.
In some of these embodiments, the method further comprises:
determining the range of the first mesh model according to the geometric features of the target object;
and determining the first texture image set according to the range of the first mesh model.
In a second aspect, a texture mapping apparatus for a mesh model is provided in this embodiment, the apparatus comprising:
a first acquisition module, configured to acquire a first mesh model of a target object;
a second acquisition module, configured to acquire a first texture image set and a second texture image set of the target object; the first texture image set and the second texture image set are sets of texture images acquired by different devices;
a determining module, configured to perform image matching between the first texture image set and the second texture image set and determine a first transformation matrix set; the first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model;
and a processing module, configured to perform texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set.
In a third aspect, a three-dimensional scanning system is provided in this embodiment, the system comprising: a first three-dimensional scanning device, an image acquisition device, and a processor;
the first three-dimensional scanning device is configured to scan the target object to obtain a first mesh model, a first texture image set, and a second transformation matrix set between the first texture images in the first texture image set and the first mesh model;
the image acquisition device is configured to acquire the second texture image set;
the processor is configured to run a computer program to perform the texture mapping method of the mesh model of the first aspect.
In a fourth aspect, a three-dimensional scanning system is provided in this embodiment, the system comprising: a first three-dimensional scanning device, a second three-dimensional scanning device, an image acquisition device, and a processor;
the first three-dimensional scanning device is configured to scan the target object to obtain a first mesh model, a first texture image set, and a second transformation matrix set between the first texture images in the first texture image set and the first mesh model;
the second three-dimensional scanning device is configured to scan the target object to obtain a second mesh model;
the image acquisition device is configured to acquire the second texture image set;
the processor is configured to run a computer program to perform the texture mapping method of the mesh model of the first aspect.
Compared with the prior art, the texture mapping method and device for a mesh model and the three-dimensional scanning system provided by this embodiment perform image matching between the first texture image set and the second texture image set and automatically compute the first transformation matrix set between the second texture image set and the first mesh model according to the matching result. Absolute orientation is thus carried out without methods such as manual alignment and rotation; the absolute orientation is automated, and the efficiency of texture mapping of the mesh model is improved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become apparent from the description and the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a block diagram of the hardware architecture of a terminal that performs a texture mapping method of a mesh model according to an embodiment of the present application;
FIG. 2 is a flow chart of a texture mapping method of a mesh model according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for determining a first transformation matrix set according to an embodiment of the present application;
FIG. 4 is a flow chart of another texture mapping method of a mesh model according to an embodiment of the present application;
FIG. 5 is a flow chart of another texture mapping method of a mesh model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a three-dimensional scanning system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another three-dimensional scanning system according to an embodiment of the present application;
FIG. 8 is a block diagram of a texture mapping apparatus for a mesh model according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated below with reference to the accompanying drawings and embodiments, so that its objects, technical solutions, and advantages are understood more clearly.
Unless defined otherwise, technical or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and similar terms in this application do not limit quantity and may denote the singular or the plural. The terms "comprising," "including," "having," and any variations thereof, as used herein, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship of associated objects, meaning that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. Typically, the character "/" indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like in this application merely distinguish similar objects and do not represent a particular ordering of the objects.
The method embodiments provided below may be executed in a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware architecture of a terminal that performs the texture mapping method of a mesh model according to an embodiment of the present application. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1) and a memory 104 for storing data, where the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA). The terminal may also include a transmission device 106 for communication functions and an input/output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the terminal; for example, the terminal may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a texture mapping method of a mesh model in the present embodiment, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a texture mapping method for a mesh model is provided. Fig. 2 is a flowchart of a texture mapping method of a mesh model according to an embodiment of the present application; as shown in fig. 2, the flow includes the following steps:
step S210, a first mesh model of the target object is acquired.
Specifically, the processor obtains a first mesh model of the target object. The target object is here the object to be texture mapped.
Step S220, a first texture image set and a second texture image set of the target object are acquired; the first texture image set and the second texture image set are sets of texture images acquired by different devices.
Specifically, the processor acquires a first texture image set and a second texture image set of the target object. The two sets are sets of texture images of the target object acquired by different devices, and the first mesh model and the first texture image set are data acquired by the same device. The second texture image set is the set of texture images to be used for texture mapping, and the image quality of the texture images in the second texture image set is higher than that of the texture images in the first texture image set: for example, their definition may be higher, or their resolution may be higher. When the mesh model to be mapped is the first mesh model, after the device has acquired the first mesh model and the first texture image set, the first mesh model may be mapped using the first texture image set, or it may be left unmapped for the time being and mapped later using the second texture image set.
Step S230, image matching is performed between the first texture image set and the second texture image set to determine a first transformation matrix set; the first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model.
Specifically, the processor performs image matching between the texture images in the first texture image set and the texture images in the second texture image set, and determines the first transformation matrix set according to the matching result. The first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set, expressed in a camera coordinate system, and the first mesh model of the target object in the world coordinate system. The camera coordinate system here may be the camera coordinate system of the device that acquired the second texture image set.
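In standard multi-view geometry notation (an illustration only; these symbols are not defined by the patent), such a first transformation matrix is the rigid world-to-camera transform that, together with the intrinsics of the second device, projects points of the first mesh model into a second texture image:

$$ \mathbf{p}_c = \mathbf{T}_1\,\mathbf{p}_w, \qquad \mathbf{T}_1 = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{bmatrix}, \qquad s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K}_2\,[\,\mathbf{R} \mid \mathbf{t}\,]\,\mathbf{p}_w $$

where $\mathbf{p}_w$ is a homogeneous point of the first mesh model in world coordinates, $\mathbf{p}_c$ the same point in the camera frame of a second texture image, $(u, v)$ its pixel coordinates, and $\mathbf{K}_2$ the intrinsic matrix of the device that acquired the second texture image set.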
Step S240, texture mapping is performed on the mesh model to be mapped according to the first transformation matrix set and the second texture image set.
Specifically, the processor performs texture mapping on the mesh model to be mapped according to the second texture image set and the first transformation matrix set determined in step S230. The mesh model to be mapped may be the first mesh model, or a second mesh model acquired by another device different from the one that acquired the first texture image set.
In this embodiment, image matching is performed between the first texture image set and the second texture image set, and the first transformation matrix set between the second texture image set and the first mesh model is automatically computed according to the matching result. Absolute orientation is therefore carried out without methods such as manual alignment and rotation; the absolute orientation is automated, i.e. achieved without manual operation, and the efficiency of texture mapping of the mesh model is improved.
In some embodiments, step S230, performing image matching on the first texture image set and the second texture image set and determining a first transformation matrix set, includes the following steps, as shown in fig. 3:
Step S231, image matching is performed between the first texture image set and the second texture image set, and the first target images in the second texture image set are determined.
Specifically, the processor performs image matching between the first texture image set and the second texture image set, and determines one or more first target images in the second texture image set. A first target image is a second texture image that is successfully matched with a first texture image in the first texture image set. A successful match may mean that the similarity between the second texture image and a first texture image is greater than a threshold, or that the number of feature points matched between them is greater than a threshold. The number of first target images is determined by the matching result, as long as at least one first target image is obtained; the first target images may match different first texture images or the same first texture image.
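As a minimal illustration of this matching step (OpenCV assumed; ORB features, the Lowe ratio test, and the match-count threshold are assumptions, since the patent only requires "greater than a threshold"):

```python
import cv2

def find_first_targets(second_set, first_set, min_good_matches=50):
    # Return indices of second-set images that successfully match some
    # first-set image, i.e. the first target images. Inputs are grayscale
    # numpy arrays; threshold values are illustrative, not the patent's.
    orb = cv2.ORB_create(nfeatures=4000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    first_descs = [orb.detectAndCompute(img, None)[1] for img in first_set]
    targets = []
    for i, img in enumerate(second_set):
        _, des = orb.detectAndCompute(img, None)
        for des_first in first_descs:
            pairs = matcher.knnMatch(des, des_first, k=2)
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            if len(good) >= min_good_matches:
                targets.append(i)  # second-set image i is a first target image
                break
    return targets
```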
Step S232, first transformation matrices between the first target images and the first mesh model are acquired.
Specifically, the processor obtains one or more first transformation matrices between the one or more first target images and the first mesh model; each first target image corresponds to one first transformation matrix.
Step S233, the first transformation matrix set is determined according to the first transformation matrices.
Specifically, the processor determines the first transformation matrix set from the one or more first transformation matrices.
In this embodiment, image matching is performed between the first texture image set and the second texture image set; the first target images of the second texture image set that match texture images in the first texture image set are determined; the first transformation matrices between the first target images and the first mesh model are acquired; and the first transformation matrix set is determined according to those first transformation matrices. Since the first transformation matrix set for the second texture image set is obtained without having to match every image successfully, both matching efficiency and the matching success rate are improved.
In some of these embodiments, acquiring a first transformation matrix between the first target image and the first mesh model includes: determining a second target image in the first texture image set, the second target image being an image matched with the first target image; acquiring a second transformation matrix between the second target image and the first mesh model; and determining the first transformation matrix according to the second transformation matrix.
In some of these embodiments, determining the first transformation matrix according to the second transformation matrix includes: generating a matching point set according to the feature points of the first target image and the feature points of the second target image; determining a three-dimensional coordinate point set of the matching point set in the world coordinate system according to the matching point set and the second transformation matrix; and determining the first transformation matrix according to the three-dimensional coordinate point set and the feature points of the first target image.
In some of these embodiments, determining the first transformation matrix according to the second transformation matrix includes: generating a matching point set according to the feature points of the first target image and the feature points of the second target image; determining a three-dimensional coordinate point set of the matching point set in the world coordinate system according to the matching point set, the second transformation matrix, and the first camera parameters, the first camera parameters being the camera parameters of the device that acquired the first texture image set; and determining the first transformation matrix and the second camera parameters according to the three-dimensional coordinate point set and the feature points of the first target image, the second camera parameters being the camera parameters of the device that acquired the second texture image set.
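One plausible realization of this joint estimate is single-view calibration on the 3D-2D correspondences, which returns the second camera parameters together with the view's pose. This is a sketch, not the patent's stated algorithm (OpenCV assumed; it is only well-conditioned with enough well-spread, non-coplanar points, and all names are illustrative):

```python
import cv2
import numpy as np

def pose_and_intrinsics(points3d, points2d, image_size, k_guess):
    # points3d: Nx3 world coordinates of the matched points (from ray tracing);
    # points2d: Nx2 pixel coordinates of the same points in the first target image;
    # image_size: (width, height) of the first target image;
    # k_guess: 3x3 initial guess for the second device's intrinsics (assumption).
    flags = cv2.CALIB_USE_INTRINSIC_GUESS  # refine k_guess instead of starting cold
    rms, k2, dist, rvecs, tvecs = cv2.calibrateCamera(
        [points3d.astype(np.float32)], [points2d.astype(np.float32)],
        image_size, k_guess, None, flags=flags)
    rt1 = np.eye(4)  # first transformation matrix (world -> camera convention)
    rt1[:3, :3] = cv2.Rodrigues(rvecs[0])[0]
    rt1[:3, 3] = tvecs[0].ravel()
    return rt1, k2, dist
```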
In some of these embodiments, determining the first transformation matrix set according to the first transformation matrix includes: performing sparse reconstruction on the second texture images in the second texture image set with the first target images as references, to obtain the first transformation matrix set.
Specifically, the processor performs sparse reconstruction on the second texture images in the second texture image set, with the one or more first target images as references and using the one or more first transformation matrices, so as to obtain the first transformation matrices of all second texture images in the second texture image set, i.e. the first transformation matrix set.
In some embodiments, the mesh model to be mapped is the first mesh model, and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set includes: performing texture mapping on the first mesh model according to the first transformation matrix set, the second texture image set, and the second camera parameters.
Specifically, when the mesh model to be mapped is the first mesh model, the processor performs texture mapping on the first mesh model according to the first transformation matrix set, the second texture image set, and the second camera parameters.
In some embodiments, the mesh model to be mapped is a second mesh model, and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set includes: acquiring a second mesh model of the target object; acquiring a third transformation matrix between the first mesh model and the second mesh model; and performing texture mapping on the second mesh model according to the first transformation matrix set, the second texture image set, and the third transformation matrix.
Specifically, when the mesh model to be mapped is the second mesh model, the processor acquires the second mesh model of the target object, computes the third transformation matrix between the first mesh model and the second mesh model, and performs texture mapping on the second mesh model according to the first transformation matrix set, the second texture image set, and the third transformation matrix. The second mesh model here is a model acquired by a device different from the one that acquired the first texture image set, and it generally has better local detail than the first mesh model.
In this embodiment, the mapping quality is improved by acquiring a second mesh model with better local detail.
In some of these embodiments, the texture mapping method of the mesh model further includes: determining the range of the first mesh model according to the geometric features of the target object; and determining the first texture image set according to the range of the first mesh model.
Specifically, when the mesh model to be mapped is the second mesh model, the processor determines the range of the first mesh model according to the geometric features of the target object and determines the first texture image set according to that range. The range of the first mesh model may be a region of the target object that is rich in geometric features; that is, the first mesh model is acquired only for the geometrically rich region of the target object, and the texture images corresponding to that region are acquired as the first texture image set, without acquiring a mesh model and a first texture image set for all regions of the target object.
In this embodiment, when the mesh model to be mapped is the second mesh model, a local first mesh model of the target object and the corresponding local first texture image set are acquired, and data matching is performed on this first texture image set to complete the mapping of the second mesh model, which improves image-matching efficiency.
In this embodiment, a texture mapping method of a mesh model is also provided, and fig. 4 is a flowchart of another texture mapping method of a mesh model according to an embodiment of the present application, as shown in fig. 4, where the flowchart includes the following steps:
In step S410, a first mesh model of the target object, a first texture image set, and a second transformation matrix set of the first texture images in the first texture image set relative to the first mesh model are acquired.
Specifically, a first three-dimensional scanning device may be used to scan the target object, and a mesh project file may be exported; the mesh project includes the first mesh model of the target object, the first texture image set, and the second transformation matrix set of each first texture image in the first texture image set relative to the first mesh model. The first texture image set also carries the camera parameters of the first texture images.
Step S420, a second texture image set of the target object is acquired.
Specifically, an image acquisition device such as a third-party camera may be used to capture a new set of texture images of the object surface corresponding to the first mesh model, i.e. the second texture image set.
In step S430, texture feature recognition and matching are performed between the images in the second texture image set and the images in the first texture image set.
Specifically, texture feature recognition and matching are performed between the images in the second texture image set and the images in the first texture image set, and the pose matrices of the images in the second texture image set relative to the corresponding images in the first texture image set are computed. Assuming that N images in the second texture image set are successfully matched with images in the first texture image set, these N images are taken as the first target images, and the pose matrix set RT0 of the N first target images relative to the first mesh model is computed; RT0 contains the N first transformation matrices corresponding to the N first target images.
More specifically, this step does not require every image in the second texture image set to be matched successfully, nor does it directly complete the solution of the final transformation matrix set (the first transformation matrix set). The step only needs to ensure that at least one image of the second texture image set is matched and computed successfully; the other images of the second texture image set are then solved with the successfully computed image as a reference. Taking a second texture image IMAGE_0 as an example, feature points are extracted and matched by traversing the images in the first texture image set; the feature extraction and matching method may be SIFT, ORB, SURF, or the like. The best matching relation between the second texture image IMAGE_0 and a first texture image image_0 in the first texture image set (for example, the one with the largest number of matched point pairs) is obtained, yielding the matched point-pair set CORR, i.e. the set of matched feature-point pairs between IMAGE_0 and image_0. Ray tracing is then performed on the first texture image image_0 according to the image feature points in CORR, the pose matrix RT_0 of image_0 relative to the first mesh model (i.e. the second transformation matrix), and the camera parameters of image_0, obtaining the spatial intersection points of those feature points with the first mesh model, i.e. the three-dimensional coordinates corresponding to the CORR feature points. Since the three-dimensional coordinates corresponding to the CORR feature points are known, the two-dimensional points of IMAGE_0 (from CORR) now have corresponding three-dimensional coordinates, and the camera parameters of IMAGE_0 and its pose matrix relative to the first mesh model, i.e. the first transformation matrix, can be solved with a PnP algorithm.
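Putting these steps together, a hedged Python sketch (OpenCV for features and PnP, trimesh for the ray tracing; the world-to-camera pose convention, the variable names, and the ratio-test threshold are assumptions, not the patent's specification) could look like this:

```python
import cv2
import numpy as np
import trimesh

def first_transform(image_0_2nd, image_0_1st, mesh, rt_0, k1):
    # image_0_2nd: a second-set texture image (IMAGE_0); image_0_1st: its best-
    # matching first-set image (image_0); mesh: trimesh.Trimesh of the first
    # mesh model in the world frame; rt_0: 4x4 pose of image_0 (assumed
    # world -> camera); k1: 3x3 intrinsics of the first device.
    sift = cv2.SIFT_create()
    kp2, des2 = sift.detectAndCompute(image_0_2nd, None)
    kp1, des1 = sift.detectAndCompute(image_0_1st, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    pts_2nd = np.float32([kp2[m.queryIdx].pt for m in good])  # CORR, second-image side
    pts_1st = np.float32([kp1[m.trainIdx].pt for m in good])  # CORR, first-image side

    # Ray-trace the first-image feature points onto the mesh: each pixel ray,
    # expressed in the world frame via rt_0, intersects the first mesh model at
    # the 3D coordinates corresponding to the CORR feature points.
    cam_to_world = np.linalg.inv(rt_0)
    origins = np.tile(cam_to_world[:3, 3], (len(pts_1st), 1))
    rays_cam = np.linalg.inv(k1) @ np.vstack([pts_1st.T, np.ones(len(pts_1st))])
    dirs = (cam_to_world[:3, :3] @ rays_cam).T
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    locs, ray_idx, _ = mesh.ray.intersects_location(origins, dirs, multiple_hits=False)

    # PnP on the surviving 3D-2D pairs gives the pose of IMAGE_0 relative to
    # the first mesh model, i.e. the first transformation matrix.
    k2_guess = k1.copy()  # hypothetical intrinsics guess for the second device
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        locs.astype(np.float32), pts_2nd[ray_idx].astype(np.float32), k2_guess, None)
    rt_1 = np.eye(4)
    rt_1[:3, :3] = cv2.Rodrigues(rvec)[0]
    rt_1[:3, 3] = tvec.ravel()
    return rt_1
```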
Optionally, to improve the matching efficiency for the second texture images, the first texture image set may be clustered, i.e. images that are similar or close to one another are grouped into classes; each second texture image then only needs to be matched against one or two images in each cluster, which accelerates matching.
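A sketch of such clustering, under the assumption that a cheap global descriptor (a gray-level histogram here, an illustrative choice) suffices to group similar views (OpenCV and scikit-learn assumed):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def cluster_first_set(first_set, n_clusters=8):
    # Group first-set images so each second texture image is only matched
    # against one or two representatives per cluster; the descriptor and
    # cluster count are illustrative assumptions.
    feats = []
    for img in first_set:  # grayscale numpy arrays
        hist = cv2.calcHist([img], [0], None, [64], [0, 256])
        feats.append(cv2.normalize(hist, None).ravel())
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(feats))
    reps = {c: int(np.flatnonzero(labels == c)[0]) for c in range(n_clusters)}
    return labels, reps  # reps: one representative image index per cluster
```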
In step S440, the first transformation matrix set is obtained.
Specifically, taking the N first target images in the second texture image set as references, SfM sparse reconstruction is performed on the second texture image set using the N first transformation matrices, obtaining the pose matrix set RT1 of every image in the second texture image set relative to the first mesh model; RT1 is the first transformation matrix set.
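The anchoring idea, expressing the SfM poses in the mesh's world frame by pinning them to a first target image whose pose is already known, can be sketched as follows (numpy assumed; world-to-camera pose convention; scale handling omitted; all names illustrative):

```python
import numpy as np

def anchor_sfm_to_mesh(rt_sfm, ref_id, rt_world_ref):
    # rt_sfm: dict image_id -> 4x4 world-to-camera pose in the SfM frame;
    # ref_id: id of a first target image; rt_world_ref: its first
    # transformation matrix (pose relative to the first mesh model).
    # Scale recovery is omitted (assumed metric, or fixed from several anchors).
    s = np.linalg.inv(rt_sfm[ref_id]) @ rt_world_ref  # SfM frame -> mesh/world frame
    return {i: t @ s for i, t in rt_sfm.items()}      # RT1, the first transformation matrix set
```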
Step S450, texture mapping is performed on the first mesh model according to the first transformation matrix set and the second texture image set.
According to the texture mapping method of the mesh model provided by this embodiment, image matching is performed between the first texture image set and the second texture image set; the first target images of the second texture image set that match texture images in the first texture image set are determined; the first transformation matrices between the first target images and the first mesh model are acquired; and the first transformation matrix set is determined according to those matrices. Since the first transformation matrix set for the second texture image set is obtained without having to match every image successfully, both matching efficiency and the matching success rate are improved.
The existing mapping-replacement method needs to match images captured by two devices (for example, an image captured by a three-dimensional scanning device and an image captured by a third-party camera); it has poor robustness, performs poorly when recognizing and matching dissimilar images, and its matching completeness is limited by how completely the three-dimensional scanning device photographed the object. The texture mapping method of the mesh model provided by this embodiment only requires one image to be replaced successfully, which raises the success rate of image replacement. Meanwhile, matching between the images of the two devices greatly improves the matching success rate and robustness compared with matching dissimilar camera images.
The existing white-model mapping method has to go through relative orientation of the third-party images and an absolute orientation process that requires manual participation; the manual absolute orientation is rather troublesome and places certain demands on the expertise of the personnel. The texture mapping method of the mesh model provided by this embodiment directly automates the absolute orientation process; no manual participation is needed, and efficiency is greatly improved.
In this embodiment, a texture mapping method of a mesh model is also provided, and fig. 5 is a flowchart of another texture mapping method of a mesh model according to an embodiment of the present application, as shown in fig. 5, where the flowchart includes the following steps:
Step S510, a first mesh model of the target object, a first texture image set, and a second transformation matrix set of the first texture images in the first texture image set relative to the first mesh model are acquired.
Specifically, a first three-dimensional scanning device may be used to scan the target object, and a mesh project file may be exported; the mesh project includes the first mesh model of the target object, the first texture image set, and the second transformation matrix set of each first texture image in the first texture image set relative to the first mesh model. The first texture image set also carries the camera parameters of the first texture images.
In step S520, a second texture image set of the target object is acquired.
Specifically, an image acquisition device such as a third-party camera may be used to capture a new set of texture images of the object surface corresponding to the first mesh model, i.e. the second texture image set.
In step S530, texture feature recognition and matching are performed between the images in the second texture image set and the images in the first texture image set.
Specifically, texture feature recognition and matching are performed between the images in the second texture image set and the images in the first texture image set, and the pose matrices of the images in the second texture image set relative to the corresponding images in the first texture image set are computed. Assuming that N images in the second texture image set are successfully matched with images in the first texture image set, these N images are taken as the first target images, and the pose matrix set RT0 of the N first target images relative to the first mesh model is computed; RT0 contains the N first transformation matrices corresponding to the N first target images.
Step S540, the first transformation matrix set is acquired.
Specifically, taking the N first target images in the second texture image set as references, SfM sparse reconstruction is performed on the second texture image set using the N first transformation matrices, obtaining the pose matrix set RT1 of every image in the second texture image set relative to the first mesh model; RT1 is the first transformation matrix set.
In step S550, a third transformation matrix set is calculated.
Specifically, the pose matrix set RT_final of the images in the second texture image set relative to the second mesh model is computed according to the first transformation matrix set RT1 and the third transformation matrix RT2, where RT_final is the third transformation matrix set. The third transformation matrix RT2 is the transformation matrix between the first mesh model and the second mesh model. RT2 may, for example, be calculated by manual N-point alignment followed by ICP (Iterative Closest Point) fine registration, or by coarse registration through automatic mesh feature extraction and matching followed by ICP fine registration.
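A compact sketch of this composition (Open3D and numpy assumed; the pose conventions, namely world-to-camera poses and p_second = RT2 @ p_first, as well as all names and the ICP distance threshold, are assumptions):

```python
import numpy as np
import open3d as o3d

def poses_for_second_mesh(rt1_set, first_pcd, second_pcd, rt2_init=np.eye(4)):
    # rt1_set: list of 4x4 first transformation matrices (first-mesh frame);
    # first_pcd/second_pcd: o3d.geometry.PointCloud sampled from the two mesh
    # models. ICP refines RT2 (first mesh -> second mesh); 0.01 is illustrative.
    icp = o3d.pipelines.registration.registration_icp(
        first_pcd, second_pcd, max_correspondence_distance=0.01, init=rt2_init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    rt2_inv = np.linalg.inv(icp.transformation)
    # Compose: a camera that sees p_first = inv(RT2) @ p_second has pose
    # RT_final = RT1 @ inv(RT2) relative to the second mesh model.
    return [rt1 @ rt2_inv for rt1 in rt1_set]  # RT_final, the third transformation matrix set
```

With manual N-point alignment, rt2_init would come from the user-picked correspondences instead of the identity.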
In step S560, texture mapping is performed on the second mesh model according to the third transformation matrix set and the second texture image set.
In this embodiment, image matching is performed between the first texture image set and the second texture image set; the first target images of the second texture image set that match texture images in the first texture image set are determined; the first transformation matrices between the first target images and the first mesh model are acquired; and the first transformation matrix set is determined according to those matrices. Since the first transformation matrix set for the second texture image set is obtained without having to match every image successfully, both matching efficiency and the matching success rate are improved.
It should be noted that the steps illustrated in the flows above or in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
In this embodiment, a three-dimensional scanning system is further provided. Fig. 6 is a schematic structural diagram of a three-dimensional scanning system according to an embodiment of the present application; as shown in fig. 6, the system includes a first three-dimensional scanning device 610, an image acquisition device 620, and the processor 102, with the first three-dimensional scanning device 610 and the image acquisition device 620 each connected to the processor 102. The first three-dimensional scanning device 610 is configured to scan the target object to obtain the first mesh model, the first texture image set, and the second transformation matrix set between the first texture image set and the first mesh model; the image acquisition device 620 is configured to acquire the second texture images; the processor 102 is configured to run a computer program to perform the texture mapping method of the mesh model in the preceding embodiments. The first three-dimensional scanning device 610 includes, but is not limited to, a handheld three-dimensional scanner, a tracking three-dimensional scanner, and the like, with a built-in camera, so that while the first three-dimensional scanning device 610 scans the target object it can acquire, in real time, the first mesh model of the target object, the first texture image set, and the second transformation matrix set between the first texture images in the first texture image set and the first mesh model. During scanning, the camera of the first three-dimensional scanning device 610 captures one or more first texture images under each set of corresponding camera parameters, and all the first texture images together form the first texture image set. The image acquisition device 620 is a device capable of capturing higher-quality images, such as a mobile phone or a professional camera.
According to the three-dimensional scanning system provided by this embodiment, image matching is performed between the first texture image set and the second texture image set; the first target images of the second texture image set that match texture images in the first texture image set are determined; the first transformation matrices between the first target images and the first mesh model are acquired; and the first transformation matrix set is determined according to those matrices. Since the first transformation matrix set for the second texture image set is obtained without having to match every image successfully, both matching efficiency and the matching success rate are improved.
In this embodiment, a three-dimensional scanning system is further provided. Fig. 7 is a schematic structural diagram of another three-dimensional scanning system according to an embodiment of the present application; as shown in fig. 7, the system includes a first three-dimensional scanning device 610, a second three-dimensional scanning device 630, an image acquisition device 620, and the processor 102, with the first three-dimensional scanning device 610, the second three-dimensional scanning device 630, and the image acquisition device 620 each connected to the processor 102. The first three-dimensional scanning device 610 is configured to scan the target object to obtain the first mesh model, the first texture image set, and the second transformation matrix set between the first texture image set and the first mesh model; the second three-dimensional scanning device 630 is configured to scan the target object to obtain the second mesh model; the image acquisition device 620 is configured to acquire the second texture images; the processor 102 is configured to run a computer program to perform the texture mapping method of the mesh model in the preceding embodiments. The second three-dimensional scanning device 630 includes, but is not limited to, a handheld three-dimensional scanner, a tracking three-dimensional scanner, and the like, with a built-in camera, and it can acquire the second mesh model of the target object in real time while it scans the target object. The second mesh model generally has better local detail than the first mesh model.
According to the three-dimensional scanning system provided by this embodiment, image matching is performed between the first texture image set and the second texture image set; the first target images of the second texture image set that match texture images in the first texture image set are determined; the first transformation matrices between the first target images and the first mesh model are acquired; and the first transformation matrix set is determined according to those matrices. Since the first transformation matrix set for the second texture image set is obtained without having to match every image successfully, both matching efficiency and the matching success rate are improved.
This embodiment further provides a texture mapping apparatus for a mesh model. The apparatus is used to implement the above embodiments and preferred implementations, and what has already been described is not repeated. The terms "module," "unit," "sub-unit," and the like used below may refer to a combination of software and/or hardware that performs a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
FIG. 8 is a block diagram of a texture mapping apparatus for a mesh model according to an embodiment of the present application, as shown in FIG. 8, the apparatus includes:
a first acquisition module 810, configured to acquire a first mesh model of a target object;
a second acquisition module 820, configured to acquire a first texture image set and a second texture image set of the target object; the first texture image set and the second texture image set are sets of texture images acquired by different devices;
a determining module 830, configured to perform image matching between the first texture image set and the second texture image set and determine a first transformation matrix set; the first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model;
a processing module 840, configured to perform texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
There is also provided in this embodiment an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, a first mesh model of a target object is acquired;
S2, a first texture image set and a second texture image set of the target object are acquired; the first texture image set and the second texture image set are sets of texture images acquired by different devices;
S3, image matching is performed between the first texture image set and the second texture image set to determine a first transformation matrix set; the first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model;
and S4, texture mapping is performed on the mesh model to be mapped according to the first transformation matrix set and the second texture image set.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and are not described in detail in this embodiment.
In addition, in combination with the texture mapping method of the mesh model provided in the above embodiments, a storage medium may also be provided in this embodiment. The storage medium has a computer program stored thereon; when the computer program is executed by a processor, it implements the texture mapping method of the mesh model of any of the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to limit it. All other embodiments obtained by one of ordinary skill in the art on the basis of the embodiments provided herein without creative effort fall within the protection scope of the present application.
It is to be understood that the drawings are merely illustrative of some embodiments of the present application and that it is possible for those skilled in the art to adapt the present application to other similar situations without the need for inventive work. In addition, it should be appreciated that while the development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and thus should not be construed as a departure from the disclosure.
The term "embodiment" in this disclosure means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive. It will be clear or implicitly understood by those of ordinary skill in the art that the embodiments described in the present application can be combined with other embodiments without conflict.
The above examples merely represent a few embodiments of the present application; they are described in relative detail, but are not therefore to be construed as limiting the scope of the patent claims. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (12)

1. A texture mapping method for a mesh model, the method comprising:
acquiring a first mesh model of a target object;
acquiring a first texture image set and a second texture image set of the target object; the first texture image set and the second texture image set are sets of texture images acquired by different devices;
performing image matching between the first texture image set and the second texture image set to determine a first transformation matrix set; the first transformation matrix set is the set of first transformation matrices between the second texture images in the second texture image set and the first mesh model;
and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set.
2. The texture mapping method of a mesh model of claim 1, wherein the performing image matching on the first texture image set and the second texture image set and determining a first transformation matrix set comprises:
performing image matching between the first texture image set and the second texture image set, and determining a first target image in the second texture image set;
acquiring a first transformation matrix between the first target image and the first mesh model;
and determining the first transformation matrix set according to the first transformation matrix.
3. The texture mapping method of a mesh model of claim 2, wherein the acquiring a first transformation matrix between the first target image and the first mesh model comprises:
determining a second target image in the first texture image set; the second target image is an image matched with the first target image;
acquiring a second transformation matrix between the second target image and the first mesh model;
and determining the first transformation matrix according to the second transformation matrix.
4. The texture mapping method of a mesh model of claim 3, wherein the determining the first transformation matrix according to the second transformation matrix comprises:
generating a matching point set according to the feature points of the first target image and the feature points of the second target image;
determining a three-dimensional coordinate point set of the matching point set in the world coordinate system according to the matching point set and the second transformation matrix;
and determining the first transformation matrix according to the three-dimensional coordinate point set and the feature points of the first target image.
5. The texture mapping method of a mesh model of claim 3, wherein the determining the first transformation matrix according to the second transformation matrix comprises:
generating a matching point set according to the feature points of the first target image and the feature points of the second target image;
determining a three-dimensional coordinate point set of the matching point set in the world coordinate system according to the matching point set, the second transformation matrix, and first camera parameters; the first camera parameters are camera parameters of the device that acquired the first texture image set;
and determining the first transformation matrix and second camera parameters according to the three-dimensional coordinate point set and the feature points of the first target image; the second camera parameters are camera parameters of the device that acquired the second texture image set.
6. The texture mapping method of a mesh model according to claim 5, wherein the mesh model to be mapped is the first mesh model, and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set comprises:
performing texture mapping on the first mesh model according to the first transformation matrix set, the second texture image set, and the second camera parameters.
7. The texture mapping method of a mesh model according to claim 2, wherein determining the first transformation matrix set according to the first transformation matrix comprises:
performing sparse reconstruction on the second texture images in the second texture image set with the first target image as a reference, to obtain the first transformation matrix set.
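The sparse reconstruction in claim 7 is, in practice, incremental structure-from-motion seeded at the first target image. A minimal relative-pose sketch with OpenCV; a production pipeline (e.g. COLMAP-style SfM) would also triangulate points and fix the translation scale, which recoverPose leaves undetermined:

```python
import cv2
import numpy as np

def chain_pose(pts_ref, pts_new, K, R_ref, t_ref):
    """Register one more second texture image against a reference image
    whose pose (R_ref, t_ref) is already known, via the essential matrix.
    Caveat: t_rel from recoverPose is unit-norm, so metric scale must
    still be fixed from known 3-D points on the first mesh model."""
    p1 = np.asarray(pts_ref, dtype=np.float64)
    p2 = np.asarray(pts_new, dtype=np.float64)
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R_rel, t_rel, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    # world -> new camera = (ref camera -> new camera) after (world -> ref)
    R_new = R_rel @ R_ref
    t_new = (R_rel @ t_ref.reshape(3, 1) + t_rel).ravel()
    return R_new, t_new
```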
8. The texture mapping method of a mesh model according to claim 1, wherein the mesh model to be mapped is a second mesh model, and performing texture mapping on the mesh model to be mapped according to the first transformation matrix set and the second texture image set comprises:
acquiring the second mesh model of the target object;
acquiring a third transformation matrix between the first mesh model and the second mesh model; and
performing texture mapping on the second mesh model according to the first transformation matrix set, the second texture image set, and the third transformation matrix.
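Claim 8 reuses the first transformation matrix set by composing it with the third transformation matrix. In homogeneous 4x4 form (an assumed parameterization; the patent does not fix one), and assuming the third transformation matrix maps second-mesh coordinates into first-mesh coordinates:

```python
import numpy as np

def transforms_for_second_mesh(first_transform_set, T_third):
    """Sketch of claim 8: each world->camera matrix defined w.r.t. the
    first mesh model is re-based onto the second mesh model by composing
    with the third transformation matrix (second mesh -> first mesh)."""
    T_third = np.asarray(T_third, dtype=np.float64)   # 4x4 homogeneous
    return [np.asarray(T, dtype=np.float64) @ T_third
            for T in first_transform_set]
```

Texture mapping then proceeds as in claim 1, using the second mesh model and the re-based matrices in place of the originals.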
9. The texture mapping method of a mesh model according to claim 8, further comprising:
determining a range of the first mesh model according to geometric features of the target object; and
determining the first texture image set according to the range of the first mesh model.
10. A texture mapping apparatus for a mesh model, comprising:
a first acquisition module configured to acquire a first mesh model of a target object;
a second acquisition module configured to acquire a first texture image set and a second texture image set of the target object, wherein the first texture image set and the second texture image set are sets of texture images acquired by different devices;
a determining module configured to perform image matching on the first texture image set and the second texture image set to determine a first transformation matrix set, wherein the first transformation matrix set is a set of first transformation matrices between second texture images in the second texture image set and the first mesh model; and
a processing module configured to perform texture mapping on a mesh model to be mapped according to the first transformation matrix set and the second texture image set.
11. A three-dimensional scanning system, comprising: a first three-dimensional scanning device, an image acquisition device, and a processor, wherein:
the first three-dimensional scanning device is configured to scan a target object to obtain a first mesh model, a first texture image set, and a second transformation matrix set between first texture images in the first texture image set and the first mesh model; and
the processor is configured to run a computer program to perform the texture mapping method of a mesh model according to any one of claims 1 to 9.
12. A three-dimensional scanning system, comprising: a first three-dimensional scanning device, a second three-dimensional scanning device, an image acquisition device, and a processor, wherein:
the first three-dimensional scanning device is configured to scan a target object to obtain a first mesh model, a first texture image set, and a second transformation matrix set between first texture images in the first texture image set and the first mesh model;
the second three-dimensional scanning device is configured to scan the target object to obtain a second mesh model; and
the processor is configured to run a computer program to perform the texture mapping method of a mesh model according to any one of claims 1 to 9.
CN202311845412.0A, filed 2023-12-28, Texture mapping method and device of grid model and three-dimensional scanning system, status: Pending, published as CN117974868A

Priority Applications (1)

Application Number    Priority Date    Filing Date    Title
CN202311845412.0A     2023-12-28       2023-12-28     Texture mapping method and device of grid model and three-dimensional scanning system

Publications (1)

Publication Number    Publication Date
CN117974868A          2024-05-03

Family

ID=90858755

Country Status (1)

Country Link
CN (1) CN117974868A

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination