CN116246002A - Texture image pre-displacement method, device and storage medium based on texture mapping


Info

Publication number
CN116246002A
Authority
CN
China
Prior art keywords
texture
texture image
model
feature
camera
Prior art date
Legal status
Pending
Application number
CN202310196508.2A
Other languages
Chinese (zh)
Inventor
王江峰
张立旦
何振贵
陈尚俭
郑俊
Current Assignee
Scantech Hangzhou Co Ltd
Original Assignee
Scantech Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Scantech Hangzhou Co Ltd filed Critical Scantech Hangzhou Co Ltd
Priority to CN202310196508.2A priority Critical patent/CN116246002A/en
Publication of CN116246002A publication Critical patent/CN116246002A/en
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The application relates to a texture image pre-displacement method, device and storage medium based on texture mapping, wherein the method comprises the following steps: scanning a target object with a three-dimensional scanner to acquire a first texture image dataset, an object grid model and first texture mapping relation information; processing a first texture image in the first texture image dataset and a second texture image in a second texture image dataset to determine a first matching relationship between the first texture image and the second texture image; and determining, according to the first matching relationship, the first texture image dataset, the object grid model and the first texture mapping relation information, second texture mapping relation information corresponding to the object grid model and the second texture image data. The method and device solve the problems of complex post-processing, high configuration requirements on the three-dimensional scanner and low processing efficiency, automatically complete the pre-displacement of texture images in texture mapping, and improve processing efficiency.

Description

Texture image pre-displacement method, device and storage medium based on texture mapping
This application is a divisional application of application No. 202211503197.1, filed on November 29, 2022, and entitled "Texture image pre-displacement method, device and storage medium based on texture mapping".
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a texture image pre-displacement method, apparatus, and storage medium based on texture mapping.
Background
During real-time scanning, a three-dimensional scanner acquires object images as two-dimensional texture images from a camera mounted on the scanner, and performs texture mapping of the object grid model through a texture mapping algorithm to reconstruct a three-dimensional digital model with realistic textures. Because the supplementary lighting differs between shooting angles while the camera acquires the object images, the texture images contain reflective areas and color differences, so the three-dimensional digital model produced by the mapping looks poor.
To address this, the current approach acquires in advance a plurality of image textures captured at each position and then fuses and stitches them, optimizing the fusion and stitching to eliminate texture color differences and reflective areas and achieve smooth, seamless texture mapping. However, this scheme suffers from complex post-processing, high configuration requirements on the three-dimensional scanner, and low processing efficiency.
For the problems in the related art of complex post-processing, high configuration requirements on the three-dimensional scanner, and low processing efficiency, no effective solution has been proposed so far.
Disclosure of Invention
In this embodiment, a texture image pre-displacement method, device, and storage medium based on texture mapping are provided to solve the problems in the related art of complex post-processing, high configuration requirements on the three-dimensional scanner, and low processing efficiency.
In a first aspect, in this embodiment, there is provided a texture image pre-displacement method based on texture mapping, including:
scanning a target object based on a three-dimensional scanner, and acquiring a first texture image dataset, an object grid model and first texture mapping relation information; the first texture mapping relation information is relation information for performing texture mapping on the object grid model based on a first texture image data set;
processing a first texture image in the first texture image data set and a second texture image in a preset second texture image data set, and determining a first matching relationship between the first texture image and the second texture image;
and determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the first matching relationship, the first texture image dataset, the object grid model and the first texture mapping relation information.
In some embodiments thereof, the first texture mapping relationship information comprises a first camera parameter set and a first transformation matrix set;
the first camera parameter set is a set of first camera parameters; the first camera parameters are in one-to-one correspondence with first texture images in the first texture image dataset;
the first set of transformation matrices is a set of first transformation matrices between the first texture image in a camera coordinate system and an object grid model in a world coordinate system.
In some embodiments, the determining second texture mapping relation information corresponding to the second texture image data according to the first matching relation, the first texture image dataset, the object grid model and the first texture mapping relation information includes:
determining a three-dimensional coordinate set corresponding to feature points in the first feature set according to the first feature set, the first transformation matrix set, the object grid model and the first camera parameter set; the three-dimensional coordinate set is under the world coordinate system; the first feature set is obtained by performing feature detection on each first texture image in the first texture image data set;
taking two texture images having the first matching relationship as a group, and determining a second transformation matrix between the second texture image and the coordinate system where the three-dimensional coordinate set is located, together with a second camera parameter corresponding to the second texture image;
traversing all pairs of texture images having the first matching relationship, collecting the corresponding second transformation matrices to obtain a second transformation matrix set, and collecting the corresponding second camera parameters to obtain a second camera parameter set.
In some of these embodiments, the determining a three-dimensional coordinate set corresponding to feature points in the first feature set according to the first feature set, the first transformation matrix set, the object mesh model, and the first camera parameter set includes:
constructing rays from a camera optical center of the three-dimensional scanner to each feature point in the first feature set under each camera coordinate system according to the first transformation matrix set and the first camera parameter set;
converting the object grid model into a camera coordinate system corresponding to the rays;
determining a three-dimensional coordinate set corresponding to the feature points in the first feature set by taking the intersection points of the rays and the object grid model as screening conditions;
and transforming the three-dimensional coordinate set into the world coordinate system based on the first transformation matrix set.
In some embodiments, the determining second texture mapping relation information corresponding to the second texture image data according to the first matching relation, the first texture image dataset, the object grid model and the first texture mapping relation information includes:
determining a three-dimensional coordinate set corresponding to feature points in the first feature set according to the first feature set, the first transformation matrix set and the first camera parameter set; the first feature set is obtained by performing feature detection on each first texture image in the first texture image data set;
taking two texture images with the first matching relationship as a group, and determining a third transformation matrix between the second texture image and a coordinate system where the three-dimensional coordinate set is located and a second camera parameter corresponding to the second texture image;
traversing all pairs of texture images having the first matching relationship, collecting the corresponding third transformation matrices to obtain a third transformation matrix set, and collecting the corresponding second camera parameters to obtain a second camera parameter set;
and processing the third transformation matrices in the third transformation matrix set based on a fourth transformation matrix between the three-dimensional coordinate set and the coordinate system where the object grid model is located, to obtain a second transformation matrix set.
In some of these embodiments, the determining a three-dimensional coordinate set corresponding to a feature point in the first feature set according to the first feature set, the first transformation matrix set, and the first camera parameter set includes:
carrying out three-dimensional reconstruction on each feature point in the first feature set under each camera coordinate system according to the first transformation matrix set and the first camera parameter set, and determining the three-dimensional coordinate set corresponding to the feature points in the first feature set.
In some of these embodiments, the second texture image has a higher resolution than the first texture image.
In some embodiments, the processing the first texture image in the first texture image dataset and the second texture image in the second texture image dataset to determine the first matching relationship between the first texture image and the second texture image includes:
performing feature detection on each first texture image in the first texture image data set to obtain a first feature set;
performing feature detection on each second texture image in the second texture image data set to obtain a second feature set;
and performing feature matching on the first feature set and the second feature set, and determining a first matching relation between the first texture image and the second texture image.
In some of these embodiments, the method further comprises:
performing similarity detection on texture images in the first texture image dataset and the second texture image dataset before processing the first texture image in the first texture image dataset and a second texture image in a preset second texture image dataset;
and screening the texture images with the similarity larger than a similarity threshold as processing objects.
In some of these embodiments, the method further comprises:
performing texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
In a second aspect, in this embodiment, there is provided a texture image pre-displacement method based on texture mapping, including:
scanning a target object based on a three-dimensional scanner, and acquiring an object grid model, a model skin development diagram of the object grid model, and third texture mapping relation information corresponding to the object grid model and the model skin development diagram;
processing the model skin development diagram and a second texture image in a preset second texture image dataset, and determining a second matching relationship between the model skin development diagram and the second texture image;
and determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the second matching relation, the model skin development diagram, the object grid model and the third texture mapping relation information.
In a third aspect, in this embodiment, there is provided a texture image pre-displacement apparatus based on texture mapping, including: a first scanning module, a first processing module and a first replacement module;
the first scanning module is used for scanning the target object based on the three-dimensional scanner and acquiring a first texture image dataset, an object grid model and first texture mapping relation information; the first texture mapping relation information is relation information for performing texture mapping on the object grid model based on a first texture image data set;
the first processing module is used for processing a first texture image in the first texture image data set and a second texture image in a preset second texture image data set, and determining a first matching relationship between the first texture image and the second texture image;
the first replacement module is configured to determine second texture mapping relation information corresponding to the object grid model and the second texture image data according to the first matching relationship, the first texture image dataset, the object grid model and the first texture mapping relation information.
In a fourth aspect, in this embodiment, there is provided a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the texture image pre-displacement method based on texture mapping according to the first aspect or the second aspect when executing the computer program.
In a fifth aspect, in this embodiment, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the texture image pre-displacement method based on texture mapping described in the first aspect or the second aspect.
Compared with the related art, the texture image pre-displacement method, device and storage medium based on texture mapping provided in this embodiment scan a target object with a three-dimensional scanner to acquire a first texture image dataset, an object grid model and first texture mapping relation information, the first texture mapping relation information being the relation information for texture mapping the object grid model based on the first texture image dataset; process a first texture image in the first texture image dataset and a second texture image in the second texture image dataset to determine a first matching relationship between the first texture image and the second texture image; and determine, according to the first matching relationship, the first texture image dataset, the object grid model and the first texture mapping relation information, second texture mapping relation information corresponding to the object grid model and the second texture image data. The first texture image dataset in the texture mapping is thereby replaced with the second texture image dataset, completing the pre-displacement of the texture images. This solves the problems of complex post-processing, high configuration requirements on the three-dimensional scanner and low processing efficiency, automatically completes the pre-displacement of texture images in texture mapping, and improves processing efficiency.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the other features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a block diagram of the hardware structure of a terminal device for the texture image pre-displacement method based on texture mapping according to an embodiment of the present application;
FIG. 2 is a flow chart of a texture image pre-displacement method based on texture mapping according to an embodiment of the present application;
FIG. 3 is a flow chart of a texture image pre-displacement method based on texture mapping according to a preferred embodiment of the present application;
FIG. 4 is a flow chart of a texture image pre-displacement method based on texture mapping according to another embodiment of the present application;
FIG. 5 is a block diagram of a texture image pre-displacement device based on texture mapping according to an embodiment of the present application;
fig. 6 is a block diagram of a texture image pre-displacement device based on texture mapping according to another embodiment of the present application.
In the figures: 102, processor; 104, memory; 106, transmission device; 108, input-output device; 210, first scanning module; 220, first processing module; 230, first replacement module; 410, second scanning module; 420, second processing module; 430, second replacement module.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and the like in this application do not limit quantity and may denote the singular or the plural. The terms "comprising," "including," "having," and any variations thereof, as used in this application, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "And/or" describes an association relationship between associated objects and covers three cases: for example, "A and/or B" may mean that A exists alone, that A and B exist together, or that B exists alone. Generally, the character "/" indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like in this application merely distinguish similar objects and do not represent a particular ordering of the objects.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware structure of the terminal for the texture image pre-displacement method based on texture mapping according to the present embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1) and a memory 104 for storing data, wherein the processors 102 may include, but are not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or the like. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and modules, such as the computer program corresponding to the texture image pre-displacement method based on texture mapping in the present embodiment. The processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, which may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The above-described network includes a wireless network provided by the communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a texture image pre-displacement method based on texture mapping is provided. Fig. 2 is a flowchart of the texture image pre-displacement method based on texture mapping in this embodiment; as shown in fig. 2, the flow includes the following steps:
step S210, scanning a target object based on a three-dimensional scanner, and acquiring a first texture image dataset, an object grid model and first texture mapping relation information; the first texture mapping relation information is relation information for performing texture mapping on the object grid model based on the first texture image dataset;
step S220, processing a first texture image in the first texture image data set and a second texture image in the preset second texture image data set, and determining a first matching relationship between the first texture image and the second texture image;
Step S230, determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the first matching relation, the first texture image dataset, the object grid model and the first texture mapping relation information.
In particular, three-dimensional scanners include, but are not limited to, hand-held three-dimensional scanners, tracking three-dimensional scanners, and the like. A camera is arranged in the three-dimensional scanner and can acquire the first texture image dataset of the target object in real time while the three-dimensional scanner scans the target object. During scanning, the camera shoots one or more first texture images under each corresponding first camera parameter, and all the first texture images are collected to obtain the first texture image dataset.
The object grid model is obtained by modeling the scanned data, and the coordinates of each point on the object grid model are also available in the world coordinate system. The first texture mapping relation information is generated after scanning; it can be considered that, according to the first texture mapping relation information, the object grid model can be texture-mapped with the first texture image dataset to complete the mapping. Because the supplementary lighting differs under different first camera parameters, the first texture images inevitably contain reflective areas and color differences, which is why this application replaces them in the data-processing stage that precedes the mapping.
The second texture image dataset is prepared in advance for the target object. For example, a high-resolution color camera photographs the target object from all angles, producing a high-resolution, defect-free, high-quality second texture image dataset; the resolution of the second texture image may be considered higher than that of the first texture image. Of course, the second texture image dataset may also be obtained in other ways, in which the resolution of the second texture image is similar to that of the first texture image in the first texture image dataset but free of defects such as reflective points.
Since the texture images in both the first texture image dataset and the second texture image dataset are texture images of the target object, the texture images of the two datasets can be processed together: if a first texture image X and a second texture image Y share more identical feature points, the first texture image X and the second texture image Y are considered to have a preferred matching relationship. Integrating all pairs of texture images having a preferred matching relationship yields the first matching relationship between the first texture images and the second texture images. In other embodiments, the first matching relationship may instead be determined from the similarity between two texture images.
According to the determined first matching relationship, the first texture image dataset, the object grid model and the first texture mapping relation information, the second texture mapping relation information corresponding to the object grid model and the second texture image data can be calculated, thereby completing the pre-displacement of the texture images. In post-processing, the second texture image data can then directly texture-map the object grid model according to the second texture mapping relation information, which improves overall processing efficiency.
In the prior art, the plurality of acquired image textures must be fused and stitched with optimization in post-processing to eliminate texture color differences and reflective areas and achieve smooth, seamless texture mapping; the post-processing is complex, the configuration requirements on the three-dimensional scanner are high, and the processing efficiency is low. Through the above steps, the matching relationship is combined with the first texture image dataset and the first texture mapping relation information so that the second texture mapping relation information corresponding to the second texture image data is calculated quickly; the first texture image dataset in the texture mapping is replaced with the second texture image dataset, so that the pre-displacement of the texture images in texture mapping is completed automatically, the computation flow is simplified, and processing efficiency is improved.
In some of these embodiments, the first texture mapping relationship information comprises a first set of camera parameters and a first set of transformation matrices; the first camera parameter set is a set of first camera parameters; the first camera parameters are in one-to-one correspondence with the first texture images in the first texture image dataset; the first set of transformation matrices is a set of first transformation matrices between the first texture image in the camera coordinate system and the object grid model in the world coordinate system.
Specifically, a texture mapping algorithm can texture-map the object grid model according to the first texture mapping relation information and the first texture image dataset, the first texture mapping relation information comprising a first camera parameter set and a first transformation matrix set. Each first camera parameter (camera intrinsics and distortion) characterizes the camera when the corresponding first texture image was taken, and each first texture image corresponds to one first camera parameter. If multiple first texture images are acquired under the same first camera parameters, one of them may be selected as the first texture image. The camera intrinsics of a first camera parameter are parameters related to the characteristics of the first camera itself, such as its focal length and pixel size. The distortion of a first camera parameter is the distortion of the first camera lens, including radial distortion and tangential distortion. The first camera is the real camera physically present in the three-dimensional scanner.
The first transformation matrix is the transformation matrix between a first texture image in the camera coordinate system and the object grid model in the world coordinate system; collecting all first transformation matrices yields the first transformation matrix set, and the first texture image can be associated with the object grid model through the first transformation matrix. Each first transformation matrix comprises a rotation matrix and a translation matrix: the rotation matrix gives the directions of the coordinate axes of the first camera's coordinate system relative to the axes of the world coordinate system, and the translation matrix gives the position of the spatial origin in the first camera's coordinate system.
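As an illustrative sketch only (not taken from the patent), the action of one first transformation matrix on points of the object grid model can be written as follows in Python with NumPy; the function name and array layout are assumptions.

```python
import numpy as np

def world_to_camera(points_world, R, T):
    """Move world-coordinate points of the object grid model into one
    camera frame using a first transformation matrix, i.e. the pair
    (rotation R, 3x3; translation T, 3x1): X_cam = R @ X_world + T."""
    pts = np.asarray(points_world, dtype=float)   # N x 3 world coordinates
    return (R @ pts.T + T.reshape(3, 1)).T        # N x 3 camera coordinates
```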
In some embodiments, processing the first texture image in the first texture image dataset and the second texture image in the second texture image dataset in step S220 to determine the first matching relationship between the first texture image and the second texture image includes the following steps:
performing feature detection on each first texture image in the first texture image data set to obtain a first feature set;
performing feature detection on each second texture image in the second texture image data set to obtain a second feature set;
and performing feature matching on the first feature set and the second feature set, and determining a first matching relation between the first texture image and the second texture image.
Specifically, feature detection is first performed with a feature detection algorithm on the texture images in the first texture image dataset and the second texture image dataset; for each texture image, feature points and their corresponding feature information are detected. The feature information describes the position, pixels, vectors and other attributes of a feature point. The first feature set may be considered to include the feature points and feature information of the first texture images, and the second feature set those of the second texture images.
Then, feature matching is performed on the first feature set and the second feature set with a feature matching algorithm to determine the first matching relationship between the first texture images and the second texture images. Specifically, the feature information in the first feature set is matched against the feature information in the second feature set, and the identical feature points shared by the first feature set and the second feature set are screened out; if a first texture image a and a second texture image b share the largest number of such feature points, the two texture images are considered the best match and are associated to construct the first matching relationship between them. In this embodiment, the first texture image and the second texture image having the first matching relationship can be associated quickly and accurately, interference from other texture images is eliminated, and the processing steps are simplified.
The feature detection algorithm includes, but is not limited to, the Harris corner detection algorithm, the SIFT blob detection algorithm, the ORB feature detection algorithm, and the like. The feature matching algorithm includes, but is not limited to, FLANN-based matching algorithms, brute-force matching algorithms, and the like.
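A minimal sketch of this detection-and-matching step, assuming OpenCV and color images already loaded in memory; ORB with brute-force Hamming matching is shown, though the passage equally allows Harris, SIFT, or FLANN-based matching, and the descriptor-distance threshold is an illustrative choice.

```python
import cv2

def detect_features(images):
    """Detect ORB feature points and descriptors for each texture image."""
    orb = cv2.ORB_create(nfeatures=2000)
    feats = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        feats.append(orb.detectAndCompute(gray, None))  # (keypoints, descriptors)
    return feats

def first_matching_relationship(first_feats, second_feats):
    """Pair each first texture image with the second texture image that
    shares the most matched feature points."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    relation = {}
    for i, (_, d1) in enumerate(first_feats):
        best_j, best_count = None, 0
        for j, (_, d2) in enumerate(second_feats):
            if d1 is None or d2 is None:
                continue
            # keep only reasonably close descriptor matches
            good = [m for m in matcher.match(d1, d2) if m.distance < 50]
            if len(good) > best_count:
                best_j, best_count = j, len(good)
        relation[i] = best_j
    return relation
```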
In some of these embodiments, to further improve the efficiency of computing the matching relationship, similarity detection is performed on the texture images in the first texture image dataset and the second texture image dataset before the first texture images and the second texture images in the preset second texture image dataset are processed, and the texture images whose similarity is greater than a similarity threshold are screened out as the objects to be processed.
Specifically, a similarity detection algorithm detects the similarity of the texture images in the first texture image dataset and the second texture image dataset; that is, the similarity between each first texture image and all second texture images is detected. Texture images with similarity below the similarity threshold are discarded, and those with similarity above the threshold are kept as the objects to be processed. This greatly reduces the number of texture images and significantly improves the processing efficiency of the feature detection and feature matching algorithms.
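One plausible realization of this similarity pre-filter, assuming OpenCV's histogram correlation as the similarity measure; the patent does not fix a particular similarity detection algorithm, so the metric and the threshold here are both assumptions.

```python
import cv2

def similar_pairs(first_imgs, second_imgs, threshold=0.6):
    """Keep only (first, second) image index pairs whose color-histogram
    similarity exceeds the threshold, before feature detection and matching."""
    def hist(img):
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()

    first_h = [hist(im) for im in first_imgs]
    second_h = [hist(im) for im in second_imgs]
    return [(i, j)
            for i, h1 in enumerate(first_h)
            for j, h2 in enumerate(second_h)
            if cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) > threshold]
```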
Determining the second texture mapping relation information corresponding to the second texture image data may be performed in a variety of ways. The first converts everything into the unified world coordinate system before processing; the second processes first and then converts into the world coordinate system.
For the first implementation:
in some embodiments thereof, determining second texture mapping relation information corresponding to the second texture image data according to the first matching relation, the first texture image dataset, the object mesh model and the first texture mapping relation information in step S230 includes the steps of:
step S2311, determining a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set, the object grid model and the first camera parameter set; the three-dimensional coordinate set is under a world coordinate system; the first feature set is obtained by carrying out feature detection on each first texture image in the first texture image data set;
step S2312, a second transformation matrix between the second texture image and the coordinate system where the three-dimensional coordinate set is located and a second camera parameter corresponding to the second texture image are determined by taking two texture images with a first matching relationship as a group;
Step S2313, traversing all pairs of texture images having the first matching relationship, collecting the corresponding second transformation matrices to obtain a second transformation matrix set, and collecting the corresponding second camera parameters to obtain a second camera parameter set.
Specifically, the three-dimensional coordinates corresponding to the feature points in the first feature set are determined in the world coordinate system according to the first feature set, the first transformation matrix set, the object grid model and the first camera parameter set, and all three-dimensional coordinates are collected into a three-dimensional coordinate set. The two-dimensional feature points on each first texture image thereby establish a relationship with the object grid model in three-dimensional space; the positions in the object grid model corresponding to the feature points in the first feature set become known.
The best-matching pair of texture images is selected according to the first matching relationship. Since the correspondence between the feature points on the first texture image and the three-dimensional coordinate set is known at this point, the correspondence between the matching feature points in the second texture image and the three-dimensional coordinate set can be obtained, and from it the second transformation matrix between the second texture image and the coordinate system of the three-dimensional coordinate set, together with the corresponding second camera parameters. Finally, all pairs of texture images having the first matching relationship are traversed, and all second transformation matrices and corresponding second camera parameters are solved to obtain the second transformation matrix set and the second camera parameter set. In this embodiment, computing pair by pair over texture images having the first matching relationship reduces computational redundancy and yields high accuracy.
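Determining a second transformation matrix and second camera parameters from one matched pair can be viewed as a single-view camera resection over the 2D-3D correspondences just described. The sketch below uses OpenCV's calibration routine as one possible solver; the patent does not prescribe this routine, and the function name is illustrative.

```python
import cv2
import numpy as np

def solve_second_camera(pts3d, pts2d, img_size):
    """Estimate second camera parameters (intrinsics K, distortion) and the
    second transformation matrix from the 2D-3D correspondences of one pair.

    pts3d : Nx3 coordinates of matched feature points (coordinate system of
            the three-dimensional coordinate set)
    pts2d : Nx2 pixel coordinates of the same points in the second texture image
    """
    obj = np.asarray(pts3d, dtype=np.float32)
    img = np.asarray(pts2d, dtype=np.float32)
    # rough intrinsic guess, then joint refinement of intrinsics and pose
    K0 = cv2.initCameraMatrix2D([obj], [img], img_size)
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [obj], [img], img_size, K0, None,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    R, _ = cv2.Rodrigues(rvecs[0])        # 3x3 rotation matrix
    T = tvecs[0].reshape(3, 1)            # 3x1 translation matrix
    return K, dist, R, T
```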
In some of these embodiments, determining a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set, the object mesh model, and the first camera parameter set in step S2311 includes the steps of:
constructing rays from the camera optical center of the three-dimensional scanner to each feature point in the first feature set under each camera coordinate system according to the first transformation matrix set and the first camera parameter set;
converting the object grid model into a camera coordinate system corresponding to the rays;
taking the intersection point of the ray and the object grid model as a screening condition, and determining a three-dimensional coordinate set corresponding to the feature points in the first feature set;
the three-dimensional coordinate set is converted into a world coordinate system based on the first conversion matrix set.
Specifically, each first texture image corresponds to a camera coordinate system, and the object grid model and the rays are processed in the same camera coordinate system. For example, for a feature point Pa in a certain first texture image, let the optical center of the camera that took this image be OA. The ray from the camera optical center OA to the feature point Pa lies in a camera coordinate system L; the object grid model in the world coordinate system is converted into the camera coordinate system L based on the first transformation matrix set, and the ray intersects the object grid model at three-dimensional points p1, p2, p3, … in space. The z-direction values of p1, p2, p3, … in the camera coordinate system are calculated, and the three-dimensional point with the smallest value is taken as the point corresponding to the feature point Pa, giving the corresponding three-dimensional coordinates. The three-dimensional coordinates are then converted into the world coordinate system based on the corresponding first transformation matrix in the first transformation matrix set. Computing the three-dimensional coordinates of all feature points in this way yields the three-dimensional coordinate set in the world coordinate system. In this embodiment, using the intersection points of the rays with the object grid model as the screening condition improves the accuracy of the three-dimensional coordinate set.
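A sketch of this ray-screening step, assuming the third-party `trimesh` library for ray-mesh intersection and an undistorted pinhole camera; the mesh is assumed to have been converted into the camera frame already, so the optical center is the origin and depth is the z value.

```python
import numpy as np
import trimesh

def feature_points_to_3d(mesh_cam, K, pixels):
    """Cast a ray from the camera optical center through each feature pixel
    and keep the nearest intersection with the object grid model.

    mesh_cam : trimesh.Trimesh already transformed into this camera's frame
    K        : 3x3 camera intrinsic matrix
    pixels   : Nx2 feature point pixel coordinates
    """
    pix = np.asarray(pixels, dtype=float)
    pix_h = np.hstack([pix, np.ones((len(pix), 1))])
    dirs = (np.linalg.inv(K) @ pix_h.T).T            # ray directions
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origins = np.zeros_like(dirs)                    # optical center at origin

    points3d = np.full((len(pix), 3), np.nan)
    locs, ray_idx, _ = mesh_cam.ray.intersects_location(origins, dirs)
    for i in range(len(pix)):
        hits = locs[ray_idx == i]
        if len(hits):
            # the visible surface point is the hit with the smallest depth z
            points3d[i] = hits[np.argmin(hits[:, 2])]
    return points3d
```

The returned camera-frame coordinates would still be converted into the world coordinate system with the corresponding first transformation matrix, as the passage describes.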
In other embodiments, the determining a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set, the object mesh model, and the first camera parameter set in step S2311 includes the steps of:
constructing rays from the camera optical center of the three-dimensional scanner to each feature point in the first feature set under each camera coordinate system according to the first transformation matrix set and the first camera parameter set;
converting rays into a world coordinate system;
and determining a three-dimensional coordinate set corresponding to the feature points in the first feature set by taking the intersection points of the rays and the object grid model as screening conditions.
Specifically, each first texture image corresponds to a camera coordinate system while the object grid model is in the world coordinate system. The rays constructed in each camera coordinate system are converted into the world coordinate system, which serves as the unified coordinate system, and the intersection points of the rays with the object grid model are used as the screening condition to determine the three-dimensional coordinate set corresponding to the feature points in the first feature set.
Of course, a three-dimensional reconstruction may also be used to determine a three-dimensional coordinate set corresponding to the feature points in the first feature set, which is not described herein.
For the second implementation:
In some embodiments thereof, determining second texture mapping relation information corresponding to the second texture image data according to the first matching relation, the first texture image dataset, the object mesh model and the first texture mapping relation information in step S230 includes the steps of:
step S2321, determining a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set and the first camera parameter set; the first feature set is obtained by performing feature detection on each first texture image in the first texture image data set;
step S2322, two texture images with a first matching relationship are taken as a group, and a third transformation matrix between the second texture image and a coordinate system where the three-dimensional coordinate set is located and a second camera parameter corresponding to the second texture image are determined;
step S2323, traversing all pairs of texture images having the first matching relationship, collecting the corresponding third transformation matrices to obtain a third transformation matrix set, and collecting the corresponding second camera parameters to obtain a second camera parameter set;
step S2324, based on the fourth transformation matrix between the three-dimensional coordinate set and the coordinate system where the object grid model is located, processing the third transformation matrix in the third transformation matrix set to obtain a second transformation matrix set.
Specifically, three-dimensional reconstruction is performed in the camera coordinate systems according to the first feature set, the first transformation matrix set and the first camera parameter set; the three-dimensional coordinates corresponding to the feature points in the first feature set are determined, and all three-dimensional coordinates are collected into a three-dimensional coordinate set. The best-matching pair of texture images is selected according to the first matching relationship; since the correspondence between the feature points on the first texture image and the three-dimensional coordinate set is known at this point, the correspondence between the matching feature points in the second texture image and the three-dimensional coordinate set can be obtained, yielding the third transformation matrix between the second texture image and the coordinate system of the three-dimensional coordinate set, together with the corresponding second camera parameters. All pairs of texture images having the first matching relationship are traversed, and all third transformation matrices and corresponding second camera parameters are solved to obtain the third transformation matrix set and the second camera parameter set. Because the transformation matrix between the three-dimensional coordinate set and the object grid model is the identity matrix, the fourth transformation matrix between the three-dimensional coordinate set and the coordinate system where the object grid model is located is the identity matrix; the third transformation matrices in the third transformation matrix set can therefore be processed based on this identity matrix to obtain the second transformation matrix set. In this embodiment, processing first and then converting into the unified world coordinate system yields the second transformation matrix set and the second camera parameter set accurately.
Through the above steps, the second texture mapping relation information may be expressed as C_t = {K_t, R_t, T_t}, where K_t is the second camera parameter set, R_t is the set of rotation matrices in the second transformation matrix set, and T_t is the set of translation matrices in the second transformation matrix set. It can be considered that each second texture image corresponds to one second camera parameter. The camera intrinsics of a second camera parameter are parameters related to the characteristics of the second camera itself, such as its focal length and pixel size. The distortion of a second camera parameter is the distortion of the second camera lens, including radial distortion and tangential distortion. The second camera may be regarded as an abstract, virtual camera rather than a physically existing one. The second transformation matrix is the transformation matrix between a second texture image in the camera coordinate system and the object grid model in the world coordinate system; collecting all second transformation matrices yields the second transformation matrix set, and the second texture image can be associated with the object grid model through the second transformation matrix. Each second transformation matrix comprises a rotation matrix and a translation matrix: the rotation matrix gives the directions of the coordinate axes of the second camera's coordinate system relative to the axes of the world coordinate system, and the translation matrix gives the position of the spatial origin in the second camera's coordinate system.
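For concreteness, one entry of C_t = {K_t, R_t, T_t} could be held in a small container such as the following; the type and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SecondCameraInfo:
    """One entry of the second texture mapping relation information C_t."""
    K: np.ndarray     # 3x3 intrinsic matrix of the (virtual) second camera
    dist: np.ndarray  # lens distortion coefficients, radial and tangential
    R: np.ndarray     # 3x3 rotation part of the second transformation matrix
    T: np.ndarray     # 3x1 translation part of the second transformation matrix
```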
In some of these embodiments, determining a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set, and the first camera parameter set in step S2321 includes the steps of:
carrying out three-dimensional reconstruction on each feature point in the first feature set under each camera coordinate system according to the first transformation matrix set and the first camera parameter set, and determining the three-dimensional coordinate set corresponding to the feature points in the first feature set.
Specifically, according to the first transformation matrix set and the first camera parameter set, each feature point in the first feature set is converted into its respective camera coordinate system and reconstructed in three dimensions to obtain a corresponding three-dimensional point, whose three-dimensional coordinates are then determined. Computing the three-dimensional coordinates of all feature points in this way yields a three-dimensional coordinate set in the camera coordinate systems. In this embodiment, determining the three-dimensional coordinate set by three-dimensional reconstruction improves its computational efficiency. Of course, the three-dimensional coordinate set corresponding to the feature points in the first feature set may also be determined with rays constructed from the camera optical center to each feature point, which is not described again here.
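One standard way to realize such a reconstruction, assuming a feature point is observed in two first texture images whose first camera parameters and first transformation matrices are known; OpenCV's linear triangulation is used here as an illustrative solver.

```python
import cv2
import numpy as np

def triangulate_feature(K1, RT1, K2, RT2, pt1, pt2):
    """Reconstruct the 3D coordinate of one feature point seen in two views.

    K1, K2   : 3x3 first camera intrinsic matrices
    RT1, RT2 : 3x4 [R|T] first transformation matrices of the two views
    pt1, pt2 : matched pixel coordinates (x, y) of the feature point
    """
    P1, P2 = K1 @ RT1, K2 @ RT2                # 3x4 projection matrices
    p1 = np.float32(pt1).reshape(2, 1)
    p2 = np.float32(pt2).reshape(2, 1)
    X = cv2.triangulatePoints(P1, P2, p1, p2)  # 4x1 homogeneous point
    return (X[:3] / X[3]).ravel()              # Euclidean 3D coordinate
```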
In some of these embodiments, as shown in fig. 3, the texture image pre-displacement method based on texture mapping further includes:
step S240, performing texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
In this embodiment, the texture mapping algorithm can directly texture-map the object grid model according to the second texture image dataset and the second texture mapping relation information, so the first texture image data is no longer involved; the mapping is completed quickly while mapping efficiency and accuracy are guaranteed.
The process of the texture mapping algorithm is described below:
The object grid model is partitioned into blocks. For a selected region U on the object grid model, the second texture image corresponding to region U is screened out of the second texture image dataset as follows: the first texture images in the first texture image dataset are ordered into an image sequence I = {I_t, t = 1, …, n}; the first texture image I_{l(i)} that best matches region U is searched for in the first texture image dataset, with frame number l(i) ∈ {1, …, n}; the second texture image I_{j(i)} corresponding to the first texture image I_{l(i)} is then determined through the first matching relationship, and this second texture image I_{j(i)} is the second texture image corresponding to region U.
According to the corresponding second texture mapping relation information C_t = {K_t, R_t, T_t}, the second texture image I_{j(i)} is texture-mapped onto region U. The above steps are repeated until the texture mapping of all blocks in the object grid model is completed.
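A hedged sketch of the block-wise selection loop above. The passage leaves the criterion for the "best match" between region U and a first texture image open, so the frontal-view score below is only one plausible choice; the function name and data layout are assumptions.

```python
import numpy as np

def second_image_for_region(region_centroid, region_normal,
                            first_poses, first_match):
    """Pick the second texture image index j(i) for one mesh region U by
    first choosing the first texture image I_{l(i)} whose camera views the
    region most frontally, then following the first matching relationship.

    first_poses : list of (R, T) pairs from the first transformation matrices
    first_match : dict mapping l(i) -> j(i)
    """
    normal = np.asarray(region_normal, dtype=float)
    best_l, best_score = 0, -np.inf
    for l, (R, T) in enumerate(first_poses):
        cam_center = (-R.T @ np.asarray(T).reshape(3, 1)).ravel()
        view = cam_center - np.asarray(region_centroid, dtype=float)
        view /= np.linalg.norm(view)
        score = float(view @ normal)       # cosine of the viewing angle
        if score > best_score:
            best_l, best_score = l, score
    return first_match[best_l]
```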
In this embodiment, there is also provided a texture image pre-displacement method based on texture mapping, as shown in fig. 4, including the following steps:
step S410, scanning a target object based on a three-dimensional scanner, and acquiring an object grid model, a model skin development diagram of the object grid model and third texture mapping relation information corresponding to the object grid model and the model skin development diagram;
step S420, processing the model skin development diagram and a second texture image in a preset second texture image dataset, and determining a second matching relationship between the model skin development diagram and the second texture image;
step S430, determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the second matching relation, the model skin development diagram, the object grid model and the third texture mapping relation information.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and are not described in detail in this embodiment.
It should be emphasized that the model skin development diagram of the object grid model refers to the map obtained by mapping the object grid model with the first texture image dataset; that is, texture mapping the object grid model based on the first texture image dataset yields the development diagram of the model skin on the object grid model. After scanning, the three-dimensional scanner automatically generates the third texture mapping relation information, without requiring additional algorithmic processing. Every feature point on the model skin development diagram is an effective feature point, which effectively reduces the amount of computation needed to determine the second matching relationship.
It follows from the above embodiments that the second texture image dataset is prepared in advance for the target object. For example, a high-resolution color camera photographs the target object from all angles, producing a high-resolution, defect-free, high-quality second texture image dataset; the resolution of the second texture image may be considered higher than that of the third texture image in the model skin development diagram. Of course, the second texture image dataset may also be obtained in other ways, in which the resolution of the second texture image is similar to that of the third texture image in the model skin development diagram but free of defects such as reflective points. In either case, the image quality of the second texture image is higher than that of the model skin development diagram. The third texture images may be obtained by segmenting the model skin development diagram, for example by dividing it into pieces of a preset size.
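A sketch of the segmentation just suggested, assuming the model skin development diagram is a NumPy image array and the preset size is a square tile; both are illustrative assumptions.

```python
def split_skin_map(skin_map, tile_size=512):
    """Divide the model skin development diagram into third texture images of
    a preset size (tile_size x tile_size pixels; edge tiles may be smaller)."""
    h, w = skin_map.shape[:2]
    return [skin_map[y:y + tile_size, x:x + tile_size]
            for y in range(0, h, tile_size)
            for x in range(0, w, tile_size)]
```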
By the method, the problems of complex post-processing operation, high configuration requirement on the three-dimensional scanner and low processing efficiency are solved, the pre-displacement of texture images in the texture mapping is automatically completed, and the processing efficiency is improved.
In some of these embodiments, the texture image pre-displacement method based on texture mapping further comprises:
performing texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
In this embodiment, a texture mapping algorithm similar to that described above is used, and will not be described here.
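As a rough, hypothetical illustration of what such a mapping step might compute, one common approach assigns each mesh vertex a UV coordinate by projecting it into the selected second texture image using the second texture mapping relation information; the function and parameter names below are illustrative only and do not reproduce the algorithm referred to above.

```python
import numpy as np

def assign_uvs(vertices, K2, R2, T2, img_w, img_h):
    """Project mesh vertices into a second texture image and normalise to UV coordinates."""
    cam = (R2 @ vertices.T).T + T2           # world -> camera frame
    pix = (K2 @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]           # perspective divide -> pixel coordinates
    return pix / np.array([img_w, img_h])    # normalise to [0, 1] UV space

verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
K2 = np.array([[1000.0, 0.0, 512.0], [0.0, 1000.0, 512.0], [0.0, 0.0, 1.0]])
uvs = assign_uvs(verts, K2, np.eye(3), np.array([0.0, 0.0, 1.0]), 1024, 1024)
print(uvs[0])  # -> [0.5 0.5]: the vertex on the optical axis lands at the image centre
```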
It should be noted that the steps illustrated in the flows above or in the flow diagrams of the figures may be performed in a computer system, for example as a set of computer-executable instructions; and although a logical order is shown in the flow diagrams, in some cases the steps shown or described may be performed in an order different from the one given here.
In this embodiment, a texture image pre-displacement device based on texture mapping is further provided. The device is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. The terms "module," "unit," "sub-unit," and the like as used below may refer to a combination of software and/or hardware that performs a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
Fig. 5 is a block diagram of a texture image pre-displacement apparatus based on texture mapping according to the present embodiment, and as shown in fig. 5, the apparatus includes: a first scanning module 210, a first processing module 220, and a first replacement module 230;
a first scanning module 210, configured to scan the target object based on the three-dimensional scanner, obtain a first texture image dataset, an object mesh model, and first texture mapping relationship information; the first texture mapping relation information is relation information for performing texture mapping on the object grid model based on the first texture image dataset;
a first processing module 220, configured to process a first texture image in the first texture image dataset and a second texture image in the preset second texture image dataset, and determine a first matching relationship between the first texture image and the second texture image;
the first replacement module 230 is configured to determine second texture mapping relationship information corresponding to the object mesh model and the second texture image data according to the first matching relationship, the first texture image dataset, the object mesh model, and the first texture mapping relationship information.
By the device, the problems of complex post-processing operations, high configuration requirements on the three-dimensional scanner, and low processing efficiency are solved; the pre-displacement of texture images in texture mapping is completed automatically, and processing efficiency is improved.
In some of these embodiments, the first texture mapping relationship information comprises a first set of camera parameters and a first set of transformation matrices; the first camera parameter set is a set of first camera parameters; the first camera parameters are in one-to-one correspondence with the first texture images in the first texture image dataset; the first set of transformation matrices is a set of first transformation matrices between the first texture image in the camera coordinate system and the object grid model in the world coordinate system.
In some of these embodiments, the first replacement module 230 is further configured to determine a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set, the object grid model, and the first camera parameter set, where the three-dimensional coordinate set is in the world coordinate system and the first feature set is obtained by performing feature detection on each first texture image in the first texture image dataset; to take two texture images with the first matching relationship as a group and determine a second transformation matrix between the second texture image and the coordinate system of the three-dimensional coordinate set, together with the second camera parameters corresponding to the second texture image; and to traverse all pairs of texture images with the first matching relationship, collecting the corresponding second transformation matrices into a second transformation matrix set and the corresponding second camera parameters into a second camera parameter set.
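Determining a transformation matrix and camera parameters from matched 2D feature points and their three-dimensional coordinates is a classic pose-estimation problem. The sketch below uses OpenCV's solvePnP as a stand-in, with known intrinsics and synthetic correspondences; the patent does not prescribe this solver, and recovering the second camera parameters themselves would require a calibration or structure-from-motion step not shown here.

```python
import numpy as np
import cv2

# Synthetic stand-ins: 3D coordinates of matched feature points, and their pixel
# locations in the second texture image generated from a known ground-truth pose.
pts3d = np.random.rand(12, 3).astype(np.float32)
K2 = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
pts2d, _ = cv2.projectPoints(pts3d, np.zeros(3), np.array([0.0, 0.0, 3.0]), K2, None)

ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K2, None)  # estimate the pose
R2, _ = cv2.Rodrigues(rvec)                            # rotation part of the transform
print(ok, tvec.ravel())                                # tvec should recover [0, 0, 3]
```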
In some embodiments, the first replacement module 230 is further configured to reconstruct each feature point in the first feature set three-dimensionally according to the first transformation matrix set and the first camera parameter set, and to construct, in the camera coordinate system, a ray from the camera optical center of the three-dimensional scanner to the feature point; taking the intersection of the ray with the object grid model as a screening condition, the three-dimensional coordinate set corresponding to the feature points in the first feature set is determined from the three-dimensional reconstruction result, and the three-dimensional coordinate set is converted into the world coordinate system based on the first transformation matrix set.
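The screening condition in this step amounts to a ray–mesh intersection test. A minimal per-triangle sketch using the Möller–Trumbore algorithm follows; the patent does not name the intersection method, so this choice and all names here are assumptions.

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: does the ray meet the triangle at a positive distance?"""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                    # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return e2.dot(q) * inv > eps          # intersection distance t must be positive

# Ray from the camera optical centre through a reconstructed feature point
origin = np.zeros(3)
feature_pt = np.array([0.0, 0.0, 1.0])
tri = (np.array([-1.0, -1.0, 2.0]), np.array([1.0, -1.0, 2.0]), np.array([0.0, 1.0, 2.0]))
print(ray_hits_triangle(origin, feature_pt - origin, *tri))  # True: keep this feature point
```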
In some embodiments, the first replacement module 230 is further configured to determine a three-dimensional coordinate set corresponding to the feature points in the first feature set according to the first feature set, the first transformation matrix set, and the first camera parameter set, where the first feature set is obtained by performing feature detection on each first texture image in the first texture image dataset; to take two texture images with the first matching relationship as a group and determine a third transformation matrix between the second texture image and the coordinate system of the three-dimensional coordinate set, together with the second camera parameters corresponding to the second texture image; to traverse all pairs of texture images with the first matching relationship, collecting the corresponding third transformation matrices into a third transformation matrix set and the corresponding second camera parameters into a second camera parameter set; and to process the third transformation matrices in the third transformation matrix set based on a fourth transformation matrix between the three-dimensional coordinate set and the coordinate system of the object grid model, so as to obtain the second transformation matrix set.
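Processing the third transformation matrix with the fourth amounts to composing two changes of coordinate frame. A minimal sketch with 4x4 homogeneous matrices follows; the composition order shown (fourth applied after third) is an assumption about the intended chaining, not something the text states explicitly.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transformation matrix."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M

# third transform: second texture image pose -> frame of the three-dimensional coordinate set
M3 = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 1.0]))
# fourth transform: frame of the coordinate set -> frame of the object grid model
M4 = to_homogeneous(np.eye(3), np.array([0.5, 0.0, 0.0]))
M2 = M4 @ M3           # second transform: both changes of frame chained together
print(M2[:3, 3])       # -> [0.5 0.  1. ]
```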
In some embodiments, the first replacement module 230 is further configured to reconstruct each feature point in the first feature set three-dimensionally according to the first transformation matrix set and the first camera parameter set, and to construct, in the camera coordinate system, a ray from the camera optical center of the three-dimensional scanner to the feature point; the three-dimensional coordinate set corresponding to the feature points in the first feature set is then determined from the three-dimensional reconstruction result, taking the intersections of the rays with the object grid model as the screening condition.
In some of these embodiments, the resolution of the second texture image is higher than the resolution of the first texture image.
In some embodiments, the first processing module 220 is further configured to perform feature detection on each first texture image in the first texture image dataset to obtain a first feature set; performing feature detection on each second texture image in the second texture image data set to obtain a second feature set; and performing feature matching on the first feature set and the second feature set, and determining a first matching relation between the first texture image and the second texture image.
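The patent does not name a particular detector or matcher; for concreteness, the sketch below uses ORB features with a brute-force Hamming matcher and Lowe's ratio test, which is one conventional way to obtain such a matching relationship. File paths and thresholds are placeholders.

```python
import cv2

img1 = cv2.imread("first_texture.png", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
img2 = cv2.imread("second_texture.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)     # features of the first texture image
kp2, des2 = orb.detectAndCompute(img2, None)     # features of the second texture image

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
print(f"{len(good)} matches support a first matching relationship")
```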
In some embodiments, the first processing module 220 is further configured to perform similarity detection on the texture images in the first texture image dataset and the second texture image dataset before processing the first texture image in the first texture image dataset and the second texture image in the second texture image dataset; and screening the texture images with the similarity larger than the similarity threshold value as the processing objects.
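The similarity measure is likewise left open by the text; one cheap possibility, sketched below with made-up names and a made-up threshold, is to compare grey-level histograms and keep only image pairs whose correlation exceeds the threshold.

```python
import cv2

def hist_similarity(img_a, img_b):
    """Correlation of 64-bin grey-level histograms; values near 1 mean similar images."""
    h_a = cv2.calcHist([img_a], [0], None, [64], [0, 256])
    h_b = cv2.calcHist([img_b], [0], None, [64], [0, 256])
    cv2.normalize(h_a, h_a)
    cv2.normalize(h_b, h_b)
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_CORREL)

SIM_THRESHOLD = 0.8  # hypothetical similarity threshold
# keep = [(a, b) for a, b in candidate_pairs if hist_similarity(a, b) > SIM_THRESHOLD]
```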
In some of these embodiments, the texture image pre-displacement device based on texture mapping further comprises a first mapping module; the first mapping module is used for performing texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
In this embodiment, there is further provided a texture image pre-displacement device based on texture mapping, as shown in fig. 6, including: a second scan module 410, a second process module 420, and a second permutation module 430;
a second scanning module 410, configured to scan the target object based on the three-dimensional scanner, obtain an object mesh model, a model skin development diagram of the object mesh model, and third texture mapping relation information corresponding to the object mesh model and the model skin development diagram;
a second processing module 420, configured to process the model skin development map and a second texture image in a preset second texture image dataset, and determine a second matching relationship between the model skin development map and the second texture image;
the second replacement module 430 is configured to determine second texture mapping relationship information corresponding to the object mesh model and the second texture image data according to the second matching relationship, the model skin development diagram, the object mesh model, and the third texture mapping relationship information.
By the device, the problems of complex post-processing operations, high configuration requirements on the three-dimensional scanner, and low processing efficiency are solved; the pre-displacement of texture images in texture mapping is completed automatically, and processing efficiency is improved.
In some of these embodiments, the texture image pre-displacement device based on texture mapping further comprises a second mapping module; and the second mapping module is used for performing texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
There is also provided in this embodiment a computer device comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the computer device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the above-described processor may be configured to perform the following steps by means of a computer program:
s1, scanning a target object based on a three-dimensional scanner, and acquiring a first texture image dataset, an object grid model and first texture mapping relation information; the first texture mapping relation information is relation information for performing texture mapping on the object grid model based on the first texture image dataset;
s2, processing a first texture image in the first texture image data set and a second texture image in the preset second texture image data set, and determining a first matching relationship between the first texture image and the second texture image;
and S3, determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the first matching relation, the first texture image data set, the object grid model and the first texture mapping relation information.
Or S4, scanning the target object based on a three-dimensional scanner, and acquiring an object grid model, a model skin development diagram of the object grid model and third texture mapping relation information corresponding to the object grid model and the model skin development diagram;
S5, processing the model skin development chart and a second texture image in a preset second texture image dataset, and determining a second matching relationship between the model skin development chart and the second texture image;
and S6, determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the second matching relation, the model skin development diagram, the object grid model and the third texture mapping relation information.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and are not described in detail in this embodiment.
In addition, in combination with the texture image pre-displacement method based on texture mapping provided in the above embodiment, a storage medium may be further provided in this embodiment to implement the method. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the texture image pre-permutation methods based on texture mapping in the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments obtained by one of ordinary skill in the art from the embodiments provided herein without undue burden fall within the scope of the present application.
It is evident that the drawings are only examples or embodiments of the present application, from which a person skilled in the art can adapt the application to other similar situations without inventive effort. In addition, it should be appreciated that although such development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and the disclosure should therefore not be construed as insufficiently detailed.
The term "embodiment" in this application means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive. It will be clear or implicitly understood by those of ordinary skill in the art that the embodiments described in this application can be combined with other embodiments without conflict.
The above examples represent only a few embodiments of the present application; they are described in some detail but are not therefore to be construed as limiting the scope of the patent. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and these fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A texture image pre-displacement method based on texture mapping, comprising:
scanning a target object based on a three-dimensional scanner, and acquiring an object grid model, a model skin development map of the object grid model, and third texture mapping relation information corresponding to the object grid model and the model skin development map; wherein the model skin development map is obtained by performing texture mapping on the object grid model based on a first texture image dataset;
processing the model skin development diagram and a second texture image in a preset second texture image dataset, and determining a second matching relationship between the model skin development diagram and the second texture image; the image quality of the second texture image is higher than that of the model skin development image;
determining second texture mapping relation information corresponding to the object grid model and the second texture image data according to the second matching relation, the model skin development diagram, the object grid model and the third texture mapping relation information;
and performing texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
2. The texture image pre-displacement method based on texture mapping according to claim 1, wherein the third texture mapping relationship information includes a fifth camera parameter set and a fifth conversion matrix set;
the fifth camera parameter set is a set of fifth camera parameters; the fifth camera parameters are in one-to-one correspondence with the third texture image in the model skin development chart;
the fifth set of transformation matrices is a set of fifth transformation matrices between the third texture image in a camera coordinate system and an object grid model in a world coordinate system.
3. The texture image pre-displacement method according to claim 2, wherein determining second texture mapping relation information corresponding to the object mesh model and the second texture image data according to the second matching relation, the model skin development map, the object mesh model, and the third texture mapping relation information comprises:
determining a three-dimensional coordinate set corresponding to feature points in the third feature set according to the third feature set, the fifth transformation matrix set, the object grid model and the fifth camera parameter set; the three-dimensional coordinate set is under the world coordinate system; the third feature set is obtained by performing feature detection on each third texture image in the model skin development chart;
taking two texture images with the second matching relationship as a group, and determining a second transformation matrix between the second texture image and a coordinate system where the three-dimensional coordinate set is located and a second camera parameter corresponding to the second texture image;
traversing all the two texture images with the second matching relationship, and collecting the corresponding second conversion matrixes to obtain a second conversion matrix set; and collecting the corresponding second camera parameters to obtain a second camera parameter set.
4. The texture image pre-displacement method according to claim 3, wherein the determining a three-dimensional coordinate set corresponding to feature points in the third feature set according to the third feature set, the fifth transformation matrix set, the object mesh model, and the fifth camera parameter set comprises:
constructing rays from a camera optical center of the three-dimensional scanner to each feature point in the third feature set under each camera coordinate system according to the fifth conversion matrix set and the fifth camera parameter set;
converting the object grid model into a camera coordinate system corresponding to the rays;
determining a three-dimensional coordinate set corresponding to the feature points in the third feature set by taking the intersection points of the rays and the object grid model as screening conditions;
wherein the three-dimensional coordinate set is converted into a world coordinate system based on the fifth conversion matrix set.
5. The texture image pre-displacement method according to claim 2, wherein determining second texture mapping relation information corresponding to the object mesh model and the second texture image data according to the second matching relation, the model skin development map, the object mesh model, and the third texture mapping relation information comprises:
determining a three-dimensional coordinate set corresponding to the feature points in the third feature set according to the third feature set, the fifth transformation matrix set and the fifth camera parameter set; the third feature set is obtained by performing feature detection on each third texture image in the model skin development chart;
taking two texture images with the second matching relationship as a group, and determining a sixth transformation matrix between the second texture image and a coordinate system where the three-dimensional coordinate set is located and a second camera parameter corresponding to the second texture image;
traversing all the two texture images with the second matching relationship, and collecting the corresponding sixth conversion matrix to obtain a sixth conversion matrix set; collecting the corresponding second camera parameters to obtain a second camera parameter set;
and processing a sixth transformation matrix in the sixth transformation matrix set based on a fourth transformation matrix between the three-dimensional coordinate set and the coordinate system where the object grid model is located, so as to obtain a second transformation matrix set.
6. The texture image pre-displacement method according to claim 5, wherein the determining a three-dimensional coordinate set corresponding to feature points in the third feature set according to the third feature set, the fifth transformation matrix set, and the fifth camera parameter set comprises:
and carrying out three-dimensional reconstruction on each characteristic point in the third characteristic set under each camera coordinate system according to the fifth conversion matrix set and the fifth camera parameter set, and determining a three-dimensional coordinate set corresponding to the characteristic points in the third characteristic set.
7. The texture map-based texture image pre-displacement method of claim 1, wherein the processing the model skin development map and a second texture image in a second texture image dataset to determine a second matching relationship between the model skin development map and the second texture image comprises:
performing feature detection on each third texture image in the model skin development chart to obtain a third feature set;
performing feature detection on each second texture image in the second texture image data set to obtain a second feature set;
and performing feature matching on the third feature set and the second feature set, and determining a second matching relation between a third texture image and the second texture image in the model skin development chart.
8. The texture image pre-displacement method based on texture mapping according to claim 1, further comprising:
performing similarity detection on the model skin development map and a texture image in a second texture image data set before processing the model skin development map and the second texture image in the second texture image data set;
and screening the texture images with the similarity larger than a similarity threshold as processing objects.
9. A texture image pre-displacement device based on texture mapping, comprising: the device comprises a second scanning module, a second processing module, a second replacement module and a second mapping module;
the second scanning module is used for scanning a target object based on a three-dimensional scanner, and acquiring an object grid model, a model skin development map of the object grid model, and third texture mapping relation information corresponding to the object grid model and the model skin development map; wherein the model skin development map is obtained by performing texture mapping on the object grid model based on a first texture image dataset;
the second processing module is used for processing the model skin development map and a second texture image in a preset second texture image dataset, and determining a second matching relationship between the model skin development map and the second texture image; the image quality of the second texture image is higher than that of the model skin development map;
the second replacement module is configured to determine second texture mapping relationship information corresponding to the object mesh model and the second texture image data according to the second matching relationship, the model skin development map, the object mesh model, and the third texture mapping relationship information;
the second mapping module is configured to perform texture mapping on the object grid model according to the second texture image dataset and the second texture mapping relation information by using a texture mapping algorithm.
10. A computer device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is arranged to run the computer program to perform the steps of the texture image pre-displacement method based on texture mapping according to any one of claims 1 to 8.
CN202310196508.2A 2022-11-29 2022-11-29 Texture image pre-displacement method, device and storage medium based on texture mapping Pending CN116246002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310196508.2A CN116246002A (en) 2022-11-29 2022-11-29 Texture image pre-displacement method, device and storage medium based on texture mapping

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211503197.1A CN115601490B (en) 2022-11-29 2022-11-29 Texture image pre-replacement method and device based on texture mapping and storage medium
CN202310196508.2A CN116246002A (en) 2022-11-29 2022-11-29 Texture image pre-displacement method, device and storage medium based on texture mapping

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202211503197.1A Division CN115601490B (en) 2022-11-29 2022-11-29 Texture image pre-replacement method and device based on texture mapping and storage medium

Publications (1)

Publication Number Publication Date
CN116246002A true CN116246002A (en) 2023-06-09

Family

ID=84853527

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310196508.2A Pending CN116246002A (en) 2022-11-29 2022-11-29 Texture image pre-displacement method, device and storage medium based on texture mapping
CN202211503197.1A Active CN115601490B (en) 2022-11-29 2022-11-29 Texture image pre-replacement method and device based on texture mapping and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211503197.1A Active CN115601490B (en) 2022-11-29 2022-11-29 Texture image pre-replacement method and device based on texture mapping and storage medium

Country Status (1)

Country Link
CN (2) CN116246002A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758205B (en) * 2023-08-24 2024-01-26 先临三维科技股份有限公司 Data processing method, device, equipment and medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101841668B1 (en) * 2012-02-15 2018-03-27 한국전자통신연구원 Apparatus and method for producing 3D model
CN109325990B (en) * 2017-07-27 2022-11-29 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, and storage medium
CN109389665B (en) * 2018-08-24 2021-10-22 先临三维科技股份有限公司 Texture obtaining method, device and equipment of three-dimensional model and storage medium
CN110599578A (en) * 2019-07-29 2019-12-20 深圳市易尚展示股份有限公司 Realistic three-dimensional color texture reconstruction method
CN113327277A (en) * 2020-02-29 2021-08-31 华为技术有限公司 Three-dimensional reconstruction method and device for half-body image
US11120606B1 (en) * 2020-04-24 2021-09-14 Electronic Arts Inc. Systems and methods for image texture uniformization for multiview object capture
CN113496539B (en) * 2021-06-11 2023-08-15 山东大学 Texture mapping method and system based on three-dimensional grid model parameter design
CN113538649B (en) * 2021-07-14 2022-09-16 深圳信息职业技术学院 Super-resolution three-dimensional texture reconstruction method, device and equipment
CN114049423A (en) * 2021-10-13 2022-02-15 北京师范大学 Automatic realistic three-dimensional model texture mapping method
CN115187715A (en) * 2022-06-30 2022-10-14 先临三维科技股份有限公司 Mapping method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115601490A (en) 2023-01-13
CN115601490B (en) 2023-03-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination