CN110619601A - Image data set generation method based on three-dimensional model - Google Patents
- Publication number
- CN110619601A (application CN201910891377.3A)
- Authority
- CN
- China
- Prior art keywords
- dimensional model
- camera
- projector
- structured light
- image
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/067—Reshaping or unfolding 3D tree structures onto 2D planes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The invention relates to the fields of computer vision, computer graphics, and image processing, and in particular to an image data set generation method based on a three-dimensional model. The method projects a three-dimensional model to a simulated camera and a simulated projector to simulate the texture images collected by the camera and the structured light images projected by the projector; superimposes the structured light coding information on the texture map through the coordinate correspondence to obtain a texture map containing structured light; and obtains, from the same correspondence, the projector coordinates matching each point of the camera image, yielding a disparity map. By simulating a camera and a projector, the invention uses a three-dimensional model to generate a corresponding depth map, texture map, projected structured light, structured light texture map, matching point coordinates, and matching disparity map; by controlling and varying the simulation parameters, multiple groups of corresponding data are generated, greatly enriching the content of the data set.
Description
Technical Field
The invention relates to the fields of computer vision, computer graphics and image processing, in particular to an image data set generation method based on a three-dimensional model.
Background
In recent years, depth perception technology has found wide application in AR/VR, robotics, drones, driverless vehicles, and other fields, solving problems such as non-rigid reconstruction, motion recognition, and target tracking. Current implementations mainly include TOF (time of flight), structured light, and stereo vision, among which structured light is the mainstream and most widely applied depth perception scheme.
The structured light method is based on the triangulation principle: the projector, the camera, and the measured object form a triangle. By projecting a coding pattern of a specific mode onto the object surface, pixel matching between projector and camera (monocular scheme) or between camera and camera (binocular scheme) is completed, and depth information is then calculated from the triangular relationship. Common coding methods include line structured light, sinusoidal structured light, speckle structured light, and pseudo-random coding.
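The triangulation relationship described above can be sketched in a few lines (a minimal illustration for a rectified camera/projector pair; the focal length, baseline, and disparity values are assumed for the example, not taken from the patent):

```python
# Minimal sketch of the triangulation principle, assuming a rectified
# camera/projector pair; all numeric values below are illustrative.
def depth_from_disparity(disparity_px: float, focal_px: float, baseline: float) -> float:
    """Triangulation for a rectified pair: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline / disparity_px

# A point matched with a 50 px disparity, seen with a 1000 px focal length
# and a 75 mm baseline, lies 1500 mm away.
z = depth_from_disparity(50.0, 1000.0, 75.0)
```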
The advent of deep learning allows structured light data to be processed into depth or disparity information without being limited to conventional image processing and analytic calculation methods. However, deep learning requires a large amount of data as training samples, and acquiring a large number of structured light images is time-consuming and requires hardware support.
Disclosure of Invention
The invention aims to provide an image data set generation method based on a three-dimensional model, which uses existing 3D models to generate data sets for different structured light codings by simulation, thereby solving the difficulty of structured light data set acquisition and providing a convenient data set acquisition method for deep learning.
The realization process of the invention is as follows:
a method of generating an image dataset based on a three-dimensional model, comprising the steps of:
(1) projecting the three-dimensional model to a simulated camera and a simulated projector respectively, to simulate the texture images collected by the camera and the structured light images projected by the projector;
(2) superimposing the coding information of the structured light image on the texture image according to the coordinate correspondence, to obtain a texture image containing structured light;
(3) obtaining matching point pairs between the texture image containing structured light and the structured light image of the simulated projector according to the correspondence of the three-dimensional model with the simulated camera and the simulated projector, thereby obtaining matching point coordinates and further a disparity map;
(4) generating multiple groups of corresponding data by varying the parameters in steps (1), (2) and (3), thereby generating an image data set.
Further, the step (1) specifically comprises the following steps:
(a) acquiring a three-dimensional model;
(b) simulating the setting of a camera;
(c) projecting the three-dimensional model to a simulation camera to simulate a texture image acquired by the camera;
(d) simulating the setting of a projector;
(e) and projecting the three-dimensional model to a simulation projector to simulate the structured light image projected by the projector.
Further, the three-dimensional model in step (a) may be obtained by a three-dimensional scanner, a depth sensing device, a coordinate measuring machine, or manual modeling; the texture information in the three-dimensional model obtained in step (a) is grayscale or color information; one or more simulated cameras may be used in step (b); and one or more simulated projectors may be used in step (d).
Further, the specific process of step (b) is as follows:

First, set the origin of the three-dimensional model's coordinate system at the model's center of gravity, obtaining a model P_w(x_w, y_w, z_w; I_w) whose coordinate origin is at the center of gravity.

Second, set the origin of the simulated camera coordinate system, i.e. the position O_c(x_c0, y_c0, z_c0) of the simulated camera's optical center in the three-dimensional model coordinate system, and the rotation vector (α_c, β_c, γ_c) from the camera coordinate system to the three-dimensional model coordinate system, which determines a rotation matrix R_c (composed from the elementary rotations about the three axes):

R_c = R_z(γ_c) · R_y(β_c) · R_x(α_c)

where α_c is the rotation angle about the x-axis, β_c the rotation angle about the y-axis, and γ_c the rotation angle about the z-axis. The three-dimensional model is thereby transferred from the barycentric coordinate system to the simulated camera coordinate system P_c(x_c, y_c, z_c; I_c):

[x_c, y_c, z_c]^T = R_c [x_w, y_w, z_w]^T - [x_c0, y_c0, z_c0]^T

Then set the focal length f_c, resolution w_c * h_c, principal point coordinates C_c(c_cx, c_cy), and pixel size μ_c(d_cx, d_cy) of the simulated camera.
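The camera setup above can be sketched as follows (a minimal Python illustration; the composition order R_z · R_y · R_x of the elementary rotations is an assumption, since the text only names the three angles):

```python
import math

def rotation_matrix(alpha_c, beta_c, gamma_c):
    """Rotation matrix R_c from the angles about the x, y, z axes.
    The composition order R_z * R_y * R_x is an assumption."""
    ca, sa = math.cos(alpha_c), math.sin(alpha_c)
    cb, sb = math.cos(beta_c), math.sin(beta_c)
    cg, sg = math.cos(gamma_c), math.sin(gamma_c)
    rx = [[1, 0, 0], [0, ca, -sa], [0, sa, ca]]
    ry = [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]
    rz = [[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(rz, matmul(ry, rx))

def world_to_camera(p_w, r_c, o_c):
    """[x_c, y_c, z_c]^T = R_c [x_w, y_w, z_w]^T - [x_c0, y_c0, z_c0]^T."""
    return [sum(r_c[i][j] * p_w[j] for j in range(3)) - o_c[i] for i in range(3)]
```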
Further, the specific process of step (c) is as follows:

Construct the projection matrix M_c from the set simulated camera parameters:

M_c = | f_c/d_cx     0       c_cx |
      |     0    f_c/d_cy    c_cy |
      |     0        0        1   |

where f_c is the focal length, w_c * h_c the resolution, C_c(c_cx, c_cy) the principal point coordinates, and μ_c(d_cx, d_cy) the pixel size;

the projection coordinates p_c(u_c, v_c) of the three-dimensional model onto the simulated camera's image plane are then obtained:

z_c [u_c, v_c, 1]^T = M_c [x_c, y_c, z_c]^T

According to this coordinate correspondence, the points within the resolution range (1 ≤ u_c ≤ w_c, 1 ≤ v_c ≤ h_c) are projected onto the simulated camera's image plane, giving the texture image I_c_texture(u_c, v_c; I_c) collected by the simulated camera and the corresponding depth image I_c_depth(u_c, v_c; z_c).
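The projection of step (c) can be sketched as follows (a minimal illustration of a pinhole projection; the focal length, principal point, and pixel size values are assumed for the example):

```python
# Minimal sketch of the pinhole projection of step (c); the parameter values
# below (8 mm lens, 4 um pixels, principal point at (320, 240)) are
# illustrative assumptions, not values from the patent.
def project_point(p_c, f_c, c_c, mu_c):
    """Project a camera-frame point (x_c, y_c, z_c) to pixel coordinates:
    u_c = f_c * x_c / (d_cx * z_c) + c_cx,  v_c = f_c * y_c / (d_cy * z_c) + c_cy."""
    x, y, z = p_c
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = f_c * x / (mu_c[0] * z) + c_c[0]
    v = f_c * y / (mu_c[1] * z) + c_c[1]
    return u, v

# A point on the optical axis projects onto the principal point.
u, v = project_point((0.0, 0.0, 500.0), f_c=8.0, c_c=(320.0, 240.0), mu_c=(0.004, 0.004))
```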
Further, the specific process of step (d) is as follows:

In terms of optical transmission, a projector can be regarded as a reversed camera, so the simulated projector is set up analogously to the simulated camera: the position O_p(x_p0, y_p0, z_p0) of the simulated projector's optical center in the three-dimensional model coordinate system and the rotation vector (α_p, β_p, γ_p) from the projector coordinate system to the three-dimensional model coordinate system determine a rotation matrix R_p, whereby the three-dimensional model is transferred from the barycentric coordinate system to the simulated projector coordinate system P_p(x_p, y_p, z_p):

[x_p, y_p, z_p]^T = R_p [x, y, z]^T - [x_p0, y_p0, z_p0]^T  (2.3)

Then, as for the simulated camera, set the focal length f_p, resolution w_p * h_p, principal point coordinates C_p(c_px, c_py), and pixel size μ_p(d_px, d_py), and construct the projection matrix M_p:

M_p = | f_p/d_px     0       c_px |
      |     0    f_p/d_py    c_py |
      |     0        0        1   |
Further, the specific process of step (e) is as follows:

Generate the structured light image I_p_struct(u_p, v_p; I_p) projected by the simulated projector according to its resolution and coding method. For sinusoidally coded structured light, the general formula is:

I_p = A + B·cos(2πf·x + φ)  (3.1)

where I_p is the coded gray value, A the average light intensity of the sinusoidal code, B its modulation degree, f its frequency, x the coordinate of the coded point, and φ its initial phase;

the structured light projected by the simulated projector is artificially defined, digitally coded structured light, which includes any one of line structured light, sinusoidal structured light, or pseudo-random coded structured light.
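The sinusoidal coding of eq. (3.1) can be sketched as follows (a minimal illustration; the vertical fringe direction and the 8-bit intensity and frequency values are assumptions):

```python
import math

# Minimal sketch of generating the sinusoidal pattern of eq. (3.1);
# A = B = 127.5 (8-bit range) and a 32-pixel fringe period are assumed.
def sinusoidal_pattern(width, height, a=127.5, b=127.5, freq=1.0 / 32, phase=0.0):
    """I_p(x) = A + B * cos(2*pi*f*x + phi), constant along each column
    (vertical fringes)."""
    row = [a + b * math.cos(2 * math.pi * freq * x + phase) for x in range(width)]
    return [list(row) for _ in range(height)]

pattern = sinusoidal_pattern(1024, 768)  # the 1024 x 768 resolution of fig. 3
```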
Further, the specific process of step (2) is as follows:

From the simulated projector parameters and the structured light image obtained in step (1), the correspondence between the coordinates p_p(u_p, v_p) of I_p_struct and the coordinates (x_p, y_p, z_p) of P_p is obtained;

through the chain of correspondences P_p ↔ P_w ↔ P_c, the correspondence between the coordinates p_p of I_p_struct and the coordinates p_c of I_c_texture is obtained;

define a weight coefficient ζ < 1, i.e. the proportion of projected-image pixels in the superimposed map, superimpose I_p and I_c into I_cp, and store the result at the camera image plane coordinates p_c(u_c, v_c), obtaining the texture image containing structured light I_c_struct(u_c, v_c; I_cp), where:

I_cp = (1 - ζ)·I_c + ζ·I_p  (4.2)
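The superposition of eq. (4.2) can be sketched as follows (a minimal illustration; ζ = 0.5 is an assumed value of the weight coefficient):

```python
# Minimal sketch of the superposition of eq. (4.2); zeta = 0.5 is an
# illustrative choice of the weight coefficient.
def superimpose(texture, projected, zeta=0.5):
    """I_cp = (1 - zeta) * I_c + zeta * I_p, stored per camera pixel (u_c, v_c)."""
    assert 0.0 <= zeta < 1.0, "the patent requires zeta < 1"
    return [[(1.0 - zeta) * i_c + zeta * i_p for i_c, i_p in zip(row_c, row_p)]
            for row_c, row_p in zip(texture, projected)]

# Each pixel is the average of texture and pattern when zeta = 0.5.
blended = superimpose([[100.0, 200.0]], [[255.0, 0.0]], zeta=0.5)
```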
Further, the specific process of step (3) is as follows:

Store, at each point p_c(u_c, v_c), its corresponding point p_p(u_p, v_p), obtaining mapping tables Map_x(u_c, v_c; u_p) and Map_y(u_c, v_c; v_p) from simulated camera coordinates to simulated projector coordinates; storing the coordinate differences instead gives the disparity maps I_disparity_x(u_c, v_c; u_p - u_c) and I_disparity_y(u_c, v_c; v_p - v_c).
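The mapping tables and disparity maps of step (3) can be sketched as follows (a minimal illustration; the `correspondence` lookup is an assumed stand-in for the coordinate chain established in step (2)):

```python
# Minimal sketch of step (3); `correspondence` is an assumed lookup that
# returns the projector pixel matched to a camera pixel, or None if unmatched.
def build_maps(correspondence, width, height):
    """Build Map_x, Map_y and the disparity maps u_p - u_c and v_p - v_c,
    each indexed by the camera pixel (u_c, v_c)."""
    map_x = [[None] * width for _ in range(height)]
    map_y = [[None] * width for _ in range(height)]
    disp_x = [[None] * width for _ in range(height)]
    disp_y = [[None] * width for _ in range(height)]
    for v_c in range(height):
        for u_c in range(width):
            match = correspondence(u_c, v_c)
            if match is None:
                continue
            u_p, v_p = match
            map_x[v_c][u_c], map_y[v_c][u_c] = u_p, v_p
            disp_x[v_c][u_c], disp_y[v_c][u_c] = u_p - u_c, v_p - v_c
    return map_x, map_y, disp_x, disp_y

# A purely horizontal 5-pixel shift yields disparity_x = 5 everywhere.
mx, my, dx, dy = build_maps(lambda u, v: (u + 5, v), width=4, height=2)
```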
Further, the specific process of step (4) is as follows:

A group of two-dimensional data is obtained from the three-dimensional information P(x, y, z; I) by means of the simulated camera and the simulated projector; the group comprises I_c_texture(u_c, v_c; I_c), I_c_depth(u_c, v_c; z_c), I_p_struct(u_p, v_p; I_p), I_c_struct(u_c, v_c; I_cp), Map_x(u_c, v_c; u_p), Map_y(u_c, v_c; v_p), I_disparity_x(u_c, v_c; u_p - u_c), and I_disparity_y(u_c, v_c; v_p - v_c);

another group of data is obtained by modifying any one or more parameters of the simulated camera and the simulated projector, including O_c(x_c0, y_c0, z_c0), (α_c, β_c, γ_c), f_c, w_c * h_c, C_c(c_cx, c_cy), μ_c(d_cx, d_cy), O_p(x_p0, y_p0, z_p0), (α_p, β_p, γ_p), f_p, w_p * h_p, C_p(c_px, c_py), μ_p(d_px, d_py), and the structured light coding method I_p;

an image data set is thus obtained from the three-dimensional data set P(x, y, z; I).
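The data-set generation of step (4) can be sketched as follows (a minimal illustration; `render_group` is an assumed callable wrapping steps (1)-(3) for one fixed parameter set, and the varied parameters are an arbitrary subset):

```python
import itertools

# Minimal sketch of step (4); `render_group` is an assumed callable that wraps
# steps (1)-(3) for one fixed parameter set and returns one data group.
def generate_dataset(render_group, focal_lengths, positions, codings):
    """Produce one data group per combination of the varied parameters."""
    return [render_group(f_c=f, o_p=o, coding=c)
            for f, o, c in itertools.product(focal_lengths, positions, codings)]

# With 2 focal lengths, 2 projector positions and 1 coding, 4 groups result.
groups = generate_dataset(lambda **kw: kw,
                          focal_lengths=[8.0, 12.0],
                          positions=[(100, 0, 0), (150, 0, 0)],
                          codings=["sinusoidal"])
```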
The invention has the following positive effects:
(1) Compared with acquiring a data set through actual collection, the method saves time and does not depend on hardware conditions.
(2) The method can fully utilize any existing 3D data, expanding the available data sources.
(3) The method is flexible and can generate data for any structured light coding, such as sine (cosine) structured light, line structured light, speckle structured light, and pseudo-random lattice structured light.
(4) The method can obtain multiple groups of two-dimensional data from one three-dimensional model by modifying parameters such as focal length, pixel size, resolution, and the relative position of the simulated camera and projector.
Drawings
FIG. 1 is a schematic diagram of the positional relationship between the three-dimensional model and the simulated camera, in which the image coordinate system is (u_c, v_c), the simulated camera coordinate system is (x_c, y_c, z_c), and the three-dimensional model coordinate system is (x_w, y_w, z_w); O_w is the origin of the three-dimensional model coordinate system; P_w is a point in the three-dimensional model coordinate system; O_c is the origin of the simulated camera coordinate system; R_c is the rotation matrix from the model coordinate system to the simulated camera coordinate system; T_c is the translation vector from the three-dimensional model coordinate system to the simulated camera coordinate system, i.e. the coordinates of O_c in the three-dimensional model coordinate system; f_c is the simulated camera focal length; C_c(c_cx, c_cy) is the image principal point; p_c is the projection of P_w on the image coordinate system; d_cx is the pixel width and d_cy the pixel height;
FIG. 2 is a schematic diagram of the triangular relationship between the simulated projector and the simulated camera, where P_w is a point in the three-dimensional model coordinate system; O_c is the origin of the simulated camera coordinate system; O_p is the origin of the simulated projector coordinate system; (u_c, v_c) is the simulated camera image coordinate system and c_c its principal point; (u_p, v_p) is the simulated projector image coordinate system and c_p its principal point; p_c is the projection of P_w on the simulated camera image coordinate system; p_p is the projection of P_w on the simulated projector image coordinate system;
fig. 3 is a schematic diagram of the sinusoidal structured light projected by the simulated projector.
Detailed Description
The present invention will be further described with reference to the following examples.
To overcome the difficulty of structured light data set acquisition and provide a convenient data set acquisition method for deep learning, the invention provides an image data set generation method based on a three-dimensional model. The three-dimensional model is projected into two-dimensional space through a simulated camera and a simulated projector, yielding the texture image and depth image collected by the simulated camera and the structured light image projected by the simulated projector; according to the coordinate correspondence, the structured light image and the texture image are fused into a synthesized structured light texture image; and from the correspondence of the three-dimensional model with the simulated camera and the simulated projector, the matching point pairs between the structured light texture image and the projected structured light image are obtained, giving the matching point coordinates and further the matching disparity map.
Example 1
The method for generating an image dataset based on a three-dimensional model in this embodiment includes the following steps:
(1) projecting the three-dimensional model to a simulated camera and a simulated projector respectively, to simulate the texture images collected by the camera and the structured light images projected by the projector. Step (1) specifically comprises: (a) acquiring a three-dimensional model; (b) setting up the simulated camera; (c) projecting the three-dimensional model to the simulated camera to simulate the texture image acquired by the camera; (d) setting up the simulated projector; (e) projecting the three-dimensional model to the simulated projector to simulate the structured light image projected by the projector. The three-dimensional model in step (a) may be obtained by a three-dimensional scanner, a depth sensing device, a coordinate measuring machine, manual modeling, or other means; the texture information in the three-dimensional model is grayscale or color information; one or more simulated cameras may be used in step (b); and one or more simulated projectors may be used in step (d);
(2) superposing the coding information of the structured light image on the texture image according to the corresponding relation of the coordinates to obtain the texture image containing the structured light;
(3) obtaining a matching point pair containing a structured light texture image and a structured light image of the analog projector according to the corresponding relation between the three-dimensional model and the analog camera and the analog projector, thereby obtaining a matching point coordinate and further obtaining a parallax map;
(4) and (3) generating a plurality of groups of corresponding data by changing the parameters in the step (1), the step (2) and the step (3) to generate an image data set.
Further, the specific process of step (b) is as follows:

First, set the origin of the three-dimensional model's coordinate system at the model's center of gravity, obtaining a model P_w(x_w, y_w, z_w; I_w) whose coordinate origin is at the center of gravity.

Second, taking the pinhole camera model as an example of the simulated camera model, set the origin of the simulated camera coordinate system, i.e. the position O_c(x_c0, y_c0, z_c0) of the simulated camera's optical center in the three-dimensional model coordinate system, and the rotation vector (α_c, β_c, γ_c) from the camera coordinate system to the three-dimensional model coordinate system, which determines a rotation matrix R_c:

R_c = R_z(γ_c) · R_y(β_c) · R_x(α_c)

where α_c is the rotation angle about the x-axis, β_c the rotation angle about the y-axis, and γ_c the rotation angle about the z-axis. The three-dimensional model is thereby transferred from the barycentric coordinate system to the simulated camera coordinate system P_c(x_c, y_c, z_c; I_c):

[x_c, y_c, z_c]^T = R_c [x_w, y_w, z_w]^T - [x_c0, y_c0, z_c0]^T

Then set the focal length f_c, resolution w_c * h_c, principal point coordinates C_c(c_cx, c_cy), and pixel size μ_c(d_cx, d_cy) of the simulated camera, as shown in fig. 1.
Further, the specific process of step (c) is as follows:

Construct the projection matrix M_c from the set simulated camera parameters:

M_c = | f_c/d_cx     0       c_cx |
      |     0    f_c/d_cy    c_cy |
      |     0        0        1   |

where f_c is the focal length, w_c * h_c the resolution, C_c(c_cx, c_cy) the principal point coordinates, and μ_c(d_cx, d_cy) the pixel size;

the projection coordinates p_c(u_c, v_c) of the three-dimensional model onto the simulated camera's image plane are then obtained:

z_c [u_c, v_c, 1]^T = M_c [x_c, y_c, z_c]^T

According to this coordinate correspondence, the points within the resolution range (1 ≤ u_c ≤ w_c, 1 ≤ v_c ≤ h_c) are projected onto the simulated camera's image plane, giving the texture image I_c_texture(u_c, v_c; I_c) collected by the simulated camera and the corresponding depth image I_c_depth(u_c, v_c; z_c), as shown in fig. 1.
Further, the specific process of step (d) is as follows:

In terms of optical transmission, a projector can be regarded as a reversed camera, so the simulated projector is set up analogously to the simulated camera: the position O_p(x_p0, y_p0, z_p0) of the simulated projector's optical center in the three-dimensional model coordinate system and the rotation vector (α_p, β_p, γ_p) from the projector coordinate system to the three-dimensional model coordinate system determine a rotation matrix R_p, whereby the three-dimensional model is transferred from the barycentric coordinate system to the simulated projector coordinate system P_p(x_p, y_p, z_p):

[x_p, y_p, z_p]^T = R_p [x, y, z]^T - [x_p0, y_p0, z_p0]^T  (2.3)

Then, as for the simulated camera, set the focal length f_p, resolution w_p * h_p, principal point coordinates C_p(c_px, c_py), and pixel size μ_p(d_px, d_py), and construct the projection matrix M_p:

M_p = | f_p/d_px     0       c_px |
      |     0    f_p/d_py    c_py |
      |     0        0        1   |

The simulated projector and the simulated camera form a triangular relationship, as shown in fig. 2.
Further, the specific process of step (e) is as follows:

Generate the structured light image I_p_struct(u_p, v_p; I_p) projected by the simulated projector according to its resolution and coding method. As shown in fig. 3, sinusoidally coded structured light at a resolution of 1024 * 768 has the general formula:

I_p = A + B·cos(2πf·x + φ)  (3.1)

where I_p is the coded gray value, A the average light intensity of the sinusoidal code, B its modulation degree, f its frequency, x the coordinate of the coded point, and φ its initial phase;

the structured light projected by the simulated projector is artificially defined, digitally coded structured light, which includes any one of line structured light, sinusoidal structured light, or pseudo-random coded structured light.
Further, the specific process of step (2) is as follows:

From the simulated projector parameters and the structured light image obtained in step (1), the correspondence between the coordinates p_p(u_p, v_p) of I_p_struct and the coordinates (x_p, y_p, z_p) of P_p is obtained;

through the chain of correspondences P_p ↔ P_w ↔ P_c, the correspondence between the coordinates p_p of I_p_struct and the coordinates p_c of I_c_texture is obtained;

define a weight coefficient ζ < 1, i.e. the proportion of projected-image pixels in the superimposed map, superimpose I_p and I_c into I_cp, and store the result at the camera image plane coordinates p_c(u_c, v_c), obtaining the texture image containing structured light I_c_struct(u_c, v_c; I_cp), where:

I_cp = (1 - ζ)·I_c + ζ·I_p  (4.2)
Further, the specific process of step (3) is as follows:

Store, at each point p_c(u_c, v_c), its corresponding point p_p(u_p, v_p), obtaining mapping tables Map_x(u_c, v_c; u_p) and Map_y(u_c, v_c; v_p) from simulated camera coordinates to simulated projector coordinates; storing the coordinate differences instead gives the disparity maps I_disparity_x(u_c, v_c; u_p - u_c) and I_disparity_y(u_c, v_c; v_p - v_c).
Further, the specific process of step (4) is as follows:

A group of two-dimensional data is obtained from the three-dimensional information P(x, y, z; I) by means of the simulated camera and the simulated projector; the group comprises I_c_texture(u_c, v_c; I_c), I_c_depth(u_c, v_c; z_c), I_p_struct(u_p, v_p; I_p), I_c_struct(u_c, v_c; I_cp), Map_x(u_c, v_c; u_p), Map_y(u_c, v_c; v_p), I_disparity_x(u_c, v_c; u_p - u_c), and I_disparity_y(u_c, v_c; v_p - v_c);

another group of data is obtained by modifying any one or more parameters of the simulated camera and the simulated projector, including O_c(x_c0, y_c0, z_c0), (α_c, β_c, γ_c), f_c, w_c * h_c, C_c(c_cx, c_cy), μ_c(d_cx, d_cy), O_p(x_p0, y_p0, z_p0), (α_p, β_p, γ_p), f_p, w_p * h_p, C_p(c_px, c_py), μ_p(d_px, d_py), and the structured light coding method I_p;

an image data set is thus obtained from the three-dimensional data set P(x, y, z; I).
Example 2
The method for generating an image dataset based on a three-dimensional model in this embodiment includes the following steps:
(1) projecting to a simulation camera and a simulation projector respectively through a three-dimensional model to simulate texture images collected by the camera and structured light images projected by the projector;
(2) superposing the coding information of the structured light image on the texture image according to the corresponding relation of the coordinates to obtain the texture image containing the structured light;
(3) obtaining a matching point pair containing a structured light texture image and a structured light image of the analog projector according to the corresponding relation between the three-dimensional model and the analog camera and the analog projector, thereby obtaining a matching point coordinate and further obtaining a parallax map;
(4) and (3) generating a plurality of groups of corresponding data by changing the parameters in the step (1), the step (2) and the step (3) to generate an image data set.
Example 3
The method for generating an image dataset based on a three-dimensional model in this embodiment includes the following steps:
(1) projecting to a simulation camera and a simulation projector respectively through a three-dimensional model to simulate texture images collected by the camera and structured light images projected by the projector; the step (1) specifically comprises the following steps: (a) acquiring a three-dimensional model; (b) simulating the setting of a camera; (c) projecting the three-dimensional model to a simulation camera to simulate a texture image acquired by the camera; (d) simulating the setting of a projector; (e) projecting the three-dimensional model to a simulation projector to simulate a structured light image projected by the projector;
(2) superposing the coding information of the structured light image on the texture image according to the corresponding relation of the coordinates to obtain the texture image containing the structured light;
(3) obtaining a matching point pair containing a structured light texture image and a structured light image of the analog projector according to the corresponding relation between the three-dimensional model and the analog camera and the analog projector, thereby obtaining a matching point coordinate and further obtaining a parallax map;
(4) and (3) generating a plurality of groups of corresponding data by changing the parameters in the step (1), the step (2) and the step (3) to generate an image data set.
Example 4
The method for generating an image dataset based on a three-dimensional model in this embodiment includes the following steps:
(1) projecting the three-dimensional model to a simulated camera and a simulated projector respectively, to simulate the texture images collected by the camera and the structured light images projected by the projector. Step (1) specifically comprises: (a) acquiring a three-dimensional model; (b) setting up the simulated camera; (c) projecting the three-dimensional model to the simulated camera to simulate the texture image acquired by the camera; (d) setting up the simulated projector; (e) projecting the three-dimensional model to the simulated projector to simulate the structured light image projected by the projector. The three-dimensional model in step (a) may be obtained by a three-dimensional scanner, a depth sensing device, a coordinate measuring machine, or manual modeling; the texture information in the three-dimensional model is grayscale or color information; one or more simulated cameras may be used in step (b); and one or more simulated projectors may be used in step (d);
(2) superposing the coding information of the structured light image on the texture image according to the corresponding relation of the coordinates to obtain the texture image containing the structured light;
(3) obtaining a matching point pair containing a structured light texture image and a structured light image of the analog projector according to the corresponding relation between the three-dimensional model and the analog camera and the analog projector, thereby obtaining a matching point coordinate and further obtaining a parallax map;
(4) and (3) generating a plurality of groups of corresponding data by changing the parameters in the step (1), the step (2) and the step (3) to generate an image data set.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not limited to these specific embodiments. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all such variations shall fall within the protection scope of the invention.
Claims (10)
1. A method for generating an image data set based on a three-dimensional model, comprising the steps of:
(1) projecting the three-dimensional model to a simulated camera and a simulated projector respectively, to simulate the texture image collected by the camera and the structured light image projected by the projector;
(2) superposing the coding information of the structured light image onto the texture image according to the coordinate correspondence, obtaining a texture image containing structured light;
(3) obtaining matching point pairs between the texture image containing structured light and the structured light image of the simulated projector according to the correspondence between the three-dimensional model and the simulated camera and simulated projector, thereby obtaining matching point coordinates and further obtaining a disparity map;
(4) generating multiple groups of corresponding data by varying the parameters in steps (1), (2) and (3), thereby generating an image data set.
2. The method for generating a three-dimensional model-based image dataset according to claim 1, wherein the step (1) comprises the steps of:
(a) acquiring a three-dimensional model;
(b) setting up the simulated camera;
(c) projecting the three-dimensional model to the simulated camera to simulate the texture image acquired by the camera;
(d) setting up the simulated projector;
(e) projecting the three-dimensional model to the simulated projector to simulate the structured light image projected by the projector.
3. The three-dimensional model-based image dataset generation method of claim 2, characterized in that:
the three-dimensional model in step (a) may be acquired by a three-dimensional scanner, a depth sensing device, a coordinate measuring machine, or manual modeling; the texture information in the three-dimensional model obtained in step (a) is gray-scale or color information; there may be one or more simulated cameras in step (b) and one or more simulated projectors in step (d).
4. The method for generating an image dataset based on a three-dimensional model according to claim 2, wherein the specific process of the step (b) is:
firstly, setting the origin of the three-dimensional model coordinate system at the center of gravity of the three-dimensional model, obtaining a model Pw(xw,yw,zw;Iw) whose coordinate origin lies at the center of gravity;
secondly, setting the origin of the simulated camera coordinate system, i.e. the position Oc(xc0,yc0,zc0) of the optical center of the simulated camera in the three-dimensional model coordinate system, and the rotation vector (αc,βc,γc) from the simulated camera coordinate system to the three-dimensional model coordinate system, determining a rotation matrix Rc;
wherein αc denotes the angle of rotation about the x-axis, βc the angle of rotation about the y-axis, and γc the angle of rotation about the z-axis; the three-dimensional model is thereby transferred from the barycentric coordinate system to the simulated camera coordinate system Pc(xc,yc,zc;Ic):
[xc,yc,zc]T = Rc[x,y,z]T - [xc0,yc0,zc0]T
then setting the parameters of the simulated camera: focal length fc, resolution wc*hc, principal point coordinates Cc(ccx,ccy) and pixel size μc(dcx,dcy).
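As an illustration of step (b), the sketch below builds a rotation matrix from the three angles (αc, βc, γc) and transfers model points into the simulated camera frame. The Euler-angle composition order Rz·Ry·Rx is an assumption, since the claim does not fix a convention:

```python
import numpy as np

def euler_rotation(alpha, beta, gamma):
    """Rotation matrix from three angles (radians) about the x, y and z
    axes. The composition order R = Rz @ Ry @ Rx is an assumed
    convention, not specified in the claim."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    Ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    Rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                   [np.sin(gamma),  np.cos(gamma), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def world_to_camera(points, R, origin):
    """Transfer model points (N x 3) from the barycentric frame to the
    simulated camera frame, mirroring eq. (2.3):
    P_c = R @ P_w - O_c."""
    return (R @ points.T).T - origin

# toy example: identity rotation, camera sitting at z = -1
points = np.array([[0.0, 0.0, 1.0]])
R = euler_rotation(0.0, 0.0, 0.0)
cam_pts = world_to_camera(points, R, np.array([0.0, 0.0, -1.0]))
print(cam_pts)  # [[0. 0. 2.]]
```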
5. The method for generating an image dataset based on a three-dimensional model according to claim 2, wherein the specific process of the step (c) is:
constructing a projection matrix Mc from the set simulated camera parameters, namely the focal length fc, resolution wc*hc, principal point coordinates Cc(ccx,ccy) and pixel size μc(dcx,dcy);
thus the projection coordinates pc(uc,vc) of the three-dimensional model onto the simulated camera image plane can be obtained;
according to the coordinate correspondence, the points within the resolution range (1 ≤ uc ≤ wc, 1 ≤ vc ≤ hc) are projected onto the simulated camera image plane, obtaining the texture image I_ctexture(uc,vc;Ic) acquired by the simulated camera and the corresponding depth image I_cdepth(uc,vc;zc).
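A minimal sketch of step (c), assuming a standard pinhole model for the projection matrix Mc (the claim's own matrix is not reproduced in this text); the function name and all numeric values are illustrative:

```python
import numpy as np

def project_points(P_c, f, c, mu, w, h):
    """Pinhole projection of camera-frame points P_c (N x 3, metres) to
    pixel coordinates, assuming the standard model
        u = f*x/(d_x*z) + c_x,  v = f*y/(d_y*z) + c_y,
    with focal length f, principal point c, pixel size mu and
    resolution w x h."""
    x, y, z = P_c[:, 0], P_c[:, 1], P_c[:, 2]
    u = f * x / (mu[0] * z) + c[0]
    v = f * y / (mu[1] * z) + c[1]
    # keep only points that fall inside the resolution range 1..w, 1..h
    inside = (u >= 1) & (u <= w) & (v >= 1) & (v <= h)
    return np.stack([u, v], axis=1), inside

pts = np.array([[0.01, 0.0, 0.5]])  # one camera-frame point
# u = 0.008*0.01/(1e-5*0.5) + 320 = 16 + 320 = 336, v = 240
uv, ok = project_points(pts, f=0.008, c=(320.0, 240.0),
                        mu=(1e-5, 1e-5), w=640, h=480)
print(uv, ok)
```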
6. The method for generating an image dataset based on a three-dimensional model according to claim 2, wherein the specific process of the step (d) is:
in optical transmission, a projector can be regarded as a reversed camera, so the simulated projector is set up analogously to the simulated camera: the position Op(xp0,yp0,zp0) of the optical center of the simulated projector in the three-dimensional model coordinate system and the rotation (αp,βp,γp) from the simulated projector coordinate system to the three-dimensional model coordinate system determine a rotation matrix Rp, whereby the three-dimensional model is transferred from the barycentric coordinate system to the simulated projector coordinate system Pp(xp,yp,zp):
[xp,yp,zp]T=Rp[x,y,z]T-[xp0,yp0,zp0]T (2.3)
then, analogously to the simulated camera, setting the focal length fp, resolution wp*hp, principal point coordinates Cp(cpx,cpy) and pixel size μp(dpx,dpy) of the simulated projector, and building a projection matrix Mp.
7. The method for generating an image dataset based on a three-dimensional model according to claim 2, wherein the specific process of the step (e) is:
generating the structured light image I_pstruct(up,vp;Ip) projected by the simulated projector according to its resolution and coding method; for sinusoidally coded structured light, the general formula is:
Ip=A+B cos(2πfx+φ) (3.1)
wherein Ip is the coded gray value, A the average light intensity of the sinusoidal code, B its modulation degree, f its frequency, x the coordinate of the coding point, and φ the initial phase of the sinusoidal code;
the structured light projected by the simulated projector is user-defined digitally coded structured light, comprising any one of line structured light, sinusoidal structured light, or pseudo-random coded structured light.
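Equation (3.1) can be rendered as a fringe image as follows; the choice A = B = 127.5 (mapping the pattern into an 8-bit range) and the frequency are illustrative values, not taken from the patent:

```python
import numpy as np

def sinusoidal_pattern(w, h, A=127.5, B=127.5, f=1/16, phi=0.0):
    """Generate a column-wise sinusoidal fringe image per eq. (3.1):
    I_p = A + B*cos(2*pi*f*x + phi). A = B = 127.5 keeps the values
    in the 8-bit range [0, 255] (an illustrative choice)."""
    x = np.arange(w)
    row = A + B * np.cos(2 * np.pi * f * x + phi)
    return np.tile(row, (h, 1))  # same fringe repeated on every row

pattern = sinusoidal_pattern(64, 48)
# at x = 0 and phi = 0, cos(0) = 1, so the intensity is A + B = 255
```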
8. The method for generating an image dataset based on a three-dimensional model according to claim 1, wherein the specific process of the step (2) is:
from the simulated projector parameters and the structured light image obtained in step (1), the correspondence between the coordinates pp(up,vp) of I_pstruct and the coordinates (xp,yp,zp) of Pp can be obtained;
through Pp, Pw and Pc, the correspondence between the coordinates pp of I_pstruct and the coordinates pc of I_ctexture is obtained;
defining a weight coefficient ζ < 1, i.e. the proportion of projected-image pixels in the overlay, Ip and Ic are superposed into Icp and stored in the camera image-plane coordinate system pc(uc,vc), obtaining the texture image containing structured light I_cstruct(uc,vc;Icp), wherein: Icp = (1-ζ)Ic + ζIp (4.2).
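The superposition of eq. (4.2) is a per-pixel blend; a minimal sketch, with ζ = 0.3 as an arbitrary example value:

```python
import numpy as np

def overlay(I_c, I_p, zeta=0.3):
    """Superpose the projected structured-light pattern onto the texture
    image per eq. (4.2): I_cp = (1 - zeta) * I_c + zeta * I_p.
    zeta < 1 is the weight of projector pixels in the overlay; 0.3 is an
    illustrative value, not taken from the patent."""
    return (1.0 - zeta) * I_c + zeta * I_p

texture = np.full((2, 2), 100.0)  # stand-in texture image I_c
pattern = np.full((2, 2), 200.0)  # stand-in structured-light image I_p
blended = overlay(texture, pattern, zeta=0.3)
# every pixel: 0.7*100 + 0.3*200 = 130
```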
9. The method for generating an image dataset based on a three-dimensional model according to claim 1, wherein the specific process of the step (3) is:
the point pp(up,vp) corresponding to pc(uc,vc) is stored in the pc(uc,vc) coordinate system, yielding the mapping tables Mapx(uc,vc;up) and Mapy(uc,vc;vp) from the simulated camera coordinates to the simulated projector coordinates; the disparity maps I_disparityx(uc,vc;up-uc) and I_disparityy(uc,vc;vp-vc) are obtained by storing the coordinate differences.
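A sketch of step (3) as described above: given the per-pixel projector coordinates stored on the camera image plane, the disparity maps are plain coordinate differences (the toy shift of (+5, -2) pixels is illustrative):

```python
import numpy as np

def disparity_maps(map_u, map_v):
    """Turn the camera-to-projector coordinate maps Map_x(u_c,v_c)=u_p
    and Map_y(u_c,v_c)=v_p into disparity maps by storing the
    coordinate differences u_p - u_c and v_p - v_c."""
    h, w = map_u.shape
    u_c, v_c = np.meshgrid(np.arange(w), np.arange(h))
    return map_u - u_c, map_v - v_c

# toy maps: every projector coordinate is shifted by (+5, -2) pixels
h, w = 4, 6
u_c, v_c = np.meshgrid(np.arange(w), np.arange(h))
d_x, d_y = disparity_maps(u_c + 5, v_c - 2)
print(d_x[0, 0], d_y[0, 0])  # 5 -2
```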
10. The method for generating an image dataset based on a three-dimensional model according to claim 1, wherein the specific process of the step (4) is as follows:
a set of two-dimensional data is obtained from the three-dimensional information P(x,y,z;I) by means of the simulated camera and the simulated projector, the two-dimensional data comprising I_ctexture(uc,vc;Ic), I_cdepth(uc,vc;zc), I_pstruct(up,vp;Ip), I_cstruct(uc,vc;Icp), Mapx(uc,vc;up), Mapy(uc,vc;vp), I_disparityx(uc,vc;up-uc) and I_disparityy(uc,vc;vp-vc);
changing any one or more of the parameters of the simulated camera and the simulated projector, including Oc(xc0,yc0,zc0), (αc,βc,γc), fc, wc*hc, Cc(ccx,ccy), μc(dcx,dcy), Op(xp0,yp0,zp0), (αp,βp,γp), fp, wp*hp, Cp(cpx,cpy), μp(dpx,dpy) and the structured light coding method Ip, yields another group of data;
the image data set is thus obtained from the three-dimensional data P(x,y,z;I).
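Step (4) amounts to sampling parameter configurations and re-rendering; a hypothetical sketch in which `render_fn` stands in for steps (1)-(3), and the parameter names and ranges are invented for illustration:

```python
import random

def generate_dataset(render_fn, n_samples=10, seed=0):
    """Sketch of step (4): draw a new simulated camera/projector
    configuration per sample and render one data group for each.
    `render_fn` is a hypothetical callback standing in for steps
    (1)-(3); the parameter ranges below are illustrative only."""
    rng = random.Random(seed)  # seeded for reproducible datasets
    dataset = []
    for _ in range(n_samples):
        params = {
            "O_c": [rng.uniform(-1.0, 1.0) for _ in range(3)],       # camera optical centre
            "angles_c": [rng.uniform(-0.2, 0.2) for _ in range(3)],  # (alpha, beta, gamma)
            "phi": rng.uniform(0.0, 6.283185307179586),              # sinusoidal phase
        }
        dataset.append(render_fn(params))
    return dataset

# usage with an identity "renderer" that just records the parameters
data = generate_dataset(lambda p: p, n_samples=10)
print(len(data))  # 10
```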
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910891377.3A CN110619601B (en) | 2019-09-20 | 2019-09-20 | Image data set generation method based on three-dimensional model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110619601A (en) | 2019-12-27 |
CN110619601B CN110619601B (en) | 2023-05-05 |
Family
ID=68923563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910891377.3A Active CN110619601B (en) | 2019-09-20 | 2019-09-20 | Image data set generation method based on three-dimensional model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110619601B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862317A (en) * | 2020-07-28 | 2020-10-30 | 杭州优链时代科技有限公司 | Clothes modeling method and system |
CN113870430A (en) * | 2021-12-06 | 2021-12-31 | 杭州灵西机器人智能科技有限公司 | Workpiece data processing method and device |
WO2022037688A1 (en) * | 2020-08-21 | 2022-02-24 | 先临三维科技股份有限公司 | Data reconstruction method and system, and scanning device |
CN111862317B (en) * | 2020-07-28 | 2024-05-31 | 杭州优链时代科技有限公司 | Clothing modeling method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016037486A1 (en) * | 2014-09-10 | 2016-03-17 | 深圳大学 | Three-dimensional imaging method and system for human body |
CN108242064A (en) * | 2016-12-27 | 2018-07-03 | 合肥美亚光电技术股份有限公司 | Three-dimensional rebuilding method and system based on face battle array structured-light system |
Non-Patent Citations (1)
Title |
---|
LI Qing et al.: "Research on visual measurement method based on underwater three-dimensional structured light", Journal of Dalian Ocean University |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||