CN109003250B - Fusion method of image and three-dimensional model - Google Patents

Fusion method of image and three-dimensional model

Info

Publication number
CN109003250B
CN109003250B CN201711379612.6A
Authority
CN
China
Prior art keywords
dimensional model
characteristic points
points
characteristic
video image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711379612.6A
Other languages
Chinese (zh)
Other versions
CN109003250A (en)
Inventor
刘文林
黄�隆
卢天发
张彬彬
江文涛
陈延艺
陈延行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ropt Technology Group Co ltd
Original Assignee
Ropt Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ropt Technology Group Co ltd
Priority to CN201711379612.6A
Publication of CN109003250A publication Critical patent/CN109003250A/en
Application granted granted Critical
Publication of CN109003250B publication Critical patent/CN109003250B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2008 Assembling, disassembling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Ink Jet (AREA)

Abstract

A method for fusing an image with a three-dimensional model. When a plurality of images are superimposed and fused, the images deform the same line segments, in both length and direction, according to a unified standard. A point Bj has reverse-mapped positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) with respect to the neighboring quadrilaterals b, c, d, e, and its position is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue, va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve. Through these technical steps, partially overlapping images are fused onto the three-dimensional model quickly and seamlessly, with consistent deformation of the length and direction of shared line segments, achieving smooth fusion.

Description

Fusion method of image and three-dimensional model
Technical Field
The invention relates to computer data processing technology, in particular to 3D modelling in computer graphics, video and picture stitching, and perspective view calculation.
Background
Currently, a system fusing 3DGIS with multiple videos is generally used to combine a three-dimensional scene model with video images. In such a system, after the real-time video image acquired from the video acquisition module is matched and aligned with the three-dimensional scene model, the three-dimensional scene information, geospatial information and the various attribute information of ground features are fused with the real-time video image, achieving rendering, display and augmented-reality effects. The system is divided into three parts: a video acquisition module, a 3DGIS module and an information fusion module. The video acquisition module and the 3DGIS module are already mature; the fusion of the three-dimensional scene information, the geospatial information and the attribute information of ground features is completed by the 3DGIS module. To fuse the real-time video image with the 3DGIS scene model, the prior art relies on projective texture mapping.
Projective texture mapping is equivalent to projecting the video image onto the three-dimensional model like a slide show. It has two advantages: first, textures correspond to spatial vertices in real time, so texture coordinates do not need to be specified in advance; second, projective texture mapping effectively avoids texture distortion. The internal and external parameters of the camera are obtained through camera calibration, the texture matrix is then derived from those parameters, and by adjusting the position of the video player the deformation of the image is made consistent with the three-dimensional model, so that the image acquired by the camera can be accurately projected into the three-dimensional model and the video image fused with it. However, with the rapid development of surveillance technology, large numbers of monitoring cameras have been added, forming multi-camera networks for inspection, monitoring and verification in which several cameras cover the same area simultaneously. Because of their different roles, these cameras shoot at different angles, from different distances and with different internal and external parameters, so each produces a different image deformation; to improve the fusion of image and three-dimensional model, each image must be distortion-corrected and cropped before projection. In practice it was found that when several images must be superimposed and fused on the three-dimensional model, their calibration positions and camera parameters are inconsistent, so the required distortion corrections differ, the resulting distortions differ, and the deformation of the length and direction of the same line segment differs between the upper and lower images when they are superimposed, making fusion difficult.
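To make the projective texture mapping described above concrete, the following sketch is offered purely as an illustration and is not taken from the patent. It assumes an ideal pinhole camera with known intrinsic matrix K and extrinsic pose (R, t) and no lens distortion, builds a texture matrix from those parameters, and projects a three-dimensional model vertex to normalized texture coordinates; the function names and numeric values are hypothetical.

```python
import numpy as np

def texture_matrix(K, R, t, width, height):
    """Build a 4x4 matrix mapping homogeneous world points to normalized
    [0, 1] texture coordinates (ideal pinhole model, no distortion)."""
    Rt = np.hstack([R, t.reshape(3, 1)])        # 3x4 extrinsic matrix [R | t]
    P = K @ Rt                                  # 3x4 projection to pixel coordinates
    P4 = np.vstack([P, [0.0, 0.0, 0.0, 1.0]])   # promote to 4x4 for chaining
    S = np.diag([1.0 / width, 1.0 / height, 1.0, 1.0])  # pixels -> [0, 1] texture space
    return S @ P4

def project_vertex(tex_mat, X):
    """Project a 3D model vertex X to (u, v) texture coordinates."""
    x = tex_mat @ np.append(X, 1.0)
    return x[0] / x[2], x[1] / x[2]             # perspective divide

if __name__ == "__main__":
    K = np.array([[1000.0, 0.0, 960.0],         # assumed intrinsics for a 1920x1080 camera
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)               # assumed extrinsics: camera at the origin
    M = texture_matrix(K, R, t, 1920, 1080)
    print(project_vertex(M, np.array([0.5, 0.2, 5.0])))
```

In a real 3DGIS fusion system this matrix would be chained with the rendering engine's model and view transforms; that part is omitted here.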
Disclosure of Invention
The invention aims to make every image undergo the same distortion through a novel calculation method, so that when a plurality of images are superimposed and fused they deform the same line segments, in both length and direction, according to a unified standard, thereby achieving consistent superimposition and fusion of the images.
The technical scheme is as follows:
step one: interpret the video image and select a number of feature points, i.e. a polygon with at most N vertices (N ≤ 2000). The video image is divided into areas, and the boundary of each area to be transformed is called the region boundary. Because the region boundary curve is irregular, it is divided into a number of straight or approximately straight segments, and each approximately straight segment is treated as a straight segment; the intersection point of two straight segments is taken as a feature point, also called a vertex. For convenience of calculation, the number of vertices of one area is generally kept at or below N;
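As one possible way of carrying out step one, the sketch below approximates an irregular region boundary with straight segments in the spirit of the Ramer-Douglas-Peucker algorithm and keeps the retained points as the polygon vertices (feature points). This is an illustrative assumption rather than the patent's prescribed procedure; the tolerance value and function names are hypothetical.

```python
import numpy as np

def simplify_boundary(points, tol=2.0):
    """Approximate an irregular region boundary with straight segments;
    the retained points are the polygon vertices (feature points)."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    d = points - start
    if seg_len == 0:
        dists = np.linalg.norm(d, axis=1)
    else:
        # Perpendicular distance of every point to the chord start -> end
        dists = np.abs(seg[0] * d[:, 1] - seg[1] * d[:, 0]) / seg_len
    idx = int(np.argmax(dists))
    if dists[idx] > tol:
        left = simplify_boundary(points[: idx + 1], tol)
        right = simplify_boundary(points[idx:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

def select_feature_points(boundary, max_vertices=2000, tol=2.0):
    """Coarsen the tolerance until no more than N vertices remain."""
    verts = simplify_boundary(boundary, tol)
    while len(verts) > max_vertices:
        tol *= 2.0
        verts = simplify_boundary(boundary, tol)
    return verts
```

Doubling the tolerance until at most N vertices remain reflects the requirement that the number of vertices of one area stay at or below N (N ≤ 2000).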
step two: under the control of environment management software, apply an environment simulation engine to interpret the three-dimensional model (a natural geographic model, a physical model or the like) and, against the feature points selected in the video image, mark out on the three-dimensional model areas with the same line directions, the same line lengths and the same number of feature points, so that the feature points of the three-dimensional model can coincide in space with the feature points selected in the video image;
step three: connect the feature points of the three-dimensional model selected in step two into polygons, and cut out the areas whose polygon vertices are those feature points;
step four:
1. calculate the pixel positions of the three-dimensional model feature points; let the manually selected set of feature points forming the polygon vertices be A, and let the automatically calculated set of pixel feature points be B;
2. calculate the pixel positions of the video image feature points. First, the manually selected vertices form a closed polygon, and the set of pixel positions of the feature points of this polygon is called A1. Second, the polygon A1 is divided into a number of small sub-regions: a connected area lying in the same plane forms one sub-region, and the sub-regions are further divided according to this plane-division principle; the set of pixel feature points calculated automatically after this division is called B1.
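The patent does not spell out how the sub-regions of A1 are generated. Purely as an illustration, the sketch below lays a regular grid over the manually drawn closed polygon (set A1) and keeps the interior grid points as the automatically computed pixel set B1; the grid step and the ray-casting inside test are assumptions, not part of the claimed method.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is pixel (x, y) inside the closed polygon `poly`
    (list of (px, py) vertices, i.e. the manually selected set A1)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def subdivide_polygon(poly, step=20):
    """Lay a regular grid over the polygon's bounding box and keep the grid
    points that fall inside it; these serve as the computed pixel set B1."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    b1 = []
    for y in range(int(min(ys)), int(max(ys)) + 1, step):
        for x in range(int(min(xs)), int(max(xs)) + 1, step):
            if point_in_polygon(x, y, poly):
                b1.append((x, y))
    return b1

# Hypothetical example: a rough quadrilateral road region from a 1080p frame
a1 = [(100, 200), (900, 180), (950, 700), (80, 720)]
b1 = subdivide_polygon(a1, step=50)
```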
Step five: judging the three-dimensional model, and extracting N characteristic points at the same position on the three-dimensional model by contrasting with a characteristic point set A selected from the video image so as to enable the characteristic points to be overlapped in space;
step six: connecting the N characteristic points selected from the three-dimensional model as polygons, and digging out the polygons with the characteristic points N as vertexes;
step seven: calculating pixel positions of the three-dimensional model feature points;
first, for each model point, identify the pixel point to which it maps; this pixel is affected by the quadrilateral in which the point lies and by the four quadrilaterals that share an edge with it.
Secondly, let Bj be a point inside quadrilateral a, and let the quadrilaterals b, c, d and e share the edges lb, lc, ld and le with a respectively; compute the perpendicular distances hb, hc, hd and he from Bj to the four sides of a, the reverse-mapped positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to b, c, d, e, and the reverse mapping (ua, va) of Bj on quadrilateral a.
The position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue; va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve.
Calculation shows that when this weight mapping and reverse pixel mapping are applied to the feature points of each video image, a good fusion effect is obtained for video recorded head-on; but when the video is recorded at a large inclination angle, serious deformation appears at the junction of two quadrilaterals, so a simple mapping method cannot be used in that case.
Step eight: therefore, the extensions of hb, hc, hd and he are drawn through the point Bj to intersect the opposite sides of quadrilateral a, defining the segments bd/db/ce/ec; the distances bd/db/ce/ec are obtained by two-point length calculation.
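The distances used in step eight reduce to elementary plane geometry. The sketch below is offered only as one possible reading: it computes the perpendicular distance hb from Bj to the edge lb and the point where the perpendicular through Bj meets the opposite side of quadrilateral a, so that the chord length db follows from a two-point distance. The edge coordinates and the point Bj are hypothetical.

```python
import math

def perpendicular_distance(p, a, b):
    """Perpendicular distance from point p to the line through edge (a, b)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(bx - ax, by - ay)

def foot_on_line(p, a, b):
    """Foot of the perpendicular dropped from p onto the line through (a, b)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    t = ((px - ax) * (bx - ax) + (py - ay) * (by - ay)) / ((bx - ax) ** 2 + (by - ay) ** 2)
    return (ax + t * (bx - ax), ay + t * (by - ay))

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (assumed not parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den
    return (px, py)

# Hypothetical quadrilateral a: edge lb at the bottom, its opposite edge at the top
lb = ((0.0, 0.0), (10.0, 0.0))
opposite = ((0.0, 8.0), (10.0, 9.0))
bj = (4.0, 3.0)

hb = perpendicular_distance(bj, *lb)          # distance from Bj to edge lb
foot = foot_on_line(bj, *lb)                  # where the perpendicular meets lb
top = line_intersection(foot, bj, *opposite)  # where its extension meets the opposite side
db = math.dist(foot, top)                     # two-point length used in wb = (1 - hb/db)/4
```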
The inverse mapping positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to the quadrilaterals b, c, d, e are calculated.
The inverse mapping of Bj on quadrilateral a is known as (ua, va).
The weighting of Bj with respect to quadrilateral b is wb= (1-hb/db)/4, and the same applies: bj weights with respect to quadrilateral c are wc= (1-hc/ec)/4, bj weights with respect to quadrilateral d are wd= (1-hd/bd)/4, bj weights with respect to quadrilateral e are we= (1-he/ce)/4;
the position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue; va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve.
Better results are obtained through this weight compensation and correction.
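Putting the weights and the blended position together, the following minimal sketch applies the formulas of steps seven and eight: the weights wb = (1-hb/db)/4 and their analogues, and the position ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue, va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve. The distances and reverse-mapped coordinates are assumed to be supplied by the geometric construction described above; the numeric values are illustrative only.

```python
def bj_weights(hb, hc, hd, he, db, ec, bd, ce):
    """Weights of point Bj with respect to the four edge-sharing
    quadrilaterals b, c, d, e (step eight)."""
    wb = (1 - hb / db) / 4
    wc = (1 - hc / ec) / 4
    wd = (1 - hd / bd) / 4
    we = (1 - he / ce) / 4
    return wb, wc, wd, we

def bj_position(ua, va, maps, weights):
    """Blend Bj's reverse mapping on its own quadrilateral a (ua, va) with
    its reverse mappings on the neighbours b, c, d, e.

    maps    = [(ub, vb), (uc, vc), (ud, vd), (ue, ve)]
    weights = (wb, wc, wd, we)
    """
    u = ua / 2 + sum(w * m[0] for w, m in zip(weights, maps))
    v = va / 2 + sum(w * m[1] for w, m in zip(weights, maps))
    return u, v

# Illustrative numbers only: Bj roughly in the middle of quadrilateral a
w = bj_weights(hb=10, hc=12, hd=11, he=9, db=22, ec=23, bd=21, ce=20)
u1, v1 = bj_position(0.40, 0.55,
                     [(0.38, 0.54), (0.42, 0.55), (0.40, 0.57), (0.41, 0.53)],
                     w)
```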
Step nine: performing weight mapping calculation on the characteristic points of each video image to enable the characteristic point positions of the video images to coincide with the characteristic point position projections of the three-dimensional model;
step ten: overlapping the characteristic points of each video image with the same characteristic points of the three-dimensional model to realize the fusion of the characteristic points of the video image with the same characteristic points of the three-dimensional model, and calculating the mapping deformation of other pixel points of each image according to the weights to realize the automatic fusion with the three-dimensional model;
step eleven: and re-pasting the polygon fused with the three-dimensional model back to the three-dimensional model.
Through these technical steps, partially overlapping images are fused onto the three-dimensional model quickly and seamlessly, with consistent deformation of the length and direction of shared line segments, achieving smooth fusion.
Drawings
FIG. 1 is one of the real scene monitoring images of the X-intersection photographed by the invention;
FIG. 2 is a schematic diagram of the present invention for partitioning an X intersection into closed polygons;
FIG. 3 is a schematic diagram of the invention for establishing a three-dimensional model polygon segmentation in cooperation with an X intersection;
FIG. 4 is a second image of the real scene monitoring of the intersection X in the present invention;
FIG. 5 is a schematic diagram of the invention for thinning and dividing subareas of an X-intersection monitoring image;
FIG. 6 is a schematic diagram of the overlapping of two real-scene monitoring images of the crossing at the X position and a three-dimensional model according to the invention;
FIG. 7 is a schematic diagram of the invention overlapping one of the real scenes of the intersection at the X position with the three-dimensional model;
FIG. 8 is a third image of the real scene monitoring of the intersection at X in accordance with the present invention;
FIG. 9 is a fourth embodiment of the present invention for a real-scene monitoring image of an intersection at X;
FIG. 10 is a graph showing the effect of the invention in which one of the real scenes of the intersection at X is superimposed on the three-dimensional model and then put back into the monitoring image.
Detailed Description
Example 1:
step one: interpreting the video image A of FIG. 2, and selecting 16 characteristic points;
step two: interpret the three-dimensional model and, against the feature points selected in the video image, extract 16 feature points at the same positions on the three-dimensional model as shown in FIG. 3, so that the feature points coincide in space;
step three: connecting the selected characteristic points in the three-dimensional model with polygons, and digging out the polygons with the characteristic points as vertexes;
step four: pixel position calculation is carried out on the three-dimensional model feature points according to FIG. 6;
first, for each model point, identify the pixel point to which it maps; this pixel is affected by the quadrilateral in which the point lies and by the four quadrilaterals that share an edge with it.
Secondly, let Bj be a point inside quadrilateral a, and let the quadrilaterals b, c, d and e share the edges lb, lc, ld and le with a respectively; compute the perpendicular distances hb, hc, hd and he from Bj to the four sides of a, the reverse-mapped positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to b, c, d, e, and the reverse mapping (ua, va) of Bj on quadrilateral a.
The position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue; va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve.
Calculation shows that when this weight mapping and reverse pixel mapping are applied to the feature points of each video image, a good fusion effect is obtained for video recorded head-on; but when the video is recorded at a large inclination angle, serious deformation appears at the junction of two quadrilaterals, so a simple mapping method cannot be used in that case.
Step five: therefore, the extensions of hb, hc, hd and he are drawn through the point Bj to intersect the opposite sides of quadrilateral a, defining the segments bd/db/ce/ec; the distances bd/db/ce/ec are obtained by two-point length calculation.
The inverse mapping positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to the quadrilaterals b, c, d, e are calculated.
The inverse mapping of Bj on quadrilateral a is known as (ua, va).
The weighting of Bj with respect to quadrilateral b is wb= (1-hb/db)/4, and the same applies: bj weights with respect to quadrilateral c are wc= (1-hc/ec)/4, bj weights with respect to quadrilateral d are wd= (1-hd/bd)/4, bj weights with respect to quadrilateral e are we= (1-he/ce)/4;
the position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue; va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve.
Better results are obtained through this weight compensation and correction.
Step six: performing weight mapping calculation on the characteristic points of each video image to enable the characteristic point positions of the video images to coincide with the characteristic point position projections of the three-dimensional model;
step seven: overlapping the characteristic points of each video image with the same characteristic points of the three-dimensional model to realize the fusion of the characteristic points of the video image with the same characteristic points of the three-dimensional model, and calculating the mapping deformation of other pixel points of each image according to the weights to realize the automatic fusion with the three-dimensional model;
step eight: and re-pasting the polygon fused with the three-dimensional model back to the three-dimensional model.
Example 2.
Step one: judging and reading the video image A, and selecting 1800 characteristic points;
step two: interpret the three-dimensional model and, against the feature points selected in the video image, extract 1800 feature points at the same positions on the three-dimensional model as shown in FIG. 3, so that the feature points coincide in space;
step three: connecting the selected characteristic points in the three-dimensional model with polygons, and digging out the polygons with the characteristic points as vertexes;
step four: pixel position calculation is carried out on the three-dimensional model feature points according to FIG. 6;
first, for each model point, identify the pixel point to which it maps; this pixel is affected by the quadrilateral in which the point lies and by the four quadrilaterals that share an edge with it.
Secondly, let Bj be a point inside quadrilateral a, and let the quadrilaterals b, c, d and e share the edges lb, lc, ld and le with a respectively; compute the perpendicular distances hb, hc, hd and he from Bj to the four sides of a, the reverse-mapped positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to b, c, d, e, and the reverse mapping (ua, va) of Bj on quadrilateral a.
The position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue; va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve.
Calculation shows that when this weight mapping and reverse pixel mapping are applied to the feature points of each video image, a good fusion effect is obtained for video recorded head-on; but when the video is recorded at a large inclination angle, serious deformation appears at the junction of two quadrilaterals, so a simple mapping method cannot be used in that case.
Step five: therefore, the extensions of hb, hc, hd and he are drawn through the point Bj to intersect the opposite sides of quadrilateral a, defining the segments bd/db/ce/ec; the distances bd/db/ce/ec are obtained by two-point length calculation.
The inverse mapping positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to the quadrilaterals b, c, d, e are calculated.
The inverse mapping of Bj on quadrilateral a is known as (ua, va).
The weighting of Bj with respect to quadrilateral b is wb= (1-hb/db)/4, and the same applies: bj weights with respect to quadrilateral c are wc= (1-hc/ec)/4, bj weights with respect to quadrilateral d are wd= (1-hd/bd)/4, bj weights with respect to quadrilateral e are we= (1-he/ce)/4;
the position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue; va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve.
Better results are obtained through this weight compensation and correction.
Step six: performing weight mapping calculation on the characteristic points of each video image to enable the characteristic point positions of the video images to coincide with the characteristic point position projections of the three-dimensional model;
step seven: overlapping the characteristic points of each video image with the same characteristic points of the three-dimensional model to realize the fusion of the characteristic points of the video image with the same characteristic points of the three-dimensional model, and calculating the mapping deformation of other pixel points of each image according to the weights to realize the automatic fusion with the three-dimensional model;
step eight: and re-pasting the polygon fused with the three-dimensional model back to the three-dimensional model.

Claims (4)

1. A fusion method of an image and a three-dimensional model is characterized in that
Step one: judging the video image, and selecting a plurality of characteristic points as vertexes to form a polygon;
step two: interpreting the three-dimensional model by using an environment simulation engine, and drawing areas with the same linear direction, the same linear length and the same number of characteristic points on the three-dimensional model by contrasting the characteristic points selected in the video image, so that the characteristic points of the three-dimensional model can be overlapped with the characteristic points selected in the video image in space;
step three: connecting the characteristic points in the three-dimensional model selected in the second step as polygons, and digging out all the characteristic points as areas of polygon vertexes;
step four: 1) Calculating pixel positions of the three-dimensional model feature points; setting the selected polygon characteristic point set as the vertex as A, and automatically calculating the characteristic point set of the pixel as B;
2) Calculating pixel positions of feature points of the video image; selecting a set of pixel positions of the feature points of which the vertexes form a closed polygon, wherein the set is called A1; automatically calculating a characteristic point set of pixels to be called B1;
step five: judging the three-dimensional model, and selecting N characteristic points on the three-dimensional model by contrasting with the video image characteristic point set A so as to enable the N characteristic points to be overlapped with the video image characteristic point set A in space;
step six: connecting N selected characteristic points in the three-dimensional model as polygons, and digging out the polygons with the N characteristic points as vertexes;
step seven: calculating pixel positions of the three-dimensional model feature points;
firstly, for each model feature point, identify the quadrilateral a in which its corresponding mapped video image pixel lies, and find the four quadrilaterals b, c, d and e that share edges with a;
secondly, setting points Bj in a quadrangle a, wherein the quadrangles b, c, d and e respectively have common edges lb, lc, ld and le with a;
thirdly, calculate the perpendicular distances hb, hc, hd and he from the point Bj to the four sides of quadrilateral a, the reverse-mapped positions (ub, vb), (uc, vc), (ud, vd), (ue, ve) of Bj with respect to the quadrilaterals b, c, d, e, and the reverse mapping (ua, va) of Bj on quadrilateral a;
the position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue;
va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve;
step eight: performing weight mapping calculation on the characteristic points of each video image to enable the characteristic point positions of the video images to coincide with the characteristic point position projections of the three-dimensional model;
step nine: overlapping the characteristic points of each video image with the same characteristic points of the three-dimensional model to realize the fusion of the characteristic points of the video image with the same characteristic points of the three-dimensional model, and calculating the mapping deformation of other pixel points of each image according to the weights to realize the automatic fusion with the three-dimensional model;
step ten: and re-pasting the polygon fused with the three-dimensional model back to the three-dimensional model.
2. The method of claim 1, wherein, when the image is a non-frontally captured image, step eight is changed to:
step eight: respectively making extended lines of hb, hc, hd and he at the Bj point to pass through the Bj point and intersect with opposite side lines of the corresponding sides in the quadrangle a to define bd/db/ce/ec; calculating the distance bd/db/ce/ec through two-point length calculation;
the weighting of Bj with respect to quadrilateral b is wb= (1-hb/db)/4, and the same applies: bj weights with respect to quadrilateral c are wc= (1-hc/ec)/4, bj weights with respect to quadrilateral d are wd= (1-hd/bd)/4, bj weights with respect to quadrilateral e are we= (1-he/ce)/4;
the position of Bj is ua1 = ua/2 + wb*ub + wc*uc + wd*ud + we*ue;
va1 = va/2 + wb*vb + wc*vc + wd*vd + we*ve;
performing weight mapping calculation on the characteristic points of each video image to enable the characteristic point positions of the video images to coincide with the characteristic point position projections of the three-dimensional model;
wherein, step eight further comprises:
step nine: overlapping the characteristic points of each video image with the same characteristic points of the three-dimensional model to realize the fusion of the characteristic points of the video image with the same characteristic points of the three-dimensional model, and calculating the mapping deformation of other pixel points of each image according to the weights to realize the automatic fusion with the three-dimensional model;
step ten: and re-pasting the polygon fused with the three-dimensional model back to the three-dimensional model.
3. The method of claim 1, wherein the three-dimensional model is a natural geographic model or a physical model of a building, bridge or airport.
4. The method of fusion of an image and a three-dimensional model according to claim 1, wherein the number of selected feature points is less than or equal to 2000.
CN201711379612.6A 2017-12-20 2017-12-20 Fusion method of image and three-dimensional model Active CN109003250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711379612.6A CN109003250B (en) 2017-12-20 2017-12-20 Fusion method of image and three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711379612.6A CN109003250B (en) 2017-12-20 2017-12-20 Fusion method of image and three-dimensional model

Publications (2)

Publication Number Publication Date
CN109003250A CN109003250A (en) 2018-12-14
CN109003250B true CN109003250B (en) 2023-05-30

Family

ID=64574059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711379612.6A Active CN109003250B (en) 2017-12-20 2017-12-20 Fusion method of image and three-dimensional model

Country Status (1)

Country Link
CN (1) CN109003250B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383205B (en) * 2020-03-11 2023-03-24 西安应用光学研究所 Image fusion positioning method based on feature points and three-dimensional model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN105303615A (en) * 2015-11-06 2016-02-03 中国民航大学 Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image
CN106373148A (en) * 2016-08-31 2017-02-01 中国科学院遥感与数字地球研究所 Equipment and method for realizing registration and fusion of multipath video images to three-dimensional digital earth system
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW594594B (en) * 2003-05-16 2004-06-21 Ind Tech Res Inst A multilevel texture processing method for mapping multiple images onto 3D models

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN105303615A (en) * 2015-11-06 2016-02-03 中国民航大学 Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image
CN106373148A (en) * 2016-08-31 2017-02-01 中国科学院遥感与数字地球研究所 Equipment and method for realizing registration and fusion of multipath video images to three-dimensional digital earth system
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Modeling and simulations of three-dimensional laser imaging based on space-variant structure; Jie Cao et al.; Optics & Laser Technology; 20160430; full text *
Dynamic texture mapping of three-dimensional models; Liu Meng; China Master's Theses Full-text Database (Information Science and Technology); 20160215; full text *

Also Published As

Publication number Publication date
CN109003250A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
US11410320B2 (en) Image processing method, apparatus, and storage medium
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
CN108876926B (en) Navigation method and system in panoramic scene and AR/VR client equipment
CN103226830B (en) The Auto-matching bearing calibration of video texture projection in three-dimensional virtual reality fusion environment
US9665984B2 (en) 2D image-based 3D glasses virtual try-on system
Wei et al. Fisheye video correction
CN104966316A (en) 3D face reconstruction method, apparatus and server
CN102945565A (en) Three-dimensional photorealistic reconstruction method and system for objects and electronic device
WO2023280038A1 (en) Method for constructing three-dimensional real-scene model, and related apparatus
CN111260777A (en) Building information model reconstruction method based on oblique photography measurement technology
CN105006021A (en) Color mapping method and device suitable for rapid point cloud three-dimensional reconstruction
CN106534670B (en) It is a kind of based on the panoramic video generation method for connecting firmly fish eye lens video camera group
IL284840B (en) Damage detection from multi-view visual data
CN109523622A (en) A kind of non-structured light field rendering method
CN110689476A (en) Panoramic image splicing method and device, readable storage medium and electronic equipment
CN111273877B (en) Linkage display platform and linkage method for live-action three-dimensional data and two-dimensional grid picture
CN110782507A (en) Texture mapping generation method and system based on face mesh model and electronic equipment
TW202309834A (en) Model reconstruction method, electronic device and computer-readable storage medium
TWI489859B (en) Image warping method and computer program product thereof
CN109003250B (en) Fusion method of image and three-dimensional model
CN118247429A (en) Air-ground cooperative rapid three-dimensional modeling method and system
Tian et al. Registration and occlusion handling based on the FAST ICP-ORB method for augmented reality systems
CN109461116B (en) 720 panorama unfolding monitoring method based on opengl
Kang et al. Automatic texture reconstruction of 3d city model from oblique images
CN116801115A (en) Sparse array camera deployment method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 361008 Fujian province Xiamen Software Park Siming District Road No. 59 102 two expected

Applicant after: ROPT TECHNOLOGY GROUP Co.,Ltd.

Address before: 361008 Fujian province Xiamen Software Park Siming District Road No. 59 102 two expected

Applicant before: Ropeok (Xiamen) Technology Group Co.,Ltd.

GR01 Patent grant
GR01 Patent grant