CN108154553A - Seamless fusion method and device for a three-dimensional model and surveillance video - Google Patents
- Publication number
- CN108154553A CN108154553A CN201810008558.2A CN201810008558A CN108154553A CN 108154553 A CN108154553 A CN 108154553A CN 201810008558 A CN201810008558 A CN 201810008558A CN 108154553 A CN108154553 A CN 108154553A
- Authority
- CN
- China
- Prior art keywords
- micro-facet
- target
- texture
- coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a seamless fusion method and device for a three-dimensional model and surveillance video, relating to the fields of surveillance video systems, three-dimensional GIS systems, and three-dimensional model rendering. In the seamless fusion method provided by the invention, a depth map is built to determine whether a target micro-facet is occluded; if the micro-facet lies inside the view frustum but is occluded, it is rendered with the original texture of the three-dimensional model, which improves the display quality of the fusion between the three-dimensional model and the surveillance video.
Description
Technical field
The present invention relates to the fields of surveillance video systems, three-dimensional GIS systems, and three-dimensional model rendering, and in particular to a seamless fusion method and device for a three-dimensional model and surveillance video.
Background art
In the related art, techniques exist for fusing a three-dimensional model with surveillance video. Such a technique typically extracts video frames from a surveillance video stream and projects them into a three-dimensional scene, achieving full spatio-temporal fusion of the video data with the three-dimensional model data, so that the three-dimensional model matches, in real time, the real scene captured by the camera. The technique enables an intuitive presentation of the three-dimensional scene and fuses the video location with the surrounding three-dimensional information.
In the related art, the imagery within the camera's visible area can be rendered onto the three-dimensional model across space and time, thereby fusing the model with the surveillance video. However, owing to the complexity of surveillance scenes, the three-dimensional map models built in the related art are unsatisfactory.
Summary of the invention
The object of the present invention is to provide a seamless fusion method and device for a three-dimensional model and surveillance video.
In a first aspect, an embodiment of the present invention provides a seamless fusion method for a three-dimensional model and surveillance video, including:
building a depth map of the three-dimensional scene corresponding to a target video image;
obtaining a video frame and generating a video frame texture;
computing, for a target micro-facet of the three-dimensional model, its coordinates in the normalized device coordinate system and its texture coordinates in the video frame texture;
judging, from the target micro-facet's normalized device coordinates, whether the target micro-facet lies inside the view frustum;
if the target micro-facet is inside the view frustum, reading the micro-facet's depth information from the depth map of the target video image using its texture coordinates in the video frame texture, and judging from that depth information whether the micro-facet is occluded; if the target micro-facet is not inside the view frustum, rendering it with the original texture of the three-dimensional model;
if it is occluded, rendering the target micro-facet with the original texture of the three-dimensional model.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, further including:
if the micro-facet is not occluded, obtaining the texture color via the micro-facet's texture coordinates in the video frame texture and rendering the target micro-facet with that color.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, wherein the step of building the depth map of the three-dimensional scene corresponding to the target video image includes:
computing the camera's view matrix from the three-dimensional coordinates of the camera position and the camera's attitude angles;
computing the camera's perspective projection matrix from the camera's field of view and the video's aspect ratio;
creating a depth texture object;
configuring a framebuffer that directs the rendering pipeline to store the depth map in the depth texture object;
rendering the three-dimensional scene with the view matrix and perspective projection matrix to generate the depth map.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, wherein the step of obtaining a video frame and generating a video frame texture includes:
reading a video frame, and updating it at the video's frame rate for fusion with the three-dimensional model, so as to achieve a dynamic fusion effect;
calling the three-dimensional API to store the two-dimensional video frame image in a texture, generating the video frame texture.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, wherein the step of computing the target micro-facet's coordinates in the normalized device coordinate system and its texture coordinates in the video frame texture includes:
using a GPU vertex shader to transform the three-dimensional model's vertex coordinates from the model coordinate system into the clip coordinate system;
rasterization-interpolating the clip-space vertex coordinates in the hardware rendering pipeline to obtain, in the pixel shader, the homogeneous clip-space coordinates (x, y, z, w) of each micro-facet of the model, where x, y and z are the point's projections on the X, Y and Z axes of the clip coordinate system, and w is the divisor that projects (x, y, z) into the normalized device coordinate system;
converting the target micro-facet's homogeneous coordinates into ordinary three-dimensional coordinates, giving its coordinates (x/w, y/w, z/w) in the normalized device coordinate system;
computing, from those normalized device coordinates, the target micro-facet's texture coordinates in the video frame texture, ((x/w + 1) * 0.5, (y/w + 1) * 0.5).
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, wherein the step of judging from the target micro-facet's normalized device coordinates whether it lies inside the view frustum includes:
judging whether the x/w and y/w of the micro-facet's normalized device coordinates lie on the closed interval [-1, 1]; if not, judging that the target micro-facet is outside the camera's view frustum;
if they do, computing the value of the micro-facet's normalized device coordinate z/w, and judging from that value whether the micro-facet is inside the view frustum.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation of the first aspect, wherein the step of computing the micro-facet's normalized device coordinate z/w and judging from it whether the micro-facet is inside the view frustum includes:
if the three-dimensional API is OpenGL, a z/w value on the closed interval [-1, 1] indicates that the target micro-facet is inside the view frustum, and a z/w value outside [-1, 1] indicates that it is not.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation of the first aspect, wherein the step of computing the micro-facet's normalized device coordinate z/w and judging from it whether the micro-facet is inside the view frustum includes:
if the three-dimensional API is DirectX, a z/w value on the closed interval [0, 1] indicates that the target micro-facet is inside the view frustum, and a z/w value outside [0, 1] indicates that it is not.
With reference to the first aspect, an embodiment of the present invention provides an eighth possible implementation of the first aspect, wherein the step of reading the target micro-facet's depth z_n from the depth map of the target video image via its texture coordinates in the video frame texture, and judging from z_n whether the micro-facet is occluded, includes:
if the three-dimensional API is OpenGL, comparing the micro-facet's (z/w + 1) * 0.5 with z_n: if (z/w + 1) * 0.5 is less than or equal to z_n, the target micro-facet is not occluded; if (z/w + 1) * 0.5 is greater than z_n, the target micro-facet is occluded;
if the three-dimensional API is DirectX, comparing the micro-facet's z/w with z_n: if z/w is less than or equal to z_n, the micro-facet is not occluded; if z/w is greater than z_n, the micro-facet is occluded.
In a second aspect, an embodiment of the present invention provides a seamless fusion device for a three-dimensional model and surveillance video, including:
a depth map generation module, for building the depth map of the three-dimensional scene corresponding to the target video image;
a video frame texture generation module, for obtaining a video frame and generating a video frame texture;
a computing module, for computing the target micro-facet's coordinates in the normalized device coordinate system and its texture coordinates in the video frame texture;
a judgment module, for judging from the target micro-facet's normalized device coordinates whether it lies inside the view frustum;
a first processing module, for, when the target micro-facet is inside the view frustum, reading the micro-facet's depth information from the depth map of the target video image via its texture coordinates in the video frame texture, and judging from that depth information whether the micro-facet is occluded; and, when the target micro-facet is not inside the view frustum, rendering it with the original texture of the three-dimensional model;
a second processing module, for rendering the target micro-facet with the original texture of the three-dimensional model when it is occluded.
In the seamless fusion method for a three-dimensional model and surveillance video provided by the embodiments of the present invention, a depth map is built to determine whether the target micro-facet is occluded; if a micro-facet is inside the view frustum but occluded, it is rendered with the original texture of the three-dimensional model, which improves the display quality of the fusion between the three-dimensional model and the surveillance video.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and therefore should not be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the basic seamless fusion of a three-dimensional model and surveillance video in the related art;
Fig. 2 is a schematic flowchart of the basic seamless fusion method for a three-dimensional model and surveillance video provided by an embodiment of the present invention;
Fig. 3 is a first detailed schematic flowchart of the seamless fusion method for a three-dimensional model and surveillance video provided by an embodiment of the present invention;
Fig. 4 is a second detailed schematic flowchart of the seamless fusion method for a three-dimensional model and surveillance video provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Plainly, the described embodiments are only some of the embodiments of the present invention, not all of them. The components of the embodiments, as generally described and illustrated in the drawings herein, could be arranged and designed in a wide variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the related art there exists a technique for fusing a three-dimensional model with surveillance video; its flow, shown in Fig. 1, is as follows:
Step 1: establish the view matrix and projection matrix of a specified camera;
Step 2: obtain one frame of video image from that camera;
Step 3: from the video image, view matrix and projection matrix of step 2, compute whether a specified micro-facet of the corresponding three-dimensional model appears inside the specified camera's view frustum. Here, a micro-facet is a facet generated by rasterization in the three-dimensional rendering pipeline of OpenGL or DirectX (the surface of the three-dimensional model can be regarded as a set of such small facets), and the view frustum is the region of space visible to the camera, i.e. the set of locations of the imaged scene (including both surfaces the camera can capture directly and surfaces it cannot, that is, occluded faces). If the judgment is yes, perform step 4; if no, perform step 5;
Step 4: render the specified micro-facet with the video image;
Step 5: render the specified micro-facet with the original texture of the three-dimensional model.
In step 4, there are two cases when rendering a micro-facet with the video image: the micro-facet is occluded, or it is not. In the related art, in either case the micro-facet is rendered with the video image; even when the micro-facet does not appear in the video image (because it is occluded), a corresponding algorithm is still used to construct a texture for it and render it.
However, the inventors found that rendering a micro-facet that lies inside the camera's view frustum but is occluded in this way is inappropriate, chiefly because such rendering of an occluded micro-facet is done purely by estimation, and the result differs considerably from reality.
The inventors therefore provide the following seamless fusion method for a three-dimensional model and surveillance video, shown in Fig. 2:
Step S101: build the depth map of the three-dimensional scene corresponding to the target video image;
Step S102: obtain a video frame and generate a video frame texture;
Step S103: compute the target micro-facet's coordinates in the normalized device coordinate system and its texture coordinates in the video frame texture;
Step S104: judge from the target micro-facet's normalized device coordinates (which, by construction, already account for the camera's view frustum) whether the micro-facet is inside the view frustum;
Step S105: if the target micro-facet is inside the view frustum, read its depth information from the depth map of the target video image using its texture coordinates in the video frame texture, and judge from that depth information whether it is occluded; if it is not inside the view frustum, render it with the original texture of the three-dimensional model;
Step S106: if it is occluded, render it with the original texture of the three-dimensional model; if it is not occluded, obtain the texture color via the micro-facet's texture coordinates in the video frame texture and render the target micro-facet with that color.
In the above steps, the depth map records the distance between the camera and each point (in other words, each micro-facet) of the three-dimensional model. Normally the target video image corresponds to many objects, and each object is composed of multiple micro-facets. A video is a sequence of images, and a video frame is one of those images; a video frame is an image held in memory and cannot be used directly for texture mapping of the three-dimensional model. The three-dimensional API is called to save the two-dimensional video frame as a texture, generating the video frame texture. The video frame texture is the image stored on the GPU in the format the three-dimensional API requires, and it can be used for texture mapping of the three-dimensional model. A video frame and a video frame texture thus differ in storage location and storage format; only the location and format of the video frame texture can serve for texture mapping of the three-dimensional model.
Before building the depth map and determining the target micro-facet's texture coordinates in the video frame texture, transformation matrices must be computed from parameters such as the camera position, the field of view and the video's aspect ratio. These matrices transform the three-dimensional model into the clip coordinate system; that is, all quantities are placed in a unified coordinate system before the subsequent computation.
The process of generating the depth map, i.e. step S101, can be divided into the following steps:
1. Compute the camera's view matrix from the three-dimensional coordinates of the camera position and the camera's attitude angles. This is essentially an exterior orientation transformation, moving the three-dimensional model from its world coordinate system into the camera coordinate system;
2. Compute the camera's perspective projection matrix from the camera's field of view and the video's aspect ratio. This is essentially an interior orientation transformation, moving the model from the camera coordinate system into the clip coordinate system;
3. Create a depth texture object, i.e. allocate storage for the depth texture in the system; to improve resolution, its width and height should be set to the maximum the three-dimensional API supports;
4. Configure a framebuffer that directs the rendering pipeline to store the depth map in the depth texture object; the depth values in the depth map describe the distance between the camera and each specified micro-facet;
5. Render the three-dimensional scene with the above view matrix and perspective projection matrix; after rendering, the three-dimensional API (e.g. OpenGL or DirectX) generates the depth map and, through the configured framebuffer, stores it in the depth texture.
When the camera's position or attitude changes, or objects in the scene move, the depth map must be re-rendered. Here, an API (Application Programming Interface) is a set of predefined functions whose purpose is to give application developers the ability to access a group of routines based on certain software or hardware, without requiring access to source code or knowledge of the internal workings. OpenGL (Open Graphics Library) is a professional, cross-language, cross-platform graphics programming interface specification; used for three-dimensional (and also two-dimensional) imagery, it is a powerful, convenient low-level graphics library. DirectX (Direct eXtension, DX for short) is a multimedia programming interface created by Microsoft, implemented in C++ and following COM. It is widely used for game development on Microsoft Windows and the Microsoft XBOX family (XBOX, XBOX 360 and XBOX ONE), and supports only those platforms.
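Steps 1 and 2 above can be sketched numerically. The following Python sketch is illustrative only and not part of the claimed embodiment: it builds a view matrix from the camera position and attitude angles, and an OpenGL-style perspective projection matrix from the field of view and aspect ratio. The yaw-pitch-roll rotation order and the near/far plane parameters are assumed conventions for the example.

```python
import math

def mat_mul(a, b):
    # 4x4 matrix product, row-major lists of lists
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    # 4x4 matrix applied to a homogeneous column vector
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def view_matrix(eye, yaw=0.0, pitch=0.0, roll=0.0):
    # View matrix from camera position and attitude angles (radians).
    # The rotation order Ry(yaw) * Rx(pitch) * Rz(roll) is an assumed
    # convention; real systems must match their camera's definition.
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    ry = [[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]]
    rx = [[1, 0, 0, 0], [0, cp, -sp, 0], [0, sp, cp, 0], [0, 0, 0, 1]]
    rz = [[cr, -sr, 0, 0], [sr, cr, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    r = mat_mul(ry, mat_mul(rx, rz))                       # camera orientation
    rt = [[r[j][i] for j in range(4)] for i in range(4)]   # inverse rotation
    t = [[1, 0, 0, -eye[0]], [0, 1, 0, -eye[1]],
         [0, 0, 1, -eye[2]], [0, 0, 0, 1]]                 # move eye to origin
    return mat_mul(rt, t)                                  # world -> camera

def perspective_matrix(fov_y, aspect, near, far):
    # OpenGL-style perspective projection from the vertical field of view
    # and the video's aspect ratio; the camera looks down the -Z axis.
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]
```

With these matrices, a world-space point multiplied by projection times view yields clip-space homogeneous coordinates; after the divide by w, a point on the near plane lands at z/w = -1 under the OpenGL convention used in this document.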
Step S102 can be divided into the following steps, as shown in Fig. 3:
S1021: read a video frame, and update it at the video's frame rate for fusion with the three-dimensional model, achieving a dynamic fusion effect;
S1022: call the three-dimensional API to store the two-dimensional video frame image in a texture, generating the video frame texture.
Here, "update" reflects that video is a moving picture: the video frame changes over time, so the video frame texture used for fusion with the three-dimensional model also changes dynamically, and the fusion result changes with it. That is, the data used for fusion varies in real time and is dynamically refreshed. The frame rate is the number of frames displayed per second, i.e. how many images are played in one second.
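The frame-rate-driven update of S1021 can be sketched minimally as follows. This is an illustrative Python sketch with hypothetical names; the actual texture upload through the three-dimensional API is reduced to a counter.

```python
def frame_index(elapsed_seconds, fps):
    # Index of the video frame that should be shown at a given playback
    # time; the texture is re-uploaded whenever this index advances.
    return int(elapsed_seconds * fps)

class VideoFrameTexture:
    # Minimal stand-in for the video frame texture: the "upload" is only
    # performed when the frame index changes, mimicking an update that
    # follows the video's frame rate rather than the render loop.
    def __init__(self, fps):
        self.fps = fps
        self.current = -1
        self.uploads = 0

    def update(self, elapsed_seconds):
        idx = frame_index(elapsed_seconds, self.fps)
        if idx != self.current:
            self.current = idx
            self.uploads += 1  # a real system would call the 3D API here
        return self.current
```

The point of the design is that a render loop running faster than the video (e.g. 60 fps rendering of 25 fps video) re-uploads the texture only when a new video frame is actually due.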
Step S103 can be divided into the following steps:
1. In the GPU vertex shader, multiply the coordinates of the three-dimensional model's vertices, in order, by the world matrix, the camera's view matrix and the camera's projection matrix, obtaining the vertex coordinates in the clip coordinate system (i.e. the vertex shader transforms the vertex coordinates from the model coordinate system into the clip coordinate system);
2. Pass the clip-space coordinates as an output of the vertex shader. Through the rasterization interpolation of the hardware rendering pipeline, the pixel shader receives the clip-space coordinates of each micro-facet, which can be written as the homogeneous coordinates (x, y, z, w), where x, y and z are the point's projections on the X, Y and Z axes, and w is used for the projection of (x, y, z). Homogeneous coordinates extend an n-dimensional vector to n+1 dimensions for perspective projection: (x, y, z) and (wx, wy, wz, w) represent the same point. The fourth component w is normally 1, but after the projection transform it generally is not, and it comes into play: dividing by w carries the point into the normalized device coordinate system;
3. Divide (x, y, z) by w to obtain the micro-facet's coordinates (x/w, y/w, z/w) in the normalized device coordinate system;
4. The valid range of x/w and y/w is the closed interval [-1, 1], while texture coordinates lie on [0, 1]; the two are linearly related, so adding 1 to x/w and y/w and multiplying by 0.5 yields the micro-facet's texture coordinates in the video frame texture, ((x/w + 1) * 0.5, (y/w + 1) * 0.5). The matrix multiplication performed on the vertex coordinates in the vertex shader could also be moved into the pixel shader, but that would increase the GPU's workload and reduce efficiency; both approaches fall within the protection scope of this patent.
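Steps 3 and 4 above amount to a perspective divide followed by a linear remap. A minimal Python sketch (the function names are hypothetical):

```python
def ndc_from_clip(x, y, z, w):
    # Perspective divide: clip-space homogeneous coordinates (x, y, z, w)
    # to normalized device coordinates (x/w, y/w, z/w).
    return (x / w, y / w, z / w)

def tex_coords_from_ndc(nx, ny):
    # Map NDC x and y from [-1, 1] to texture coordinates in [0, 1]
    # via the linear relation (n + 1) * 0.5 described in step 4.
    return ((nx + 1.0) * 0.5, (ny + 1.0) * 0.5)
```

For example, the NDC center (0, 0) maps to the texture center (0.5, 0.5), and the NDC corner (-1, -1) maps to the texture origin (0, 0).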
Step S104 can be divided into the following steps:
1. First judge whether the x/w and y/w of the micro-facet's normalized device coordinates lie on the closed interval [-1, 1]; if not, the micro-facet is outside the camera's view frustum; if so, proceed to the next judgment;
2. Compute the micro-facet's normalized device coordinate z/w. If the three-dimensional API is OpenGL, a z/w on [-1, 1] means the micro-facet is inside the view frustum, and a z/w outside [-1, 1] means it is not. If the three-dimensional API is DirectX, a z/w on [0, 1] means the micro-facet is inside the view frustum, and a z/w outside [0, 1] means it is not.
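The two-stage containment test can be sketched as follows (illustrative Python; the `api` flag is a hypothetical parameter standing in for the choice of OpenGL or DirectX):

```python
def in_view_frustum(ndc, api="opengl"):
    # View-frustum containment test on normalized device coordinates.
    # x/w and y/w must lie on [-1, 1]; the valid z/w range depends on
    # the 3D API: [-1, 1] for OpenGL, [0, 1] for DirectX.
    x, y, z = ndc
    if not (-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0):
        return False
    if api == "opengl":
        return -1.0 <= z <= 1.0
    return 0.0 <= z <= 1.0  # DirectX depth convention
```

Note how the same point can pass under one API and fail under the other when z/w is negative, which is why the document distinguishes the two conventions.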
In step S105, reading the micro-facet's depth from the depth map of the target video image via its texture coordinates in the video frame texture, and judging from it whether the target micro-facet is occluded, can be divided into the following steps, as shown in Fig. 4:
S1051: the micro-facet's texture coordinates in the depth texture are identical to its texture coordinates in the video frame texture, so use those texture coordinates to read the depth from the depth texture, denoted z_n; the next step splits into two cases according to the three-dimensional API used;
S1052: if the three-dimensional API is OpenGL, compare the micro-facet's (z/w + 1) * 0.5 with z_n: if (z/w + 1) * 0.5 is less than or equal to z_n, the micro-facet is not occluded; if it is greater than z_n, the micro-facet is occluded;
S1053: if the three-dimensional API is DirectX, compare the micro-facet's z/w directly with z_n: if z/w is less than or equal to z_n, the micro-facet is not occluded; if z/w is greater than z_n, the micro-facet is occluded.
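The per-API depth comparison of S1052/S1053 can be sketched as follows (illustrative Python; `depth_zn` stands for the depth z_n sampled from the depth texture):

```python
def is_occluded(ndc_z, depth_zn, api="opengl"):
    # Occlusion test against the depth z_n read from the depth texture.
    # Under OpenGL the micro-facet's z/w is first remapped from [-1, 1]
    # to the depth-buffer range [0, 1]; under DirectX z/w is compared
    # directly. Equal depths count as "not occluded": the micro-facet
    # itself is the nearest surface along that ray.
    depth = (ndc_z + 1.0) * 0.5 if api == "opengl" else ndc_z
    return depth > depth_zn
```

A micro-facet whose remapped depth exceeds the stored depth lies behind whatever the camera actually sees at that pixel, and so must not be textured from the video.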
Then, if the micro-facet is inside the camera's view frustum and is not occluded, the texture color is read from the video frame texture at the micro-facet's texture coordinates and used as the micro-facet's color (i.e. the micro-facet is rendered with the color it has in the video image).
If the micro-facet is inside the camera's view frustum but is occluded, it is rendered with its original texture data.
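Putting the frustum test, the occlusion test and the texture-coordinate remap together, the per-micro-facet decision of steps S104 to S106 can be sketched as one function (illustrative Python; the strings naming the render source are hypothetical):

```python
def choose_render_source(ndc, depth_zn, api="opengl"):
    # Per-micro-facet decision of steps S104-S106: sample the video
    # frame texture only for micro-facets that are inside the view
    # frustum and not occluded; otherwise fall back to the model's
    # original texture.
    x, y, z = ndc
    inside = (-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0 and
              ((-1.0 <= z <= 1.0) if api == "opengl"
               else (0.0 <= z <= 1.0)))
    if not inside:
        return "model_texture"
    depth = (z + 1.0) * 0.5 if api == "opengl" else z
    if depth > depth_zn:
        return "model_texture"  # occluded: keep the original texture
    u, v = (x + 1.0) * 0.5, (y + 1.0) * 0.5
    return ("video_texture", u, v)  # sample the video frame at (u, v)
```

In a real shader this branching runs per fragment, but the control flow is exactly the above: occluded or out-of-frustum micro-facets never receive estimated video colors, which is the stated improvement over the related art.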
Based on the above method, the present invention also provides a seamless fusion device for a three-dimensional model and surveillance video, including:
a depth map generation module, for building the depth map of the three-dimensional scene corresponding to the target video image;
a video frame texture generation module, for obtaining a video frame and generating a video frame texture;
a computing module, for computing the target micro-facet's coordinates in the normalized device coordinate system and its texture coordinates in the video frame texture;
a judgment module, for judging from the target micro-facet's normalized device coordinates whether it lies inside the view frustum;
a first processing module, for, when the target micro-facet is inside the view frustum, reading the micro-facet's depth information from the depth map of the target video image via its texture coordinates in the video frame texture, and judging from that depth information whether the micro-facet is occluded; and, when the target micro-facet is not inside the view frustum, rendering it with the original texture of the three-dimensional model;
a second processing module, for rendering the target micro-facet with the original texture of the three-dimensional model when it is occluded.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, device, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that those familiar with the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A seamless integration method for a three-dimensional model and a surveillance video, characterized by including:
building a depth map of the three-dimensional spatial scene corresponding to a target video image;
obtaining a video frame and generating a video frame texture;
calculating the coordinate of a target micro-facet in the three-dimensional model in the normalized device coordinate system and its texture coordinate in the video frame texture;
judging, according to the coordinate of the target micro-facet in the normalized device coordinate system, whether the target micro-facet is inside the view frustum;
if the target micro-facet is inside the view frustum, reading the depth information of the target micro-facet from the depth map of the target video image using the texture coordinate of the target micro-facet in the video frame texture, and then judging from the depth information whether the target micro-facet is occluded;
if the target micro-facet is not inside the view frustum, rendering the target micro-facet according to the original texture of the three-dimensional model;
if the target micro-facet is occluded, rendering the target micro-facet using the original texture of the three-dimensional model.
2. The method according to claim 1, characterized by further including:
if the target micro-facet is not occluded, obtaining the color information of the texture using the texture coordinate of the micro-facet in the video frame texture, and rendering the target micro-facet with that color.
3. The method according to claim 1, characterized in that the step of building the depth map of the three-dimensional spatial scene corresponding to the target video image includes:
calculating the view matrix of the camera according to the three-dimensional coordinate of the camera position and the attitude angle of the camera;
calculating the perspective projection matrix of the camera according to the field of view of the camera and the width-to-height ratio of the video;
creating a depth texture object;
setting a frame buffer for directing the rendering pipeline to store the depth map into the depth texture object;
rendering the three-dimensional scene according to the view matrix and the perspective projection matrix to generate the depth map.
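As one illustration of the perspective projection matrix mentioned in this claim, the following sketch builds a standard OpenGL-style matrix from a vertical field of view and a width-to-height ratio. The function name and the near/far parameters are assumptions for the example; the patent does not prescribe this exact layout.

```python
import math

def perspective_matrix(fov_y_deg: float, aspect: float,
                       near: float, far: float) -> list:
    """Row-major OpenGL-style perspective projection matrix.

    Maps view-space points into clip space; after the perspective
    divide, depth lands in the NDC range [-1, 1].
    """
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far),
         2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# With a 90-degree vertical FOV, f = 1/tan(45 deg) = 1.
m = perspective_matrix(90.0, 16.0 / 9.0, 0.1, 100.0)
```

Multiplying this matrix by a view-space point and dividing by the resulting w yields the normalized device coordinates used in the later claims.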
4. The method according to claim 1, characterized in that the step of obtaining a video frame and generating a video frame texture includes:
reading the video frame, and updating it at the frame rate of the video for fusion with the three-dimensional model, so as to achieve a dynamic fusion effect;
calling the three-dimensional API to store the two-dimensional video frame image into a texture, generating the video frame texture.
5. The method according to claim 1, characterized in that the step of calculating the coordinate of the target micro-facet in the three-dimensional model in the normalized device coordinate system and its texture coordinate in the video frame texture includes:
transforming, by a GPU vertex shader, the vertex coordinates of the three-dimensional model from the model coordinate system into the clip coordinate system;
obtaining, in the pixel shader, the homogeneous coordinates (x, y, z, w) of the three-dimensional model micro-facet in the clip coordinate system through the rasterization interpolation of the hardware rendering pipeline, wherein x represents the projection of the point coordinate on the X axis, y represents its projection on the Y axis, and z represents its projection on the Z axis; the X, Y, and Z axes are the coordinate axes of the clip coordinate system, and w is used to project (x, y, z) so as to transform it into the normalized device coordinate system;
converting the homogeneous coordinates of the target micro-facet into ordinary three-dimensional coordinates, to obtain the coordinate (x/w, y/w, z/w) of the target micro-facet in the normalized device coordinate system;
calculating the texture coordinate ((x/w+1)*0.5, (y/w+1)*0.5) of the target micro-facet in the video frame texture according to its coordinate in the normalized device coordinate system.
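The perspective divide and the texture-coordinate mapping of this claim can be sketched as follows; the helper names `clip_to_ndc` and `ndc_to_tex` are illustrative assumptions.

```python
def clip_to_ndc(x: float, y: float, z: float, w: float) -> tuple:
    """Perspective divide: homogeneous clip coordinates (x, y, z, w)
    -> normalized device coordinates (x/w, y/w, z/w)."""
    return (x / w, y / w, z / w)

def ndc_to_tex(ndc: tuple) -> tuple:
    """Map NDC x and y from [-1, 1] to [0, 1] texture coordinates,
    as in ((x/w + 1) * 0.5, (y/w + 1) * 0.5)."""
    x, y, _ = ndc
    return ((x + 1.0) * 0.5, (y + 1.0) * 0.5)

ndc = clip_to_ndc(1.0, -2.0, 0.5, 2.0)  # -> (0.5, -1.0, 0.25)
print(ndc_to_tex(ndc))                  # -> (0.75, 0.0)
```

The resulting texture coordinate is what later steps use to sample both the depth texture and the video frame texture.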
6. The method according to claim 1, characterized in that the step of judging, according to the coordinate of the target micro-facet in the normalized device coordinate system, whether the target micro-facet is inside the view frustum includes:
judging whether x/w and y/w of the coordinate of the target micro-facet in the normalized device coordinate system lie within the closed interval [-1, 1]; if not, judging that the target micro-facet is not inside the view frustum of the camera;
if so, calculating the value of z/w of the coordinate of the target micro-facet in the normalized device coordinate system, and judging from the value of z/w whether the target micro-facet is inside the view frustum.
7. The method according to claim 6, characterized in that the step of calculating the value of z/w of the coordinate of the target micro-facet in the normalized device coordinate system, and judging from the value of z/w whether the target micro-facet is inside the view frustum, includes:
if the three-dimensional API is OpenGL, a value of z/w within the closed interval [-1, 1] indicates that the target micro-facet is inside the view frustum, and a value of z/w outside the closed interval [-1, 1] indicates that the target micro-facet is not inside the view frustum.
8. The method according to claim 6, characterized in that the step of calculating the value of z/w of the coordinate of the target micro-facet in the normalized device coordinate system, and judging from the value of z/w whether the target micro-facet is inside the view frustum, includes:
if the three-dimensional API is DirectX, a value of z/w within the closed interval [0, 1] indicates that the target micro-facet is inside the view frustum, and a value of z/w outside the closed interval [0, 1] indicates that the target micro-facet is not inside the view frustum.
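A minimal sketch combining the in-frustum tests of claims 6 to 8: x/w and y/w must lie in [-1, 1], while the valid range of z/w is [-1, 1] under OpenGL and [0, 1] under DirectX. The function name and signature are assumptions for illustration.

```python
def in_view_frustum(ndc: tuple, api: str = "opengl") -> bool:
    """Return True when an NDC coordinate (x/w, y/w, z/w) lies
    inside the view frustum. The x and y tests are API-independent;
    the depth range is [-1, 1] for OpenGL and [0, 1] for DirectX."""
    x, y, z = ndc
    if not (-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0):
        return False
    z_min = -1.0 if api == "opengl" else 0.0
    return z_min <= z <= 1.0

print(in_view_frustum((0.2, -0.5, -0.3), "opengl"))   # True
print(in_view_frustum((0.2, -0.5, -0.3), "directx"))  # False: z < 0
```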
9. The method according to claim 7, characterized in that the step of reading the depth information z_n of the target micro-facet from the depth map of the target video image using the texture coordinate of the target micro-facet in the video frame texture, and then judging from the depth information z_n whether the target micro-facet is occluded, includes:
if the three-dimensional API is OpenGL, comparing (z/w+1)*0.5 of the target micro-facet with z_n; if (z/w+1)*0.5 is less than or equal to z_n, the target micro-facet is not occluded; if (z/w+1)*0.5 is greater than z_n, the target micro-facet is occluded;
if the three-dimensional API is DirectX, comparing z/w of the target micro-facet with z_n; if z/w is less than or equal to z_n, the target micro-facet is not occluded; if z/w is greater than z_n, the target micro-facet is occluded.
10. A seamless integration device for a three-dimensional model and a surveillance video, characterized by including:
a depth map generation module, configured to build the depth map of the three-dimensional spatial scene corresponding to a target video image;
a video frame texture generation module, configured to obtain a video frame and generate a video frame texture;
a computing module, configured to calculate the coordinate of the target micro-facet in the three-dimensional model in the normalized device coordinate system and its texture coordinate in the video frame texture;
a judgment module, configured to judge, according to the coordinate of the target micro-facet in the normalized device coordinate system, whether the target micro-facet is inside the view frustum;
a first processing module, configured to: when the target micro-facet is inside the view frustum, read the depth information of the target micro-facet from the depth map of the target video image using the texture coordinate of the target micro-facet in the video frame texture, and then judge from the depth information whether the target micro-facet is occluded; and, when the target micro-facet is not inside the view frustum, render the target micro-facet according to the original texture of the three-dimensional model;
a second processing module, configured to render the target micro-facet using the original texture of the three-dimensional model when the target micro-facet is occluded.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810008558.2A CN108154553A (en) | 2018-01-04 | 2018-01-04 | The seamless integration method and device of a kind of threedimensional model and monitor video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108154553A true CN108154553A (en) | 2018-06-12 |
Family
ID=62460821
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810008558.2A Pending CN108154553A (en) | 2018-01-04 | 2018-01-04 | The seamless integration method and device of a kind of threedimensional model and monitor video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108154553A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103136738A (en) * | 2011-11-29 | 2013-06-05 | 北京航天长峰科技工业集团有限公司 | Registering method of fixing vidicon surveillance video and three-dimensional model in complex scene |
WO2013086739A1 (en) * | 2011-12-16 | 2013-06-20 | Thomson Licensing | Method and apparatus for generating 3d free viewpoint video |
CN103716586A (en) * | 2013-12-12 | 2014-04-09 | 中国科学院深圳先进技术研究院 | Monitoring video fusion system and monitoring video fusion method based on three-dimension space scene |
CN104599243A (en) * | 2014-12-11 | 2015-05-06 | 北京航空航天大学 | Virtual and actual reality integration method of multiple video streams and three-dimensional scene |
CN106683163A (en) * | 2015-11-06 | 2017-05-17 | 杭州海康威视数字技术股份有限公司 | Imaging method and system used in video monitoring |
CN107067447A (en) * | 2017-01-26 | 2017-08-18 | 安徽天盛智能科技有限公司 | A kind of integration video frequency monitoring method in large space region |
CN107306349A (en) * | 2016-04-21 | 2017-10-31 | 杭州海康威视数字技术股份有限公司 | A kind of three-dimensional shows the method and device of monitor video |
Non-Patent Citations (1)
Title |
---|
YAN Jiapeng: "Research on Virtual-Real Scene Fusion in a Mobile Augmented Reality Assembly System" (移动增强现实装配系统的虚实场景融合研究), China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108921778A (en) * | 2018-07-06 | 2018-11-30 | 成都品果科技有限公司 | A kind of celestial body effect drawing generating method |
CN108921778B (en) * | 2018-07-06 | 2022-12-30 | 成都品果科技有限公司 | Method for generating star effect map |
CN111402374A (en) * | 2018-12-29 | 2020-07-10 | 曜科智能科技(上海)有限公司 | Method, device, equipment and storage medium for fusing multi-channel video and three-dimensional model |
CN111402374B (en) * | 2018-12-29 | 2023-05-23 | 曜科智能科技(上海)有限公司 | Multi-path video and three-dimensional model fusion method, device, equipment and storage medium thereof |
CN110090440A (en) * | 2019-04-30 | 2019-08-06 | 腾讯科技(深圳)有限公司 | Virtual objects display methods, device, electronic equipment and storage medium |
CN111325824A (en) * | 2019-07-03 | 2020-06-23 | 杭州海康威视系统技术有限公司 | Image data display method and device, electronic equipment and storage medium |
CN111325824B (en) * | 2019-07-03 | 2023-10-10 | 杭州海康威视系统技术有限公司 | Image data display method and device, electronic equipment and storage medium |
WO2021253642A1 (en) * | 2020-06-18 | 2021-12-23 | 完美世界(北京)软件科技发展有限公司 | Image rendering method and apparatus, computer program and readable medium |
CN112270737A (en) * | 2020-11-25 | 2021-01-26 | 浙江商汤科技开发有限公司 | Texture mapping method and device, electronic equipment and storage medium |
CN113793281A (en) * | 2021-09-15 | 2021-12-14 | 江西格灵如科科技有限公司 | Panoramic image gap real-time stitching method and system based on GPU |
CN113793281B (en) * | 2021-09-15 | 2023-09-08 | 江西格灵如科科技有限公司 | Panoramic image gap real-time stitching method and system based on GPU |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108154553A (en) | The seamless integration method and device of a kind of threedimensional model and monitor video | |
US11024077B2 (en) | Global illumination calculation method and apparatus | |
US9569885B2 (en) | Technique for pre-computing ambient obscurance | |
US7289119B2 (en) | Statistical rendering acceleration | |
US6426750B1 (en) | Run-time geomorphs | |
US6362822B1 (en) | Lighting and shadowing methods and arrangements for use in computer graphic simulations | |
US7586489B2 (en) | Method of generating surface defined by boundary of three-dimensional point cloud | |
Bala et al. | Combining edges and points for interactive high-quality rendering | |
US7737974B2 (en) | Reallocation of spatial index traversal between processing elements in response to changes in ray tracing graphics workload | |
KR100888528B1 (en) | Apparatus, method, application program and computer readable medium thereof capable of pre-storing data for generating self-shadow of a 3D object | |
US20080180440A1 (en) | Computer Graphics Shadow Volumes Using Hierarchical Occlusion Culling | |
US8072456B2 (en) | System and method for image-based rendering with object proxies | |
US9508191B2 (en) | Optimal point density using camera proximity for point-based global illumination | |
US6806876B2 (en) | Three dimensional rendering including motion sorting | |
EP3211601B1 (en) | Rendering the global illumination of a 3d scene | |
US6791544B1 (en) | Shadow rendering system and method | |
US20130127895A1 (en) | Method and Apparatus for Rendering Graphics using Soft Occlusion | |
JPH09330423A (en) | Three-dimensional shape data transforming device | |
US6346939B1 (en) | View dependent layer ordering method and system | |
Lengyel | Voxel-based terrain for real-time virtual simulations | |
US9454554B1 (en) | View dependent query of multi-resolution clustered 3D dataset | |
KR20100068603A (en) | Apparatus and method for generating mipmap | |
JP3629243B2 (en) | Image processing apparatus and method for rendering shading process using distance component in modeling | |
KR101227155B1 (en) | Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image | |
KR20100075351A (en) | Method and system for rendering mobile computer graphic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 2022-02-07. Address after: floors 1-2, Building 29, No. 16 Beitaiping Road, Haidian District, Beijing 100000. Applicants after: CHINA TOPRS TECHNOLOGY Co.,Ltd.; Henan Zhongce Xintu Information Technology Co.,Ltd. Address before: No. 16 Taiping Road, Haidian District, Beijing 100039. Applicant before: CHINA TOPRS TECHNOLOGY Co.,Ltd. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-06-12 |