CN107134000B - Reality-fused three-dimensional dynamic image generation method and system - Google Patents
Reality-fused three-dimensional dynamic image generation method and system
- Publication number
- CN107134000B (application CN201710369378.2A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- camera
- matrix
- dimensional space
- reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method and a system for generating a reality-fused three-dimensional dynamic image. The method comprises the following steps: acquiring a first reality scene image in a first three-dimensional space through a first camera, recording the three-dimensional space matrix of the first reality scene image as Q1, and converting the three-dimensional space matrix Q1 into a two-dimensional plane matrix R1 of the first camera through perspective projection; converting the obtained two-dimensional plane matrix R1 into the corresponding two-dimensional plane matrix R2 in a second three-dimensional space; converting the two-dimensional plane matrix R2 into a two-dimensional plane matrix R3 of the second camera; and, taking the visual angle of the first camera as the visual angle of human eyes in real space, projecting the first reality scene image corresponding to the two-dimensional plane matrix R3 into the second reality scene corresponding to the visual angle of the second camera. When the human visual angle stands in the designed relation to the surrounding environment, the invention fuses the actually seen scene with the planar pattern generated in three-dimensional software, forming a naked-eye three-dimensional effect.
Description
Technical Field
The invention relates to a method and a system for generating a three-dimensional dynamic image fused with reality, belonging to the technical field of image processing.
Background
3D ground painting (3D street painting) presents a painting on the ground so as to obtain a three-dimensional artistic effect, or uses the ground directly as the carrier of the painting. Taking the outdoor ground as the medium and exploiting the principles of plane perspective, it produces a virtual three-dimensional visual effect that gives visitors a sense of being personally on the scene. The scenery in a 3D ground painting is stereoscopic, fine and vivid, often so convincing that the fake rivals the real. Simulating three-dimensional space on a two-dimensional plane has always been a focus of human visual art; ever since the Renaissance, solving this problem has been one of the important yardsticks of artistic progress and of art-history writing, which is why Renaissance and later church murals, ceiling paintings, town halls, and noble residences and villas became celebrated sites of such illusionism. 3D street painting can be seen as an important contemporary development and extension of this artistic logic.
What most distinguishes 3D ground painting from ordinary painting is its perspective principle. In an ordinary painting, the perspective arrangement does not take into account where the viewer stands; the composition follows only the perspective internal to the picture itself. A 3D ground painting, by contrast, is constructed around the viewer's standing point: the entire composition takes the viewer's viewpoint as the visual origin, so the work is not merely a picture but a real visual space into which the viewer can be integrated. The optimal visual effect is obtained by viewing the work with a camera from the initially designed optimal viewpoint; seen with the naked eye from other angles, the picture appears stretched and deformed, and the image can be observed normally only from the camera position. This contrast gives a 3D painting a strong visual impact, arousing visual resonance in viewers and deepening their impression.
At present, in the process of creating a naked-eye 3D ground painting, an artist draws on the ground using grid-division and alignment techniques so that the perspective of the painting's three-dimensional content matches the perspective of the surrounding real scene, making the content appear to rise off the ground and form a three-dimensional effect. Drawing a 3D ground painting has the following problems:
firstly, the choice of site is heavily constrained: so far, 3D floor painting has mostly been applied in public places, being suited to exhibitions, events, park entertainment and product publicity, and when applied in confined places such as houses, villas, clubs and hotels, the drawing difficulty is great and the technical demands on the painter are very high;
secondly, the drawing materials are determined mainly by the painting substrate: on a cloth substrate, acrylic and oil paints can be used, while on a street or road surface, acrylic pigment and toner are suitable; but toner or acrylic applied directly to the ground is difficult to clean off;
thirdly, the painting is difficult to preserve intact: at exhibitions, events, park entertainment and product publicity occasions with heavy foot traffic, pedestrians tread on the work, compromising its integrity and hence its artistic effect;
fourthly, the form of expression is too limited: once drawn, a 3D painting is static; it cannot be updated in real time, show dynamically continuous effects, or carry fully matched sound;
fifthly, the technique is hard to master: because 3D painting places very high demands on an artist's spatial imagination, only a few artists have mastered it so far, and large-scale production and use cannot be realized.
In summary, 3D floor painting is heavily constrained in where it can be drawn, hard to clean up and to preserve intact once drawn, too limited in its form of expression, difficult to master, and low in production efficiency. A new technique is therefore needed to solve these problems and achieve a drawing effect equal to, or even exceeding, that of 3D floor painting.
Disclosure of Invention
The invention aims to provide a method and a system for generating a reality-fused three-dimensional dynamic image which solve the problems of existing 3D ground painting and achieve its artistic effect, while offering a wide range of application, diverse forms of expression, and convenient updating and interaction.
To achieve this aim, the invention provides a reality-fused three-dimensional dynamic image generation method. Specifically, the method comprises the following steps:
the method comprises the following steps: acquiring a first reality scene image in a first three-dimensional space through a first camera, recording a three-dimensional space matrix of the first reality scene image in the first three-dimensional space as Q1, and converting the three-dimensional space matrix Q1 into a two-dimensional plane matrix R1 of the first camera through perspective projection;
step two: converting the obtained two-dimensional plane matrix R1 of the first camera into a corresponding two-dimensional plane matrix R2 in a second three-dimensional space;
step three: converting the two-dimensional planar matrix R2 in the second three-dimensional space to a two-dimensional planar matrix R3 of the second camera;
step four: taking the visual angle of the first camera as the visual angle of human eyes in the real space, and projecting a first real scene image corresponding to the two-dimensional planar matrix R3 to a second real scene of a second three-dimensional space corresponding to the visual angle of the second camera;
the method of the invention can realize the generation of the three-dimensional dynamic image fused with reality from the first step to the fourth step, and further can implement the fifth step to achieve better effect, and the fifth step comprises the following steps: the method comprises the steps of obtaining the position of a light source in a first reality scene of a first three-dimensional space, and arranging the same light source in a second reality scene of a second three-dimensional space according to the position of the light source in the first reality scene corresponding to the first three-dimensional space.
The projection conversion principle in the method is as follows: first, the three-dimensional object is imaged in perspective under the camera's coordinate system according to the conventional algorithm, whose principle is:
in three-dimensional space, with the eye at the origin looking down the negative z axis, let [left, right] be the left and right boundary values of the projection plane and [bottom, top] its bottom and top boundary values, let N be the distance from the eye to the near clipping plane and F the distance from the eye to the far clipping plane. A point $(x, y, z)$ projects onto the near plane at $(-Nx/z, -Ny/z)$; mapping $-Nx/z \in [\mathrm{left}, \mathrm{right}]$ into $x \in [-1, 1]$ and $-Ny/z \in [\mathrm{bottom}, \mathrm{top}]$ into $y \in [-1, 1]$ by linear interpolation gives the final projected point

$$P' = \left(\frac{\frac{2N}{\mathrm{right}-\mathrm{left}}\,x + \frac{\mathrm{right}+\mathrm{left}}{\mathrm{right}-\mathrm{left}}\,z}{-z},\;\frac{\frac{2N}{\mathrm{top}-\mathrm{bottom}}\,y + \frac{\mathrm{top}+\mathrm{bottom}}{\mathrm{top}-\mathrm{bottom}}\,z}{-z},\;\frac{az+b}{-z}\right),$$

where $a = -\frac{F+N}{F-N}$ and $b = -\frac{2FN}{F-N}$ map depth into $[-1, 1]$. After perspective division, only the $x$ and $y$ components of $P'$ have changed form; $az+b$ and $-z$ are unchanged. We perform the inverse process of perspective division, multiplying each component of $P'$ by $-z$, to get

$$P'' = \left(\frac{2N}{\mathrm{right}-\mathrm{left}}\,x + \frac{\mathrm{right}+\mathrm{left}}{\mathrm{right}-\mathrm{left}}\,z,\;\frac{2N}{\mathrm{top}-\mathrm{bottom}}\,y + \frac{\mathrm{top}+\mathrm{bottom}}{\mathrm{top}-\mathrm{bottom}}\,z,\;az+b,\;-z\right).$$

We end up with the perspective transformation matrix:

$$M = \begin{pmatrix}\frac{2N}{\mathrm{right}-\mathrm{left}} & 0 & \frac{\mathrm{right}+\mathrm{left}}{\mathrm{right}-\mathrm{left}} & 0\\[4pt]0 & \frac{2N}{\mathrm{top}-\mathrm{bottom}} & \frac{\mathrm{top}+\mathrm{bottom}}{\mathrm{top}-\mathrm{bottom}} & 0\\[4pt]0 & 0 & -\frac{F+N}{F-N} & -\frac{2FN}{F-N}\\[4pt]0 & 0 & -1 & 0\end{pmatrix}.$$
the vertex in camera space, if in a view frustum, is transformed into a regular view frustum. OpenGL uses the form of M when constructing a perspective projection matrix. For the projection surface, its width and height are mostly different, i.e. the aspect ratio is not 1, such as 640/480. Whereas the aspect ratio of the regular observer is the same, i.e. the aspect ratio is always 1. This causes distortion of polygons, and the solution to this problem is to perform a correction in the viewport transformation performed on the normalized device coordinates after the polygons are subjected to perspective transformation, clipping, and perspective division, which transforms the normalized vertices into the viewport in the same proportion as on the projection plane, thereby removing the distortion caused by the perspective projection transformation. The correction is performed on the premise that the aspect ratio of the projection plane and the aspect ratio of the viewport are the same.
After the conventional perspective projection matrix is obtained, the image under it must be transformed once more, by a further perspective transformation, onto the xz-plane matrix; the method of that transformation follows the same principle. The matrix forms of R1, R2 and R3 in the method of the invention correspond to M.
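As one way the matrix M above might be assembled in code, here is a minimal sketch under the conventions stated earlier (eye at the origin looking down -z, near-plane bounds [left, right] × [bottom, top]); the function name is illustrative:

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Builds the perspective matrix M from the near-plane bounds and the
// near/far clipping distances N and F (row-major, column vectors).
Mat4 frustumMatrix(double left, double right, double bottom, double top,
                   double N, double F) {
    Mat4 m{};  // zero-initialized
    m[0][0] = 2.0 * N / (right - left);
    m[0][2] = (right + left) / (right - left);
    m[1][1] = 2.0 * N / (top - bottom);
    m[1][2] = (top + bottom) / (top - bottom);
    m[2][2] = -(F + N) / (F - N);
    m[2][3] = -2.0 * F * N / (F - N);
    m[3][2] = -1.0;
    return m;
}
```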
In the first step, the first three-dimensional space refers to the real space corresponding to the first reality scene image acquired by the first camera; it is this first reality scene image that is fused, in the fourth step, with the second reality scene corresponding to the second three-dimensional space.
The second three-dimensional space in the fourth step refers to a spatial region where human eyes are located, the spatial region being different from the first three-dimensional space corresponding to the first reality scene image in the first step, and the first reality scene image corresponding to the two-dimensional plane matrix R3 is projected onto a plane or a curved surface in the second reality scene in the second three-dimensional space. The plane refers to a horizontal surface in the second real scene, and the curved surface refers to a surface with unevenness in the second real scene.
In the fourth step, a three-dimensional animation made in advance according to the perspective relation of the first reality scene image as seen by human eyes in the first three-dimensional space is further projected onto the plane or curved surface in the second reality scene of the second three-dimensional space, that is, onto the plane or curved surface where the human eyes are located; the three-dimensional animation is an animation model made in computer software. It can be designed as a cartoon character whose projection relation is consistent with the projection relation of the two-dimensional plane matrix R3.
The projection in the method uses a projector as its device. The visual angle of the human eyes refers to the included angle formed at the optical center of the eyes by rays drawn from the top and bottom (or the left and right) ends of the object being observed. The visual angle of the first camera means that shooting within the first camera's angle range is used to simulate the human visual angle; the second camera is a virtual camera in computer software that simulates a camera in the real world. When the method generates the reality-fused three-dimensional dynamic image, the sound information of the first reality scene corresponding to the first reality scene image is played in the second reality scene through a multimedia device.
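For reference, the visual angle described here follows the standard subtended-angle relation (a textbook formula, not taken from the patent), where $h$ is the extent of the object between the two rays and $d$ its distance from the optical center:

$$\theta = 2\arctan\!\left(\frac{h}{2d}\right)$$

Matching the first camera's shooting angle range to this angle is what allows it to stand in for the human eye.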
The invention also provides a system for generating the three-dimensional dynamic image fused with reality, which adopts the method for generating the three-dimensional dynamic image fused with reality to generate the three-dimensional dynamic image fused with reality, and the system at least comprises: the system comprises a camera, a three-dimensional image processing module, a projector and a sound playing module, wherein the camera is used for acquiring a first reality scene image corresponding to a first three-dimensional space to be fused; the three-dimensional image processing module is used for carrying out projection relation conversion on a first reality scene image acquired by the camera; the projector is used for projecting a first reality scene image to be fused to a second reality scene; the sound playing module is used for playing sound information in the first reality scene corresponding to the first reality scene image in the second reality scene.
The video camera can adopt a DVCPRO digital camcorder. DVCPRO is a digital format introduced by Panasonic in 1996 on the basis of the DV format; it uses 4:1:1 sampling, 5:1 compression, and a track width of 18 microns. DVCPRO50, introduced in 1998 on the basis of DVCPRO, uses 4:2:2 sampling and 3.3:1 compression; in 1999, development of the higher-grade DVCPRO100, also called DVCPRO HD, began for the high-end digital and high-definition television field. In image processing, DVCPRO uses the standard 4:1:1 format, i.e. a luminance sampling frequency f_Y of 13.5 MHz and color-difference sampling frequencies f_PB and f_PR of 3.375 MHz. In practice, the DVCPRO video signal is input in 4:2:2 format and recorded after conversion to 4:1:1; during playback, the out-of-band portion of the 4:1:1 signal is interpolated to re-form a 4:2:2 output. In terms of data rate, the net rate of the 4:2:2 format is (720 + 2 × 360) × 576 × 8 × 25 ≈ 165.9 Mbps; after conversion to 4:1:1 it is (720 + 2 × 180) × 576 × 8 × 25 ≈ 124.4 Mbps, which is compressed to 25 Mbps for recording, requiring a compression ratio of 124.4 Mbps / 25 Mbps ≈ 5:1. Four channels of 16-bit digital audio can also be recorded simultaneously on 1/4-inch MP tape.
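The data-rate arithmetic above can be reproduced in a few lines (a sketch using the sample counts quoted in the text):

```cpp
#include <cstdio>

int main() {
    // (luma + 2 x chroma samples per line) x active lines
    // x bits per sample x frames per second, in Mbps.
    double r422 = (720 + 2 * 360) * 576.0 * 8 * 25 / 1e6;  // 4:2:2
    double r411 = (720 + 2 * 180) * 576.0 * 8 * 25 / 1e6;  // 4:1:1
    std::printf("4:2:2 net rate: %.1f Mbps\n", r422);      // ~165.9
    std::printf("4:1:1 net rate: %.1f Mbps\n", r411);      // ~124.4
    std::printf("ratio to 25 Mbps: %.1f:1\n", r411 / 25);  // ~5.0:1
}
```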
The three-dimensional image processing module interfaces directly with the application program through a 3D API (graphics card interface). 3D APIs provide functions such as gluPerspective(fovy, aspect, zNear, zFar) and D3DXMatrixPerspectiveFovLH(pOut, fovY, Aspect, zn, zf), giving users a quick way to generate a perspective matrix. A 3D API lets 3D software written by programmers call routines in the API, which communicates automatically with the hardware driver and invokes the powerful 3D graphics processing functions of the 3D chip, greatly improving the efficiency of 3D program design.
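For illustration, the fovy/aspect parameterization exposed by gluPerspective and D3DXMatrixPerspectiveFovLH can be sketched as follows in the right-handed OpenGL convention; it is equivalent to frustumMatrix above with top = N·tan(fovy/2), bottom = -top, right = top·aspect, left = -right. This is a sketch, not vendor code:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// fovy is the vertical field of view in radians; zn and zf are the
// near and far clipping distances.
Mat4 perspectiveFov(double fovy, double aspect, double zn, double zf) {
    const double cot = 1.0 / std::tan(fovy / 2.0);
    Mat4 m{};
    m[0][0] = cot / aspect;
    m[1][1] = cot;
    m[2][2] = -(zf + zn) / (zf - zn);
    m[2][3] = -2.0 * zf * zn / (zf - zn);
    m[3][2] = -1.0;
    return m;
}
```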
The projector may adopt a DLP projector, which uses a DMD (Digital Micromirror Device) as its light-valve imaging device. A DLP computer board is composed of an A/D decoder, memory chips, an image processor and several digital signal processors (DSPs); all text and images pass through this board to generate a digital signal, which is then processed and transferred to the heart of the DLP system, the DMD. The light beam passes through a color wheel rotating at high speed, strikes the DMD, and is then projected onto a large screen through the optical lens to complete the image projection. A DMD is formed by many tiny square mirrors (micromirrors) closely arranged in rows and columns on the electrical nodes of a silicon wafer, each micromirror corresponding to one pixel of the generated image. The number of micromirrors in the DMD therefore determines the physical resolution of a DLP projector: for example, a projector with a resolution of 600×800 has a DMD with 600 × 800 = 480,000 micromirrors. Each micromirror is backed by a memory cell that switches the mirror between two positions of ±10 degrees, and each pixel on the DMD measures 16 μm × 16 μm with 1 μm spacing. DLP projectors can be classified into single-chip, two-chip and three-chip designs. The red, green and blue components of the digital signal are presented to the DMD in sequence, and each micromirror opens or closes according to its pixel position and color value; the DLP light path can thus be regarded as a simple system consisting of only a light source and a set of projection lenses, in which the lens magnifies the image reflected from the DMD and projects it directly onto the screen, achieving a vivid and bright presentation.
The sound playing module can adopt the YT07 voice module, a popular voice playback module. It has the advantages of low price, stability and reliability, repeatable recording, switch-contact control, a wide supply-voltage range, and small size. Playback is controlled in two main ways: through 7 groups of contacts, or over an RS-485 serial bus.
The invention has the following advantages: the perspective relation of the planar pattern as seen from the human visual angle corresponds to the spatial perspective relation of the main camera in the computer's three-dimensional software, and the planar pattern generated in the three-dimensional software is projected by the projector onto the corresponding plane or curved surface in actual space, so that when the human visual angle stands in the designed relation to the surrounding environment, the actually seen scene fuses with the planar pattern generated in the three-dimensional software.
Drawings
FIG. 1 is a schematic flow chart of a method for generating a three-dimensional dynamic image fused with reality;
FIG. 2 is a schematic diagram of the simulation of reality-fused three-dimensional dynamic image generation.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
With reference to fig. 1 and further to fig. 2, a method for generating a reality-fused three-dimensional dynamic image comprises the following steps:
s1: acquiring a first real scene 7 image 2 in a first three-dimensional space 1 through a first camera 3, recording a three-dimensional space matrix of the first real scene 7 image 2 in the first three-dimensional space 1 as Q1, and converting the three-dimensional space matrix Q1 into a two-dimensional plane matrix R1 of the first camera 3 through perspective projection;
s2: converting the obtained two-dimensional planar matrix R1 of the first camera 3 into a corresponding two-dimensional planar matrix R2 in the second three-dimensional space 4;
s3: converting the two-dimensional planar matrix R2 in the second three-dimensional space 4 to the two-dimensional planar matrix R3 of the second camera 5;
s4: taking the visual angle of the first camera 3 as the visual angle of human eyes in the real space, and projecting the image 2 of the first real scene 7 corresponding to the two-dimensional planar matrix R3 into the second real scene 6 of the second three-dimensional space 4 corresponding to the visual angle of the second camera 5;
the method S1 to S4 of the invention can realize the generation of the three-dimensional dynamic image fused with reality, and further can implement S5, S5: the position of a light source in a first reality scene 7 of a first three-dimensional space 1 is obtained, and the same light source is arranged in a second reality scene 6 of a second three-dimensional space 4 according to the position of the light source in the first reality scene 7 corresponding to the first three-dimensional space 1.
In S1, the first three-dimensional space 1 refers to a real space corresponding to the image 2 of the first real scene 7 acquired by the first camera 3, and the image 2 of the first real scene 7 is merged with the second real scene 6 corresponding to the second three-dimensional space 4 in S4.
The second three-dimensional space 4 in S4 refers to a spatial region where the human eyes are located, which is different from the first three-dimensional space 1 corresponding to the first real scene 7 image 2 in S1, and the first real scene 7 image 2 corresponding to the two-dimensional plane matrix R3 is projected onto a plane or a curved surface in the second real scene 6 of the second three-dimensional space 4. The plane refers to a horizontal surface actually existing in the second real scene 6, and the curved surface refers to a surface with unevenness actually existing in the second real scene 6.
In S4, a three-dimensional animation that is previously created based on the perspective relationship of the image 2 of the first real scene 7 seen by the human eye in the first three-dimensional space 1 is further projected onto a plane or a curved surface in the second real scene 6 of the second three-dimensional space 4, that is, the previously created three-dimensional animation that is an animation model created in computer software is projected onto a plane or a curved surface in the second real scene 6 where the human eye is located. The three-dimensional animation can be designed into cartoon characters, and the projection relation of the cartoon characters is consistent with the projection relation of the two-dimensional plane matrix R3.
As shown in fig. 2, the present invention further provides a system for generating a three-dimensional dynamic image fused with reality, which generates a three-dimensional dynamic image fused with reality by using the method for generating a three-dimensional dynamic image fused with reality, and the system at least includes: the system comprises a camera, a three-dimensional image processing module 8, a projector 9 and a sound playing module 10, wherein the camera is used for acquiring a first reality scene 7 image 2 corresponding to a first three-dimensional space 1 to be fused; the three-dimensional image processing module 8 is used for performing projection relation conversion on the image 2 of the first reality scene 7 acquired by the camera; the projector 9 is used for projecting the image 2 of the first reality scene 7 to be fused to the second reality scene 6; the sound playing module 10 is configured to play the sound information in the first reality scene 7 corresponding to the image 2 of the first reality scene 7 in the second reality scene 6.
The idea of the invention is further elucidated below in connection with practical applications:
scene: when in study, the course teaching plan is displayed on the desk in a stereo projection mode.
Firstly, acquiring an image of a course teaching plan through a first camera, recording a three-dimensional space matrix in which the image of the course teaching plan is located as Q1, and converting the three-dimensional space matrix Q1 into a two-dimensional plane matrix R1 of the first camera through perspective projection by using a three-dimensional image processing module;
secondly, converting the obtained two-dimensional plane matrix R1 of the first camera into a corresponding two-dimensional plane matrix R2 in the space where the desk is located;
thirdly, converting the two-dimensional plane matrix R2 in the space where the desk is located into the two-dimensional plane matrix R3 of the second camera in the space where the desk is located;
and fourthly, taking the visual angle of the second camera as the visual angle of human eyes in the space where the desk is positioned, and projecting the image corresponding to the two-dimensional planar matrix R3 to the space where the desk is positioned corresponding to the visual angle of the second camera through the projector.
And fifthly, acquiring the position of the light source in the three-dimensional space where the course teaching plan is located, and arranging the light sources at the same positions in the space where the desk is located according to the positions of the light sources corresponding to the three-dimensional space where the course teaching plan is located.
It should be further noted that the visual angle of the second camera is a model, set in the three-dimensional image processing module, that is equivalent to the visual angle of human eyes. When the human eyes are located at the visual angle of the second camera, the three-dimensional dynamic image formed by fusing the course teaching plan image with the desk can be seen; meanwhile, the multimedia device plays the sound information of the first reality scene in the three-dimensional space where the course teaching plan is located, realizing the fusion of the course teaching plan image with the desk in the second reality scene.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (8)
1. A method for generating a three-dimensional dynamic image fused with reality is characterized in that: the method comprises the following steps:
the method comprises the following steps: acquiring a first reality scene image in a first three-dimensional space through a first camera, recording a three-dimensional space matrix of the first reality scene image in the first three-dimensional space as Q1, and converting the three-dimensional space matrix Q1 into a two-dimensional plane matrix R1 of the first camera through perspective projection;
step two: converting the obtained two-dimensional plane matrix R1 of the first camera into a corresponding two-dimensional plane matrix R2 in a second three-dimensional space;
step three: converting the two-dimensional planar matrix R2 in the second three-dimensional space to a two-dimensional planar matrix R3 of the second camera;
step four: and taking the visual angle of the first camera as the visual angle of human eyes in the real space, and projecting the first real scene image corresponding to the two-dimensional planar matrix R3 into the second real scene of the second three-dimensional space corresponding to the visual angle of the second camera.
2. The method according to claim 1, wherein the method comprises: the method also comprises the following step five: the method comprises the steps of obtaining the position of a light source in a first reality scene of a first three-dimensional space, and arranging the same light source in a second reality scene of a second three-dimensional space according to the position of the light source in the first reality scene corresponding to the first three-dimensional space.
3. The method according to claim 1, wherein the method comprises: the second three-dimensional space in the fourth step refers to a spatial region where human eyes are located, the spatial region being different from the first three-dimensional space corresponding to the first reality scene image in the first step, and the first reality scene image corresponding to the two-dimensional plane matrix R3 is projected onto a plane or a curved surface in the second reality scene in the second three-dimensional space.
4. A method for generating a three-dimensional dynamic image fused with reality according to claim 3, wherein: the plane refers to a horizontal surface in the second real scene, and the curved surface refers to a surface with unevenness in the second real scene.
5. The method according to claim 1, wherein the method comprises: and in the fourth step, a three-dimensional animation which is pre-made according to the perspective relation of the first reality scene image seen by human eyes in the first three-dimensional space is further projected onto a plane or a curved surface in a second reality scene in a second three-dimensional space, wherein the three-dimensional animation is an animation model made in computer software.
6. The method according to claim 5, wherein the method comprises: the three-dimensional animation is an animation character, and the projection relation of the animation character is consistent with the projection relation of the two-dimensional plane matrix R3.
7. The method according to claim 1, wherein the method comprises: the projection in the method adopts a projector as equipment, the visual angle of human eyes in the method refers to an included angle formed by light rays led out from the upper part, the lower part or the left part and the right part of two ends of an object at the optical centers of the human eyes when the object is observed, the visual angle of a first camera refers to a mode of simulating the visual angle of the human eyes, the shooting angle range of the first camera is used for shooting, and a second camera refers to a virtual camera simulating a camera in the real world in computer software.
8. The method according to claim 1, wherein the method comprises: when the method is used for generating the three-dimensional dynamic image fused with reality, the sound information in the first reality scene corresponding to the first reality scene image is played in the second reality scene through the multimedia equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710369378.2A CN107134000B (en) | 2017-05-23 | 2017-05-23 | Reality-fused three-dimensional dynamic image generation method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710369378.2A CN107134000B (en) | 2017-05-23 | 2017-05-23 | Reality-fused three-dimensional dynamic image generation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107134000A CN107134000A (en) | 2017-09-05 |
CN107134000B true CN107134000B (en) | 2020-10-23 |
Family
ID=59732312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710369378.2A Expired - Fee Related CN107134000B (en) | 2017-05-23 | 2017-05-23 | Reality-fused three-dimensional dynamic image generation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107134000B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109801351B (en) * | 2017-11-15 | 2023-04-14 | 阿里巴巴集团控股有限公司 | Dynamic image generation method and processing device |
CN108171786A (en) * | 2017-12-12 | 2018-06-15 | 上海爱优威软件开发有限公司 | A kind of terminal simulation design method and system |
CN108765582B (en) * | 2018-04-28 | 2022-06-17 | 海信视像科技股份有限公司 | Panoramic picture display method and device |
CN111050145B (en) * | 2018-10-11 | 2022-07-01 | 上海云绅智能科技有限公司 | Multi-screen fusion imaging method, intelligent device and system |
CN112687012A (en) * | 2021-01-08 | 2021-04-20 | 中国南方电网有限责任公司超高压输电公司南宁监控中心 | Island information fusion method based on three-dimensional visual management and control platform |
CN113091764B (en) * | 2021-03-31 | 2022-07-08 | 泰瑞数创科技(北京)有限公司 | Method for customizing and displaying navigation route of live-action three-dimensional map |
CN115965519A (en) * | 2021-10-08 | 2023-04-14 | 北京字跳网络技术有限公司 | Model processing method, device, equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102111562A (en) * | 2009-12-25 | 2011-06-29 | 新奥特(北京)视频技术有限公司 | Projection conversion method for three-dimensional model and device adopting same |
WO2011144793A1 (en) * | 2010-05-18 | 2011-11-24 | Teknologian Tutkimuskeskus Vtt | Mobile device, server arrangement and method for augmented reality applications |
CN103426195A (en) * | 2013-09-09 | 2013-12-04 | 天津常青藤文化传播有限公司 | Method for generating three-dimensional virtual animation scenes watched through naked eyes |
CN104903935A (en) * | 2012-10-04 | 2015-09-09 | 株式会社吉奥技术研究所 | Stereoscopic map display system |
CN105354820A (en) * | 2015-09-30 | 2016-02-24 | 深圳多新哆技术有限责任公司 | Method and apparatus for regulating virtual reality image |
CN105447911A (en) * | 2014-09-26 | 2016-03-30 | 联想(北京)有限公司 | 3D map merging method, 3D map merging device and electronic device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
IL159013A0 (en) * | 2001-05-22 | 2004-05-12 | Yoav Shefi | Method and system for displaying visual content in a virtual three-dimensional space |
CN106131536A (en) * | 2016-08-15 | 2016-11-16 | 万象三维视觉科技(北京)有限公司 | A kind of bore hole 3D augmented reality interactive exhibition system and methods of exhibiting thereof |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102111562A (en) * | 2009-12-25 | 2011-06-29 | 新奥特(北京)视频技术有限公司 | Projection conversion method for three-dimensional model and device adopting same |
WO2011144793A1 (en) * | 2010-05-18 | 2011-11-24 | Teknologian Tutkimuskeskus Vtt | Mobile device, server arrangement and method for augmented reality applications |
CN104903935A (en) * | 2012-10-04 | 2015-09-09 | 株式会社吉奥技术研究所 | Stereoscopic map display system |
CN103426195A (en) * | 2013-09-09 | 2013-12-04 | 天津常青藤文化传播有限公司 | Method for generating three-dimensional virtual animation scenes watched through naked eyes |
CN105447911A (en) * | 2014-09-26 | 2016-03-30 | 联想(北京)有限公司 | 3D map merging method, 3D map merging device and electronic device |
CN105354820A (en) * | 2015-09-30 | 2016-02-24 | 深圳多新哆技术有限责任公司 | Method and apparatus for regulating virtual reality image |
Non-Patent Citations (2)
Title |
---|
Perspective projection transformation in 3D terrain landscape simulation; Wu Di et al.; Bulletin of Surveying and Mapping; 2003-06-30 (No. 6); pp. 27-28, 34 *
New 3D Image Technologies Developed in Taiwan;Tzuan-Ren Jeng 等;《IEEE TRANSACTIONS ON MAGNETICS》;20110331;第47卷(第3期);第663-668页 * |
Also Published As
Publication number | Publication date |
---|---|
CN107134000A (en) | 2017-09-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107134000B (en) | Reality-fused three-dimensional dynamic image generation method and system | |
CN103426195B (en) | Generate the method for bore hole viewing three-dimensional cartoon scene | |
Agrawala et al. | Artistic multiprojection rendering | |
CN106527857A (en) | Virtual reality-based panoramic video interaction method | |
CN102800119B (en) | Animation display method and device of three-dimensional curve | |
CN202003534U (en) | Interactive electronic sand table | |
CN105137705B (en) | A kind of creation method and device of virtual ball curtain | |
TW200426487A (en) | Projecting system | |
US20110310310A1 (en) | System and method for imagination park tree projections | |
Hirose | Virtual reality technology and museum exhibit | |
CN103309145A (en) | 360-degree holographic phantom imaging system | |
CN215494533U (en) | 180-degree holographic and L-screen naked eye 3D immersive space | |
CN111047711A (en) | Immersive interactive Box image manufacturing method | |
CN202694670U (en) | Multimedia digital sand table | |
CN102737567B (en) | Multimedia orthographic projection digital model interactive integration system | |
CN202443687U (en) | Multimedia orthographic projection digital model interactive integrated system | |
CN202171927U (en) | Phantom imaging system | |
Ebert et al. | Tiled++: An enhanced tiled hi-res display wall | |
Rossi | Smart architectural models: Spatial projection-based augmented mock-up | |
Zheng et al. | A virtual environment making method for cave system | |
Zhang | The Dynamic Visual Art Language of Architectural Landscape in the Context of New Media | |
CN219320955U (en) | 360-degree three-dimensional virtual-real interactive sand table with middle hidden supporting structure | |
US20170221504A1 (en) | Photorealistic CGI Generated Character | |
Paneva-Marinova et al. | Presentation Layer in a Virtual Museum for Cultural Heritage Artefacts | |
Raskar | Projectors: advanced graphics and vision techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20201023 |