CN116129058A - Cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence - Google Patents


Info

Publication number
CN116129058A
CN116129058A (application CN202310390117.4A)
Authority
CN
China
Prior art keywords
light source
exhibition
modeling
vertex
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310390117.4A
Other languages
Chinese (zh)
Other versions
CN116129058B (en
Inventor
张辰昊 (Zhang Chenhao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Miaowen Creative Technology Co ltd
Original Assignee
Tulin Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tulin Technology Shenzhen Co ltd filed Critical Tulin Technology Shenzhen Co ltd
Priority to CN202310390117.4A priority Critical patent/CN116129058B/en
Publication of CN116129058A publication Critical patent/CN116129058A/en
Application granted granted Critical
Publication of CN116129058B publication Critical patent/CN116129058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of virtual-world image rendering, and in particular to an artificial-intelligence-based method for three-dimensional modeling and rendering of a cloud exhibition. The method comprises the following steps: building the spatial structure of an exhibition and placing a plurality of fixed-point light sources; matching each fixed-point light source to a display area; modeling the exhibits and, once modeling is complete, matching each exhibit to its corresponding display area; and establishing a participant model with a projection light source set for the participant's viewing angle. The fixed-point light sources provide fill light for the exhibits, while the projection angle of each projection light source can be adjusted, so that a participant changes the brightness of an exhibit by adjusting the light-source angle. Projecting from different angles lets participants view an exhibit from all sides, improving the exhibit-viewing experience in a cloud exhibition.

Description

Cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence
Technical Field
The invention relates to the technical field of virtual world image rendering, in particular to a cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence.
Background
An exhibition is a promotional activity held to showcase products and technologies, expand sales channels, promote sales, and build brand awareness; traditionally, people must attend in person. With the rise of the metaverse, virtual-world systems have steadily matured: people from different regions can gather in a shared virtual space and communicate face to face, improving the effectiveness of their exchanges. Building such a virtual space on a network produces a cloud exhibition, in which people view exhibits online.
At present, viewing exhibits in a cloud exhibition is much like viewing pictures: the lighting and the viewpoint never change, so visitors often cannot obtain the viewing experience they want. This reduces the appeal of attending an exhibition online and degrades the exhibit-viewing experience.
Disclosure of Invention
The invention aims to provide an artificial-intelligence-based cloud exhibition three-dimensional modeling and rendering method that solves the problems described in the background art.
To this end, the invention provides an artificial-intelligence-based cloud exhibition three-dimensional modeling and rendering method comprising the following steps:
S1, building the spatial structure of the exhibition with three-dimensional modeling techniques, and placing a plurality of fixed-point light sources during construction;
S2, modeling the display areas within the modeled exhibition and, once modeling is complete, matching each fixed-point light source to a display area to determine its light projection intensity;
S3, modeling the exhibits, matching each exhibit to its corresponding display area once modeling is complete, and then linking its light intensity to the fixed-point light sources;
S4, establishing a participant model, and setting a projection light source for the participant's viewing angle.
As a further improvement of this technical scheme, the fixed-point light sources and the projection light sources provide fill light for the modeled exhibits, and the plurality of fixed-point light sources jointly control the light intensity inside the exhibition while the exhibits are being lit.
As a further improvement of this technical scheme, the projection light source has three fill-light states for the exhibits:
State one: the projection light source is turned off, and the exhibits receive fill light from the fixed-point light sources alone;
State two: the projection light source is turned on, and fill light follows the participant's viewing angle;
State three: the projection light source is turned on and fills light from a position selected by the participant, while the participant's viewing angle remains free to move during the fill-light process.
As a further improvement of this technical scheme, the fixed-point light sources emit light visible to all participants, whereas each projection light source is rendered one-to-one for its own participant; a participant's projection light source is not rendered to other participants.
As a further improvement of this technical scheme, when the display areas are modeled in S2, the fixed-point light sources of two adjacent display areas do not cast fill light into each other's areas, and the light intensity is the same at every position inside a display area.
As a further improvement of this technical scheme, when a participant views multiple exhibits in the exhibition, an LOD (level-of-detail) model generation algorithm is used to account for changes in the viewing distance to each exhibit.
As a further improvement of this technical scheme, when the LOD model generation algorithm displays exhibits at different distances, a geometric-element deletion algorithm meshes the model, simplifies the original model according to the participant's distance and viewing angle, and deletes vertices.
As a further improvement of this technical scheme, the geometric-element deletion algorithm comprises the following steps:
A1, compute the local geometric and topological characteristics of each vertex in the triangular mesh, and classify the vertices;
A2, delete a vertex when its distance to the average plane of its neighbours is smaller than a given error threshold;
A3, locally re-triangulate the hole left after the vertex is deleted;
A4, repeat A1-A3 until no deletable vertex remains in the mesh.
As a further improvement of this technical scheme, after the geometric-element deletion algorithm has removed vertices, a mesh simplification algorithm operates on the original model so that distant exhibit images can be displayed. Its steps are:
B1, compute the error matrix Q of each vertex in the original model;
B2, select the valid vertex pairs to fold;
B3, for each vertex pair (v1, v2), compute the optimal new vertex v' that replaces v1 and v2;
B4, place all vertex pairs in a heap ordered by folding cost, with the minimum cost at the top;
B5, repeatedly pop the minimum-cost vertex pair (v1, v2) from the heap, fold it, and update the costs of the affected vertex pairs.
Compared with the prior art, the invention has the following beneficial effects:
1. In this artificial-intelligence-based cloud exhibition three-dimensional modeling and rendering method, the fixed-point light sources provide fill light for the exhibits, and the projection angle of each projection light source can be adjusted, so a participant changes an exhibit's brightness by adjusting the light-source angle. Projecting from different angles lets participants view an exhibit from all sides, improving the exhibit-viewing experience in a cloud exhibition.
2. In this method, the LOD model algorithm controls the level of detail of the modeled exhibits by distance, so the same exhibit appears with different clarity at different viewing distances. Viewing therefore feels like looking at exhibits in the real world, which increases the realism of the cloud exhibition and improves the viewing experience.
Drawings
Fig. 1 is a schematic overall structure of embodiment 1 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Example 1: the invention provides an artificial-intelligence-based cloud exhibition three-dimensional modeling and rendering method. Referring to fig. 1, it comprises the following steps:
S1, building the spatial structure of the exhibition with three-dimensional modeling techniques, and placing a plurality of fixed-point light sources during construction;
S2, modeling the display areas within the modeled exhibition and, once modeling is complete, matching each fixed-point light source to a display area to determine its light projection intensity;
When the display areas are modeled, the fixed-point light sources of two adjacent display areas do not cast fill light into each other's areas. This prevents the light of the two areas from crossing and illuminating an exhibit inconsistently, which would also increase the system's light-projection workload. At the same time, the light intensity is the same at every position inside a display area, so the fixed-point light sources illuminate the whole display area evenly;
while the exhibits are being lit, the plurality of fixed-point light sources control the overall light intensity in the exhibition, so that excessive brightness does not impair viewing of the exhibits;
S3, modeling the exhibits, matching each exhibit to its corresponding display area once modeling is complete, and then linking its light intensity to the fixed-point light sources, so that the fixed-point light sources and the projection light sources together provide fill light for the modeled exhibits and participants can view them clearly;
s4, establishing a participant model, and setting a projection light source for a visual view angle of the participant.
The projection light source provides fill light for the exhibits in one of three states:
State one: the projection light source is turned off and the exhibits are lit by the fixed-point light sources alone. The participant views the exhibits under fixed-point light only, so in this state the light on an exhibit does not change as the participant's viewing angle changes;
State two: the projection light source is turned on and fills light according to the participant's viewing angle, working together with the fixed-point light sources so that the exhibit shows differences in light intensity. The projection light source moves with the participant's gaze, so the exhibit looks different from different viewing angles, which improves the viewing experience;
State three: the projection light source is turned on and fills light from a position selected by the participant, while the participant's viewing angle remains free. The illumination direction is fixed by the participant, so the projection light source lights the exhibit at a fixed angle while the participant moves around to see how the exhibit appears under that fixed light;
Participants control and adjust the illumination angle of the projection light source as they see fit, which makes viewing the exhibits convenient.
Meanwhile, if every participant's projection light source shone on an exhibit, there would be too much light on it, harming every participant's view. To avoid this, the fixed-point light sources are defined as publicly visible light: every participant who enters the exhibition sees them, and they also illuminate the exhibition itself. Each projection light source, by contrast, is rendered one-to-one for its own participant and is not rendered to other participants. A participant therefore sees only their own projected light, multiple projection light sources never appear on the same exhibit, and the exhibit-viewing experience in the cloud exhibition is preserved.
Meanwhile, to improve the overall impression of the exhibition, multiple display areas are built together and combined into one large area. As a result, the exhibits in a display area sit at different distances from a participant, and if every exhibit were displayed at the same scale the scene would look unreal. To reduce this artificiality and improve the realism of viewing, an LOD (level-of-detail) model generation algorithm is applied when a participant views multiple exhibits: when the participant's viewpoint is close to an object, rich model detail is visible; as the viewpoint moves away, the observed detail gradually blurs. Building LOD models effectively reduces data volume and complexity, enables real-time processing of the three-dimensional scene, and continuously adjusts the clarity of the exhibits as the participant moves, enhancing the realism of viewing.
Meanwhile, when participants view exhibits at different distances in the exhibition, the level of detail displayed is chosen according to the participant's viewing distance and angle. This avoids the excessive computational load of rendering full detail everywhere, while the differing clarity at different distances increases the realism of the exhibition.
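Distance-driven detail selection of this kind can be sketched as a threshold lookup. The threshold values below are illustrative assumptions, not figures from the patent:

```python
def select_lod(distance, thresholds=(5.0, 15.0, 40.0)):
    """Pick an LOD index from viewer distance.

    0 = full-detail model, higher indices = progressively simplified meshes,
    len(thresholds) = coarsest. The threshold values are illustrative only.
    """
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)
```

A renderer would call this each frame with the participant-to-exhibit distance and swap in the mesh simplified by the algorithms below.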
When displaying exhibits at different distances, the LOD model generation algorithm uses a geometric-element deletion algorithm: it meshes the model, simplifies the original model according to the participant's distance and viewing angle, and deletes vertices.
The geometric-element deletion algorithm comprises the following steps:
A1, compute the local geometric and topological characteristics of each vertex in the triangular mesh, and classify the vertices;
A2, delete a vertex when its distance to the average plane of its neighbours is smaller than a given error threshold;
A3, locally re-triangulate the hole left after the vertex is deleted;
A4, repeat A1-A3 until no deletable vertex remains in the mesh.
After the geometric-element deletion algorithm has removed vertices, a mesh simplification algorithm operates on the original model so that distant exhibit images can be displayed. Its steps are:
B1, compute the error matrix Q of each vertex in the original model;
B2, select the valid vertex pairs to fold;
B3, for each vertex pair (v1, v2), compute the optimal new vertex v' that replaces v1 and v2;
B4, place all vertex pairs in a heap ordered by folding cost, with the minimum cost at the top;
B5, repeatedly pop the minimum-cost vertex pair (v1, v2) from the heap, fold it, and update the costs of the affected vertex pairs.
The key points of this algorithm are the computation of the vertex error matrix Q in B1 and the choice of the new vertex position v' in B3, for which several methods exist. The error matrix takes as its error measure the sum of squared distances from the point to its associated planes. Specifically:

In three-dimensional Euclidean space a plane is expressed as $ax + by + cz + d = 0$, with plane normal $\mathbf{n} = (a, b, c)^{T}$ and $a^{2} + b^{2} + c^{2} = 1$. Writing the plane as $\mathbf{p} = (a, b, c, d)^{T}$ and a point in homogeneous coordinates as $\mathbf{v} = (x, y, z, 1)^{T}$, the squared distance from the point to the plane is

$$D(\mathbf{v}) = (\mathbf{p}^{T}\mathbf{v})^{2} = \mathbf{v}^{T}(\mathbf{p}\mathbf{p}^{T})\mathbf{v} = \mathbf{v}^{T}K_{p}\mathbf{v}, \qquad K_{p} = \mathbf{p}\mathbf{p}^{T}.$$

For the set of triangles associated with a vertex, the quadric error measure of that vertex is

$$\Delta(\mathbf{v}) = \sum_{p \in \mathrm{planes}(\mathbf{v})} \mathbf{v}^{T}K_{p}\mathbf{v} = \mathbf{v}^{T}\Big(\sum_{p} K_{p}\Big)\mathbf{v} = \mathbf{v}^{T}Q\,\mathbf{v},$$

where $Q = \sum_{p} K_{p}$. Equivalently, $Q$ can be written as the triple $(A, \mathbf{b}, c)$, where

$$A = \sum_{p} \mathbf{n}\mathbf{n}^{T}, \qquad \mathbf{b} = \sum_{p} d\,\mathbf{n}, \qquad c = \sum_{p} d^{2},$$

so that $\Delta(\mathbf{v}) = \mathbf{v}^{T}A\mathbf{v} + 2\mathbf{b}^{T}\mathbf{v} + c$ for $\mathbf{v} \in \mathbb{R}^{3}$. $Q$ is called the error matrix, and $\Delta(\mathbf{v})$ the quadric (secondary) error.

For the position of the new vertex v', an optimal selection method is used: the spatial position $\mathbf{v}'$ is computed so that the error cost of the folding operation is minimized. $\mathbf{v}'$ may be searched for on the segment joining the two vertices, or over the whole space; the latter attains the minimum error cost, and setting $\nabla\Delta(\mathbf{v}') = 0$ gives the new vertex coordinates

$$\mathbf{v}' = -A^{-1}\mathbf{b}.$$
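The error matrix Q from B1 and the optimal replacement vertex v' from B3 can be sketched in the standard quadric-error-metric form. This is an illustrative sketch under that formulation, not the patent's own code:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p = p p^T for the plane ax+by+cz+d=0, a^2+b^2+c^2=1."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Quadric error v^T Q v of the homogeneous point v = (x, y, z, 1)."""
    h = np.append(np.asarray(v, dtype=float), 1.0)
    return float(h @ Q @ h)

def optimal_vertex(Q):
    """Position minimising the quadric error: solve the 4x4 system whose last
    row is replaced by (0, 0, 0, 1), the standard whole-space optimum."""
    A = Q.copy()
    A[3] = [0.0, 0.0, 0.0, 1.0]
    v = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
    return v[:3]
```

In a full simplifier, Q for a folded pair is the sum of the two vertices' matrices, and the fold cost is `vertex_error(Q, optimal_vertex(Q))`.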

Claims (9)

1. An artificial-intelligence-based cloud exhibition three-dimensional modeling and rendering method, characterized by comprising the following steps:
s1, building a spatial structure of an exhibition through a three-dimensional modeling technology, and setting a plurality of fixed-point light sources in the building process;
s2, modeling a display area in the modeled exhibition, and after the modeling is completed, matching the fixed-point light source with the display area to determine the light projection intensity;
s3, modeling the exhibited product, carrying out matching display on the exhibited product and the corresponding display area after the modeling is completed, and carrying out light intensity connection with the fixed point light source after the matching is completed;
s4, establishing a participant model, and setting a projection light source for a visual view angle of the participant.
2. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 1, wherein: the fixed point light sources and the projection light sources supplement light to the modeled exhibit, and the plurality of fixed point light sources control the light intensity in the exhibit during the light supplement process.
3. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 2, wherein: the projection light source has three states in supplementing light to the exhibited article:
state one: the projection light source is turned off, and light supplementing operation is carried out on the exhibited articles through the fixed-point light source;
state two: the projection light source is turned on, and the display is supplemented according to the visual angle of the participants;
state three: the projection light source is turned on, light is supplemented to the exhibited item according to the position selected by the participant, and the viewing angle of the visual angle of the participant is in an unfixed state in the light supplementing process of the projection light source.
4. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 1, wherein: the fixed-point light sources emit light visible to all participants, each projection light source is displayed one-to-one with its own participant, and participants' projection light sources are not displayed to one another.
5. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 1, wherein: in S2, when modeling the display area, the fixed point light sources between two adjacent display areas do not perform complementary light, and the light intensities of the positions inside the display areas are the same.
6. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 1, wherein: when participants watch a plurality of exhibits in the exhibition, an LOD model generation algorithm is adopted to handle the change in viewing angle and distance of the exhibits.
7. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 6, wherein: when the LOD model generation algorithm displays products with different distances, a geometric element deletion algorithm is adopted to grid the model, the original modeling model is simplified through the distances and the viewing angles of the participants, and the vertexes are deleted.
8. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 7, wherein: the geometrical element deletion algorithm comprises the following steps:
a1, calculating local geometric and topological characteristics of each given vertex in the triangular mesh, and classifying the vertices;
a2, deleting the vertex when the distance from the point to the average plane is smaller than a given error value;
a3, carrying out local triangularization on the cavity left after the vertex is deleted;
a4, repeating the operations of A1, A2 and A3 until no vertex which can be deleted exists in the triangle mesh.
9. The artificial intelligence based cloud exhibition three-dimensional modeling and rendering method of claim 8, wherein: after the geometric element deleting algorithm deletes the vertexes, the original model is calculated by adopting a grid simplifying algorithm, so that the remote exhibit image is displayed, and the algorithm comprises the following steps:
b1, calculating an error matrix Q of each vertex in the original model;
b2, selecting effective folding vertex pairs;
b3, for each vertex pair (v 1 ,v 2 ) Calculate the optimal for replacing v 1 ,v 2 New vertex v';
b4, placing all vertexes in the pile according to the folding cost sequence, and placing the minimum cost at the top;
b5 repeatedly connecting the vertex pairs (v 1 ,v 2 ) Output from the heap, fold, modify the cost of the affected vertex pairs.
CN202310390117.4A 2023-04-13 2023-04-13 Cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence Active CN116129058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310390117.4A CN116129058B (en) 2023-04-13 2023-04-13 Cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN116129058A true CN116129058A (en) 2023-05-16
CN116129058B CN116129058B (en) 2024-06-21

Family

ID=86297677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310390117.4A Active CN116129058B (en) 2023-04-13 2023-04-13 Cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116129058B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09319894A (en) * 1996-06-03 1997-12-12 Hitachi Ltd Method and device for three-dimensional cg image using plural light sources
KR200355011Y1 (en) * 2004-04-20 2004-07-01 염인철 An image expressing device use light with natural color
US20150325035A1 (en) * 2013-01-31 2015-11-12 Dirtt Environmental Solutions Inc. Method and system for efficient modeling of specular reflection
CN109597207A (en) * 2019-01-29 2019-04-09 京东方科技集团股份有限公司 Light compensating apparatus and method, the VR helmet of VR Eye-controlling focus
CN113298924A (en) * 2020-08-28 2021-08-24 阿里巴巴集团控股有限公司 Scene rendering method, computing device and storage medium
CN113332714A (en) * 2021-06-29 2021-09-03 天津亚克互动科技有限公司 Light supplementing method and device for game model, storage medium and computer equipment
CN115731336A (en) * 2023-01-06 2023-03-03 粤港澳大湾区数字经济研究院(福田) Image rendering method, image rendering model generation method and related device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Peng Lei, Dai Guangming: "Research on LOD model generation algorithms", Microcomputer Development, no. 04, pp. 119-120 *
Li Pei: "Research on museum digital modeling and scene optimization based on virtual environments", China Masters' Theses Full-text Database, Information Science and Technology, pp. 138-884 *
Li Lei: "LOD simplification and Web visualization of Revit 3D models", China Masters' Theses Full-text Database, Engineering Science and Technology II, pp. 038-201 *

Also Published As

Publication number Publication date
CN116129058B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN103337095B (en) The tridimensional virtual display methods of the three-dimensional geographical entity of a kind of real space
CN107705241B (en) Sand table construction method based on tile terrain modeling and projection correction
Agrawala et al. Artistic multiprojection rendering
US7796134B2 (en) Multi-plane horizontal perspective display
CN108830939B (en) Scene roaming experience method and experience system based on mixed reality
US20060152579A1 (en) Stereoscopic imaging system
CN103077552B (en) A kind of three-dimensional display method based on multi-view point video
CN101968892A (en) Method for automatically adjusting three-dimensional face model according to one face picture
US20060221071A1 (en) Horizontal perspective display
CN112530005B (en) Three-dimensional model linear structure recognition and automatic restoration method
CN104702936A (en) Virtual reality interaction method based on glasses-free 3D display
US20060250390A1 (en) Horizontal perspective display
CN104599305A (en) Two-dimension and three-dimension combined animation generation method
CN116152417B (en) Multi-viewpoint perspective space fitting and rendering method and device
CN104581119A (en) Display method of 3D images and head-wearing equipment
CN103077546A (en) Three-dimensional perspective transforming method of two-dimensional graphics
CN108881886A (en) A method of it is realized based on camera Matrix Technology and carries out the lossless interactive application of big data in display end
JP4996922B2 (en) 3D visualization
CN103455299A (en) Large-wall stereographic projection method
CN116129058B (en) Cloud exhibition three-dimensional modeling and rendering method based on artificial intelligence
Zhou et al. 3DPS: An auto-calibrated three-dimensional perspective-corrected spherical display
CN106204710A (en) The method that texture block based on two-dimensional image comentropy is mapped to three-dimensional grid model
CN103093491A (en) Three-dimensional model high sense of reality virtuality and reality combination rendering method based on multi-view video
TW494366B (en) Three dimensional graphics drawing apparatus and method thereof
CN102467747A (en) Building decoration animation three-dimensional (3D) effect processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240520

Address after: Room 301, No. 1585 Chenxiang Road, Chongming District, Shanghai, 202150 (Yongguan Economic Development Zone, Shanghai)

Applicant after: Shanghai Miaowen Creative Technology Co.,Ltd.

Country or region after: China

Address before: W57, 4th Floor, Lianhui Building, No. 5 Xingong Village, Sanlian Community, Longhua Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Applicant before: Tulin Technology (Shenzhen) Co.,Ltd.

Country or region before: China

GR01 Patent grant