CN111080807A - Method for adjusting model transparency - Google Patents
- Publication number
- CN111080807A CN111080807A CN201911347129.9A CN201911347129A CN111080807A CN 111080807 A CN111080807 A CN 111080807A CN 201911347129 A CN201911347129 A CN 201911347129A CN 111080807 A CN111080807 A CN 111080807A
- Authority
- CN
- China
- Prior art keywords
- model
- target model
- transparency
- models
- adjusting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The invention discloses a method for adjusting model transparency, comprising the following steps. Step 1: selecting the models to be processed and manually selecting a target model, the software automatically rendering all models other than the target model transparent while retaining only the wireframe of the building, so that the actual position of the target model within the overall model can be easily viewed. Step 2: dynamically determining the position of the target model according to the movement of the scene camera, the target model remaining within the camera's visible area. Step 3: automatically selecting the target model according to its dynamic data and rendering the models other than the target model transparent. By automatically identifying the target model and rendering the models that occlude it transparent, the invention enables a user to locate and inspect the target model more quickly while reducing user operations.
Description
Technical Field
The invention relates to the technical field of software processing, and in particular to a method for adjusting model transparency.
Background
With the development of software technology, three-dimensional visualization of data has advanced rapidly owing to its intuitiveness and convenience. In fields such as smart cities, smart communities, and smart factories, three-dimensional data visualization is indispensable.
In such application scenarios, three-dimensional modeling of buildings is the basis of three-dimensional visualization. Because building structures are complex and carry various attached equipment and facilities, the target model is often occluded by other models when a user views a building.
Existing approaches typically reveal the target model through user interaction, such as rotating the model or moving the scene camera, which is cumbersome and time-consuming.
Disclosure of Invention
In view of the above, the present invention provides a method for adjusting model transparency.
To achieve this purpose, the invention adopts the following technical solution:
a method of adjusting transparency of a model, comprising the steps of:
step 1: selecting the models to be processed and manually selecting a target model, the software automatically rendering all models other than the target model transparent while retaining only the wireframe of the building, so that the actual position of the target model within the overall model can be easily viewed;
step 2: dynamically determining the position of the target model according to the movement of the scene camera, and rendering the other models within the camera's visible area transparent;
step 3: automatically selecting the target model according to its dynamic data, and rendering the models other than the target model transparent.
Preferably, the transparency processing in step 1 further comprises adjusting the transparency, the transparency being a value between 0 and 100 percent.
Preferably, when viewing the model in live-action mode, the transparency of the model is 0; when viewing the model in perspective mode, the preferred initial transparency is 30%.
Preferably, in step 2, the position of the target model is dynamically determined according to the movement of the scene camera, and the other models within the camera's visible area are rendered transparent. Models outside the visible area are left unprocessed, reducing the resources consumed by re-rendering.
According to the above technical solution, the invention discloses a method for adjusting model transparency in which the target model is identified automatically and the models occluding it are rendered transparent, enabling a user to locate and inspect the target model more quickly while reducing user operations.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. It is evident that the following drawings depict only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for adjusting model transparency according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments derived by those skilled in the art without creative effort shall fall within the protection scope of the invention.
Referring to FIG. 1, embodiment 1 of the present invention provides a method for adjusting model transparency, comprising the following steps:
step 1: selecting the models to be processed and manually selecting a target model, the software automatically rendering all models other than the target model transparent while retaining only the wireframe of the building, so that the actual position of the target model within the overall model can be easily viewed;
step 2: dynamically determining the position of the target model according to the movement of the scene camera, and rendering the other models within the camera's visible area transparent;
step 3: automatically selecting the target model according to its dynamic data, and rendering the models other than the target model transparent.
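Steps 1 and 3 can be sketched as follows. This is a minimal illustration only: the `Model` class, the `apply_transparency` function, and the `wireframe_only` flag are hypothetical names introduced here, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    transparency: float = 0.0    # 0.0 = fully opaque, 1.0 = fully transparent
    wireframe_only: bool = False

def apply_transparency(models, target, alpha=0.3):
    """Render every model except the target transparent, keeping only
    the building wireframe, so the target's position stays visible."""
    for m in models:
        if m is not target:
            m.transparency = alpha
            m.wireframe_only = True
    return models
```

The same routine serves both the manual selection of step 1 and the automatic selection of step 3; only the way `target` is chosen differs.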
In a specific embodiment, the transparency processing in step 1 further comprises adjusting the transparency, the transparency being a value between 0 and 100 percent.
In a specific embodiment, when viewing the model in live-action mode, the transparency of the model is 0; when viewing the model in perspective mode, the preferred initial transparency is 30%.
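The preferred values above can be captured in a small helper. The function names and mode keys are hypothetical; only the numeric values come from the embodiment.

```python
def initial_transparency(mode: str) -> float:
    """Initial transparency per viewing mode, using the embodiment's values:
    0 in live-action mode, 30% in perspective mode."""
    defaults = {"live_action": 0.0, "perspective": 0.30}
    return defaults[mode]

def clamp_transparency(percent: float) -> float:
    """User-adjusted transparency is a value between 0 and 100 percent."""
    return min(100.0, max(0.0, percent))
```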
In a specific embodiment, in step 2, the position of the target model is dynamically determined according to the movement of the scene camera, and the other models within the camera's visible area are rendered transparent. Models outside the visible area are left unprocessed, reducing the resources consumed by re-rendering.
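The visible-area restriction of step 2 can be sketched as below, under stated assumptions: the `Camera`, `Model`, and `process_visible` names are illustrative, and a circular distance check stands in for a real engine's frustum test.

```python
import math
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    pos: tuple               # (x, y) position in the scene
    transparency: float = 0.0

@dataclass
class Camera:
    x: float
    y: float
    view_radius: float       # simplified circular visible area

def in_view(cam: Camera, m: Model) -> bool:
    # A real engine would test against the camera's frustum planes;
    # a plain distance check stands in for that here.
    return math.hypot(m.pos[0] - cam.x, m.pos[1] - cam.y) <= cam.view_radius

def process_visible(cam: Camera, models, target: Model, alpha: float = 0.3):
    """Render non-target models inside the visible area transparent;
    models outside the area are skipped to avoid re-rendering cost."""
    touched = []
    for m in models:
        if m is target or not in_view(cam, m):
            continue
        m.transparency = alpha
        touched.append(m.name)
    return touched
```

Calling `process_visible` again after each camera movement re-evaluates visibility, matching the dynamic determination described in step 2.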
The embodiments in this description are described progressively; each embodiment focuses on its differences from the others, and identical or similar parts may be cross-referenced among them. Since the device disclosed in the embodiments corresponds to the disclosed method, its description is kept brief; refer to the method description for the relevant details.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (4)
1. A method for adjusting model transparency, comprising the following steps:
step 1: selecting the models to be processed and manually selecting a target model, the software automatically rendering all models other than the target model transparent while retaining only the wireframe of the building, so that the actual position of the target model within the overall model can be easily viewed;
step 2: dynamically determining the position of the target model according to the movement of the scene camera, and rendering the other models within the camera's visible area transparent;
step 3: automatically selecting the target model according to its dynamic data, and rendering the models other than the target model transparent.
2. The method for adjusting model transparency according to claim 1, wherein the transparency processing in step 1 further comprises adjusting the transparency, the transparency being a value between 0 and 100 percent.
3. The method for adjusting model transparency according to claim 2, wherein when viewing the model in live-action mode, the transparency of the model is 0, and when viewing the model in perspective mode, the preferred initial transparency is 30%.
4. The method for adjusting model transparency according to claim 2, wherein in step 2 the position of the target model is dynamically determined according to the movement of the scene camera, the other models within the camera's visible area are rendered transparent, and models outside the visible area are left unprocessed to reduce the resources consumed by re-rendering.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911347129.9A CN111080807A (en) | 2019-12-24 | 2019-12-24 | Method for adjusting model transparency |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911347129.9A CN111080807A (en) | 2019-12-24 | 2019-12-24 | Method for adjusting model transparency |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111080807A true CN111080807A (en) | 2020-04-28 |
Family
ID=70317265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911347129.9A Pending CN111080807A (en) | 2019-12-24 | 2019-12-24 | Method for adjusting model transparency |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111080807A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114937118A (en) * | 2022-06-09 | 2022-08-23 | 北京新唐思创教育科技有限公司 | Model conversion method, apparatus, device and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006067714A2 (en) * | 2004-12-20 | 2006-06-29 | Koninklijke Philips Electronics N.V. | Transparency change of view-obscuring objects |
CN103842042A (en) * | 2012-11-20 | 2014-06-04 | 齐麟致 | Information processing method and information processing device |
US20150130788A1 (en) * | 2012-11-07 | 2015-05-14 | Zhou Bailiang | Visualize the obscure object in 3d space |
CN105389847A (en) * | 2015-11-06 | 2016-03-09 | 网易(杭州)网络有限公司 | Drawing system and method of 3D scene, and terminal |
CN107396069A (en) * | 2017-09-01 | 2017-11-24 | 三筑工科技有限公司 | Monitor methods of exhibiting, apparatus and system |
- 2019-12-24: CN application CN201911347129.9A filed; publication CN111080807A; status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||