CN105205860A - Display method and device for three-dimensional model scene - Google Patents

Display method and device for three-dimensional model scene

Info

Publication number
CN105205860A
CN105205860A (application CN201510642707.7A)
Authority
CN
China
Prior art keywords
dimensional model
model
default
viewing
fidelity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510642707.7A
Other languages
Chinese (zh)
Other versions
CN105205860B (en
Inventor
江春华
陈晓龙
罗新伟
陈显龙
方文
陈少坤
庞中山
李坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Forever Technology Co Ltd
Original Assignee
Beijing Forever Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Forever Technology Co Ltd filed Critical Beijing Forever Technology Co Ltd
Priority to CN201510642707.7A priority Critical patent/CN105205860B/en
Publication of CN105205860A publication Critical patent/CN105205860A/en
Application granted granted Critical
Publication of CN105205860B publication Critical patent/CN105205860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a display method for a three-dimensional model scene. A person model is placed in a canvas space; after the person model moves to a position, its eye position and sight direction are determined. A conical viewing region is constructed with the eye position as apex and a preset angle as central angle; first-type three-dimensional models inside the conical viewing region are found and displayed, the region outside the conical viewing region is determined as a non-viewing region, and second-type three-dimensional models in the non-viewing region are found and displayed, the fidelity of the first-type three-dimensional models being higher than that of the second-type three-dimensional models. Thus the loaded models are not all high-fidelity models: by simulating the user's viewing experience, regions the line of sight cannot reach are loaded with low-fidelity models, which improves the loading efficiency of the model scene and makes the displayed scene better match the user's viewing experience. An embodiment of the invention further provides a display device for the three-dimensional model scene.

Description

Display method and device for a three-dimensional model scene
Technical field
The application relates to the field of three-dimensional model display and, more specifically, to a display method and device for a three-dimensional model scene.
Background art
A three-dimensional model is a polygon representation of an object; it can be displayed on terminal devices such as computers and used to simulate objects in the real world. Further, displaying multiple three-dimensional models in a canvas space set up on a terminal device can simulate a real-world scene. For example, displaying a lawn model, goal models, a football model, grandstand models and so on in the canvas space can build a football-pitch scene.
In particular, the generated model scene can include a person model, which represents the user viewing the model scene. Through the input devices of the terminal, the user can control the movement of the person model; as the person model moves, the canvas space displays different regions of the simulated scene, thereby simulating the real objects a user would see while moving through the real scene.
At present, the three-dimensional models loaded in a model scene are all of the same type, namely higher-fidelity models, which makes the display speed of the simulated scene slow.
Summary of the invention
In view of this, the application provides a display method for a three-dimensional model scene, to solve the technical problem that existing scene-loading approaches are slow. In addition, the application also provides a display device for a three-dimensional model scene, to guarantee the practical application and realization of the method.
To achieve this object, the technical scheme provided by the application is as follows:
A first aspect of the application provides a display method for a three-dimensional model scene, comprising:
in a preset canvas space, determining the eye position and sight direction of a preset person model according to the user's movement operations on the person model, wherein the preset person model is displayed in the preset canvas space;
determining a conical viewing region with the eye position as apex, a preset angle as central angle and the sight direction as the direction of the central angle, and determining the region outside the conical viewing region as a non-viewing region;
among multiple preset three-dimensional models, finding first-type three-dimensional models located in the conical viewing region and finding second-type three-dimensional models located in the non-viewing region, wherein the fidelity of the first-type three-dimensional models is higher than the fidelity of the second-type three-dimensional models;
displaying the first-type and second-type three-dimensional models in the preset canvas space to obtain the target three-dimensional model scene.
A second aspect of the application provides a display device for a three-dimensional model scene, comprising:
an eye position and direction determination module, configured to determine, in a preset canvas space, the eye position and sight direction of a preset person model according to the user's movement operations on the person model, wherein the preset person model is displayed in the preset canvas space;
a canvas-space viewing-region division module, configured to determine a conical viewing region with the eye position as apex, a preset angle as central angle and the sight direction as the direction of the central angle, and to determine the region outside the conical viewing region as a non-viewing region;
a first-type three-dimensional model search module, configured to find, among multiple preset three-dimensional models, first-type three-dimensional models located in the conical viewing region;
a second-type three-dimensional model search module, configured to find, among the multiple preset three-dimensional models, second-type three-dimensional models located in the non-viewing region, wherein the fidelity of the first-type three-dimensional models is higher than the fidelity of the second-type three-dimensional models;
a different-type three-dimensional model display module, configured to display the first-type and second-type three-dimensional models in the preset canvas space to obtain the target three-dimensional model scene.
From the above technical scheme, the application has the following advantages:
The application provides a display method embodiment for a three-dimensional model scene. In this embodiment a canvas space is set up in advance with a person model in it. After the person model moves to a position, its eye position and sight direction are determined; with the eye position as apex and a preset angle as central angle, a conical viewing region is built along the sight direction. Among multiple preset three-dimensional models, the first-type three-dimensional models located in the conical viewing region are found and displayed; the region outside the conical viewing region is determined as a non-viewing region, and the second-type three-dimensional models located there are found and displayed, the fidelity of the first-type models being higher than that of the second-type models. As can be seen, the generated scene does not consist entirely of high-fidelity models: by simulating the user's viewing experience, regions the line of sight cannot reach are loaded with lower-fidelity models, which not only improves the loading speed of the model scene but also makes the displayed scene better match the user's viewing experience.
Of course, a product implementing the application does not necessarily need to achieve all of the above advantages simultaneously.
Brief description of the drawings
To explain the embodiments of the application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of the display method embodiment for a three-dimensional model scene provided by the application;
Fig. 2 is a flowchart of a specific implementation, provided by the application, of setting different models in the conical viewing region according to sight distance;
Fig. 3 is a flowchart of a specific implementation, provided by the application, of setting different models in the viewing buffer regions;
Fig. 4 is a flowchart of a specific implementation, provided by the application, of setting the models contained inside a space body;
Fig. 5 is a flowchart of a specific implementation, provided by the application, of setting occluded models;
Fig. 6 is a structural block diagram of the display device embodiment for a three-dimensional model scene provided by the application.
Detailed description of the embodiments
The technical schemes in the embodiments of the application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the application.
Referring to Fig. 1, which shows the flow of the display method embodiment for a three-dimensional model scene provided by the application. As shown in Fig. 1, this embodiment can specifically comprise steps S101 to S104.
Step S101: in a preset canvas space, determine the eye position and sight direction of a preset person model according to the user's movement operations on the person model, wherein the preset person model is displayed in the preset canvas space.
A canvas space is set up in advance, and a person model is placed in it to simulate a real person, namely the user currently viewing the model scene. The user can control the movement of the person model through input devices such as a mouse or keyboard; according to the position the person model moves to, this embodiment simulates the different model scenes a real person would see at different locations.
Specifically, a spatial coordinate system corresponding to the canvas space can be set up, and the eye position of the person model, i.e. the position of its eyes, can be represented by a three-dimensional coordinate in that system. The Z value of the coordinate can be a preset value representing the eye height of the person model on level ground; of course, if the person model is moved onto something of a certain height, the eye-height value changes accordingly. In addition, when the person model moves forward, backward, left or right, the X and Y values of the eye-position coordinate change correspondingly.
At a given current moment, the eye position of the person model in the canvas space is obtained, as is its sight direction. The sight direction is the direction the person model faces and can be expressed as an angle relative to a reference direction. For example, if the reference direction is the X direction, i.e. straight ahead, then the sight direction represents the angle between the person model's line of sight and straight ahead.
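The viewpoint state just described (a three-dimensional eye position whose Z value defaults to a preset eye height on level ground, plus a sight direction stored as an angle from the reference X direction) can be sketched as follows. This is a minimal illustration, not part of the patent; the class, field names and the eye-height value are all assumed.

```python
import math

class PersonModel:
    """Minimal sketch of the preset person model's viewpoint state."""
    EYE_HEIGHT = 1.6  # assumed preset eye-height value on level ground

    def __init__(self, x=0.0, y=0.0, sight_angle_deg=0.0):
        # Z defaults to the preset eye height; X/Y track horizontal position.
        self.x, self.y, self.z = x, y, self.EYE_HEIGHT
        self.sight_angle_deg = sight_angle_deg

    def move(self, dx, dy):
        # Forward/backward/left/right movement changes only X and Y.
        self.x += dx
        self.y += dy

    def sight_vector(self):
        # Sight direction as a unit vector in the horizontal plane,
        # measured from the reference X direction (straight ahead).
        rad = math.radians(self.sight_angle_deg)
        return (math.cos(rad), math.sin(rad))
```

A sight angle of 0 degrees thus means the person model looks along the X axis, matching the "straight ahead" reference above.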
Step S102: with the eye position as apex, a preset angle as central angle and the sight direction as the direction of the central angle, determine a conical viewing region, and determine the region outside the conical viewing region as a non-viewing region.
The generated three-dimensional model scene is meant to be viewed by humans. It will be appreciated that the human eye has a certain angular field of view, generally a value between 90° and 120°; therefore, a value within this range can be taken as the preset angle.
When facing a given direction, the human eye can rotate up, down, left and right, so the viewable field can be regarded as a conical region. Therefore, this embodiment determines a conical viewing region with the eye position of the person model as apex and the preset angle as central angle, where the central angle is the angle between two opposite lines on the cone's lateral surface. The cone also has an orientation, namely the sight direction of the person model.
In the canvas space, the region the person model can view is determined as the conical viewing region; the remaining region, which the person model currently cannot view, is determined as the non-viewing region. In this way the whole canvas space can be regarded as composed of two parts: the conical viewing region and the non-viewing region.
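The division of the canvas space into a conical viewing region and a non-viewing region amounts to an angular test against the sight direction. Below is a minimal two-dimensional sketch of that test (the patent's region is a three-dimensional cone; the horizontal case shows the idea, and all names are illustrative):

```python
import math

def in_viewing_cone(eye, sight_angle_deg, preset_angle_deg, point):
    """Return True if `point` lies inside the conical viewing region whose
    apex is the eye position, whose axis follows the sight direction, and
    whose central angle is the preset angle (e.g. 90-120 degrees)."""
    dx, dy = point[0] - eye[0], point[1] - eye[1]
    if dx == 0 and dy == 0:
        return True  # the apex itself counts as viewable
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed difference between the point's bearing and the sight
    # direction, wrapped into [-180, 180].
    diff = (bearing - sight_angle_deg + 180) % 360 - 180
    # Inside the cone when the off-axis angle is within half the
    # preset central angle.
    return abs(diff) <= preset_angle_deg / 2
```

Points failing this test fall in the non-viewing region, so the two-part division of the canvas space follows directly from one predicate.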
Step S103: among multiple preset three-dimensional models, find the first-type three-dimensional models located in the conical viewing region and the second-type three-dimensional models located in the non-viewing region, wherein the fidelity of the first-type models is higher than that of the second-type models.
Three-dimensional models of the objects in the scene to be simulated are made in advance. For example, if the simulated scene is a school, models of the teaching buildings, the playground, the roads and the trees on both sides of the roads can be made. The three-dimensional models are made with existing three-dimensional drawing tools, such as CAD drawing tools.
It should be noted that each object can have two three-dimensional models of different types with different fidelity, one higher and one lower. For example, the higher-fidelity teaching-building model can include three-dimensional windows, doors, stairs and so on, and may further include the lettering and patterns on the building. By contrast, the lower-fidelity teaching-building model can be a cuboid model whose sides include two-dimensional doors and windows and which omits the lettering and patterns. Of course, this is only an example and the application is not limited to it.
The canvas space having been divided into the conical viewing region and the non-viewing region above, the models located in the conical viewing region are searched for among the higher-fidelity models, the models found being called first-type three-dimensional models, and the models located in the non-viewing region are searched for among the lower-fidelity models, the models found being called second-type three-dimensional models.
Step S104: display the first-type and second-type three-dimensional models in the preset canvas space to obtain the target three-dimensional model scene.
Each three-dimensional model has its own coordinate value representing its position in the canvas space. Therefore, according to their coordinate values, the three-dimensional models can be placed in the canvas space, thereby obtaining the three-dimensional model scene displayed to the user.
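Steps S101 to S104 taken together can be sketched as a loop that, for every preset model, picks the high-fidelity version when the model's coordinate falls in the conical viewing region and the low-fidelity version otherwise. A hedged illustration with assumed names; the two libraries are represented as plain dictionaries:

```python
def assemble_scene(models, in_cone, detailed, simplified):
    """Pick, per preset model (name -> canvas-space coordinate), the
    version from the detailed library when the model lies inside the
    conical viewing region, else from the simplified library."""
    scene = {}
    for name, coord in models.items():
        library = detailed if in_cone(coord) else simplified
        scene[name] = library[name]
    return scene
```

The `in_cone` predicate stands in for the angular test of step S102; any geometric implementation of the cone can be plugged in.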
From the above technical scheme, the application provides a display method embodiment for a three-dimensional model scene. In this embodiment a canvas space is set up in advance with a person model in it. After the person model moves to a position, its eye position and sight direction are determined; with the eye position as apex and a preset angle as central angle, a conical viewing region is built along the sight direction. Among multiple preset three-dimensional models, the first-type three-dimensional models located in the conical viewing region are found and displayed; the region outside the conical viewing region is determined as a non-viewing region, and the second-type three-dimensional models located there are found and displayed, the fidelity of the first-type models being higher than that of the second-type models. As can be seen, the three-dimensional model scene generated by this embodiment does not consist entirely of high-fidelity models: by simulating the user's viewing experience, the regions the line of sight cannot reach are loaded with lower-fidelity models, which not only improves the loading efficiency of the model scene but also makes the displayed scene better match the user's viewing experience.
In the above embodiment, the whole canvas space is divided into two parts, one within reach of the line of sight and one beyond it, and three-dimensional models of different fidelity are displayed in each. The region within reach of the line of sight can also be divided more finely.
It will be appreciated that in real life, when a human views objects in a given direction, object sharpness decreases gradually as the viewing distance increases. Therefore a distance threshold can be preset to divide the conical viewing region into two parts, each displaying three-dimensional models of different fidelity.
Specifically, as shown in Fig. 2, in the above embodiment the specific implementation of step S103's search for first-type three-dimensional models located in the conical viewing region comprises steps S201 to S202.
Step S201: in a preset detailed model library, find the three-dimensional models that are in the conical viewing region and whose distance from the eye position does not exceed a preset distance threshold.
Step S202: in a preset simplified model library, find the three-dimensional models that are in the conical viewing region and whose distance from the eye position exceeds the preset distance threshold, and find the three-dimensional models located in the non-viewing region.
Specifically, two model libraries can be set up in advance: a detailed model library and a simplified model library. Both contain all the three-dimensional models of the simulated scene; only the fidelity of the models differs, the fidelity of three-dimensional models in the preset detailed model library being higher than that of three-dimensional models in the preset simplified model library.
For example, both the detailed and the simplified model library contain the teaching-building model. In the detailed model library the teaching-building model is more realistic, for example including three-dimensional floors, doors, windows, lettering and patterns, while in the simplified model library it is a cuboid whose sides are covered with two-dimensional pictures representing the floors and walls.
As noted above, a distance threshold is preset; it simulates the distance over which a person views clearly. Usually the human eye can clearly view objects within 50 m, so 50 m can be scaled according to the ratio between the simulated scene and the real scene to obtain the preset distance threshold used in the application. Of course, the preset distance threshold can also be another value and is not confined to this one.
Starting from the apex of the conical viewing region and extending the length of the preset distance threshold along the sight direction, the part of the conical viewing region within the threshold is taken as the first sub-region, and the remaining part of the conical viewing region as the second sub-region.
The models located in the first sub-region are found in the detailed model library, and the models located in the second sub-region in the simplified model library. The conical viewing region is thus also divided into two parts: the part within the preset threshold distance of the eye position displays detailed models, and the part beyond it displays rough models. This way of loading models into the canvas can further improve the generation efficiency of the model scene.
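The two-library rule of steps S201 and S202 can be condensed into one selection function: detailed models for the near part of the cone, simplified models for everything else. A sketch under assumed names, with `math.dist` standing in for the scene's own distance computation:

```python
import math

def pick_library(eye, coord, in_cone, distance_threshold):
    """Fig. 2 rule: a model comes from the detailed library only when it
    is inside the conical viewing region AND within the preset distance
    threshold of the eye position; the far part of the cone and the
    non-viewing region both use the simplified library."""
    if in_cone(coord) and math.dist(eye, coord) <= distance_threshold:
        return "detailed"
    return "simplified"
```

With a 50 m threshold scaled by the scene's ratio, a model 10 units ahead would be loaded detailed and one 100 units ahead simplified.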
It should be noted that the fidelity of models in the second sub-region of the conical viewing region can be higher than that of models in the non-viewing region. Specifically, besides the detailed and simplified model libraries, a transition model library can also be set up; among the three libraries, model fidelity is ordered detailed > transition > simplified.
In a concrete implementation, the models in the first sub-region of the conical viewing region can be found in the detailed model library, the models in the second sub-region in the transition model library, and the models in the non-viewing region in the simplified model library. In the displayed model scene the fidelity of the three regions thus decreases in turn, which not only improves loading speed but also better matches the user's viewing experience, giving a better user experience.
The above divides the conical viewing region finely; of course, in practical applications the non-viewing region can also be differentiated. In particular, although a person's viewing angle is less than 180 degrees, the region extending on both sides from the viewing angle out to 180 degrees can still be viewed blurrily.
Therefore, as shown in Fig. 3, in the above embodiment the specific implementation of step S103's search for second-type three-dimensional models located in the non-viewing region can comprise steps S301 to S304.
Step S301: with the sight direction as center line, divide the preset angle into two equal subangles, and with the eye position as apex and the complement of the subangle as central angle, determine two conical viewing buffer regions.
With the sight direction as center line, the preset viewing angle is divided into two equal subangles; for example, a preset angle of 120 degrees is divided by the sight direction into two subangles of 60 degrees. With the complement of the subangle as central angle, a conical viewing buffer region is determined on each side of the conical viewing region. For example, if the subangle is 60 degrees, the angular size of each viewing buffer region is 30 degrees.
Step S302: determine the region outside the conical viewing region and the two conical viewing buffer regions as the rear non-viewing region.
The whole canvas space can thus be further divided into three parts: the conical viewing region, the viewing buffer regions and the rear non-viewing region. The viewing buffer regions and the rear non-viewing region together correspond to the non-viewing region in the flow shown in Fig. 1.
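The three-part division just described reduces to comparing how far a model's bearing deviates from the sight direction: within half the preset angle it is in the cone, within a further complement-of-the-subangle span it is in a viewing buffer region, and beyond that it lies in the rear non-viewing region. A sketch with assumed names; the off-axis angle is taken as already computed and wrapped into [-180, 180]:

```python
def classify_region(off_axis_deg, preset_angle_deg):
    """Classify a bearing offset from the sight direction into the three
    regions of Fig. 3."""
    off_axis = abs(off_axis_deg)
    half = preset_angle_deg / 2        # subangle, e.g. 60 for a 120-degree preset angle
    buffer_limit = half + (90 - half)  # subangle plus its complement = 90 degrees off axis
    if off_axis <= half:
        return "viewing cone"
    if off_axis <= buffer_limit:
        return "viewing buffer"
    return "rear non-viewing"
```

For the 120-degree example, offsets up to 60 degrees land in the cone, 60 to 90 degrees in a buffer region, and everything behind that in the rear non-viewing region.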
Step S303: in the preset detailed model library, find the three-dimensional models located in the two viewing buffer regions.
Step S304: in the preset simplified model library, find the three-dimensional models located in the rear non-viewing region, the fidelity of three-dimensional models in the preset detailed model library being higher than that of three-dimensional models in the preset simplified model library.
In this implementation the non-viewing region is divided into two parts: the viewing buffer regions, whose angular size is the complement of the preset angle's subangle, and the rear non-viewing region behind the person model. The fidelity of three-dimensional models in the viewing buffer regions is higher than that of three-dimensional models in the rear non-viewing region, which not only improves model loading efficiency but also better matches the user's viewing experience.
It should be noted that the detailed model library in this implementation can also be the transition model library in the Fig. 2 illustration; in that case the fidelity of models in the conical viewing region, the viewing buffer regions and the rear non-viewing region decreases in turn.
Of course, the detailed model library in this implementation can also be the detailed model library of Fig. 2 above, in which case the models in both the conical viewing region and the viewing buffer regions are detailed three-dimensional models. When the person model turns toward a buffer region, no model needs to be reloaded, which improves display efficiency. However, this approach makes higher demands on the device's image-processing performance and is therefore suited to devices with relatively high image-processing capability.
Moreover, beyond the above region division, some lower-fidelity models can also be loaded according to the relations between models, to further improve the display efficiency of the model scene.
In particular, small objects can be placed inside large objects that have containing space, so the three-dimensional model of a large object can have the three-dimensional models of some small objects loaded inside it. If the person model is not viewing the large-object model at close range, the small-object models contained in it can be lower-fidelity models.
Therefore, as shown in Fig. 4, in the above embodiment the specific implementation of step S103's search for first-type three-dimensional models located in the conical viewing region can comprise steps S401 to S403.
Step S401: among the multiple preset three-dimensional models, find the three-dimensional models located in the conical viewing region.
Step S402: judge whether the found three-dimensional models include a space-body three-dimensional model carrying a preset space flag; if so, perform step S403.
Step S403: when the distance between the space-body three-dimensional model and the eye position is greater than a preset close-distance threshold, determine the interior three-dimensional models located inside the space-body model, the fidelity of the space-body three-dimensional model being higher than that of the interior three-dimensional models.
Specifically, before implementation, a corresponding flag (a space flag) is set for the three-dimensional models of large objects; for convenience of description, a large-object model is called a space-body three-dimensional model.
First, all three-dimensional models located in the conical viewing region are found. If the found models include a space-body three-dimensional model, the distance between the person model and that space-body model is further determined and compared with the preset close-distance threshold. If the former is greater than the latter, the person model is viewing the space-body model from a relatively large distance, so the small-object models contained in the space-body model (also called interior three-dimensional models) need to be set to lower-fidelity models.
Of course, if the distance between the person model and the space-body three-dimensional model is less than or equal to the preset close-distance threshold, the interior three-dimensional models can be loaded as the highest-fidelity models, i.e. the models in the detailed model library.
As can be seen, in this implementation the fidelity of the models inside the space contained by a space-body model is lower, so scene loading efficiency can be further improved.
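The close-distance decision of steps S402 to S403 can be sketched as follows; `close_threshold` and the returned labels are illustrative stand-ins for the preset close-distance threshold and the model libraries, not names from the patent:

```python
import math

def interior_fidelity(eye, space_body_coord, close_threshold):
    """Fig. 4 rule for a flagged space-body model: when it is farther
    from the eye position than the preset close-distance threshold, its
    interior models are loaded at low fidelity; once the person model is
    close enough, they come from the detailed library instead."""
    far = math.dist(eye, space_body_coord) > close_threshold
    return "simplified" if far else "detailed"
```

The check runs only for models carrying the space flag, so ordinary models are unaffected by this refinement.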
In practical applications, besides loading the models contained inside a space as lower-fidelity models, occluded models can be handled similarly to further improve scene loading efficiency.
Specifically, as shown in Fig. 5, in the above embodiment the specific implementation of step S103's search for first-type three-dimensional models located in the conical viewing region can comprise steps S501 to S503.
Step S501: among the multiple preset three-dimensional models, find the three-dimensional models located in the conical viewing region.
Step S502: judge whether the found three-dimensional models include an occluder three-dimensional model carrying a preset occlusion flag; if so, perform step S503.
Step S503: find the occluded three-dimensional models located behind the occluder three-dimensional model, the fidelity of the occluder three-dimensional model being higher than that of the occluded three-dimensional models.
Specifically, before implementation, a corresponding flag (an occlusion flag) is set for the three-dimensional models that can occlude other models; for convenience of description, a model that occludes others is called an occluder three-dimensional model, and a model that is occluded is called an occluded three-dimensional model.
First, all three-dimensional models located in the conical viewing region are found. If the found models include an occluder three-dimensional model, then, viewed along the sight direction, the occluded models located behind the occluder model are determined as models whose fidelity is lower than that of the occluder model.
It should be noted that the occluder three-dimensional model can be found in the detailed model library, and the occluded three-dimensional models in the simplified or transition model library.
In this implementation, the three-dimensional models of objects occluded from the line of sight are set to lower-fidelity models; the amount of model data loaded is smaller, so the loading speed of the scene models can be improved.
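The occlusion handling of Fig. 5 can be sketched as a pass over the models met along the sight direction: once a model carrying the occlusion flag has been seen, every model behind it is demoted to a lower-fidelity version. This is a hedged illustration that assumes the front-to-back ordering is already known; a real implementation would determine "behind" geometrically:

```python
def resolve_occlusion(models_front_to_back):
    """Given (name, has_occlusion_flag) pairs ordered front to back
    along the sight direction, keep full fidelity until the first
    flagged occluder, then demote everything behind it."""
    result = {}
    occluded = False
    for name, has_occlusion_flag in models_front_to_back:
        result[name] = "simplified" if occluded else "detailed"
        if has_occlusion_flag:
            occluded = True  # all later (farther) models are behind an occluder
    return result
```

The occluder itself stays detailed, matching the rule that its fidelity is higher than that of the models it hides.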
In addition, the present application is supplemented by the following points.
First, in the prior art, when a user views a scene model, the client fetches three-dimensional models from the server side in real time according to the user's movement operations and displays them in the canvas space. In the present application, however, the three-dimensional models may be downloaded from the server side in advance; when the user wants to view the scene model, the models are loaded from the local copy, which further improves the scene loading speed.
Second, each three-dimensional model has its own coordinate value, which represents the position where the model is placed in the canvas space. One implementation of searching for the three-dimensional models located in a given region is to determine the coordinate range of that region and compare each model's coordinate value against it. In this implementation, however, the program must contain complex computational logic code, the amount of computation is large, and the processing speed is slow. Therefore, another implementation of searching for three-dimensional models is to call a collision function interface.
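The brute-force coordinate-range comparison described above can be sketched as follows (a hypothetical illustration assuming axis-aligned regions and a simple dict per model; the text notes this approach is computationally heavy, which the per-model loop makes visible):

```python
def models_in_region(models, region_min, region_max):
    """Brute-force region lookup: compare each model's coordinate value
    against the axis-aligned coordinate range of the region.
    Correct, but one comparison pass over every model per query."""
    def inside(pos):
        return all(lo <= c <= hi
                   for c, lo, hi in zip(pos, region_min, region_max))
    return [m for m in models if inside(m["pos"])]
```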
Specifically, as explained above with reference to Figure 2 on dividing the conical viewing region, the preset distance threshold is needed when searching for three-dimensional models. Therefore, a sphere model whose radius equals the preset distance threshold can be bound to the character model (the sphere is set to transparent and is not displayed in the canvas space), so that moving the character model also moves the sphere. When the sphere model collides with another three-dimensional model, the collision function interface generates a collision event. Thus, when the character model moves to a position, the collision function interface is called at that position, the models that have generated collision events are looked up directly in the interface, and those models are determined to be the three-dimensional models within the preset distance threshold.
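A minimal sketch of the bound-sphere idea follows. It is not the collision interface of any particular engine; the class name, the dict layout, and the explicit distance test (standing in for the engine's collision events) are assumptions for illustration.

```python
import math

class ThresholdSphere:
    """Transparent sphere bound to the character model, with radius
    equal to the preset distance threshold; the models inside the
    radius are the ones that would raise collision events on a real
    collision function interface."""

    def __init__(self, radius):
        self.radius = radius
        self.center = (0.0, 0.0, 0.0)

    def move_to(self, position):
        # moving the character model moves the bound sphere with it
        self.center = position

    def colliding_models(self, models):
        return [m for m in models
                if math.dist(self.center, m["pos"]) <= self.radius]
```

In an engine with a physics API, `colliding_models` would be replaced by the engine's own collision callbacks, which is exactly what saves the hand-written computational logic the text mentions.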
Likewise, the preset close-range threshold in Figure 4 can be represented by another sphere model, which is also bound to the character model. If this sphere does not collide with a space-body three-dimensional model, the distance between the space-body model and the sight position is greater than the preset close-range threshold — in other words, the character model is viewing the space-body model from afar — so the three-dimensional models inside the space body are set to lower-fidelity models. Once this sphere does collide with the space-body three-dimensional model, the character model is close to it, and the three-dimensional models inside the space are set to higher-fidelity models.
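The close-range switch can be condensed to a single decision, sketched below under the same assumptions as above (an explicit distance check stands in for the second sphere's collision event; the fidelity labels are illustrative):

```python
import math

def interior_fidelity(space_body_pos, sight_pos, close_threshold):
    """Fidelity for the models inside a space body: while the
    close-range sphere has not collided with the space body (distance
    greater than the preset close-range threshold), the interior
    models stay low-fidelity; once it collides, they switch high."""
    if math.dist(space_body_pos, sight_pos) > close_threshold:
        return "simple"     # watched from afar
    return "refined"        # character is close to the space body
```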
With the collision function interface, no computational logic code needs to be written in the program; the program logic is cleaner, the amount of computation is smaller, and the processing speed is faster.
The display apparatus embodiment for a three-dimensional model scene provided by the present application is introduced below. It should be noted that, for explanations concerning the display apparatus embodiment, reference may be made to the display method embodiment provided above; they are not repeated below.
Referring to Figure 6, which illustrates the structure of the display apparatus embodiment for a three-dimensional model scene, the apparatus may specifically comprise: a sight position and direction determination module 601, a canvas-space viewing region division module 602, a first-type three-dimensional model lookup module 603, a second-type three-dimensional model lookup module 604, and a three-dimensional model display module 605, wherein:
the sight position and direction determination module 601 is configured to, in a preset canvas space, determine the sight position and sight direction of a preset character model according to the user's movement operations on the character model, wherein the preset character model is displayed in the preset canvas space;
the canvas-space viewing region division module 602 is configured to determine a conical viewing region with the sight position as vertex, the preset angle as central angle, and the sight direction as the direction of the central angle, and to determine the region outside the conical viewing region as the non-viewing region;
the first-type three-dimensional model lookup module 603 is configured to search, among multiple preset three-dimensional models, for the first-type three-dimensional models located in the conical viewing region;
the second-type three-dimensional model lookup module 604 is configured to search, among the multiple preset three-dimensional models, for the second-type three-dimensional models located in the non-viewing region, wherein the fidelity of the first-type three-dimensional models is higher than that of the second-type three-dimensional models;
the three-dimensional model display module 605 is configured to display the first-type and second-type three-dimensional models in the preset canvas space, obtaining the target three-dimensional model scene.
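The core geometric test behind the region division can be sketched as below. This is a hypothetical illustration (function name and tuple layout are assumptions): a point is in the conical viewing region when the angle between its offset from the sight position and the sight direction is at most half the preset angle.

```python
import math

def in_viewing_cone(sight_pos, sight_dir, preset_angle_deg, point):
    """Test whether a position lies in the conical viewing region:
    vertex at the sight position, axis along the sight direction,
    and the preset angle as the cone's full aperture."""
    v = tuple(p - s for p, s in zip(point, sight_pos))
    nv = math.sqrt(sum(c * c for c in v))
    nd = math.sqrt(sum(c * c for c in sight_dir))
    if nv == 0.0:
        return True                      # the vertex itself counts as inside
    cos_to_axis = sum(a * b for a, b in zip(v, sight_dir)) / (nv * nd)
    return cos_to_axis >= math.cos(math.radians(preset_angle_deg / 2.0))
```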
As can be seen from the above technical solution, the present application provides a display apparatus embodiment for a three-dimensional model scene. A canvas space is preset, and a character model is displayed in it. After the character model moves to a position, the sight position and direction determination module 601 determines its sight position and sight direction; the canvas-space viewing region division module 602 builds a conical viewing region in the canvas space with the sight position as vertex, the preset angle as central angle, and the sight direction as axis, and determines the region outside the cone as the non-viewing region; the first-type three-dimensional model lookup module 603 searches the multiple preset three-dimensional models for the first-type models located in the conical viewing region, and the second-type three-dimensional model lookup module 604 searches for the second-type models located in the non-viewing region; both types are displayed by the three-dimensional model display module 605, the fidelity of the first-type models being higher than that of the second-type models. Thus, the three-dimensional model scene generated by this embodiment does not consist entirely of high-fidelity models; instead, by simulating the user's viewing experience, the regions the sight line cannot reach are loaded as lower-fidelity models. This not only improves the loading efficiency of the scene model but also makes the displayed scene better match the user's viewing experience.
In practical applications, the conical viewing region has a preset distance threshold; correspondingly, the first-type three-dimensional model lookup module 603 may specifically comprise a within-sight-distance model lookup submodule and a beyond-sight-distance model lookup submodule, wherein:
the within-sight-distance model lookup submodule is configured to search the preset refined model library for the three-dimensional models that are located in the conical viewing region and whose distance from the sight position does not exceed the preset distance threshold;
the beyond-sight-distance model lookup submodule is configured to search the preset simple model library for the three-dimensional models that are located in the conical viewing region and whose distance from the sight position exceeds the preset distance threshold, and for the three-dimensional models located in the non-viewing region;
wherein the fidelity of the three-dimensional models in the preset refined model library is higher than that of the three-dimensional models in the preset simple model library.
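The two submodules above reduce to one selection rule, sketched here as a hypothetical helper (names and labels are assumptions, not the patent's API):

```python
def pick_model_library(in_cone, distance, distance_threshold):
    """Which library a model is loaded from: the refined library for
    in-cone models within the preset distance threshold, the simple
    library for far in-cone models and the whole non-viewing region."""
    if in_cone and distance <= distance_threshold:
        return "refined"
    return "simple"
```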
Further, in the above display apparatus embodiment, the preset angle used by the canvas-space viewing region division module 602 is less than 180 degrees; correspondingly, the second-type three-dimensional model lookup module 604 may specifically comprise a viewing buffer region determination submodule, a non-viewing zone determination submodule, a buffer model determination submodule, and a non-viewing zone model determination submodule, wherein:
the viewing buffer region determination submodule is configured to divide the preset angle into two equal subangles with the sight direction as the center line, and to determine two conical viewing buffer regions, each with the sight position as vertex and the complement of the subangle as central angle;
the non-viewing zone determination submodule is configured to determine the region outside the conical viewing region and the two conical viewing buffer regions as the non-viewing zone;
the buffer model determination submodule is configured to search the preset refined model library for the three-dimensional models located in the two viewing buffer regions;
the non-viewing zone model determination submodule is configured to search the preset simple model library for the second-subtype three-dimensional models located in the non-viewing zone, wherein the fidelity of the three-dimensional models in the preset refined model library is higher than that of the three-dimensional models in the preset simple model library.
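The angular bookkeeping of the buffer regions can be sketched as a classification by the angle a direction makes with the sight axis. This is an interpretive illustration (the function and labels are assumptions): half the preset angle bounds the viewing cone, and a buffer cone whose central angle is the complement of that subangle extends the coverage to 90 degrees from the axis on either side.

```python
def classify_by_axis_angle(angle_from_axis_deg, preset_angle_deg):
    """Classify a direction by its angle to the sight axis: inside the
    viewing cone, inside a buffer cone, or in the non-viewing zone."""
    half = preset_angle_deg / 2.0        # the equal subangle
    if angle_from_axis_deg <= half:
        return "viewing"
    # buffer central angle = complement of the subangle (90 - half),
    # so the buffer reaches half + (90 - half) = 90 degrees from axis
    if angle_from_axis_deg <= half + (90.0 - half):
        return "buffer"
    return "non-viewing"
```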
Moreover, the first-type three-dimensional model lookup module 603 may specifically comprise a viewing region model lookup submodule, a space-body model judgment submodule, and an interior model lookup submodule, wherein:
the viewing region model lookup submodule is configured to search, among the multiple preset three-dimensional models, for the three-dimensional models located in the conical viewing region;
the space-body model judgment submodule is configured to judge whether the found three-dimensional models include a space-body three-dimensional model carrying a preset space mark, and if so, to trigger the interior model lookup submodule;
the interior model lookup submodule is configured to, when the distance between the space-body three-dimensional model and the sight position is greater than the preset close-range threshold, determine the interior three-dimensional models located inside the space-body three-dimensional model, wherein the fidelity of the space-body three-dimensional model is higher than that of its interior three-dimensional models.
Alternatively, the first-type three-dimensional model lookup module 603 may specifically comprise a viewing region model determination submodule, an occluder model judgment submodule, and an occluded model lookup submodule, wherein:
the viewing region model determination submodule is configured to search, among the multiple preset three-dimensional models, for the three-dimensional models located in the conical viewing region;
the occluder model judgment submodule is configured to judge whether the found three-dimensional models include an occluder three-dimensional model carrying a preset occlusion mark, and if so, to trigger the occluded model lookup submodule;
the occluded model lookup submodule is configured to search for the occluded three-dimensional models located behind the occluder three-dimensional model, wherein the fidelity of the occluder three-dimensional model is higher than that of the occluded three-dimensional models.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts among the embodiments, reference may be made to one another.
It should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises it.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Therefore, the application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A display method for a three-dimensional model scene, characterized by comprising:
determining, in a preset canvas space, the sight position and sight direction of a preset character model according to a user's movement operations on the character model, wherein the preset character model is displayed in the preset canvas space;
determining a conical viewing region with the sight position as vertex, a preset angle as central angle, and the sight direction as the direction of the central angle, and determining the region outside the conical viewing region as a non-viewing region;
searching, among multiple preset three-dimensional models, for first-type three-dimensional models located in the conical viewing region, and for second-type three-dimensional models located in the non-viewing region, wherein the fidelity of the first-type three-dimensional models is higher than the fidelity of the second-type three-dimensional models;
displaying the first-type and second-type three-dimensional models in the preset canvas space to obtain a target three-dimensional model scene.
2. The display method for a three-dimensional model scene according to claim 1, characterized in that the conical viewing region has a preset distance threshold;
correspondingly, the searching, among multiple preset three-dimensional models, for first-type three-dimensional models located in the conical viewing region comprises:
searching, in a preset refined model library, for the three-dimensional models that are located in the conical viewing region and whose distance from the sight position does not exceed the preset distance threshold;
searching, in a preset simple model library, for the three-dimensional models that are located in the conical viewing region and whose distance from the sight position exceeds the preset distance threshold, and for the three-dimensional models located in the non-viewing region;
wherein the fidelity of the three-dimensional models in the preset refined model library is higher than the fidelity of the three-dimensional models in the preset simple model library.
3. The display method for a three-dimensional model scene according to claim 1, characterized in that the preset angle is less than 180 degrees;
correspondingly, the searching, among multiple preset three-dimensional models, for second-type three-dimensional models located in the non-viewing region comprises:
dividing the preset angle into two equal subangles with the sight direction as the center line, and determining two conical viewing buffer regions, each with the sight position as vertex and the complement of the subangle as central angle;
determining the region outside the conical viewing region and the two conical viewing buffer regions as a non-viewing zone;
searching, in a preset refined model library, for the three-dimensional models located in the two viewing buffer regions;
searching, in a preset simple model library, for the second-subtype three-dimensional models located in the non-viewing zone;
wherein the fidelity of the three-dimensional models in the preset refined model library is higher than the fidelity of the three-dimensional models in the preset simple model library.
4. The display method for a three-dimensional model scene according to claim 1, characterized in that the searching, among multiple preset three-dimensional models, for first-type three-dimensional models located in the conical viewing region comprises:
searching, among the multiple preset three-dimensional models, for the three-dimensional models located in the conical viewing region;
judging whether the found three-dimensional models include a space-body three-dimensional model carrying a preset space mark;
if so, and the distance between the space-body three-dimensional model and the sight position is greater than a preset close-range threshold, determining the interior three-dimensional models located inside the space-body three-dimensional model;
wherein the fidelity of the space-body three-dimensional model is higher than the fidelity of its interior three-dimensional models.
5. The display method for a three-dimensional model scene according to claim 1, characterized in that the searching, among multiple preset three-dimensional models, for first-type three-dimensional models located in the conical viewing region comprises:
searching, among the multiple preset three-dimensional models, for the three-dimensional models located in the conical viewing region;
judging whether the found three-dimensional models include an occluder three-dimensional model carrying a preset occlusion mark;
if so, searching for the occluded three-dimensional models located behind the occluder three-dimensional model;
wherein the fidelity of the occluder three-dimensional model is higher than the fidelity of the occluded three-dimensional models.
6. A display apparatus for a three-dimensional model scene, characterized by comprising:
a sight position and direction determination module, configured to determine, in a preset canvas space, the sight position and sight direction of a preset character model according to a user's movement operations on the character model, wherein the preset character model is displayed in the preset canvas space;
a canvas-space viewing region division module, configured to determine a conical viewing region with the sight position as vertex, a preset angle as central angle, and the sight direction as the direction of the central angle, and to determine the region outside the conical viewing region as a non-viewing region;
a first-type three-dimensional model lookup module, configured to search, among multiple preset three-dimensional models, for first-type three-dimensional models located in the conical viewing region;
a second-type three-dimensional model lookup module, configured to search, among the multiple preset three-dimensional models, for second-type three-dimensional models located in the non-viewing region, wherein the fidelity of the first-type three-dimensional models is higher than the fidelity of the second-type three-dimensional models;
a three-dimensional model display module, configured to display the first-type and second-type three-dimensional models in the preset canvas space to obtain a target three-dimensional model scene.
7. The display apparatus for a three-dimensional model scene according to claim 6, characterized in that the conical viewing region has a preset distance threshold;
correspondingly, the first-type three-dimensional model lookup module comprises:
a within-sight-distance model lookup submodule, configured to search, in a preset refined model library, for the three-dimensional models that are located in the conical viewing region and whose distance from the sight position does not exceed the preset distance threshold;
a beyond-sight-distance model lookup submodule, configured to search, in a preset simple model library, for the three-dimensional models that are located in the conical viewing region and whose distance from the sight position exceeds the preset distance threshold, and for the three-dimensional models located in the non-viewing region;
wherein the fidelity of the three-dimensional models in the preset refined model library is higher than the fidelity of the three-dimensional models in the preset simple model library.
8. The display apparatus for a three-dimensional model scene according to claim 6, characterized in that the preset angle is less than 180 degrees;
correspondingly, the second-type three-dimensional model lookup module comprises:
a viewing buffer region determination submodule, configured to divide the preset angle into two equal subangles with the sight direction as the center line, and to determine two conical viewing buffer regions, each with the sight position as vertex and the complement of the subangle as central angle;
a non-viewing zone determination submodule, configured to determine the region outside the conical viewing region and the two conical viewing buffer regions as a non-viewing zone;
a buffer model determination submodule, configured to search, in a preset refined model library, for the three-dimensional models located in the two viewing buffer regions;
a non-viewing zone model determination submodule, configured to search, in a preset simple model library, for the second-subtype three-dimensional models located in the non-viewing zone;
wherein the fidelity of the three-dimensional models in the preset refined model library is higher than the fidelity of the three-dimensional models in the preset simple model library.
9. The display apparatus for a three-dimensional model scene according to claim 6, characterized in that the first-type three-dimensional model lookup module comprises:
a viewing region model lookup submodule, configured to search, among the multiple preset three-dimensional models, for the three-dimensional models located in the conical viewing region;
a space-body model judgment submodule, configured to judge whether the found three-dimensional models include a space-body three-dimensional model carrying a preset space mark, and if so, to trigger the interior model lookup submodule;
an interior model lookup submodule, configured to, when the distance between the space-body three-dimensional model and the sight position is greater than a preset close-range threshold, determine the interior three-dimensional models located inside the space-body three-dimensional model, wherein the fidelity of the space-body three-dimensional model is higher than the fidelity of its interior three-dimensional models.
10. The display apparatus for a three-dimensional model scene according to claim 6, characterized in that the first-type three-dimensional model lookup module comprises:
a viewing region model determination submodule, configured to search, among the multiple preset three-dimensional models, for the three-dimensional models located in the conical viewing region;
an occluder model judgment submodule, configured to judge whether the found three-dimensional models include an occluder three-dimensional model carrying a preset occlusion mark, and if so, to trigger the occluded model lookup submodule;
an occluded model lookup submodule, configured to search for the occluded three-dimensional models located behind the occluder three-dimensional model, wherein the fidelity of the occluder three-dimensional model is higher than the fidelity of the occluded three-dimensional models.
CN201510642707.7A 2015-09-30 2015-09-30 The methods of exhibiting and device of threedimensional model scene Active CN105205860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510642707.7A CN105205860B (en) 2015-09-30 2015-09-30 The methods of exhibiting and device of threedimensional model scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510642707.7A CN105205860B (en) 2015-09-30 2015-09-30 The methods of exhibiting and device of threedimensional model scene

Publications (2)

Publication Number Publication Date
CN105205860A true CN105205860A (en) 2015-12-30
CN105205860B CN105205860B (en) 2018-03-13

Family

ID=54953518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510642707.7A Active CN105205860B (en) 2015-09-30 2015-09-30 The methods of exhibiting and device of threedimensional model scene

Country Status (1)

Country Link
CN (1) CN105205860B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074050A (en) * 2011-03-01 2011-05-25 哈尔滨工程大学 Fractal multi-resolution simplified method used for large-scale terrain rendering
CN104867174A (en) * 2015-05-08 2015-08-26 腾讯科技(深圳)有限公司 Three-dimensional map rendering and display method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIE FENG et al.: "Efficient View-Dependent LOD Control for Large 3D Unclosed Mesh Models of Environments", IEEE International Conference on Robotics and Automation *
YU QING: "Design of a Multi-Layer Scene Engine for Virtual Forests", China Master's Theses Full-Text Database, Information Science and Technology *
FENG JIE et al.: "Simplification of Large 3D Mesh Models and Viewpoint-Based LOD Control", Journal of Computer-Aided Design & Computer Graphics *
LIU YANPING et al.: "Rendering of Multi-Resolution Models Based on Model and Viewpoint Correlation", Microelectronics & Computer *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570923A (en) * 2016-09-27 2017-04-19 乐视控股(北京)有限公司 Frame rendering method and device
CN106780421A (en) * 2016-12-15 2017-05-31 苏州酷外文化传媒有限公司 Finishing effect methods of exhibiting based on panoramic platform
CN107577345A (en) * 2017-09-04 2018-01-12 苏州英诺迈医学创新服务有限公司 A kind of method and device for controlling virtual portrait roaming
CN107577345B (en) * 2017-09-04 2020-12-25 苏州英诺迈医学创新服务有限公司 Method and device for controlling virtual character roaming
CN107833274A (en) * 2017-11-21 2018-03-23 北京恒华伟业科技股份有限公司 A kind of creation method and system of three-dimensional cable model
CN109814703A (en) * 2017-11-21 2019-05-28 百度在线网络技术(北京)有限公司 A kind of display methods, device, equipment and medium
CN109814703B (en) * 2017-11-21 2022-05-17 百度在线网络技术(北京)有限公司 Display method, device, equipment and medium
CN110096143A (en) * 2019-04-04 2019-08-06 贝壳技术有限公司 A kind of concern area of threedimensional model determines method and device
CN110096143B (en) * 2019-04-04 2022-04-29 贝壳技术有限公司 Method and device for determining attention area of three-dimensional model
WO2021098582A1 (en) * 2019-11-20 2021-05-27 Ke.Com (Beijing) Technology Co., Ltd. System and method for displaying virtual reality model
CN110992485A (en) * 2019-12-04 2020-04-10 北京恒华伟业科技股份有限公司 GIS map three-dimensional model azimuth display method and device and GIS map

Also Published As

Publication number Publication date
CN105205860B (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN105205860B (en) The methods of exhibiting and device of threedimensional model scene
CN107469354B (en) Visible sensation method and device, storage medium, the electronic equipment of compensating sound information
CN111192354A (en) Three-dimensional simulation method and system based on virtual reality
KR20170092632A (en) Mixed-reality visualization and method
CN112915542B (en) Collision data processing method and device, computer equipment and storage medium
EP3832605B1 (en) Method and device for determining potentially visible set, apparatus, and storage medium
CN106204713B (en) Static merging processing method and device
WO2016081255A1 (en) Using depth information for drawing in augmented reality scenes
CN105657406A (en) Three-dimensional observation perspective selecting method and apparatus
CN102760308B (en) Method and device for node selection of object in three-dimensional virtual reality scene
CN105103112A (en) Apparatus and method for manipulating the orientation of object on display device
TW202121155A (en) Interactive object driving method, apparatus, device, and computer readable storage meidum
CN106780707A (en) The method and apparatus of global illumination in simulated scenario
CN105224179A (en) Method for information display and device
CN1171853A (en) Method for controlling level of detail displayed in computer generated screen display of complex structure
CN105225272A (en) A kind of tri-dimensional entity modelling method based on the reconstruct of many outline lines triangulation network
CN104461690A (en) Power equipment operation simulation system
CN110174950B (en) Scene switching method based on transmission gate
CN106683152B (en) 3D visual effect analogy method and device
CN107704667A (en) Simulate crowd movement's emulation mode, the device and system of sociability
CN114470775A (en) Object processing method, device, equipment and storage medium in virtual scene
CN101529474B (en) Image processing device, control method for image processing device
CN106485789A (en) A kind of 3D model loading method and its device
CN108446023B (en) Virtual reality feedback device and positioning method, feedback method and positioning system thereof
US10297090B2 (en) Information processing system, apparatus and method for generating an object in a virtual space

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant