CN106228509A - Performance display method and device - Google Patents
Performance display method and device
- Publication number: CN106228509A
- Application number: CN201610591312.3A
- Authority: CN (China)
- Prior art keywords: coordinate system, collection point, spherical coordinate, texture map, centered
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T3/067
Abstract
The invention discloses a performance display method and device. The method includes: obtaining a collection point of a scene; taking the collection point as the center, acquiring data according to the viewing angles of the collection point; converting the acquired data into a texture map, and pasting the texture map onto a three-dimensional model centered on the collection point, where the texture map is used to display the performance information of the collection point. The invention solves the technical problem that the prior art cannot intuitively display the performance of each viewing angle in a VR scene.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a performance display method and device.
Background technology
VR (Virtual Reality, abbreviated as VR) was proposed in the early 1980s by Jaron Lanier, the founder of the American company VPL. Its specific meaning is a technology that comprehensively uses computer graphics systems together with various display and control interface devices to provide an immersive sensation in an interactive three-dimensional environment generated on a computer. The interactive three-dimensional environment generated by the computer may be referred to as a virtual environment (Virtual Environment, VE for short). Virtual reality technology is a computer simulation technology that can create virtual worlds and let users experience them: it uses a computer to generate a simulated environment and immerses the user in that environment through the system simulation of multi-source information fusion, interactive three-dimensional dynamic views and entity behavior.
For VR application scenarios with a high-degree-of-freedom viewing angle (360-degree free roaming), traditional performance analysis tools are therefore ineffective, because a traditional analysis tool is typically directed at one specific viewing angle and does not consider the other viewing angles at that position. In a VR application, however, the user can change the viewing angle arbitrarily at any time, so the information of every viewing angle becomes particularly important.
Traditional performance analysis mainly records the performance analysis data of the currently rendered picture (a fixed viewing angle), or records the performance analysis data of the whole scene, and presents the data in the form of line charts or plane graphs.
The prior art mainly has the following obvious shortcomings, which make it inapplicable to VR scenes:
1. Because a VR scene is an application scenario with a high-degree-of-freedom viewing angle, the user's current viewing angle is unknowable in advance. Simply recording the performance of one fixed viewing angle is therefore meaningless; the performance information of all viewing angles must be recorded.
2. The presentation form of line charts or plane graphs is not suitable for displaying VR scene information, because a VR scene has the additional dimension of multi-angle information.
3. Recording the performance information of the whole scene would produce a large amount of invalid data, which not only occupies a large amount of storage space but also makes data analysis very difficult. Moreover, for key positions, the user cannot freely adjust the density at which data is acquired.
These shortcomings of the prior art determine that the performance of each viewing angle in a VR scene cannot be displayed intuitively.
For this technical problem in the related art, no effective solution has yet been proposed.
Summary of the invention
The present invention mainly aims to provide a performance display method and device, so as to solve the technical problem that the related art cannot intuitively display the performance of each viewing angle in a VR scene.
To achieve the above object, according to one aspect of the invention, a performance display method is provided, including: obtaining a collection point of a scene; taking the collection point as the center, acquiring data according to the viewing angles of the collection point; converting the acquired data into a texture map, and pasting the texture map onto a three-dimensional model centered on the collection point, where the texture map is used to display the performance information of the collection point.
Further, taking the collection point as the center, acquiring data according to the viewing angles of the collection point includes: taking the collection point as the center, acquiring data under a spherical coordinate system according to a preset acquisition granularity. Converting the acquired data into a texture map includes: transforming the data acquired under the spherical coordinate system into a plane coordinate system, and using preset parameters to characterize the acquired data so as to form the texture map, where the parameters include at least one of: figure, number of characters, color. Pasting the texture map onto the three-dimensional model centered on the collection point includes: pasting the texture map onto a sphere centered on the collection point.
Further, taking the collection point as the center, acquiring data under the spherical coordinate system according to the preset acquisition granularity includes: converting the Cartesian coordinate system of the scene into a spherical coordinate system, and traversing the scene under the spherical coordinate system according to the acquisition granularity to acquire data.
Further, traversing the scene under the spherical coordinate system according to the acquisition granularity to acquire data includes: using the angle information in the spherical coordinate system to traverse the scene under the spherical coordinate system at predetermined angles to acquire data.
Further, transforming the data acquired under the spherical coordinate system into the plane coordinate system includes: converting the spherical coordinate system into a Cartesian coordinate system; and converting the Cartesian coordinate system into a plane coordinate system.
Further, the collection point is pre-configured, and there are one or more collection points.
To achieve the above object, according to another aspect of the invention, a performance display device is also provided. The device includes: an acquiring unit, configured to obtain a collection point of a scene; a collecting unit, configured to take the collection point as the center and acquire data according to the viewing angles of the collection point; and a converting unit, configured to convert the acquired data into a texture map and paste the texture map onto a three-dimensional model centered on the collection point, where the texture map is used to display the performance information of the collection point.
Further, the collecting unit is configured to take the collection point as the center and acquire data under a spherical coordinate system according to a preset acquisition granularity. The converting unit is configured to transform the data acquired under the spherical coordinate system into a plane coordinate system and to use preset parameters to characterize the acquired data so as to form the texture map, where the parameters include at least one of: figure, number of characters, color. The converting unit is further configured to paste the texture map onto a sphere centered on the collection point.
Further, the collecting unit is configured to convert the Cartesian coordinate system of the scene into a spherical coordinate system and to traverse the scene under the spherical coordinate system according to the acquisition granularity to acquire data.
Further, the collecting unit is configured to use the angle information in the spherical coordinate system to traverse the scene under the spherical coordinate system at predetermined angles to acquire data.
Further, the converting unit is configured to convert the spherical coordinate system into a Cartesian coordinate system, and is further configured to convert the Cartesian coordinate system into a plane coordinate system.
Further, the collection point is pre-configured, and there are one or more collection points.
In the embodiments of the present invention, data is acquired according to the viewing angles of a collection point with the collection point as the center. The acquired data can reflect the performance information of the viewing angles of the collection point, so that the performance of each viewing angle of the collection point can be displayed intuitively. The acquired data is converted into a texture map, which can display the performance information of the collection point very intuitively, and the texture map is pasted onto a three-dimensional model centered on the collection point, so that the performance information of each viewing angle centered on the collection point can be displayed intuitively and stereoscopically. This achieves the technical effect of intuitively displaying the performance of each viewing angle in a VR scene and solves the technical problem that the related art cannot intuitively display the performance of each viewing angle in a VR scene.
Brief description of the drawings
The accompanying drawings, which constitute a part of this application, are used to provide a further understanding of the present invention. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of an optional performance display method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of setting collection points in a scene according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of displaying the performance information of the viewing angles at a collection point in a scene according to an embodiment of the present invention;
Fig. 4 is a flowchart of another optional performance display method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a performance display device according to an embodiment of the present invention.
Detailed description of the invention
It should be noted that, where no conflict arises, the embodiments in the application and the features in the embodiments may be combined with each other. The present invention will be described in detail below with reference to the drawings and in conjunction with the embodiments.
In order to enable those skilled in the art to better understand the solution of the application, the technical solutions in the embodiments of the application will be described clearly and completely below in conjunction with the drawings in those embodiments. Obviously, the described embodiments are only a part of the embodiments of the application, not all of them. Based on the embodiments in the application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the application.
It should be noted that the terms "first", "second", etc. in the description, claims and drawings of the application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so labeled may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
Explanation of terms:
VR: Virtual Reality, abbreviated as VR.
FPS: Frames Per Second, i.e. the number of frames rendered per second, abbreviated as frame rate or FPS.
DC: Draw Call; each process of preparing data and notifying the GPU to render is called one Draw Call.
Normalization: mapping data into the interval [0, 1].
An embodiment of the present invention provides a performance display method.
Fig. 1 is a flowchart of the performance display method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: obtain a collection point of a scene.
Step S104: taking the collection point as the center, acquire data according to the viewing angles of the collection point.
Step S106: convert the acquired data into a texture map, and paste the texture map onto a three-dimensional model centered on the collection point, where the texture map is used to display the performance information of the collection point.
A collection point can be specified in advance, and there may be one or more collection points. For example, there are 4 collection points in the scene shown in Fig. 2. Each collection point is processed in the same way, and the processing of each collection point is carried out independently. When recording data, the panoramic data at the position of the collection point is acquired, so that data in every direction can be recorded and the user's viewing angle does not need to be predicted.
In the embodiments of the present invention, data is acquired according to the viewing angles of a collection point with the collection point as the center. The acquired data can reflect the performance information of the viewing angles of the collection point, so that the performance of each viewing angle of the collection point can be displayed intuitively. The acquired data is converted into a texture map, which can display the performance information of the collection point very intuitively, and the texture map is pasted onto a three-dimensional model centered on the collection point, so that the performance information of each viewing angle centered on the collection point can be displayed intuitively and stereoscopically. This solves the technical problem that the related art cannot intuitively display the performance of each viewing angle in a VR scene and achieves the technical effect of intuitively displaying that performance.
In order to simulate viewing angles realistically, data can be acquired under a spherical coordinate system according to a preset acquisition granularity with the collection point as the center; the data acquired under the spherical coordinate system is transformed into a plane coordinate system, and preset parameters are used to characterize the acquired data so as to form a texture map, where the parameters include at least one of: figure, number of characters, color; the texture map is then pasted onto a sphere centered on the collection point.
Under a spherical coordinate system, the rotation of the viewing angle is very intuitive and easy to control, and when moving at a uniform speed the amount of calculation is minimal: the whole rotation process can be controlled by just two angular velocities.
Preset figures, characters or colors can be used to characterize the acquired data to form the texture map. For example, assuming that colors are used to characterize the acquired data, since the RGB value of a color lies in the interval [0, 1], the acquired data is normalized, i.e. mapped into the interval [0, 1].
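The normalization step can be sketched in Python as follows. A min-max scheme is assumed here, since the patent only says the data is mapped into [0, 1] without specifying the exact mapping.

```python
def normalize(values):
    """Map collected metrics (e.g. FPS or Draw Call counts) into [0, 1]
    so each value can drive an RGB color channel."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # avoid division by zero for flat data
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

fps_samples = [30, 45, 60, 90]
print(normalize(fps_samples))  # -> [0.0, 0.25, 0.5, 1.0]
```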
In the prior art, line charts or plane graphs are used to present performance data, which cannot display panoramic data. In the embodiments of the present invention, the data is presented as a heat map on a sphere, which can present panoramic data; at the same time, the directions of the rays from the sphere center to the sphere surface intuitively present the dimension of the user's viewing angle.
The viewing angles centered on the collection point are divided according to the preset acquisition granularity to obtain multiple viewing angle ranges. Data is acquired according to the multiple viewing angle ranges of the collection point and drawn into a texture map, where each viewing angle range corresponds to one position on the texture map. The texture map is then pasted onto the sphere centered on the collection point.
Before data is acquired under the spherical coordinate system according to the preset acquisition granularity with the collection point as the center, the Cartesian coordinate system of the scene is first converted into a spherical coordinate system, and the scene is then traversed under the spherical coordinate system according to the acquisition granularity to acquire data.
The Cartesian coordinate system of the scene is converted into the spherical coordinate system using conversion formula (1).
After the coordinate system of the scene has been converted from the Cartesian coordinate system to the spherical coordinate system, information is acquired once per frame (the two key pieces of data to be acquired are mainly DC and FPS), and the acquired data is normalized so that colors can be used to represent it afterwards. The change per frame depends on the preset acquisition granularity, for example θ changing by 1 degree and φ by 2 degrees.
The acquisition granularity can be expressed by two pieces of angle information, for example by the two angles θ and φ of the spherical coordinate system; θ and φ are used to traverse the scene under the spherical coordinate system at predetermined angles to acquire data. For example, assume that the acquisition granularity is 1 degree for θ and 2 degrees for φ. θ and φ both start from 0; every frame θ increases by 1 degree; when θ reaches 360 degrees (one full revolution), θ returns to 0 and φ increases by 2 degrees, until φ increases to 180 degrees (half a revolution). After one sampling period of a sampling point ends, a complete heat texture map can be drawn from the acquired information.
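The θ/φ traversal just described enumerates a fixed set of sampling directions per collection point; a minimal sketch, assuming the 1-degree/2-degree granularity of the example:

```python
def traverse_angles(d_theta=1, d_phi=2):
    """Enumerate sampling directions in the order described above:
    theta sweeps a full revolution (0..359 degrees) in d_theta steps,
    and after each revolution phi advances by d_phi until the half
    circle (180 degrees) has been covered."""
    for phi in range(0, 180, d_phi):
        for theta in range(0, 360, d_theta):
            yield theta, phi

directions = list(traverse_angles())
print(len(directions))  # -> 32400 directions (360 * 90) per collection point
```

One metric record per direction therefore fills one complete heat texture per sampling period.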
In the process of transforming the data acquired under the spherical coordinate system into the plane coordinate system, the spherical coordinate system needs to be converted into a Cartesian coordinate system, and the Cartesian coordinate system then needs to be converted into the plane coordinate system.
The spherical coordinate system is converted into the Cartesian coordinate system using formula (2).
In the Cartesian coordinate system, each point is still a three-dimensional coordinate point; the three-dimensional coordinate points need to be transformed into the plane coordinate system so that each point on the texture map can be generated in one-to-one correspondence and the color of each pixel can be drawn. The UVMap algorithm can be used to determine the position of a pixel from the angles; the UVMap algorithm is an algorithm that maps a 3D coordinate system to a 2D coordinate system (converting the Cartesian coordinate system into the plane coordinate system), and its conversion is given by formula (3).
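Formulas (2) and (3) are likewise not reproduced in this text. The spherical-to-Cartesian step is standard, and an equirectangular-style UV mapping of the resulting unit direction is one common instance of what the UVMap step describes; the exact convention used by the patent is an assumption here.

```python
import math

def direction(theta_deg, phi_deg):
    """Spherical -> Cartesian on the unit sphere, with theta as azimuth
    (0..360) and phi as polar angle (0..180), matching the traversal ranges."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    return (math.sin(p) * math.cos(t), math.cos(p), math.sin(p) * math.sin(t))

def uv_map(x, y, z):
    """Map a unit direction to texture coordinates in [0, 1] x [0, 1]
    (one texel of the heat map per sampled direction)."""
    u = (math.atan2(z, x) / (2.0 * math.pi)) % 1.0
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v

u, v = uv_map(*direction(90, 90))  # approximately (0.25, 0.5)
```

Because the mapping is one-to-one over the sampled directions, each recorded (θ, φ) sample lands on exactly one pixel of the generated texture.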
After the texture map is generated, it can be pasted automatically onto the current collection point. The redder the color, the worse the performance; the direction represented by a red region is the direction of the ray from the sphere center to the pixel of the sphere surface showing that color. This achieves the effect of intuitively displaying the performance information of each viewing angle of the collection point; the result after running is shown in Fig. 3. Through recording panoramic performance information and presenting it as a sphere heat map, testers and developers can easily understand the behavior pattern of any viewing direction at each sampling point, and can thereby analyze performance bottlenecks more quickly and conveniently and make reasonable optimizations and adjustments.
The collection point is pre-configured, and there are one or more collection points.
By adding collection points, the amount of data recorded continuously is greatly reduced and much invalid data is filtered out, which solves the problem that the amount of recorded data is too large or the probability density of the data is uncontrollable, thereby maximizing the operability and ease of use of the present invention. At the same time, testers or developers can freely add or delete collection points according to the scene or other requirements, or change the density of collection points. A higher collection density can be set in key areas in order to perform careful performance detection.
An embodiment of the present invention further provides another performance display method. As shown in Fig. 4, the method includes the following steps:
Step S402: convert the Cartesian coordinate system into a spherical coordinate system.
The Cartesian coordinate system is converted into the spherical coordinate system using the above formula (1).
Step S404: automatically traverse the scene under the spherical coordinate system according to a preset resolution.
The preset resolution is the above preset acquisition granularity. The viewing angles are divided into multiple viewing angles according to the preset resolution.
Step S406: record one piece of data every time a viewing angle is traversed.
Step S408: convert the spherical coordinates into the plane coordinate system.
In the process of transforming the data acquired under the spherical coordinate system into the plane coordinate system, the spherical coordinate system needs to be converted into a Cartesian coordinate system using the above formula (2), and the Cartesian coordinate system then needs to be converted into the plane coordinate system using the above formula (3).
Step S410: generate a texture map in the plane coordinate system and paste it onto the material sphere of the sampling point.
In the embodiments of the present invention, collection points are added to the scene; each collection point is processed in the same way, independently and without mutual interference. The coordinate system is first transformed into a spherical coordinate system to facilitate rotation. After the rotation is completed, the acquired data is transformed into the plane coordinate system by the UVMap algorithm, converted into the form of a texture map (a heat map) according to the coordinate system, and pasted onto the sphere of the collection point, so that the performance information of each viewing angle of the collection point can be reflected intuitively and performance bottlenecks can be quickly identified and optimized.
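The whole per-collection-point pipeline summarized above (traverse, record, normalize, lay out as a 2D heat texture) can be sketched end to end. The coarse 10-degree granularity and the sample cost function are illustrative assumptions, not values from the patent.

```python
import math

def build_heat_texture(sample, d_theta=10, d_phi=10):
    """Traverse all directions for one collection point, record a metric via
    `sample(theta_deg, phi_deg)`, min-max normalize, and lay the values out
    as a 2D texture (rows = phi, columns = theta) ready to be pasted onto
    the sphere as a heat map."""
    rows, cols = 180 // d_phi, 360 // d_theta
    raw = [[sample(t * d_theta, p * d_phi) for t in range(cols)] for p in range(rows)]
    flat = [v for row in raw for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in raw]

# hypothetical cost function: rendering is most expensive near the horizon
texture = build_heat_texture(lambda t, p: math.sin(math.radians(p)))
print(len(texture), len(texture[0]))  # -> 18 36
```

Each normalized cell would then be colored (e.g. redder for values near 1) and the texture UV-mapped onto the collection point's sphere.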
According to an embodiment of the present invention, a performance display device is further provided. The performance display device can execute the above performance display method, and the above performance display method can also be executed by the performance display device.
Fig. 5 is a schematic diagram of the performance display device according to an embodiment of the present invention. As shown in Fig. 5, the device includes an acquiring unit 10, a collecting unit 20 and a converting unit 30.
The acquiring unit 10 is configured to obtain a collection point of a scene.
The collecting unit 20 is configured to take the collection point as the center and acquire data according to the viewing angles of the collection point.
The converting unit 30 is configured to convert the acquired data into a texture map and paste the texture map onto a three-dimensional model centered on the collection point, where the texture map is used to display the performance information of the collection point.
Optionally, the collecting unit 20 is configured to take the collection point as the center and acquire data under a spherical coordinate system according to a preset acquisition granularity; the converting unit 30 is configured to transform the data acquired under the spherical coordinate system into a plane coordinate system and to use preset parameters to characterize the acquired data so as to form the texture map, where the parameters include at least one of: figure, number of characters, color; the converting unit 30 is further configured to paste the texture map onto a sphere centered on the collection point.
Optionally, the collecting unit 20 is configured to convert the Cartesian coordinate system of the scene into a spherical coordinate system and to traverse the scene under the spherical coordinate system according to the acquisition granularity to acquire data.
Optionally, the collecting unit 20 is configured to use the angle information in the spherical coordinate system to traverse the scene under the spherical coordinate system at predetermined angles to acquire data.
Optionally, the converting unit 30 is configured to convert the spherical coordinate system into a Cartesian coordinate system, and is further configured to convert the Cartesian coordinate system into a plane coordinate system.
Optionally, the collection point is pre-configured, and there are one or more collection points.
Obviously, those skilled in the art should understand that each module or step of the present invention described above can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device, or they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
The above are only the preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (12)
1. A performance display method, characterized by including:
obtaining a collection point of a scene;
taking the collection point as the center, acquiring data according to the viewing angles of the collection point;
converting the acquired data into a texture map, and pasting the texture map onto a three-dimensional model centered on the collection point, wherein the texture map is used to display the performance information of the collection point.
2. The method according to claim 1, characterized in that:
taking the collection point as the center, acquiring data according to the viewing angles of the collection point includes: taking the collection point as the center, acquiring data under a spherical coordinate system according to a preset acquisition granularity;
converting the acquired data into a texture map includes: transforming the data acquired under the spherical coordinate system into a plane coordinate system, and using preset parameters to characterize the acquired data so as to form the texture map, wherein the parameters include at least one of: figure, number of characters, color;
pasting the texture map onto the three-dimensional model centered on the collection point includes: pasting the texture map onto a sphere centered on the collection point.
3. The method according to claim 2, characterized in that taking the collection point as the center, acquiring data under the spherical coordinate system according to the preset acquisition granularity includes:
converting the Cartesian coordinate system of the scene into a spherical coordinate system, and traversing the scene under the spherical coordinate system according to the acquisition granularity to acquire data.
4. The method according to claim 3, characterized in that traversing the scene under the spherical coordinate system according to the acquisition granularity to acquire data includes:
using the angle information in the spherical coordinate system to traverse the scene under the spherical coordinate system at predetermined angles to acquire data.
5. The method according to claim 3, characterized in that transforming the data collected under the spherical coordinate system into a plane coordinate system comprises:
converting the spherical coordinate system into a Cartesian coordinate system; and
converting the Cartesian coordinate system into a plane coordinate system.
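For the final spherical-to-plane step of claim 5, one common realization is an equirectangular-style projection that maps the two angles onto texture coordinates. The patent does not specify the projection, so this mapping, the function name, and the width/height parameters are assumptions for illustration:

```python
import math

def spherical_to_plane(theta, phi, width=360, height=180):
    """Map a spherical direction (theta, phi) to 2D plane/texture
    coordinates (u, v) via an equirectangular-style projection."""
    u = (phi % (2 * math.pi)) / (2 * math.pi) * width  # azimuth -> x
    v = theta / math.pi * height                       # polar angle -> y
    return u, v
```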
6. The method according to any one of claims 1 to 5, characterized in that the collection point is pre-configured, and there are one or more collection points.
7. A performance exhibiting device, characterized by comprising:
an acquiring unit, configured to acquire a collection point of a scene;
a collecting unit, configured to perform data acquisition centered on the collection point according to a perspective view of the collection point;
a converting unit, configured to convert the collected data into a texture map and paste the texture map onto a stereoscopic model centered on the collection point, wherein the texture map is used to show performance information of the collection point.
8. The device according to claim 7, characterized in that:
the collecting unit is configured to perform data acquisition under a spherical coordinate system, centered on the collection point, according to a preset acquisition granularity;
the converting unit is configured to transform the data collected under the spherical coordinate system into a plane coordinate system, and to characterize the collected data with preset parameters to form the texture map, the parameters including at least one of: figure, word count, and color;
the converting unit is further configured to attach the texture map onto a sphere centered on the collection point.
9. The device according to claim 8, characterized in that the collecting unit is configured to convert the Cartesian coordinate system of the scene into a spherical coordinate system, and to traverse the scene for acquisition under the spherical coordinate system according to the acquisition granularity.
10. The device according to claim 9, characterized in that the collecting unit is configured to use the angle information in the spherical coordinate system to traverse the scene at predetermined angles under the spherical coordinate system for acquisition.
11. The device according to claim 9, characterized in that the converting unit is configured to convert the spherical coordinate system into a Cartesian coordinate system, and is further configured to convert the Cartesian coordinate system into a plane coordinate system.
12. The device according to any one of claims 7 to 11, characterized in that the collection point is pre-configured, and there are one or more collection points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610591312.3A CN106228509A (en) | 2016-07-22 | 2016-07-22 | Performance methods of exhibiting and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106228509A true CN106228509A (en) | 2016-12-14 |
Family
ID=57534014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610591312.3A Pending CN106228509A (en) | 2016-07-22 | 2016-07-22 | Performance methods of exhibiting and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228509A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1556505A (en) * | 2004-01-08 | 2004-12-22 | 沈阳工业学院 | Virtual fractal spectacle three dimensional roaming system |
CN101527051A (en) * | 2009-03-26 | 2009-09-09 | 北京像素软件科技股份有限公司 | Method for rendering sky based on atmospheric scattering theory and device thereof |
CN102056015A (en) * | 2009-11-04 | 2011-05-11 | 沈阳隆惠科技有限公司 | Streaming media application method in panoramic virtual reality roaming |
CN103077552A (en) * | 2012-12-27 | 2013-05-01 | 浙江大学 | Three-dimensional displaying method based on multi-view video |
CN103226693A (en) * | 2013-03-21 | 2013-07-31 | 浙江工业大学 | Identification and space positioning device and method for fished targets based on omni directional vision |
CN104484327A (en) * | 2014-10-09 | 2015-04-01 | 上海杰图天下网络科技有限公司 | Project environment display method |
CN104517001A (en) * | 2014-02-27 | 2015-04-15 | 绿地控股集团有限公司 | Browser-based method for displaying to-be-constructed construction information |
CN105719271A (en) * | 2014-12-04 | 2016-06-29 | 高德软件有限公司 | Method and apparatus for determination of target object |
CN105741341A (en) * | 2016-01-27 | 2016-07-06 | 桂林长海发展有限责任公司 | Three-dimensional space environment imaging system and method |
Non-Patent Citations (2)
Title |
---|
ZHANG, Xitao et al.: "Modeling and Implementation of a Roamable Virtual Battlefield Environment", Computer Simulation (《计算机仿真》) *
LI, Yongliang: "Construction of a Panoramic Roaming System Based on Image-Based Rendering Technology", Geospatial Information (《地理空间信息》) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112365572A (en) * | 2020-09-30 | 2021-02-12 | 深圳市为汉科技有限公司 | Rendering method based on tessellation and related product thereof |
CN112365572B (en) * | 2020-09-30 | 2023-10-03 | 深圳市为汉科技有限公司 | Rendering method based on surface subdivision and related products thereof |
CN113205582A (en) * | 2021-06-03 | 2021-08-03 | 腾讯科技(深圳)有限公司 | Baking paste generating and using method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11538229B2 (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
CN107018336A (en) | The method and apparatus of image procossing and the method and apparatus of Video processing | |
US8194101B1 (en) | Dynamic perspective video window | |
King et al. | ARVino-outdoor augmented reality visualisation of viticulture GIS data | |
CN107852487A (en) | Method for the electronic equipment of 360 degree of 3-D views of generation and for the electronic equipment | |
Wu et al. | Efficient VR and AR navigation through multiperspective occlusion management | |
CN108960947A (en) | Show house methods of exhibiting and system based on virtual reality | |
CN104392045B (en) | A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal | |
CN110379010A (en) | Three-dimensional geographic information method for visualizing and system based on video fusion | |
WO2019238114A1 (en) | Three-dimensional dynamic model reconstruction method, apparatus and device, and storage medium | |
CN106157354B (en) | A kind of three-dimensional scenic switching method and system | |
CN108605093A (en) | Method and apparatus for handling 360 degree of images | |
CN110009561A (en) | A kind of monitor video target is mapped to the method and system of three-dimensional geographical model of place | |
CN105988578A (en) | Interactive video display method, device and system | |
CN108269305A (en) | A kind of two dimension, three-dimensional data linkage methods of exhibiting and system | |
CN102945563A (en) | Showing and interacting system and method for panoramic videos | |
JP2022539160A (en) | Simple environment solver with plane extraction | |
Fu et al. | Tunnel vision optimization method for VR flood scenes based on Gaussian blur | |
Jian et al. | Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system | |
CN115830199B (en) | XR technology-based ubiquitous training campus construction method, system and storage medium | |
WO2022017779A2 (en) | Map for augmented reality | |
CN113253842A (en) | Scene editing method and related device and equipment | |
CN106228509A (en) | Performance methods of exhibiting and device | |
US10909752B2 (en) | All-around spherical light field rendering method | |
Cui et al. | Fusing surveillance videos and three‐dimensional scene: A mixed reality system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20161214 |