CN108876889B - In-situ volume rendering method

In-situ volume rendering method

Info

Publication number
CN108876889B
CN108876889B (application CN201810549318.3A)
Authority
CN
China
Prior art keywords
depth image
image
volume
data
body depth
Prior art date
Legal status
Active
Application number
CN201810549318.3A
Other languages
Chinese (zh)
Other versions
CN108876889A (en)
Inventor
解利军
洪天龙
郑耀
陈建军
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201810549318.3A
Publication of CN108876889A
Application granted
Publication of CN108876889B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08Volume rendering

Abstract

The invention discloses an in-situ volume rendering method in which volume rendering preprocessing is performed in situ during large-scale scientific computation, greatly reducing the volume of data to be transmitted and stored; interactive volume rendering is then performed at a rendering node according to the user viewpoint. The method maps the raw data to a volume depth image at the compute nodes, where this image is a direct volume rendering result that retains depth information. Because a user can hardly intervene interactively during in-situ computation, the main rendering parameters are optimized automatically with a particle swarm algorithm: the user sets the desired compression ratio and rendering time, and the algorithm searches for the rendering parameters that achieve the best rendering effect under those constraints. The method can reduce the transmitted data volume of large-scale scientific computation by 1 to 3 orders of magnitude. Tests on several large-scale scientific computing applications verify its effectiveness.

Description

In-situ volume rendering method
Technical Field
The invention relates to the field of visualization, in particular to an in-situ volume rendering method.
Background
With the rapid development of computer hardware and numerical simulation methods, large-scale scientific computing has begun to attempt exascale (10^18 operations per second) computation, but the disk read-write speed of current supercomputers is at least 4 to 5 orders of magnitude slower than CPU computation speed, so data I/O has become the main bottleneck of scientific computation and scientific data analysis.
To address this problem, the current mainstream approach is to sparsely sample the output data in time and space so as to reduce its scale, but this inevitably loses transient and small-scale features and wastes much of the original computation. To make full use of all computed data without increasing data throughput, the concepts of in-situ analysis and visualization have been proposed in recent years in academia and industry. The core idea is to analyze the data in situ (without transmission or storage) as soon as it is produced by computation and simulation, then discard the raw data and output only the analysis and visualization results.
Although the idea of in-situ analysis and visualization is straightforward, practical implementation is very difficult and real applications remain rare. The main problem is that data analysis and visualization is often an exploratory process, and the appropriate methods and parameters usually cannot be determined "in situ" at computation time.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an in-situ volume rendering method that uses a volume depth image as intermediate data and an optimization algorithm for automatic parameter setting, finally realizing interactive volume rendering of ultra-large-scale scientific data. The specific technical scheme is as follows:
an in-situ volume rendering method, comprising the steps of:
(1) volume depth image generation: the computation results are processed at the in-situ compute nodes of the large-scale scientific computation as follows:
(1.1) emit a ray from the viewpoint O to each pixel of a rendering region (W, H), and compute the intersection of each ray with the volume data bounding box, where W and H denote the width and height of the rendering region;
(1.2) sample at equal intervals along the ray from the entry point to the exit point of the bounding box, with the sampling interval denoted d; sample values are obtained from the surrounding data by trilinear interpolation;
(1.3) obtain the color value c and opacity value α of each sampling point using a transfer function F;
(1.4) when the color difference between adjacent sampling points on the same ray is smaller than a specific threshold δ, merge them; a merged set of sampling points is called a super segment, whose attributes comprise a start position, an end position, a color value and an opacity; the collection of all super segments is called a volume depth image and contains the depth and color information of a direct volume rendering of the data as observed from a specific viewpoint;
(2) volume depth image rendering: transmit the volume depth image obtained in step (1.4) to a rendering node, where it is rendered as follows:
(2.1) expand each super segment into a frustum using the position of its pixel and the viewpoint position at generation time;
(2.2) sort all frustums by depth with respect to the current rendering viewpoint;
(2.3) render all the sorted frustums directly, correcting the opacity of each frustum during rendering: the opacity is determined from the relationship between the length of the frustum and the length of the line segment along which the current line of sight traverses it, using the following formula:
η′ = 1 − (1 − η)^(s′/s)
where η is the opacity value of the frustum, s is the length of the frustum, s′ is the length of the line segment of the current line of sight passing through the frustum, and η′ is the corrected opacity value;
(3) automatic optimization of volume depth image operation parameters
The key operation parameter combination in volume depth image generation is (W, H, d, δ), and it is optimized as follows:
(3.1) select a parameter set in the parameter space, generate a volume depth image with it, and obtain the corresponding volume depth image generation time t and compression ratio c;
(3.2) render a final image from the volume depth image and calculate its quality q;
(3.3) substitute the values of t, c and q into an evaluation function to obtain the evaluation value of this parameter set, and update the global optimal parameter set accordingly; the evaluation function is:
E = k₁·ψ(t) + k₂·φ(c) + k₃·q
where k₁, k₂ and k₃ are the weights of the volume depth image generation time, data compression ratio and rendering quality respectively, satisfying k₁ + k₂ + k₃ = 1; ψ(t) is the evaluation function of the generation time and φ(c) is the evaluation function of the data compression ratio:
[ψ(t) and φ(c) appear only as equation images in the source; they are piecewise functions that prefer generation times below α and compression ratios below β, applying an exponential penalty beyond these points]
where α and β are the proportional division points of the generation time and the compression ratio respectively, taking values in the (0,1) interval for regulation.
(3.4) end the optimization when the termination condition is met, obtaining the optimal parameter set; otherwise, return to (3.1).
(4) In subsequent simulation calculations, generate the volume depth image with this optimal parameter set.
Preferably, the volume depth image generation time t in (3.1) is obtained by the following formula:
t = max{ tᵢ : i = 1, …, N }
where tᵢ is the time taken by the i-th compute node to generate its volume depth image and N is the number of compute nodes.
Preferably, the volume depth image compression ratio c in (3.1) is obtained by the following formula:
c = ( Σᵢ₌₁ᴺ Dᵢ ) / D_raw
where Dᵢ is the data size of the volume depth image generated by the i-th compute node and D_raw is the size of the raw data.
Preferably, the quality q of the image in step (3.2) is obtained by the following formula:
q = √( (1/n) Σᵢ₌₁ⁿ (X_obs,i − X_model,i)² )
where n is the number of image pixels, X_obs,i is the color value of the i-th pixel of the image rendered from the volume depth image, and X_model,i is the color value of the i-th pixel of the reference image. Since each color value is a three-dimensional vector, (X_obs,i − X_model,i) is evaluated with the Euclidean distance formula; the smaller the value, the closer the two images.
Preferably, step (3) performs the optimization with a particle swarm algorithm, whose process comprises:
(a) randomly generate J points, each comprising position information and velocity information; the position is a volume depth image parameter set generated randomly in the parameter space, and the velocity values are generated randomly in (-1, 1);
(b) generate a volume depth image from the parameter set of each point, obtaining the generation time t and compression ratio c; render an image from the generated volume depth image data and compare it with the reference image to obtain the image quality q;
(c) substitute t, c and q into the evaluation function to obtain the evaluation value E, and update the point's historical optimal solution and the global optimal solution according to E;
(d) update the velocity and position of each point according to its historical optimal solution and the global optimal solution;
(e) decide whether the iteration ends by judging whether more than half of the points lie within a Euclidean distance ε of the global optimal solution; if not, return to step (b); otherwise the optimization ends and the global optimal solution is the optimization result.
Compared with the prior art, the invention has the following beneficial effects:
1. the volume depth image can greatly reduce the volume of the original data;
2. rendering from the volume depth image achieves high-quality images and a degree of interactive viewing;
3. automatic parameter optimization based on the particle swarm algorithm can quickly find a good parameter set in a large continuous parameter space.
Drawings
FIG. 1 in-situ volume rendering mode of operation;
FIG. 2 is a schematic diagram of volume depth image generation;
FIG. 3 is a schematic drawing of a volume depth image;
FIG. 4 is a flow chart of automatic parameter optimization based on particle swarm optimization;
FIG. 5 shows DNS turbulence in-situ volume rendering results.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As shown in FIG. 1, the in-situ volume rendering algorithm is divided into two parts: one runs on the computation nodes of a supercomputer and converts the original computed data into the intermediate volume depth image representation; the other runs on a rendering server equipped with graphics cards and renders the volume depth image according to the viewpoint, realizing the volume rendering effect. Parameter selection for the volume depth image is performed by a parameter optimization algorithm that runs only once, in the first time step of the computation; after the parameters are selected they remain unchanged. The concrete implementation is as follows:
a method of in-situ volume rendering, the method comprising the steps of:
(1) volume depth image generation (as shown in FIG. 2): the computation results are processed at the in-situ compute nodes of the large-scale scientific computation as follows:
(1.1) emit a ray from the viewpoint O to each pixel of a rendering region (W, H), and compute the intersection of each ray with the volume data bounding box, where W and H denote the width and height of the rendering region;
(1.2) sample at equal intervals along the ray from the entry point to the exit point of the bounding box, with the sampling interval denoted d; sample values are obtained from the surrounding data by trilinear interpolation;
(1.3) obtain the color value c and opacity value α of each sampling point using a transfer function F;
(1.4) when the color difference between adjacent sampling points on the same ray is smaller than a specific threshold δ, merge them; a merged set of sampling points is called a super segment, whose attributes comprise a start position, an end position, a color value and an opacity; the collection of all super segments is called a volume depth image and contains the depth and color information of a direct volume rendering of the data as observed from a specific viewpoint. A sketch of this per-ray generation is given below.
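For concreteness, the following Python sketch (not part of the patent text) illustrates steps (1.1) to (1.4) for a single ray. The scalar field volume, the transfer function transfer(value) -> (rgb, alpha), the axis-aligned geometry and the opacity-compositing rule used when extending a super segment are all illustrative assumptions, not the patented implementation.

    import numpy as np

    def trilinear(volume, p):
        # Trilinearly interpolate the 3-D array `volume` at continuous position p = (x, y, z).
        i, j, k = (min(max(int(np.floor(c)), 0), n - 2) for c, n in zip(p, volume.shape))
        fx, fy, fz = p[0] - i, p[1] - j, p[2] - k
        val = 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((fx if di else 1 - fx) *
                         (fy if dj else 1 - fy) *
                         (fz if dk else 1 - fz))
                    val += w * volume[i + di, j + dj, k + dk]
        return val

    def ray_super_segments(volume, origin, direction, t_in, t_out, d, delta, transfer):
        # March from the entry point t_in to the exit point t_out with step d (step 1.2),
        # merging adjacent samples whose colors differ by less than delta (step 1.4).
        # origin and direction are NumPy vectors; each super segment is
        # [start, end, color, opacity].
        segments = []
        t = t_in
        while t <= t_out:
            rgb, alpha = transfer(trilinear(volume, origin + t * direction))
            rgb = np.asarray(rgb, dtype=float)
            if segments and np.linalg.norm(rgb - segments[-1][2]) < delta:
                seg = segments[-1]
                seg[1] = t + d                    # extend the current super segment
                seg[3] += alpha * (1.0 - seg[3])  # accumulate opacity front to back (assumed rule)
            else:
                segments.append([t, t + d, rgb, float(alpha)])
            t += d
        return segments  # this pixel's entry in the volume depth image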
(2) volume depth image rendering (as shown in FIG. 3): the volume depth image obtained in step (1.4) is generally 1 to 3 orders of magnitude smaller than the original data; it can be transmitted over the network to a rendering node for rendering, or stored permanently in a disk array for post-hoc interactive analysis. The rendering method is as follows:
(2.1) expand each super segment into a frustum using the position of its pixel and the viewpoint position at generation time;
(2.2) sort all frustums by depth with respect to the current rendering viewpoint;
(2.3) render all the sorted frustums directly, correcting the opacity of each frustum during rendering: the opacity is determined from the relationship between the length of the frustum and the length of the line segment along which the current line of sight traverses it, using the following formula:
η′ = 1 − (1 − η)^(s′/s)
where η is the opacity value of the frustum, s is the length of the frustum, s′ is the length of the line segment of the current line of sight passing through the frustum, and η′ is the corrected opacity value. A sketch of this rendering pass is given below.
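A matching sketch of the rendering pass, under the same caveats: each frustum record is assumed to carry its center, original length s, the length s′ traversed by the current line of sight, a color and an opacity. The correction formula is the standard length-ratio form reconstructed above from the named variables, since the original equation survives only as an image.

    import numpy as np

    def corrected_opacity(eta, s, s_prime):
        # Length-ratio opacity correction: eta' = 1 - (1 - eta)**(s'/s).
        return 1.0 - (1.0 - eta) ** (s_prime / s)

    def draw_frustums(frustums, camera_pos):
        # Sort frustums by depth for the current viewpoint (step 2.2), then
        # composite front to back with corrected opacity (step 2.3).
        frustums = sorted(frustums, key=lambda f: np.linalg.norm(f["center"] - camera_pos))
        color = np.zeros(3)
        transmittance = 1.0
        for f in frustums:
            eta = corrected_opacity(f["alpha"], f["length"], f["view_length"])
            color += transmittance * eta * f["rgb"]
            transmittance *= 1.0 - eta
        return color  # composited pixel color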
(3) automatic optimization of volume depth image operation parameters
Because the volume depth image is generated in situ, it is difficult to set the optimal operation parameters manually, so an automatic optimization method is needed. The optimization targets are as follows:
(a) the difference between the image rendered from the volume depth image and the image produced by direct volume rendering of the raw data should be as small as possible; this target is measured with the root mean square error:
q = √( (1/n) Σᵢ₌₁ⁿ (X_obs,i − X_model,i)² )
where n is the number of image pixels, X_obs,i is the color value of the i-th pixel of the image rendered from the volume depth image, and X_model,i is the color value of the i-th pixel of the reference image. Since each color value is a three-dimensional vector, (X_obs,i − X_model,i) is evaluated with the Euclidean distance formula; the smaller the value, the closer the two images.
(b) the volume depth image should be as small as possible; this target is measured with the data compression ratio:
c = ( Σᵢ₌₁ᴺ Dᵢ ) / D_raw
where c is the data compression ratio of the global volume depth image, Dᵢ is the data size of the volume depth image generated by the i-th compute node, and D_raw is the size of the raw data.
(c) the generation time of the volume depth image should be as small as possible:
t = max{ tᵢ : i = 1, …, N }
where t is the generation time of the global volume depth image and tᵢ is the time taken by the i-th compute node to generate its volume depth image.
(d) the invention combines the three targets above into a single-objective optimization problem by weighting:
E = k₁·ψ(t) + k₂·φ(c) + k₃·q
where k₁, k₂ and k₃ are the weights of the volume depth image generation time, data compression ratio and rendering quality respectively, satisfying k₁ + k₂ + k₃ = 1; ψ(t) is the evaluation function of the generation time and φ(c) is the evaluation function of the data compression ratio:
[ψ(t) and φ(c) appear only as equation images in the source]
where α and β are the proportional division points of the generation time and the compression ratio respectively, taking values in the (0,1) interval for regulation; parameter sets whose generation time is below α and whose compression ratio is below β are preferred, and an exponential penalty is applied when these values are exceeded. A sketch of these measurements and the combined score is given below.
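The following Python sketch assembles the three measurements and the weighted score. Because the exact piecewise forms of ψ and φ are not recoverable from the equation images, the linear-below, exponential-above penalty used here is an assumption, as is taking the maximum over node times for t; the default weights and division points are illustrative only.

    import numpy as np

    def quality_rmse(rendered, reference):
        # Root mean square of per-pixel Euclidean color distances (objective a).
        diff = rendered.reshape(-1, 3).astype(float) - reference.reshape(-1, 3).astype(float)
        return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

    def compression_ratio(depth_image_sizes, raw_size):
        # Global compression ratio c = sum(D_i) / D_raw (objective b).
        return sum(depth_image_sizes) / raw_size

    def generation_time(node_times):
        # Global generation time: the slowest of the parallel compute nodes (objective c);
        # taking the max here is an assumption.
        return max(node_times)

    def penalty(x, cut):
        # Assumed evaluation function: linear below the division point, exponential
        # beyond it; exp(0) + cut - 1 == cut, so the function is continuous at x == cut.
        return x if x <= cut else np.exp(x - cut) + cut - 1.0

    def evaluate(t, c, q, k=(1 / 3, 1 / 3, 1 / 3), alpha=0.5, beta=0.5):
        # Weighted single-objective score E = k1*psi(t) + k2*phi(c) + k3*q,
        # with k1 + k2 + k3 = 1; smaller is better.
        k1, k2, k3 = k
        return k1 * penalty(t, alpha) + k2 * penalty(c, beta) + k3 * q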
Many factors affect the above targets, including computing platform characteristics, network characteristics and raw data characteristics. The parameters the algorithm can set are: the size (W, H) and sampling interval d of the depth image to be generated, and the merge threshold δ of the super segments. The invention performs the optimization with a particle swarm algorithm, as shown in FIG. 4; the process comprises:
(a) randomly generate J points, each comprising position information and velocity information; the position is a volume depth image parameter set generated randomly in the parameter space, and the velocity values are generated randomly in (-1, 1);
(b) generate a volume depth image from the parameter set of each point, obtaining the generation time t and compression ratio c; render an image from the generated volume depth image data and compare it with the reference image to obtain the image quality q;
(c) substitute t, c and q into the evaluation function to obtain the evaluation value E, and update the point's historical optimal solution and the global optimal solution according to E;
(d) update the velocity and position of each point by substituting its historical optimal solution and the global optimal solution into the following formulas:
V_{i,t+1} = w·V_{i,t} + c₁·r₁·(P_{B,i} − X_{i,t}) + c₂·r₂·(G_B − X_{i,t})
X_{i,t+1} = X_{i,t} + r·V_{i,t+1}
where V_{i,t+1} and X_{i,t+1} are the velocity and new position of the i-th point in round t+1, obtained from the previous round's V_{i,t} and X_{i,t}; c₁ and c₂ are learning factors representing the maximum step sizes with which a point is pushed toward its own historical optimal solution and toward the global optimal solution of the point set, respectively (the larger c₂ is, the faster the whole point set converges toward the global optimal solution); P_{B,i} is the historical optimal solution of the i-th point and G_B is the global optimal solution; w is the inertia weight, a coefficient that preserves the previous velocity so that a point tends to keep its original direction of motion; r₁ and r₂ are random numbers uniformly distributed in [0, 1], adding random perturbation to the algorithm; and the constant r is a constraint factor, a weight on the velocity, typically set to 1.
(e) decide whether the iteration ends by judging whether more than half of the points lie within a Euclidean distance ε of the global optimal solution; if not, return to step (b); otherwise the optimization ends and the global optimal solution is the optimization result. A minimal sketch of this loop is given below.
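A minimal particle swarm loop matching steps (a) to (e), again as an illustrative sketch. run_pipeline is a hypothetical stand-in that generates a volume depth image from one parameter set (W, H, d, δ) and returns (t, c, q); the swarm size, coefficients and bounds are assumed defaults rather than values from the patent.

    import numpy as np

    def pso(run_pipeline, evaluate, bounds, J=20, w=0.7, c1=1.5, c2=1.5,
            r=1.0, eps=1e-3, max_iter=50):
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        dim = lo.size
        X = lo + np.random.rand(J, dim) * (hi - lo)  # positions: parameter sets (step a)
        V = np.random.uniform(-1.0, 1.0, (J, dim))   # velocities in (-1, 1)
        P = X.copy()                                 # per-point historical best positions
        p_val = np.full(J, np.inf)
        g, g_val = X[0].copy(), np.inf               # global best
        for _ in range(max_iter):
            for i in range(J):
                t, c, q = run_pipeline(X[i])         # steps b and c
                e = evaluate(t, c, q)
                if e < p_val[i]:
                    p_val[i], P[i] = e, X[i].copy()
                if e < g_val:
                    g_val, g = e, X[i].copy()
            r1, r2 = np.random.rand(J, dim), np.random.rand(J, dim)
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)  # step d
            X = np.clip(X + r * V, lo, hi)
            # step e: stop when more than half the points are within eps of the best
            if np.sum(np.linalg.norm(X - g, axis=1) < eps) > J / 2:
                break
        return g, g_val

In the in-situ setting, run_pipeline would launch the depth-image generation on the compute nodes and the comparison rendering on the rendering node, so each evaluation is one full pass of steps (1) and (2).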
(4) In subsequent simulation calculations, the volume depth image is generated in situ with the optimal parameter set.
Example: the algorithm was implemented on a supercomputing platform. The test used 32 compute nodes, each with 16 Intel(R) Xeon(R) E5620 CPUs at a base frequency of 2.40 GHz and 22 GB of memory per node, connected by a 1G InfiniBand network. The rendering node used two 4-core Intel Core i7 CPUs, two Quadro 4000 graphics cards and 32 GB of memory.
The computational example simulated 3-dimensional incompressible isotropic turbulence using direct numerical simulation (DNS) on a 1024³ grid with 1024 time steps. FIG. 5 shows the rendering of the λ₂ quantity derived from the velocity field in the DNS using this method; the rendering reveals the positions of vortices in the flow field. The result differs very little from direct volume rendering, with no obvious visual difference, while the compression ratio after in-situ processing exceeds a factor of 10.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (5)

1. An in-situ volume rendering method, comprising the steps of:
(1) volume depth image generation: processing the computation results at the in-situ compute nodes of the large-scale scientific computation as follows:
(1.1) emitting a ray from the viewpoint O to each pixel of a rendering region (W, H), and computing the intersection of each ray with the volume data bounding box, wherein W and H denote the width and height of the rendering region;
(1.2) sampling at equal intervals along the ray from the entry point to the exit point of the bounding box, wherein the sampling interval is denoted d, and sample values are obtained from the surrounding data by trilinear interpolation;
(1.3) obtaining the color value c and opacity value α of each sampling point using a transfer function F;
(1.4) when the color difference between adjacent sampling points on the same ray is smaller than a specific threshold δ, merging them, wherein a merged set of sampling points is called a super segment, the attributes of the super segment comprise a start position, an end position, a color value and an opacity, and the collection of all super segments is called a volume depth image and contains the depth and color information of a direct volume rendering of the data as observed from a specific viewpoint;
(2) volume depth image rendering: transmitting the volume depth image obtained in step (1.4) to a rendering node for rendering, wherein the rendering method comprises:
(2.1) expanding each super segment into a frustum using the position of its pixel and the viewpoint position at generation time;
(2.2) sorting all frustums by depth with respect to the current rendering viewpoint;
(2.3) rendering all the sorted frustums directly, and correcting the opacity of each frustum during rendering: the opacity is determined from the relationship between the length of the frustum and the length of the line segment along which the current line of sight traverses it, using the following formula:
η′ = 1 − (1 − η)^(s′/s)
wherein η is the opacity value of the frustum, s is the length of the frustum, s′ is the length of the line segment of the current line of sight passing through the frustum, and η′ is the corrected opacity value;
(3) automatic optimization of volume depth image operation parameters:
the key operation parameter combination in volume depth image generation is (W, H, d, δ), and it is optimized as follows:
(3.1) selecting a parameter set in the parameter space, generating a volume depth image with it, and obtaining the corresponding volume depth image generation time t and compression ratio c;
(3.2) rendering a final image from the volume depth image and calculating its quality q;
(3.3) substituting the values of t, c and q into an evaluation function to obtain the evaluation value of this parameter set, and updating the global optimal parameter set accordingly, wherein the evaluation function is:
E = k₁·ψ(t) + k₂·φ(c) + k₃·q
wherein k₁, k₂ and k₃ are the weights of the volume depth image generation time, data compression ratio and rendering quality respectively, satisfying k₁ + k₂ + k₃ = 1, ψ(t) is the evaluation function of the generation time, and φ(c) is the evaluation function of the data compression ratio:
[ψ(t) and φ(c) appear only as equation images in the source]
wherein α and β are the proportional division points of the generation time and the compression ratio respectively, taking values in the (0,1) interval for regulation;
(3.4) ending the optimization when the termination condition is met, obtaining the optimal parameter set; otherwise, returning to (3.1);
(4) in subsequent simulation calculations, generating the volume depth image with this optimal parameter set.
2. The in-situ volume rendering method according to claim 1, wherein the volume depth image generation time t in (3.1) is obtained by the following formula:
t = max{ tᵢ : i = 1, …, N }
wherein tᵢ is the time taken by the i-th compute node to generate its volume depth image and N is the number of compute nodes.
3. The in-situ volume rendering method according to claim 1, wherein the volume depth image compression ratio c in (3.1) is obtained by the following formula:
c = ( Σᵢ₌₁ᴺ Dᵢ ) / D_raw
wherein Dᵢ is the data size of the volume depth image generated by the i-th compute node and D_raw is the size of the raw data.
4. The in-situ volume rendering method according to claim 1, wherein the quality q of the image in step (3.2) is obtained by the following formula:
q = √( (1/n) Σᵢ₌₁ⁿ (X_obs,i − X_model,i)² )
wherein n is the number of image pixels, X_obs,i is the color value of the i-th pixel of the image rendered from the volume depth image, and X_model,i is the color value of the i-th pixel of the reference image; since each color value is a three-dimensional vector, (X_obs,i − X_model,i) is evaluated with the Euclidean distance formula, and the smaller the value, the closer the two images.
5. The in-situ volume rendering method according to claim 1, wherein step (3) performs the optimization with a particle swarm algorithm comprising the steps of:
(a) randomly generating J points, each comprising position information and velocity information, wherein the position is a volume depth image parameter set generated randomly in the parameter space and the velocity values are generated randomly in (-1, 1);
(b) generating a volume depth image from the parameter set of each point, obtaining the generation time t and compression ratio c, rendering an image from the generated volume depth image data, and comparing it with the reference image to obtain the image quality q;
(c) substituting t, c and q into the evaluation function to obtain the evaluation value E, and updating the point's historical optimal solution and the global optimal solution according to E;
(d) updating the velocity and position of each point according to its historical optimal solution and the global optimal solution;
(e) deciding whether the iteration ends by judging whether more than half of the points lie within a Euclidean distance ε of the global optimal solution; if not, returning to step (b); otherwise the optimization ends and the global optimal solution is the optimization result.
CN201810549318.3A 2018-05-31 2018-05-31 In-situ volume rendering method Active CN108876889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810549318.3A CN108876889B (en) 2018-05-31 2018-05-31 In-situ volume rendering method


Publications (2)

Publication Number Publication Date
CN108876889A CN108876889A (en) 2018-11-23
CN108876889B (en) 2022-04-22

Family

ID=64336329



Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9053574B2 (en) * 2011-03-02 2015-06-09 Sectra Ab Calibrated natural size views for visualizations of volumetric data sets

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005020141A2 (en) * 2003-08-18 2005-03-03 Fovia, Inc. Method and system for adaptive direct volume rendering
CN101286225A (en) * 2007-04-11 2008-10-15 中国科学院自动化研究所 Mass data object plotting method based on three-dimensional grain hardware acceleration
CN101604453A (en) * 2009-07-08 2009-12-16 西安电子科技大学 Large-scale data field volume rendering method based on partition strategy
WO2012135153A2 (en) * 2011-03-25 2012-10-04 Oblong Industries, Inc. Fast fingertip detection for initializing a vision-based hand tracker
WO2013161590A1 (en) * 2012-04-27 2013-10-31 株式会社日立メディコ Image display device, method and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
In-situ visualization for petascale scientific computing; Shan Guihua; Journal of Computer-Aided Design & Computer Graphics; 2013-03-31; pp. 286-293 *

Also Published As

Publication number Publication date
CN108876889A (en) 2018-11-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant