CN113706696A - 3D geometric model detail level adaptive selection method based on object visual saliency - Google Patents

3D geometric model detail level adaptive selection method based on object visual saliency

Info

Publication number: CN113706696A (application number CN202111023516.4A)
Authority: CN (China)
Prior art keywords: three-dimensional scene, geometric, frame, picture, geometric object
Other languages: Chinese (zh)
Other versions: CN113706696B (granted)
Inventors: 陈纯毅 (Chen Chunyi), 胡小娟 (Hu Xiaojuan)
Original and current assignee: Changchun University of Science and Technology
Priority/filing date: 2021-09-02; publication of CN113706696A: 2021-11-26; grant and publication of CN113706696B: 2023-09-19
Legal status: Granted; currently active

Classifications

    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects (G: Physics; G06: Computing, Calculating or Counting; G06T: Image Data Processing or Generation, in General)
    • G06T2210/04 Architectural design, interior design (under G06T2210/00, Indexing scheme for image generation or computer graphics)

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a 3D geometric model detail level adaptive selection method based on object visual saliency. The method exploits inter-frame coherence: the saliency value of each geometric object of the three-dimensional scene is calculated from the saliency map of the previous frame of the three-dimensional scene picture, and these values are then used as the basis for selecting an appropriate geometric model detail level for every geometric object of the three-dimensional scene when the next frame is drawn. Because geometric models of low fineness are selected for geometric objects with low saliency values, the amount of geometric model data processed during ray tracing is reduced, the computational overhead of ray-object intersection tests is lowered, and the drawing of three-dimensional scene pictures is accelerated.

Description

3D geometric model detail level adaptive selection method based on object visual saliency
Technical Field
The invention relates to a 3D geometric model detail level adaptive selection method based on object visual saliency, and belongs to the technical field of virtual three-dimensional scene rendering.
Background
Virtual three-dimensional scene rendering technology is applied in many fields, such as film production, game animation and virtual reality. It is generally desirable to render three-dimensional scenes as fast as possible while ensuring that the visual quality of the picture meets the requirements of the particular application. To increase rendering speed, various acceleration methods have been devised, such as more efficient scene acceleration structures, faster visibility solvers and more effective spatio-temporal reuse algorithms. In fact, many factors affect the rendering speed of a three-dimensional scene, and acceleration can be explored from different angles. A virtual three-dimensional scene typically comprises several geometric objects (as shown in Fig. 1), each of which can be described by a number of basic geometric primitives such as triangles. For example, Chapter 25 of Computer Graphics: Principles and Practice, 3rd Edition (J. F. Hughes et al., Addison-Wesley, 2014) describes the representation of geometric object surfaces with triangle meshes, and Section 25.4 also describes the Level of Detail (LOD) concept for geometric models. In three-dimensional scene pictures generated with three-dimensional graphics rendering techniques (e.g., rasterization, ray tracing or path tracing), different picture regions tend to have different visual saliency. The master's thesis "Research on image saliency detection technology based on deep learning", completed by Feng Jiaqi at the University of Electronic Science and Technology in 2019, introduces how to obtain the saliency map of an image using image saliency detection techniques. With such a technique, the saliency map of any given picture can be computed; the saliency map is a gray-scale image, the gray value of each of its pixels quantifies the probability that the corresponding pixel of the picture is noticed by the human eye (i.e., its visual saliency), and the larger the gray value of a saliency-map pixel, the more likely the corresponding pixel of the picture is to attract attention.
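The patent does not tie the saliency map to any particular detector. Purely as an illustration (not part of the claimed method), the following Python sketch computes a gray-scale saliency map of a rendered frame with the spectral-residual detector from the opencv-contrib package; the file name frame.png is a placeholder.

```python
# Minimal saliency-map sketch (assumes opencv-contrib-python is installed).
# The spectral-residual detector stands in for any image saliency technique.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                  # placeholder path to a rendered frame
if frame is None:
    raise FileNotFoundError("frame.png not found")

detector = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency = detector.computeSaliency(frame)   # float map in [0, 1], same size as the frame
if not ok:
    raise RuntimeError("saliency computation failed")

saliency_u8 = (saliency * 255).astype(np.uint8)  # gray-scale map: larger value = more salient
cv2.imwrite("saliency_map.png", saliency_u8)
```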
If all pixels covered by a certain geometric object in the three-dimensional scene picture have low saliency, that object is unlikely to attract attention, so when the geometric model of the three-dimensional scene is constructed, the object can be represented by a geometric model of a lower level of detail (i.e., a relatively coarse model) without significantly affecting the perceived visual quality of the drawn picture. When lower-detail geometric models are used for such objects, the number of basic geometric primitives in the three-dimensional scene geometric model decreases, so the geometric computation cost drops correspondingly and the drawing speed of the three-dimensional scene picture increases. In order to select an appropriate level of detail for different geometric objects while rendering the scene, the saliency of each geometric object in the three-dimensional scene must be calculated. The saliency map produced by conventional image saliency detection techniques measures the saliency of each pixel of the picture. The invention provides a method for further calculating the saliency of each geometric object from the saliency map of the picture, as well as a method for adaptively selecting the geometric model detail level of a geometric object according to its saliency.
Disclosure of Invention
The invention aims to provide a 3D geometric model detail level adaptive selection method based on object visual saliency, which adaptively selects the detail level of the geometric model of each geometric object according to the object's saliency in the three-dimensional scene picture, so as to reduce the geometric computation overhead of drawing the three-dimensional scene picture and increase the drawing speed.
The technical scheme of the method is realized as follows. A 3D geometric model detail level adaptive selection method based on object visual saliency is characterized by comprising the following steps: generate N_lod geometric models of different detail levels for each geometric object in a three-dimensional scene; in the process of continuously drawing NUM+1 frames of three-dimensional scene pictures, calculate the saliency map of the current frame picture of the three-dimensional scene using an image saliency detection technique, and calculate the saliency value of each geometric object according to the saliency map of the current frame picture, NUM being a positive integer; when the next frame picture of the three-dimensional scene is drawn, select an appropriate geometric model detail level for each geometric object according to its saliency value, construct the three-dimensional scene with the selected models, and draw the next frame picture of the three-dimensional scene. The concrete implementation steps are as follows:
Step 101: generate N_lod geometric models of different detail levels for each geometric object A001 in the three-dimensional scene, numbered from detail level 1 to detail level N_lod; the fineness of the geometric models decreases progressively from level 1 to level N_lod; number all geometric objects A001 in the three-dimensional scene from 1 up to N_obj, where N_obj denotes the number of geometric objects A001 contained in the three-dimensional scene;
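Step 101 leaves open how the N_lod models are produced. A minimal sketch follows, assuming the Open3D library and a placeholder mesh file bunny.ply; the halving schedule per level is an arbitrary assumption, not something the patent specifies.

```python
# Sketch of Step 101: generate N_lod geometric models of decreasing fineness
# for one geometric object (assumes the open3d package; bunny.ply is a placeholder).
import open3d as o3d

N_LOD = 4
mesh = o3d.io.read_triangle_mesh("bunny.ply")          # level-1 (finest) model
lod_models = [mesh]
target = len(mesh.triangles)
for level in range(2, N_LOD + 1):
    target = max(16, target // 2)                      # assumed halving schedule per level
    lod_models.append(mesh.simplify_quadric_decimation(
        target_number_of_triangles=target))

for level, m in enumerate(lod_models, start=1):
    print(f"detail level {level}: {len(m.triangles)} triangles")
```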
Step 102: when drawing the 1st frame three-dimensional scene picture A002 of the three-dimensional scene, select the level-1 geometric model of each geometric object A001 in the three-dimensional scene to construct the three-dimensional scene geometric model; draw the 1st frame three-dimensional scene picture A002 by ray tracing, according to the three-dimensional scene geometric model, the light source parameters of the three-dimensional scene, the material parameters of the geometric objects and the virtual camera parameters corresponding to the 1st frame; while drawing the 1st frame three-dimensional scene picture A002, record, for each pixel on the virtual pixel plane of the virtual camera, the number of the geometric object A001 to which the visible scene point of that pixel belongs; the pixels of the 1st frame three-dimensional scene picture A002 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera, so each pixel of the 1st frame three-dimensional scene picture A002 corresponds to the number of a geometric object A001; let the frame counter I_F = 1; display the I_F-th frame three-dimensional scene picture A002 on the display;
Step 103: compute the saliency map A003 of the I_F-th frame three-dimensional scene picture A002 using an image saliency detection technique; the pixels of saliency map A003 correspond one-to-one to the pixels of the I_F-th frame three-dimensional scene picture A002;
Step 104: calculate the saliency value of each geometric object A001 of the three-dimensional scene; specifically, for i = 1, 2, …, N_obj, perform the following operations on the i-th geometric object A001 of the three-dimensional scene:
Step 104-1: create a set S1 in the memory of the computer, each element of which stores one pixel saliency value; initialize S1 to the empty set;
Step 104-2: for each pixel A004 of the I_F-th frame three-dimensional scene picture A002, do the following: if the number of the geometric object A001 corresponding to pixel A004 equals i, add the value of the pixel of saliency map A003 that corresponds to pixel A004 to set S1; the value of each pixel of saliency map A003 represents the saliency value of the corresponding pixel of the I_F-th frame three-dimensional scene picture A002;
Step 104-3: if set S1 is empty, go to Step 104-5; otherwise sort the elements of set S1 in descending order and let

m = ⌈α × M⌉,

where α is a ratio coefficient taking a value in the interval (0, 1], M is the number of elements in set S1, and ⌈x⌉ denotes rounding x up to the nearest integer; let TS denote the value of the m-th element of the sorted set S1;
Step 104-4: let the saliency value of the i-th geometric object A001 equal the average of all elements of set S1 that are greater than or equal to TS; go to Step 104-6;
Step 104-5: let the saliency value of the i-th geometric object A001 equal 0;
Step 104-6: the operation on the i-th geometric object A001 of the three-dimensional scene ends;
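A compact sketch of Step 104 is given below, assuming the per-pixel object-number buffer recorded in Step 102 and the saliency map A003 from Step 103 are available as NumPy arrays of identical shape; the function and array names are placeholders, not identifiers from the patent.

```python
# Sketch of Step 104: per-object saliency from the saliency map and the object-number buffer.
# object_id_buffer[y, x] holds the number (1..N_obj) of the object seen through pixel (x, y),
# or 0 where no object is hit; saliency_map[y, x] holds that pixel's saliency value.
import math
import numpy as np

def object_saliency(object_id_buffer, saliency_map, n_obj, alpha=0.7):
    saliency_of_object = np.zeros(n_obj + 1)           # index 0 unused
    for i in range(1, n_obj + 1):
        s1 = saliency_map[object_id_buffer == i]       # set S1 for object i
        if s1.size == 0:                               # Step 104-5: object not visible
            continue
        s1 = np.sort(s1)[::-1]                         # descending order (Step 104-3)
        m = math.ceil(alpha * s1.size)                 # m = ceil(alpha * M)
        ts = s1[m - 1]                                 # threshold TS = value of the m-th element
        saliency_of_object[i] = s1[s1 >= ts].mean()    # Step 104-4: mean of elements >= TS
    return saliency_of_object
```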
Step 105: select an appropriate geometric model detail level for each geometric object A001 of the three-dimensional scene for drawing the (I_F+1)-th frame three-dimensional scene picture A002; specifically, for i = 1, 2, …, N_obj, perform the following operations on the i-th geometric object A001 of the three-dimensional scene:
Step 105-1: let SIL denote the saliency value of the i-th geometric object A001; if SIL equals 0, go to Step 105-2; otherwise compute the subscript k that satisfies the condition a_k × SH < SIL ≤ a_{k+1} × SH, where k is an integer in the interval [0, N_lod - 1], and {a_m | m = 0, 1, 2, …, N_lod} is a sequence of division-point coefficients used to divide the saliency value range of the geometric objects A001 into N_lod subintervals, with

0 = a_0 < a_1 < a_2 < … < a_{N_lod} = 1,

and SH is the maximum of the saliency values of all geometric objects A001 of the three-dimensional scene; let LEVEL = N_lod - k; go to Step 105-3;
Step 105-2: let LEVEL = N_lod;
Step 105-3: when constructing the three-dimensional scene geometric model used for drawing the (I_F+1)-th frame three-dimensional scene picture A002, select the geometric model of the i-th geometric object A001 at detail level LEVEL as the geometric data representing the i-th geometric object A001;
Step 105-4: the operation on the i-th geometric object A001 of the three-dimensional scene ends;
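The interval test of Step 105 can be written directly, as in the sketch below; the coefficient list a = [a_0, …, a_{N_lod}] with a_0 = 0 and a_{N_lod} = 1 follows the reconstruction above, the per-object saliency array is assumed to come from the Step 104 sketch, and the function name is a placeholder.

```python
# Sketch of Step 105: map each object's saliency value SIL to a detail level LEVEL.
def select_lod_levels(saliency_of_object, a, n_lod):
    """a = [a_0, a_1, ..., a_{N_lod}] with a_0 = 0 and a_{N_lod} = 1 (reconstructed)."""
    sh = max(saliency_of_object[1:])                   # SH: maximum object saliency
    levels = [None]                                    # index 0 unused
    for sil in saliency_of_object[1:]:
        if sil == 0 or sh == 0:                        # Step 105-2: object not visible
            levels.append(n_lod)
            continue
        for k in range(n_lod):                         # find k with a_k*SH < SIL <= a_{k+1}*SH
            if a[k] * sh < sil <= a[k + 1] * sh:
                levels.append(n_lod - k)               # Step 105-1: LEVEL = N_lod - k
                break
    return levels
```

Because LEVEL = N_lod - k, objects in the highest saliency subinterval receive the level-1 (finest) model and objects in the lowest subinterval receive the level-N_lod (coarsest) model, which matches the ordering fixed in Step 101.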
Step 106: draw the (I_F+1)-th frame three-dimensional scene picture A002 by ray tracing, according to the three-dimensional scene geometric model, the light source parameters, the material parameters and the virtual camera parameters corresponding to the (I_F+1)-th frame three-dimensional scene picture A002; while drawing the (I_F+1)-th frame three-dimensional scene picture A002, record, for each pixel on the virtual pixel plane of the virtual camera, the number of the geometric object A001 to which the visible scene point of that pixel belongs; the pixels of the (I_F+1)-th frame three-dimensional scene picture A002 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera, so each pixel of the (I_F+1)-th frame three-dimensional scene picture A002 corresponds to the number of a geometric object A001;
Step 107: display the (I_F+1)-th frame three-dimensional scene picture A002 on the display; if I_F < NUM, let I_F = I_F + 1 and go to Step 103; otherwise end the drawing operation.
The invention has the following positive effects: the method exploits inter-frame coherence by calculating the saliency value of each geometric object of the three-dimensional scene from the saliency map of the previous frame of the three-dimensional scene picture, and uses these values as the basis for selecting an appropriate geometric model detail level for every geometric object of the three-dimensional scene when the next frame is drawn. Because geometric models of low fineness are selected for geometric objects with low saliency values, the amount of geometric model data processed during ray tracing is reduced, the computational overhead of ray-object intersection tests is lowered, and the drawing of the three-dimensional scene picture is accelerated.
Drawings
Fig. 1 is a schematic diagram of a three-dimensional scene picture including a plurality of geometric objects.
Detailed Description
In order that the features and advantages of the method may be understood more clearly, the method is further described below in connection with a specific embodiment. The embodiment considers a virtual room scene containing geometric objects such as the surrounding walls, the ceiling, the floor, a door, a window, a curtain, a table and chairs. The table and chairs stand on the floor, the door is closed, and the window is covered by the curtain. A circular light source embedded in the ceiling illuminates the room from above. All of these geometric objects are made of diffusely reflecting materials.
The technical scheme of the method is realized as follows. A 3D geometric model detail level adaptive selection method based on object visual saliency is characterized by comprising the following steps: generate N_lod geometric models of different detail levels for each geometric object in a three-dimensional scene; in the process of continuously drawing NUM+1 frames of three-dimensional scene pictures, calculate the saliency map of the current frame picture of the three-dimensional scene using an image saliency detection technique, and calculate the saliency value of each geometric object according to the saliency map of the current frame picture, NUM being a positive integer; when the next frame picture of the three-dimensional scene is drawn, select an appropriate geometric model detail level for each geometric object according to its saliency value, construct the three-dimensional scene with the selected models, and draw the next frame picture of the three-dimensional scene. The concrete implementation steps are as follows:
Step 101: generate N_lod geometric models of different detail levels for each geometric object A001 in the three-dimensional scene, numbered from detail level 1 to detail level N_lod; the fineness of the geometric models decreases progressively from level 1 to level N_lod; number all geometric objects A001 in the three-dimensional scene from 1 up to N_obj, where N_obj denotes the number of geometric objects A001 contained in the three-dimensional scene;
Step 102: when drawing the 1st frame three-dimensional scene picture A002 of the three-dimensional scene, select the level-1 geometric model of each geometric object A001 in the three-dimensional scene to construct the three-dimensional scene geometric model; draw the 1st frame three-dimensional scene picture A002 by ray tracing, according to the three-dimensional scene geometric model, the light source parameters of the three-dimensional scene, the material parameters of the geometric objects and the virtual camera parameters corresponding to the 1st frame; while drawing the 1st frame three-dimensional scene picture A002, record, for each pixel on the virtual pixel plane of the virtual camera, the number of the geometric object A001 to which the visible scene point of that pixel belongs; the pixels of the 1st frame three-dimensional scene picture A002 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera, so each pixel of the 1st frame three-dimensional scene picture A002 corresponds to the number of a geometric object A001; let the frame counter I_F = 1; display the I_F-th frame three-dimensional scene picture A002 on the display;
Step 103: compute the saliency map A003 of the I_F-th frame three-dimensional scene picture A002 using an image saliency detection technique; the pixels of saliency map A003 correspond one-to-one to the pixels of the I_F-th frame three-dimensional scene picture A002;
Step 104: calculate the saliency value of each geometric object A001 of the three-dimensional scene; specifically, for i = 1, 2, …, N_obj, perform the following operations on the i-th geometric object A001 of the three-dimensional scene:
Step 104-1: create a set S1 in the memory of the computer, each element of which stores one pixel saliency value; initialize S1 to the empty set;
Step 104-2: for each pixel A004 of the I_F-th frame three-dimensional scene picture A002, do the following: if the number of the geometric object A001 corresponding to pixel A004 equals i, add the value of the pixel of saliency map A003 that corresponds to pixel A004 to set S1; the value of each pixel of saliency map A003 represents the saliency value of the corresponding pixel of the I_F-th frame three-dimensional scene picture A002;
Step 104-3: if set S1 is empty, go to Step 104-5; otherwise sort the elements of set S1 in descending order and let

m = ⌈α × M⌉,

where α is a ratio coefficient taking a value in the interval (0, 1], M is the number of elements in set S1, and ⌈x⌉ denotes rounding x up to the nearest integer; let TS denote the value of the m-th element of the sorted set S1;
Step 104-4: let the saliency value of the i-th geometric object A001 equal the average of all elements of set S1 that are greater than or equal to TS; go to Step 104-6;
Step 104-5: let the saliency value of the i-th geometric object A001 equal 0;
Step 104-6: the operation on the i-th geometric object A001 of the three-dimensional scene ends;
Step 105: select an appropriate geometric model detail level for each geometric object A001 of the three-dimensional scene for drawing the (I_F+1)-th frame three-dimensional scene picture A002; specifically, for i = 1, 2, …, N_obj, perform the following operations on the i-th geometric object A001 of the three-dimensional scene:
Step 105-1: let SIL denote the saliency value of the i-th geometric object A001; if SIL equals 0, go to Step 105-2; otherwise compute the subscript k that satisfies the condition a_k × SH < SIL ≤ a_{k+1} × SH, where k is an integer in the interval [0, N_lod - 1], and {a_m | m = 0, 1, 2, …, N_lod} is a sequence of division-point coefficients used to divide the saliency value range of the geometric objects A001 into N_lod subintervals, with

0 = a_0 < a_1 < a_2 < … < a_{N_lod} = 1,

and SH is the maximum of the saliency values of all geometric objects A001 of the three-dimensional scene; let LEVEL = N_lod - k; go to Step 105-3;
Step 105-2: let LEVEL = N_lod;
Step 105-3: when constructing the three-dimensional scene geometric model used for drawing the (I_F+1)-th frame three-dimensional scene picture A002, select the geometric model of the i-th geometric object A001 at detail level LEVEL as the geometric data representing the i-th geometric object A001;
Step 105-4: the operation on the i-th geometric object A001 of the three-dimensional scene ends;
Step 106: draw the (I_F+1)-th frame three-dimensional scene picture A002 by ray tracing, according to the three-dimensional scene geometric model, the light source parameters, the material parameters and the virtual camera parameters corresponding to the (I_F+1)-th frame three-dimensional scene picture A002; while drawing the (I_F+1)-th frame three-dimensional scene picture A002, record, for each pixel on the virtual pixel plane of the virtual camera, the number of the geometric object A001 to which the visible scene point of that pixel belongs; the pixels of the (I_F+1)-th frame three-dimensional scene picture A002 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera, so each pixel of the (I_F+1)-th frame three-dimensional scene picture A002 corresponds to the number of a geometric object A001;
Step 107: display the (I_F+1)-th frame three-dimensional scene picture A002 on the display; if I_F < NUM, let I_F = I_F + 1 and go to Step 103; otherwise end the drawing operation.
In ray tracing, a ray emitted from the viewpoint through each pixel on the virtual pixel plane of the virtual camera is intersected with the geometric objects of the three-dimensional scene; the intersection point closest to the viewpoint is the visible scene point of that pixel, and the visible scene point belongs to a geometric object. The visible scene points correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera. Therefore, each pixel on the virtual pixel plane corresponds to a geometric object.
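To make this correspondence concrete (and only as an illustration, not as the patent's renderer), the sketch below casts one primary ray per pixel against a toy scene of two spheres and stores, for each pixel, the number of the nearest geometric object hit, producing the kind of object-number buffer used in Steps 102 and 106; all camera and scene values are arbitrary assumptions.

```python
# Sketch of the object-number (id) buffer built from primary rays.
# Each pixel stores the number of the geometric object whose visible scene
# point is seen through that pixel, or 0 if no object is hit.
import numpy as np

def sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection in front of the ray, or None on a miss."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

def build_id_buffer(width, height, spheres):
    """spheres: list of (object_number, center, radius); camera at the origin looking down -z."""
    id_buffer = np.zeros((height, width), dtype=np.int32)
    eye = np.array([0.0, 0.0, 0.0])
    for y in range(height):
        for x in range(width):
            # Pixel (x, y) mapped to a point on a virtual pixel plane at z = -1.
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            d = np.array([px, py, -1.0])
            d /= np.linalg.norm(d)
            nearest_t, nearest_id = np.inf, 0
            for obj_id, center, radius in spheres:
                t = sphere_hit(eye, d, np.array(center), radius)
                if t is not None and t < nearest_t:   # keep the hit closest to the viewpoint
                    nearest_t, nearest_id = t, obj_id
            id_buffer[y, x] = nearest_id
    return id_buffer

# Assumed toy scene: two spheres numbered 1 and 2.
ids = build_id_buffer(64, 64, [(1, (-0.4, 0.0, -3.0), 0.6), (2, (0.5, 0.2, -4.0), 0.8)])
print(np.unique(ids))   # object numbers visible in the frame
```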
In this embodiment, N_lod = 4, NUM = 50, a_1 = 0.15, a_2 = 0.3, a_3 = 0.6, and α = 0.7.
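With these values the division points are a_0 = 0, a_1 = 0.15, a_2 = 0.3, a_3 = 0.6 and a_4 = 1 (under the reconstruction above), so, for example, an object whose saliency is 40% of SH falls into the subinterval (a_2 × SH, a_3 × SH] and is drawn at detail level N_lod - 2 = 2. The short snippet below, with assumed saliency ratios, tabulates this mapping.

```python
# Worked example with the embodiment's parameters: N_lod = 4, a = [0, 0.15, 0.3, 0.6, 1].
N_LOD = 4
a = [0.0, 0.15, 0.3, 0.6, 1.0]
SH = 1.0                                   # assume saliency values normalized so that SH = 1

for sil in [0.0, 0.1, 0.25, 0.4, 0.8]:     # assumed object saliency values
    if sil == 0:
        level = N_LOD                      # object not visible: coarsest model
    else:
        k = next(k for k in range(N_LOD) if a[k] * SH < sil <= a[k + 1] * SH)
        level = N_LOD - k
    print(f"SIL = {sil:.2f} -> detail level {level}")
```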

Claims (1)

1. A 3D geometric model detail level adaptive selection method based on object visual saliency, characterized by comprising the following steps: generate N_lod geometric models of different detail levels for each geometric object in a three-dimensional scene; in the process of continuously drawing NUM+1 frames of three-dimensional scene pictures, calculate the saliency map of the current frame picture of the three-dimensional scene using an image saliency detection technique, and calculate the saliency value of each geometric object according to the saliency map of the current frame picture, NUM being a positive integer; when the next frame picture of the three-dimensional scene is drawn, select an appropriate geometric model detail level for each geometric object according to its saliency value, construct the three-dimensional scene with the selected models, and draw the next frame picture of the three-dimensional scene; the concrete implementation steps are as follows:
Step 101: generate N_lod geometric models of different detail levels for each geometric object A001 in the three-dimensional scene, numbered from detail level 1 to detail level N_lod; the fineness of the geometric models decreases progressively from level 1 to level N_lod; number all geometric objects A001 in the three-dimensional scene from 1 up to N_obj, where N_obj denotes the number of geometric objects A001 contained in the three-dimensional scene;
Step 102: when drawing the 1st frame three-dimensional scene picture A002 of the three-dimensional scene, select the level-1 geometric model of each geometric object A001 in the three-dimensional scene to construct the three-dimensional scene geometric model; draw the 1st frame three-dimensional scene picture A002 by ray tracing, according to the three-dimensional scene geometric model, the light source parameters of the three-dimensional scene, the material parameters of the geometric objects and the virtual camera parameters corresponding to the 1st frame; while drawing the 1st frame three-dimensional scene picture A002, record, for each pixel on the virtual pixel plane of the virtual camera, the number of the geometric object A001 to which the visible scene point of that pixel belongs; the pixels of the 1st frame three-dimensional scene picture A002 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera, so each pixel of the 1st frame three-dimensional scene picture A002 corresponds to the number of a geometric object A001; let the frame counter I_F = 1; display the I_F-th frame three-dimensional scene picture A002 on the display;
Step 103: compute the saliency map A003 of the I_F-th frame three-dimensional scene picture A002 using an image saliency detection technique; the pixels of saliency map A003 correspond one-to-one to the pixels of the I_F-th frame three-dimensional scene picture A002;
Step 104: calculate the saliency value of each geometric object A001 of the three-dimensional scene; specifically, for i = 1, 2, …, N_obj, perform the following operations on the i-th geometric object A001 of the three-dimensional scene:
Step 104-1: create a set S1 in the memory of the computer, each element of which stores one pixel saliency value; initialize S1 to the empty set;
Step 104-2: for each pixel A004 of the I_F-th frame three-dimensional scene picture A002, do the following: if the number of the geometric object A001 corresponding to pixel A004 equals i, add the value of the pixel of saliency map A003 that corresponds to pixel A004 to set S1; the value of each pixel of saliency map A003 represents the saliency value of the corresponding pixel of the I_F-th frame three-dimensional scene picture A002;
Step 104-3: if set S1 is empty, go to Step 104-5; otherwise sort the elements of set S1 in descending order and let

m = ⌈α × M⌉,

where α is a ratio coefficient taking a value in the interval (0, 1], M is the number of elements in set S1, and ⌈x⌉ denotes rounding x up to the nearest integer; let TS denote the value of the m-th element of the sorted set S1;
Step 104-4: let the saliency value of the i-th geometric object A001 equal the average of all elements of set S1 that are greater than or equal to TS; go to Step 104-6;
Step 104-5: let the saliency value of the i-th geometric object A001 equal 0;
Step 104-6: the operation on the i-th geometric object A001 of the three-dimensional scene ends;
Step 105: select an appropriate geometric model detail level for each geometric object A001 of the three-dimensional scene for drawing the (I_F+1)-th frame three-dimensional scene picture A002; specifically, for i = 1, 2, …, N_obj, perform the following operations on the i-th geometric object A001 of the three-dimensional scene:
Step 105-1: let SIL denote the saliency value of the i-th geometric object A001; if SIL equals 0, go to Step 105-2; otherwise compute the subscript k that satisfies the condition a_k × SH < SIL ≤ a_{k+1} × SH, where k is an integer in the interval [0, N_lod - 1], and {a_m | m = 0, 1, 2, …, N_lod} is a sequence of division-point coefficients used to divide the saliency value range of the geometric objects A001 into N_lod subintervals, with

0 = a_0 < a_1 < a_2 < … < a_{N_lod} = 1,

and SH is the maximum of the saliency values of all geometric objects A001 of the three-dimensional scene; let LEVEL = N_lod - k; go to Step 105-3;
Step 105-2: let LEVEL = N_lod;
Step 105-3: when constructing the three-dimensional scene geometric model used for drawing the (I_F+1)-th frame three-dimensional scene picture A002, select the geometric model of the i-th geometric object A001 at detail level LEVEL as the geometric data representing the i-th geometric object A001;
Step 105-4: the operation on the i-th geometric object A001 of the three-dimensional scene ends;
Step 106: draw the (I_F+1)-th frame three-dimensional scene picture A002 by ray tracing, according to the three-dimensional scene geometric model, the light source parameters, the material parameters and the virtual camera parameters corresponding to the (I_F+1)-th frame three-dimensional scene picture A002; while drawing the (I_F+1)-th frame three-dimensional scene picture A002, record, for each pixel on the virtual pixel plane of the virtual camera, the number of the geometric object A001 to which the visible scene point of that pixel belongs; the pixels of the (I_F+1)-th frame three-dimensional scene picture A002 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera, so each pixel of the (I_F+1)-th frame three-dimensional scene picture A002 corresponds to the number of a geometric object A001;
Step 107: display the (I_F+1)-th frame three-dimensional scene picture A002 on the display; if I_F < NUM, let I_F = I_F + 1 and go to Step 103; otherwise end the drawing operation.
Publications (2)

CN113706696A, published 2021-11-26; CN113706696B, granted and published 2023-09-19

Family ID: 78657127 (single family application in China: CN202111023516.4A, granted as CN113706696B)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903146A (en) * 2012-09-13 2013-01-30 中国科学院自动化研究所 Image processing method for scene drawing
US20170004648A1 (en) * 2015-06-30 2017-01-05 Ariadne's Thread (Usa), Inc. (Dba Immerex) Variable resolution virtual reality display system
CN105931298A (en) * 2016-04-13 2016-09-07 山东大学 Automatic selection method for low relief position based on visual significance
CN110728741A (en) * 2019-10-11 2020-01-24 长春理工大学 Surface light source illumination three-dimensional scene picture rendering method based on multi-detail level model

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310226A (en) * 2023-05-16 2023-06-23 深圳大学 Three-dimensional object hierarchical model generation method, device, equipment and storage medium
CN116310226B (en) * 2023-05-16 2023-08-18 深圳大学 Three-dimensional object hierarchical model generation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113706696B (en) 2023-09-19


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant