CN110276823B - Ray tracing based real-time interactive integrated imaging generation method and system


Info

Publication number: CN110276823B (application published as CN110276823A)
Application number: CN201910438381.4A
Authority: CN (China)
Prior art keywords: plane, light field, integrated light, visual model, virtual
Original language: Chinese (zh)
Legal status: Active (granted)
Inventors: 蒋晓瑜, 秦志强, 张文阁, 严志强
Assignee (original and current): Academy of Armored Forces of PLA

Classifications

    • G06T3/604: Geometric image transformations in the plane of the image; rotation of whole images or parts thereof using coordinate rotation digital computer [CORDIC] devices
    • G06T15/06: 3D [three-dimensional] image rendering; ray tracing
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/20: Manipulating 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a ray-tracing-based, real-time interactive integrated imaging generation method and system in the technical field of computer-generated integrated imaging. The method first reads a system parameter file and loads a virtual scene three-dimensional model and virtual scene texture files, then establishes a hierarchical bounding box acceleration structure and an integrated light field visual model from these files and models. It then judges whether an interactive instruction has been received: if so, a ray is generated for each pixel according to the modified attribute values of the integrated light field visual model; if not, rays are generated directly from the current attribute values of the integrated light field visual model. Finally, all rays are rendered in parallel using the hierarchical bounding box acceleration structure and the ray tracing technique, and a unit image array (EIA) is generated and displayed. The method and system eliminate the acquisition process of the virtual camera model, simplify the calculation flow, and improve the real-time performance of the interaction function.

Description

Ray tracing based real-time interactive integrated imaging generation method and system
Technical Field
The invention relates to the technical field of computer-generated integrated imaging, and in particular to a ray-tracing-based integrated imaging generation method and system capable of real-time interaction.
Background
Existing computational integrated imaging generation methods proceed as follows: a virtual camera model is established at each lens or each viewpoint, each virtual camera then collects the light field information of one viewing angle or direction (a perspective view, or certain pixels of one), and a unit image array (EIA) is obtained through data-processing steps such as sub-sampling and pixel mapping. Although such methods can produce the unit image array, the calculation is time-consuming, the viewing performance of the integrated imaging display is limited, and interactive functionality is difficult to realize.
Specifically, methods such as Point Retracing Rendering (PRR), Multiple Viewpoint Rendering (MVR), Parallel Group Rendering (PGR), and Viewpoint Vector Rendering (VVR) obtain multiple perspective views through virtual cameras in commercial three-dimensional software and fill the pixels of these views into the EIA according to a fixed mapping relationship; however, acquiring the perspective views produces a great deal of redundant information, and the pixel-mapping step is time-consuming. Researchers subsequently wrote special-purpose virtual cameras and used parallel processing to increase computation speed. The Image Space Parallel Processing (ISPP) optimization algorithm models each hexagonal lens as a virtual camera, assigns one GPU thread to each pixel, optimizes the parallel algorithm framework on top of the basic ISPP algorithm, and generates the EIA in real time. The Multiple Ray Cluster Rendering (MRCR) method establishes, for each viewpoint, a ray-cluster model similar to a virtual camera according to the observer's position, computes an integrated light field suited to viewing at the current distance, and realizes real-time interactive display through parallel computation. Beijing University of Posts and Telecommunications proposed a Backward Ray Tracing CGII method that establishes a virtual camera for each viewpoint, emits a ray for each pixel through the virtual camera, renders the rays with the backward ray tracing technique, and realizes real-time display with parallel computing. Although these algorithms avoid acquiring redundant information, they still compute the EIA indirectly from the acquisition results of virtual camera models rather than directly from an integrated light field model; when an interactive display function is added, the technical difficulty increases, and processes such as continually recomputing the spatial transformation matrices of the virtual cameras add interactive computation complexity and impair the real-time performance of the interaction.
Disclosure of Invention
The invention aims to provide a ray-tracing-based integrated imaging generation method and system capable of real-time interaction, which eliminate the acquisition process of the virtual camera model, simplify the calculation flow, and improve the real-time performance of the interaction function.
In order to achieve the purpose, the invention provides the following scheme:
an integrated imaging generation method based on ray tracing and real-time interaction comprises the following steps:
reading a system parameter file by using an open-source function library, and loading a virtual scene three-dimensional model and a virtual scene texture file by using the open-source function library; the system parameter file is stored in a parameter class ConfigXML, and the virtual scene three-dimensional model and the virtual scene texture file are stored in a model data structure MeshBuffer;
establishing a hierarchical bounding box acceleration structure according to the data in the model data structure MeshBuffer;
establishing an integrated light field visual model according to the data in the parameter class ConfigXML; the integrated light field visual model comprises, in order, the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, and the ray emission plane; the integrated light field visual model is implemented as the class ILFR, and its attribute values comprise the data structure EIABuffer, world coordinate data, plane distance data, the pixel size of the LCD display screen, and the position data of the lens optical centers; the data structure EIABuffer is a two-dimensional structure of the same size as the unit image array, used to store the color value of each pixel in the unit image array; the world coordinate data comprise the world coordinates of the point Lookat, the point O_r, and the vector up, where the point Lookat is the origin of the integrated light field visual model coordinate system, the point O_r is the origin of the world coordinate system, and the vector up is the top vector of the integrated light field visual model coordinate system; the plane distance data comprise the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, and the distance between the plane of the three-dimensional object center and the ray emission plane; the position data of a lens optical center is the position, within the unit image array, of the pixel onto which the lens optical center projects vertically;
judging whether an interactive instruction is received or not to obtain a first judgment result; the interactive instruction comprises a keyboard interactive instruction and a mouse interactive instruction;
if the first judgment result indicates that an interactive instruction is received, modifying the attribute value of the integrated light field visual model according to the interactive instruction, and generating light rays corresponding to each pixel according to the modified attribute value of the integrated light field visual model;
if the first judgment result indicates that an interactive instruction is not received, generating light rays corresponding to each pixel according to the attribute value of the integrated light field visual model;
rendering all the rays in parallel by adopting the hierarchical bounding box acceleration structure and a ray tracing technology to generate a unit image array;
and drawing and displaying the unit image array on a display screen by adopting a double-cache technology.
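Taken together, these steps form a per-frame loop. The following C++ sketch illustrates that control flow; apart from ConfigXML, MeshBuffer, ILFR, and EIABuffer, which are named in this disclosure, every identifier is a hypothetical placeholder rather than part of the invention:

    // Hedged sketch of the overall method flow; declarations stand in for the
    // modules described above, so this compiles but is not a full implementation.
    struct ConfigXML {};  struct MeshBuffer {};  struct Bvh {};  struct Event {};
    struct ILFR { /* attribute values + EIABuffer */ };
    ConfigXML  readSystemParameters();           // input: xml + csv parameter files
    MeshBuffer loadSceneAndTextures();           // input: ply/obj/txt + ppm/hdr/jpg
    Bvh  buildHierarchicalBoundingBoxes(const MeshBuffer&);
    ILFR buildIntegratedLightFieldModel(const ConfigXML&);
    bool pollInteraction(Event*);                // keyboard / mouse instruction?
    void applyInteraction(ILFR&, const Event&);  // modify model attribute values
    void generateRays(const ILFR&);              // one ray per pixel, one thread per ray
    void renderRays(const Bvh&, ILFR&);          // parallel ray tracing fills EIABuffer
    void displayDoubleBuffered(const ILFR&);     // draw the unit image array

    void runMethod() {
        ConfigXML  cfg   = readSystemParameters();
        MeshBuffer scene = loadSceneAndTextures();
        Bvh  bvh   = buildHierarchicalBoundingBoxes(scene);
        ILFR model = buildIntegratedLightFieldModel(cfg);
        for (;;) {
            Event ev;
            if (pollInteraction(&ev)) applyInteraction(model, ev);
            generateRays(model);
            renderRays(bvh, model);
            displayDoubleBuffered(model);
        }
    }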
Optionally, the system parameter file includes an xml file and a csv file; the csv file comprises position data of centers of all lenses in the lens array, and the xml file comprises the pixel size of an LCD display screen, the distance between a unit image array and the lens array, the focal length of the lens, the width of the LCD display screen, the width numerical value of the unit image array in a virtual space, the horizontal resolution of the unit image array, the vertical resolution of the unit image array, the number of the lenses in the lens array, the pixel number of each unit image array and the file name of a lens center position data file;
the files in the virtual scene three-dimensional model are a ply file, an obj file and a txt file, and the virtual scene texture files are a ppm file, an hdr file and a jpg file.
Optionally, the establishing an integrated light field visual model according to the data in the parameter type ConfigXML specifically includes:
adopting a right-handed Cartesian coordinate system and establishing the world coordinate system with axes x_w, y_w, z_w; wherein the point O_r is the origin of the world coordinate system;

setting the point Lookat as the origin of the integrated light field visual model coordinate system, setting the vector from the point O_r to the point Lookat as the z_c axis of that coordinate system, and establishing the integrated light field visual model coordinate system with axes x_c, y_c, z_c; wherein the point Lookat is the volume center point of the three-dimensional virtual object, and the vector up is the top vector of the integrated light field visual model coordinate system, used to construct its unit orthonormal basis;

establishing the integrated light field visual model from the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, the ray emission plane, and the integrated light field visual model coordinate system; wherein all four planes are perpendicular to the z_c axis of the integrated light field visual model coordinate system; the ray emission plane intersects the z_c axis at the point O_r; the plane of the three-dimensional object center intersects the z_c axis at a point D, and the center point of the virtual lens array coincides with the point D; the positional relationship between the plane of the virtual unit image array and the plane of the virtual lens array is consistent with that between the unit image array and the lens array in the physical reproduction system.
Optionally, the interaction instructions include a rotation instruction, a movement instruction, a zoom instruction, and a display fine-adjustment instruction; wherein:

the rotation instruction is realized by monitoring left-button mouse dragging and resetting the world coordinates of the point O_r in the integrated light field visual model;

the movement instruction is realized by monitoring right-button mouse dragging and resetting the world coordinates of the point Lookat and the point O_r in the integrated light field visual model;

the zoom instruction is realized by monitoring mouse-wheel dragging and proportionally resetting the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, and the pixel size of the LCD display screen in the integrated light field visual model;

the display fine-adjustment instruction is realized by using keyboard keys to independently modify the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, the distance between the plane of the three-dimensional object center and the ray emission plane, and the pixel size of the LCD display screen in the integrated light field visual model.
Optionally, the generating light corresponding to each pixel specifically includes:
assigning a thread to each pixel;
calculating the coordinates and direction vectors of the emitting points of the light rays corresponding to the current pixels according to the thread corresponding to each pixel and the attribute values of the integrated light field visual model;
and generating the light corresponding to each pixel according to the coordinates of the emitting point of the light and the direction vector.
Optionally, the rendering all the light rays in parallel by using the hierarchical bounding box acceleration structure and the ray tracing technology to generate the unit image array specifically includes:
step S1: calculating the radiance value of each ray in parallel using the hierarchical bounding box acceleration structure and the open-source ray tracing engine OptiX, and storing the radiance values in the data structure EIABuffer of the integrated light field visual model;

step S2: once the radiance value of the ray corresponding to every pixel in the unit image array has been calculated, the data in the data structure EIABuffer constitute one frame of the unit image array; copying all data in the data structure EIABuffer to an idle OpenGL cache in the integrated light field visual model and refreshing the data structure EIABuffer;

step S3: repeating steps S1 to S2 until the unit image arrays of all frames are generated.
Optionally, the drawing and displaying the unit image array on the display screen by using a double-cache technology specifically includes:
and drawing and displaying the unit image array of each frame on an LCD display screen by using a double-cache technology in an open source OpenGL graphic library.
An integrated imaging generation system based on ray tracing and real-time interactable, comprising:
the initialization module is used for reading the system parameter file by using the open source function library and loading the virtual scene three-dimensional model and the virtual scene texture file by using the open source function library; the system parameter file is stored in a parameter class ConfigXML, and the virtual scene three-dimensional model and the virtual scene texture file are stored in a model data structure MeshBuffer;
the hierarchical bounding box acceleration structure establishing module is used for establishing a hierarchical bounding box acceleration structure according to the data in the model data structure MeshBuffer;
the integrated light field visual model establishing module is used for establishing an integrated light field visual model according to the data in the parameter class ConfigXML; the integrated light field visual model comprises, in order, the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, and the ray emission plane; the integrated light field visual model is implemented as the class ILFR, and its attribute values comprise the data structure EIABuffer, world coordinate data, plane distance data, the pixel size of the LCD display screen, and the position data of the lens optical centers; the data structure EIABuffer is a two-dimensional structure of the same size as the unit image array, used to store the color value of each pixel in the unit image array; the world coordinate data comprise the world coordinates of the point Lookat, the point O_r, and the vector up, where the point Lookat is the origin of the integrated light field visual model coordinate system, the point O_r is the origin of the world coordinate system, and the vector up is the top vector of the integrated light field visual model coordinate system; the plane distance data comprise the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, and the distance between the plane of the three-dimensional object center and the ray emission plane; the position data of a lens optical center is the position, within the unit image array, of the pixel onto which the lens optical center projects vertically;
the first judgment result obtaining module is used for judging whether the interactive instruction is received or not to obtain a first judgment result; the interactive instruction comprises a keyboard interactive instruction and a mouse interactive instruction;
a first light ray generation module, configured to modify, according to the interaction instruction, an attribute value of the integrated light field visual model when the first determination result indicates that an interaction instruction is received, and generate, according to the modified attribute value of the integrated light field visual model, a light ray corresponding to each pixel;
the second light ray generation module is used for generating light rays corresponding to each pixel according to the attribute value of the integrated light field visual model when the first judgment result shows that the interactive instruction is not received;
the unit image array generating module is used for rendering all the rays in parallel by adopting the hierarchical bounding box acceleration structure and the ray tracing technology to generate a unit image array;
and the display module is used for drawing and displaying the unit image array on a display screen by adopting a double-cache technology.
Optionally, the integrated light field visual model building module specifically includes:
a world coordinate system establishing unit for adopting a right-handed Cartesian coordinate system and establishing the world coordinate system with axes x_w, y_w, z_w; wherein the point O_r is the origin of the world coordinate system;

an integrated light field visual model coordinate system establishing unit for setting the point Lookat as the origin of the integrated light field visual model coordinate system, setting the vector from the point O_r to the point Lookat as the z_c axis of that coordinate system, and establishing the integrated light field visual model coordinate system with axes x_c, y_c, z_c; wherein the point Lookat is the volume center point of the three-dimensional virtual object, and the vector up is the top vector of the integrated light field visual model coordinate system, used to construct its unit orthonormal basis;

an integrated light field visual model establishing unit for establishing the integrated light field visual model from the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, the ray emission plane, and the integrated light field visual model coordinate system; wherein all four planes are perpendicular to the z_c axis of the integrated light field visual model coordinate system; the ray emission plane intersects the z_c axis at the point O_r; the plane of the three-dimensional object center intersects the z_c axis at a point D, and the center point of the virtual lens array coincides with the point D; the positional relationship between the plane of the virtual unit image array and the plane of the virtual lens array is consistent with that between the unit image array and the lens array in the physical reproduction system.
Optionally, the display module specifically includes:
and the display unit is used for drawing and displaying the unit image array of each frame on the LCD display screen by using a double-cache technology in an open source OpenGL graphic library.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the virtual scene acceleration structure is constructed in the preprocessing process, the light ray tracing rendering efficiency in the rendering process is improved, the integrated light field visual model is constructed at the same time, the virtual camera array in the traditional algorithm is replaced, the calculation processes of generating the virtual camera array and calculating the virtual camera conversion matrix in the ISPP optimization algorithm are omitted, the simplified calculation process is simplified, and the real-time performance of the interaction function is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description are obviously only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flowchart of an integrated imaging generation method based on ray tracing and real-time interaction according to an embodiment of the present invention;
FIG. 2 is a diagram of a basic structure of an integrated light field viewing model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of ray equation calculations according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an integrated image generation system based on ray tracing and capable of real-time interaction according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Example one
As shown in fig. 1, the method for generating an integrated image based on ray tracing and real-time interaction provided by this embodiment includes:
step 101: reading a system parameter file by using an open source function library, and loading a virtual scene three-dimensional model and a virtual scene texture file by using the open source function library; the system parameter file is stored in a parameter class ConfigXML, and the virtual scene three-dimensional model and the virtual scene texture file are stored in a model data structure MeshBuffer.
The system parameter file comprises an xml file and a csv file; the csv file comprises position data of centers of all lenses in the lens array, and the xml file comprises the pixel size of an LCD display screen, the distance between a unit image array and the lens array, the focal length of the lens, the width of the LCD display screen, the width numerical value of the unit image array in a virtual space, the horizontal resolution of the unit image array, the vertical resolution of the unit image array, the number of the lenses in the lens array, the pixel number of each unit image array and the file name of a lens center position data file; the formats of the files in the virtual scene three-dimensional model are a ply file, an obj file and a txt file, and the formats of the virtual scene texture files are a ppm file, an hdr file and a jpg file.
Step 102: and establishing a hierarchical bounding box acceleration structure according to the data in the model data structure MeshBuffer.
Step 103: establishing an integrated light field visual model according to the data in the parameter class ConfigXML; the integrated light field visual model comprises, in order, the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, and the ray emission plane. The integrated light field visual model is implemented as the class ILFR, and its attribute values comprise the data structure EIABuffer, world coordinate data, plane distance data, the pixel size of the LCD display screen, and the position data of the lens optical centers. The data structure EIABuffer is a two-dimensional structure of the same size as the unit image array, used to store the color value of each pixel in the unit image array. The world coordinate data comprise the world coordinates of the point Lookat, the point O_r, and the vector up, where the point Lookat is the origin of the integrated light field visual model coordinate system, the point O_r is the origin of the world coordinate system, and the vector up is the top vector of the integrated light field visual model coordinate system. The plane distance data comprise the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, and the distance between the plane of the three-dimensional object center and the ray emission plane. The position data of a lens optical center is the position, within the unit image array, of the pixel onto which the lens optical center projects vertically.
Step 104: judging whether an interactive instruction is received or not to obtain a first judgment result; the interactive instructions comprise keyboard interactive instructions and mouse interactive instructions; if the first determination result indicates that an interactive instruction is received, executing step 105; if the first determination result indicates that the interactive command is not received, step 106 is executed.
Step 105: and modifying the attribute value of the integrated light field visual model according to the interactive instruction, and generating the light ray corresponding to each pixel according to the modified attribute value of the integrated light field visual model.
Step 106: and generating light rays corresponding to each pixel according to the attribute value of the integrated light field visual model.
Step 107: and rendering all the rays in parallel by adopting the hierarchical bounding box acceleration structure and the ray tracing technology to generate a unit image array.
Step 108: and drawing and displaying the unit image array on a display screen by adopting a double-cache technology.
Step 103 specifically comprises:
A right-handed Cartesian coordinate system is adopted and the world coordinate system with axes x_w, y_w, z_w is established; the point O_r is the origin of the world coordinate system.

The point Lookat is set as the origin of the integrated light field visual model coordinate system, the vector from the point O_r to the point Lookat is set as the z_c axis of that coordinate system, and the integrated light field visual model coordinate system with axes x_c, y_c, z_c is established. The point Lookat is the volume center point of the three-dimensional virtual object, and the vector up is the top vector of the integrated light field visual model coordinate system, used to construct its unit orthonormal basis.

The integrated light field visual model is established from the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, the ray emission plane, and the integrated light field visual model coordinate system. All four planes are perpendicular to the z_c axis of the integrated light field visual model coordinate system; the ray emission plane intersects the z_c axis at the point O_r; the plane of the three-dimensional object center intersects the z_c axis at a point D, and the center point of the virtual lens array coincides with the point D; the positional relationship between the plane of the virtual unit image array and the plane of the virtual lens array is consistent with that between the unit image array and the lens array in the physical reproduction system.

The interactive instructions in step 104 include a rotation instruction, a movement instruction, a zoom instruction, and a display fine-adjustment instruction. The rotation instruction is realized by monitoring left-button mouse dragging and resetting the world coordinates of the point O_r in the integrated light field visual model. The movement instruction is realized by monitoring right-button mouse dragging and resetting the world coordinates of the point Lookat and the point O_r in the integrated light field visual model. The zoom instruction is realized by monitoring mouse-wheel dragging and proportionally resetting the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, and the pixel size of the LCD display screen in the integrated light field visual model. The display fine-adjustment instruction is realized by using keyboard keys to independently modify the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, the distance between the plane of the three-dimensional object center and the ray emission plane, and the pixel size of the LCD display screen in the integrated light field visual model.
The step 105 and the step 106 of generating the light corresponding to each pixel specifically include:
one thread is assigned to each pixel.
And calculating the coordinates and the direction vectors of the emitting points of the light rays corresponding to the current pixels according to the thread corresponding to each pixel and the attribute values of the integrated light field visual model.
And generating the light corresponding to each pixel according to the coordinates of the emitting point of the light and the direction vector.
Step 107 specifically includes:
step S1: and calculating the radiance value of each ray in parallel by using an engine Optix in the hierarchical bounding box acceleration structure and the open-source ray tracing technology, and storing the radiance value in a data structure EIABuffer of the integrated light field visual model.
Step S2: until the radiance value of the light corresponding to each pixel in the unit image array is calculated, all data in the data structure EIABuffer is a frame unit image array, copying all data in the data structure EIABuffer to an idle cache of OpenGL in an integrated light field visual model, and refreshing the data structure EIABuffer.
And step S3: repeating steps S1 to S2 until the unit image arrays of all frames are generated.
Step 108 specifically includes:
and drawing and displaying the unit image array of each frame on an LCD display screen by using a double-cache technology in an open source OpenGL graphic library.
Example two
The integrated imaging generation method provided by the embodiment mainly comprises an input module, a preprocessing module, an interaction module, a rendering module and a display module.
1. Input module
Reading a system parameter file by using an open source function library, wherein the system parameter file comprises an xml file and a csv file.
Wherein the csv file contains position data of the centers of all lenses in the lens array.
The main data contained in the xml file are as follows:
pixelSize: pixel size of the LCD display screen (mm);
Lens_EIA_dist: distance between the EIA and the lens array (mm);
lens: lens focal length (mm);
LCDw: width of the LCD display screen (mm);
VLCDw: width value of the EIA in virtual space;
film_horRes: EIA horizontal resolution;
film_verRes: EIA vertical resolution;
LensCount: number of lenses in the lens array;
parallelx: number of pixels per EIA;
LensCenter_csv: file name of the lens center position data file;
position data of the lens optical centers: the position, within the unit image array, of the pixel onto which each lens optical center projects vertically.
All data read from the two files are stored in the parameter class ConfigXML.
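For illustration only, a parameter file carrying the fields above might look like this; the disclosure fixes the field meanings but not the exact tag spelling, so the element names below are hypothetical:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical layout; values are examples only. -->
    <config>
        <pixelSize>0.1245</pixelSize>            <!-- LCD pixel size (mm) -->
        <Lens_EIA_dist>3.0</Lens_EIA_dist>       <!-- EIA-to-lens-array distance (mm) -->
        <lens>3.3</lens>                         <!-- lens focal length (mm) -->
        <LCDw>346.0</LCDw>                       <!-- LCD display width (mm) -->
        <VLCDw>10.0</VLCDw>                      <!-- EIA width in virtual space -->
        <film_horRes>2560</film_horRes>
        <film_verRes>1600</film_verRes>
        <LensCount>1024</LensCount>
        <parallelx>80</parallelx>                <!-- pixels per EIA -->
        <LensCenter_csv>lens_centers.csv</LensCenter_csv>
    </config>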
(II) The virtual scene three-dimensional model and the virtual scene texture files are loaded using the open-source function library. The three-dimensional model files may be ply, obj, or txt files, and the texture files may be ppm, hdr, or jpg files. These data are stored in the model data structure MeshBuffer.
2. Pre-processing module
(I) A hierarchical bounding box acceleration structure (GeometryGroup class) is established from the data in the model data structure MeshBuffer using an open-source function library, for use in the subsequent ray tracing calculation.
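For concreteness, a hedged sketch of this step against the pre-RTX OptiX 6.x host API is given below; the variable meshInstance is a hypothetical stand-in, and the geometry setup (buffers, material, intersection program built from MeshBuffer) is elided:

    #include <optixu/optixpp_namespace.h>

    // Sketch: building the hierarchical bounding box (BVH) acceleration structure.
    optix::Context context = optix::Context::create();
    optix::GeometryGroup group = context->createGeometryGroup();
    group->setAcceleration(context->createAcceleration("Trbvh")); // BVH builder
    group->addChild(meshInstance);      // optix::GeometryInstance built from MeshBuffer
    context["top_object"]->set(group);  // scene root traversed during ray tracing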
And (II) establishing an integrated light field visual model according to data in the parameter class ConfigXML, wherein the basic structure of the integrated light field visual model is shown in FIG. 2.
The integrated light field visual model comprises the EIAP (plane of the virtual EIA), the LAP (plane of the virtual lens array), the CDP (plane of the three-dimensional object center), and the ROP (ray emission plane). A right-handed Cartesian coordinate system is adopted and a world coordinate system with axes x_w, y_w, z_w is established, together with an integrated light field visual model coordinate system (the "inner space" for short) with axes x_c, y_c, z_c. The point Lookat is the origin of the inner space (by default the volume center of the three-dimensional virtual object is taken as the point Lookat, to ensure that the generated EIA provides a basic initial display effect for the integrated imaging physical reproduction system); the point O_r (origin of the world coordinate system) is p unit lengths from the point Lookat (this unit length is used for all inner-space values); the vector from the point O_r to the point Lookat is the z_c axis of the inner space; and the vector up is the top vector of the inner space (used to construct the unit orthonormal basis of the inner space). The planes EIAP, LAP, CDP, and ROP are all perpendicular to the z_c axis of the inner space, and the distances between successive planes are denoted g, h, and p respectively. The plane ROP intersects the z_c axis at the point O_r, and the plane CDP intersects the z_c axis at a point D. To ensure a basic initial display and ease of computation, the center of the lens nearest the middle of the virtual lens array (the center of the rectangle) is typically placed at the point D. The positional relationship between the virtual EIA and the virtual lens array is consistent with that between the EIA and the lens array in the integrated imaging physical reproduction system. The position of a pixel A_ij in the virtual EIA is determined by its pixel location (row i, column j) and the virtual pixel size pixelSize. Ray_ij denotes the ray corresponding to pixel A_ij (in the present invention, "rays" always refers to the chief rays emitted from the plane ROP, as distinguished from shadow rays); the intersection of the plane ROP with the straight line through the center point of pixel A_ij and the corresponding lens center D_mn is the emission point of Ray_ij. Consequently the virtual scene on the left side of the plane ROP is visible while that on the right side is not, so the initially set value of p is generally larger than the size of the three-dimensional object. The point closestHit is the nearest intersection of Ray_ij with the virtual scene three-dimensional model; the line segment Ray_s represents a shadow ray in the ray tracing technique; and Light represents a point source in the virtual three-dimensional scene lighting model (point sources plus ambient light).

The integrated light field visual model generates Ray_ij for each pixel, then uses the ray tracing technique to find the nearest intersection point closestHit of Ray_ij with the virtual scene three-dimensional model, computes the radiance of Ray_ij from virtual scene information such as the illumination system (global illumination such as the point source Light), materials, and textures, assigns that radiance value to pixel A_ij, and finally obtains the EIA.

The integrated light field visual model is implemented as the class ILFR. Its main attributes comprise the EIABuffer unit (storing the color value of each pixel of the EIA), the world coordinate data of the point Lookat, the point O_r, and the vector up, and the values g, h, p, and pixelSize; the class provides a read-write interface for these attributes. The EIABuffer unit is a two-dimensional structure of the same size as the EIA, whose elements are of the custom data type RGBColor, each storing the color value of one pixel.
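Expressed in C++, the ILFR class might be laid out roughly as follows. Only the attributes named above are taken from the disclosure; the flat row-major storage and the absence of accessor methods are simplifications of this sketch:

    #include <vector>

    struct Vec3     { float x, y, z; };   // world coordinate / vector triple
    struct RGBColor { float r, g, b; };   // custom pixel color type named in the text

    // Hedged sketch of the ILFR class (integrated light field visual model).
    struct ILFR {
        std::vector<RGBColor> EIABuffer;  // EIA-sized 2D buffer, stored row-major here
        int   horRes, verRes;             // EIA resolution (film_horRes x film_verRes)
        Vec3  Lookat;                     // origin of the inner space (object volume center)
        Vec3  Or;                         // origin of the world coordinate system
        Vec3  up;                         // top vector of the inner space
        float g, h, p;                    // EIAP-LAP, LAP-CDP, CDP-ROP distances
        float pixelSize;                  // virtual pixel size
        // the read-write interface for these attributes would be declared here
    };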
3. Interaction module
Keyboard and mouse events, such as left-button dragging, right-button dragging, wheel dragging, and key presses, are monitored using the open-source OpenGL library. If no event is received, the EIA continues to be rendered according to the current attribute values of the integrated light field visual model; otherwise the attributes of the ILFR class are modified through the relevant interface before the EIA is rendered. There are four kinds of interaction: rotation, movement, zooming, and display fine adjustment.
(I) Rotation. Realized by monitoring left-button mouse dragging and resetting the world coordinates of the point O_r in the integrated light field visual model.

(II) Movement. Realized by monitoring right-button mouse dragging and resetting the world coordinates of the point Lookat and the point O_r in the integrated light field visual model.

(III) Zooming. Realized by monitoring mouse-wheel dragging and proportionally (by a factor of 0.5 to 3) resetting the values g, h, and pixelSize.

(IV) Display fine adjustment. The display effect is tuned by independently modifying the values of g, h, p, and pixelSize through keyboard keys.
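As an illustration of the event monitoring, the sketch below maps GLUT mouse callbacks onto these interactions; the ILFR setter names are hypothetical, and wheel handling follows the freeglut convention of reporting the wheel as buttons 3 and 4:

    #include <GL/glut.h>

    extern ILFR model;                        // the integrated light field visual model
    static int lastX, lastY, activeButton = -1;

    void onMouse(int button, int state, int x, int y) {
        if (button == 3 || button == 4)       // freeglut wheel "buttons": zoom
            model.scaleGHPixelSize(button == 3 ? 1.1f : 0.9f);  // hypothetical setter
        activeButton = (state == GLUT_DOWN) ? button : -1;
        lastX = x; lastY = y;
    }

    void onMotion(int x, int y) {
        int dx = x - lastX, dy = y - lastY;
        if (activeButton == GLUT_LEFT_BUTTON)
            model.rotateOr(dx, dy);           // rotation: reset world coords of O_r
        else if (activeButton == GLUT_RIGHT_BUTTON)
            model.translateLookatAndOr(dx, dy); // movement: reset Lookat and O_r
        lastX = x; lastY = y;                 // next frame renders with new attributes
    }
    // registered once with glutMouseFunc(onMouse) and glutMotionFunc(onMotion)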
4. Rendering module
(I) Ray generation. This step is completed by the integrated light field visual model, which assigns one thread to each pixel for parallel computation; each thread computes the ray equation of the ray corresponding to its pixel from the attribute values of the integrated light field visual model.

In the present invention, vectors and point coordinates are both expressed as column vectors; "*" represents the inner product of vectors, "^" represents the cross (outer) product of vectors, and normalize(m) represents the unit vector of the vector m (m cannot be a zero vector).
The ray model R_ij of the invention is a ray containing an emission point O_ij (whose coordinates are represented by the vector \(\vec{O}_{ij}\)) and a unit direction vector Direction_ij (represented by \(\vec{D}_{ij}\)). A point P on the ray (represented by \(\vec{P}\)) satisfies the following formula, namely the ray equation:

\[ \vec{P} = \vec{O}_{ij} + t\,\vec{D}_{ij} \tag{1} \]
The magnitude of the real number t determines the position of the point P. The rays are generated in the integrated light field visual model, as shown in FIG. 3. In FIG. 3, O_ij and Direction_ij are respectively the emission point and direction of the ray R_ij, and D1_mn and D2_mn are the vertical projection points of the lens center D_mn onto the plane EIAP and the plane ROP, sharing the same inner-space x and y coordinates as D_mn. The inner-space coordinates of the points O_ij, A_ij, D1_mn, and D2_mn are related as follows, by the similar triangles formed across the planes:

\[ \vec{O}_{ij} = \vec{D2}_{mn} + \frac{h+p}{g}\left(\vec{D1}_{mn} - \vec{A}_{ij}\right) \tag{2} \]
The ray emission point O_ij computed by formula (2) is expressed in inner-space coordinates; through a spatial coordinate-system transformation it can be converted into the world-coordinate emission point O_ij^w and the world-space unit direction vector Direction_ij^w of the ray (the inner-space direction being that from the emission point through the lens center toward the pixel A_ij), calculated as follows:

\[ \vec{O}^{\,w}_{ij} = \vec{P}_{Lookat} + \begin{bmatrix}\vec{U} & \vec{V} & \vec{W}\end{bmatrix}\vec{O}_{ij} \tag{3} \]

\[ \vec{D}^{\,w}_{ij} = \mathrm{normalize}\!\left(\begin{bmatrix}\vec{U} & \vec{V} & \vec{W}\end{bmatrix}\left(\vec{A}_{ij} - \vec{O}_{ij}\right)\right) \tag{4} \]
The vector set {U, V, W} is the unit orthonormal basis of the inner-space coordinate system, and can be calculated from the point Lookat, the point O_r, and the vector up as follows:

\[ \vec{W} = \mathrm{normalize}\!\left(\vec{P}_{Lookat} - \vec{P}_{O_r}\right), \qquad \vec{U} = \mathrm{normalize}\!\left(\vec{up} \times \vec{W}\right), \qquad \vec{V} = \vec{W} \times \vec{U} \tag{5} \]
In the above formula, the top vector up of the integrated light field visual model must not be parallel to the straight line through the point Lookat and the point O_r (the z axis of the inner space); otherwise the cross product of the vector up and the vector W, which yields the vector U, would be a zero vector and could not serve as part of the unit orthonormal basis of the inner space.
After the coordinates and the direction vectors of the emitting points of the rays are obtained through the calculation, a ray equation can be obtained, and the subsequent calculation of the ray radiance is facilitated.
(II) Rendering the rays with the ray tracing technique. This is realized with the open-source ray tracing engine OptiX and the virtual scene acceleration structure: the radiance value of each ray (represented as an RGB color value) is calculated in parallel and stored in the data structure EIABuffer of the integrated light field visual model. After every pixel in the EIA has been calculated, one frame of EIA data is obtained and the data structure EIABuffer is handed to the display module for display. The data structure EIABuffer is then refreshed, and the process loops back to the interaction module to calculate the EIA of the next frame.
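A hedged sketch of one frame of this step with the OptiX 6.x host API follows; the buffer names are hypothetical, and a 3-float RGB layout is assumed for the EIABuffer contents:

    #include <cstring>
    // One frame: launch the ray-generation program over the EIA resolution,
    // then hand the finished EIABuffer contents to the display module.
    context->launch(0 /* ray generation entry point */, film_horRes, film_verRes);
    const void* src = eiaBuffer->map();   // OptiX output buffer backing EIABuffer
    std::memcpy(displayPixels, src, film_horRes * film_verRes * 3 * sizeof(float));
    eiaBuffer->unmap();                   // EIABuffer is then refreshed for the next frame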
5. Display module
The EIA is drawn on the LCD display screen using the double-cache (double-buffering) technique of the open-source OpenGL graphics library, realizing the display of one frame of EIA data.
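A minimal sketch of the double-buffered display with OpenGL/GLUT is shown below; glDrawPixels is one straightforward (if legacy) way to blit the EIA, and the externs stand in for data produced by the rendering module:

    #include <GL/glut.h>

    extern const float* eiaPixels;        // current frame of EIA data (RGB floats)
    extern int film_horRes, film_verRes;

    void displayEIA() {
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawPixels(film_horRes, film_verRes, GL_RGB, GL_FLOAT, eiaPixels);
        glutSwapBuffers();                // the "double cache": swap back and front buffers
    }
    // the window is created with glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)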
The method of the present invention comprises an input module, a preprocessing module, an interaction module, a rendering module, and a display module, similar to the five modules of the ISPP optimization algorithm (the ISPP optimization algorithm being a typical representative of earlier algorithms). In contrast to a dynamic model (animation data) whose model data change constantly, the real-time performance of static model calculation and display is not affected by the computational efficiency of the input and preprocessing modules, but depends on the efficiency of the other three modules (the interaction, rendering, and display modules of the present algorithm, or the calculation, rendering, and display modules of the ISPP optimization algorithm). Therefore, the virtual scene acceleration structure constructed in the preprocessing module improves the efficiency of ray tracing rendering in the rendering module, and the Integrated Light Field Viewing Model (ILFVM) constructed at the same time replaces the virtual camera array of traditional algorithms, eliminating the steps of generating a virtual camera array and calculating virtual camera transformation matrices found in the ISPP optimization algorithm; the algorithm complexity is reduced, the flexibility is high, and the real-time performance of the interaction function is improved.
Example three
As shown in fig. 4, the present embodiment provides an integrated imaging generation system based on ray tracing and capable of interacting in real time, including:
the initialization module 100 is configured to read a system parameter file using an open-source function library, and load a virtual scene three-dimensional model and a virtual scene texture file using the open-source function library; the system parameter file is stored in a parameter class ConfigXML, and the virtual scene three-dimensional model and the virtual scene texture file are stored in a model data structure MeshBuffer.
And the hierarchical bounding box acceleration structure establishing module 200 is configured to establish a hierarchical bounding box acceleration structure according to the data in the model data structure MeshBuffer.
An integrated light field visual model establishing module 300, configured to establish an integrated light field visual model according to the data in the parameter class ConfigXML. The integrated light field visual model comprises, in order, the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, and the ray emission plane. The integrated light field visual model is implemented as the class ILFR, and its attribute values comprise the data structure EIABuffer, world coordinate data, plane distance data, the pixel size of the LCD display screen, and the position data of the lens optical centers. The data structure EIABuffer is a two-dimensional structure of the same size as the unit image array, used to store the color value of each pixel in the unit image array. The world coordinate data comprise the world coordinates of the point Lookat, the point O_r, and the vector up, where the point Lookat is the origin of the integrated light field visual model coordinate system, the point O_r is the origin of the world coordinate system, and the vector up is the top vector of the integrated light field visual model coordinate system. The plane distance data comprise the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the three-dimensional object center, and the distance between the plane of the three-dimensional object center and the ray emission plane. The position data of a lens optical center is the position, within the unit image array, of the pixel onto which the lens optical center projects vertically.
A first determination result obtaining module 400, configured to determine whether an interactive instruction is received, so as to obtain a first determination result; the interaction instruction comprises a keyboard interaction instruction and a mouse interaction instruction.
A first light ray generating module 500, configured to modify, according to the interaction instruction, the attribute value of the integrated light field visual model when the first determination result indicates that the interaction instruction is received, and generate, according to the modified attribute value of the integrated light field visual model, a light ray corresponding to each pixel.
A second light ray generating module 600, configured to generate a light ray corresponding to each pixel according to the attribute value of the integrated light field visual model when the first determination result indicates that the interactive instruction is not received.
The unit image array generating module 700 is configured to render all the rays in parallel by using the hierarchical bounding box acceleration structure and the ray tracing technology, so as to generate a unit image array.
And a display module 800, configured to draw and display the unit image array on a display screen by using a double-cache technology.
The system parameter file comprises an xml file and a csv file; the csv file comprises position data of centers of all lenses in the lens array, and the xml file comprises pixel size of an LCD display screen, distance between the unit image array and the lens array, lens focal length, width of the LCD display screen, width numerical value of the unit image array in a virtual space, horizontal resolution of the unit image array, vertical resolution of the unit image array, number of lenses in the lens array, pixel number of each unit image array and file name of the position data file of the lens center.
The formats of the files in the virtual scene three-dimensional model are a ply file, an obj file and a txt file, and the formats of the virtual scene texture files are a ppm file, an hdr file and a jpg file.
The integrated light field visual model establishing module 300 specifically includes:
A world coordinate system establishing unit, configured to adopt a right-handed Cartesian coordinate system and establish the world coordinate system with axes x_w, y_w, z_w; the point O_r is the origin of the world coordinate system.

An integrated light field visual model coordinate system establishing unit, configured to set the point Lookat as the origin of the integrated light field visual model coordinate system, set the vector from the point O_r to the point Lookat as the z_c axis of that coordinate system, and establish the integrated light field visual model coordinate system with axes x_c, y_c, z_c; the point Lookat is the volume center point of the three-dimensional virtual object, and the vector up is the top vector of the integrated light field visual model coordinate system, used to construct its unit orthonormal basis.

An integrated light field visual model establishing unit, configured to establish the integrated light field visual model from the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the three-dimensional object center, the ray emission plane, and the integrated light field visual model coordinate system; all four planes are perpendicular to the z_c axis of the integrated light field visual model coordinate system; the ray emission plane intersects the z_c axis at the point O_r; the plane of the three-dimensional object center intersects the z_c axis at a point D, and the center point of the virtual lens array coincides with the point D; the positional relationship between the plane of the virtual unit image array and the plane of the virtual lens array is consistent with that between the unit image array and the lens array in the physical reproduction system.
The interactive instructions comprise a rotation instruction, a movement instruction, a zoom instruction, and a display fine-tuning instruction, wherein:
The rotation instruction is implemented by monitoring dragging with the left mouse button and resetting the world coordinates of the point O_r of the integrated light field visual model.
The movement instruction is implemented by monitoring dragging with the right mouse button and resetting the world coordinates of the center point Lookat and the point O_r of the integrated light field visual model.
The zoom instruction is implemented by monitoring dragging with the mouse wheel and resetting, in a fixed proportion, the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the center of the three-dimensional object, and the pixel size of the LCD display screen in the integrated light field visual model.
The display fine-tuning instruction is implemented by independently modifying, through keyboard keys, the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the center of the three-dimensional object, the distance between the plane of the center of the three-dimensional object and the light emission plane, and the pixel size of the LCD display screen in the integrated light field visual model.
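As a concrete illustration, the zoom instruction reduces to scaling several attributes by a common factor, while the fine-tuning keys adjust one attribute at a time. A minimal C++ sketch, assuming a hypothetical attribute container whose field names are illustrative only:

    // Hypothetical subset of the ILFR attribute values.
    struct ILFRModel {
        float eiaToLens;   // unit image plane to lens plane distance
        float lensToObj;   // lens plane to object-center plane distance
        float objToEmit;   // object-center plane to light emission plane distance
        float pixelSize;   // pixel size of the LCD display screen
    };

    // Zoom instruction: rescale the coupled attributes by a factor s
    // derived from the wheel drag (claim 4 bounds s to 0.5-3 times).
    void applyZoom(ILFRModel& m, float s) {
        m.eiaToLens *= s;
        m.lensToObj *= s;
        m.pixelSize *= s;
    }

    // Display fine-tuning instruction: nudge a single attribute per key press.
    void applyFineTune(ILFRModel& m, char key, float step) {
        switch (key) {
            case 'g': m.eiaToLens += step; break;
            case 'l': m.lensToObj += step; break;
            case 'e': m.objToEmit += step; break;
            case 'p': m.pixelSize += step; break;
        }
    }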
The first light generating module 500 specifically includes:
a first allocation unit for allocating one thread to each pixel.
And the attribute value modifying unit is used for modifying the attribute value of the integrated light field visual model according to the interactive instruction.
And the first emitting point coordinate and direction vector calculating unit is used for calculating the emitting point coordinate and the direction vector of the light ray corresponding to the current pixel according to the thread corresponding to each pixel and the modified attribute value of the integrated light field visual model.
And the first light ray generating unit is used for generating light rays corresponding to each pixel according to the coordinates of the emitting points of the light rays and the direction vectors.
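In effect, each thread derives its ray from the position of its pixel on the virtual unit image plane and the optical center of the lens covering that pixel. A minimal sketch, reusing the Vec3 helpers and the hypothetical ILFRModel from the sketches above; the layout constants, the lens-center table, and the pixel-to-lens mapping are illustrative assumptions, not the patent's identifiers:

    struct Ray { Vec3 origin, dir; };

    // Illustrative layout constants; in the patent these come from the
    // xml parameter file and the csv lens-center file.
    constexpr int kPixelsPerEI  = 30;    // pixels per elemental image
    constexpr int kLensesPerRow = 128;
    extern Vec3 lensCenters[];           // lens optical centers, from the csv

    // Pixel position on the unit image plane, here taken at z = 0 and
    // centered on the z_c axis (an assumed convention).
    Vec3 pixelToWorld(const ILFRModel& m, int px, int py, int w, int h) {
        return {(px - 0.5f * w) * m.pixelSize,
                (py - 0.5f * h) * m.pixelSize, 0.0f};
    }

    // One thread per pixel: the emission point is the pixel's position and
    // the direction runs through the optical center of its covering lens.
    Ray makeRay(const ILFRModel& m, int px, int py, int w, int h) {
        Vec3 origin = pixelToWorld(m, px, py, w, h);
        Vec3 lens   = lensCenters[(py / kPixelsPerEI) * kLensesPerRow
                                  + (px / kPixelsPerEI)];
        return {origin, normalize(sub(lens, origin))};
    }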
The second light generating module 600 specifically includes:
a second assigning unit for assigning a thread to each pixel.
And the second emitting point coordinate and direction vector calculating unit is used for calculating the emitting point coordinate and the direction vector of the light ray corresponding to the current pixel according to the thread corresponding to each pixel and the attribute value of the integrated light field visual model.
And the second light ray generating unit is used for generating light rays corresponding to each pixel according to the coordinates of the emitting points of the light rays and the direction vectors.
The unit image array generating module 700 specifically includes the following steps:
step S1: and an engine Optix in the hierarchical bounding box acceleration structure and the open-source ray tracing technology is adopted to calculate the radiance value of each ray in parallel, and the radiance value is stored in the data structure EIABuffer of the integrated light field visual model.
Step S2: once the radiance values of the rays corresponding to all pixels in the unit image array have been computed, the data in the data structure EIABuffer constitute one frame of the unit image array; all data in the data structure EIABuffer are copied to an idle OpenGL cache of the integrated light field visual model, and the data structure EIABuffer is refreshed.
Step S3: steps S1 and S2 are repeated until the unit image arrays of all frames are generated.
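On the host side, steps S1 and S2 amount to one OptiX launch per frame followed by a copy into the idle OpenGL buffer. A minimal sketch against the pre-RTX OptiX C++ wrapper (optixpp); the entry-point index, the RGBA8 buffer format, and the function name are assumptions for illustration:

    #include <optixu/optixpp_namespace.h>
    #include <GL/glew.h>

    void renderFrame(optix::Context ctx, optix::Buffer eiaBuffer,
                     GLuint pbo, int width, int height) {
        // S1: trace all rays in parallel; the ray generation program writes
        // each radiance value into the buffer bound as EIABuffer.
        ctx->launch(0, width, height);

        // S2: the buffer now holds one full frame of the unit image array;
        // copy it into the idle OpenGL pixel buffer, freeing EIABuffer to be
        // refreshed by the next launch.
        void* src = eiaBuffer->map();
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0,
                        (GLsizeiptr)width * height * 4, src);  // RGBA8 assumed
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        eiaBuffer->unmap();
    }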
The display module 800 specifically includes:
and drawing and displaying the unit image array of each frame on an LCD display screen by using a double-cache technology in an open source OpenGL graphic library.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts the embodiments have in common, reference may be made from one to another. Since the disclosed system corresponds to the disclosed method, its description is relatively brief, and the relevant points can be found in the description of the method.
Specific examples have been used herein to explain the principles and embodiments of the present invention; the above description is intended only to aid understanding of the method and its core concept. A person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. An integrated imaging generation method based on ray tracing and capable of interacting in real time, wherein the integrated imaging generation method comprises the following steps:
reading a system parameter file by using an open-source function library, and loading a virtual scene three-dimensional model and a virtual scene texture file by using the open-source function library; the system parameter file is stored in a parameter class ConfigXML, and the virtual scene three-dimensional model and the virtual scene texture file are stored in a model data structure MeshBuffer;
establishing a hierarchical bounding box acceleration structure according to the data in the model data structure MeshBuffer;
establishing an integrated light field visual model according to the data in the parameter class ConfigXML; the integrated light field visual model sequentially comprises a plane where a virtual unit image array is located, a plane where a virtual lens array is located, a plane where the center of a three-dimensional object is located and a light emission plane; the integrated light field visual model is implemented by an ILFR class, and the attribute values of the integrated light field visual model comprise the data structure EIABuffer, world coordinate data, plane distance data, the pixel size of the LCD display screen and the position data of the lens optical centers; the data structure EIABuffer is a two-dimensional structure of the same size as the unit image array and is used for storing the color value of each pixel in the unit image array; the world coordinate data comprises the world coordinate data of the point Lookat, the point O_r and the vector up, wherein the point Lookat is the origin of the integrated light field visual model coordinate system, the point O_r is the origin of the world coordinate system, and the vector up is the top vector of the integrated light field visual model coordinate system; the plane distance data comprises the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the center of the three-dimensional object, and the distance between the plane of the center of the three-dimensional object and the light emission plane; the position data of the lens optical centers refers to the position data, within the unit image array, of the pixel at the vertical projection point of each lens optical center on the unit image array;
judging whether an interactive instruction is received or not to obtain a first judgment result; the interactive instruction comprises a keyboard interactive instruction and a mouse interactive instruction;
if the first judgment result indicates that an interactive instruction is received, modifying the attribute value of the integrated light field visual model according to the interactive instruction, and generating light rays corresponding to each pixel according to the modified attribute value of the integrated light field visual model;
if the first judgment result indicates that an interactive instruction is not received, generating light rays corresponding to each pixel according to the attribute value of the integrated light field visual model;
rendering all the rays in parallel by adopting the hierarchical bounding box acceleration structure and a ray tracing technology to generate a unit image array;
and drawing and displaying the unit image array on a display screen by adopting a double-buffering technique.
2. The integrated imaging generation method based on ray tracing and real-time interaction as claimed in claim 1, wherein said system parameter file comprises an xml file and a csv file; the csv file contains the position data of the centers of all lenses in the lens array, and the xml file contains the pixel size of the LCD display screen, the distance between the unit image array and the lens array, the focal length of the lenses, the width of the LCD display screen, the width of the unit image array in virtual space, the horizontal resolution of the unit image array, the vertical resolution of the unit image array, the number of lenses in the lens array, the number of pixels in each unit image, and the file name of the lens-center position data file;
the virtual scene three-dimensional model files are in ply, obj and txt format, and the virtual scene texture files are in ppm, hdr and jpg format.
3. The method as claimed in claim 2, wherein the creating of the integrated light field visual model according to the data in the parameter class ConfigXML includes:
adopting a right-handed Cartesian coordinate system and establishing a world coordinate system with coordinate axes x_w, y_w, z_w, wherein the point O_r is the origin of the world coordinate system;
setting the point Lookat as the origin of the integrated light field visual model coordinate system, setting the vector from the point O_r to the point Lookat as the z_c axis of the integrated light field visual model coordinate system, and establishing the integrated light field visual model coordinate system with coordinate axes x_c, y_c, z_c, wherein the point Lookat is the volume center point of the three-dimensional virtual object, and the vector up is the top vector of the integrated light field visual model coordinate system, used for constructing the unit orthogonal basis of the integrated light field visual model coordinate system;
establishing the integrated light field visual model according to the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the center of the three-dimensional object, the light emission plane and the integrated light field visual model coordinate system, wherein the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the center of the three-dimensional object and the light emission plane are all perpendicular to the z_c axis of the integrated light field visual model coordinate system; the light emission plane intersects the z_c axis at the point O_r; the plane of the center of the three-dimensional object intersects the z_c axis at a point D, and the center point of the virtual lens array coincides with the point D; the positional relationship between the plane of the virtual unit image array and the plane of the virtual lens array is consistent with that between the unit image array and the lens array in the integrated imaging physical reproduction system.
4. The method of claim 3, wherein the interactive instructions comprise a rotation instruction, a movement instruction, a zoom instruction and a display fine-tuning instruction, wherein:
the rotation instruction is implemented by monitoring dragging with the left mouse button and resetting the world coordinates of the point O_r of the integrated light field visual model;
the movement instruction is implemented by monitoring dragging with the right mouse button and resetting the world coordinates of the center point Lookat and the point O_r of the integrated light field visual model;
the zoom instruction is implemented by monitoring dragging with the mouse wheel and resetting, in a proportion of 0.5 to 3 times, the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the center of the three-dimensional object, and the pixel size of the LCD display screen in the integrated light field visual model;
the display fine-tuning instruction is implemented by independently modifying, through keyboard keys, the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the center of the three-dimensional object, the distance between the plane of the center of the three-dimensional object and the light emission plane, and the pixel size of the LCD display screen in the integrated light field visual model.
5. The method as claimed in claim 1, wherein the generating of the light corresponding to each pixel comprises:
assigning a thread to each pixel;
calculating the coordinates and direction vectors of the emitting points of the light rays corresponding to the current pixels according to the thread corresponding to each pixel and the attribute values of the integrated light field visual model;
and generating the light corresponding to each pixel according to the coordinates and the direction vectors of the emitting points of the light.
6. The method as claimed in claim 1, wherein the step of rendering all the rays in parallel by using the hierarchical bounding box acceleration structure and ray tracing technique to generate the unit image array includes:
step S1: an engine Optix in the hierarchical bounding box acceleration structure and the open-source ray tracing technology is adopted to calculate the radiance value of each ray in parallel, and the radiance value is stored in a data structure EIABuffer of the integrated light field visual model;
step S2: until the radiance value of light corresponding to each pixel in the unit image array is calculated, all data in the data structure EIABuffer are a frame of unit image array, copying all data in the data structure EIABuffer to an idle cache of OpenGL in an integrated light field viewing model, and refreshing the data structure EIABuffer;
and step S3: repeating steps S1 to S2 until the unit image arrays of all frames are generated.
7. The method as claimed in claim 1, wherein the drawing and displaying of the unit image array on a display screen by using a double-buffering technique specifically comprises:
and drawing and displaying the unit image array of each frame on an LCD display screen by using a double-cache technology in an open source OpenGL graphic library.
8. An integrated imaging generation system based on ray tracing and capable of real-time interaction, the integrated imaging generation system comprising:
the initialization module is used for reading the system parameter file by using the open-source function library and loading the virtual scene three-dimensional model and the virtual scene texture file by using the open-source function library; the system parameter file is stored in a parameter class ConfigXML, and the virtual scene three-dimensional model and the virtual scene texture file are stored in a model data structure MeshBuffer;
the hierarchical bounding box acceleration structure establishing module is used for establishing a hierarchical bounding box acceleration structure according to the data in the model data structure MeshBuffer;
the integrated light field visual model establishing module is used for establishing an integrated light field visual model according to the data in the parameter class ConfigXML; the integrated light field visual model sequentially comprises a plane where a virtual unit image array is located, a plane where a virtual lens array is located, a plane where the center of a three-dimensional object is located and a light emission plane; the integrated light field visual model is implemented by an ILFR class, and the attribute values of the integrated light field visual model comprise the data structure EIABuffer, world coordinate data, plane distance data, the pixel size of the LCD display screen and the position data of the lens optical centers; the data structure EIABuffer is a two-dimensional structure of the same size as the unit image array and is used for storing the color value of each pixel in the unit image array; the world coordinate data comprises the world coordinate data of the point Lookat, the point O_r and the vector up, wherein the point Lookat is the origin of the integrated light field visual model coordinate system, the point O_r is the origin of the world coordinate system, and the vector up is the top vector of the integrated light field visual model coordinate system; the plane distance data comprises the distance between the plane of the virtual unit image array and the plane of the virtual lens array, the distance between the plane of the virtual lens array and the plane of the center of the three-dimensional object, and the distance between the plane of the center of the three-dimensional object and the light emission plane; the position data of the lens optical centers refers to the position data, within the unit image array, of the pixel at the vertical projection point of each lens optical center on the unit image array;
the first judgment result obtaining module is used for judging whether the interactive instruction is received or not to obtain a first judgment result; the interactive instruction comprises a keyboard interactive instruction and a mouse interactive instruction;
a first light ray generation module, configured to modify an attribute value of the integrated light field visual model according to the interaction instruction when the first determination result indicates that the interaction instruction is received, and generate a light ray corresponding to each pixel according to the modified attribute value of the integrated light field visual model;
the second light ray generation module is used for generating light rays corresponding to each pixel according to the attribute value of the integrated light field visual model when the first judgment result shows that the interactive instruction is not received;
the unit image array generating module is used for rendering all the rays in parallel by adopting the hierarchical bounding box acceleration structure and the ray tracing technology to generate a unit image array;
and the display module is used for drawing and displaying the unit image array on a display screen by adopting a double-buffering technique.
9. The integrated imaging generation system based on ray tracing and capable of real-time interaction as claimed in claim 8, wherein the integrated light field visual model establishing module specifically comprises:
a world coordinate system establishing unit, configured to adopt a right-handed Cartesian coordinate system and establish a world coordinate system with coordinate axes x_w, y_w, z_w, wherein the point O_r is the origin of the world coordinate system;
an integrated light field visual model coordinate system establishing unit, configured to set the point Lookat as the origin of the integrated light field visual model coordinate system, set the vector from the point O_r to the point Lookat as the z_c axis of the integrated light field visual model coordinate system, and establish the integrated light field visual model coordinate system with coordinate axes x_c, y_c, z_c, wherein the point Lookat is the volume center point of the three-dimensional virtual object, and the vector up is the top vector of the integrated light field visual model coordinate system, used for constructing the unit orthogonal basis of the integrated light field visual model coordinate system;
an integrated light field visual model establishing unit, configured to establish the integrated light field visual model according to the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the center of the three-dimensional object, the light emission plane and the integrated light field visual model coordinate system, wherein the plane of the virtual unit image array, the plane of the virtual lens array, the plane of the center of the three-dimensional object and the light emission plane are all perpendicular to the z_c axis of the integrated light field visual model coordinate system; the light emission plane intersects the z_c axis at the point O_r; the plane of the center of the three-dimensional object intersects the z_c axis at a point D, and the center point of the virtual lens array coincides with the point D; the positional relationship between the plane of the virtual unit image array and the plane of the virtual lens array is consistent with that between the unit image array and the lens array in the integrated imaging physical reproduction system.
10. The system of claim 8, wherein the display module comprises:
and the display unit is used for drawing and displaying the unit image array of each frame on the LCD display screen by using the double-buffering technique of the open-source OpenGL graphics library.
CN201910438381.4A 2019-05-24 2019-05-24 Ray tracing based real-time interactive integrated imaging generation method and system Active CN110276823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910438381.4A CN110276823B (en) 2019-05-24 2019-05-24 Ray tracing based real-time interactive integrated imaging generation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910438381.4A CN110276823B (en) 2019-05-24 2019-05-24 Ray tracing based real-time interactive integrated imaging generation method and system

Publications (2)

Publication Number Publication Date
CN110276823A CN110276823A (en) 2019-09-24
CN110276823B true CN110276823B (en) 2023-04-07

Family

ID=67960157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910438381.4A Active CN110276823B (en) 2019-05-24 2019-05-24 Ray tracing based real-time interactive integrated imaging generation method and system

Country Status (1)

Country Link
CN (1) CN110276823B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427166B (en) * 2020-03-31 2022-07-05 京东方科技集团股份有限公司 Light field display method and system, storage medium and display panel
CN113654458B (en) * 2021-01-21 2024-05-28 中国人民解放军陆军装甲兵学院 Transverse position error three-dimensional measurement method and system for lens array
CN113031262B (en) * 2021-03-26 2022-06-07 中国人民解放军陆军装甲兵学院 Integrated imaging system display end pixel value calculation method and system
CN113240785B (en) * 2021-04-13 2024-03-29 西安电子科技大学 Multi-camera combined rapid ray tracing method, system and application


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8525826B2 (en) * 2008-08-08 2013-09-03 International Business Machines Corporation System for iterative interactive ray tracing in a multiprocessor environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702099A (en) * 2013-12-17 2014-04-02 四川大学 Ultra-large visual-angle integrated-imaging 3D(Three-Dimensional)displaying method based on head tracking
CN107924580A (en) * 2015-09-03 2018-04-17 西门子保健有限责任公司 The visualization of surface volume mixing module in medical imaging
GB201721702D0 (en) * 2017-07-13 2018-02-07 Imagination Tech Ltd Hybrid hierarchy for ray tracing
CN107563088A (en) * 2017-09-14 2018-01-09 北京邮电大学 A kind of light field display device emulation mode based on Ray Tracing Algorithm

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Elemental image array generation based on object front reference; Junfu Wang et al.; ScienceDirect; Sep. 27, 2018; Vol. 171; full text *
Design of a parallelized ray tracing acceleration structure for dynamic 3D virtual scenes; Li Hua et al.; Journal of Changchun University of Science and Technology (Natural Science Edition); Dec. 15, 2013; No. 6; full text *
A virtual roaming system based on scene modeling; Wang Ruiling et al.; Computer Applications and Software; Jul. 15, 2007; No. 7; full text *
Research progress and optimization methods of integrated imaging three-dimensional display systems; Jiang Xiaoyu et al.; Optics & Optoelectronic Technology; Oct. 31, 2017; Vol. 15, No. 5; full text *

Also Published As

Publication number Publication date
CN110276823A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110276823B (en) Ray tracing based real-time interactive integrated imaging generation method and system
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
US11069124B2 (en) Systems and methods for reducing rendering latency
US10529117B2 (en) Systems and methods for rendering optical distortion effects
CN106204712B (en) Piecewise linearity irregularly rasterizes
US11704806B2 (en) Scalable three-dimensional object recognition in a cross reality system
US8243081B2 (en) Methods and systems for partitioning a spatial index
CN102289845B (en) Three-dimensional model drawing method and device
US10553012B2 (en) Systems and methods for rendering foveated effects
TW201918745A (en) System and method for near-eye light field rendering for wide field of view interactive three-dimensional computer graphics
US20190318528A1 (en) Computer-Graphics Based on Hierarchical Ray Casting
CN108573521B (en) Real-time interactive naked eye 3D display method based on CUDA parallel computing framework
US10846908B2 (en) Graphics processing apparatus based on hybrid GPU architecture
WO2021253640A1 (en) Shadow data determination method and apparatus, device, and readable medium
WO2022143367A1 (en) Image rendering method and related device therefor
CN102243768A (en) Method for drawing stereo picture of three-dimensional virtual scene
JP2012190428A (en) Stereoscopic image visual effect processing method
CN116051713B (en) Rendering method, electronic device, and computer-readable storage medium
US9401044B1 (en) Method for conformal visualization
US11120611B2 (en) Using bounding volume representations for raytracing dynamic units within a virtual space
US6346939B1 (en) View dependent layer ordering method and system
WO2023169002A1 (en) Soft rasterization method and apparatus, device, medium, and program product
CN106780716A (en) Historical and cultural heritage digital display method
JP2022136963A (en) Image processing method and device for creating reconstructed image
CN117392358B (en) Collision detection method, collision detection device, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant