CN115131436A - Halo rendering method, device and storage medium - Google Patents

Halo rendering method, device and storage medium Download PDF

Info

Publication number
CN115131436A
CN115131436A
Authority
CN
China
Prior art keywords
light source
camera
orientation
position information
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210812478.9A
Other languages
Chinese (zh)
Inventor
李铭豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Perfect Time And Space Software Co ltd
Original Assignee
Shanghai Perfect Time And Space Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Perfect Time And Space Software Co ltd filed Critical Shanghai Perfect Time And Space Software Co ltd
Priority to CN202210812478.9A priority Critical patent/CN115131436A/en
Publication of CN115131436A publication Critical patent/CN115131436A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/506 - Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The application relates to a halo rendering method, a halo rendering device and a storage medium. The method includes: acquiring first position information of a current light source in a current virtual scene, and acquiring second position information of a camera in the current virtual scene; determining, according to the first position information and the second position information, whether the orientation of the current light source is opposite to the orientation of the camera; acquiring third position information of the current light source in a screen space when the orientation of the current light source is opposite to the orientation of the camera; and rendering the halo generated by the light source in the imaging of the camera according to the third position information and the spatial position range of the camera screen. The method and the device solve the problem in prior-art rendering that, in some scenes, the camera view angle can hardly bring the actual light source into the screen field of view.

Description

Halo rendering method, device and storage medium
Technical Field
The present disclosure relates to the field of light source rendering technologies, and in particular, to a method and an apparatus for halo rendering, and a storage medium.
Background
Existing rendering schemes are lens-halo schemes or post-processing halo schemes based on 2D patch rendering. Lens halos are needed in some scenes to improve the scene rendering effect, but with prior-art rendering the camera view angle can hardly bring the actual light source into the screen field of view in some scenes. For example, many scenes in a project use a third-person follow top-down view, in which the light source cannot be contained in the field of view most of the time, so a standard lens halo cannot be rendered.
In view of the above problems in the related art, no effective solution exists at present.
Disclosure of Invention
The application provides a halo rendering method, a halo rendering device and a storage medium, aiming to solve the problem that, with prior-art rendering, the camera view angle can hardly bring the actual light source into the screen field of view in some scenes.
In a first aspect, the present application provides a halo rendering method, including: acquiring first position information of a current light source in a current virtual scene and acquiring second position information of a camera in the current virtual scene; determining whether the orientation of the current light source is opposite to the orientation of the camera according to the first position information and the second position information; under the condition that the orientation of the current light source is opposite to the orientation of the camera, acquiring third position information of the current light source in a screen space; and rendering the halo generated by the light source in the imaging of the camera according to the third position information and the spatial position range of the camera screen.
In a second aspect, the present application provides a halo rendering apparatus, including: the first acquisition module is used for acquiring first position information of a current light source in a current virtual scene and acquiring second position information of a camera in the current virtual scene; a first determining module, configured to determine whether the orientation of the current light source is opposite to the orientation of the camera according to the first position information and the second position information; the second acquisition module is used for acquiring third position information of the current light source in the screen space under the condition that the orientation of the current light source is opposite to the orientation of the camera; and the rendering module is used for rendering the halo generated by the light source in the imaging of the camera according to the third position information and the space position range of the camera screen.
In a third aspect, an electronic device is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor configured to implement the method steps of any of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method provided by the embodiment of the application, whether the directions of the current light source and the camera are opposite or not is determined according to the respective position information of the current light source and the camera in the current virtual scene, and under the condition that the directions of the current light source and the camera are opposite, the halo generated by the light source is rendered in the imaging of the camera according to the position information of the current light source in the screen space and the position range of the camera screen space.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a halo rendering method according to an embodiment of the present application;
fig. 2 is a second flowchart illustrating a halo rendering method according to an embodiment of the present application;
fig. 3 is a third schematic flowchart illustrating a halo rendering method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a halo rendering apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making creative efforts shall fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart of a halo rendering method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the steps of:
102, acquiring first position information of a current light source in a current virtual scene and acquiring second position information of a camera in the current virtual scene;
the current virtual scene may be a virtual scene in various animations, such as a virtual scene in a Japanese animation or an American animation.
104, determining whether the orientation of the current light source is opposite to the orientation of the camera or not according to the first position information and the second position information;
106, acquiring third position information of the current light source in a screen space under the condition that the orientation of the current light source is opposite to that of the camera;
it should be noted that if the orientation of the current light source is opposite to the orientation of the camera, the light source faces the camera, that is, light source rendering can be performed; if the orientation of the current light source is not opposite to the orientation of the camera, they face different directions, that is, the camera is backlit, and light source rendering cannot be performed.
And step 108, rendering the halo generated by the light source in the imaging of the camera according to the third position information and the spatial position range of the camera screen.
Through the above steps 102 to 108, whether the orientations of the current light source and the camera are opposite is determined according to their respective position information in the current virtual scene. When the orientations are opposite, the halo generated by the light source is rendered in the imaging of the camera according to the position information of the current light source in the screen space and the spatial position range of the camera screen. That is, in the embodiment of the present application, light source rendering is performed according to the screen-space position of the current light source and the spatial position range of the camera screen, which ensures that the current light source can still produce a halo within that range, thereby solving the problem that with prior-art rendering the camera view angle can hardly bring the actual light source into the screen field of view in some scenes.
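The flow of steps 102 to 108 can be sketched as a single function. The function and parameter names (`render_halo_if_visible`, `project`) are illustrative assumptions, not part of the patent:

```python
def render_halo_if_visible(light_pos, light_dir, cam_pos, cam_dir, project):
    """Sketch of steps 102-108: check whether the light source and camera
    orientations are opposite via a dot product; if so, project the light
    into screen space and return the screen position at which to render the
    halo. Returns None when the camera is backlit. All names are illustrative.
    """
    dot = sum(l * c for l, c in zip(light_dir, cam_dir))
    if dot >= 0:
        # Orientations are not opposite -> the camera is backlit; skip rendering.
        return None
    # Third position information: the light source's screen-space position.
    return project(light_pos)
```

A caller would pass its own world-to-screen projection as `project` and draw the halo at the returned coordinates.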
In the embodiment of the application, the position information types of the first position information and the second position information correspond to the light source type of the current light source; the position information type is coordinate orientation under the condition that the light source type of the current light source is parallel light; in the case where the light source type of the current light source is a point light, the position information type is a coordinate position.
In this regard, in a specific example, if the light source type of the current light source is parallel light, the first position information and the second position information respectively represent the coordinate orientation of the current light source in world coordinates and the coordinate orientation of the camera in world coordinates; if the light source type of the current light source is point light, the first position information represents the vector from the coordinate position of the current light source in world coordinates to the coordinate position of the camera, and the second position information represents the vector from the coordinate position of the camera in world coordinates to the coordinate position of the current light source.
In this embodiment of the application, the determining, according to the first position information and the second position information, whether the current orientation of the light source is opposite to the orientation of the camera in the above step 104 may further include:
step 11, determining a light source orientation vector of the current light source in the virtual scene according to the first position information;
step 12, determining a camera orientation vector of the camera in the virtual scene according to the second position information;
step 13, determining a dot product result of the light source orientation vector and the camera orientation vector;
in the case where the light source type of the current light source is a spot light, the first position information may be
Figure BDA0003739771970000041
The second location information may be
Figure BDA0003739771970000042
Then pass through
Figure BDA0003739771970000043
And
Figure BDA0003739771970000044
and judging whether the current direction of the light source is opposite to the direction of the camera or not according to the dot product result. In case that the light source type of the current light source is parallel light, then the first position information may be a ═ x (x) 1 ,y 1 ,z 1 ) The second position information may be b ═ x 2 ,y 2 ,z 2 ) And determining a vector between the a and the b according to the a and the b, and judging whether the orientation of the current light source is opposite to the orientation of the camera according to a dot product result of the determined vector.
Step 14, determining that the orientation of the current light source is opposite to the orientation of the camera under the condition that the dot product result is less than 0;
and step 15, determining that the orientation of the current light source is not opposite to the orientation of the camera under the condition that the dot product result is greater than or equal to 0.
Through the above steps 11 to 15, whether the orientation of the current light source is opposite to the orientation of the camera is determined according to the dot product result of the orientation vectors represented by the first position information and the second position information. If the orientation of the current light source is opposite to the orientation of the camera, the light source faces the camera and light source rendering can be performed; if not, the camera is backlit and light source rendering cannot be performed. Therefore, halo rendering is performed only when the camera and the light source face each other, so that light source rendering can proceed normally.
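The dot-product test of steps 11 to 15 can be sketched as a minimal function; the function name and the use of unnormalized vectors are illustrative assumptions (only the sign of the dot product matters here):

```python
def orientations_opposite(light_vec, camera_vec):
    """Steps 11-15: compute the dot product of the light source orientation
    vector and the camera orientation vector. A result < 0 means the
    orientations are opposite (the halo should be rendered); >= 0 means the
    camera is backlit (rendering is skipped)."""
    dot = sum(l * c for l, c in zip(light_vec, camera_vec))
    return dot < 0
```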
Based on this, as shown in fig. 2, the method steps of the embodiment of the present application may further include:
and step 107, in the case that the orientation of the current light source is not opposite to the orientation of the camera, forbidding to render the halation generated by the light source in the imaging of the camera.
That is, in the present application, if the orientation of the current light source is not opposite to the orientation of the camera, halo rendering is prohibited, because in that case the camera is backlit and light source rendering is impossible.
In an optional implementation manner of the embodiment of the present application, the manner of acquiring the third position information of the current light source in the screen space referred to in step 106 may further include: mapping the position of the light source in the world coordinate space to a screen coordinate system corresponding to the screen space.
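A conventional way to perform this world-to-screen mapping is a view-projection transform followed by a remap of normalized device coordinates into [0,1]. The patent does not specify the exact transform, so the following sketch is an assumption (row-major 4x4 matrix, point in front of the camera with w > 0):

```python
def world_to_screen(p_world, view_proj):
    """Map a world-space position into normalized screen coordinates
    ([0,1] x [0,1], lower-left origin). view_proj is a 4x4 row-major
    view-projection matrix; the matrix layout is an illustrative assumption."""
    x, y, z = p_world
    # Homogeneous transform into clip space.
    clip = [sum(view_proj[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
            for r in range(4)]
    w = clip[3]
    # Perspective divide into NDC ([-1, 1]), assuming w > 0.
    ndc_x, ndc_y = clip[0] / w, clip[1] / w
    # Remap NDC into the [0, 1] screen interval.
    return ((ndc_x + 1.0) * 0.5, (ndc_y + 1.0) * 0.5)
```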
before the halo generated by the light source is rendered in the imaging of the camera, as shown in fig. 3, the method of the embodiment of the present application may further include: step 302, determining whether the position of the light source in the screen coordinate system is within an initial interval, wherein the initial interval is an interval corresponding to the camera screen in the screen coordinate system.
It should be noted that the initial interval of the camera screen is generally set as the range covered from [0,0] at the lower left corner of the screen to [1,1] at the upper right corner.
And 304, under the condition that the position of the light source in the screen coordinate system is not in the initial interval, adjusting the initial interval to be the target interval so as to enable the position of the light source in the screen coordinate system to be in the target interval.
For this, taking the initial screen interval covering [0,0] at the lower left corner to [1,1] at the upper right corner as an example: if the x or y screen-space coordinate of the light source falls outside the interval from the camera screen coordinate min to max, the present application can modify the screen space range to [-1, 2] (that is, [min-1, max+1]), so as to ensure that a light source at such a position can still project a halo into the camera screen.
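The interval adjustment of steps 302 and 304 can be sketched as follows; the function name and the fixed margin of 1 are illustrative assumptions based on the [0,1] to [-1,2] example above:

```python
def expand_screen_interval(light_uv, lo=0.0, hi=1.0, margin=1.0):
    """If the light's screen-space coordinate falls outside the initial
    interval [lo, hi] x [lo, hi], expand the interval by `margin` on each
    side (e.g. [0,1] -> [-1,2]) so an off-screen light source can still
    project a halo into the camera screen."""
    x, y = light_uv
    inside = lo <= x <= hi and lo <= y <= hi
    if inside:
        return (lo, hi)                     # keep the initial interval
    return (lo - margin, hi + margin)       # target interval, e.g. [-1, 2]
```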
In this embodiment of the application, the manner of rendering the halo generated by the light source in the imaging of the camera referred to in step 108 may further include: rendering the halo generated by the light source in the imaging of the camera based on halo patches, wherein a halo patch is a camera-facing grid generated according to its distance from the screen space center.
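Such a halo patch can be sketched as a screen-space quad centered on the light's screen position, with a size that varies with its distance from the screen center. The exact scaling rule is not given in the patent, so the linear rule below is an illustrative assumption:

```python
import math

def halo_quad(center_uv, base_size=0.2):
    """Generate a screen-space quad (a 'halo patch') centered on the light's
    screen position, scaled by its distance from the screen center (0.5, 0.5).
    The scaling rule and base_size are illustrative assumptions."""
    cx, cy = center_uv
    dist = math.hypot(cx - 0.5, cy - 0.5)
    half = base_size * (1.0 + dist) * 0.5
    # Corners in counter-clockwise order, ready to be drawn as a camera-facing quad.
    return [(cx - half, cy - half), (cx + half, cy - half),
            (cx + half, cy + half), (cx - half, cy + half)]
```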
Corresponding to fig. 1, an embodiment of the present application provides a halo rendering apparatus, as shown in fig. 4, the apparatus includes:
a first obtaining module 42, configured to obtain first position information of a current light source in a current virtual scene, and obtain second position information of a camera in the current virtual scene;
a first determining module 44, configured to determine whether the current orientation of the light source is opposite to the orientation of the camera according to the first position information and the second position information;
a second obtaining module 46, configured to obtain third position information of the current light source in the screen space when it is determined that the orientation of the current light source is opposite to the orientation of the camera;
and a rendering module 48, configured to render the halo generated by the light source in the imaging of the camera according to the third position information and the spatial position range of the camera screen.
According to the above device, whether the orientations of the current light source and the camera are opposite is determined according to their respective position information in the current virtual scene. When the orientations are opposite, the halo generated by the light source is rendered in the imaging of the camera according to the position information of the current light source in the screen space and the spatial position range of the camera screen. That is, in the embodiment of the present application, light source rendering is performed according to the screen-space position of the current light source and the spatial position range of the camera screen, which ensures that the current light source can still produce a halo within that range, thereby solving the problem that with prior-art rendering the camera view angle can hardly bring the actual light source into the screen field of view in some scenes.
Optionally, the position information types of the first position information and the second position information in the embodiment of the present application correspond to the light source type of the current light source; the position information type is coordinate orientation in the case that the light source type of the current light source is parallel light, and the position information type is coordinate position in the case that the light source type of the current light source is point light.
Optionally, the first determining module 44 in this embodiment may further include: a first determining unit, configured to determine a light source orientation vector of the current light source in the virtual scene according to the first position information; a second determining unit, configured to determine a camera orientation vector of the camera in the virtual scene according to the second position information; a third determining unit, configured to determine a dot product result of the light source orientation vector and the camera orientation vector; a fourth determining unit, configured to determine that the orientation of the current light source is opposite to the orientation of the camera if the dot product result is less than 0; and a fifth determining unit, configured to determine that the orientation of the current light source is not opposite to the orientation of the camera if the dot product result is greater than or equal to 0.
Optionally, the second obtaining module 46 in this embodiment of the present application further includes: the mapping unit is used for mapping the position of the light source in the world coordinate space to a screen coordinate system corresponding to the screen space;
based on this, the apparatus in the embodiment of the present application may further include: and the second determining module is used for determining whether the position of the light source in the screen coordinate system is within an initial interval before the halo generated by the light source is rendered in the imaging of the camera, wherein the initial interval is an interval corresponding to the camera screen in the screen coordinate system.
Optionally, the apparatus in this embodiment of the present application may further include: and the adjusting module is used for adjusting the initial interval to be the target interval under the condition that the position of the light source in the screen coordinate system is not in the initial interval, so that the position of the light source in the screen coordinate system is in the target interval.
Optionally, the apparatus in this embodiment of the present application may further include: and the inhibition module is used for inhibiting the halation generated by the light source from being rendered in the imaging of the camera under the condition that the orientation of the current light source is not opposite to that of the camera.
Optionally, the rendering module 48 in this embodiment of the present application may further include: a rendering unit, configured to render the halo generated by the light source in the imaging of the camera based on halo patches, wherein a halo patch is a camera-facing grid generated according to its distance from the screen space center.
As shown in fig. 5, an electronic device according to an embodiment of the present application includes a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 complete mutual communication via the communication bus 114,
a memory 113 for storing a computer program;
in an embodiment of the present application, when executing the program stored in the memory 113, the processor 111 implements the steps of the halo rendering method provided in any one of the foregoing method embodiments, with similar effects, which are not described herein again.
The present application further provides a computer readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the halo rendering method as provided in any one of the foregoing method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A halo rendering method, comprising:
acquiring first position information of a current light source in a current virtual scene and acquiring second position information of a camera in the current virtual scene;
determining whether the orientation of the current light source is opposite to the orientation of the camera according to the first position information and the second position information;
under the condition that the orientation of the current light source is opposite to the orientation of the camera, acquiring third position information of the current light source in a screen space;
and rendering the halo generated by the light source in the imaging of the camera according to the third position information and the spatial position range of the camera screen.
2. The method of claim 1, wherein the first location information and the second location information have a location information type corresponding to a light source type of the current light source; wherein, under the condition that the light source type of the current light source is parallel light, the position information type is coordinate orientation; and under the condition that the light source type of the current light source is point light, the position information type is a coordinate position.
3. The method of claim 2, wherein determining whether the orientation of the current light source is opposite to the orientation of the camera based on the first position information and the second position information comprises:
determining a light source orientation vector of the current light source under the virtual scene according to the first position information;
determining a camera orientation vector of the camera in the virtual scene according to the second position information;
determining a dot product result of the light source orientation vector and the camera orientation vector;
determining that the orientation of the current light source is opposite to the orientation of the camera if the dot product result is less than 0;
determining that the orientation of the current light source is not opposite to the orientation of the camera if the dot product result is greater than or equal to 0.
4. The method of claim 3, wherein:
the acquiring third position information of the current light source in the screen space comprises: mapping the position of the light source in the world coordinate space to a screen coordinate system corresponding to the screen space;
before rendering the halo generated by the light source in the imaging of the camera, the method further comprises: and determining whether the position of the light source in the screen coordinate system is within an initial interval, wherein the initial interval is an interval corresponding to the camera screen in the screen coordinate system.
5. The method of claim 4, further comprising:
and under the condition that the position of the light source in the screen coordinate system is not in the initial interval, adjusting the initial interval to be a target interval so as to enable the position of the light source in the screen coordinate system to be in the target interval.
6. The method of claim 3, further comprising:
and in the case that the orientation of the current light source is not opposite to the orientation of the camera, prohibiting the halation generated by the light source from being rendered in the imaging of the camera.
7. The method of claim 1, wherein the rendering the halo generated by the light source in the imaging of the camera comprises:
rendering halos produced by the light source in imaging of the camera based on halo patches, wherein a halo patch is a camera-facing grid generated according to its distance from the screen space center.
8. A halo rendering apparatus, comprising:
a first acquisition module, configured to acquire first position information of a current light source in a current virtual scene and second position information of a camera in the current virtual scene;
a first determining module, configured to determine, according to the first position information and the second position information, whether the orientation of the current light source is opposite to the orientation of the camera;
a second acquisition module, configured to acquire third position information of the current light source in a screen space when the orientation of the current light source is opposite to the orientation of the camera;
and a rendering module, configured to render, according to the third position information and the spatial position range of the camera screen, the halo generated by the light source in the imaging of the camera.
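The module structure of claim 8 can be mirrored as a small class whose methods correspond to the determining, acquisition, and rendering modules. This is a toy orchestration with hypothetical names, not the claimed apparatus:

```python
import numpy as np

class HaloRenderer:
    """Toy mapping of claim 8's modules onto methods; a real engine would
    wire these steps into its render loop."""

    def __init__(self, screen_w, screen_h):
        self.screen = (screen_w, screen_h)

    def orientations_opposite(self, light_fwd, cam_fwd):
        # first determining module: dot product < 0 means the two face each other
        return float(np.dot(light_fwd, cam_fwd)) < 0.0

    def screen_position(self, light_pos_ndc):
        # second acquisition module: NDC -> pixel coordinates
        w, h = self.screen
        return ((light_pos_ndc[0] * 0.5 + 0.5) * w,
                (1.0 - (light_pos_ndc[1] * 0.5 + 0.5)) * h)

    def render_halo(self, light_fwd, cam_fwd, light_pos_ndc):
        # rendering module: skip the halo when the light faces away
        if not self.orientations_opposite(light_fwd, cam_fwd):
            return None
        return self.screen_position(light_pos_ndc)

r = HaloRenderer(1920, 1080)
print(r.render_halo(np.array([0.0, 0.0, -1.0]),
                    np.array([0.0, 0.0, 1.0]),
                    (0.0, 0.0)))  # (960.0, 540.0)
```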
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1-7 when executing the program stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method steps of any one of claims 1-7.
CN202210812478.9A 2022-07-11 2022-07-11 Halo rendering method, device and storage medium Pending CN115131436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210812478.9A CN115131436A (en) 2022-07-11 2022-07-11 Halo rendering method, device and storage medium

Publications (1)

Publication Number Publication Date
CN115131436A 2022-09-30

Family ID: 83384622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210812478.9A Pending CN115131436A (en) 2022-07-11 2022-07-11 Halo rendering method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115131436A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination