CN115546082A - Lens halo generation method and device and electronic device

Lens halo generation method and device and electronic device

Info

Publication number
CN115546082A
Authority
CN
China
Prior art keywords
halo
map
maps
patch
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211157998.7A
Other languages
Chinese (zh)
Inventor
陈纾
周振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211157998.7A priority Critical patent/CN115546082A/en
Publication of CN115546082A publication Critical patent/CN115546082A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a lens halo generation method and device, and an electronic device. The method comprises the following steps: generating a patch set corresponding to the lens halo based on attribute information of the lens halo; acquiring a sequence map set corresponding to the patch set; determining sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set; and generating the lens halo based on the texture information of the patch set. The method and the device solve the technical problems in the related art that the lens halo generation process is complex and the generation effect is poor.

Description

Lens halo generation method and device and electronic device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for generating lens halo, and an electronic apparatus.
Background
At present, when lens halos are generated in a virtual scene such as a game or an animation, multiple halo maps are usually simply superimposed. Because the shape and position of these maps are mostly fixed, the system has to adjust them in real time according to the positions of the virtual lens and the light source to make the generated halo effect match the real scene, so the whole generation process is complex; and because the transverse-line and main-ray maps used to generate the halos lack the dynamic effect seen in a real scene, the generated lens halos look poor.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present disclosure provide a method and an apparatus for generating lens halos, and an electronic apparatus, so as to at least solve the technical problems in the related art that a lens halo generation process is complex and a generation effect is poor.
According to an embodiment of the present disclosure, a lens halo generation method is provided, where the method includes: generating, based on attribute information of the lens halo, a patch set corresponding to the lens halo, where the patch set includes: a flare patch, a first halo patch, a plurality of light spot patches, and a second halo patch; acquiring a sequence map set corresponding to the patch set, where the sequence map set includes a first light source sequence map and a second light source sequence map, the first light source sequence map is composed of a flare map corresponding to the flare patch, a first halo map corresponding to the first halo patch, and multiple frames of light spot maps corresponding to the plurality of light spot patches, and the second light source sequence map is composed of multiple frames of second halo maps corresponding to the second halo patch; determining sampling texture coordinates of the multi-frame maps based on original texture coordinates of the patch set and region information of the regions where the multi-frame maps are located in the sequence map set; sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set; and generating the lens halo based on the texture information of the patch set.
According to an embodiment of the present disclosure, there is also provided a lens halo generation device, including: a patch generation module, configured to generate a patch set corresponding to the lens halo based on attribute information of the lens halo, where the patch set includes: a flare patch, a first halo patch, a plurality of light spot patches, and a second halo patch; a sequence map acquisition module, configured to acquire a sequence map set corresponding to the patch set, where the sequence map set includes a first light source sequence map and a second light source sequence map, the first light source sequence map is composed of a flare map corresponding to the flare patch, a first halo map corresponding to the first halo patch, and multiple frames of light spot maps corresponding to the plurality of light spot patches, and the second light source sequence map is composed of multiple frames of second halo maps corresponding to the second halo patch; a texture coordinate generation module, configured to determine sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps in the sequence map set are located; a sampling module, configured to sample the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set; and a lens halo generation module, configured to generate the lens halo based on the texture information of the patch set.
According to an embodiment of the present disclosure, there is further provided a computer-readable storage medium having a computer program stored therein, where the computer program is configured to execute the method for generating lens halos in any one of the above.
According to an embodiment of the present disclosure, there is also provided an electronic apparatus, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the lens halo generation method in any one of the above.
In at least some embodiments of the present disclosure, a patch set corresponding to the lens halo is generated based on attribute information of the lens halo; a sequence map set corresponding to the patch set is acquired; sampling texture coordinates of the multi-frame maps are determined based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; the sequence map set is sampled based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set; and the lens halo is generated based on the texture information of the patch set. In this way, the corresponding halo maps are fetched from the sequence map set according to the information of the patch set, with no need to add every halo map to the virtual scene and then adjust it according to the positions of the light source and the virtual lens, which greatly simplifies the whole generation process; and because the halo maps can be adjusted according to the original texture coordinates of the patch set, the generated halo maps better match the real scene, improving the visual effect of the generated lens halo and thereby solving the technical problems in the related art that the lens halo generation process is complex and the generation effect is poor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a method for generating lens halo according to an embodiment of the present disclosure;
fig. 2 is a flow chart of a method of generating lens halos according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a halo image produced by an ordinary lens according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a first light source sequence map according to one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a second light source sequence map according to one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a user interface for lens halo generation according to one embodiment of the present disclosure;
fig. 7 is a structural block diagram of a lens halo generation device according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In a possible implementation, the inventors found through practice and careful study that the map-overlay method commonly used for lens halo generation in the field of image processing still suffers from the technical problems of a complex generation process and a poor generation effect. On this basis, the virtual scene to which the embodiments of the present disclosure apply may be a game, an animation, or another virtual scene, and for games, the game type is generally one that contains a virtual scene and lens halos. A lens halo generation method is therefore provided, whose technical concept is: generating a patch set corresponding to the lens halo based on attribute information of the lens halo; acquiring a sequence map set corresponding to the patch set; determining sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set; and generating the lens halo based on the texture information of the patch set. This simplifies the process of generating halos for different lenses, improves the efficiency of the whole generation process, and improves the visual effect of the generated lens halo.
The above method embodiments of the present disclosure may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a smart phone, a tablet computer, a palmtop computer, a mobile internet device, a game machine, or another terminal device. Fig. 1 is a block diagram of the hardware structure of a mobile terminal for a lens halo generation method according to an embodiment of the present disclosure. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processors 102 may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), a Microcontroller Unit (MCU), a Field-Programmable Gate Array (FPGA), a Neural Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory 104 for storing data, and in one embodiment of the present disclosure may further include: a transmission device 106, an input/output device 108, and a display device 110.
In some optional embodiments based on virtual scenes, the device may further provide a human-machine interaction interface with a touch-sensitive surface, which can sense finger contact and/or gestures to interact with a Graphical User Interface (GUI). The human-machine interaction functions may include the following: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing. Executable instructions for performing these human-computer interaction functions are configured or stored in one or more processor-executable computer program products or readable storage media.
It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
In accordance with one embodiment of the present disclosure, an embodiment of a lens halo generation method is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be executed in a computer system, such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one herein.
In a possible implementation, an embodiment of the present disclosure provides a lens halo generation method, where a graphical user interface is provided by a terminal device, and the terminal device may be the aforementioned local terminal device or the aforementioned client device in a cloud interaction system. Fig. 2 is a flowchart of a lens halo generation method according to an embodiment of the present disclosure; as shown in fig. 2, the method includes the following steps:
step S202, generating a patch set corresponding to the lens halo based on the attribute information of the lens halo.
The patch set includes: a flare patch, a first halo patch, a plurality of light spot patches, and a second halo patch.
Generally, when a lens halo is generated in a virtual scene, a plurality of patches often need to be created, such as a flare patch, a circular halo patch (i.e., the first halo patch), and light spot patches, and a plurality of halo maps applied to determine the texture information of the different patches, such as a flare map, a circular halo map, and light spot maps. Here, the flare generally refers to the light emitted by a light source, such as the sun or an electric lamp; the circular halo map generally represents the halo produced around a light source when it is shot through a lens; and a light spot generally refers to a circular patch of light away from the light source.
Generally, when a lens is used to shoot a light source in a real scene, a number of transverse halos are produced depending on the relative positions of the lens and the light source. Therefore, to ensure the realism of the lens halo generated in the virtual scene and give the user a good visual effect, a transverse halo patch (i.e., the second halo patch) and transverse halo maps can be added when the lens halo is generated.
Optionally, the attribute information of the lens halo includes, but is not limited to: attribute information of the flare, attribute information of the first halo, attribute information of the plurality of light spots, and attribute information of the second halo.
The attribute information generally refers to halo information that simulates a real scene according to the light source and the virtual lens in the virtual scene, such as the brightness, size, and position of the circular and transverse halos of the light source, and the brightness, size, and position of the light beams and light spots.
In an alternative of this embodiment, a patch set of flares, circular halos, transverse halos, and light spots may be constructed according to parameters such as the size of each halo patch, the position of the halo closest to the sun, the preset interval between halo patches, the individual distance offset of each halo patch, the individual size multiple of each halo patch, the intensity and color of the rotating halo, the size multiple, the brightness multiple, the zoom size of the halo ring, the halo ring offset, the halo ring moving speed, the halo ring intensity, and the halo ring gap, so as to ensure that the lens halo generated from the sequence map set better matches the actual scene.
For example, a light spot patch can be generated based on information such as the size of the light spot patch, the position of the patch closest to the virtual light source, the preset interval between light spot patches, the individual distance offset of the light spot patch, and the individual size multiple of the light spot patch; a circular halo patch can be generated based on information such as the position of the virtual light source and the size of the circular halo patch; the flare patch can be generated based on information such as the position of the virtual light source and the size of the flare patch; and the transverse halo patch can be generated based on information such as the position of the virtual light source and the transverse halo size multiple.
Fig. 3 is a schematic diagram of a halo image produced by an ordinary lens according to an embodiment of the present disclosure. As shown in fig. 3, a number of circular halos and outward-diverging transverse halos appear at the position of the light source, and a number of circular light spots appear away from it; the effect achieved by generating the lens halo with multiple maps often needs to reach the halo level shown in fig. 3 to improve the user's viewing experience.
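The patch construction of step S202 can be sketched as follows. This is a minimal illustration under assumed defaults, not the patent's implementation; the names `Patch` and `build_patch_set` and every parameter value are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    kind: str      # "flare", "round_halo", "light_spot" or "lateral_halo"
    center: tuple  # assumed screen-space position of the quad
    size: float    # assumed scale factor of the quad

def build_patch_set(light_pos, spot_count=7, spot_spacing=0.12,
                    flare_size=0.6, halo_size=0.4, spot_size=0.08,
                    lateral_size=0.5):
    """Build a flare patch, a round (first) halo patch, a transverse
    (second) halo patch, and several light spot patches spaced at a
    preset interval along the axis away from the light source."""
    patches = [
        Patch("flare", light_pos, flare_size),
        Patch("round_halo", light_pos, halo_size),
        Patch("lateral_halo", light_pos, lateral_size),
    ]
    for i in range(spot_count):
        # Spots march away from the light source at a preset interval,
        # modelling the "individual distance offset" parameter above.
        offset = (i + 1) * spot_spacing
        patches.append(Patch("light_spot",
                             (light_pos[0] - offset, light_pos[1] - offset),
                             spot_size))
    return patches
```

A per-spot size multiple or per-spot distance offset could be threaded through as extra list parameters in the same way.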
Step S204, acquiring a sequence map set corresponding to the patch set.
The sequence map set includes a first light source sequence map and a second light source sequence map; the first light source sequence map is composed of a flare map corresponding to the flare patch, a first halo map corresponding to the first halo patch, and multiple frames of light spot maps corresponding to the plurality of light spot patches, and the second light source sequence map is composed of multiple frames of second halo maps corresponding to the second halo patch.
The sequence map set refers to the map set composed of the above halo maps. In general, two sequence maps are constructed because the transverse halo maps differ in shape from the other halo maps.
The first light source sequence map may be a sequence map composed of a flare map, a circular halo map, and a plurality of light spot maps; the second light source sequence map may be a sequence map composed of a plurality of transverse halo maps.
Fig. 4 is a schematic diagram of a first light source sequence map according to an embodiment of the present disclosure. As shown in fig. 4, numbers 1 to 7 refer to the light spot maps, number 8 refers to the circular halo map, and number 9 refers to the flare map.
Fig. 5 is a schematic diagram of a second light source sequence map according to an embodiment of the present disclosure. As shown in fig. 5, the images numbered 1, 2, and 3 are all transverse halo maps with different image textures; generating a virtual light source with several transverse halo maps of different texture information improves the realism of the generated virtual light source and thus the visual experience of the user. Numbers 4, 5, and 6 represent the mask channels corresponding to the transverse halo maps numbered 1, 2, and 3, respectively; when acquiring the mask texture coordinates of a transverse halo map, its mask channel image can be obtained first, and the corresponding mask texture coordinates then determined from that channel image.
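The region information of each numbered map in such a sequence map can be described by its cell in a regular grid. A minimal sketch, assuming a row-major grid with 1-based cell indices as in the figure annotations (the grid dimensions are an assumption, not stated by the patent):

```python
def atlas_region(index, cols=3, rows=3):
    """Return the normalized (u0, v0, u1, v1) rectangle of cell `index`
    (1-based, row-major) in a cols x rows sequence map."""
    i = index - 1
    cell_w, cell_h = 1.0 / cols, 1.0 / rows
    col, row = i % cols, i // cols
    return (col * cell_w, row * cell_h,
            (col + 1) * cell_w, (row + 1) * cell_h)
```

With a 3 x 3 layout, cell 9 (the flare map in fig. 4) would occupy the bottom-right ninth of the texture.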
Step S206, determining sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set.
The original texture coordinates refer to the coordinate position at which each map patch appears in the user interface; the region information may include, but is not limited to, the position, size, and serial number of each map in the sequence map set; and the sampling texture coordinates are texture coordinates obtained by rotating, offsetting, and scaling the original texture coordinates, through which the texture information used to generate the different patches can be accurately extracted from the sequence map.
To ensure that the lens halo presents a good visual effect in the user interface, the sampling texture coordinates of each map can be determined based on the original texture coordinates of its patch and the region information of the map.
For example, when a light source is shot with a lens, the image of the light source in the lens produces several transverse halos in different directions, as shown in fig. 3, and parameters such as the size and brightness of each transverse halo also differ. Therefore, before a virtual light source is generated using the transverse halo maps in fig. 5, the sampling texture coordinates corresponding to each transverse halo map may be determined from the patch information of the corresponding transverse halo patch in the patch set, which ensures efficiency when the transverse halos are subsequently generated. When the lens halo is generated in the virtual scene, several transverse halo maps of different directions and sizes may be set according to the position and brightness of the light source and its position relative to the virtual lens, so that the halo of the virtual light source finally presented in the user interface better matches the real scene.
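The rotate/offset/scale step that turns a patch's original texture coordinates into sampling coordinates inside an atlas sub-rectangle can be sketched as follows. The transform order and the pivot at the patch centre are assumptions made for illustration; the patent does not fix them.

```python
import math

def sample_uv(u, v, region, scale=1.0, offset=(0.0, 0.0), angle=0.0):
    """Map a patch's original UV in [0, 1]^2 to a sampling UV inside the
    atlas sub-rectangle region = (u0, v0, u1, v1), after rotating by
    `angle` (radians), scaling, and offsetting about the patch centre."""
    cu, cv = u - 0.5, v - 0.5                  # centre the coordinate
    ca, sa = math.cos(angle), math.sin(angle)
    ru = (cu * ca - cv * sa) / scale + 0.5 + offset[0]
    rv = (cu * sa + cv * ca) / scale + 0.5 + offset[1]
    u0, v0, u1, v1 = region
    return (u0 + ru * (u1 - u0), v0 + rv * (v1 - v0))
```

Rotating the UV rather than the quad itself lets differently oriented transverse halos share one map cell, which is one plausible reading of "setting several transverse halo maps of different directions".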
Step S208, sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set.
The texture information is obtained by sampling the colors of the different maps in the sequence map according to preset brightness information, and can be used to generate the corresponding patches.
After the sampling texture coordinates are determined, the corresponding map can be selected from the sequence map set and processed using the coordinate position information given by the sampling texture coordinates, yielding the texture information used to generate the patch. In an alternative of this embodiment, the operations for adjusting the map may include, but are not limited to: brightness adjustment, color adjustment, and the like.
For example, after the transverse halo patches corresponding to the several transverse halos are determined, the transverse halo maps can be sampled from the sequence map set according to the sampling texture coordinates corresponding to those patches, and their brightness and color then adjusted to obtain texture information matching the halo of the light source in the actual scene, improving the realism of the light source halo finally presented in the user interface.
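The brightness and color adjustment of a sampled texel can be sketched as a per-channel tint and gain. This is a simplification for illustration; the patent does not specify the exact operators.

```python
def adjust_texel(rgb, tint=(1.0, 1.0, 1.0), brightness=1.0):
    """Scale a sampled texel by a tint colour and a brightness gain,
    clamping each channel to [0, 1]."""
    return tuple(min(1.0, max(0.0, c * t * brightness))
                 for c, t in zip(rgb, tint))
```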
Step S210, generating a lens halo based on the texture information of the patch set.
After the texture information corresponding to each map patch is determined, the final lens halo may be generated from that texture information.
Fig. 6 is a schematic diagram of a user interface with a generated lens halo according to an embodiment of the present disclosure. As shown in fig. 6, the several transverse halos, circular halos, and the flare image of the light source captured by the virtual lens can be clearly observed in the user interface. Generating the lens halo through the above steps greatly improves its visual effect, giving users a good experience whether they are playing games or watching animated videos.
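Composing the shaded patches into the final frame is typically an additive blend; a minimal per-pixel sketch follows. Additive blending is an assumption consistent with how flare layers are usually combined, not a detail stated in the patent.

```python
def composite(scene_rgb, halo_layers):
    """Additively blend halo layer colours over the scene colour,
    clamping each channel to [0, 1]."""
    out = list(scene_rgb)
    for layer in halo_layers:
        out = [min(1.0, o + l) for o, l in zip(out, layer)]
    return tuple(out)
```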
In at least some embodiments of the present disclosure, a patch set corresponding to the lens halo is generated based on attribute information of the lens halo; a sequence map set corresponding to the patch set is acquired; sampling texture coordinates of the multi-frame maps are determined based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; the sequence map set is sampled based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set; and the lens halo is generated based on the texture information of the patch set. Through this way of generating the lens halo, the corresponding halo maps are fetched from the sequence map set according to the information of the patch set, with no need to add every halo map to the virtual scene and then adjust it according to the positions of the light source and the virtual lens, which greatly simplifies the whole generation process; and because the halo maps can be adjusted according to the original texture coordinates of the patch set, the generated halo maps better match the real scene, improving the visual effect of the generated lens halo and thereby solving the technical problems in the related art that the lens halo generation process is complex and the generation effect is poor.
Optionally, determining the sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set includes: determining the sampling texture coordinates of the flare map based on the original texture coordinates of the flare patch and the region information of the region where the flare map is located in the first light source sequence map; determining the sampling texture coordinates of the first halo map based on the original texture coordinates of the first halo patch and the region information of the region where the first halo map is located in the first light source sequence map; determining the sampling texture coordinates of the multi-frame light spot maps based on the original texture coordinates of the light spot patches and the region information of the regions where the multi-frame light spot maps are located in the first light source sequence map; and determining the sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map.
In an alternative of this embodiment, the halo map corresponding to a patch may first be determined according to the original texture coordinate information of the patch, and the sampling texture coordinates of that halo map may then be determined according to the region information of the halo map in the sequence map set.
For example, if the patch is the light ray patch, the region information of the corresponding light ray map in the first light source sequence map, for example the size and serial number of the light ray map with serial number 9 in fig. 4, may be determined first, and the region information may then be adjusted according to the original texture coordinates of the light ray patch in the virtual scene, so as to determine the sampling texture coordinates of the light ray map in the virtual scene.
Similarly, if the patch is the first halo patch, the region information of the circular halo map, for example the circular halo map with serial number 8 in fig. 4, may be determined from the first light source sequence map, and the region information may then be adjusted according to the original texture coordinates of the first halo patch, so as to determine the sampling texture coordinates of the circular halo map in the virtual scene.
If the patch is a light spot patch, the region information of the light spot maps, for example the light spot maps numbered 1-7 in fig. 4, may be determined from the first light source sequence map, and the region information may then be adjusted according to the original texture coordinates of the light spot patch, so as to determine the sampling texture coordinates of the light spot map in the virtual scene.
If the patch is the second halo patch, the region information of the transverse halo maps, for example the transverse halo maps with serial numbers 1, 2, and 3 in fig. 5, may be determined from the second light source sequence map, and the region information may then be adjusted according to the original texture coordinates of the second halo patch, so as to determine the sampling texture coordinates of the transverse halo maps in the virtual scene.
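The remapping in the four cases above is essentially the same operation: the patch's original texture coordinates are rescaled into the atlas cell occupied by the chosen halo map. A minimal sketch of that remapping follows; the function and parameter names are illustrative and not taken from the disclosure:

```python
def to_sampling_uv(original_uv, region_offset, region_size):
    # Remap a patch UV in [0, 1] x [0, 1] into the sub-region of the
    # light source sequence map occupied by one halo map, e.g. the cell
    # holding the map with a given serial number in fig. 4.
    u, v = original_uv
    off_u, off_v = region_offset
    size_u, size_v = region_size
    return (off_u + u * size_u, off_v + v * size_v)

# A map occupying the top-left quarter of the sequence map:
uv = to_sampling_uv((0.5, 0.5), (0.0, 0.0), (0.5, 0.5))
```

The patch's corners then land exactly on the corners of the map's region, so sampling never bleeds into a neighboring cell of the atlas.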
By determining the sampling texture coordinates of each map, the halo maps can be called directly from the sequence map set when they are subsequently adjusted and generated, without adding the halo maps to the virtual scene corresponding to the user interface. This avoids the problems that the halo maps must be moved repeatedly as the virtual lens moves, that the lens halo generation process becomes too complex, and that generation efficiency is low.
Optionally, determining the sampling texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence map includes: determining target texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence map; and determining the sampling texture coordinates of the light ray map based on the target texture coordinates, a preset rotation center, and a preset rotation angle of the light ray map.
To better illustrate the whole process of obtaining the sampling texture coordinates of a halo map, the sampling texture coordinates of the light ray map are described in detail below:
Firstly, after the patch sets are generated based on the attribute information of the lens halo, the specific type of each patch in a patch set may be determined according to the position information of that patch; if a patch lies directly between the virtual light source and the virtual lens, its type may be determined to be the light ray patch. Then, based on the position coordinates at which the light ray patch is presented in the user interface, the original texture coordinates of the light ray patch are determined, the required light ray map is extracted from the first light source sequence map, and region information such as the position coordinates and size of the light ray map is obtained.
After the original texture coordinates and the region information are obtained, the light ray map may be further adjusted according to preset parameters, such as the position and brightness of the virtual light-emitting source and its position relative to the virtual lens, so as to control the position, size, brightness, spacing between maps, and other parameters of the light ray map displayed in the user interface. In this way, the target texture coordinates of the light ray map in the user interface can be determined.
Finally, considering that in a real scene the light rays produce a dynamic effect as the lens angle changes, the light ray map may be processed dynamically while the lens halo is generated. For example, when the virtual lens moves, the light ray map is controlled to rotate about a preset rotation center by a preset rotation angle according to the displacement and direction of the movement. Since the light ray map generally matches the virtual light-emitting source, this dynamic processing may be performed on the basis of the determined target texture coordinates to obtain the final sampling texture coordinates. This avoids the problem of the light ray map remaining unchanged when the virtual lens moves, so that the generated lens halo better matches a real scene and the user's viewing experience is improved.
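The rotation step alone can be sketched as a standard rotation of a texture coordinate about a pivot; the rotation center and angle would come from the preset parameters and the lens movement, and all names below are illustrative:

```python
import math

def rotate_uv(uv, center, angle_rad):
    # Rotate a target texture coordinate about the preset rotation
    # center by the preset rotation angle, so the light ray map turns
    # as the virtual lens moves.
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    du, dv = uv[0] - center[0], uv[1] - center[1]
    return (center[0] + du * cos_a - dv * sin_a,
            center[1] + du * sin_a + dv * cos_a)

# A quarter turn about the map center moves (1.0, 0.5) to (0.5, 1.0):
sample_uv = rotate_uv((1.0, 0.5), (0.5, 0.5), math.pi / 2)
```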
Optionally, determining the sampling texture coordinates of the multi-frame second halo maps based on the original texture coordinates of the second halo patch and the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map includes: determining first map texture coordinates and first mask texture coordinates of the multi-frame second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map, and the scaling sizes of the multi-frame second halo maps; determining second map texture coordinates of the multi-frame second halo maps based on the first map texture coordinates of the multi-frame second halo maps and the offsets of the multi-frame second halo maps; determining second mask texture coordinates of the multi-frame second halo maps based on the first mask texture coordinates of the multi-frame second halo maps and the offsets of the multi-frame second halo maps; and obtaining the sampling texture coordinates based on the second map texture coordinates and the second mask texture coordinates.
Because the transverse halo maps are easily affected by the virtual lens and change greatly as the virtual lens changes, when the sampling texture coordinates of the transverse halo maps are determined, the transverse halo maps in fig. 5 may be scaled, displaced, and masked respectively, and the processed transverse halo maps may finally be superimposed at the main position of the virtual light-emitting source. At the same time, post-processing operations such as mask fade-out, displacement fade-out, and a dynamic transverse-line halo in which the light spots move with the change of the lens angle and position are generated, so as to obtain a transverse halo change that better matches a real scene.
Specifically, the first map texture coordinates and first mask texture coordinates of the transverse halo maps may be preliminarily determined according to the original texture coordinates of the second halo patch, together with the region information and scaling sizes of the transverse halo maps in fig. 5.
When the virtual lens moves, the position of the virtual light-emitting source in the user interface also changes, and the positions of the transverse halo maps change with it. Therefore, the offset displacement of the halo maps can be preliminarily determined based on the position of the virtual light-emitting source relative to the virtual lens after the lens moves and on the direction of the position change. The offset displacement is then applied to the first map texture coordinates and the first mask texture coordinates respectively, so as to obtain the offset second map texture coordinates and second mask texture coordinates.
Finally, the second map texture coordinates and the second mask texture coordinates are fused to obtain the sampling texture coordinates of the transverse halos, which ensures that the generated transverse halos better match how transverse halos change after the lens moves in an actual scene.
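A minimal sketch of the offset step, assuming the offset displacement has already been derived from the light source's movement relative to the lens; the function and variable names are illustrative:

```python
def offset_coords(first_map_uv, first_mask_uv, displacement):
    # Apply the same offset displacement to the first map texture
    # coordinate and the first mask texture coordinate, producing the
    # offset second coordinates that are later fused into the final
    # sampling texture coordinates.
    dx, dy = displacement
    second_map_uv = (first_map_uv[0] + dx, first_map_uv[1] + dy)
    second_mask_uv = (first_mask_uv[0] + dx, first_mask_uv[1] + dy)
    return second_map_uv, second_mask_uv
```

Applying one shared displacement keeps the map and its mask aligned, so the mask still gates the same part of the streak after the lens moves.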
Optionally, determining the first map texture coordinates and first mask texture coordinates of the multi-frame second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map, and the scaling sizes of the multi-frame second halo maps includes: determining a target moving direction based on a preset position; determining the moving positions of the multi-frame second halo maps based on the preset position, the second halo spacing, and the flare values of the multi-frame second halo maps; and determining the first map texture coordinates and first mask texture coordinates of the multi-frame second halo maps based on the target moving direction, the moving positions of the multi-frame second halo maps, the moving speeds of the multi-frame second halo maps, the original texture coordinates of the second halo patch, the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map, and the scaling sizes of the multi-frame second halo maps.
The preset position may refer to the position of the virtual light source relative to the virtual lens after the movement. Specifically, when the offset displacement of the transverse halo maps is determined, the direction of the relative movement between the virtual lens and the virtual light source may be determined from the position of the virtual light source relative to the virtual lens after the movement, so as to determine the target moving direction of the transverse halo maps.
After the target moving direction is determined, the target moving positions of the transverse halo maps can be further determined according to the preset spacing between the transverse halo maps and the change of their preset flare values, so as to roughly determine the texture coordinate positions of the transverse halo maps.
Finally, the first map texture coordinates and first mask texture coordinates of the transverse halo maps are determined based on a series of preset parameters, such as the texture coordinates of the transverse halos before the displacement change occurs, the coordinate position of the virtual light-emitting source, the numbers of the transverse halo maps to be used from the second light source sequence map, the scaling sizes of the transverse halos, the spacing between the transverse halos, the offset displacement of each transverse halo, and the moving speed of each transverse halo.
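The placement of the transverse halos along the target moving direction might be sketched as follows, where each halo is pushed out by the preset spacing and scaled by its own flare value; how spacing and flare combine is an assumption for illustration, and all names are hypothetical:

```python
def halo_positions(source_pos, direction, spacing, flare_values):
    # Place each transverse halo map along the target moving direction,
    # offset by the preset spacing between halos and scaled by that
    # halo's flare value.
    positions = []
    for i, flare in enumerate(flare_values):
        dist = i * spacing * flare
        positions.append((source_pos[0] + direction[0] * dist,
                          source_pos[1] + direction[1] * dist))
    return positions
```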
Optionally, sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain the texture information of the patch set includes: sampling the first light source sequence map based on the sampling texture coordinates of the light ray map to obtain the color of the light ray map, and obtaining the texture information of the light ray patch based on the color of the light ray map and the brightness of the light ray map; sampling the first light source sequence map based on the sampling texture coordinates of the first halo map to obtain the color of the first halo map, and obtaining the texture information of the first halo patch based on the color of the first halo map and the brightness of the first halo map; sampling the first light source sequence map based on the sampling texture coordinates of the multi-frame light spot maps to obtain the colors of the multi-frame light spot maps, and obtaining the texture information of the multiple light spot patches based on the colors of the multi-frame light spot maps and the brightness of the multi-frame light spot maps; and sampling the second light source sequence map based on the sampling texture coordinates of the multi-frame second halo maps to obtain the colors of the multi-frame second halo maps, and obtaining the texture information of the second halo patch based on the colors of the multi-frame second halo maps and the brightness of the multi-frame second halo maps.
After the sampling texture coordinates of each halo map are determined, the texture information used to generate the patch set may be determined based on those texture coordinates.
Specifically, the texture information may include, but is not limited to, parameters such as map color, map brightness, and map texture detail. For example, if the light ray patch needs to be generated, the related texture information, such as the color and brightness of the light ray map, may be obtained from the first light source sequence map based on the sampling texture coordinates of the light ray map; the light ray brightness actually used when generating the light ray patch is then determined based on the brightness of the map and a preset brightness multiple of the light ray map. Based on the above color and brightness parameters of the light rays, the texture information of the light ray patch can be generated.
Similarly, if the first halo patch needs to be generated, the related texture information may be obtained from the first light source sequence map based on the sampling texture coordinates of the circular halo map, and the circular halo brightness used for generating the first halo patch is determined based on the brightness multiple of the first halo map and the brightness of the circular halo map, so as to obtain the texture information for generating the first halo patch.
If a light spot patch needs to be generated, the related texture information can be obtained from the first light source sequence map based on the sampling texture coordinates of the light spot map, and the light spot brightness for generating the light spot patch is determined based on the brightness multiple of the light spot map and the brightness of the light spot map, so as to obtain the texture information for generating the light spot patch.
If the second halo patch needs to be generated, the related texture information may be obtained from the second light source sequence map based on the sampling texture coordinates of the transverse halo maps. It should be noted that, since the brightness of the transverse halo maps is affected by the virtual light source, the transverse halo brightness may be determined based on the coordinate position of the virtual light source combined with the brightness of the transverse halo maps, so as to obtain the texture information for generating the transverse halo patch.
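The common pattern in the cases above — a sampled map color scaled by the map brightness and a preset brightness multiple — might be sketched as follows; the function name and the multiplicative combination are illustrative assumptions:

```python
def patch_texture_info(sampled_color, map_brightness, brightness_multiple):
    # Scale the color sampled from the light source sequence map by the
    # brightness actually used for the patch, which is the map's
    # brightness times the patch's preset brightness multiple.
    used_brightness = map_brightness * brightness_multiple
    return tuple(channel * used_brightness for channel in sampled_color)
```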
Optionally, obtaining the texture information of the multiple light spot patches based on the colors of the multi-frame light spot maps and the brightness of the multi-frame light spot maps includes: obtaining texture information of a first light spot patch based on the color of a first frame light spot map in the multi-frame light spot maps and the brightness of the first frame light spot map; and obtaining texture information of a second light spot patch based on the color of a second light spot map in the multi-frame light spot maps, the color of a third light spot map, and the brightness of the second light spot map, where the second light spot map is any map in the multi-frame light spot maps other than the first light spot map, and the third light spot map is the map one frame before the second light spot map in the multi-frame light spot maps.
In an alternative of this embodiment, considering that light spots overlap when a lens captures a light source in an actual scene, when the light spot brightness corresponding to each light spot patch is determined, the brightness of the current light spot may be determined by combining the light spot parameters of the frames before and after the current light spot, that is, of the first light spot map, the second light spot map, and the third light spot map.
Specifically, the color and brightness information of the first light spot map can first be obtained according to the texture coordinate information of the first light spot map; at the same time, the color and brightness of the second light spot map and the color information of the third light spot map can be obtained and superimposed onto the color and brightness of the first light spot map, which ensures that the overlapping light spot regions better match the light spot information captured by a lens in a real scene.
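One illustrative way to realise this superposition is to add the neighboring frame's spot color before applying the current spot's brightness; the additive-then-clamp blend below is an assumption for the sketch, not an operator specified by the disclosure:

```python
def blend_spot(curr_color, prev_color, curr_brightness):
    # Superimpose the previous frame's spot color onto the current
    # spot's color, then scale by the current brightness and clamp to
    # [0, 1], so overlapping spot regions brighten as with a real lens.
    return tuple(min(1.0, (c + p) * curr_brightness)
                 for c, p in zip(curr_color, prev_color))
```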
Optionally, sampling the second light source sequence map based on the sampling texture coordinates of the multi-frame second halo maps to obtain the colors of the multi-frame second halo maps, and obtaining the texture information of the second halo patch based on the colors of the multi-frame second halo maps and the brightness of the multi-frame second halo maps, includes: sampling the second light source sequence map based on the second map texture coordinates of the multi-frame second halo maps to obtain the map colors of the multi-frame second halo maps; sampling a preset channel of the second light source sequence map based on the second mask texture coordinates of the multi-frame second halo maps to obtain the mask colors of the multi-frame second halo maps; obtaining the brightness of the multi-frame second halo maps based on the second halo positions and the intensities of the multi-frame second halo maps; and fusing the map colors of the multi-frame second halo maps, the mask colors of the multi-frame second halo maps, and the brightness of the multi-frame second halo maps to obtain the texture information of the second halo patch.
The preset channel may be a channel of the transverse lens halo sequence map, and is generally the mask channel of the transverse halo maps.
When the texture information of the transverse halo maps is obtained, the map colors in the second light source sequence map and the preset channel of the maps can be sampled respectively based on the second map texture coordinates and the second mask texture coordinates, so as to obtain the map colors and mask colors of the transverse halo maps.
Since the transverse halo is affected by the position and brightness of the virtual light source, the transverse halo brightness can be determined based on the halo position of the transverse halo patch and the map brightness of the transverse halo maps, so that the transverse halo brightness is more realistic.
After the map color, mask color, and map brightness are obtained, these parameters may be fused to obtain the final texture information for generating the transverse halo patch.
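The fusion might be sketched as a per-channel product of map color, mask value, and brightness; the multiplicative fusion operator is an assumption for illustration, and the names are hypothetical:

```python
def fuse_transverse_halo(map_color, mask_value, brightness):
    # The mask channel gates which parts of the transverse streak are
    # visible; brightness then scales the surviving color.
    return tuple(channel * mask_value * brightness
                 for channel in map_color)
```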
After the texture information corresponding to each patch is obtained, the patches may be fused and generated based on the texture information to obtain the final lens halo; a specific halo effect may be as shown in fig. 6.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions that enable a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present disclosure.
In this embodiment, a lens halo generating apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred embodiments; what has already been described is not repeated. As used below, the terms "unit" and "module" may implement a combination of software and/or hardware of predetermined functions. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a lens halo generating apparatus according to an embodiment of the present disclosure, in which a terminal device provides a graphical user interface whose displayed content at least partially includes a virtual scene and a lens halo. As shown in fig. 7, the apparatus includes: a patch generating module 702, configured to generate a patch set corresponding to the lens halo based on the attribute information of the lens halo, where the patch set includes a light ray patch, a first halo patch, a plurality of light spot patches, and a second halo patch; a sequence map acquisition module 704, configured to acquire a sequence map set corresponding to the patch set, where the sequence map set includes a first light source sequence map and a second light source sequence map, the first light source sequence map is composed of a light ray map corresponding to the light ray patch, a first halo map corresponding to the first halo patch, and multi-frame light spot maps corresponding to the multiple light spot patches, and the second light source sequence map is composed of multi-frame second halo maps corresponding to the second halo patch; a texture coordinate generating module 706, configured to determine the sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; a texture information acquisition module 708, configured to sample the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain the texture information of the patch set; and a lens halo generating module 710, configured to generate the lens halo based on the texture information of the patch set.
Optionally, the texture coordinate generating module 706 includes: a first coordinate determination unit, configured to determine the sampling texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence map; a second coordinate determination unit, configured to determine the sampling texture coordinates of the first halo map based on the original texture coordinates of the first halo patch and the region information of the region where the first halo map is located in the first light source sequence map; a third coordinate determination unit, configured to determine the sampling texture coordinates of the multi-frame light spot maps based on the original texture coordinates of the light spot patches and the region information of the regions where the multi-frame light spot maps are located in the first light source sequence map; and a fourth coordinate determination unit, configured to determine the sampling texture coordinates of the multi-frame second halo maps based on the original texture coordinates of the second halo patch and the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map.
Optionally, the first coordinate determination unit includes: a first coordinate determination subunit, configured to determine target texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence map; and a second coordinate determination subunit, configured to determine the sampling texture coordinates of the light ray map based on the target texture coordinates, a preset rotation center, and a preset rotation angle of the light ray map.
Optionally, the fourth coordinate determination unit includes: a third coordinate determination subunit, configured to determine first map texture coordinates and first mask texture coordinates of the multi-frame second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map, and the scaling sizes of the multi-frame second halo maps; a fourth coordinate determination subunit, configured to determine second map texture coordinates of the multi-frame second halo maps based on the first map texture coordinates of the multi-frame second halo maps and the offsets of the multi-frame second halo maps, and to determine second mask texture coordinates of the multi-frame second halo maps based on the first mask texture coordinates of the multi-frame second halo maps and the offsets of the multi-frame second halo maps; and a fifth coordinate determination subunit, configured to obtain the sampling texture coordinates based on the second map texture coordinates and the second mask texture coordinates.
Optionally, the fourth coordinate determination subunit is further configured to: determine a target moving direction based on a preset position; determine the moving positions of the multi-frame second halo maps based on the preset position, the second halo spacing, and the flare values of the multi-frame second halo maps; and determine the first map texture coordinates and first mask texture coordinates of the multi-frame second halo maps based on the target moving direction, the moving positions of the multi-frame second halo maps, the moving speeds of the multi-frame second halo maps, the original texture coordinates of the second halo patch, the region information of the regions where the multi-frame second halo maps are located in the second light source sequence map, and the scaling sizes of the multi-frame second halo maps.
Optionally, the texture information acquisition module 708 includes: a first information acquisition unit, configured to sample the first light source sequence map based on the sampling texture coordinates of the light ray map to obtain the color of the light ray map, and to obtain the texture information of the light ray patch based on the color of the light ray map and the brightness of the light ray map; a second information acquisition unit, configured to sample the first light source sequence map based on the sampling texture coordinates of the first halo map to obtain the color of the first halo map, and to obtain the texture information of the first halo patch based on the color of the first halo map and the brightness of the first halo map; a third information acquisition unit, configured to sample the first light source sequence map based on the sampling texture coordinates of the multi-frame light spot maps to obtain the colors of the multi-frame light spot maps, and to obtain the texture information of the multiple light spot patches based on the colors of the multi-frame light spot maps and the brightness of the multi-frame light spot maps; and a fourth information acquisition unit, configured to sample the second light source sequence map based on the sampling texture coordinates of the multi-frame second halo maps to obtain the colors of the multi-frame second halo maps, and to obtain the texture information of the second halo patch based on the colors of the multi-frame second halo maps and the brightness of the multi-frame second halo maps.
Optionally, the third information acquisition unit includes: a first information acquisition subunit, configured to obtain texture information of a first light spot patch based on the color of a first frame light spot map in the multi-frame light spot maps and the brightness of the first frame light spot map; and a second information acquisition subunit, configured to obtain texture information of a second light spot patch based on the color of a second light spot map in the multi-frame light spot maps, the color of a third light spot map, and the brightness of the second light spot map, where the second light spot map is any map in the multi-frame light spot maps other than the first light spot map, and the third light spot map is the map one frame before the second light spot map in the multi-frame light spot maps.
Optionally, the fourth information acquisition unit includes: a first color acquisition subunit, configured to sample the second light source sequence map based on the second map texture coordinates of the multi-frame second halo maps to obtain the map colors of the multi-frame second halo maps; a second color acquisition subunit, configured to sample a preset channel of the second light source sequence map based on the second mask texture coordinates of the multi-frame second halo maps to obtain the mask colors of the multi-frame second halo maps; a first brightness acquisition subunit, configured to obtain the brightness of the multi-frame second halo maps based on the second halo positions and the intensities of the multi-frame second halo maps; and a third information acquisition subunit, configured to fuse the map colors of the multi-frame second halo maps, the mask colors of the multi-frame second halo maps, and the brightness of the multi-frame second halo maps to obtain the texture information of the second halo patch.
Optionally, the attribute information of the lens halo includes: attribute information of the light rays, attribute information of the first halo, attribute information of the multiple light spots, and attribute information of the second halo.
It should be noted that the above units and modules may be implemented by software or hardware; the latter may be implemented in, but is not limited to, the following manners: the units and modules are all located in the same processor; alternatively, the units and modules are located in different processors in any combination.
Embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps in any of the above method embodiments when executed.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Alternatively, in this embodiment, the above computer-readable storage medium may be configured to store a computer program for executing the following steps:
S1: generating a patch set corresponding to the lens halo based on attribute information of the lens halo, wherein the patch set comprises: a ray patch, a first halo patch, a plurality of light-spot patches, and a second halo patch;
S2: acquiring a sequence map set corresponding to the patch set, wherein the sequence map set comprises a first light source sequence map and a second light source sequence map, the first light source sequence map is composed of a ray map corresponding to the ray patch, a first halo map corresponding to the first halo patch, and multiple frames of light-spot maps corresponding to the plurality of light-spot patches, and the second light source sequence map is composed of multiple frames of second halo maps corresponding to the second halo patch;
S3: determining sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and region information of the regions where the multi-frame maps are located in the sequence map set;
S4: sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set;
S5: generating the lens halo based on the texture information of the patch set.
Optionally, determining the sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set includes: determining sampling texture coordinates of the ray map based on the original texture coordinates of the ray patch and region information of the region where the ray map is located in the first light source sequence map; determining sampling texture coordinates of the first halo map based on the original texture coordinates of the first halo patch and region information of the region where the first halo map is located in the first light source sequence map; determining sampling texture coordinates of the multiple frames of light-spot maps based on the original texture coordinates of the light-spot patches and region information of the regions where the multiple frames of light-spot maps are located in the first light source sequence map; and determining sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map.
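The "region information" above identifies each map's frame within a sequence map (a sprite-sheet-style texture atlas). As an illustrative aside, for a uniform grid layout the region of a given frame can be computed as follows; the grid convention and function name are assumptions for illustration, not the implementation in this disclosure.

```python
def frame_region(frame_index, cols, rows):
    # Region (u0, v0, width, height), in normalized texture coordinates,
    # of one frame in a cols x rows sequence map laid out row by row.
    w, h = 1.0 / cols, 1.0 / rows
    u0 = (frame_index % cols) * w
    v0 = (frame_index // cols) * h
    return (u0, v0, w, h)
```

A patch's original UVs in [0, 1] can then be remapped into this sub-region to produce sampling texture coordinates.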
Optionally, determining the sampling texture coordinates of the ray map based on the original texture coordinates of the ray patch and the region information of the region where the ray map is located in the first light source sequence map includes: determining target texture coordinates of the ray map based on the original texture coordinates of the ray patch and the region information of the region where the ray map is located in the first light source sequence map; and determining the sampling texture coordinates of the ray map based on the target texture coordinates, a preset rotation center, and a preset rotation angle of the ray map.
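The rotation step above is a standard 2D rotation of texture coordinates about a fixed center. A minimal sketch, assuming an angle in radians and normalized UVs (names are illustrative):

```python
import math

def rotate_uv(uv, center, angle_rad):
    # Rotate a target texture coordinate about a preset rotation center
    # by a preset rotation angle, yielding the sampling texture coordinate.
    cx, cy = center
    du, dv = uv[0] - cx, uv[1] - cy
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (cx + du * c - dv * s, cy + du * s + dv * c)
```

Animating the angle over time would make the ray map appear to spin around its rotation center.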
Optionally, determining the sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map includes: determining first map texture coordinates and first mask texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map, and the scaling sizes of the multiple frames of second halo maps; determining second map texture coordinates of the multiple frames of second halo maps based on the first map texture coordinates and the offsets of the multiple frames of second halo maps; determining second mask texture coordinates of the multiple frames of second halo maps based on the first mask texture coordinates and the offsets of the multiple frames of second halo maps; and obtaining the sampling texture coordinates based on the second map texture coordinates and the second mask texture coordinates.
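The scale-then-offset derivation above might be sketched as follows. Scaling about the center of the frame region and a simple additive offset are assumed conventions for illustration; the disclosure does not fix these formulas.

```python
def scaled_uv(base_uv, region, scale):
    # First map/mask texture coordinate: remap the patch's original UV into
    # the frame region, then scale about the region's center (assumed pivot).
    u0, v0, w, h = region
    cu, cv = u0 + 0.5 * w, v0 + 0.5 * h
    u, v = u0 + base_uv[0] * w, v0 + base_uv[1] * h
    return (cu + (u - cu) / scale, cv + (v - cv) / scale)

def offset_uv(uv, offset):
    # Second map/mask texture coordinate: apply the per-frame offset.
    return (uv[0] + offset[0], uv[1] + offset[1])
```

The same two steps would be applied once with the map region and once with the mask region to yield the second map and second mask texture coordinates.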
Optionally, determining the first map texture coordinates and the first mask texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map, and the scaling sizes of the multiple frames of second halo maps includes: determining a target moving direction based on a preset position; determining moving positions of the multiple frames of second halo maps based on the preset position, the second halo interval, and the flare values of the multiple frames of second halo maps; and determining the first map texture coordinates and the first mask texture coordinates of the multiple frames of second halo maps based on the target moving direction, the moving positions, the moving speeds of the multiple frames of second halo maps, the original texture coordinates of the second halo patch, the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map, and the scaling sizes of the multiple frames of second halo maps.
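For orientation only: a common lens-flare convention (an assumption here, not stated in this disclosure in these terms) is that secondary halos lie along the line from the light source's screen position through the screen center, spaced by a fixed interval. A minimal sketch:

```python
def ghost_positions(light_pos, count, spacing):
    # Target moving direction: from the light's screen position toward the
    # screen center (assumed at (0.5, 0.5) in normalized coordinates).
    # Each second halo is placed `spacing` steps farther along that line.
    cx, cy = 0.5, 0.5
    dx, dy = cx - light_pos[0], cy - light_pos[1]
    return [(light_pos[0] + dx * spacing * i,
             light_pos[1] + dy * spacing * i) for i in range(1, count + 1)]
```

With spacing greater than 1.0, later halos overshoot the center, which matches the familiar chain of ghosts crossing the screen.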
Optionally, sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain the texture information of the patch set includes: sampling the first light source sequence map based on the sampling texture coordinates of the ray map to obtain the color of the ray map, and obtaining texture information of the ray patch based on the color and brightness of the ray map; sampling the first light source sequence map based on the sampling texture coordinates of the first halo map to obtain the color of the first halo map, and obtaining texture information of the first halo patch based on the color and brightness of the first halo map; sampling the first light source sequence map based on the sampling texture coordinates of the multiple frames of light-spot maps to obtain the colors of the multiple frames of light-spot maps, and obtaining texture information of the plurality of light-spot patches based on the colors and brightness of the multiple frames of light-spot maps; and sampling the second light source sequence map based on the sampling texture coordinates of the multiple frames of second halo maps to obtain the colors of the multiple frames of second halo maps, and obtaining texture information of the second halo patch based on the colors and brightness of the multiple frames of second halo maps.
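Each branch above follows the same pattern: sample a color at the computed texture coordinate, then modulate it by the patch's brightness. A minimal sketch with a nearest-texel lookup; the row-major image layout and function names are illustrative assumptions.

```python
def sample_nearest(image, uv):
    # Nearest-texel sample of a row-major RGB image; uv components in [0, 1).
    h, w = len(image), len(image[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return image[y][x]

def patch_texture(image, uv, brightness):
    # Texture information = sampled color modulated by brightness.
    r, g, b = sample_nearest(image, uv)
    return (r * brightness, g * brightness, b * brightness)
```

In a real renderer this lookup would be a filtered texture fetch in a fragment shader; the Python form only shows the data flow.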
Optionally, obtaining the texture information of the plurality of light-spot patches based on the colors and brightness of the multiple frames of light-spot maps includes: obtaining texture information of a first light-spot patch based on the color and brightness of the first frame of light-spot map among the multiple frames of light-spot maps; and obtaining texture information of a second light-spot patch based on the color of a second light-spot map, the color of a third light-spot map, and the brightness of the second light-spot map, wherein the second light-spot map is any one of the multiple frames of light-spot maps other than the first frame, and the third light-spot map is the frame immediately preceding the second light-spot map.
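The disclosure does not specify how the second and third light-spot map colors are combined. One plausible fusion, shown purely as an assumption, is a linear blend of the current frame's color with the preceding frame's, then modulation by brightness; the blend weight `trail` is invented for illustration.

```python
def spot_texture(curr_color, prev_color, brightness, trail=0.5):
    # Blend the current spot color with the previous frame's color
    # (softening frame-to-frame transitions), then modulate by brightness.
    return tuple((c * (1.0 - trail) + p * trail) * brightness
                 for c, p in zip(curr_color, prev_color))
```

The first light-spot patch, having no preceding frame, would use its own color directly.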
Optionally, sampling the second light source sequence map based on the sampling texture coordinates of the multiple frames of second halo maps to obtain the colors of the multiple frames of second halo maps, and obtaining the texture information of the second halo patch based on the colors and brightness of the multiple frames of second halo maps, includes: sampling the second light source sequence map based on the second map texture coordinates of the multiple frames of second halo maps to obtain the map colors of the multiple frames of second halo maps; sampling a preset channel of the second light source sequence map based on the second mask texture coordinates of the multiple frames of second halo maps to obtain the mask colors of the multiple frames of second halo maps; obtaining the brightness of the multiple frames of second halo maps based on the second halo position and the intensities of the multiple frames of second halo maps; and fusing the map colors, the mask colors, and the brightness of the multiple frames of second halo maps to obtain the texture information of the second halo patch.
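One natural reading of the fusion step above (an assumption, since the disclosure does not give the formula) is that the sampled mask channel gates the map color and the brightness scales the result:

```python
def second_halo_texture(map_color, mask_value, brightness):
    # Fuse map color, mask, and brightness: the single-channel mask
    # attenuates the RGB map color, and brightness scales the result.
    return tuple(c * mask_value * brightness for c in map_color)
```

A mask value of 0 would fully hide the halo texel, which is the usual role of a mask channel in such effects.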
Optionally, the attribute information of the lens halo includes: attribute information of the rays, attribute information of the first halo, attribute information of the plurality of light spots, and attribute information of the second halo.
In at least some embodiments of the present disclosure, a patch set corresponding to the lens halo is generated based on attribute information of the lens halo; a sequence map set corresponding to the patch set is acquired; sampling texture coordinates of the multi-frame maps are determined based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; the sequence map set is sampled based on the sampling texture coordinates to obtain texture information of the patch set; and the lens halo is generated based on the texture information of the patch set. In this way, the corresponding halo maps are retrieved from the sequence map set according to the information of the patch set, so there is no need to add every halo map to the virtual scene and then adjust it according to the positions of the light source and the virtual lens, which greatly simplifies the overall lens halo generation process. Moreover, because the halo maps can be adjusted according to the initial texture coordinates of the patch set, the generated halo maps better match a real scene, improving the visual effect of the generated lens halo and solving the technical problems in the related art that the lens halo generation process is complex and the generation effect is poor.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present application, a computer-readable storage medium stores a program product capable of implementing the above-described method of this embodiment. In some possible implementations, various aspects of the embodiments of the present disclosure may also be implemented in the form of a program product including program code; when the program product is run on a terminal device, the program code causes the terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the above section "Exemplary method" of this embodiment.
The program product for implementing the above method according to the embodiments of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the disclosed embodiments is not limited in this respect; in the disclosed embodiments, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device and the input/output device are each connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1: generating a patch set corresponding to the lens halo based on attribute information of the lens halo, wherein the patch set comprises: a ray patch, a first halo patch, a plurality of light-spot patches, and a second halo patch;
S2: acquiring a sequence map set corresponding to the patch set, wherein the sequence map set comprises a first light source sequence map and a second light source sequence map, the first light source sequence map is composed of a ray map corresponding to the ray patch, a first halo map corresponding to the first halo patch, and multiple frames of light-spot maps corresponding to the plurality of light-spot patches, and the second light source sequence map is composed of multiple frames of second halo maps corresponding to the second halo patch;
S3: determining sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and region information of the regions where the multi-frame maps are located in the sequence map set;
S4: sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain texture information of the patch set;
S5: generating the lens halo based on the texture information of the patch set.
Optionally, determining the sampling texture coordinates of the multi-frame maps based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set includes: determining sampling texture coordinates of the ray map based on the original texture coordinates of the ray patch and region information of the region where the ray map is located in the first light source sequence map; determining sampling texture coordinates of the first halo map based on the original texture coordinates of the first halo patch and region information of the region where the first halo map is located in the first light source sequence map; determining sampling texture coordinates of the multiple frames of light-spot maps based on the original texture coordinates of the light-spot patches and region information of the regions where the multiple frames of light-spot maps are located in the first light source sequence map; and determining sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map.
Optionally, determining the sampling texture coordinates of the ray map based on the original texture coordinates of the ray patch and the region information of the region where the ray map is located in the first light source sequence map includes: determining target texture coordinates of the ray map based on the original texture coordinates of the ray patch and the region information of the region where the ray map is located in the first light source sequence map; and determining the sampling texture coordinates of the ray map based on the target texture coordinates, a preset rotation center, and a preset rotation angle of the ray map.
Optionally, determining the sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map includes: determining first map texture coordinates and first mask texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map, and the scaling sizes of the multiple frames of second halo maps; determining second map texture coordinates of the multiple frames of second halo maps based on the first map texture coordinates and the offsets of the multiple frames of second halo maps; determining second mask texture coordinates of the multiple frames of second halo maps based on the first mask texture coordinates and the offsets of the multiple frames of second halo maps; and obtaining the sampling texture coordinates based on the second map texture coordinates and the second mask texture coordinates.
Optionally, determining the first map texture coordinates and the first mask texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch, the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map, and the scaling sizes of the multiple frames of second halo maps includes: determining a target moving direction based on a preset position; determining moving positions of the multiple frames of second halo maps based on the preset position, the second halo interval, and the flare values of the multiple frames of second halo maps; and determining the first map texture coordinates and the first mask texture coordinates of the multiple frames of second halo maps based on the target moving direction, the moving positions, the moving speeds of the multiple frames of second halo maps, the original texture coordinates of the second halo patch, the region information of the regions where the multiple frames of second halo maps are located in the second light source sequence map, and the scaling sizes of the multiple frames of second halo maps.
Optionally, sampling the sequence map set based on the sampling texture coordinates of the multi-frame maps to obtain the texture information of the patch set includes: sampling the first light source sequence map based on the sampling texture coordinates of the ray map to obtain the color of the ray map, and obtaining texture information of the ray patch based on the color and brightness of the ray map; sampling the first light source sequence map based on the sampling texture coordinates of the first halo map to obtain the color of the first halo map, and obtaining texture information of the first halo patch based on the color and brightness of the first halo map; sampling the first light source sequence map based on the sampling texture coordinates of the multiple frames of light-spot maps to obtain the colors of the multiple frames of light-spot maps, and obtaining texture information of the plurality of light-spot patches based on the colors and brightness of the multiple frames of light-spot maps; and sampling the second light source sequence map based on the sampling texture coordinates of the multiple frames of second halo maps to obtain the colors of the multiple frames of second halo maps, and obtaining texture information of the second halo patch based on the colors and brightness of the multiple frames of second halo maps.
Optionally, obtaining the texture information of the plurality of light-spot patches based on the colors and brightness of the multiple frames of light-spot maps includes: obtaining texture information of a first light-spot patch based on the color and brightness of the first frame of light-spot map among the multiple frames of light-spot maps; and obtaining texture information of a second light-spot patch based on the color of a second light-spot map, the color of a third light-spot map, and the brightness of the second light-spot map, wherein the second light-spot map is any one of the multiple frames of light-spot maps other than the first frame, and the third light-spot map is the frame immediately preceding the second light-spot map.
Optionally, sampling the second light source sequence map based on the sampling texture coordinates of the multiple frames of second halo maps to obtain the colors of the multiple frames of second halo maps, and obtaining the texture information of the second halo patch based on the colors and brightness of the multiple frames of second halo maps, includes: sampling the second light source sequence map based on the second map texture coordinates of the multiple frames of second halo maps to obtain the map colors of the multiple frames of second halo maps; sampling a preset channel of the second light source sequence map based on the second mask texture coordinates of the multiple frames of second halo maps to obtain the mask colors of the multiple frames of second halo maps; obtaining the brightness of the multiple frames of second halo maps based on the second halo position and the intensities of the multiple frames of second halo maps; and fusing the map colors, the mask colors, and the brightness of the multiple frames of second halo maps to obtain the texture information of the second halo patch.
Optionally, the attribute information of the lens halo includes: attribute information of the rays, attribute information of the first halo, attribute information of the plurality of light spots, and attribute information of the second halo.
In at least some embodiments of the present disclosure, a patch set corresponding to the lens halo is generated based on attribute information of the lens halo; a sequence map set corresponding to the patch set is acquired; sampling texture coordinates of the multi-frame maps are determined based on the original texture coordinates of the patch set and the region information of the regions where the multi-frame maps are located in the sequence map set; the sequence map set is sampled based on the sampling texture coordinates to obtain texture information of the patch set; and the lens halo is generated based on the texture information of the patch set. In this way, the corresponding halo maps are retrieved from the sequence map set according to the information of the patch set, so there is no need to add every halo map to the virtual scene and then adjust it according to the positions of the light source and the virtual lens, which greatly simplifies the overall lens halo generation process. Moreover, because the halo maps can be adjusted according to the initial texture coordinates of the patch set, the generated halo maps better match a real scene, improving the visual effect of the generated lens halo and solving the technical problems in the related art that the lens halo generation process is complex and the generation effect is poor.
Fig. 8 is a schematic diagram of an electronic device according to an embodiment of the disclosure. As shown in Fig. 8, the electronic device 800 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 8, the electronic device 800 is in the form of a general-purpose computing device. The components of the electronic device 800 may include, but are not limited to: at least one processor 810, at least one memory 820, a bus 830 connecting the various system components (including the memory 820 and the processor 810), and a display 840.
The memory 820 stores program code that can be executed by the processor 810, so as to cause the processor 810 to perform the steps according to various exemplary embodiments of the present disclosure described in the method section of the embodiments of the present application.
The memory 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, may further include a read only memory unit (ROM) 8203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
In some examples, memory 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The memory 820 may further include memory located remotely from the processor 810, which may be connected to the electronic device 800 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, processor 810, or a local bus using any of a variety of bus architectures.
Display 840 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 800.
Optionally, the electronic device 800 may also communicate with one or more external devices 900 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 860. As shown in Fig. 8, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown in Fig. 8, other hardware and/or software modules may be used in conjunction with the electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The electronic device 800 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in Fig. 8 is only illustrative and is not intended to limit the structure of the electronic device. For example, the electronic device 800 may include more or fewer components than shown in Fig. 8, or have a different configuration from that shown in Fig. 8. The memory 820 may be used to store a computer program and corresponding data, such as a computer program and corresponding data for the lens halo generation method in the embodiments of the disclosure. The processor 810 executes various functional applications and data processing, i.e., implements the above-described lens halo generation method, by running the computer program stored in the memory 820.
In the above embodiments of the present disclosure, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units may be a division by logical function only; in an actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present disclosure. It should be noted that those skilled in the art can make improvements and modifications without departing from the principles of the present disclosure, and these improvements and modifications should also fall within the protection scope of the present disclosure.

Claims (12)

1. A method for generating a lens halo, comprising:
generating a patch set corresponding to the lens halo based on attribute information of the lens halo, wherein the patch set comprises: a light ray patch, a first halo patch, a plurality of light spot patches, and a second halo patch;
acquiring a sequence diagram set corresponding to the patch set, wherein the sequence diagram set comprises a first light source sequence diagram and a second light source sequence diagram, the first light source sequence diagram consists of a light ray map corresponding to the light ray patch, a first halo map corresponding to the first halo patch, and multiple frames of light spot maps corresponding to the plurality of light spot patches, and the second light source sequence diagram consists of multiple frames of second halo maps corresponding to the second halo patch;
determining sampling texture coordinates of the multiple frames of maps based on original texture coordinates of the patch set and region information of the regions where the multiple frames of maps are located in the sequence diagram set;
sampling the sequence diagram set based on the sampling texture coordinates of the multiple frames of maps to obtain texture information of the patch set; and
generating the lens halo based on the texture information of the patch set.
2. The method of claim 1, wherein determining the sampling texture coordinates of the multiple frames of maps based on the original texture coordinates of the patch set and the region information of the regions where the multiple frames of maps are located in the sequence diagram set comprises:
determining sampling texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence diagram;
determining sampling texture coordinates of the first halo map based on the original texture coordinates of the first halo patch and the region information of the region where the first halo map is located in the first light source sequence diagram;
determining sampling texture coordinates of the multiple frames of light spot maps based on the original texture coordinates of the light spot patches and the region information of the region where the multiple frames of light spot maps are located in the first light source sequence diagram; and
determining sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and the region information of the region where the multiple frames of second halo maps are located in the second light source sequence diagram.
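In rendering terms, the coordinate derivation in claims 1 and 2 amounts to remapping each patch's original [0, 1] texture coordinates into the sub-region (atlas cell) that its map occupies within the sequence diagram. A minimal sketch of that remapping, with illustrative names (the patent does not fix how region information is encoded; a normalized origin and size are assumed here):

```python
def atlas_uv(original_uv, region_origin, region_size):
    """Map a patch's original [0, 1] UV into the sub-region of the
    sequence diagram (texture atlas) occupied by its map.

    region_origin / region_size are the region's corner and extent in
    the atlas's own normalized UV space (an assumed encoding of the
    claims' 'region information')."""
    u, v = original_uv
    return (region_origin[0] + u * region_size[0],
            region_origin[1] + v * region_size[1])

# A map occupying the top-right quadrant of the atlas:
print(atlas_uv((0.5, 0.5), (0.5, 0.5), (0.5, 0.5)))  # (0.75, 0.75)
```

Sampling the sequence diagram at the returned coordinates then reads only the pixels of that particular map's region.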
3. The method of claim 2, wherein determining the sampling texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence diagram comprises:
determining target texture coordinates of the light ray map based on the original texture coordinates of the light ray patch and the region information of the region where the light ray map is located in the first light source sequence diagram; and
determining the sampling texture coordinates of the light ray map based on the target texture coordinates, a preset rotation center, and a preset rotation angle of the light ray map.
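Claim 3's second step corresponds to rotating the target texture coordinates about the preset rotation center by the preset rotation angle before sampling, which is what lets the light ray map spin around the light source. A hedged sketch (the counter-clockwise convention and coordinate handedness are assumptions, not stated in the patent):

```python
import math

def rotate_uv(uv, center, angle_rad):
    """Rotate a texture coordinate about a preset rotation center by
    angle_rad (counter-clockwise; the convention is an assumption)."""
    du, dv = uv[0] - center[0], uv[1] - center[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (center[0] + du * c - dv * s,
            center[1] + du * s + dv * c)

# A quarter turn about the map center (0.5, 0.5):
# (1.0, 0.5) -> approximately (0.5, 1.0)
```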
4. The method of claim 2, wherein determining the sampling texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch and the region information of the region where the multiple frames of second halo maps are located in the second light source sequence diagram comprises:
determining first map texture coordinates and first mask texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch, the region information of the region where the multiple frames of second halo maps are located in the second light source sequence diagram, and the scaling size of the multiple frames of second halo maps;
determining second map texture coordinates of the multiple frames of second halo maps based on the first map texture coordinates of the multiple frames of second halo maps and an offset of the multiple frames of second halo maps;
determining second mask texture coordinates of the multiple frames of second halo maps based on the first mask texture coordinates of the multiple frames of second halo maps and the offset of the multiple frames of second halo maps; and
obtaining the sampling texture coordinates based on the second map texture coordinates and the second mask texture coordinates.
5. The method of claim 4, wherein determining the first map texture coordinates and the first mask texture coordinates of the multiple frames of second halo maps based on the original texture coordinates of the second halo patch, the region information of the region where the multiple frames of second halo maps are located in the second light source sequence diagram, and the scaling size of the multiple frames of second halo maps comprises:
determining a target moving direction based on a preset position;
determining moving positions of the multiple frames of second halo maps based on the preset position, a second halo interval, and a flare value of the multiple frames of second halo maps; and
determining the first map texture coordinates and the first mask texture coordinates of the multiple frames of second halo maps based on the target moving direction, the moving positions of the multiple frames of second halo maps, the moving speed of the multiple frames of second halo maps, the original texture coordinates of the second halo patch, the region information of the region where the multiple frames of second halo maps are located in the second light source sequence diagram, and the scaling size of the multiple frames of second halo maps.
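Claims 4 and 5 derive the second halo coordinates in two stages: a "first" coordinate that scales the patch UVs and places them into the map's atlas region, and a "second" coordinate obtained by adding a per-frame offset (driven by the moving direction, position, and speed). An illustrative sketch under assumed conventions (scaling about the region center, normalized offsets); the function and parameter names are not from the patent:

```python
def first_uv(original_uv, region_origin, region_size, scale):
    """'First' map/mask texture coordinate (claim 4): scale the patch UV
    about its center, then place it into the map's atlas region.
    Scaling about the (0.5, 0.5) pivot is an assumed convention."""
    u = 0.5 + (original_uv[0] - 0.5) / scale
    v = 0.5 + (original_uv[1] - 0.5) / scale
    return (region_origin[0] + u * region_size[0],
            region_origin[1] + v * region_size[1])

def second_uv(first_coord, offset):
    """'Second' texture coordinate (claim 4): the first coordinate plus
    a per-frame offset, which claim 5 derives from the target moving
    direction, the moving positions, and the moving speed."""
    return (first_coord[0] + offset[0], first_coord[1] + offset[1])
```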
6. The method of claim 1, wherein sampling the sequence diagram set based on the sampling texture coordinates of the multiple frames of maps to obtain the texture information of the patch set comprises:
sampling the first light source sequence diagram based on the sampling texture coordinates of the light ray map to obtain the color of the light ray map, and obtaining texture information of the light ray patch based on the color of the light ray map and the brightness of the light ray map;
sampling the first light source sequence diagram based on the sampling texture coordinates of the first halo map to obtain the color of the first halo map, and obtaining texture information of the first halo patch based on the color of the first halo map and the brightness of the first halo map;
sampling the first light source sequence diagram based on the sampling texture coordinates of the multiple frames of light spot maps to obtain the colors of the multiple frames of light spot maps, and obtaining texture information of the plurality of light spot patches based on the colors of the multiple frames of light spot maps and the brightness of the multiple frames of light spot maps; and
sampling the second light source sequence diagram based on the sampling texture coordinates of the multiple frames of second halo maps to obtain the colors of the multiple frames of second halo maps, and obtaining texture information of the second halo patch based on the colors of the multiple frames of second halo maps and the brightness of the multiple frames of second halo maps.
7. The method of claim 6, wherein obtaining the texture information of the plurality of light spot patches based on the colors of the multiple frames of light spot maps and the brightness of the multiple frames of light spot maps comprises:
obtaining texture information of a first light spot patch based on the color of a first frame of light spot map in the multiple frames of light spot maps and the brightness of the first frame of light spot map; and
obtaining texture information of a second light spot patch based on the color of a second light spot map in the multiple frames of light spot maps, the color of a third light spot map, and the brightness of the second light spot map, wherein the second light spot map is any map in the multiple frames of light spot maps other than the first frame of light spot map, and the third light spot map is the frame immediately preceding the second light spot map in the multiple frames of light spot maps.
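Claim 7 distinguishes the first light spot patch, textured from the first frame's color alone, from each later patch, textured from its own frame's color together with the preceding frame's color. The claim does not fix the blend operator; the sketch below assumes a simple average of the two frame colors followed by a brightness multiply:

```python
def spot_patch_textures(frame_colors, frame_brightness):
    """Per-patch texture values for the light spot patches (claim 7).

    The first patch uses frame 0's color alone; each later patch blends
    its frame's color with the preceding frame's color. The averaging
    blend is an assumption, not stated in the claims."""
    textures = [tuple(c * frame_brightness[0] for c in frame_colors[0])]
    for i in range(1, len(frame_colors)):
        blended = tuple((a + b) * 0.5
                        for a, b in zip(frame_colors[i], frame_colors[i - 1]))
        textures.append(tuple(c * frame_brightness[i] for c in blended))
    return textures
```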
8. The method of claim 6, wherein sampling the second light source sequence diagram based on the sampling texture coordinates of the multiple frames of second halo maps to obtain the colors of the multiple frames of second halo maps, and obtaining the texture information of the second halo patch based on the colors of the multiple frames of second halo maps and the brightness of the multiple frames of second halo maps comprises:
sampling the second light source sequence diagram based on the second map texture coordinates of the multiple frames of second halo maps to obtain map colors of the multiple frames of second halo maps;
sampling a preset channel of the second light source sequence diagram based on the second mask texture coordinates of the multiple frames of second halo maps to obtain mask colors of the multiple frames of second halo maps;
obtaining the brightness of the multiple frames of second halo maps based on second halo positions and intensities of the multiple frames of second halo maps; and
fusing the map colors of the multiple frames of second halo maps, the mask colors of the multiple frames of second halo maps, and the brightness of the multiple frames of second halo maps to obtain the texture information of the second halo patch.
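Claim 8's final step fuses three quantities: the map color sampled at the second map texture coordinates, the mask color sampled from a preset channel at the second mask texture coordinates, and a brightness derived from the halo's position and intensity. Both the fusion operator and the brightness falloff are left unspecified; the sketch below assumes a straight per-channel product and a linear falloff with distance from the screen center:

```python
import math

def halo_brightness(halo_pos, screen_center, intensity):
    """Brightness from the halo's position and intensity (claim 8);
    linear falloff with distance from screen center is assumed."""
    return intensity * max(0.0, 1.0 - math.dist(halo_pos, screen_center))

def second_halo_texture(map_color, mask_value, brightness):
    """Fuse map color, single-channel mask sample, and brightness by
    multiplication; the product form is an assumption, as the claims
    leave the fusion operator unspecified."""
    return tuple(c * mask_value * brightness for c in map_color)
```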
9. The method according to claim 1, wherein the attribute information of the lens halo comprises: attribute information of the light ray, attribute information of the first halo, attribute information of the plurality of light spots, and attribute information of the second halo.
10. A lens halo generating device, comprising:
a patch generating module, configured to generate a patch set corresponding to the lens halo based on attribute information of the lens halo, wherein the patch set comprises: a light ray patch, a first halo patch, a plurality of light spot patches, and a second halo patch;
a sequence diagram acquisition module, configured to acquire a sequence diagram set corresponding to the patch set, wherein the sequence diagram set comprises a first light source sequence diagram and a second light source sequence diagram, the first light source sequence diagram consists of a light ray map corresponding to the light ray patch, a first halo map corresponding to the first halo patch, and multiple frames of light spot maps corresponding to the plurality of light spot patches, and the second light source sequence diagram consists of multiple frames of second halo maps corresponding to the second halo patch;
a texture coordinate generating module, configured to determine sampling texture coordinates of the multiple frames of maps based on original texture coordinates of the patch set and region information of the regions where the multiple frames of maps are located in the sequence diagram set;
a texture sampling module, configured to sample the sequence diagram set based on the sampling texture coordinates of the multiple frames of maps to obtain texture information of the patch set; and
a lens halo generating module, configured to generate the lens halo based on the texture information of the patch set.
11. A computer-readable storage medium, storing a computer program, wherein the computer program is arranged to, when executed by a processor, perform the lens halo generation method of any one of claims 1 to 9.
12. An electronic apparatus comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the lens halo generation method of any one of claims 1 to 9.
CN202211157998.7A 2022-09-22 2022-09-22 Lens halo generation method and device and electronic device Pending CN115546082A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211157998.7A CN115546082A (en) 2022-09-22 2022-09-22 Lens halo generation method and device and electronic device


Publications (1)

Publication Number Publication Date
CN115546082A true CN115546082A (en) 2022-12-30

Family

ID=84728937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211157998.7A Pending CN115546082A (en) 2022-09-22 2022-09-22 Lens halo generation method and device and electronic device

Country Status (1)

Country Link
CN (1) CN115546082A (en)

Similar Documents

Publication Publication Date Title
US11756223B2 (en) Depth-aware photo editing
TW202220438A (en) Display method, electronic device and computer readable storage medium in augmented reality scene
WO2021218040A1 (en) Image processing method and apparatus
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
CN114615513A (en) Video data generation method and device, electronic equipment and storage medium
JP7183414B2 (en) Image processing method and apparatus
JP6661780B2 (en) Face model editing method and apparatus
WO2023143120A1 (en) Material display method and apparatus, electronic device, storage medium, and program product
EP4070286A1 (en) Virtual, augmented and mixed reality systems and methods
Liu et al. Stereo-based bokeh effects for photography
WO2022022260A1 (en) Image style transfer method and apparatus therefor
CN114428573B (en) Special effect image processing method and device, electronic equipment and storage medium
CN115546082A (en) Lens halo generation method and device and electronic device
CN114972466A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN111462007B (en) Image processing method, device, equipment and computer storage medium
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
CN116055708B (en) Perception visual interactive spherical screen three-dimensional imaging method and system
CN114299207A (en) Virtual object rendering method and device, readable storage medium and electronic device
CN115426505B (en) Preset expression special effect triggering method based on face capture and related equipment
KR102601179B1 (en) Apparatus, method and system for generating extended reality XR content
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
CN115580778A (en) Background area determination method and device and electronic device
CN115661318A (en) Method, device, storage medium and electronic device for rendering model
CN117376591A (en) Scene switching processing method, device, equipment and medium based on virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination