CN117372555A - Image generation method, device, electronic equipment and readable storage medium

Info

Publication number
CN117372555A
Authority
CN
China
Prior art keywords
image
light source
information
light
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311208987.1A
Other languages
Chinese (zh)
Inventor
杨丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311208987.1A priority Critical patent/CN117372555A/en
Publication of CN117372555A publication Critical patent/CN117372555A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image generation method and apparatus, an electronic device, and a readable storage medium, belonging to the technical field of electronic information. The method comprises the following steps: capturing a first image through a camera; determining first information according to image information of the first image, wherein the first information comprises a light source start point position and a basic light source layer; and generating a second image based on the first information through a radial blur algorithm, wherein the second image is an image having the Tyndall light effect relative to the first image.

Description

Image generation method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image generation method, an image generation device, electronic equipment and a readable storage medium.
Background
The Tyndall light effect lends images a mysterious, beautiful atmosphere and is much loved by photography enthusiasts. Under natural conditions, the Tyndall effect requires both colloidal particles and a light source, for example a room in which suspended dust is illuminated by a beam of light. A user who wants to capture a photograph with the Tyndall light effect must wait for the right moment, which is not easy.
At present, to obtain an image with the Tyndall light effect, the effect is added by post-processing the image in retouching software. However, this requires the user to have basic retouching expertise, is difficult to operate, and is time-consuming. For ordinary users the threshold is therefore high, and producing such images is costly and difficult, so how to conveniently and efficiently produce an image with a realistic Tyndall light effect is a problem to be solved.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image generation method and apparatus, an electronic device, and a readable storage medium, which can conveniently produce a Tyndall light effect image with high artistic quality.
In a first aspect, an embodiment of the present application provides an image generation method, including: acquiring a first image; determining first information according to image information of the first image, wherein the first information includes a light source start point position and a basic light source layer; and generating, based on the first information, a second image, the second image being an image having the Tyndall light effect relative to the first image.
In a second aspect, an embodiment of the present application provides an image generation apparatus including an acquisition module, a determination module, and a generation module, wherein: the acquisition module is configured to acquire a first image; the determination module is configured to determine first information according to the image information of the first image acquired by the acquisition module, where the first information includes a light source start point position and a basic light source layer; and the generation module is configured to generate a second image based on the first information determined by the determination module, where the second image is an image having the Tyndall light effect relative to the first image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a first image is captured through a camera, first information is determined according to image information of the first image, the first information comprising a light source start point position and a basic light source layer, and a second image is generated based on the first information, the second image being an image having the Tyndall light effect relative to the first image. In this way, when an image is shot, a Tyndall image can be generated based on the determined light source start point position and basic light source layer, realizing the generation of Tyndall light effect images, so that a highly artistic Tyndall light effect image can be obtained conveniently.
Drawings
Fig. 1 is a first schematic flowchart of an image generation method according to an embodiment of the present application;
Fig. 2(A) is a schematic diagram of a real-time image captured by a camera according to an embodiment of the present application;
Fig. 2(B) is a schematic diagram of a light effect superposition layer according to an embodiment of the present application;
Fig. 2(C) is a schematic diagram of a generated preview image with the Tyndall light effect according to an embodiment of the present application;
Fig. 3(A) is a schematic diagram of another real-time image captured by a camera according to an embodiment of the present application;
Fig. 3(B) is a schematic diagram of another light effect superposition layer according to an embodiment of the present application;
Fig. 3(C) is a schematic diagram of another generated preview image with the Tyndall light effect according to an embodiment of the present application;
Fig. 4 is a second schematic flowchart of an image generation method according to an embodiment of the present application;
Fig. 5 is a third schematic flowchart of an image generation method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an image generation apparatus according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and not necessarily to describe a particular order or sequence. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The terms "at least one" and the like in the description and claims of the present application mean any one, any two, or any combination of two or more of the listed objects. For example, "at least one of a, b, and c" may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b and c", where a, b, and c may each be singular or plural. Similarly, "at least two" means two or more, with a meaning similar to that of "at least one".
The execution subject of the image generation method provided in the embodiments of the present application may be an electronic device, or at least one of a functional module and an entity module in the electronic device capable of implementing the image generation method, as determined according to actual use requirements; this is not limited by the embodiments of the present application. The image generation method provided in the embodiments of the present application is described below taking an image generation apparatus executing the method as an example.
The Tyndall phenomenon often appears around us, and it is not mysterious: on the way to work in the morning, a beam of light passing through a tree-lined boulevard is shaped by the leaves into distinct rays; on the way home, looking at the city at sunset, a shaft of light passes between the buildings. These are all Tyndall phenomena.
At present, when users want to photograph the Tyndall light effect in a landscape, two shooting elements are required. First, the formation of the Tyndall effect requires two conditions: there must be enough suspended particles in the air, and the light must be beam-shaped. Second, a darker background is required at the time of shooting; the darker the background, the more pronounced the generated light beams. Such shooting conditions are hard to find, and the probability of encountering the Tyndall light effect in daily life is genuinely low. The traditional approach adds the Tyndall light effect with desktop post-processing software, which has too high a learning cost and too low an efficiency for ordinary users. The embodiments of the present application therefore provide an image generation method that can generate an image with the Tyndall light effect in real time while a preview image is being captured, so that an image with the Tyndall light effect is produced quickly by post-processing the real-time image. A dramatic, striking shot can thus be achieved, the artistry and ornamental value of the photo are improved, and the user is given a simple, elegant, efficient, and artistic landscape-shooting experience.
The image generating method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image generation method according to an embodiment of the present application. As shown in fig. 1, the image generation method may include the following steps S201 to S203:
step S201: the image generating device acquires a first image.
Optionally, in the embodiment of the present application, the first image may be a real-time image obtained by photographing a subject through the camera, or a non-real-time image, for example an image in an album. The first image may include, for example, a landscape, a person, or a building.
The first image may include at least one image acquired in real time by a camera, for example. For example, when shooting a landscape in a shooting scene, the image generating device acquires a landscape image in the shooting scene in real time through the camera, and at least one landscape image is obtained.
The camera may be, for example, a camera of the electronic device, and may include a built-in camera integrated inside the electronic device or an external camera connected while the electronic device is in use. Illustratively, the camera includes, but is not limited to, at least one of: a 2x zoom camera, a wide-angle camera, an ordinary (main) camera, and a telephoto camera.
The first image may include an image stored locally on the electronic device, captured by a camera, or downloaded by an application, for example.
Step S202: the image generating device determines first information based on the image information of the first image.
Wherein the first information includes a light source start point position and a basic light source layer.
Alternatively, in the embodiment of the present application, the first image may be a two-dimensional image or a three-dimensional image, and the image information may include position information and RGB color values of each pixel point of the two-dimensional image.
Alternatively, in the embodiment of the present application, the image generating device may establish an image coordinate system based on the first image, where the image coordinate system is used to determine the position of each pixel point in the image.
Optionally, in the embodiment of the present application, the light source start point position is the position of the light source point that generates the Tyndall light effect in the first image.
Optionally, in the embodiment of the present application, when a luminous object is detected in the first image, the image generating apparatus may determine the center point position of the luminous object as the light source start point position. The light source point position may be, for example, the center point position of the sun in a landscape image.
Optionally, in the embodiment of the present application, when no luminous object is detected in the first image, the image generating apparatus may adaptively calculate the light source point position of the Tyndall light effect using a light source start point distribution calculation algorithm based on exponentially decaying convolution search.
Optionally, in the embodiment of the present application, the basic light source layer is a binarized image generated from the pixel values of each pixel point of the first image and used to characterize the global and local features of the first image.
Step S203: the image generating apparatus generates a second image based on the first information.
The second image is an image having the Tyndall light effect relative to the first image.
Optionally, in the embodiment of the present application, the image generating apparatus may generate a light effect superposition layer using a radial blur algorithm (RadialBlur) according to the light source start point position and the basic light source layer, and generate the second image from the light effect superposition layer and the first image.
Optionally, in the embodiment of the present application, in the case where the first image is a real-time image captured by the camera, after the image generating apparatus generates the second image, the second image may be displayed on the image preview interface, so that the user can view the real-time preview image with the Tyndall light effect on the image preview interface and trigger shooting to obtain an image with the Tyndall light effect.
Illustratively, when shooting a landscape in a shooting scene, the image generating apparatus captures a landscape image of the scene in real time through the camera. As shown in fig. 2(A), the landscape image includes trees (the polygons in the figure) and a light source (the circular area in the figure). The image generating apparatus determines the light source start point position according to the brightness information of the landscape image and binarizes the landscape image to obtain a basic light source layer; a radial blur algorithm is then used to generate a light effect superposition layer from the light source start point position and the basic light source layer, as shown in fig. 2(B). The light effect superposition layer is combined with the landscape image to obtain a landscape image with the Tyndall light effect, as shown in fig. 2(C). Finally, the landscape image with the Tyndall light effect is displayed on the image preview interface, so that the user can view it there.
Optionally, in the embodiment of the present application, in the case where the first image is a non-real-time image in the album, after the image generating apparatus generates the second image, the second image may be displayed on the album interface.
It should be noted that in fig. 2(B) the blurred state of the image is represented by fill, the white arrowed line represents the generated light beam, the arrow indicates the direction of the beam, and the start of the arrow is the light source start point position.
In fig. 2(C), the black arrowed line represents the generated light beam, the arrow indicates the direction of the beam, and the start of the arrow is the light source start point position.
With the image generation method provided in the embodiment of the present application, the image generating apparatus captures a first image through the camera, determines first information according to image information of the first image, the first information including a light source start point position and a basic light source layer, and generates a second image based on the first information, the second image being an image with the Tyndall light effect relative to the first image. In this way, when an image is shot, the image generating apparatus can generate a dramatic Tyndall light effect shot based on the determined light source start point position and basic light source layer, realizing the generation of Tyndall light effect images, so that a highly artistic Tyndall light effect image can be obtained conveniently.
Alternatively, in the embodiment of the present application, the process of step S203 described above may include the following steps S203a and S203b:
Step S203a: the image generating apparatus performs radial blurring on the basic light source layer according to the light source start point position to obtain a light effect superposition layer.
Step S203b: the image generating apparatus superposes the light effect superposition layer onto the first image to obtain the second image.
Optionally, in an embodiment of the present application, the light effect superimposed layer is a blurred image with radial scattering beam shape lines.
Optionally, in the embodiment of the present application, the image generating device may add the light effect superimposed layer to pixel values of respective corresponding pixels of the first image to obtain the second image.
Illustratively, in the case where the first information includes a light source start point position and a basic light source layer, after the image generating apparatus determines the light source point position Light_source of the Tyndall light effect and the basic light source layer lightbase, a Tyndall light effect superposition layer Light_DDR is generated using a conventional radial blur algorithm RadialBlur(). The specific implementation is shown in formula (1):

Light_DDR = RadialBlur(lightbase, Light_source)   (1)
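The following is a minimal NumPy sketch of the kind of zoom-style radial blur that RadialBlur() denotes; the function name, sample count, and strength parameter are assumptions of this sketch, not the patent's exact implementation:

```python
import numpy as np

def radial_blur(layer, center, num_samples=32, strength=0.9):
    # Approximates formula (1): each output pixel averages samples taken along
    # the ray from the light source start point `center` to that pixel, which
    # stretches bright mask pixels into radial beams.
    h, w = layer.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    cx, cy = center
    acc = np.zeros_like(layer, dtype=np.float32)
    for k in range(num_samples):
        t = 1.0 - strength * k / num_samples  # pull sample coordinates toward the center
        sx = np.clip((cx + (xs - cx) * t).astype(np.int32), 0, w - 1)
        sy = np.clip((cy + (ys - cy) * t).astype(np.int32), 0, h - 1)
        acc += layer[sy, sx]
    return acc / num_samples
```

Calling radial_blur(lightbase, Light_source) on the basic light source layer then yields a Light_DDR-style superposition layer.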
Optionally, in the embodiment of the present application, the first information may further include at least one of a light effect scattering main direction and a light effect scattering radius.
Optionally, in the embodiment of the present application, the light effect scattering main direction is the direction of the light that generates the Tyndall light effect. Optionally, the image generating apparatus may determine, as the light effect scattering main direction, the direction of the ray that starts at the light source point and passes through the center position of the first image.
Illustratively, in the case where the first information includes a light source start point position, a basic light source layer, and a light effect scattering main direction, after the image generating apparatus determines the light source point position Light_source of the Tyndall light effect, the light effect scattering main direction Dir, and the basic light source layer lightbase, a Tyndall light effect superposition layer Light_DDR is generated using a conventional radial blur algorithm RadialBlur() with a specified center point and direction. The specific implementation is shown in formula (2):

Light_DDR = RadialBlur(lightbase, Light_source, Dir)   (2)
Optionally, in the embodiment of the present application, the light effect scattering radius is the radius of the light area that generates the Tyndall light effect. Optionally, the image generating apparatus may determine the distance between the light source point and the center of the first image as the light effect scattering radius, or default it to one tenth of the image width of the first image.
Illustratively, in the case where the first information includes a light source start point position, a basic light source layer, and a light effect scattering radius Light_Radius, after the image generating apparatus determines the light source point position Light_source of the Tyndall light effect, the light effect scattering radius Light_Radius, and the basic light source layer lightbase, a Tyndall light effect superposition layer Light_DDR is generated using a conventional radial blur algorithm RadialBlur() with a specified center point and radius. The specific implementation is shown in formula (3):

Light_DDR = RadialBlur(lightbase, Light_source, Light_Radius)   (3)
Illustratively, in the case where the first information includes a light source start point position, a basic light source layer, a light effect scattering main direction, and a light effect scattering radius Light_Radius, after the image generating apparatus determines the light source point position Light_source of the Tyndall light effect, the light effect scattering main direction Dir, the light effect scattering radius Light_Radius, and the basic light source layer lightbase, the optimal Tyndall light effect superposition layer Light_DDR is generated using a conventional radial blur algorithm RadialBlur() with a specified center point, direction, and radius. The specific implementation is shown in formula (4):

Light_DDR = RadialBlur(lightbase, Light_source, Dir, Light_Radius)   (4)
Optionally, in the embodiment of the present application, the image generating apparatus superposes the Tyndall light effect superposition layer onto the first image, finally obtaining the image with the added Tyndall light effect, as shown in formula (5):

Img_DDR = OriImg + Light_DDR   (5)
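A minimal sketch of the composition step in formula (5); the clipping to the 8-bit range is an assumption added here to keep the sum valid:

```python
import numpy as np

def compose_tyndall(ori_img, light_ddr):
    # Formula (5): Img_DDR = OriImg + Light_DDR, pixel-wise.
    out = ori_img.astype(np.float32) + light_ddr.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)  # clip: assumption of this sketch
```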
In the embodiment of the present application, an adaptive Tyndall light effect generation algorithm is designed based on a conventional radial blur algorithm with a specified center point, direction, and radius, realizing the generation of images with the Tyndall light effect, so that a highly artistic Tyndall light effect shot can be obtained easily.
Further optionally, in an embodiment of the present application, the base light source layer is a base light source layer with depth information. For example, the image generating apparatus may determine an initial basic light source layer based on the mask image corresponding to the first image, and then obtain a basic light source layer having depth information based on the depth information of the first image and the initial basic light source layer.
Illustratively, taking the first image as a landscape image OriImg as an example, the image generating apparatus may obtain the highlight region HighLight_mask of OriImg, calculate the initial basic light source layer lightbase_color of the adaptive-tone Tyndall light effect beam, and obtain the depth information Img_depth of the landscape image using a depth estimation algorithm. The depth information Img_depth is then multiplied with the pixel values of the initial basic light source layer lightbase_color to obtain the basic light source layer with depth information. The specific calculation is shown in formula (6):

lightbase = Img_depth * lightbase_color   (6)
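A sketch of the depth weighting in formula (6); the patent does not name a specific depth estimator or a normalization, so the depth map source and the [0, 1] scaling here are assumptions:

```python
import numpy as np

def depth_weighted_base(lightbase_color, img_depth):
    # lightbase_color: (H, W, 3) initial base layer; img_depth: (H, W) depth map
    # from any monocular depth estimation algorithm.
    d = img_depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)  # normalize (assumption)
    # Formula (6): lightbase = Img_depth * lightbase_color, element-wise.
    return lightbase_color.astype(np.float32) * d[..., None]
```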
when shooting a scenery in a shooting scene, an image generating device acquires scenery images in the shooting scene in real time through a camera to obtain scenery images, the scenery images comprise orange-yellow sunset, lake water and trees, the image generating device determines a light source starting point position, a light effect scattering main direction and a light effect scattering radius according to brightness information of the scenery images, a basic light source image layer with depth information is determined, then a light effect superposition image layer is generated according to the light source starting point position, the light effect scattering main direction and the light effect scattering radius and the basic light source image layer with the depth information through a radial fuzzy algorithm, the light effect superposition image layer is shown in fig. 3 (B), the light effect superposition image layer is processed with the scenery images to obtain scenery images with the Tyndall light effect, the scenery images are shown in fig. 3 (C), and finally the scenery images with the Tyndall light effect are displayed on an image preview interface, so that a user can view the scenery images with high artistic sense of the Tyndall light effect on the image preview interface.
It should be noted that in fig. 2(B) the overall color of the light beam is white, while in fig. 3(B) it is pale yellow, i.e., similar to the color of the original landscape image, which makes the generated image with Tyndall rays more vivid.
It should also be noted that fig. 2(B) and 3(B) represent the blurred state of the image with diagonal-line fill; fig. 2(B) and 3(B) may further contain image information from the blurred original landscape image, which is not shown in the drawings.
In the embodiment of the present application, depth estimation information is introduced in the process of generating the final basic light source layer to deepen the light effect of the basic light source layer, so that the generated image with the Tyndall light effect has greater layering and a stronger stereoscopic feel.
Alternatively, in the embodiment of the present application, the process of determining the basic light source layer in the first information according to the image information of the first image in the step S202 may include the following steps S202a and S202b:
step S202a: the image generating device respectively carries out binarization processing on pixel values of each image channel of the first image by utilizing a preset threshold value to obtain a mask image corresponding to the first image.
The mask image is used for extracting a highlight region of the first image.
Step S202b: the image generating device determines a basic light source image layer according to the mask image and the third image.
The third image is a preset pure white image or the first image.
Alternatively, in the embodiment of the present application, each image channel of the first image may include an R channel, a G channel, and a B channel.
Optionally, in this embodiment of the present application, the preset threshold may include a first threshold, a second threshold, and a third threshold, where the first threshold may be a threshold corresponding to an R channel, the second threshold may be a threshold corresponding to a G channel, and the third threshold may be a threshold corresponding to a B channel.
Optionally, in an embodiment of the present application, the mask image includes mask images corresponding to an R channel, a G channel, and a B channel of the first image.
Alternatively, in the embodiment of the present application, the pixel values of the mask image may include 0 and 1.
Optionally, in this embodiment of the present application, the image generating device may perform binarization processing on a pixel value of the R channel according to the first threshold to obtain a mask image corresponding to the R channel, perform binarization processing on a pixel value of the G channel according to the second threshold to obtain a mask image corresponding to the G channel, and perform binarization processing on a pixel value of the B channel according to the third threshold to obtain a mask image corresponding to the B channel.
Illustratively, taking the first image as an original camera landscape image OriImg as an example, the image generating apparatus sets different thresholds on the different RGB channels according to the pixel information of OriImg to calculate the highlight region HighLight_mask of the whole landscape image, adapting to the ambient illumination of the landscape shooting scene. The specific implementation is shown in formula (7):

HighLight_mask_R = 1 if OriImg_R > μ1, else 0
HighLight_mask_G = 1 if OriImg_G > μ2, else 0   (7)
HighLight_mask_B = 1 if OriImg_B > μ3, else 0

wherein (HighLight_mask_R, HighLight_mask_G, HighLight_mask_B) are the mask images corresponding to the R, G, and B channels of OriImg respectively, (OriImg_R, OriImg_G, OriImg_B) are the R, G, and B channels of OriImg respectively, and μ1, μ2, μ3 are the constraint parameters of the three channels R, G, and B used in calculating HighLight_mask.
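A sketch of the per-channel binarization in formula (7); the threshold values mu are illustrative placeholders, since the patent leaves the constraint parameters unspecified:

```python
import numpy as np

def highlight_mask(ori_img, mu=(220, 220, 210)):
    # mu = (mu1, mu2, mu3): per-channel constraint parameters (placeholder values).
    r, g, b = ori_img[..., 0], ori_img[..., 1], ori_img[..., 2]
    mask = np.stack([r > mu[0], g > mu[1], b > mu[2]], axis=-1)
    return mask.astype(np.float32)  # binary HighLight_mask, one plane per channel
```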
Illustratively, after calculating the highlight region HighLight_mask of the entire landscape image, the image generating apparatus may multiply the pure white image with HighLight_mask to obtain a basic light source layer, which is used as the basic light source layer for subsequently generating the Tyndall light effect beam.
Alternatively, in the embodiment of the present application, the above-mentioned pure white image is an image having the same image size as the first image and pixel values of 255.
Further optionally, in an embodiment of the present application, the third image is a preset pure white image; illustratively, the above step S202b may be implemented by the following step S202b 1.
Step S202b1: the image generating device multiplies the pixel value of the mask image and the pixel value of the pure white image to obtain the basic light source image layer.
Optionally, in an embodiment of the present application, the mask image includes mask images corresponding to an R channel, a G channel, and a B channel of the first image.
Illustratively, the pure white image is gated by HighLight_mask to obtain the white basic light source layer lightbase of the subsequent Tyndall light effect beam. The specific expression is calculated as formula (8):

lightbase = HighLight_mask * (255, 255, 255)   (8)
further optionally, in an embodiment of the present application, the third image is a first image; illustratively, the above step S202b may be implemented by the following steps S202b2 and S202b 3.
Step S202b2: the image generating device calculates an average value of pixel values of a first image area in the first image.
The first image area is an image area corresponding to the mask image in the first image.
Step S202b3: the image generating device multiplies the pixel value of the mask image by the average value to obtain the basic light source image layer.
Illustratively, taking the first image as a landscape image OriImg as an example, the image generating apparatus takes the RGB pixel values of the HighLight_mask region of OriImg and computes their per-channel mean (r_0, g_0, b_0), then obtains the adaptive-tone basic light source layer lightbase_color from the computed mean and the mask image. The specific calculation expression is shown in formula (9):

lightbase_color = HighLight_mask * (r_0, g_0, b_0)   (9)
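A combined sketch of formulas (8) and (9); mask is the output of the formula (7) step above, and the empty-region guard is an assumption of this sketch:

```python
import numpy as np

def base_light_layer(ori_img, mask, adaptive=True):
    # Formula (8): pure-white beam base, HighLight_mask * (255, 255, 255).
    if not adaptive:
        return mask * 255.0
    region = mask.any(axis=-1)           # pixels inside the highlight area
    if not region.any():                 # guard: no highlight found (assumption)
        return mask * 255.0
    mean_rgb = ori_img[region].astype(np.float32).mean(axis=0)  # (r0, g0, b0)
    # Formula (9): adaptive tone, HighLight_mask * (r0, g0, b0).
    return mask * mean_rgb
```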
It should be noted that, because the ambient light tone differs greatly across shooting environments, determining the basic light source layer with the above adaptive-tone calculation algorithm, rather than selecting a fixed white light source tone, allows the Tyndall light effect to be adapted to the real conditions of the environment and improves the generalization of the image, so that the final Tyndall light effect better matches the real shooting scene.
Optionally, in the embodiment of the present application, the process of determining the light source start point position in the first information according to the image information of the first image in step S202 may include the following steps S202c and S202d:
Step S202c: in the case where it is detected that the first image includes a luminous object, the image generating apparatus determines the position of the luminous object in the first image.
Step S202d: the image generating apparatus determines the position of the luminous object in the first image as the light source start point position.
Alternatively, in the embodiment of the present application, the image generating device may detect whether the first image includes the luminescent object by a luminescent object detection algorithm. The above-mentioned luminescent object may be a natural light source, such as the sun, or a luminescent lamp, for example.
For example, with the camera aimed at a scene to be photographed, the image generating apparatus detects whether the captured image includes the sun using a luminous-object detection algorithm. When the sun is determined to be present, the sun center point position (x_sun, y_sun) is taken as the light source start point position Light_source.
In this way, by detecting whether a luminous object is present in the shooting scene, the image generating apparatus can, when a luminous object is present, quickly determine the light source start point position from the position of the luminous object in the image.
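The patent does not name a specific luminous-object detection algorithm; the following is one possible stand-in that treats near-saturated pixels as the emitter and returns their centroid:

```python
import numpy as np

def detect_luminous_object(img, thresh=250):
    # thresh: brightness cutoff for "emitting" pixels (assumption of this sketch).
    lum = img.astype(np.float32).mean(axis=-1)  # per-pixel average brightness
    ys, xs = np.nonzero(lum >= thresh)
    if xs.size == 0:
        return None          # no emitter found; fall back to the decay search below
    return int(xs.mean()), int(ys.mean())       # centroid, e.g. (x_sun, y_sun)
```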
Alternatively, in the embodiment of the present application, the process of determining the light source start point position in the first information according to the image information of the first image in the step S202 may include the following steps S202e and S202f:
step S202e: and the image generating device carries out iterative convolution processing on the first image through a convolution kernel of the convolution neural network to obtain average brightness information of M image blocks corresponding to the first image.
Wherein M is a positive integer.
Step S202f: the image generating device determines the position of the central pixel point of the first image block as the light source starting point position.
The first image block is the image block with the largest average brightness information among the M image blocks.
Optionally, in the embodiment of the present application, the image generating apparatus may adaptively calculate the light source point position of the Tyndall light effect using a light source start point distribution calculation algorithm based on the exponentially decaying convolution search of a convolutional neural network.
Optionally, in an embodiment of the present application, a convolution kernel size of the convolution kernel is smaller than a size of the first image.
Illustratively, taking the first image as a landscape image as an example, starting from the upper-left corner of the landscape image, the image generating apparatus slices the picture with exponentially decaying sliding-window convolution blocks, where each slicing pass retains only the block with the highest average brightness avgLight, and the decay coefficient is 2^i (i = 1, 2, …, p), so as to quickly locate the light source start point; here i denotes the slicing pass. Following the rules of convolution operations, the embodiment of the present application defines the minimum convolution block size kernel_size as (3, 3). The light source start point position Light_source is calculated as follows:
First, the convolution kernel size kernel_size_i of the i-th decaying slicing pass is calculated for the subsequent sliding window, as shown in formula (10), where p denotes the number of slicing passes until the convolution kernel size decays exponentially to (3, 3). kernel_size must follow the basic rules of convolutional networks and be odd, so the floor function y = [x] (also called the Gauss function) is introduced: the largest integer not exceeding the real number x is called the integer part of x, denoted [x].
For example, on the first slicing operation the convolution kernel covers roughly half of the image in each dimension, on the second roughly a quarter, and on the last slicing operation the kernel size is (3, 3); that is, the convolution kernel size decreases exponentially over the successive convolution operations.
Next, the picture is sliced exponentially through the convolution block in sliding-window fashion, and for each candidate block ImgPic_ij of the i-th pass the corresponding average brightness avgLight(ImgPic_ij) is computed, as shown in formula (11):

avgLight(ImgPic_ij) = mean(ImgPic_ij_R + ImgPic_ij_G + ImgPic_ij_B) / 3   (11)

where i = 1, 2, …, p denotes the slicing pass, j = 1, 2, …, (W - kernel_size_i + 1) * (H - kernel_size_i + 1) denotes the sliding-window position within the i-th pass, and (ImgPic_ij_R, ImgPic_ij_G, ImgPic_ij_B) are the R, G, and B channels of the image block ImgPic_ij.
Illustratively, after all j blocks of the i-th sliding-window pass have been evaluated, the image block with the maximum average brightness avgLight_ij is retained as the block region ImgPic_i of the i-th pass, as shown in formula (12):

ImgPic_i = argmax_j avgLight(ImgPic_ij)   (12)
further iterating the next dicing to quickly iterate the image block ImgPic with the highest brightness p Then sliding window is carried out by utilizing the convolution blocks of (3, 3) to obtain the image block with the highest final average brightness and 3*3, and the coordinates (x 0 ,y 0 ) I.e. the Light source starting point Light of the whole image source Is defined by the coordinates of (a).
ImgPic 1 =>ImgPic 2 =>ImgPic p =>Light source (x 0 ,y 0 )
For example, when the image generating apparatus performs the first slicing pass on the landscape image, the convolution kernel size of the first pass may be (m1, n1); after the moving convolution kernel completes the first slicing operation on the landscape image, x1 image blocks are obtained, each of size m1 * n1. The image generating apparatus then selects the image block with the largest average brightness from the x1 blocks, sets the kernel size of the next slicing operation to (m2, n2), and slides the kernel over that block in a second slicing pass to obtain x2 image blocks, each of size m2 * n2. Next, it selects the brightest of the x2 blocks, sets the kernel size to (m3, n3), and performs a third slicing pass to obtain x3 image blocks, each of size m3 * n3, then performs a fourth pass on the brightest of those, and so on; when the kernel size of the slicing operation has decayed to (3, 3), the kernel is slid one final time to obtain the 3*3 image block with the highest average brightness.
It should be noted that the above convolution block is a convolution kernel, and one slicing operation is one convolution operation.
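A sketch of the exponentially decaying sliding-window search of formulas (10) to (12); the stride and the odd-size adjustment are implementation assumptions, and each pass slides only within the previously retained block:

```python
import numpy as np

def find_light_source(img, min_kernel=3):
    lum = img.astype(np.float32).mean(axis=-1)  # per-pixel average brightness
    x0 = y0 = 0                                 # offset of the current block in the full image
    block = lum
    k = (min(block.shape) // 2) | 1             # first kernel: ~half the image, forced odd
    while k >= min_kernel:
        h, w = block.shape
        stride = max(k // 2, 1)                 # strided windows for speed (assumption)
        best, bx, by = -1.0, 0, 0
        for y in range(0, h - k + 1, stride):
            for x in range(0, w - k + 1, stride):
                avg = block[y:y + k, x:x + k].mean()  # avgLight, cf. formula (11)
                if avg > best:
                    best, bx, by = avg, x, y          # brightest block, cf. formula (12)
        block = block[by:by + k, bx:bx + k]     # iterate inside the brightest block
        x0, y0 = x0 + bx, y0 + by
        if k == min_kernel:
            break
        k = max((k // 2) | 1, min_kernel)       # exponential decay of the kernel size
    return x0 + k // 2, y0 + k // 2             # center of the final 3x3 block: (x_0, y_0)
```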
In the embodiment of the present application, an adaptive Tyndall light effect light source start point calculation algorithm based on exponentially decaying convolution is adopted, so that the position of the Tyndall light effect can be determined adaptively for different shooting environments and illumination conditions without the user manually adding a light source start point; the light source start point can therefore be determined conveniently and accurately.
Alternatively, in the embodiment of the present application, the image generating device may determine at least one of the light-effect scattering main direction and the light-effect scattering radius according to the light source start point position.
Illustratively, after the light source start point Light_source is determined, with Light_source at (x_0, y_0), the light effect scattering main direction Dir may be taken as the vector direction from the light source start position (x_0, y_0) to the image center position Img_center, and the light effect scattering radius Light_Radius as the distance from the light source start position to the image center position. The specific calculation expressions are shown in formulas (13) and (14):

Dir = (x_c - x_0, y_c - y_0)   (13)

Light_Radius = sqrt((x_c - x_0)^2 + (y_c - y_0)^2)   (14)

wherein Dir denotes the light effect scattering main direction, Img_center(x_c, y_c) are the coordinates of the image center, and Light_source(x_0, y_0) are the coordinates of the light source start point.
In this way, once the light source start point position is obtained, the light effect scattering main direction and the light effect scattering radius can be obtained quickly from it, improving processing efficiency.
The image generation method provided in the embodiment of the present application is exemplarily described below by two embodiments.
Embodiment 1
For example, as shown in fig. 4, the above-described image generation method may include the following steps 11 to 17:
Step 11: a click input on the Tyndall light effect mode option of the camera application is received.
Step 12: a first image is captured through the camera, and a sun detection algorithm is triggered.
Step 13: judge whether the sun is present in the shooting environment; if yes, execute step 14 below; if not, execute step 15 below.
Step 14: determine the optimal light effect scattering main direction and the optimal light effect scattering radius, taking the sun center point as the light source point.
Step 15: adaptively calculate the light source point position, the optimal light effect scattering main direction, and the optimal light effect scattering radius of the Tyndall light effect using the light source start point distribution calculation algorithm based on exponentially decaying convolution search.
Optionally, the following steps 16 and 17 may be further included after step 14 or step 15:
Step 16: acquire the highlight region of the first image and calculate the basic light source layer of the Tyndall light effect beam.
Step 17: generate a preview image Img_DDR for the user in real time through the adaptive Tyndall light effect generation algorithm, and display the preview image on the shooting preview interface.
It should be noted that for the explanation of this embodiment, reference may be made to the description above; details are not repeated here.
Embodiment 2
For example, as shown in fig. 5, the above-described image generation method may include the following steps 21 to 28:
Step 21: a click input on the Tyndall light effect mode option of the camera application is received.
Step 22: a first image is captured through the camera, and a sun detection algorithm is triggered.
Step 23: judge whether the sun is present in the shooting environment; if yes, execute step 24 below; if not, execute step 25 below.
Step 24: determine the optimal light effect scattering main direction and the optimal light effect scattering radius, taking the sun center point as the light source point.
Step 25: adaptively calculate the light source point position, the optimal light effect scattering main direction, and the optimal light effect scattering radius of the Tyndall light effect using the light source start point distribution calculation algorithm based on exponentially decaying convolution search.
Optionally, the following steps 26 to 28 may be further included after step 24 or step 25:
Step 26: acquire depth information of the first image.
Step 27: acquire the highlight region of the first image and calculate the basic light source layer of the Tyndall light effect beam.
Step 28: generate a preview image Img_DDR for the user in real time through the Tyndall light effect generation algorithm combining the adaptive beam tone with depth information, and display the preview image on the shooting preview interface.
It should be noted that for the explanation of this embodiment, reference may be made to the description above; details are not repeated here.
The foregoing method embodiments, or the various possible implementations therein, may be executed separately or in combination with one another provided there is no contradiction, as determined according to actual use requirements; this is not limited by the embodiments of the present application.
According to the image generation method provided by the embodiment of the application, the execution subject can be an image generation device. In the embodiment of the present application, an image generating apparatus provided in the embodiment of the present application will be described by taking an example in which the image generating apparatus executes an image generating method.
Fig. 6 is a schematic structural diagram of an image generation apparatus according to an embodiment of the present application. As shown in fig. 6, the image generation apparatus 600 may include an acquisition module 601, a determination module 602, and a generation module 603, wherein: the acquisition module 601 is configured to acquire a first image; the determination module 602 is configured to determine first information according to the image information of the first image acquired by the acquisition module 601, where the first information includes a light source start point position and a basic light source layer; the generation module 603 is configured to generate a second image based on the first information determined by the determination module 602, where the second image is an image having the Tyndall light effect relative to the first image.
Optionally, in this embodiment of the present application, the determining module is specifically configured to perform binarization processing on pixel values of each image channel of the first image by using a preset threshold, so as to obtain a mask image corresponding to the first image; the determining module is specifically configured to determine the basic light source layer according to the mask image and the third image; the third image is a preset pure white image or the first image.
Optionally, in an embodiment of the present application, the third image is a preset pure white image; the determining module is specifically configured to multiply the pixel value of the mask image with the pixel value of the pure white image to obtain the basic light source layer.
Optionally, in an embodiment of the present application, the third image is the first image; the determining module is specifically configured to calculate a mean value of pixel values of a first image area in the first image, where the first image area is the image area corresponding to the mask image in the first image; the determining module is specifically configured to multiply the pixel value of the mask image with the mean value to obtain the basic light source layer.
Optionally, in an embodiment of the present application, the determining module is specifically configured to determine, when it is detected that the first image includes a light emitting object, a position of the light emitting object in the first image; the determining module is specifically configured to determine a position of the light-emitting object in the first image as the light source start point position.
Optionally, in this embodiment of the present application, the determining module is specifically configured to perform iterative convolution processing on the first image through a convolution kernel of a convolutional neural network to obtain average brightness information of M image blocks corresponding to the first image, where M is a positive integer; the determining module is specifically configured to determine the position of the center pixel point of a first image block as the light source start point position, where the first image block is the image block with the largest average brightness information among the M image blocks.
Optionally, in an embodiment of the present application, the generating module is specifically configured to perform radial blurring processing on the base light source layer according to the light source starting point position to obtain a light effect superimposed layer; the generating module is specifically configured to perform superposition processing on the light effect superposition layer and the first image to obtain the second image.
With the image generation apparatus provided in the embodiment of the present application, the apparatus acquires a first image through a camera, determines first information according to image information of the first image, the first information including a light source start point position and a basic light source layer, and generates a second image based on the first information through a radial blur algorithm, the second image being an image with the Tyndall light effect relative to the first image. In this way, when an image is shot, the image generation apparatus can quickly generate an image with the Tyndall light effect based on the determined light source start point position and basic light source layer, so that a highly artistic Tyndall light effect image can be obtained conveniently.
The image generation apparatus in the embodiments of the present application may be an electronic device, or a component in an electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile Internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The image generation apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image generating device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 7, the embodiments of the present application further provide an electronic device 700, including a processor 701 and a memory 702, where the memory 702 stores a program or instructions executable by the processor 701. The program or instructions, when executed by the processor 701, implement the steps of the foregoing image generation method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include both the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically coupled to the processor 110 via a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail herein.
The processor 110 is configured to: acquire a first image through a camera; determine first information according to the image information of the acquired first image, where the first information includes a light source start point position and a basic light source layer; and generate a second image based on the determined first information, where the second image is an image having a Tyndall light effect relative to the first image.
Optionally, in this embodiment of the present application, the processor 110 is specifically configured to perform binarization processing on pixel values of each image channel of the first image by using a preset threshold, so as to obtain a mask image corresponding to the first image; the processor 110 is specifically configured to determine the basic light source layer according to the mask image and the third image; the third image is a preset pure white image or the first image.
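A minimal sketch of this masking step, assuming the per-channel binarization results are combined with a logical AND so that only pixels bright in every channel enter the mask; the threshold value and the combination rule are assumptions, as the embodiment states only that each channel is binarized against a preset threshold.

import numpy as np

def highlight_mask(first_image, thresh=200):
    """Binarize each image channel against `thresh`; return an H x W mask."""
    binary = first_image >= thresh                  # per-channel binarization
    return binary.all(axis=2).astype(np.float32)    # 1.0 where all channels pass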
Optionally, in an embodiment of the present application, the third image is a preset pure white image; the processor 110 is specifically configured to multiply the pixel value of the mask image with the pixel value of the pure white image to obtain the basic light source layer.
Optionally, in an embodiment of the present application, the third image is the first image; the processor 110 is specifically configured to calculate a mean value of pixel values of a third image area in the first image, where the third image area is an image area corresponding to the mask image in the first image; the processor 110 is specifically configured to multiply the pixel value of the mask image with the average value to obtain the basic light source layer.
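The two variants of the basic light source layer can be sketched together as below: base_layer_white follows the pure-white branch, and base_layer_mean the mean-of-masked-region branch. The function names are illustrative, and pixel values are assumed normalized to [0, 1].

import numpy as np

def base_layer_white(mask):
    """Variant 1: mask x preset pure-white image -> H x W x 3 layer."""
    white = np.ones((*mask.shape, 3), dtype=np.float32)
    return mask[..., None] * white

def base_layer_mean(mask, first_image):
    """Variant 2: mask x mean pixel value of the masked region of the image."""
    img = first_image.astype(np.float32) / 255.0
    region = img[mask > 0]                     # pixels covered by the mask
    mean = region.mean(axis=0) if region.size else np.zeros(3, np.float32)
    return mask[..., None] * mean              # layer carries the scene's tone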
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to determine, when it is detected that the first image includes a light emitting object, a position of the light emitting object in the first image; the processor 110 is specifically configured to determine a position of the light-emitting object in the first image as the light source start point position.
Optionally, in this embodiment of the present application, the processor 110 is specifically configured to perform iterative convolution processing on the first image through a convolution kernel of a convolutional neural network to obtain average brightness information of M image blocks corresponding to the first image, where M is a positive integer; the processor 110 is specifically configured to determine a position of a center pixel of a third image block as the light source start point position, where the third image block is an image block with the largest average brightness information among the M image blocks.
Optionally, in this embodiment of the present application, the processor 110 is specifically configured to perform a radial blurring process on the base light source layer according to the position of the light source starting point to obtain a light effect superimposed layer; the processor 110 is specifically configured to perform a superposition process on the light effect superposition layer and the first image to obtain the second image.
According to the electronic device provided by the embodiments of the present application, the electronic device acquires a first image through the camera and determines first information according to the image information of the first image, where the first information includes at least one of the following: a light source start point position, a main light effect scattering direction, a light effect scattering radius, and a basic light source layer; the electronic device then generates, by a radial blurring algorithm, a second image based on the first information, where the second image is an image having a Tyndall light effect relative to the first image. In this way, when an image is shot, the electronic device can quickly generate an image with the Tyndall light effect based on the determined light source start point position and the basic light source layer, so that a Tyndall light effect image with a strong artistic sense can be conveniently obtained.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the processes of the embodiment of the image generating method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the image generation method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the image generating method described above, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; they may also perform the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or certainly by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image generation method, the method comprising:
acquiring a first image;
determining first information according to the image information of the first image, wherein the first information comprises a light source starting point position and a basic light source image layer;
and generating a second image based on the first information, wherein the second image is an image with a Tyndall light effect relative to the first image.
2. The method of claim 1, wherein determining a base light source layer in the first information from image information of the first image comprises:
respectively carrying out binarization processing on pixel values of each image channel of the first image by using a preset threshold value to obtain a mask image corresponding to the first image;
determining the basic light source image layer according to the mask image and the third image;
the third image is a preset pure white image or the first image.
3. The method of claim 2, wherein the third image is a preset pure white image; the determining the basic light source layer according to the mask image and the third image comprises the following steps:
multiplying the pixel value of the mask image by the pixel value of the pure white image to obtain the basic light source image layer.
4. The method of claim 2, wherein the third image is the first image; the determining the basic light source layer according to the mask image and the third image comprises the following steps:
calculating the average value of pixel values of a first image area in the first image, wherein the first image area is an image area corresponding to the mask image in the first image;
and multiplying the pixel value of the mask image by the average value to obtain the basic light source image layer.
5. The method of claim 1, wherein determining a light source start point position in the first information from image information of the first image comprises:
determining a position of a light-emitting object in the first image in case that it is detected that the light-emitting object is included in the first image;
and determining the position of the luminous object in the first image as the light source starting point position.
6. The method of claim 1, wherein determining a light source start point position in the first information from image information of the first image comprises:
performing iterative convolution processing on the first image through a convolution kernel of a convolutional neural network to obtain average brightness information of M image blocks corresponding to the first image, wherein M is a positive integer;
and determining the position of the central pixel point of a first image block as the position of the light source starting point, wherein the first image block is the image block with the largest average brightness information in the M image blocks.
7. The method of claim 1, wherein the generating a second image based on the first information comprises:
according to the position of the light source starting point, performing radial blurring processing on the basic light source layer to obtain a light effect superposition layer;
and superposing the light effect superposition layer and the first image to obtain the second image.
8. An image generation apparatus, the apparatus comprising: the device comprises an acquisition module, a determination module and a generation module, wherein:
the acquisition module is used for acquiring a first image through the camera;
the determining module is configured to determine first information according to the image information of the first image acquired by the acquiring module, where the first information includes at least one of: the light source starting point position, the light effect scattering main direction, the light effect scattering radius and the basic light source layer;
The generation module is configured to generate, by using a radial blurring algorithm, a second image based on the first information determined by the determination module, wherein the second image is an image having a Tyndall light effect with respect to the first image.
9. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image generation method of any of claims 1-7.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the image generation method according to any of claims 1-7.
CN202311208987.1A 2023-09-18 2023-09-18 Image generation method, device, electronic equipment and readable storage medium Pending CN117372555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311208987.1A CN117372555A (en) 2023-09-18 2023-09-18 Image generation method, device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN117372555A 2024-01-09

Family

ID=89401272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311208987.1A Pending CN117372555A (en) 2023-09-18 2023-09-18 Image generation method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117372555A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination