CN116263941A - Image processing method, device, storage medium and electronic equipment

Image processing method, device, storage medium and electronic equipment

Info

Publication number
CN116263941A
CN116263941A (Application CN202111521335.4A)
Authority
CN
China
Prior art keywords
target
preset
image
brightness
channel
Prior art date
Legal status
Pending
Application number
CN202111521335.4A
Other languages
Chinese (zh)
Inventor
李子沁
彭鑫
周代国
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Technology Wuhan Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Technology Wuhan Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd, Beijing Xiaomi Pinecone Electronic Co Ltd, Xiaomi Technology Wuhan Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202111521335.4A (CN116263941A)
Priority to PCT/CN2022/090722 (WO2023108992A1)
Publication of CN116263941A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring light transmission information of an image to be processed in a preset illumination environment; performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment. The reflection coefficient of the target object in the image to be processed in the preset illumination environment can be represented by the preset shadow attenuation parameter, so that a target shadow image matched with the actual preset illumination environment can be obtained, and the efficiency of shadow image acquisition can be improved because manual shooting is not required.

Description

Image processing method, device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, a storage medium, and an electronic apparatus.
Background
With the continuous development of computer and internet technologies, shadow removal for images has received increasing attention. Shadow removal can be implemented with a preset model, and a large number of shadow images and non-shadow images corresponding to each other are required as training data for training the preset model. In the related art, shadow images and non-shadow images can be obtained as training data by manual shooting, but obtaining shadow images by manual shooting is time-consuming, labor-intensive, and inefficient.
Disclosure of Invention
In order to overcome the above-mentioned problems in the related art, the present disclosure provides an image processing method, apparatus, storage medium, and electronic device.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring light transmission information of an image to be processed in a preset illumination environment;
performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment.
Optionally, the light transmission information includes a target opacity for each pixel location of the image to be processed; performing shadow processing on the image to be processed according to a preset shadow attenuation parameter and the light transmission information, and obtaining a target shadow image corresponding to the image to be processed comprises the following steps:
determining target reflection brightness of each pixel position according to a preset shadow attenuation parameter, wherein the target reflection brightness is used for representing the reflection brightness of a target object in the image to be processed in the preset illumination environment;
And carrying out shadow processing on the image to be processed according to the target opacity and the target reflection brightness to obtain the target shadow image.
Optionally, the performing shadow processing on the image to be processed according to the target opacity and the target reflection brightness to obtain the target shadow image includes:
acquiring target brightness of each pixel position according to the first brightness, the target opacity and the target reflection brightness of each pixel position of the image to be processed;
and generating the target shadow image according to the target brightness of the plurality of pixel positions.
Optionally, the image to be processed includes a plurality of channels, the first brightness includes a first channel brightness of the plurality of channels, and the target reflection brightness includes a target channel reflection brightness of the plurality of channels; the obtaining the target brightness of each pixel position according to the first brightness, the target opacity and the target reflection brightness of each pixel position of the image to be processed comprises:
for each channel of each pixel position of the image to be processed, according to the first channel brightness, the target opacity and the target channel reflection brightness, calculating to obtain a target single channel brightness of the channel of the pixel position by the following formula:
XS_k = (1 - m) * XN_k + m * XD_k
wherein XS_k represents the target single-channel luminance of the k-th channel, m represents the target opacity, XN_k represents the first channel luminance of the k-th channel, and XD_k represents the target channel reflection luminance of the k-th channel;
and combining the target single-channel brightness of the multiple channels to obtain the target brightness of the pixel position.
Optionally, the preset shadow attenuation parameters include preset direct reflection brightness and preset ambient light attenuation factors, wherein the preset direct reflection brightness is used for representing the reflection brightness of the target object to the direct illumination light source in the preset illumination environment, and the preset ambient light attenuation factors are used for representing the attenuation factors of the ambient illumination light source in the preset illumination environment; the determining the target reflection brightness of each pixel position according to the preset shadow attenuation parameter comprises the following steps:
and obtaining the target reflection brightness of each pixel position according to the preset direct reflection brightness, the preset ambient light attenuation factor and the first brightness.
Optionally, the preset direct reflection luminance includes a preset direct reflection channel luminance for each channel; the obtaining the target reflection brightness of each pixel position according to the preset direct reflection brightness, the preset ambient light attenuation factor and the first brightness includes:
For each pixel position, according to the preset direct reflection channel brightness, the preset ambient light attenuation factor and the first channel brightness, calculating to obtain the target channel reflection brightness of each channel of the pixel position through the following formula:
[Formula published as an image in the original document; it expresses XD_k in terms of XN_k, α_k and γ]
wherein XD_k represents the target channel reflection luminance of the k-th channel, XN_k represents the first channel luminance of the k-th channel, α_k represents the preset direct reflection channel luminance of the k-th channel, and γ represents the preset ambient light attenuation factor;
the target channel reflection luminance of the plurality of channels is taken as the target reflection luminance of the pixel position.
Optionally, the obtaining the light transmission information of the image to be processed in the preset illumination environment includes:
determining a preset illumination environment, wherein the preset illumination environment comprises a preset light source, a preset shielding object, a preset camera and a preset virtual plane;
capturing the model brightness of each pixel position in the preset virtual plane through the preset camera in the preset illumination environment;
and determining the light transmission information of the image to be processed in the preset illumination environment according to the preset virtual plane and the model brightness.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus including:
The information acquisition module is configured to acquire light transmission information of the image to be processed in a preset illumination environment;
the image processing module is configured to perform shadow processing on the image to be processed according to a preset shadow attenuation parameter and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment.
Optionally, the light transmission information includes a target opacity for each pixel location of the image to be processed; the image processing module is configured to determine target reflection brightness of each pixel position according to a preset shadow attenuation parameter, wherein the target reflection brightness is used for representing the reflection brightness of a target object in the image to be processed in the preset illumination environment; and carrying out shadow processing on the image to be processed according to the target opacity and the target reflection brightness to obtain the target shadow image.
Optionally, the image processing module is configured to obtain a target brightness of each pixel position of the image to be processed according to the first brightness, the target opacity and the target reflection brightness of each pixel position; and generating the target shadow image according to the target brightness of the plurality of pixel positions.
Optionally, the image to be processed includes a plurality of channels, the first brightness includes a first channel brightness of the plurality of channels, and the target reflection brightness includes a target channel reflection brightness of the plurality of channels; the image processing module is configured to, for each channel of each pixel position of the image to be processed, calculate a target single-channel luminance of the channel at the pixel position from the first channel brightness, the target opacity and the target channel reflection brightness by the following formula: XS_k = (1 - m) * XN_k + m * XD_k; wherein XS_k represents the target single-channel luminance of the k-th channel, m represents the target opacity, XN_k represents the first channel luminance of the k-th channel, and XD_k represents the target channel reflection luminance of the k-th channel; and to combine the target single-channel luminances of the multiple channels to obtain the target brightness of the pixel position.
Optionally, the preset shadow attenuation parameters include preset direct reflection brightness and preset ambient light attenuation factors, wherein the preset direct reflection brightness is used for representing the reflection brightness of the target object to the direct illumination light source in the preset illumination environment, and the preset ambient light attenuation factors are used for representing the attenuation factors of the ambient illumination light source in the preset illumination environment; the image processing module is configured to obtain target reflection brightness of each pixel position according to the preset direct reflection brightness, the preset ambient light attenuation factor and the first brightness.
Optionally, the preset direct reflection luminance includes a preset direct reflection channel luminance for each channel; the image processing module is configured to calculate, for each pixel position, the target channel reflection brightness of each channel of the pixel position according to the preset direct reflection channel brightness, the preset ambient light attenuation factor and the first channel brightness by the following formula:
[Formula published as an image in the original document; it expresses XD_k in terms of XN_k, α_k and γ]
wherein XD_k represents the target channel reflection luminance of the k-th channel, XN_k represents the first channel luminance of the k-th channel of the image to be processed, α_k represents the preset direct reflection channel luminance of the k-th channel, and γ represents the preset ambient light attenuation factor; the target channel reflection luminances of the plurality of channels are taken as the target reflection luminance of the pixel position.
Optionally, the information acquisition module is configured to determine a preset illumination environment, where the preset illumination environment includes a preset light source, a preset shade, a preset camera, and a preset virtual plane; capturing the model brightness of each pixel position in the preset virtual plane through the preset camera in the preset illumination environment; and determining the light transmission information of the image to be processed in the preset illumination environment according to the preset virtual plane and the model brightness.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the image processing method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the image processing method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects: acquiring light transmission information of an image to be processed in a preset illumination environment; performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment. The reflection coefficient of the target object in the image to be processed in the preset illumination environment can be represented by the preset shadow attenuation parameter, so that a target shadow image matched with the actual preset illumination environment can be obtained, and the efficiency of shadow image acquisition can be improved because manual shooting is not required.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a step S102 according to the embodiment shown in fig. 1.
Fig. 3 is a flowchart illustrating a step S101 according to the embodiment shown in fig. 1.
Fig. 4 is a schematic diagram illustrating a preset lighting environment according to an exemplary embodiment.
Fig. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram of an electronic device, shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
First, an application scenario of the present disclosure will be described. The present disclosure may be applied to image processing scenarios, particularly to scenarios in which an image is shadow-processed. In order to train a preset model for shadow removal, a large number of shadow images and non-shadow images corresponding to each other are required as training data. In the related art, shadow images and non-shadow images can be obtained as training data by manual shooting, but obtaining the images by manual shooting is time-consuming, labor-intensive, and inefficient. In addition to manual shooting, shadow processing of an image may also be performed in the following ways:
shadow processing is carried out on the image in a 3D shadow rendering mode: shadows are rendered on the image by placing the obstruction and by ray tracing. The result of the rendering in this way may be incorrect on a physical level, but since the way cannot adjust the shadow rendering effect according to the material information of the target object in the image, the result of the rendering may be greatly different from the actual shadow image.
The image is shaded by GAN (Generative Adversarial Networks), generative countermeasure network. According to the method, the training samples are required to be acquired, the GAN model is obtained after training is carried out according to the training samples, however, the acquisition cost of the training samples is high, the efficiency is low, fewer training samples can be acquired, and therefore the generated GAN model is low in accuracy, and few shadow types are provided.
It can be seen that the above-described method for shading an image cannot provide an accurate shadow image.
In addition to the need for a large number of shadow images when training the preset model for removing shadows, other scenarios, such as virtual reality, also generally require shadow processing of non-shadow images in order to present richer and more realistic everyday environmental information. For example, in a virtual reality scene, a shadow effect is needed when a target object is blocked by a building or a tree, and the image corresponding to the target object needs to be subjected to shadow processing.
In order to solve the above problems, the present disclosure provides an image processing method, an apparatus, a storage medium, and an electronic device, which may perform a shadow processing on an image to be processed according to light transmission information of the image to be processed in a preset illumination environment and a preset shadow attenuation parameter, to obtain a target shadow image corresponding to the image to be processed.
The present disclosure is described below in connection with specific embodiments.
Fig. 1 is a diagram illustrating an image processing method according to an exemplary embodiment, and as shown in fig. 1, an execution subject of the method may be a terminal, and the method may include:
s101, acquiring light transmission information of an image to be processed in a preset illumination environment.
The light transmission information can be used for representing the transparency or the opacity of the image to be processed in a preset illumination environment. The image to be processed may be an unshaded image.
S102, performing shadow processing on the image to be processed according to a preset shadow attenuation parameter and the light transmission information to obtain a target shadow image corresponding to the image to be processed.
The preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment.
The target object may be a target item, a target animal, a target plant, or a target person, which is not limited in this disclosure. The material, color, shape, and other attributes of the target object affect the reflection coefficient, so the preset shadow attenuation parameter can be determined according to the material, color, shape, and other attributes of the target object. In addition, the preset shadow attenuation parameter may include a plurality of different parameter values for representing the different reflection coefficients of target objects of different materials, colors, or forms in the preset illumination environment.
By adopting the method in the embodiment of the disclosure, the light transmission information of the image to be processed in the preset illumination environment is obtained; performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment. The reflection coefficient of the target object in the image to be processed in the preset illumination environment can be represented by the preset shadow attenuation parameter, so that a target shadow image matched with the actual preset illumination environment can be obtained, and the efficiency of shadow image acquisition can be improved because manual shooting is not required.
Furthermore, there may be one or more preset illumination environments; each preset illumination environment corresponds to one piece of light transmission information, and each preset illumination environment may also correspond to a plurality of preset shadow attenuation parameters, so that shadow processing is performed according to combinations of the plurality of pieces of light transmission information and the plurality of preset shadow attenuation parameters, and diversified and vivid target shadow images may be obtained.
In another embodiment of the present disclosure, the above-described light transmission information may include a target opacity for each pixel location of the image to be processed.
For example, the target opacity of each pixel position may be used to characterize how strongly the illumination at that pixel position is blocked. If the pixel position lies within the umbra, its illumination is completely blocked and the shadow intensity is at its maximum, so the target opacity corresponding to the pixel position may be determined to be 1; if the pixel position lies within the penumbra, its illumination is partially blocked and the shadow intensity lies between the minimum and the maximum, so the target opacity corresponding to the pixel position may be determined, according to the shadow intensity, to be a value greater than 0 and less than 1; if the pixel position is not covered by the shadow, its shadow intensity is at its minimum, and the target opacity corresponding to the pixel position may be determined to be 0.
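As a minimal illustrative sketch of this mapping (not part of the original disclosure; the function name and the assumption that a normalized per-pixel shadow intensity is already available are introduced here for illustration only):

```python
import numpy as np

def shadow_intensity_to_opacity(shadow_intensity: np.ndarray) -> np.ndarray:
    """Map a per-pixel shadow intensity in [0, 1] to the target opacity m.

    Umbra pixels (intensity 1) yield m = 1, unshadowed pixels (intensity 0)
    yield m = 0, and penumbra pixels keep an intermediate value in (0, 1).
    """
    return np.clip(shadow_intensity, 0.0, 1.0)
```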
Fig. 2 is a flowchart illustrating a step S102 according to the embodiment shown in fig. 1, and as shown in fig. 2, the step S102 may include the following steps:
s1021, determining the target reflection brightness of each pixel position according to the preset shadow attenuation parameters.
The target reflection brightness is used for representing the reflection brightness of the target object in the image to be processed in the preset illumination environment.
Alternatively, the corresponding preset shadow attenuation parameter may be determined according to the material of the target object, so as to determine the brightness of the reflected light corresponding to the material of the target object.
S1022, performing shadow processing on the image to be processed according to the target opacity and the target reflection brightness to obtain a target shadow image.
In this step, the image to be processed may include a plurality of pixels, and the target brightness of each pixel position may be first obtained; then, a target shadow image is generated based on the target brightness of the plurality of pixel locations.
For example, the brightness of each pixel position of the image to be processed may be adjusted to the target brightness, thereby forming a shadow effect and obtaining a target shadow image.
The manner of obtaining the target luminance for each pixel location may include: and acquiring the target brightness of each pixel position according to the first brightness, the target opacity and the target reflection brightness of each pixel position of the image to be processed.
It should be noted that an image may include three channels of red, green and blue, and the first luminance may include the first channel luminance of each of the red, green and blue channels; conversely, the first channel luminances corresponding to the three red, green and blue channels of a pixel position may also be combined into the first luminance of that pixel position. Therefore, the target single-channel luminance of each channel of each pixel position can be calculated first, and then the target luminance of the pixel position can be obtained by combining the target single-channel luminances of the red, green and blue channels.
Illustratively, the image to be processed includes a plurality of channels, the first luminance includes a first channel luminance of the plurality of channels, and the target reflected luminance includes a target channel reflected luminance of the plurality of channels; the target single channel luminance for each channel for each pixel location of the image to be processed can be calculated from the first channel luminance, the target opacity and the target channel reflection luminance by the following equation (1):
XS_k = (1 - m) * XN_k + m * XD_k    (1);
wherein XS_k represents the target single-channel luminance of the k-th channel, m represents the target opacity, XN_k represents the first channel luminance of the k-th channel, and XD_k represents the target channel reflection luminance of the k-th channel.
In an alternative embodiment, each of XS_k, XN_k, XD_k and m described above may be a value greater than or equal to 0 and less than or equal to 1.
The target single channel intensities for the multiple channels may then be combined to obtain the target intensity for that pixel location. The specific combining method can refer to a method of combining the brightness of the red, green and blue channels in the prior art, which is not described in detail in this disclosure.
By the method, the target brightness of each pixel position of the image to be processed can be obtained, so that the shadow processing of the image to be processed is completed, and the processed target shadow image reflects the shadow effect more realistically.
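As an illustration of the computation described above, the following is a minimal sketch of the per-pixel, per-channel blend of formula (1) followed by the channel combination; the array names and the use of NumPy are assumptions, and the disclosure does not prescribe any particular implementation:

```python
import numpy as np

def blend_shadow(unshadowed: np.ndarray, fully_shadowed: np.ndarray,
                 opacity: np.ndarray) -> np.ndarray:
    """Apply XS_k = (1 - m) * XN_k + m * XD_k per pixel and per channel.

    unshadowed:     H x W x 3 array of first channel luminances XN_k in [0, 1]
    fully_shadowed: H x W x 3 array of target channel reflection luminances XD_k
    opacity:        H x W array of target opacities m in [0, 1]
    """
    m = opacity[..., np.newaxis]          # broadcast m over the RGB channels
    return (1.0 - m) * unshadowed + m * fully_shadowed

# Usage example: a uniform grey image, a darker shadow layer and a soft mask.
if __name__ == "__main__":
    xn = np.full((4, 4, 3), 0.8)
    xd = np.full((4, 4, 3), 0.3)
    mask = np.linspace(0.0, 1.0, 16).reshape(4, 4)
    xs = blend_shadow(xn, xd, mask)
    print(xs.shape, float(xs.min()), float(xs.max()))  # (4, 4, 3) 0.3 0.8
```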
Further, there may be multiple target opacities and multiple preset shadow attenuation parameters, so that multiple target shadow images can be obtained from random combinations of the multiple target opacities and the multiple shadow attenuation parameters, thereby yielding richer target shadow images and further improving the efficiency of shadow image acquisition.
In another embodiment of the present disclosure, the above-mentioned light transmission information may include a target transparency of each pixel position of the image to be processed. In this way, for each channel of each pixel position of the image to be processed, the target single channel luminance of the channel of the pixel position can be calculated according to the above-mentioned first channel luminance, target transparency, and target channel reflection luminance by the following formula (2):
XS_k = n * XN_k + (1 - n) * XD_k    (2);
wherein XS_k represents the target single-channel luminance of the k-th channel, n represents the target transparency, XN_k represents the first channel luminance of the k-th channel, and XD_k represents the target channel reflection luminance of the k-th channel.
The target single channel luminances for the multiple channels may then likewise be combined to obtain the target luminance for that pixel location.
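Under the same assumptions as the sketch above, the transparency form of formula (2) is simply the complement of the opacity form, with n = 1 - m:

```python
import numpy as np

def blend_shadow_with_transparency(unshadowed: np.ndarray,
                                   fully_shadowed: np.ndarray,
                                   transparency: np.ndarray) -> np.ndarray:
    """Apply XS_k = n * XN_k + (1 - n) * XD_k per pixel and per channel.

    Equivalent to the opacity-based blend above with m = 1 - n.
    """
    n = transparency[..., np.newaxis]
    return n * unshadowed + (1.0 - n) * fully_shadowed
```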
In another embodiment of the disclosure, the preset shadow attenuation parameter includes a preset direct reflection luminance for characterizing a reflection luminance of the target object to the direct illumination source in the preset illumination environment, and a preset ambient light attenuation factor for characterizing an attenuation factor of the ambient illumination source in the preset illumination environment, where the preset direct reflection luminance may include a preset direct reflection luminance of each channel.
It should be noted that the preset illumination environment may include a preset light source and a preset shielding object, and the preset light source may include a direct illumination light source and an ambient illumination light source. According to the position, material and shape of the target object, the brightness of the light reflected by the target object under the direct illumination light source can be determined, and this reflected-light brightness can be used as the preset direct reflection brightness. According to the position, material and shape of the preset shielding object, the attenuation factor of the preset shielding object on the ambient illumination light source can be determined, and this attenuation factor can be used as the preset ambient light attenuation factor.
Further, according to the positions, materials and shapes of different target objects, a plurality of preset direct reflection brightnesses can be determined. Similarly, a plurality of different preset ambient light attenuation factors can be obtained according to the positions, the materials and the shapes of the plurality of preset shields.
In the step S1022, the target reflection brightness of each pixel position may be obtained according to the preset direct reflection brightness, the preset ambient light attenuation factor, and the first brightness.
The preset direct reflection luminance may include a preset direct reflection channel luminance of each channel, for example; the target channel reflection luminance of each channel of each pixel location may be calculated according to the preset direct reflection channel luminance, the preset ambient light attenuation factor, and the first channel luminance by the following formula (3):
[Formula (3), published as an image in the original document; it expresses XD_k in terms of XN_k, α_k and γ]
wherein XD_k represents the target channel reflection luminance of the k-th channel, XN_k represents the first channel luminance of the k-th channel, α_k represents the preset direct reflection channel luminance of the k-th channel, and γ represents the preset ambient light attenuation factor.
Then, the target channel reflection luminance of the plurality of channels may be regarded as the target reflection luminance of the pixel position.
Therefore, the target reflection brightness which is closer to the target object can be obtained through presetting the direct reflection brightness and presetting the ambient light attenuation factor, so that the finally obtained target shadow image is more vivid.
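The exact form of formula (3) is published only as an image and is not reproduced here. Purely as a hypothetical illustration, one could assume that the unshadowed luminance splits into a direct component α_k and an ambient component XN_k - α_k, and that only the attenuated ambient component survives inside the shadow; this assumed model is introduced here for illustration and is not the formula of the disclosure:

```python
import numpy as np

def assumed_fully_shadowed(unshadowed: np.ndarray,
                           direct_reflection: np.ndarray,
                           ambient_attenuation: float) -> np.ndarray:
    """HYPOTHETICAL model: XD_k = gamma * (XN_k - alpha_k).

    Assumes the direct component alpha_k is fully blocked in shadow and the
    remaining ambient component is scaled by gamma. The patent's actual
    formula (3) appears only as an image and may differ from this sketch.
    """
    return np.clip(ambient_attenuation * (unshadowed - direct_reflection), 0.0, 1.0)
```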
Fig. 3 is a flowchart illustrating a step S101 according to the embodiment shown in fig. 1, and as shown in fig. 3, the step S101 may include the following steps:
s1011, determining a preset illumination environment.
The preset illumination environment may include a preset light source, a preset shade, a preset camera, and a preset virtual plane.
For example, in this step, the preset illumination environment may be constructed with Blender (three-dimensional graphics software) by placing a preset light source, one or more preset shielding objects, a preset camera, and a preset virtual plane according to preset shape, size, and relative position information. For example, fig. 4 is a schematic diagram of a preset lighting environment according to an exemplary embodiment; as shown in fig. 4, the preset lighting environment may include a preset light source 401, a preset obstruction 402, a preset camera 403, and a preset virtual plane 404.
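The disclosure names Blender as one possible tool for building such an environment. A hypothetical bpy sketch of a comparable scene is given below; all object types, sizes, positions and the output path are illustrative assumptions rather than values taken from the disclosure:

```python
import bpy

# Start from an empty scene (assumes it is acceptable to clear the default file).
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Preset light source: a sun lamp standing in for the direct illumination source.
bpy.ops.object.light_add(type='SUN', location=(4.0, -4.0, 6.0))

# Preset shielding object: a cube placed between the light source and the plane.
bpy.ops.mesh.primitive_cube_add(size=1.0, location=(1.0, -1.0, 2.0))

# Preset virtual plane: the surface on which the shadow is cast and sampled.
bpy.ops.mesh.primitive_plane_add(size=10.0, location=(0.0, 0.0, 0.0))

# Preset camera: looks straight down at the virtual plane.
bpy.ops.object.camera_add(location=(0.0, 0.0, 8.0), rotation=(0.0, 0.0, 0.0))
bpy.context.scene.camera = bpy.context.object

# Render the plane; the captured brightness is later normalized into the
# light transmission information (for example, a shadow mask).
bpy.context.scene.render.filepath = '/tmp/virtual_plane_brightness.png'
bpy.ops.render.render(write_still=True)
```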
S1012, capturing the model brightness of each pixel position in a preset virtual plane through a preset camera in the preset illumination environment.
S1013, determining light transmission information of the image to be processed in the preset illumination environment according to the preset virtual plane and the model brightness.
For example, the model luminance of each pixel position may be normalized to a value between 0 and 1, and the normalized value is taken as the light transmission information of the pixel position. If the light transmission information comprises the target opacity, the higher the model brightness is, the smaller the normalized numerical value is; if the light transmission information includes the target transparency, the smaller the model brightness is, the smaller the normalized numerical value is.
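A minimal sketch of this normalization step (function and parameter names are assumptions introduced here):

```python
import numpy as np

def brightness_to_transmission(model_brightness: np.ndarray,
                               as_opacity: bool = True) -> np.ndarray:
    """Normalize the captured model brightness to a value in [0, 1] per pixel.

    With as_opacity=True, brighter (less occluded) pixels map to smaller
    values (target opacity); otherwise the value is a target transparency.
    """
    lo, hi = model_brightness.min(), model_brightness.max()
    normalized = (model_brightness - lo) / max(hi - lo, 1e-8)
    # If the plane is uniformly lit, hi == lo and the mask degenerates to zeros.
    return 1.0 - normalized if as_opacity else normalized
```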
Further, by performing random scaling, translation and rotation operations on four components (a preset light source, a preset shade, a preset camera and a preset virtual plane) in a preset illumination environment, different light transmission information can be obtained, so that a plurality of target shadow images can be generated according to the different light transmission information.
It should be noted that the light transmission information may be stored in the form of a shadow mask. For example, different transmission information is stored by a plurality of different shadow masks in order to acquire a rich variety of target shadow images.
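As an illustration, such a shadow mask could be written out as an 8-bit grayscale image; the use of the Pillow library here is an assumption and not part of the disclosure:

```python
import numpy as np
from PIL import Image

def save_shadow_mask(transmission: np.ndarray, path: str) -> None:
    """Store per-pixel light transmission information as an 8-bit grayscale mask."""
    mask = np.clip(transmission, 0.0, 1.0)
    Image.fromarray((mask * 255).astype(np.uint8), mode='L').save(path)
```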
In summary, by adopting any one of the methods in the embodiments of the present disclosure, light transmission information of an image to be processed in a preset illumination environment is obtained; performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment. The reflection coefficient of the target object in the image to be processed in the preset illumination environment can be represented by the preset shadow attenuation parameter, so that a target shadow image matched with the actual preset illumination environment can be obtained, and the efficiency of shadow image acquisition can be improved because manual shooting is not required.
Fig. 5 is a block diagram of an image processing apparatus 500 according to an exemplary embodiment, as shown in fig. 5, the apparatus 500 may include:
the information acquisition module 501 is configured to acquire light transmission information of an image to be processed in a preset illumination environment;
the image processing module 502 is configured to perform shadow processing on the image to be processed according to a preset shadow attenuation parameter and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment.
Optionally, the light transmission information includes a target opacity for each pixel location of the image to be processed; the image processing module 502 is configured to determine a target reflection brightness of each pixel position according to a preset shadow attenuation parameter, where the target reflection brightness is used to characterize a reflection brightness of a target object in the image to be processed in the preset illumination environment; and carrying out shadow processing on the image to be processed according to the target opacity and the target reflection brightness to obtain the target shadow image.
Optionally, the image processing module 502 is configured to obtain a target brightness of each pixel position of the image to be processed according to the first brightness, the target opacity and the target reflection brightness of each pixel position; the target shadow image is generated based on target brightness for the plurality of pixel locations.
Optionally, the image to be processed includes a plurality of channels, the first brightness includes a first channel brightness of the plurality of channels, and the target reflected brightness includes a target channel reflected brightness of the plurality of channels; the image processing module 502 is configured to calculate, for each channel of each pixel location of the image to be processed, a target single-channel luminance of the channel of the pixel location from the first channel luminance, the target opacity and the target channel reflection luminance by: XS_k = (1 - m) * XN_k + m * XD_k; wherein XS_k represents the target single-channel luminance of the k-th channel, m represents the target opacity, XN_k represents the first channel luminance of the k-th channel, and XD_k represents the target channel reflection luminance of the k-th channel; and to combine the target single-channel luminances of the multiple channels to obtain the target brightness of the pixel position.
Optionally, the preset shadow attenuation parameter includes a preset direct reflection brightness and a preset ambient light attenuation factor, the preset direct reflection brightness is used for representing the reflection brightness of the target object to the direct illumination light source in the preset illumination environment, and the preset ambient light attenuation factor is used for representing the attenuation factor of the ambient illumination light source in the preset illumination environment; the image processing module 502 is configured to obtain a target reflected brightness for each pixel location according to the preset direct reflected brightness, the preset ambient light attenuation factor, and the first brightness.
Optionally, the preset direct reflection luminance comprises a preset direct reflection channel luminance for each channel; the image processing module 502 is configured to calculate, for each pixel location, a target channel reflection luminance for each channel of the pixel location according to the preset direct reflection channel luminance, the preset ambient light attenuation factor, and the first channel luminance by the following formula:
[Formula published as an image in the original document; it expresses XD_k in terms of XN_k, α_k and γ]
wherein XD_k represents the target channel reflection luminance of the k-th channel, XN_k represents the first channel luminance of the k-th channel, α_k represents the preset direct reflection channel luminance of the k-th channel, and γ represents the preset ambient light attenuation factor; the target channel reflection luminances of the plurality of channels are taken as the target reflection luminance of the pixel position.
Optionally, the information obtaining module 501 is configured to determine a preset illumination environment, where the preset illumination environment includes a preset light source, a preset shade, a preset camera, and a preset virtual plane; capturing the model brightness of each pixel position in the preset virtual plane through the preset camera in the preset illumination environment; and determining the light transmission information of the image to be processed in the preset illumination environment according to the preset virtual plane and the model brightness.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
In summary, by adopting the device in the above embodiment of the disclosure, light transmission information of an image to be processed in a preset illumination environment is obtained; performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment. The reflection coefficient of the target object in the image to be processed in the preset illumination environment can be represented by the preset shadow attenuation parameter, so that a target shadow image matched with the actual preset illumination environment can be obtained, and the efficiency of shadow image acquisition can be improved because manual shooting is not required.
It should be noted that the terminal in the present disclosure may be an electronic device such as a smart phone, a tablet computer, a smart watch, a smart bracelet, a PDA (Personal Digital Assistant), or a CPE (Customer Premises Equipment), which is not limited in this disclosure.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the image processing method provided by the present disclosure.
Fig. 6 is a block diagram of an electronic device 600, shown in accordance with an exemplary embodiment. For example, the electronic device 600 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, router, or the like.
Referring to fig. 6, an electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the image processing methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 606 provides power to the various components of the electronic device 600. The power components 606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen between the electronic device 600 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessment of various aspects of the electronic device 600. For example, the sensor assembly 614 may detect an on/off state of the electronic device 600, a relative positioning of the components, such as a display and keypad of the electronic device 600, the sensor assembly 614 may also detect a change in position of the electronic device 600 or a component of the electronic device 600, the presence or absence of a user's contact with the electronic device 600, an orientation or acceleration/deceleration of the electronic device 600, and a change in temperature of the electronic device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, 3G, 4G, 5G, NB-IoT, eMTC, 6G, or other standards, or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the image processing methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 604, including instructions executable by processor 620 of electronic device 600 to perform the above-described image processing method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described image processing method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, the method comprising:
Acquiring light transmission information of an image to be processed in a preset illumination environment;
performing shadow processing on the image to be processed according to preset shadow attenuation parameters and the light transmission information to obtain a target shadow image corresponding to the image to be processed; the preset shadow attenuation parameter is used for representing the reflection coefficient of the target object in the image to be processed in the preset illumination environment.
2. The method of claim 1, wherein the light transmission information comprises a target opacity for each pixel location of the image to be processed; performing shadow processing on the image to be processed according to a preset shadow attenuation parameter and the light transmission information, and obtaining a target shadow image corresponding to the image to be processed comprises the following steps:
determining target reflection brightness of each pixel position according to a preset shadow attenuation parameter, wherein the target reflection brightness is used for representing the reflection brightness of a target object in the image to be processed in the preset illumination environment;
and carrying out shadow processing on the image to be processed according to the target opacity and the target reflection brightness to obtain the target shadow image.
3. The method of claim 2, wherein the shading the image to be processed according to the target opacity and the target reflection brightness, to obtain the target shadow image comprises:
acquiring target brightness of each pixel position according to the first brightness, the target opacity and the target reflection brightness of each pixel position of the image to be processed;
and generating the target shadow image according to the target brightness of the plurality of pixel positions.
4. A method according to claim 3, wherein the image to be processed comprises a plurality of channels, the first luminance comprises a first channel luminance of the plurality of channels, and the target reflected luminance comprises a target channel reflected luminance of the plurality of channels; the obtaining the target brightness of each pixel position according to the first brightness, the target opacity and the target reflection brightness of each pixel position of the image to be processed comprises:
for each channel of each pixel position of the image to be processed, according to the first channel brightness, the target opacity and the target channel reflection brightness, calculating to obtain a target single channel brightness of the channel of the pixel position by the following formula:
XS_k = (1 - m) * XN_k + m * XD_k
wherein XS_k represents the target single-channel luminance of the k-th channel, m represents the target opacity, XN_k represents the first channel luminance of the k-th channel, and XD_k represents the target channel reflection luminance of the k-th channel;
and combining the target single-channel brightness of the multiple channels to obtain the target brightness of the pixel position.
5. The method of claim 4, wherein the pre-set shadow attenuation parameters include a pre-set direct reflection luminance that characterizes a reflected light luminance of the target object to a direct illumination source in the pre-set illumination environment and a pre-set ambient light attenuation factor that characterizes an attenuation factor of an ambient illumination source in the pre-set illumination environment; the determining the target reflection brightness of each pixel position according to the preset shadow attenuation parameter comprises the following steps:
and obtaining the target reflection brightness of each pixel position according to the preset direct reflection brightness, the preset ambient light attenuation factor and the first brightness.
6. The method of claim 5, wherein the preset direct reflection brightness comprises a preset direct reflection channel brightness of each channel, and the obtaining the target reflection brightness of each pixel position according to the preset direct reflection brightness, the preset ambient light attenuation factor, and the first brightness comprises:
for each pixel position, calculating the target channel reflection brightness of each channel of the pixel position according to the preset direct reflection channel brightness, the preset ambient light attenuation factor, and the first channel brightness by the following formula:
[Formula image FDA0003407566810000031; the expression is not reproduced in the text]

wherein XD_k represents the target channel reflection brightness of the k-th channel, XN_k represents the first channel brightness of the k-th channel, α_k represents the preset direct reflection channel brightness of the k-th channel, and γ represents the preset ambient light attenuation factor;
and taking the target channel reflection brightness of the plurality of channels as the target reflection brightness of the pixel position.
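The exact relation of claim 6 is only available as the unreproduced formula image, so the sketch below assumes a simple form XD_k = α_k + γ · XN_k that is consistent with the listed inputs (preset direct reflection channel brightness, preset ambient light attenuation factor, first channel brightness). The actual patented expression may differ, and every name in the snippet is a hypothetical placeholder.

```python
import numpy as np

def reflection_brightness(xn: np.ndarray, alpha: np.ndarray, gamma: float) -> np.ndarray:
    """Hypothetical stand-in for the claim-6 formula: XD_k = alpha_k + gamma * XN_k.

    xn:    first channel brightness, shape (H, W, C)
    alpha: preset direct reflection channel brightness, one value per channel, shape (C,)
    gamma: preset ambient light attenuation factor (scalar)
    """
    # Assumed combination; the patent's exact expression is only given as image
    # FDA0003407566810000031 and may differ from this form.
    xd = alpha.reshape(1, 1, -1) + gamma * xn
    return np.clip(xd, 0.0, 1.0)
```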
7. The method according to any one of claims 1 to 6, wherein the acquiring the light transmission information of the image to be processed in the preset illumination environment comprises:
determining a preset illumination environment, wherein the preset illumination environment comprises a preset light source, a preset shielding object, a preset camera and a preset virtual plane;
capturing the model brightness of each pixel position in the preset virtual plane through the preset camera in the preset illumination environment;
and determining the light transmission information of the image to be processed in the preset illumination environment according to the preset virtual plane and the model brightness.
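Claim 7 leaves open how the captured model brightness becomes the light transmission information. One plausible reading, shown below as a sketch, renders the preset virtual plane twice, once with and once without the preset shielding object, and treats the brightness ratio as the per-pixel target opacity; the two-render scheme and the ratio are assumptions, not statements of the claim.

```python
import numpy as np

def opacity_from_model_brightness(lit: np.ndarray, shadowed: np.ndarray,
                                  eps: float = 1e-6) -> np.ndarray:
    """Derive a per-pixel opacity map from two renders of the preset virtual plane.

    lit:      model brightness with the preset shielding object removed, shape (H, W)
    shadowed: model brightness with the preset shielding object present, shape (H, W)
    Returns an opacity map in [0, 1]; 0 means fully lit, 1 means fully shadowed.
    """
    ratio = shadowed / np.maximum(lit, eps)   # fraction of light still reaching the plane
    return np.clip(1.0 - ratio, 0.0, 1.0)     # darker pixels receive a higher opacity
```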
8. An image processing apparatus, characterized in that the apparatus comprises:
an information acquisition module configured to acquire light transmission information of an image to be processed in a preset illumination environment;
an image processing module configured to perform shadow processing on the image to be processed according to a preset shadow attenuation parameter and the light transmission information to obtain a target shadow image corresponding to the image to be processed, wherein the preset shadow attenuation parameter characterizes a reflection coefficient of a target object in the image to be processed under the preset illumination environment.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 7.
Application CN202111521335.4A, filed 2021-12-13 (priority date 2021-12-13): Image processing method, device, storage medium and electronic equipment. Status: Pending. Published as CN116263941A (en).

Priority Applications (2)

Application Number: CN202111521335.4A; Publication: CN116263941A (en); Priority Date: 2021-12-13; Filing Date: 2021-12-13; Title: Image processing method, device, storage medium and electronic equipment
Application Number: PCT/CN2022/090722; Publication: WO2023108992A1 (en); Priority Date: 2021-12-13; Filing Date: 2022-04-29; Title: Image processing method and apparatus, storage medium, electronic device, and program product

Applications Claiming Priority (1)

Application Number: CN202111521335.4A; Publication: CN116263941A (en); Priority Date: 2021-12-13; Filing Date: 2021-12-13; Title: Image processing method, device, storage medium and electronic equipment

Publications (1)

Publication Number: CN116263941A (en); Publication Date: 2023-06-16

Family ID: 86723216

Family Applications (1)

Application Number: CN202111521335.4A; Title: Image processing method, device, storage medium and electronic equipment; Priority Date: 2021-12-13; Filing Date: 2021-12-13; Status: Pending; Publication: CN116263941A (en)

Country Status (2)

CN: CN116263941A (en)
WO: WO2023108992A1 (en)


Also Published As

Publication Number: WO2023108992A1 (en); Publication Date: 2023-06-22


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination