CN116017167A - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN116017167A
Authority
CN
China
Prior art keywords
target, lighting, virtual scene, ambient light, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211704923.6A
Other languages
Chinese (zh)
Inventor
丛宽
林弘扬
赵清澄
许岚
张亦弛
吴红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ShanghaiTech University
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Original Assignee
ShanghaiTech University
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ShanghaiTech University and Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority to CN202211704923.6A
Publication of CN116017167A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention provides an image processing method and device, electronic equipment and a storage medium, comprising the steps of: determining a corresponding target re-lighting coefficient from an ambient light restoration model based on a target virtual scene selected in a shooting instruction; taking the target re-lighting coefficient as a pixel brightness value; re-lighting a target object in the shooting instruction based on the pixel brightness value; and adding the target object image obtained by shooting under the re-lighting into the target virtual scene to obtain a target image. According to the method, ambient light is restored on the lighting equipment of the multispectral light source according to the target ambient light sampling maps corresponding to different virtual scenes, so that an ambient light restoration model is constructed, and the target re-lighting coefficient corresponding to the target virtual scene is determined through the ambient light restoration model; the target object in the shooting instruction is then re-lit based on the target re-lighting coefficient, so that the re-lighting effect of the virtual environment on the real object is more realistic and the authenticity of the image can be improved.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of multispectral light source technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
With the development of video media and the cultural entertainment industry, the requirements of related industries on ambient light restoration technology continue to rise. This technology can efficiently restore the illumination conditions of a specified scene and plays a vital role in the development and upgrading of related industries.
Generally, when an image of a real human body or article is directly added to a virtual scene, the difference in ambient illumination prevents the true appearance of the human body or article from being restored vividly. The surface of an article shows different reflection effects under different illumination environments; different surfaces of the same article present different reflection effects due to factors such as angle, roughness and transparency; and even for the same surface of the same article, a simulated surface and the real surface show different reflection effects because different spectra have different reflectivities.
Current technology focuses on restoring the illumination direction, aiming to correct the illumination and shadow information introduced by different angles. Simply using the three-channel data of an image to restore ambient light causes large chromatic aberration, i.e. poor image authenticity, due to factors such as the camera response curve, the light response curve, and the weak expressive capability of the standard sRGB color gamut.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method, an apparatus, an electronic device, and a storage medium for processing an image, so as to solve the problem of poor image authenticity in the prior art.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
a first aspect of an embodiment of the present invention shows a method for processing an image, including:
receiving a shooting instruction input by any user;
determining a corresponding target re-lighting coefficient from an ambient light restoration model based on the target virtual scene selected in the shooting instruction, wherein the ambient light restoration model is used for storing a corresponding relation between each virtual scene and the first re-lighting coefficient;
taking the target re-lighting coefficient as a pixel brightness value;
performing re-lighting on the target object in the shooting instruction based on the pixel brightness value;
and adding the target object image obtained by the re-lighting shooting into the target virtual scene to obtain a target virtual image.
Optionally, the determining, based on the target virtual scene selected in the shooting instruction, a corresponding target re-lighting coefficient from an ambient light restoration model includes:
traversing the corresponding relation between each virtual scene and the first re-lighting coefficient in the environment light restoration model, and determining the re-lighting coefficient corresponding to the target virtual scene;
and taking the first re-lighting coefficient corresponding to the target virtual scene as a target re-lighting coefficient.
Optionally, the construction process of the ambient light reduction model includes:
calibrating the color card image obtained by shooting with lamp beads of various colors to obtain a calibrated color card image;
for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels;
determining a first re-lighting coefficient of each virtual scene based on the color card pixels;
and constructing an ambient light restoration model based on the corresponding relation between each virtual scene and the first re-lighting coefficient.
Optionally, for the target ambient light sampling map corresponding to each virtual scene, the virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain a corresponding color card pixel includes:
preprocessing a target ambient light sampling image corresponding to each virtual scene to obtain a compressed initial pixel brightness value;
mapping the initial pixel brightness value according to the spherical multispectral light source coordinates to obtain an image area;
processing the image area to determine an expected pixel value;
and virtually lighting the color card with the light determined by the expected pixel value to obtain the corresponding color card pixel.
A second aspect of an embodiment of the present invention shows an image processing apparatus, including:
the determining unit is used for determining a corresponding target re-lighting coefficient from the environment light restoration model based on a target virtual scene selected in any shooting instruction when the shooting instruction input by a user is received, wherein the environment light restoration model is obtained by construction of the constructing unit;
the lighting unit is used for taking the target re-lighting coefficient as a pixel brightness value; and performing re-lighting on the target object in the shooting instruction based on the pixel brightness value;
and the processing unit is used for adding the target object image obtained by the re-lighting shooting into the target virtual scene to obtain a target image.
Optionally, the determining unit is specifically configured to: traversing the corresponding relation between each virtual scene and the first re-lighting coefficient in the environment light restoration model, and determining the re-lighting coefficient corresponding to the target virtual scene;
and taking the first re-lighting coefficient corresponding to the target virtual scene as a target re-lighting coefficient.
Optionally, the construction unit is configured to: calibrating the color card image obtained by shooting with lamp beads of various colors to obtain a calibrated color card image;
for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels;
determining a first re-lighting coefficient of each virtual scene based on the color card pixels;
and constructing an ambient light restoration model based on the corresponding relation between each virtual scene and the first re-lighting coefficient.
Optionally, the construction unit, which for the target ambient light sampling map corresponding to each virtual scene virtually lights the calibrated color card image based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels, is specifically used for:
preprocessing a target ambient light sampling image corresponding to each virtual scene to obtain a compressed initial pixel brightness value;
mapping the initial pixel brightness value according to the spherical multispectral light source coordinates to obtain an image area;
processing the image area to determine an expected pixel value;
and virtually lighting the color card with the light determined by the expected pixel value to obtain the corresponding color card pixel.
A third aspect of the embodiment of the present invention shows an electronic device, where the electronic device is configured to execute a program, where the program executes the image processing method according to the first aspect of the embodiment of the present invention.
A fourth aspect of the embodiment of the present invention shows a computer storage medium, where the storage medium includes a storage program, where the program, when executed, controls a device in which the storage medium is located to execute the image processing method as shown in the first aspect of the embodiment of the present invention.
The embodiment of the invention provides an image processing method and device, electronic equipment and a storage medium, wherein the method comprises the following steps: when a shooting instruction input by any user is received, determining a corresponding target re-lighting coefficient from an ambient light restoration model based on the target virtual scene selected in the shooting instruction, wherein the ambient light restoration model is used for storing the correspondence between each virtual scene and the first re-lighting coefficient; taking the target re-lighting coefficient as a pixel brightness value; performing re-lighting on the target object in the shooting instruction based on the pixel brightness value; and adding the target object image obtained by shooting under the re-lighting into the target virtual scene to obtain a target image. In the embodiment of the invention, according to the target ambient light sampling maps corresponding to different virtual scenes, ambient light is restored on the lighting equipment of the multispectral light source to construct an ambient light restoration model, and the target re-lighting coefficient corresponding to the target virtual scene is determined through the ambient light restoration model; the target object in the shooting instruction is re-lit based on the target re-lighting coefficient, so that the re-lighting effect of the virtual environment on the real object is more realistic, and the target object image obtained by shooting under the re-lighting is added into the target virtual scene to obtain a target virtual image. The authenticity of the image obtained in this manner can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments or the description of the prior art are briefly described below; it is obvious that the drawings in the following description are only some embodiments of the present invention, and that a person skilled in the art can obtain other drawings from the provided drawings without inventive effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a process for constructing an ambient light reduction model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the descriptions of "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when a combination of technical solutions is contradictory or cannot be realized, it should be considered absent and outside the scope of protection claimed in the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, a flowchart of a method for processing an image according to an embodiment of the present invention is shown, where the method includes:
step S101: and receiving a shooting instruction input by any user.
Optionally, the user may select a corresponding target virtual scene on the user terminal and trigger a corresponding shooting button to generate a corresponding shooting instruction, which is sent to the image processing system.
In the specific implementation process of step S101, a shooting instruction carrying a target virtual scene is received.
Step S102: and determining a corresponding target re-lighting coefficient from the ambient light restoration model based on the target virtual scene selected in the shooting instruction.
In step S102, the ambient light restoration model is used to store a correspondence between each virtual scene and the first relight coefficient.
It should be noted that, the construction process of the ambient light reduction model, as shown in fig. 2, includes the following steps:
step S201: and calibrating the color card image obtained by shooting by utilizing the various color lamp beads to obtain the calibrated color card image.
It should be noted that in the color-opponent (LAB) color space, the L channel is lightness, and A and B are the color channels. The RGB color space does not correspond intuitively to the perception of the human eye, whereas the LAB color space matches it closely, so calculations involving brightness, color difference and color correction are more accurate; therefore, the RGB color space needs to be converted to the LAB color space.
In the specific implementation process of step S201: first, because lamp beads of different spectra driven at maximum brightness differ in luminance, and to avoid the influence of low-light conditions, different exposure times are used when the camera shoots the color card under each light source spectrum. For each exposure time, the red, green and blue (RGB) color space of the color card image is converted to the LAB color space, and then the pixel brightness L of the converted color card image and the exposure time are substituted into formula (1) to determine the pixel brightness L' of the corrected color card image.
Formula (1):
L'=L/exp_time*factor (1)
wherein L is the pixel brightness; factor is the average exposure time, at which the color card exposure under blue, green and red lighting is essentially uniform; and exp_time is the exposure time of the camera.
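For illustration, a minimal sketch of the formula (1) correction is given below; it is not part of the claimed method, and the use of skimage for the RGB-to-LAB conversion as well as the function and parameter names are assumptions.

```python
import numpy as np
from skimage.color import rgb2lab

def calibrate_exposure(rgb_image: np.ndarray, exp_time: float, factor: float) -> np.ndarray:
    """Convert an RGB color-card shot (float RGB in [0, 1]) to LAB and correct
    its lightness channel by exposure time, i.e. L' = L / exp_time * factor."""
    lab = rgb2lab(rgb_image)                 # L in [0, 100]; A, B are color channels
    lab[..., 0] = lab[..., 0] / exp_time * factor
    return lab
```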
Secondly, because the actual brightness of the light source is not uniform with respect to the input brightness, the lamp beads must be driven at different input brightnesses during shooting: a group of color card images is shot under the same camera exposure parameters, and the brightness curve of the lamp beads is obtained through calibration. Specifically, for a set of discrete input brightnesses L_1, L_2, ..., L_k of the color card images, a first curve is drawn based on L_1, ..., L_k; the gray-scale color blocks on the color card images are shot to obtain a group of brightnesses M_1, M_2, ..., M_k, and a second curve is drawn based on M_1, ..., M_k; the first curve and the second curve are then fitted so as to minimize the error, which determines the regression function f from the shapes of the two curves.
Brightness refers to how light or dark a color appears, and is also called tone level or illuminance. Luminance, also called the gray-scale value, is a measure of the brightness of a color.
And finally, for lamp beads of different spectra of the same light source, the corresponding data are stored and the exposure time is corrected to obtain the calibrated color card image: the image is converted into a color space carrying linear brightness information, such as LAB, and each brightness value is divided by its corresponding exposure time and multiplied by the average exposure time; the calibrated color card image is then used for the multispectral light source re-lighting coefficients.
For example, the exposure time Q at which white light is just fully exposed is selected as the common exposure time; at this exposure time Q, the color card exposure under blue, green and red lighting is essentially uniform.
Alternatively, the exposure time may be used as a multispectral light source re-lighting coefficient.
It should be noted that, when the lamp beads of each light source spectrum light up the darkroom in turn, the acquisition of the high-definition color card images shot by the professional camera needs to be controlled, so the sensitivity and the exposure time of the camera must also be controlled.
And for the same light source, the response curve of the light source is obtained from the brightness of the gray-scale color blocks on the color card and the corresponding light source intensities.
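For illustration, a minimal sketch of fitting such a response curve (the regression function f above) follows; the polynomial form, its degree and the names are assumptions rather than the patent's choice.

```python
import numpy as np

def fit_response_curve(input_levels: np.ndarray, measured_gray: np.ndarray, degree: int = 3):
    """Least-squares fit of measured gray-patch brightness M_i against
    light source input brightness L_i; returns a callable curve f."""
    return np.poly1d(np.polyfit(input_levels, measured_gray, degree))

# Usage: f = fit_response_curve(levels, grays); predicted = f(128)
```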
Step S202: and virtually polishing the calibrated color card image based on the preprocessed target ambient light sampling graph aiming at the target ambient light sampling graph corresponding to each virtual scene to obtain corresponding color card pixels.
It should be noted that the specific implementation of step S202, in which, for the target ambient light sampling map corresponding to each virtual scene, the calibrated color card image is virtually lit based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels, includes the following steps:
step S11: and preprocessing a target ambient light sampling graph corresponding to each virtual scene to obtain a compressed initial pixel brightness value.
In the specific implementation process of step S11, the hybrid log-gamma (HLG) method may be used as the compression curve. Specifically, the brightness value E of each pixel in the image is compressed as shown in formula (2) to obtain the compressed initial pixel brightness value E'.
Formula (2):

E' = sqrt(3E), 0 ≤ E ≤ d; E' = a*ln(12E - b) + c, d < E ≤ 1 (2)

where E is the linear image brightness signal normalized to the interval [0, 1], i.e. the brightness value of a pixel, and E' is the nonlinear brightness output signal, i.e. the compressed brightness value. The constants a, b and c are fixed at 0.17883277, 0.28466892 and 0.55991073 respectively, and the constant d is determined by the brightness range of the multispectral lighting system (in the standard hybrid log-gamma curve, d = 1/12).
It should be noted that the target ambient light sampling map is a panoramic image with a high dynamic range, carrying sampling information of the surrounding environment at a certain location. The aspect ratio of the image is generally 1:2.
In the embodiment of the invention, the picture is enhanced through this compression; that is, if the target ambient light picture lacks sufficient high-dynamic-range expressive capability, this patent can use the hybrid log-gamma method or a similar method to enhance the picture and restore the dynamic range of the original picture.
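For illustration, a minimal sketch of the formula (2) compression in standard hybrid log-gamma form follows; the default threshold d = 1/12 and the vectorized implementation are assumptions.

```python
import numpy as np

A, B, C = 0.17883277, 0.28466892, 0.55991073  # fixed HLG constants a, b, c

def hlg_compress(e: np.ndarray, d: float = 1.0 / 12.0) -> np.ndarray:
    """Map linear brightness E in [0, 1] to the compressed signal E'."""
    e = np.clip(e, 0.0, 1.0)
    log_branch = A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C  # guard the log
    return np.where(e <= d, np.sqrt(3.0 * e), log_branch)
```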
Step S12: and mapping the initial pixel brightness value according to the spherical multispectral light source coordinates to obtain an image area.
In the specific implementation of step S12, the pixels corresponding to the initial pixel brightness values are projected using a normal-axis equiangular cylindrical projection, so that the position coordinates of the pixels are mapped through spherical longitude-latitude coordinates, as shown in formula (3); the image area of the target ambient light sampling map mapped onto the lighting equipment is then determined based on the mapped coordinates.
Specifically, for any longitude-latitude coordinate (λ, φ) on the sphere, its coordinate on the plane of the normal-axis equiangular cylindrical projection is shown in formula (3).
Equation (3):

x = λ - λ₀, y = φ (3)

where x and y are the two-dimensional plane coordinates of the compressed brightness value in the target ambient light sampling map, λ and λ₀ are the longitude coordinate and the central longitude respectively, and φ is the latitude coordinate.
Step S13: the image area is processed to determine the expected pixel value.
In the specific implementation process of step S13, because the light sources of the multispectral light source system are sparse and unevenly distributed, the area mapping of the normal-axis equiangular cylindrical projection is also uneven, so the area factor of each sampling point must be considered during coordinate mapping. That is, the sampling area corresponding to the pixel points in the image area of the target ambient light sampling map, the projection areas and the pixel colors can be determined using a Voronoi diagram algorithm or a cone mapping method on the sphere, and substituted into formula (4) to calculate the expected pixel value of the target ambient light sampling map. That is, a Delaunay triangulation of the control points is generated according to the principle that each pixel point in the target ambient light sampling map is assigned to its nearest control point, i.e. the distance from any internal point to its control point is smaller than its distance to every other control point. The expected pixel value of each light source is determined as the average of all pixel values of the triangulation cell it belongs to.
To ensure that the constraints of the Delaunay triangulation are met, a perpendicular bisector is drawn on each edge of the Delaunay triangulation; the intersection points of the perpendicular bisectors are the vertices of the Thiessen polygons, connecting these vertices forms the Thiessen (Voronoi) polygons, and partitioning the internal points among the Thiessen polygons generates the Voronoi diagram.
The internal points are the pixel points of the target ambient light sampling map, and the control points are the points obtained by projecting the three-dimensional coordinates of the light sources of the light source system onto two-dimensional image coordinates.
Equation (4):

C = (m_1*c_1 + ... + m_k*c_k) / S, where S = m_1 + ... + m_k (4)

where C is the expected pixel value of the target ambient light sampling map, S is the sampling area corresponding to the current sampling point, and the pixel points s_1, ..., s_k have projection areas m_1, ..., m_k and corresponding pixel colors c_1, ..., c_k; that is, m_k is the projection area of pixel point k, and c_k is the pixel color corresponding to pixel point k.
Step S14: and virtually polishing the color card by utilizing the lamplight measured by the expected pixel value to obtain the corresponding color card pixel.
In the specific implementation of step S14, the color card is virtually lit using the light determined by the expected pixel values of the different target ambient light sampling maps, to obtain the corresponding color card pixels.
It should be noted that virtual lighting means that, for a color card color C in any color card picture, the brightness value L_C in the color lookup table is used as the mapping relationship L_C = f(I, O) between the specific light source input I and the color output O obtained by reflection from the color card.
Step S203: a first re-lighting coefficient for each virtual scene is determined based on the color chip pixels.
It should be noted that the specific implementation of step S203, determining the first re-lighting coefficient of each virtual scene based on the color card pixels, includes the following steps:
step S21: and calculating by using the color card pixels and the given light source color pixels for each virtual scene to obtain a first re-lighting coefficient.
In the specific implementation of step S21, non-negative least squares is performed between the given light source color pixels and the expected color card pixels; that is, P_ij and L_ijk are substituted into formula (5) for calculation to obtain the first re-lighting coefficients α_k.
Equation (5):

min ||Lα - P||² subject to α_k ≥ 0 (5)

where P_ij is the value of the expected color card color i under color channel j; L_ijk is the value of color card color i in color channel j when lit by light color k; P is the vectorized representation of P_ij; and Lα is the vectorized representation of the sum over k of L_ijk*α_k.
It should be noted that, because the first re-lighting coefficients α_k cannot be negative, the non-negative least squares method is used to obtain the optimal coefficients.
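For illustration, a minimal sketch of solving formula (5) with scipy's non-negative least squares follows; the array layout (colors and channels flattened into rows) is an assumption.

```python
import numpy as np
from scipy.optimize import nnls

def solve_relighting_coefficients(L: np.ndarray, P: np.ndarray) -> np.ndarray:
    """L: (num_colors * num_channels, num_lights) single-light responses L_ijk;
    P: flattened expected color card values P_ij.
    Returns alpha >= 0 minimizing ||L @ alpha - P||^2."""
    alpha, _residual = nnls(L, P)
    return alpha
```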
Step S204: and constructing an ambient light restoration model based on the corresponding relation between each virtual scene and the first re-lighting coefficient.
In the specific implementation process of step S204, the correspondence between the virtual scene corresponding to each target ambient light sampling map and the first re-lighting coefficients corresponding to that sampling map is constructed, thereby constructing the ambient light restoration model.
It should be noted that the specific implementation of step S102, determining the corresponding target re-lighting coefficient from the ambient light restoration model based on the target virtual scene selected in the shooting instruction, includes the following steps:
step S31: and traversing the corresponding relation between each virtual scene and the first re-lighting coefficient in the environment light restoration model, and determining the re-lighting coefficient corresponding to the target virtual scene.
In the specific implementation of step S31, the correspondence between each virtual scene and the first re-lighting coefficient in the ambient light restoration model is traversed, and the first re-lighting coefficient corresponding to the target virtual scene is searched for.
Step S32: and taking the first re-lighting coefficient corresponding to the target virtual scene as a target re-lighting coefficient.
In the specific implementation process of step S32, the first re-lighting coefficient found by the search is taken as the target re-lighting coefficient.
Step S103: and taking the target heavy lighting coefficient as a pixel brightness value.
Step S104: and re-lighting the target object in the shooting instruction based on the pixel brightness value.
In the specific implementation process of step S104, the pixel brightness value is used to correct against the light source response curve, so that the intensity information of all light sources is obtained; the intensity information of all light sources is used as the light source input brightness to restore the illumination; and the target object in the shooting instruction is re-lit with the restored illumination.
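For illustration, a minimal sketch of converting the target re-lighting coefficients into light source input intensities by inverting the fitted response curve follows; the bisection-based inversion and the monotonicity assumption are not from the patent.

```python
import numpy as np
from scipy.optimize import brentq

def intensities_from_coefficients(alpha, response_curve, max_input=255.0):
    """For each light k, find the input I_k whose measured response equals
    the target brightness alpha_k, assuming a monotone response curve."""
    return np.array([
        brentq(lambda x, t=t: response_curve(x) - t, 0.0, max_input)
        for t in alpha
    ])
```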
Step S105: and adding the target object image obtained by the re-lighting shooting into the target virtual scene to obtain a target virtual image.
In the specific implementation process of step S105, the target object image obtained by shooting with the shooting device is placed into the target virtual scene, so that the target virtual image visually and vividly restores the real appearance of the target object.
The target object image may be an image of a human body or of an article.
In the embodiment of the invention, according to the target ambient light sampling maps corresponding to different virtual scenes, ambient light is restored on the lighting equipment of the multispectral light source to construct an ambient light restoration model, and the target re-lighting coefficient corresponding to the target virtual scene is determined through the ambient light restoration model; the target object in the shooting instruction is re-lit based on the target re-lighting coefficient, so that the re-lighting effect of the virtual environment on the real object is more realistic, and the target object image obtained by shooting under the re-lighting is added into the target virtual scene to obtain a target virtual image. The authenticity of the image obtained in this manner can be improved.
Based on the image processing method shown in the above embodiment of the present invention, correspondingly, another image processing apparatus is also disclosed in the embodiment of the present invention, as shown in fig. 3, where the apparatus includes:
a determining unit 301, configured to determine, when receiving a shooting instruction input by any user, a corresponding target re-lighting coefficient from an ambient light restoration model based on a target virtual scene selected in the shooting instruction, where the ambient light restoration model is constructed by a constructing unit 304;
a lighting unit 302, configured to take the target re-lighting coefficient as a pixel brightness value; and to perform re-lighting on the target object in the shooting instruction based on the pixel brightness value;
and the processing unit 303 is used for adding the target object image obtained by the re-lighting shooting to the target virtual scene to obtain a target image.
It should be noted that, the specific principle and execution process of each unit in the image processing apparatus disclosed in the above embodiment of the present invention are the same as the image processing method shown in the above embodiment of the present invention, and reference may be made to corresponding parts in the image processing method disclosed in the above embodiment of the present invention, and no redundant description is given here.
In the embodiment of the invention, according to the target ambient light sampling maps corresponding to different virtual scenes, ambient light is restored on the lighting equipment of the multispectral light source to construct an ambient light restoration model, and the target re-lighting coefficient corresponding to the target virtual scene is determined through the ambient light restoration model; the target object in the shooting instruction is re-lit based on the target re-lighting coefficient, so that the re-lighting effect of the virtual environment on the real object is more realistic, and the target object image obtained by shooting under the re-lighting is added into the target virtual scene to obtain a target virtual image. The authenticity of the image obtained in this manner can be improved.
Optionally, based on the image processing apparatus shown in the foregoing embodiment of the present invention, the determining unit 301 is specifically configured to: traversing the corresponding relation between each virtual scene and the first re-lighting coefficient in the environment light restoration model, and determining the re-lighting coefficient corresponding to the target virtual scene;
and taking the first re-lighting coefficient corresponding to the target virtual scene as a target re-lighting coefficient.
Optionally, based on the image processing apparatus shown in the foregoing embodiment of the present invention, the construction unit 304 is configured to: calibrating the color card image obtained by shooting with lamp beads of various colors to obtain a calibrated color card image;
for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels;
determining a first re-lighting coefficient of each virtual scene based on the color card pixels;
and constructing an ambient light restoration model based on the corresponding relation between each virtual scene and the first re-lighting coefficient.
Optionally, based on the image processing apparatus shown in the foregoing embodiment of the present invention, the construction unit 304, which for the target ambient light sampling map corresponding to each virtual scene virtually lights the calibrated color card image based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels, is specifically configured to: preprocessing the target ambient light sampling map corresponding to each virtual scene to obtain a compressed initial pixel brightness value;
mapping the initial pixel brightness value according to the spherical multispectral light source coordinates to obtain an image area;
processing the image area to determine an expected pixel value;
and virtually lighting the color card with the light determined by the expected pixel value to obtain the corresponding color card pixel.
The embodiment of the invention also discloses an electronic device, which is configured to run a stored program, wherein the image processing method disclosed in Fig. 1 and Fig. 2 above is executed when the program runs.
The embodiment of the invention also discloses a computer storage medium, which comprises a stored program, wherein the device on which the storage medium is located is controlled to execute the image processing method disclosed in Fig. 1 and Fig. 2 above when the program runs.
In the context of this disclosure, a computer storage medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments refer to each other, and each embodiment mainly describes its differences from the others. In particular, for the system and system embodiments, since they are substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the description of the method embodiment for the relevant parts. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing an image, the method comprising:
receiving a shooting instruction input by any user;
determining a corresponding target re-lighting coefficient from an ambient light restoration model based on the target virtual scene selected in the shooting instruction, wherein the ambient light restoration model is used for storing a corresponding relation between each virtual scene and the first re-lighting coefficient;
taking the target re-lighting coefficient as a pixel brightness value;
performing re-lighting on the target object in the shooting instruction based on the pixel brightness value;
and adding the target object image obtained by the re-lighting shooting into the target virtual scene to obtain a target virtual image.
2. The method of claim 1, wherein determining the corresponding target re-lighting coefficient from the ambient light restoration model based on the target virtual scene selected in the shooting instruction comprises:
traversing the corresponding relation between each virtual scene and the first re-lighting coefficient in the environment light restoration model, and determining the re-lighting coefficient corresponding to the target virtual scene;
and taking the first re-lighting coefficient corresponding to the target virtual scene as a target re-lighting coefficient.
3. The method of claim 1, wherein the process of constructing the ambient light reduction model comprises:
calibrating the color card image obtained by shooting with lamp beads of various colors to obtain a calibrated color card image;
for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels;
determining a first re-lighting coefficient of each virtual scene based on the color card pixels;
and constructing an ambient light restoration model based on the corresponding relation between each virtual scene and the first re-lighting coefficient.
4. The method of claim 3, wherein, for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixel comprises:
preprocessing a target ambient light sampling image corresponding to each virtual scene to obtain a compressed initial pixel brightness value;
mapping the initial pixel brightness value according to the spherical multispectral light source coordinates to obtain an image area;
processing the image area to determine an expected pixel value;
and virtually lighting the color card with the light determined by the expected pixel value to obtain the corresponding color card pixel.
5. An image processing apparatus, the apparatus comprising:
the determining unit is used for determining a corresponding target re-lighting coefficient from the environment light restoration model based on a target virtual scene selected in any shooting instruction when the shooting instruction input by a user is received, wherein the environment light restoration model is obtained by construction of the constructing unit;
the lighting unit is used for taking the target re-lighting coefficient as a pixel brightness value; and performing re-lighting on the target object in the shooting instruction based on the pixel brightness value;
and the processing unit is used for adding the target object image obtained by the re-lighting shooting into the target virtual scene to obtain a target image.
6. The apparatus according to claim 5, wherein the determining unit is specifically configured to: traversing the corresponding relation between each virtual scene and the first re-lighting coefficient in the environment light restoration model, and determining the re-lighting coefficient corresponding to the target virtual scene;
and taking the first re-lighting coefficient corresponding to the target virtual scene as a target re-lighting coefficient.
7. The apparatus of claim 5, wherein the construction unit is configured to: calibrating the color card image obtained by shooting with lamp beads of various colors to obtain a calibrated color card image;
for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels;
determining a first re-lighting coefficient of each virtual scene based on the color card pixels;
and constructing an ambient light restoration model based on the corresponding relation between each virtual scene and the first re-lighting coefficient.
8. The apparatus of claim 7, wherein the constructing unit, which for the target ambient light sampling map corresponding to each virtual scene virtually lights the calibrated color card image based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixel, is specifically configured to:
preprocessing a target ambient light sampling image corresponding to each virtual scene to obtain a compressed initial pixel brightness value;
mapping the initial pixel brightness value according to the spherical multispectral light source coordinates to obtain an image area;
processing the image area to determine an expected pixel value;
and virtually lighting the color card with the light determined by the expected pixel value to obtain the corresponding color card pixel.
9. An electronic device, characterized in that the electronic device is arranged to run a program, wherein the program, when run, performs the method of processing an image according to any of claims 1-4.
10. A computer storage medium, characterized in that the storage medium comprises a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the method of processing an image according to any one of claims 1-4.
CN202211704923.6A 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium Pending CN116017167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211704923.6A CN116017167A (en) 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116017167A true CN116017167A (en) 2023-04-25

Family

ID=86024441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211704923.6A Pending CN116017167A (en) 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116017167A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229375A (en) * 2023-05-06 2023-06-06 山东卫肤药业有限公司 Internal environment imaging method based on non-light source incubator
CN116229375B (en) * 2023-05-06 2023-08-25 山东卫肤药业有限公司 Internal environment imaging method based on non-light source incubator


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination